
Conversation

@rockygeekz rockygeekz commented Dec 7, 2025

Fixes #26741

This change updates the TypeScript definitions to allow constructing float16 tensors using Float16Array in environments where it is available. Runtime behavior remains unchanged (float16 is still represented internally as Uint16Array).

  • Introduces a GlobalFloat16Array helper type to safely detect Float16Array without requiring global polyfills.
  • Adds type-specific and inferred constructor overloads for float16.
  • No changes to runtime logic or public C APIs.

This resolves compile-time errors when passing Float16Array to the Tensor constructor in the onnxruntime-web package.


Description

This PR enhances the TypeScript typings for float16 tensors within the ONNX Runtime JavaScript API:

  • Adds GlobalFloat16Array, a conditional utility type that resolves to the instance type of Float16Array only when available.
  • Updates constructor definitions to accept either:
    • Uint16Array (existing behavior),
    • Float16Array (new behavior, when supported by the JS environment),
    • or readonly number[].
  • Extends inferred-type constructors to support new Tensor(new Float16Array(...)).
  • Ensures TypeScript consumers can pass Float16Array without encountering type errors.

Internally, ONNX Runtime continues to treat float16 data as Uint16Array, so runtime behavior is unchanged.
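
The shape of the typing change can be sketched as follows (a sketch mirroring the description above, not the actual onnxruntime source; `GlobalFloat16Array` and `Float16Data` are illustrative names):

```typescript
// Sketch: `GlobalFloat16Array` resolves to the global Float16Array instance
// type when the environment declares one, else `never`, so the union below
// degrades gracefully to `Uint16Array | readonly number[]` on older targets.
type GlobalFloat16Array = typeof globalThis extends {
  Float16Array: { prototype: infer P };
}
  ? P
  : never;

type Float16Data = Uint16Array | GlobalFloat16Array | readonly number[];

// Value-level probes for two of the accepted shapes:
const fromUint16: Float16Data = new Uint16Array(4);
const fromNumbers: Float16Data = [0.5, 1.5];
```

Because `never` vanishes from a union, environments without `Float16Array` see exactly the pre-existing constructor surface.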


Motivation and Context

Modern JavaScript runtimes (browsers and recent Node.js versions) have begun introducing native Float16Array support. Developers using ONNX Runtime in TypeScript projects may attempt to construct float16 tensors using:

new Tensor(new Float16Array(784), [28, 28]);
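
Since native Float16Array is still rolling out, a consumer may want to feature-detect it at runtime; a hedged sketch (not from the PR — the `any` cast is only needed because older TS lib targets do not declare Float16Array):

```typescript
// Feature-detect native Float16Array; fall back to the raw Uint16Array
// payload that ONNX Runtime uses for float16 internally anyway.
const Float16ArrayCtor = (globalThis as any).Float16Array as
  | (new (length: number) => { length: number; byteLength: number })
  | undefined;

const data =
  Float16ArrayCtor !== undefined
    ? new Float16ArrayCtor(784) // native half-precision storage
    : new Uint16Array(784); // same byte layout, raw 16-bit payload

// Either branch yields 784 two-byte elements for a 28x28 input.
```

Either value can then be passed as the data argument of the call shown above.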

@rockygeekz (Author)

@microsoft-github-policy-service agree

@RReverser

I think you could reuse the existing helper instead of rolling a custom check:

export type TryGetGlobalType<Name extends string, Fallback = unknown> = typeof globalThis extends {

@rockygeekz (Author)

Updated to use TryGetGlobalType for Float16Array.
Let me know if you'd like any further adjustments!


// Helper type: resolves to the instance type of `Float16Array` if it exists in the global scope,
// or `never` otherwise. Uses the shared TryGetGlobalType helper.
export type GlobalFloat16Array = TryGetGlobalType<'Float16Array', never>;
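
For context, a minimal reimplementation of a `TryGetGlobalType`-style helper could look like this (a sketch; the actual onnxruntime helper may differ in detail):

```typescript
// Sketch of a TryGetGlobalType-style lookup: check whether `globalThis`
// declares a property with the given name, and resolve to that constructor's
// instance type (via its `prototype`) if so, else the fallback.
type TryGetGlobalType<Name extends string, Fallback = unknown> =
  typeof globalThis extends Record<Name, { prototype: infer T }> ? T : Fallback;

// Resolves to Float16Array's instance type where declared, else `never`:
type GlobalFloat16Array = TryGetGlobalType<'Float16Array', never>;

// A global that is always declared resolves to its instance type:
type Probe = TryGetGlobalType<'Uint16Array', never>;
const probe: Probe = new Uint16Array(2);
```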
@RReverser commented Dec 7, 2025

I don't think it needs to be exported / part of the public API, just a local helper.

Suggested change
export type GlobalFloat16Array = TryGetGlobalType<'Float16Array', never>;
type GlobalFloat16Array = TryGetGlobalType<'Float16Array', never>;

*/
new (
type: 'float16',
data: Tensor.DataTypeMap['float16'] | GlobalFloat16Array | readonly number[],

@RReverser

Actually, wouldn't it be easier to inline this helper into the DataTypeMap?

@rockygeekz (Author)

Updated DataTypeMap.float16 to inline TryGetGlobalType<'Float16Array'> and map it to the instance type via prototype, so Float16Array instances are accepted where supported without changing runtime behavior.

@RReverser

Thanks for working on this! Just to be clear, I'm not a maintainer on this repo, so we'll need one of them to review too.

string: string[];
bool: Uint8Array;
float16: Uint16Array; // Keep using Uint16Array until we have a concrete solution for float 16.
float16: Uint16Array | (TryGetGlobalType<'Float16Array'> extends { prototype: infer P } ? P : never);

@RReverser

TryGetGlobalType does the fallback for you btw, so this can be done much simpler.

Suggested change
float16: Uint16Array | (TryGetGlobalType<'Float16Array'> extends { prototype: infer P } ? P : never);
float16: Uint16Array | TryGetGlobalType<'Float16Array', never>

@rockygeekz (Author)

Thanks for the suggestion! Updated DataTypeMap['float16'] to use TryGetGlobalType<'Float16Array', never> directly.

@RReverser

I see you removed the manual overloads when moving the change to the DataTypeMap. Does the code from the original issue still work as expected without TS errors?

@rockygeekz (Author)

Just double-checked locally using the public dist/cjs entrypoint:
new Tensor(new Float16Array(...)) typechecks correctly with no TS errors.
Everything works as expected.

@RReverser

Huh, I wonder why they even need all those manual overloads if the DataTypeMap is enough.

// #region CPU tensor - infer element types
/**
* Construct a new float32 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: Float32Array, dims?: readonly number[]): TypedTensor<'float32'>;
/**
* Construct a new int8 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: Int8Array, dims?: readonly number[]): TypedTensor<'int8'>;
/**
* Construct a new uint8 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: Uint8Array, dims?: readonly number[]): TypedTensor<'uint8'>;
/**
* Construct a new uint8 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: Uint8ClampedArray, dims?: readonly number[]): TypedTensor<'uint8'>;
/**
* Construct a new uint16 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: Uint16Array, dims?: readonly number[]): TypedTensor<'uint16'>;
/**
* Construct a new int16 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: Int16Array, dims?: readonly number[]): TypedTensor<'int16'>;
/**
* Construct a new int32 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: Int32Array, dims?: readonly number[]): TypedTensor<'int32'>;
/**
* Construct a new int64 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: BigInt64Array, dims?: readonly number[]): TypedTensor<'int64'>;
/**
* Construct a new string tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: readonly string[], dims?: readonly number[]): TypedTensor<'string'>;
/**
* Construct a new bool tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: readonly boolean[], dims?: readonly number[]): TypedTensor<'bool'>;
/**
* Construct a new float64 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: Float64Array, dims?: readonly number[]): TypedTensor<'float64'>;
/**
* Construct a new uint32 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: Uint32Array, dims?: readonly number[]): TypedTensor<'uint32'>;
/**
* Construct a new uint64 tensor object from the given data and dims.
*
* @param data - Specify the CPU tensor data.
* @param dims - Specify the dimension of the tensor. If omitted, a 1-D tensor is assumed.
*/
new (data: BigUint64Array, dims?: readonly number[]): TypedTensor<'uint64'>;

Oh well, perhaps an opportunity for future cleanup. Thanks for double-checking!
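
As a purely hypothetical illustration of that cleanup (none of this is in the PR; `SketchDataMap`, `TypeFromData`, and `SketchTensor` are invented names), the inferred-type overloads could in principle be derived from the data-type map via a reverse lookup:

```typescript
// Hypothetical sketch: derive element-type inference from one map instead of
// a hand-written constructor overload per element type.
type SketchDataMap = {
  float32: Float32Array;
  string: readonly string[];
  bool: readonly boolean[];
  // ...remaining entries elided
};

// Reverse lookup: map a concrete data-array type back to its type name.
type TypeFromData<D> = {
  [K in keyof SketchDataMap]: D extends SketchDataMap[K] ? K : never;
}[keyof SketchDataMap];

class SketchTensor<D extends SketchDataMap[keyof SketchDataMap]> {
  readonly dims: readonly number[];
  constructor(
    readonly data: D,
    dims?: readonly number[],
  ) {
    this.dims = dims ?? [data.length];
  }
}

// `TypeFromData<Float32Array>` resolves to 'float32':
const inferredKind: TypeFromData<Float32Array> = 'float32';
const t = new SketchTensor(new Float32Array(4), [2, 2]);
```

One likely obstacle: if `float16: Uint16Array` sat in the map alongside `uint16: Uint16Array`, the reverse lookup for `Uint16Array` would yield the union `'uint16' | 'float16'`, so inference alone could not pick a single element type; that ambiguity may be exactly why the explicit overloads exist.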

@rockygeekz (Author)

Tested locally and saw no errors. If I missed something, please let me know!

@RReverser

No I believe you :)

Successfully merging this pull request may close these issues:

[Web] TypeScript definitions incorrectly forbid Float16Array