Consider this example:
function foo<T extends number[]>(
  x: T,
  y: T["length"] extends 2 ? string : never
): T {
  return x;
}
foo([1, 2], "test");
Upon compilation, this code fails because [1,2] is inferred as number[] instead of the tuple [number, number]. Since the length of a number[] array is not guaranteed to be 2, the conditional type for y resolves to never, so the error is understandable.
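To see the widening concretely, here is a minimal type-level sketch (the Len and Y aliases are hypothetical, introduced only for illustration):

const arr = [1, 2];                      // widened to number[]
type Len = (typeof arr)["length"];       // number, not the literal 2
type Y = Len extends 2 ? string : never; // never, so "test" is rejected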
However:
function bar<T extends [] | number[]>(
  x: T,
  y: T["length"] extends 2 ? string : never
): T {
  return x;
}
bar([1, 2], "test");
Interestingly, this code compiles! In this scenario, [1,2] is correctly inferred as [number, number].
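A quick way to confirm the tuple inference (a sketch that reuses the bar declaration above):

const t = bar([1, 2], "test"); // t is inferred as [number, number]
// bar([1, 2, 3], "test");     // error: T["length"] is 3, so y is never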
But why does this happen? Intuitively, constraining T to number[] and constraining it to [] | number[] should behave identically, since [] already extends number[] (a quick check below confirms this). Yet the extra [] member evidently changes how the TypeScript compiler infers the generic type. What's the underlying rule?
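For reference, the assignability claim can be verified with a minimal type-level check (the Extends helper is hypothetical, defined here only for this test):

type Extends<A, B> = A extends B ? true : false;
type Check = Extends<[], number[]>; // true: [] is assignable to number[]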
(For a related question showcasing the use of this feature: TypeScript: Require that two arrays be the same length?)