When dealing with multiple possible inference candidates for a generic type argument, a tradeoff arises. For example, consider a function like

```ts
declare function f<T>(x: T, y: T): void;
```

and then a call like

```ts
declare const a: A;
declare const b: B;
f(a, b); // <-- ?
```
What should the expected behavior be in this scenario? The compiler could either reject the call if `A` and `B` are not identical types, or it could allow any call by creating a union type `A | B`. Both approaches have their own use cases, and opinions on the desired behavior vary.
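To make the two outcomes concrete with the `f` declared above (the literal arguments here are my own illustration):

```ts
f("a", "b"); // okay: both candidates are string literals,
             // so T is inferred as the union "a" | "b"
f("a", 42);  // error: the candidates belong to different primitives,
             // so no union is formed and the call is rejected
```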
Currently, TypeScript employs heuristic rules to handle such situations, which generally work well in real-world scenarios but can sometimes be disappointing.
One of the key heuristics is that union types will not be generated unless the multiple candidates belong to the same primitive type. For instance, if instead of using two string literals like `"a"` and `"b"`, you used a string and a number, you would receive the expected error:
```ts
const foo = {
  a: 'hello',
  b: 'world',
  3: 'abc'
} as const;

function bar<T extends object, K extends keyof T>(
  obj: T, key1: K, key2: K) { }

bar(foo, 'a', 3) // error
// ---------> ~ 3 is not assignable to 'a'
```
But if both types are literal types that would widen to the same primitive type, like `"a"` and `"b"` being subtypes of `string`, TypeScript does generate a union type to allow the call to succeed, as seen in your example.
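With the `foo` and `bar` defined above, that looks like this (the inferred union in the comment is what I would expect the compiler to show):

```ts
bar(foo, 'a', 'b') // okay: K is inferred as the union "a" | "b"
```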
Even though this may not be the desired behavior in your case, these heuristics generally work well in many other scenarios.
Is there a way to signal to the compiler that a different behavior is desired? For example, you might want to specify that `key2` should only be checked against the type `K` inferred from `key1`, without affecting the inference of `K` itself. This kind of non-inferential type parameter usage is not directly supported in TypeScript, but approaches to achieve it have been discussed, such as defining a `NoInfer<T>` utility type:
```ts
function bar<T extends object, K extends keyof T>(
  obj: T, key1: K, key2: NoInfer<K>) { }

bar(foo, 'a', 'b') // error
// ---------> ~~~ 'b' not assignable to 'a'
```
Currently, there is no native implementation of `NoInfer<T>`, but workarounds like defining it yourself as `type NoInfer<T> = T & {}` can provide the desired behavior by giving that position a lower inference priority than a plain `T`.
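To make the workaround concrete, here is a minimal self-contained sketch (same `foo` shape as above; the error wording in the comment is approximate):

```ts
// Community workaround: the intersection T & {} is equivalent to T for
// assignability, but candidates inferred from this position get lower
// priority, so K is determined by key1 alone.
type NoInfer<T> = T & {};

const foo = { a: 'hello', b: 'world', 3: 'abc' } as const;

function bar<T extends object, K extends keyof T>(
  obj: T, key1: K, key2: NoInfer<K>
) { }

bar(foo, 'a', 'a') // okay: K is inferred as "a" from key1
bar(foo, 'a', 'b') // error: 'b' is not assignable to "a" & {}
```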
Another approach would be to introduce an additional type parameter constrained to the first one, so that `key2` no longer contributes an inference candidate for the same type parameter as `key1`:

```ts
function bar<T extends object, K1 extends keyof T, K2 extends K1>(
  obj: T, key1: K1, key2: K2
) { }

bar(foo, 'a', 'b') // error
```
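For comparison, the accepted case (my own addition): with matching keys, `K2` is inferred as `"a"`, which satisfies the `K2 extends K1` constraint:

```ts
bar(foo, 'a', 'a') // okay: K1 and K2 are both "a"
```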
Either method should resolve the issue, at least in the given example.
Playground link to code