Although this example is contrived, I often pass an object literal to a function and capture the type of the literal's values in a generic parameter, like this:
type Values<V> = {
  a: V;
  b: V;
};

function mapValues<V>(v: Values<V>): V {
  return v as any; // ignore
}

const vn = mapValues({ a: 1, b: 2 }); // inferred number
const vs = mapValues({ a: '1', b: '2' }); // inferred string
Both vn and vs are correctly inferred as number and string, respectively, based on what I passed to mapValues.
This method even works for indexed types:
function mapValues2<V>(v: { [key: string]: V }): V {
  return v as any;
}

const v2n = mapValues2({ a: 1, b: 2 }); // inferred number
const v2s = mapValues2({ a: '1', b: '2' }); // inferred string
The object literal v correctly identifies its values as number/string (respectively), allowing me to capture that inferred V and use it in the return type.
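For what it's worth, the index-signature version is not even tied to the keys a and b; any keys work as long as the value types agree (the x/y example below is mine):

```typescript
function mapValues2<V>(v: { [key: string]: V }): V {
  return v as any;
}

// The index signature places no constraint on which keys appear,
// so V is inferred from the value types alone:
const v2mixed = mapValues2({ x: true, y: false }); // inferred boolean
```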
However, when I introduce a mapped type, like so:
enum Foo {
  a,
  b,
}

function mapValues3<K, V>(o: K, v: { [key in keyof K]: V }): V {
  return v as any;
}

const v3n = mapValues3(Foo, { a: 1, b: 2 }); // inferred unknown
const v3s = mapValues3(Foo, { a: '1', b: '2' }); // inferred unknown
It seems like V loses its ability to determine whether it was number/string (respectively), resulting in an inferred type of unknown.
If I explicitly specify const v3n: number, then it works as expected, but I prefer to rely on type inference to determine V for me.
I'm puzzled as to why switching from [key: string] in the second snippet to [key in keyof K] in the third snippet affects the inference of the : V side of the object literal's type.
Any insights or ideas on this matter?