loke.dev

An Obscure Rule for Function Subtyping

An investigation into why TypeScript treats function parameters and return values with opposite sets of logic.

· 10 min read

Suppose you have a basic hierarchy of classes representing animals.

class Animal {
  eat() { console.log("Eating..."); }
}

class Dog extends Animal {
  bark() { console.log("Woof!"); }
}

class Greyhound extends Dog {
  runFast() { console.log("Zoom!"); }
}

In the world of structural typing, we generally understand that a Dog is an Animal. This is straightforward. If a function requires an Animal, you can give it a Dog. But the moment we start talking about functions that take other functions as arguments, our intuition usually falls off a cliff.

If I have a variable expecting a function that takes a Dog, can I assign it a function that takes an Animal? Or does it have to be a Greyhound? Most developers guess wrong the first time because the logic for function parameters is the exact inverse of the logic for return values.

The Mental Model: Producer vs. Consumer

To understand function subtyping, you have to stop thinking about "is-a" relationships for a moment and start thinking about direction of flow.

A function's return value is a producer. It hands a value out to the caller.
A function's parameter is a consumer. It receives a value from the caller.

These two roles have completely different safety requirements. In type theory, we call these relationships Covariance and Contravariance. If that sounds like category theory jargon that you’d rather ignore, stay with me. It’s actually the only way to make sense of why TypeScript lets you do some things and blocks you from others.

Covariance: The Intuitive Side (Return Values)

Let’s look at return values first because they behave exactly how you’d expect.

type GetAnimal = () => Animal;
type GetDog = () => Dog;

let getter: GetAnimal;
const dogGetter: GetDog = () => new Dog();

getter = dogGetter; // This is perfectly fine.

Why is this fine? Because getter is expected to return an Animal. If it returns a Dog, the caller is happy because a Dog is an Animal. The caller might try to call .eat() on the result, and since every Dog has an .eat() method, no runtime errors occur.

Return types are covariant. The subtyping of the function follows the same direction as the subtyping of the return type. Dog is a subtype of Animal, so () => Dog is a subtype of () => Animal.
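The reverse direction fails the check, which is a quick way to confirm the covariance rule for yourself. A minimal sketch, reusing the hierarchy from the top of the article (the methods return strings here instead of logging, purely so the behaviour is observable):

```typescript
class Animal {
  eat() { return "eating"; }
}
class Dog extends Animal {
  bark() { return "woof"; }
}

type GetAnimal = () => Animal;
type GetDog = () => Dog;

const dogGetter: GetDog = () => new Dog();
const getter: GetAnimal = dogGetter; // covariant: fine

// The reverse is rejected at compile time:
// const g: GetDog = getter;
// Type '() => Animal' is not assignable to type '() => Dog'.

const meal = getter().eat(); // the caller only relies on Animal's contract
```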

Contravariance: The Inversion (Parameters)

Now, let’s flip the script. What happens when we pass arguments into a function?

type HandleDog = (d: Dog) => void;

const handleAnimal = (a: Animal) => a.eat();
const handleGreyhound = (g: Greyhound) => g.runFast();

let myHandler: HandleDog;

myHandler = handleAnimal;    // Wait, is this allowed?
myHandler = handleGreyhound; // Or is this allowed?

If you look at this through the lens of "Subtypes can be used where Supertypes are expected," you might think handleGreyhound is the correct choice. But it's actually the opposite.

`myHandler = handleAnimal` is safe.
`myHandler = handleGreyhound` is dangerous.

Here is the "Why": The type HandleDog is a contract. It says "I promise that whenever I call this function, I will provide at least a Dog."

If we assigned handleGreyhound to myHandler, the code would crash. Why? Because myHandler might be called with a standard Labrador. The handleGreyhound function expects to be able to call .runFast(), but a Labrador doesn't have that method.

Conversely, handleAnimal is safe. It only expects an Animal. If we give it a Dog, it only knows how to call .eat(). Since every Dog is an Animal, the code is robust.

In TypeScript, function parameters are contravariant. The subtyping of the function goes in the opposite direction of the parameter types. Dog is a subtype of Animal, but (a: Animal) => void is a subtype of (d: Dog) => void.
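To see the danger concretely, we can force the unsafe assignment with a cast and watch it fail at runtime. A sketch: the cast exists purely to bypass the compiler, and the methods return strings instead of logging so the outcome is observable.

```typescript
class Animal {
  eat() { return "eating"; }
}
class Dog extends Animal {
  bark() { return "woof"; }
}
class Greyhound extends Dog {
  runFast() { return "zoom"; }
}

type HandleDog = (d: Dog) => string;

const handleAnimal = (a: Animal) => a.eat();
const handleGreyhound = (g: Greyhound) => g.runFast();

// Safe: a Dog-handler slot filled by an Animal-handler.
const safe: HandleDog = handleAnimal;
const ok = safe(new Dog()); // "eating"

// Unsafe: the cast re-enacts what bivariance would have allowed.
const unsafe = handleGreyhound as unknown as HandleDog;
let crashed = false;
try {
  unsafe(new Dog()); // a plain Dog has no runFast()
} catch {
  crashed = true; // TypeError: g.runFast is not a function
}
```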

The strictFunctionTypes Flag

I should pause here. If you are trying to replicate the "dangerous" example above in a standard TypeScript project and it isn't reporting an error, you probably have strictFunctionTypes turned off (or you aren't in strict mode).

Historically, TypeScript checked function parameters "bivariantly", meaning it allowed both the safe and the dangerous assignments. Why? Largely because of the way arrays work in JavaScript: for Dog[] to be assignable to Animal[], methods like push(item: T), whose parameter mentions T, have to be compared loosely.

const dogs: Dog[] = [new Dog()];
const animals: Animal[] = dogs; // This is allowed because of covariance in arrays

If parameters were strictly contravariant from day one, several common patterns in early TypeScript and React would have been incredibly verbose to type. However, as the ecosystem matured, the team introduced --strictFunctionTypes in TS 2.6 to close this soundness hole. In modern, professional TypeScript, you should always have this enabled.
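The array covariance shown above is itself a well-known soundness hole, and it is worth seeing once. A sketch (Cat is a hypothetical sibling class added just for this demonstration):

```typescript
class Animal {
  eat() { return "eating"; }
}
class Dog extends Animal {
  bark() { return "woof"; }
}
class Cat extends Animal {
  meow() { return "meow"; }
}

const dogs: Dog[] = [new Dog()];
const animals: Animal[] = dogs; // allowed: arrays are covariant

animals.push(new Cat()); // legal through the Animal[] view...

// ...but now dogs[1] is secretly a Cat, and bark() fails at runtime.
let crashed = false;
try {
  dogs[1].bark();
} catch {
  crashed = true; // TypeError: dogs[1].bark is not a function
}
```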

Why Does This Matter in Practice?

You encounter this rule most often when dealing with event listeners or callbacks in higher-order functions.

Imagine you are building a UI framework. You have a base Event and a specific MouseEvent.

interface BaseEvent {
  timestamp: number;
}

interface MouseEvent extends BaseEvent {
  x: number;
  y: number;
}

function listenToEvents(handler: (e: BaseEvent) => void) {
  // ... implementation
}

// This is the common use case
const logger = (e: BaseEvent) => console.log(e.timestamp);
listenToEvents(logger); 

But what if you try to pass a specific handler to a generic listener?

const mouseLogger = (e: MouseEvent) => console.log(e.x);

// Error! Under strictFunctionTypes, this is unsafe.
listenToEvents(mouseLogger); 

TypeScript stops you here because listenToEvents might trigger a KeyboardEvent (which is also a BaseEvent). If it passed that KeyboardEvent into mouseLogger, e.x would be undefined, and your logic would silently fail. This obscure rule is preventing a real runtime bug.
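We can simulate what would happen without the check by casting the handler through. A sketch: the interface is named MouseEvt here to avoid colliding with the DOM's built-in MouseEvent, and the listener dispatches a bare BaseEvent to stand in for a keyboard event.

```typescript
interface BaseEvent {
  timestamp: number;
}
interface MouseEvt extends BaseEvent {
  x: number;
  y: number;
}

// A stand-in listener that, at runtime, may dispatch any BaseEvent:
function listenToEvents(handler: (e: BaseEvent) => void) {
  handler({ timestamp: Date.now() }); // e.g. a keyboard event: no x, no y
}

const seen: unknown[] = [];
const mouseLogger = (e: MouseEvt) => {
  seen.push(e.x);
};

// Bypassing the check with a cast shows exactly what the compiler prevents:
listenToEvents(mouseLogger as unknown as (e: BaseEvent) => void);

// seen now holds undefined: the mouse-specific field never existed.
```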

The Weird Exception: Method Shorthand

Here is a nuance that trips up even senior developers. There is a specific way to define functions in an interface that intentionally bypasses contravariance checks, even with strictFunctionTypes enabled.

Observe the difference between these two interface definitions:

interface StrictHandler {
  handle: (d: Dog) => void; // Property syntax
}

interface BivariantHandler {
  handle(d: Dog): void;     // Method syntax
}

let s: StrictHandler;
let b: BivariantHandler;

const handleGreyhound = (g: Greyhound) => g.runFast();

s = { handle: handleGreyhound }; // Error under strictFunctionTypes (contravariant check)
b = { handle: handleGreyhound }; // Success (method syntax is checked bivariantly)

Wait, what? Why does the syntax change the type checking logic?

This was a deliberate design choice by the TypeScript team. Method syntax is checked bivariantly to support the common patterns of the DOM and existing JS libraries where strict contravariance would be too restrictive.

If you want absolute type safety, always use property syntax (handle: (d: Dog) => void) for callbacks. If you find yourself fighting the compiler and you *know* what you're doing is safe in your specific context, method syntax (handle(d: Dog): void) is a "soft" way to loosen the rules without resorting to any.

Visualizing the Hierarchy

If we were to draw the "Type Space" for these functions, it would look like this:

1. Wide Inputs, Narrow Outputs: (a: Animal) => Greyhound
* This is the "Strongest" function. It can handle *anything* and returns the *most specific* thing.
2. Middle Ground: (d: Dog) => Dog
3. Narrow Inputs, Wide Outputs: (g: Greyhound) => Animal
* This is the "Weakest" function. It requires a very specific input but only promises a generic output.

Because of this, you can assign a "Strong" function to a "Weak" variable, but never the other way around.

type WeakFunc = (g: Greyhound) => Animal;

const strongFunc = (a: Animal): Greyhound => new Greyhound();

let fn: WeakFunc = strongFunc; // Totally safe.

The strongFunc is perfectly capable of fulfilling the contract of WeakFunc.
- WeakFunc says: "I will give you a Greyhound." strongFunc says: "Cool, I can handle any Animal, so a Greyhound is fine."
- WeakFunc says: "I expect an Animal back." strongFunc says: "I'm giving you a Greyhound, which is an Animal. You're welcome."

The "Obscure" Part: Multi-parameter Functions

The complexity ramps up when you have functions with multiple parameters, some of which might be optional or utilize rest parameters.

type BinaryOp = (a: number, b: number) => number;

const add: BinaryOp = (a, b) => a + b;
const square: BinaryOp = (a) => a * a; // Allowed!

Wait—why is square allowed to be a BinaryOp when it only takes one argument?

This is another JS-specific reality. In JavaScript, it is completely standard to ignore arguments you don't need. Think of Array.prototype.map(x => x * 2). The map function actually passes the index and the whole array as the second and third arguments, but we usually ignore them.

TypeScript permits this because ignoring an argument is safe. The function square simply never looks at the second argument provided by the caller. However, the reverse is not true:

type UnaryOp = (a: number) => number;
const power: UnaryOp = (a, b) => Math.pow(a, b); // Error!

If a caller only promises to provide one argument (UnaryOp), we cannot assign a function that *requires* two (power). The second argument would be undefined, likely leading to NaN or a crash.
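Both directions can be checked quickly. A sketch: square ignores its second argument at runtime, and map's extra arguments (index, array) are ignored the same way.

```typescript
type BinaryOp = (a: number, b: number) => number;

// Fewer parameters than the type declares: fine.
const square: BinaryOp = (a) => a * a;
const result = square(3, 99); // the 99 is simply ignored

// map passes (value, index, array); a unary callback ignores the extras:
const doubled = [1, 2, 3].map((x) => x * 2);
```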

Complexity with Generics

This rule becomes the "final boss" when you combine it with Generics and variance annotations (in and out keywords introduced in TS 4.7).

When you define a generic type, you can now tell TypeScript explicitly how that type should behave regarding subtyping.

type Consumer<in T> = (arg: T) => void;
type Producer<out T> = () => T;

- out T (Covariant): Used when T only appears in output positions. This makes Producer<Dog> a subtype of Producer<Animal>.
- in T (Contravariant): Used when T only appears in input positions. This makes Consumer<Animal> a subtype of Consumer<Dog>.

Why would you use these? Performance and clarity. Without these annotations, TypeScript has to "calculate" the variance by looking at every usage of T within the type. On massive codebases, this calculation is expensive. By using in and out, you are explicitly stating the subtyping relationship, speeding up the compiler and making the intent clear to other developers.
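A sketch of the annotations in action (requires TypeScript 4.7 or later; the reversed assignments are left commented out because they fail the check):

```typescript
class Animal {
  eat() { return "eating"; }
}
class Dog extends Animal {
  bark() { return "woof"; }
}

type Producer<out T> = () => T;
type Consumer<in T> = (arg: T) => void;

const dogProducer: Producer<Dog> = () => new Dog();
const animalProducer: Producer<Animal> = dogProducer; // covariant: ok

const animalConsumer: Consumer<Animal> = (a) => a.eat();
const dogConsumer: Consumer<Dog> = animalConsumer; // contravariant: ok

// The reverse assignments would be compile errors:
// const p: Producer<Dog> = animalProducer;
// const c: Consumer<Animal> = dogConsumer;

const meal = animalProducer().eat();
```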

The Liskov Substitution Principle (LSP)

If you've ever studied SOLID principles, function subtyping is essentially the Liskov Substitution Principle in its purest form. LSP states that "objects of a superclass should be replaceable with objects of its subclasses without breaking the application."

When applied to functions:
1. Contravariance of arguments: You can accept *more* than what was asked for (a wider type).
2. Covariance of returns: You can return *more* than what was promised (a more specific type).

If you follow these two rules, you are mathematically guaranteed that your substitution is safe.
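The two rules can be checked together with the Weak/Strong pair from earlier. A sketch:

```typescript
class Animal {
  eat() { return "eating"; }
}
class Dog extends Animal {
  bark() { return "woof"; }
}
class Greyhound extends Dog {
  runFast() { return "zoom"; }
}

// The contract being substituted into:
type WeakFunc = (g: Greyhound) => Animal;

// Rule 1: accepts more than asked for (any Animal, not just Greyhound).
// Rule 2: returns more than promised (a Greyhound, not just some Animal).
const strongFunc = (a: Animal): Greyhound => new Greyhound();

const fn: WeakFunc = strongFunc; // a safe, LSP-respecting substitution
const result = fn(new Greyhound());
const stillAnimal = result instanceof Animal; // the return promise holds
```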

Debugging the "Not Assignable" Error

The next time you see a massive TypeScript error that says:

Type '(e: MouseEvent) => void' is not assignable to type '(e: BaseEvent) => void'.

Don't just reach for as any. Look at the parameters. You are likely trying to pass a function that is too picky about its inputs into a slot that expects a function that can handle anything.

The fix is usually to broaden the type of the parameter in your implementation or to ensure that the caller is indeed only going to pass the specific subtype you are looking for.
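One common shape of that fix is to accept the wide type and narrow inside the handler. A sketch (listenToEvents and MouseEvt are stand-ins, not real APIs; the listener dispatches one generic event and one mouse-shaped event):

```typescript
interface BaseEvent {
  timestamp: number;
}
interface MouseEvt extends BaseEvent {
  x: number;
  y: number;
}

// A stand-in listener that dispatches more than one kind of event:
function listenToEvents(handler: (e: BaseEvent) => void) {
  handler({ timestamp: 1 }); // generic event
  const mouse: MouseEvt = { timestamp: 2, x: 10, y: 20 };
  handler(mouse); // mouse event
}

const xs: number[] = [];

// The handler accepts the wide type and narrows before touching .x:
listenToEvents((e) => {
  if ("x" in e) {
    xs.push((e as MouseEvt).x);
  }
});
```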

Final Thoughts

The subtyping of functions feels backward because we are used to the "Dog is an Animal" hierarchy. But functions aren't just objects; they are transformations.

- Return values flow out of the function, so they follow the hierarchy (Covariant).
- Parameters flow into the function, so they oppose the hierarchy (Contravariant).

Mastering this distinction is what separates "getting it to work" from "writing type-safe architecture." It's the difference between a codebase that crashes when an unexpected event fires and one that catches the error at compile-time before a single user ever sees it.

The rule is obscure, yes. It's confusing at first, absolutely. But it’s also one of the most powerful tools in the TypeScript engine for maintaining sound, predictable code.