loke.dev

Signals All the Way Down

Standardized reactivity is finally coming to the browser, but the underlying push-pull engine is more complex than a simple event listener.

· 8 min read

Consider a standard variable in JavaScript: let x = 10;. If you define let y = x + 5;, the value of y is 15. If you later change x to 20, y remains 15. To keep them in sync, you have to manually re-assign y. We have spent the last decade of web development trying to automate that re-assignment. We called it data binding, then observables, then hooks, and now, we are finally settling on Signals.

The TC39 proposal for a standardized Signal API marks a turning point. It isn’t just another library; it’s an attempt to bake a high-performance reactivity engine directly into the JavaScript language.

// Using the current TC39 proposal syntax (via polyfill)
const counter = new Signal.State(0);
const isEven = new Signal.Computed(() => (counter.get() % 2) === 0);

console.log(isEven.get()); // true

counter.set(1);
console.log(isEven.get()); // false

On the surface, this looks like a simple getter/setter wrapper. But under the hood, something much more sophisticated than an EventEmitter is happening. We are moving from a world of "imperative event handling" to "declarative dependency tracking."

The Ghost in the Machine: How Signals Actually Work

Most developers mistake Signals for a variation of the Observer pattern. While they share DNA, the execution model is fundamentally different. Standard observers are "Push" based. When a value changes, the subject pushes that change to every subscriber immediately.

Signals use a Push-Pull hybrid mechanism.

When a Signal.State changes, it doesn't immediately force all its dependents to re-calculate. Instead, it sends a "dirtiness" notification up the graph. It essentially taps its dependents on the shoulder and says, "Hey, I've changed, don't trust your cached value."

The actual calculation only happens when you Pull the value back out via .get().
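To make the push-pull split concrete, here is a deliberately tiny stand-in, not the real proposal: dependencies are wired by hand (the real API tracks them automatically), `set` only flags dependents as dirty, and the computation itself runs on `get`.

```javascript
// Toy push-pull cell: `set` pushes a dirty mark; `get` pulls the value.
class ToyState {
  constructor(value) { this.value = value; this.dependents = new Set(); }
  get() { return this.value; }
  set(value) {
    this.value = value;
    for (const dep of this.dependents) dep.dirty = true; // push phase
  }
}

class ToyComputed {
  constructor(fn, sources) {
    this.fn = fn;
    this.dirty = true;      // nothing cached yet
    this.cached = undefined;
    this.runs = 0;          // how many times fn actually executed
    for (const s of sources) s.dependents.add(this);
  }
  get() {
    if (this.dirty) {       // pull phase: recompute only when stale
      this.cached = this.fn();
      this.runs += 1;
      this.dirty = false;
    }
    return this.cached;
  }
}

const counter = new ToyState(0);
const isEven = new ToyComputed(() => counter.get() % 2 === 0, [counter]);

isEven.get();     // first pull: computes (runs = 1)
isEven.get();     // cached: no recompute (runs still 1)
counter.set(1);   // push only: marks isEven dirty, computes nothing
counter.set(2);   // still nothing computed
isEven.get();     // pull: one recompute covers both sets (runs = 2)
console.log(isEven.runs); // 2
```

Note how two consecutive `set` calls cost a single recomputation: laziness batches writes for free.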

Why this matters: The Diamond Problem

To understand why the Push-Pull hybrid is the "secret sauce," we have to look at the "Diamond Problem." Imagine a state graph that looks like this:

1. State A
2. Computed B (depends on A)
3. Computed C (depends on A)
4. Computed D (depends on both B and C)

In a pure "Push" system (like simple EventListeners or some early Reactive Streams), if you update A, it triggers B and C. Then B triggers D, and C triggers D. In many naive implementations, D will evaluate twice. Worse, during the first evaluation of D, it might see the new value of B but the *old* value of C, leading to a "glitch"—a temporary, inconsistent state.

Standardized Signals solve this by using version numbering and a two-phase update.

1. The Probing Phase (Push): The change propagates through the graph to mark nodes as stale.
2. The Reconciliation Phase (Pull): When D is accessed, it checks its parents (B and C). It sees they are stale. It asks B to update. B asks A for its version. The graph settles in a single pass.

This ensures Glitch-Free reactivity. You never see a state that shouldn't exist.
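The version-numbering idea can be sketched directly on the diamond A → (B, C) → D. In this toy model (dependencies declared by hand, unlike the real API), each node carries a version; a computed pulls its sources first, then recomputes only if a source version changed since its last run. D therefore evaluates exactly once per change of A, and only after B and C have both settled.

```javascript
// Toy version-based reconciliation for the diamond problem.
class VState {
  constructor(value) { this.value = value; this.version = 0; }
  get() { return this.value; }
  set(value) { this.value = value; this.version += 1; }
}

class VComputed {
  constructor(fn, sources) {
    this.fn = fn;
    this.sources = sources;
    this.seen = null;       // source versions at last computation
    this.cached = undefined;
    this.version = 0;
    this.runs = 0;
  }
  get() {
    // Pull sources first so they settle, then compare versions.
    const now = this.sources.map((s) => { s.get(); return s.version; });
    if (!this.seen || now.some((v, i) => v !== this.seen[i])) {
      this.cached = this.fn();
      this.seen = now;
      this.version += 1;
      this.runs += 1;
    }
    return this.cached;
  }
}

const A = new VState(1);
const B = new VComputed(() => A.get() + 1, [A]);
const C = new VComputed(() => A.get() * 2, [A]);
const D = new VComputed(() => B.get() + C.get(), [B, C]);

D.get();    // B, C, D each compute once
A.set(10);
D.get();    // D recomputes exactly once, seeing the NEW B and NEW C
console.log(D.get(), D.runs); // 31 2
```

Because D compares versions only after pulling B and C, there is no moment where it can observe a fresh B next to a stale C: the glitch window never opens.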

The TC39 Proposal: Signal.State and Signal.Computed

The current proposal introduces two primary classes. I've found that the best way to understand them is to think of Signal.State as the "Source of Truth" and Signal.Computed as the "Derived Logic."

const quantity = new Signal.State(1);
const price = new Signal.State(100);

const total = new Signal.Computed(() => {
  console.log("Calculating total..."); 
  return quantity.get() * price.get();
});

// Total isn't calculated yet because nobody has 'pulled' it.
console.log(total.get()); // "Calculating total..." -> 100

// If we update price but don't read total...
price.set(120); 
// ...nothing is logged. No wasted CPU cycles.

The magic here is in the Automatic Dependency Tracking. You don't pass an array of dependencies like you do in React’s useMemo. The signal tracks which other signals were accessed *during its execution*.

If you have a conditional in your computed:

const showFullTotal = new Signal.State(false);
const displayValue = new Signal.Computed(() => {
  if (showFullTotal.get()) {
    return `Total: ${total.get()}`;
  }
  return "Hidden";
});

If showFullTotal is false, displayValue does not subscribe to total. If total changes while the display is hidden, displayValue won't even be marked as stale. This is "fine-grained reactivity" at its most efficient.
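A minimal sketch of how that tracking can work (a toy model: computeds over plain states only, no nesting): a module-level variable records which computed is currently evaluating, and every state read during that evaluation registers the computed as a subscriber. The dependency set is rebuilt from scratch on each run, which is exactly what makes the conditional branch drop its unused subscription.

```javascript
let active = null; // the computed currently evaluating, if any

class TrackedState {
  #value; #subs = new Set();
  constructor(value) { this.#value = value; }
  get() {
    if (active) { this.#subs.add(active); active.deps.add(this); }
    return this.#value;
  }
  set(value) {
    this.#value = value;
    for (const c of this.#subs) c.dirty = true;
  }
  drop(computed) { this.#subs.delete(computed); }
}

class TrackedComputed {
  deps = new Set();
  dirty = true;
  runs = 0;
  #fn; #cached;
  constructor(fn) { this.#fn = fn; }
  get() {
    if (this.dirty) {
      for (const d of this.deps) d.drop(this); // forget last run's deps
      this.deps.clear();
      const prev = active;
      active = this;
      this.#cached = this.#fn(); // reads inside here re-register deps
      active = prev;
      this.runs += 1;
      this.dirty = false;
    }
    return this.#cached;
  }
}

const showFullTotal = new TrackedState(false);
const total = new TrackedState(100);
const displayValue = new TrackedComputed(() =>
  showFullTotal.get() ? `Total: ${total.get()}` : "Hidden"
);

displayValue.get();       // "Hidden" — subscribed to showFullTotal only
total.set(120);           // not a dependency: displayValue stays clean
displayValue.get();       // still cached (runs = 1)
showFullTotal.set(true);  // NOW displayValue is dirty
displayValue.get();       // "Total: 120" — resubscribed to both signals
```

The `drop`/`clear` step before each run is the important part: without it, `displayValue` would keep stale subscriptions from branches it no longer takes.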

Why Do We Need This in the Browser?

You might be thinking, "SolidJS, Vue, Preact, and Angular already have signals. Why do we need a browser standard?"

I've spent a lot of time jumping between frameworks, and the fragmentation is exhausting. Every framework has its own flavor of the reactive graph. If you write a library for Vue, it won't work in Svelte. If you write a state management utility for Preact, it's useless in Angular.

By moving the reactivity engine into the language itself:
1. Interoperability: A "Signal" becomes a primitive. A library written in vanilla JS using Signals can be consumed by *any* framework.
2. Performance: Engines like V8 can optimize Signal structures at the C++ level rather than the user-land JavaScript level.
3. Memory Management: The browser can handle the complex cleanup of dependency graphs more efficiently, reducing the risk of memory leaks that occur when you forget to unsubscribe from an observer.

Beyond Values: The Watcher API

Signals by themselves are passive. They are a graph of potential energy. To make them *do* something (like update the DOM), you need a Watcher.

In the proposal, Signal.subtle.Watcher is the lower-level API designed for framework authors. It’s what allows us to "react" to the graph becoming stale.

let needsUpdate = false;

const watcher = new Signal.subtle.Watcher(() => {
  // Runs synchronously when a watched signal may have become "dirty".
  // Reading or writing signals in here is disallowed, so we only
  // schedule work for later.
  if (!needsUpdate) {
    needsUpdate = true;
    queueMicrotask(processUpdates);
  }
});

function processUpdates() {
  needsUpdate = false;
  watcher.watch(); // Re-arm: a watcher notifies only once per watch()
  updateTheDom();
}

// Watched signals are passed in explicitly. Here `someComputed` stands
// in for whichever computed signal drives your output.
watcher.watch(someComputed);

The reason this is under the subtle namespace is that it's sharp-edged. The notify callback fires synchronously mid-update, so the proposal forbids reading or writing signals inside it; real schedulers defer the actual work to a microtask, as the example does. This isn't meant for the average dev to use daily; it's the "engine room" for the next generation of frameworks.

Practical Example: Building a Generic Store

Let's look at how we might use signals to build a state store that feels modern but remains framework-agnostic.

class UserStore {
  #name = new Signal.State("Guest");
  #status = new Signal.State("offline");

  // Public getters return the computed or the state
  get name() { return this.#name.get(); }
  get status() { return this.#status.get(); }

  // Computed property within the class. Create the computed once and
  // store it; building a new Signal.Computed on every access would
  // discard the cached value and its dependency links each time.
  #profileHeader = new Signal.Computed(() => {
    return `${this.#name.get()} (${this.#status.get()})`;
  });

  get profileHeader() { return this.#profileHeader.get(); }

  // Actions
  updateUser(newName, newStatus) {
    this.#name.set(newName);
    this.#status.set(newStatus);
  }
}

const user = new UserStore();

This structure is incredibly robust. Because Signals track access, any framework's "Effect" that calls user.profileHeader will automatically become a dependency of the internal #name and #status signals. You get deep, granular reactivity without the user of your class ever knowing they are using signals.

The Edge Cases: What Could Go Wrong?

No technology is a silver bullet. Signals have their own set of "gotchas" that I've run into while experimenting with the polyfills.

1. The "Pull" Overhead

If you have a deeply nested chain of 1,000 computed signals, and you pull the value of the leaf node, the engine has to walk back up the tree to verify versions. While this is usually faster than re-running everything, it’s not free. Excessive abstraction can lead to a "lookup tax."

2. Side Effects in Computeds

This is the cardinal sin. A Signal.Computed should be pure.

// AVOID THIS
const data = new Signal.Computed(() => {
  const val = someState.get();
  fetch('/log?value=' + val); // Side effect!
  return val * 2;
});

Because computeds are lazily evaluated and can be re-run at any time (or not at all!), putting side effects inside them leads to unpredictable behavior. Side effects belong in Effects or Watchers.

3. Equality Checks

By default, Signals use Object.is for equality. If you set a signal to the same object reference, it won't trigger an update. This is great for performance but can trip you up if you are a fan of mutating objects in place (which you shouldn't do anyway, but we all have bad days).

const user = new Signal.State({ name: "Alice" });
const current = user.get();
current.name = "Bob";
user.set(current); // Nothing happens! The reference is the same.
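The usual fix is to hand the signal a fresh reference. A toy state with the same `Object.is` guard makes the difference visible (the `fires` counter exists only for illustration):

```javascript
// Toy state that ignores writes of an identical reference.
class GuardedState {
  constructor(value) { this.value = value; this.fires = 0; }
  get() { return this.value; }
  set(value) {
    if (Object.is(value, this.value)) return; // same reference: no-op
    this.value = value;
    this.fires += 1; // a real signal would notify dependents here
  }
}

const user = new GuardedState({ name: "Alice" });

const current = user.get();
current.name = "Bob";
user.set(current);                        // same reference: fires = 0

user.set({ ...user.get(), name: "Bob" }); // new object: fires = 1
console.log(user.fires); // 1
```

Spreading into a new object is the cheapest way to get a fresh reference; for deep structures you would replace each object along the changed path.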

Signals vs. Promises vs. Streams

It’s easy to get confused about when to use what. Here is my rule of thumb:

* Promises: Use for a single asynchronous value (a one-time fetch).
* AsyncIterators/Streams: Use for a sequence of discrete events over time (clicks, websocket messages).
* Signals: Use for Synchronous State that represents a "value over time."

Signals are not great for "events." You wouldn't want to represent a "Submit Click" as a signal, because a signal is always "something." What is the value of a click after it happened? true? If you click it again, it's still true, so the signal won't fire. Events are actions; Signals are states.

The Future: Signals in the DOM?

The most exciting part of this proposal isn't just the logic—it's the potential for DOM integration. Imagine a future where we can do this:

const count = new Signal.State(0);
const btn = document.createElement('button');

// This is hypothetical, but part of the long-term vision
btn.textContent = count; 

If the browser's DOM nodes could accept Signals directly, we would no longer need a Virtual DOM. When count changes, the browser knows exactly which text node to update. This would effectively kill the "Diffing" overhead that frameworks like React incur. We would move from "Re-rendering the component" to "Updating the leaf node."

Wrapping Up

Signals represent a shift in the JavaScript mindset. We are moving away from the "React model" where the UI is a function of state that re-runs constantly, and toward a "Graph model" where the UI is a live observer of a state network.

The TC39 proposal is still in the early stages (Stage 1 as of my last check, moving toward Stage 2), but the momentum is massive. Frameworks are already aligning their internal engines with this spec.

If you want to get ahead, start thinking about your application state as a directed acyclic graph. Stop thinking about "When should I call this function?" and start thinking about "What does this value depend on?"

Once you start seeing the signals, it’s hard to go back to just variables. It’s signals all the way down.