loke.dev

FFI Is the New Standard Library

We're entering an era where the most performant JavaScript code is actually written in Rust, and the bridge between them has never been thinner.

8 min read

I used to spend days obsessing over V8's hidden classes and trying to coax the JIT compiler into optimizing my hot loops. I’d stare at node --trace-opt output like it was some ancient scripture, hoping to find out why a simple data transformation was lagging. The frustration peaked when I tried to implement a custom compression algorithm in pure JavaScript; the garbage collector just couldn’t keep up with the heap churn, and the performance was abysmal compared to a simple C script. The resolution didn't come from a "magic" Node flag. It came when I stopped trying to force JavaScript to be something it isn't and started treating it as an orchestrator for Rust.

We are living through a fundamental shift in how "high-performance" Node.js code is written. The old advice was to "profile your JS and optimize the hot paths." The new reality is that the most performant JavaScript libraries aren't JavaScript at all. They are Rust cores wrapped in a thin, ergonomic Foreign Function Interface (FFI) layer.

The Death of the "Pure JS" Requirement

For a long time, the Node.js community had a phobia of native modules. If you published a package that required node-gyp and a C++ compiler, you’d be greeted with a barrage of GitHub issues from developers on Windows or restricted CI environments who couldn't get the thing to build.

C++ addons were brittle. One wrong pointer and you’d get a segmentation fault that brought down the entire process without so much as a stack trace.

Then Rust arrived, and with it, napi-rs.

Suddenly, the bridge between the high-level world of JavaScript and the bare-metal world of systems programming became memory-safe, strongly typed, and—most importantly—easy to distribute. We’re at a point where the "Standard Library" of a modern Node.js application is effectively the entire Crates.io ecosystem. If you need fast TOML parsing, heavy-duty image processing, or complex cryptography, you don't look for a JS library anymore. You look for the Rust equivalent.

Why Rust is Winning the Node.js Ecosystem

It isn't just about raw speed. If it were only about speed, C++ would have won a decade ago. Rust wins because of Memory Safety and Tooling.

In a traditional C++ Node addon, you are constantly dancing with the V8 garbage collector. If you hold a reference to a JS object longer than you should, or if you don't handle HandleScopes correctly, the VM crashes. Rust’s ownership model maps surprisingly well to the requirements of the Node-API (N-API).

napi-rs allows us to write code like this:

use napi_derive::napi;

#[napi]
pub fn fibonacci(n: u32) -> u32 {
  match n {
    0 => 0,
    1 => 1,
    _ => fibonacci(n - 1) + fibonacci(n - 2),
  }
}

That’s it. No manual header files, no complex binding.gyp configurations. The macro handles the boilerplate of converting JavaScript numbers into Rust integers and back again.
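For comparison, here is the same function in plain JavaScript (a pure-JS sketch, not anything napi-rs generates). It's worth keeping around as a baseline, because for math this trivial V8's JIT already does an excellent job:

```javascript
// Pure-JS reference implementation of the same function.
// For trivial arithmetic like this, the JIT-compiled JS version is
// competitive; the FFI crossing would cost more than the work itself.
function fibonacci(n) {
  return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2);
}

console.log(fibonacci(10)); // 55
```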

The Cost of Crossing the Bridge

I see developers jump into FFI thinking it’s a silver bullet, but they often ignore the "bridge tax." Every time you call a function from JS to Rust, there is an overhead. Data has to be converted, or pointers have to be validated.

If you are calling a Rust function to add two numbers, the overhead of the FFI call will likely make it *slower* than doing it in pure JS. The JIT is incredibly good at optimizing simple math.

The rule of thumb: Only cross the bridge when the work being done on the other side significantly outweighs the cost of the crossing.
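You can see the batching principle behind that rule without any native code at all. The sketch below is pure JS with made-up function names; the point is the two call shapes. With a real FFI boundary, shape A pays the crossing cost once per element, while shape B pays it once per batch:

```javascript
// Two call shapes for the same work. Across an FFI boundary, shape B
// amortizes the fixed per-call cost over the whole batch.

// Shape A: one (imaginary) boundary crossing per element.
function sumOneByOne(add, values) {
  let total = 0;
  for (const v of values) total = add(total, v); // N crossings
  return total;
}

// Shape B: a single crossing that hands over the whole batch.
function sumBatched(addAll, values) {
  return addAll(values); // 1 crossing
}

const values = [1, 2, 3, 4, 5];
const add = (a, b) => a + b;          // stand-in for a native add()
const addAll = (vs) => vs.reduce((a, b) => a + b, 0); // stand-in for a native addAll()

console.log(sumOneByOne(add, values)); // 15
console.log(sumBatched(addAll, values)); // 15
```

Design your native API around shape B: hand over arrays, buffers, and batches, not individual values.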

Example: Heavy Data Transformation

Let's look at a scenario where FFI shines: calculating a hash for a massive set of objects. In JS, you'd iterate over an array, stringify, and hash. In Rust, the same loop runs natively; for raw binary data you can go further with the zero-copy buffers covered in the next section.

Here is a practical look at how we might handle a heavy computational task:

use napi::bindgen_prelude::*;
use napi_derive::napi;
use sha2::{Sha256, Digest};

#[napi]
pub fn bulk_hash(input: Vec<String>) -> Vec<String> {
    input.into_iter().map(|s| {
        let mut hasher = Sha256::new();
        hasher.update(s.as_bytes());
        let result = hasher.finalize();
        format!("{:x}", result)
    }).collect()
}

On the JavaScript side, you call this like any other async or sync function:

// napi-rs camelCases exported names, so bulk_hash arrives as bulkHash,
// and the generated index.js loads the right .node binary for the platform
import { bulkHash } from './index.js';

const data = Array(100000).fill("some heavy data string to hash");

console.time('Rust Hashing');
const hashes = bulkHash(data);
console.timeEnd('Rust Hashing');

The magic here isn't just the hashing speed. It's that napi-rs automatically handles the conversion of the JavaScript Array of Strings into a Rust Vec<String>.

Shared Memory: The Secret Sauce

One of the biggest performance killers in Node.js is moving large chunks of data (like images or large JSON blobs) between the JS heap and the native layer. If you copy the data, you lose.

The modern way to handle this is via Buffers or TypedArrays, which allow Rust and JS to look at the same memory addresses.
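The key property is that a Buffer and any TypedArray view built over the same backing store alias the same bytes. A quick pure-JS demonstration:

```javascript
// A Buffer and a Uint8Array constructed over the same ArrayBuffer
// alias the same memory: writes through one are visible via the other.
const buf = Buffer.alloc(4);
const view = new Uint8Array(buf.buffer, buf.byteOffset, buf.length);

view[0] = 255;
console.log(buf[0]); // 255

buf[1] = 42;
console.log(view[1]); // 42
```

This is exactly the aliasing a native module exploits: Rust receives a pointer and length into that same allocation instead of a copy.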

Suppose you're writing a library to process image pixels. You don't want to pass a million-item array of integers. You want to pass a Buffer.

use napi::bindgen_prelude::*;
use napi_derive::napi;

#[napi]
pub fn invert_colors(mut data: Buffer) {
    // We treat the Buffer as a mutable slice of bytes.
    // No data is copied here; we are editing the JS memory directly.
    for byte in data.iter_mut() {
        *byte = 255 - *byte;
    }
}

And in JS:

import { readFileSync, writeFileSync } from 'fs';
// invert_colors is exposed to JS as invertColors
import { invertColors } from './index.js';

const buf = readFileSync('input.raw');
// The inversion happens in-place in Rust
invertColors(buf);
writeFileSync('output.raw', buf);

This is where FFI becomes the "Standard Library." You are no longer limited by what the V8 heap can handle comfortably. You can allocate memory outside the JS heap (using Rust’s default allocator) and just pass a reference to JS.

The Architecture of a Modern "Standard" Tool

If you look at the tools that have defined the last three years of web development—swc, turbopack, esbuild (Go-based, but similar principle), parcel—they all follow this pattern.

The "Standard Library" for a frontend toolchain is now:
1. A Rust Core: Handles the heavy lifting (AST parsing, minification, bundling).
2. Crate ecosystem: Using nom or swc_ecma_parser instead of writing a parser from scratch.
3. N-API Bindings: Exposing a clean JS API.
4. Prebuilt Binaries: Shipped via npm as optional dependencies.

This last point is crucial. The reason FFI feels like a standard library now is that the distribution problem is solved. Using napi-rs and GitHub Actions, you can cross-compile your Rust code for every major OS and architecture (macOS arm64, Linux x64, Windows, etc.) and ship the binaries as platform-specific packages behind a single npm entry point. The user never even knows Rust is involved; they just know the package is 20x faster than the one written in 2018.
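The mechanics are simple: each platform binary lives in its own npm package, listed as an optionalDependency so npm installs only the one matching the current machine, and a tiny loader picks it at require time. A minimal sketch of the selection logic (the `mylib-*` package names are hypothetical):

```javascript
// Sketch of the loader shipped in the "main" package. Each platform
// binary lives in its own npm package (e.g. mylib-darwin-arm64).
function nativePackageName(platform = process.platform, arch = process.arch) {
  return `mylib-${platform}-${arch}`;
}

// In the real loader you'd then do:
//   const binding = require(nativePackageName());
console.log(nativePackageName('darwin', 'arm64')); // mylib-darwin-arm64
console.log(nativePackageName('linux', 'x64'));    // mylib-linux-x64
```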

The "Gotchas" You Will Encounter

I’d be lying if I said it was all easy. There are specific walls you’ll hit when you start treating Rust as your extended standard library.

1. Async Contexts

Running Rust code synchronously on the main Node.js thread will block the Event Loop. If your Rust function takes 500ms to run, your entire web server is dead for those 500ms.
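You can reproduce the problem with nothing but a synchronous busy-wait; while it spins, no timers, no sockets, and no other requests on the main thread make any progress:

```javascript
// Synchronous work blocks the event loop: nothing else scheduled on
// the main thread runs until this function returns. A long synchronous
// native call behaves exactly the same way.
function busyWaitMs(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {
    // spin
  }
}

const start = Date.now();
busyWaitMs(50);
console.log(Date.now() - start >= 50); // true
```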

You need to use the Task trait in napi-rs to offload work to the libuv thread pool.

use napi::bindgen_prelude::AsyncTask;
use napi::{Env, JsString, Result, Task};
use napi_derive::napi;
use sha2::{Digest, Sha256};

pub struct AsyncHasher {
    pub input: String,
}

impl Task for AsyncHasher {
    type Output = String;
    type JsValue = JsString;

    fn compute(&mut self) -> Result<Self::Output> {
        // This runs on a background thread
        let mut hasher = sha2::Sha256::new();
        hasher.update(self.input.as_bytes());
        Ok(format!("{:x}", hasher.finalize()))
    }

    fn resolve(&mut self, env: Env, output: Self::Output) -> Result<Self::JsValue> {
        // This runs back on the main JS thread
        env.create_string(&output)
    }
}

// Exposed to JS
#[napi]
pub fn async_hash(input: String) -> AsyncTask<AsyncHasher> {
    AsyncTask::new(AsyncHasher { input })
}

2. String Encoding

JavaScript strings are UTF-16. Rust strings are UTF-8. Every time you pass a string from JS to Rust, it undergoes a conversion. For small strings, you won't notice. If you're passing a 100MB string, the conversion cost will be higher than the actual work you're doing in Rust.

The Fix: Use Buffer or Uint8Array if you're dealing with massive amounts of text.
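The mismatch is easy to observe from JS: a string's .length counts UTF-16 code units, while Buffer.byteLength counts the UTF-8 bytes the FFI boundary has to produce. Encoding once up front and passing bytes avoids paying that conversion on every call:

```javascript
// JS strings are UTF-16 internally; crossing into Rust means a
// re-encode to UTF-8. The sizes diverge as soon as text leaves ASCII.
const s = 'héllo';
console.log(s.length);                     // 5 UTF-16 code units
console.log(Buffer.byteLength(s, 'utf8')); // 6 UTF-8 bytes ('é' takes 2)

// Encode once, then hand the bytes across the boundary as many
// times as you like without re-converting.
const bytes = Buffer.from(s, 'utf8');
console.log(bytes.length); // 6
```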

3. The Binary Size

Rust binaries are "thick." Adding a simple Rust dependency can add 1MB or 2MB to your npm package size. In the world of backend Node.js, this rarely matters. In the world of Lambda functions or Edge functions, it’s something to watch.

When Should You Not Use FFI?

Despite the title of this post, you shouldn't rewrite everything in Rust. I’ve seen teams waste weeks porting business logic to Rust only to find it was slower because of the serialization overhead.

Don't use FFI if:
* You are doing simple I/O (Node's fs and net are already backed by high-performance C++).
* Your data structures are highly nested and complex (serializing them across the bridge is a nightmare).
* The logic changes every day. Rust's compile times will slow your iteration cycle compared to JS.

Do use FFI if:
* You're doing heavy math (Signal processing, Cryptography).
* You're parsing huge files (CSV, JSON, custom formats).
* You need to interface with a system-level library that only has C/Rust headers.
* You need predictable performance without GC spikes.

The Future: Toward a More Unified Runtime

We’re seeing the lines blur even further. Bun and Deno have built-in FFI support that doesn't even require a compilation step like napi-rs—they can call dynamic libraries (.so, .dll, .dylib) directly using JIT-generated wrappers.

// Example of Bun's FFI - no "build step" required
import { dlopen, FFIType } from "bun:ffi";

const { symbols: { add } } = dlopen("libadd.so", {
  add: {
    args: [FFIType.i32, FFIType.i32],
    returns: FFIType.i32,
  },
});

console.log(add(1, 2));

This is the "Standard Library" dream realized. The language itself becomes less of a silo and more of a coordinator.

Wrapping Up

The era of struggling to make JavaScript do things it wasn't designed for is ending. We don't need to wait for a new TC39 proposal to get a faster way to process binary data or a more robust way to handle multi-threading.

By embracing FFI as the new standard library, we acknowledge that the best tool for the job might not be JavaScript, but we can still use JavaScript to hold the whole thing together. The bridge is thin, the safety is there, and the performance gains are too large to ignore. If you haven't yet, it's time to set up napi-rs and see what your Node apps are actually capable of when they have a system language backing them up.