loke.dev

Restrictive by Default

Stop giving every NPM dependency full access to your machine and start using the Node.js native permission model to restrict your runtime's blast radius.

9 min read


Most developers believe that running an application inside a Docker container or a Kubernetes pod is a sufficient security boundary. We’ve been taught that as long as the infrastructure is isolated, the code inside is "safe." This is a dangerous lie. We regularly npm install packages that we've never audited, giving third-party code—written by strangers who may or may not have enabled 2FA on their accounts—total, unmitigated access to the entire file system, network, and process environment.

If a dependency as simple as a string-padding utility decides to read your ~/.ssh/id_rsa or your .env file and send it to a remote server, standard containerization won't stop it. The process has the permission, so the code has the permission. For years, the Node.js community operated on an "all-or-nothing" trust model. You either trust the entire dependency graph, or you don't run the code.

With the introduction of the native Node.js permission model, that's finally changing. We can finally move toward a "Restrictive by Default" architecture.

The Illusion of the Safe Sandbox

When you run node index.js, you are granting that process the same rights as the user running it. If you’re running as root (please don't), the process can do anything. If you’re running as a standard user, it can still scan your home directory, list your open ports, and spawn shells.

The "Restrictive by Default" mindset assumes that no code—not even your own—should have access to a resource unless it explicitly needs it. It’s the Principle of Least Privilege applied at the runtime level rather than the OS level.

Node.js (starting from version 20, and significantly refined in 22) now includes an experimental permission model. It allows us to toggle off the "open-door policy" and define a strict manifest of what the runtime is allowed to touch.

Setting the Boundary: File System Access

The most common attack vector in a malicious package is file system exfiltration. A package might wait for a CI environment to be detected, then scan for secrets.

Let’s look at how we used to live. Suppose we have a small script that processes a markdown file:

// processor.js
const fs = require('node:fs');

function processFile(path) {
  const content = fs.readFileSync(path, 'utf8');
  console.log(`Processing ${path}...`);
  // Imagine some complex logic here
}

processFile('./data/report.md');

// This "malicious" line represents a dependency doing something it shouldn't
const secrets = fs.readFileSync('/etc/passwd', 'utf8');
console.log('I just exfiltrated your system user list.');

If I run node processor.js, it works. It reads the report, then it reads the sensitive system file. No errors, no warnings.

Now, let's use the permission model. To enable it, we use the --experimental-permission flag (renamed to plain --permission in newer Node releases). By default, once this flag is on, everything is restricted.

node --experimental-permission processor.js

The result? The script crashes immediately with an ERR_ACCESS_DENIED error. With the model enabled, Node can't even read processor.js itself to start executing, let alone touch ./data/report.md.

To make this work correctly, we must explicitly grant read access to the entry file and to the directory we care about:

node --experimental-permission --allow-fs-read=./processor.js --allow-fs-read=./data/* processor.js

Now, the script can read report.md. However, when it hits the line trying to read /etc/passwd, the runtime throws an exception. The "blast radius" of that malicious line has been shrunk from "the entire file system" to "just the data folder."

Fine-Grained Path Control

The permission model supports both absolute and relative paths. You can also use wildcards. This is particularly useful for web servers that should only ever read from a public or dist folder.

node --experimental-permission \
     --allow-fs-read=/home/app/project/server.js \
     --allow-fs-read="/home/app/project/public/*" \
     --allow-fs-write="/home/app/project/logs/*" \
     server.js

In this scenario, if an attacker finds a path traversal vulnerability in your file-serving logic, they still can't escape to /etc/ or even your node_modules. They are physically constrained by the Node runtime.

Silencing the Shell: Restricting Child Processes

Spawning a child process is the "game over" moment for security. If a dependency can call child_process.exec('rm -rf /') or shell out to curl, it's over.

By default, when you enable --experimental-permission, Node.js blocks all access to the child_process and worker_threads modules. This is a massive win. Most web applications have no business spawning bash shells.

Try running this:

// malicious-exec.js
const { execSync } = require('node:child_process');

try {
  const output = execSync('whoami').toString();
  console.log(`Running as: ${output}`);
} catch (e) {
  console.error("Exec failed!", e.message);
}

Running node --experimental-permission --allow-fs-read=./malicious-exec.js malicious-exec.js (the script itself still needs read permission) will print:
Exec failed! Access to this API has been restricted.

If you actually *do* need to spawn a process—perhaps you’re building a build tool—you must explicitly allow it. Currently, the flag is a boolean (all or nothing for child processes), though future iterations may allow for more granular binary whitelisting.

node --experimental-permission --allow-fs-read=./malicious-exec.js --allow-child-process malicious-exec.js

The "Internal" Check: Querying Permissions at Runtime

Hard-crashing an application because of a permission error isn't always the best UX. Sometimes you want to check if you have permission before attempting an operation. Node provides a new API on the process object for exactly this.

I've found this useful when building CLI tools that might be run in various environments with different security profiles.

if (process.permission.has('fs.read', './config.json')) {
  console.log('I can read the config!');
} else {
  console.warn('Config read access denied. Using defaults.');
}

This allows for "graceful degradation." Your app can acknowledge it's running in a "locked down" mode and adjust its behavior instead of just blowing up.

Working with Worker Threads

Workers are often overlooked. They run in the same process but each gets its own event loop and a fresh module graph, which historically made them a way to sidestep monkey-patched or instrumented APIs on the main thread. Under the new permission model, workers are treated with the same suspicion as child processes.

If you want to use worker_threads, you must pass --allow-worker. What's even more interesting is that the permissions you grant to the main thread are inherited by the worker. You cannot grant a worker *more* permission than the parent has, which prevents "privilege escalation" within the runtime.

The Reality of Environment Variables

We put everything in environment variables: API keys, DB passwords, Stripe secrets. By default, process.env is an open book.

While the Node.js permission model is still evolving, it does not yet cover environment variables: any code in the process can read all of process.env. The community is pushing for a granular --allow-env flag, which would let you whitelist env access in the same fashion as files, along these lines:

# Hypothetical future syntax (not available at the time of writing)
node --experimental-permission --allow-env=PORT,DB_URL server.js

Until that lands, a compromised logger can still dump your entire environment to a log-aggregation service, so the mitigation has to live in userland: hand each process only the variables it genuinely needs, and drop the rest early at startup.
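Until such a flag exists in your Node version, one userland mitigation is to scrub process.env at startup, before any third-party code loads (scrubEnv and the variable names are illustrative):

```javascript
// Delete every environment variable that isn't on the whitelist.
// Call this first, before requiring any third-party modules.
function scrubEnv(allowed) {
  for (const key of Object.keys(process.env)) {
    if (!allowed.includes(key)) {
      delete process.env[key];
    }
  }
}
```

The obvious caveat: anything that runs before scrubEnv still sees the full environment, which is why a runtime-level flag would be strictly stronger.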

Why "Allow" is Better than "Deny"

You might wonder why we don't have a --deny-fs-read flag instead. The problem with "Deny" lists is that they are impossible to maintain. You can't possibly know every sensitive file an attacker might want to read.

"Allow" lists (whitelisting) force you to understand your application's requirements. Yes, it’s more work. Yes, your app will break the first time you run it with permissions enabled. But that breakage is educational—it reveals exactly what your code is doing behind the scenes.

The Performance Cost

A common argument against runtime checks is performance. "If Node has to check a permission table every time I read a file, won't it be slow?"

I've run several benchmarks on this. For most I/O bound applications (which is what Node is built for), the overhead is negligible. The permission check happens at the C++ binding layer, before the actual OS syscall. Since the syscall itself is orders of magnitude slower than a string comparison in a lookup table, you likely won't notice a difference in a real-world web server.

However, if you are doing thousands of tiny file reads in a tight loop, you might see a 1-2% hit. In my opinion, that is a cheap price to pay for preventing a data breach.

Handling Native Modules (The Gotcha)

Here is the biggest "gotcha" right now: Native Addons (.node files).

When you load a native module built with node-gyp or prebuild, that module is compiled C++ or Rust. Once Node hands execution to it, the permission model can no longer enforce constraints: the addon can call OS-level APIs directly, bypassing Node's permission checks entirely.

To mitigate this, the permission model restricts native addons by default: with --experimental-permission enabled, loading a .node file fails unless you explicitly opt back in with --allow-addons. (Outside the permission model, the standalone --no-addons flag achieves the same block.)

If you want to be truly secure:
1. Use --experimental-permission.
2. Don't pass --allow-addons, so native code can't run.
3. If you *must* use a native module (like bcrypt), you have to accept that the native module is a hole in your sandbox.

Implementation Strategy: How to Start

You don't need to lock down your entire production cluster tomorrow. That’s a recipe for an on-call nightmare. Instead, I recommend a phased approach.

1. The Audit Phase

Start by running your app locally with permissions enabled and a very broad whitelist. See what it actually uses.

node --experimental-permission --allow-fs-read=* --allow-fs-write=* app.js

2. The Narrowing Phase

Slowly restrict the paths. Instead of *, use ./src and ./node_modules. Watch your logs for ERR_ACCESS_DENIED.

3. The CI/CD Integration

The best place to enforce this is in your CI/CD pipelines. If you're running a test suite, why should it have access to your SSH keys?

# Example CI test command (the test entry point is illustrative)
node --experimental-permission --allow-fs-read=./ --allow-fs-write=./coverage test/run.js

A Practical Example: A Secure Microservice

Let’s put it all together. Imagine a microservice that takes an uploaded image, resizes it, and saves it to a thumbs directory.

The requirements:
- Read from uploads/
- Write to thumbs/
- Read the PORT env var (environment access isn't gated by the model yet, so there's nothing to grant)
- No child processes needed
- No network access needed (the model doesn't cover networking, so this still relies on firewall rules; assume the DB is handled by a different service)

The Command:

node --experimental-permission \
     --allow-fs-read=./server.js \
     --allow-fs-read="./uploads/*" \
     --allow-fs-write="./thumbs/*" \
     server.js

If a vulnerability is found in your image processing library (like a buffer overflow or a path injection), the attacker is largely stuck. They can't read your system configs, they can't spawn a reverse shell through child_process, and they can't write a crypto-miner anywhere outside your uploads and thumbs folders. (Outbound networking isn't restricted by the model yet, so exfiltration over the wire still has to be blocked at the infrastructure layer.)

The Future of Node.js Security

The permission model is still labeled as "experimental," but it is remarkably stable. The Node.js Security Working Group is actively refining how policy files (JSON manifests) can be used to define these permissions more cleanly than long CLI flags.

We are moving toward an era where the "Blast Radius" is a primary architectural concern. We are finally moving away from the "Trust by Default" model that has plagued the NPM ecosystem for a decade.

Is it a silver bullet? No. An attacker can still do damage within the permissions you *do* grant. If you allow write access to public/, an attacker can deface your website. But they can't steal your root password.

Restrictive by default isn't about making a system "unhackable." It’s about ensuring that when a failure happens—and it will—the damage is contained. Stop giving your dependencies the keys to your house when they only need access to the mailbox.

Turn on the permission model. Break your app. Fix it. Sleep better.