
JSON Is No Longer Lossy
The new source-text access in JSON.parse finally solves the BigInt rounding nightmare without the performance tax of a custom parser.
I’ve spent way too many hours staring at a debugger, wondering why a database ID that ended in a 7 suddenly ended in a 0 the moment it hit the browser. It’s a quiet, frustrating kind of gaslighting that every JavaScript developer eventually encounters: the moment JSON.parse decides your 64-bit integer is "close enough" to a float and rounds it into oblivion.
For a long time, parsing JSON in JavaScript was effectively lossy. If your backend sent a number larger than Number.MAX_SAFE_INTEGER (9,007,199,254,740,991), the native parser would just shrug and silently round away the low digits. We were left with two bad choices: ask the backend team to send everything as strings (and listen to them grumble) or pull in a heavy custom parser that slowed down every network request.
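For reference, the string workaround looked something like this (the payload shape here is hypothetical). The digits survive, but every consumer of the data has to remember to convert by hand:

```javascript
// Hypothetical payload: the backend serializes the 64-bit ID as a string
const raw = '{"id": "9007199254740993"}';

const parsed = JSON.parse(raw);

// The digits survive the round trip, but conversion is now manual
const id = BigInt(parsed.id);
console.log(id); // 9007199254740993n
```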
But things changed recently. The ECMAScript spec quietly added source-text access to JSON.parse, and it fundamentally changes how we handle data.
The problem with the "Float-First" mentality
JavaScript numbers are 64-bit floats. JSON, by specification, doesn't actually have a limit on number size, but JSON.parse has historically converted everything to a JavaScript number immediately.
Look at this classic disaster:
const rawResponse = '{"id": 9007199254740993}';
const parsed = JSON.parse(rawResponse);
console.log(parsed.id);
// Output: 9007199254740992 (Oops, we lost a digit)

The moment that string was parsed, the data was corrupted. By the time your application logic touched it, the original value was gone.
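You can't recover the digits after the fact, but Number.isSafeInteger at least lets you detect that a value has strayed into the danger zone. A quick sketch:

```javascript
const parsed = JSON.parse('{"id": 9007199254740993}');

// The value has already been rounded to the nearest representable float...
console.log(parsed.id); // 9007199254740992

// ...but you can at least flag that it's outside the trustworthy range
console.log(Number.isSafeInteger(parsed.id)); // false
```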
The New Way: Source-Text Access
The JSON.parse method has always had a "reviver" function—a second argument that lets you transform values as they are being parsed. However, until recently, the reviver only gave you the *already-parsed* (and thus already corrupted) value.
The new proposal (shipped in V8-based browsers and Node.js 21+) adds a third argument to the reviver: a context object that exposes the original source text of the value being parsed.
Here is how you actually fix the BigInt nightmare:
const rawResponse = '{"id": 9007199254740993, "name": "Alice"}';
const parsed = JSON.parse(rawResponse, (key, value, context) => {
  // If the value is a number, we check the original source text
  if (typeof value === 'number' && context.source) {
    // BigInt() throws on non-integer source text like "1.5",
    // so only promote values that are pure digits
    if (/^-?\d+$/.test(context.source) &&
        BigInt(context.source) > BigInt(Number.MAX_SAFE_INTEGER)) {
      return BigInt(context.source);
    }
  }
  return value;
});
console.log(parsed.id);
// Output: 9007199254740993n (Success! It's a BigInt)

Why this is a big deal
Previously, if you wanted to handle BigInts, you had to use libraries like json-bigint. These libraries usually work by re-implementing the entire JSON parser in JavaScript. On a large JSON payload (like a 5MB autocomplete dump), a custom parser can be 10x slower than the native JSON.parse.
With source-text access, you get the native C++ parsing speed for the structure of the object, and you only pay a tiny tax for the specific fields you want to "revive."
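If you want to see the tax for yourself, here is a rough micro-benchmark sketch. The payload shape is made up for illustration, and the absolute numbers will vary by machine and engine; the point is only that the reviver runs once per key/value pair on top of the native parse:

```javascript
// Synthetic payload: 100,000 small objects, purely for illustration
const payload = JSON.stringify(
  Array.from({ length: 100_000 }, (_, i) => ({ id: i, name: `row-${i}` }))
);

console.time('native parse');
JSON.parse(payload);
console.timeEnd('native parse');

// An identity reviver still gets invoked for every key/value pair
console.time('parse + reviver');
const rows = JSON.parse(payload, (key, value) => value);
console.timeEnd('parse + reviver');
```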
It’s not just for BigInts
Think about high-precision decimals in fintech. If a bank API sends a balance of 1234.5600000000000001, standard JavaScript floats might mangle that. With source-text access, you can pipe that raw string directly into a library like Decimal.js or Big.js.
const data = '{"balance": 0.00000000000000000001}';
const parsed = JSON.parse(data, (key, value, context) => {
  if (key === 'balance') {
    // Keep it as a string or pass to a Decimal library
    return context.source;
  }
  return value;
});
console.log(parsed.balance); // "0.00000000000000000001" (Exact string preserved)

Some "Gotchas" to keep in mind
While this is amazing, it’s not a magic "fix everything" button. You have to be intentional.
1. The Context Object: The source property is only present when the value being revived is a primitive (string, number, boolean, or null). Objects and arrays still trigger the reviver, but the context object they receive has no source.
2. Performance: While way faster than a custom parser, the reviver function still runs for every single key-value pair in your JSON. If you have a massive array of 100,000 objects, don't do complex regex checks inside the reviver unless you absolutely have to.
3. The "Number" check: I usually check typeof value === 'number' first. This ensures I'm not accidentally running BigInt logic on strings (whose source text includes the surrounding quotes), which would just throw an error and ruin my day.
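To make gotcha #1 concrete, here's a quick probe of what the reviver actually sees. This assumes a runtime with source-text access (e.g. Node.js 21+); on older runtimes the third argument simply isn't passed:

```javascript
const seen = [];

JSON.parse('{"n": 1, "arr": [true]}', (key, value, context) => {
  const label = key === '' ? '(root)' : key;
  // `source` only exists for primitives; objects and arrays get a bare context
  seen.push(`${label}: ${context && 'source' in context ? context.source : 'no source'}`);
  return value;
});

console.log(seen);
// On a supporting runtime, this logs something like:
// [ 'n: 1', '0: true', 'arr: no source', '(root): no source' ]
```

Note the order: the reviver walks children before parents, with the root object (whose key is the empty string) visited last.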
A Practical Helper Function
If you're tired of writing the same boilerplate, you can wrap this into a safe utility. I’ve started using something like this in my internal tools:
function safeParse(jsonString) {
return JSON.parse(jsonString, (key, value, context = {}) => {
  // Default to {} so this doesn't crash on runtimes without source-text access
  const { source } = context;
  if (typeof value !== 'number') return value;

  // If it's a number, check if it's outside safe integer bounds
  const isOutsideSafeRange =
    value > Number.MAX_SAFE_INTEGER ||
    value < Number.MIN_SAFE_INTEGER;

  if (isOutsideSafeRange && source) {
    try {
      return BigInt(source);
    } catch {
      return value; // Fallback for decimals and exponent notation
    }
  }
  return value;
});
}

Wrapping Up
The lossy nature of JSON was one of those "just deal with it" parts of JavaScript for decades. We built elaborate workarounds, used long.js, or forced our backends to change their data types just to accommodate the browser's limitations.
With the addition of context.source, JSON in JavaScript is finally "lossless." We can have our native performance and our 64-bit precision too. It's a small change to the API, but it solves a massive architectural headache. If you’re still using a custom JSON parser for BigInt support, it might be time to delete that dependency and go native.


