
Trust, But Verify: Keeping AI-Generated Data from Breaking My Remix Loaders
A deep dive into why passing LLM output directly to your UI is a recipe for disaster and how to use Zod to build bulletproof AI integrations in Remix.
I remember the first time I integrated OpenAI's GPT-4 into a Remix project. I was building a tool that generated custom workout plans based on user preferences. In my head, I envisioned a seamless pipeline: user submits a form, the action hits the OpenAI API, the LLM returns a beautiful JSON object, and I pass that right into my database and UI.
It worked perfectly... for exactly three hours.
Then, a user asked for a workout "for someone who only has a kettlebell and is shaped like a potato." The LLM, in its infinite wisdom and sense of humor, decided to return the JSON with a slightly different key structure than I expected. Instead of exercises: [], it returned routine: [].
My loader crashed. My UI hit an undefined property. The user saw a white screen of death. I saw my Saturday afternoon disappearing into a black hole of debugging.
Here’s the thing: LLMs are non-deterministic by nature. Even with "JSON Mode" or function calling, they are probabilistic engines, not rigid APIs. If you're building with Remix, where we pride ourselves on type safety and robust data loading, letting raw LLM output touch your frontend is like inviting a wild raccoon into your server room. It might be cute, but it’s going to bite a cable eventually.
The Remix Loader Problem
In a typical Remix app, your loader is the gatekeeper. It fetches data, prepares it, and hands it over to the component.
// The "Dangerous" Way
export const loader = async ({ request }: LoaderFunctionArgs) => {
const aiResponse = await openAI.chat.completions.create({
model: 'gpt-4-turbo-preview',
messages: [{ role: 'user', content: 'Generate a meal plan...' }],
response_format: { type: 'json_object' },
})
const data = JSON.parse(aiResponse.choices[0].message.content)
// ⚠️ If data doesn't match what the component expects,
// the whole route (or nested boundary) explodes.
return json({ plan: data })
}When that data object doesn't match your TypeScript interfaces, you don't get a nice error message. You get a runtime error that bubbles up to your nearest ErrorBoundary. While Remix handles these gracefully, showing a 500 error to a user just because the AI had a "creative moment" is a poor user experience.
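For reference, here's roughly where that failure surfaces: a minimal route-level ErrorBoundary sketch using Remix v2 conventions (the copy is placeholder text).

```tsx
import { isRouteErrorResponse, useRouteError } from '@remix-run/react'

export function ErrorBoundary() {
  const error = useRouteError()

  // Thrown Responses (like the 422 we'll throw later) arrive with a status code.
  if (isRouteErrorResponse(error)) {
    return <p>Something went wrong ({error.status}). Please try again.</p>
  }

  // Everything else (including "Cannot read property 'map' of undefined") lands here.
  return <p>Something went wrong. Please try again.</p>
}
```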
Enter Zod: Your AI Insurance Policy
If you haven't used Zod yet, it’s a TypeScript-first schema declaration and validation library. It allows you to define what your data _should_ look like and then validates it at runtime.
When working with AI, Zod isn't just a utility; it's a necessity. We need to shift our mindset from "Trusting the API" to "Verifying the Payload."
Step 1: Define the Schema
Let’s say we’re building a feature that generates a list of "Must-Watch" movies for a specific genre. We need a strict schema for what a Movie object looks like.
```tsx
import { z } from 'zod'

const MovieSchema = z.object({
  title: z.string().min(1),
  year: z.number().int().min(1888).max(2100),
  rating: z.number().min(0).max(10),
  synopsis: z.string().max(500),
  tags: z.array(z.string()),
})

const RecommendationSchema = z.object({
  genre: z.string(),
  recommendations: z.array(MovieSchema),
})

// We can extract the TypeScript type from the schema!
type Recommendation = z.infer<typeof RecommendationSchema>
```
Step 2: The "Parse or Fail" Pattern
Now, instead of just parsing the JSON and crossing our fingers, we use Zod to validate it. I prefer using safeParse because it doesn't throw an error; instead, it returns an object telling you if the validation succeeded.
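In isolation, the difference looks like this (plain Zod, using the RecommendationSchema from Step 1; rawContent stands in for whatever the LLM returned):

```tsx
declare const rawContent: unknown // stand-in for whatever the LLM handed back

// .parse() throws a ZodError on bad input, so you'd need a try/catch.
// const data = RecommendationSchema.parse(rawContent)

// .safeParse() never throws; it returns a discriminated union instead.
const result = RecommendationSchema.safeParse(rawContent)

if (result.success) {
  console.log(result.data.recommendations.length) // fully typed from here on
} else {
  console.log(result.error.issues) // a structured list of exactly what went wrong
}
```

Wired into a loader, it looks like this: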
```tsx
import { json, type LoaderFunctionArgs } from '@remix-run/node'

export const loader = async ({ params }: LoaderFunctionArgs) => {
  const genre = params.genre
  if (!genre) {
    throw new Response('Not Found', { status: 404 })
  }

  // Hypothetical helper that calls the LLM and returns its JSON, already parsed
  const rawContent = await getAIRecommendations(genre)

  // Here's where the magic happens
  const result = RecommendationSchema.safeParse(rawContent)

  if (!result.success) {
    // Log the error for your eyes only
    console.error('AI returned malformed data:', result.error.format())

    // Throw a 422 or return a fallback
    throw new Response(
      'The AI is having a bit of a moment. Please try again.',
      {
        status: 422,
      }
    )
  }

  // result.data is now fully typed and guaranteed!
  return json({ data: result.data })
}
```
<Callout type="info"> Pro Tip: If the LLM output fails validation, you can actually use the Zod error messages to re-prompt the AI. "Hey, you sent me this JSON but the 'year' field was a string instead of a number. Please fix it." It's surprisingly effective. </Callout>
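That re-prompt loop might look something like the sketch below. It assumes a hypothetical callLLM helper that sends a prompt and returns already-parsed JSON; the retry count and prompt wording are just illustrative.

```tsx
// Hypothetical helper: sends a prompt, returns the model's output parsed as JSON.
declare function callLLM(prompt: string): Promise<unknown>

async function getValidRecommendations(prompt: string, maxAttempts = 2) {
  let validationFeedback = ''

  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const raw = await callLLM(
      validationFeedback
        ? `${prompt}\n\nYour previous JSON failed validation:\n${validationFeedback}\nPlease return corrected JSON only.`
        : prompt,
    )

    const result = RecommendationSchema.safeParse(raw)
    if (result.success) return result.data

    // Feed Zod's own issue list back to the model on the next attempt.
    validationFeedback = JSON.stringify(result.error.issues, null, 2)
  }

  throw new Response('The AI is having a bit of a moment. Please try again.', {
    status: 422,
  })
}
```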
Why This Matters in Remix Specifically
Remix’s architecture is built around the idea that the server is the source of truth. By validating in the loader, we ensure that:
- Type Safety spans the network gap: The type we inferred from RecommendationSchema flows directly into useLoaderData<typeof loader>().
- Clean Error Boundaries: If the AI fails, we catch it _before_ the component starts rendering. This prevents those weird "Cannot read property 'map' of undefined" errors that are a nightmare to track down in production.
- Sanitized Data: If the LLM decides to include extra, unnecessary fields (which they love to do), Zod strips them out (unless you use .passthrough()), keeping your payload to the client slim. There's a quick demo of this right after the list.
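A quick demonstration of that stripping behavior (plain Zod; the extra field is just something a chatty model might tack on):

```tsx
import { z } from 'zod'

const StrictMovie = z.object({ title: z.string() })

const raw = {
  title: 'Arrival',
  director: 'Denis Villeneuve', // extra field the model volunteered
}

// Default behavior: unknown keys are silently stripped.
StrictMovie.parse(raw) // => { title: 'Arrival' }

// Opt in to keeping whatever else the model sent along.
StrictMovie.passthrough().parse(raw) // => { title: 'Arrival', director: 'Denis Villeneuve' }
```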
Handling the "Almost Correct" Output
Sometimes the AI gets 90% of the way there. Maybe it returns a single string instead of an array, or it forgets a required field.
One trick I've started using is Zod's .catch() and .default() methods. They allow the UI to keep functioning even if part of the AI's response is missing or malformed.
```tsx
const MovieSchema = z.object({
  title: z.string().default('Unknown Title'),
  year: z.coerce.number().default(2024), // Coerce helps if the AI sends "2024" as a string
  rating: z.number().catch(5.0), // If rating is missing or weird, default to 5.0
})
```
Using z.coerce is a lifesaver here. LLMs often struggle to distinguish between "10" (string) and 10 (number). Coercion tells Zod, "Hey, if it looks like a number, just make it a number."
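For instance, feeding the schema above some typically messy model output (results shown as comments):

```tsx
const movie = MovieSchema.parse({
  title: 'Dune: Part Two',
  year: '2024',        // a number, but delivered as a string
  rating: 'very good', // not a number at all
})

// movie.year   === 2024  (coerced from the string "2024")
// movie.rating === 5     (the .catch() fallback kicked in)
console.log(movie)
```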
Real-World Example: A Robust AI Loader
Let’s put it all together into something you might actually use in a production Remix app.
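The route below leans on a hypothetical callMyLLMProvider helper. Here's one way it might look, assuming the official openai SDK; swap in whichever provider you actually use.

```tsx
import OpenAI from 'openai'

const openai = new OpenAI() // reads OPENAI_API_KEY from the environment

// Hypothetical helper: returns the model's raw text so the route can parse and validate it.
async function callMyLLMProvider(prompt: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4-turbo-preview',
    messages: [{ role: 'user', content: prompt }],
    response_format: { type: 'json_object' },
  })

  return completion.choices[0].message.content ?? ''
}
```

With that in place, the route itself: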
```tsx
import { json, type LoaderFunctionArgs } from '@remix-run/node'
import { useLoaderData } from '@remix-run/react'
import { z } from 'zod'

const FeedbackSchema = z.object({
  sentiment: z.enum(['positive', 'negative', 'neutral']),
  summary: z.string(),
  suggestedActions: z.array(z.string()).default([]),
})

export const loader = async ({ request }: LoaderFunctionArgs) => {
  // 1. Get raw response (Hypothetical helper)
  const rawAIResponse = await callMyLLMProvider('Analyze this feedback...')

  // 2. Parse JSON safely
  let jsonContent
  try {
    jsonContent = JSON.parse(rawAIResponse)
  } catch (e) {
    throw new Response('AI sent invalid JSON', { status: 500 })
  }

  // 3. Validate with Zod
  const validation = FeedbackSchema.safeParse(jsonContent)

  if (!validation.success) {
    // You could fall back to a generic object here instead of crashing
    return json({
      feedback: {
        sentiment: 'neutral',
        summary: 'Analysis currently unavailable.',
        suggestedActions: [],
      },
    })
  }

  return json({ feedback: validation.data })
}

export default function FeedbackRoute() {
  const { feedback } = useLoaderData<typeof loader>()

  return (
    <div className="p-6">
      <h1 className="text-2xl font-bold">Feedback Analysis</h1>
      <div className={`badge ${feedback.sentiment}`}>
        {feedback.sentiment.toUpperCase()}
      </div>
      <p className="mt-4">{feedback.summary}</p>
      <ul className="list-disc ml-6 mt-2">
        {feedback.suggestedActions.map((action, i) => (
          <li key={i}>{action}</li>
        ))}
      </ul>
    </div>
  )
}
```
The "So What?"
I’ve learned the hard way that when you're building with AI, you're not just a developer anymore; you're a defensive coordinator. You have to assume the AI will fail, hallucinate, or simply ignore your instructions.
Using Zod in your Remix loaders creates a "Validation Boundary" that protects your UI from the chaos of the LLM. It gives you the confidence to use these incredible tools without the constant fear that a single misplaced comma in a generated string will take down your entire app.
So, next time you're about to JSON.parse that OpenAI response, take five minutes to write a Zod schema. Your future self—the one not getting paged at 2 AM—will thank you.
Now, if only I could find a Zod schema to validate my kettlebell-potato body type... but I think that's a problem even GPT-5 won't be able to solve.
Happy coding!

