Most Vercel deployment errors come down to seven categories: missing environment variables, TypeScript/ESLint build failures, Function timeouts, memory limits, production-only 404s, stale cache, and CORS misconfiguration. This guide covers all of them with concrete fixes.
When I was deploying 32blog.com to Vercel, the build would die the moment I pushed — even though npm run build passed locally. "Works locally" does not mean "works on Vercel." Environment variables, Node.js version differences, cache behavior, Function limits — the gap is bigger than it looks.
This article covers the 7 most common error patterns when deploying Next.js to Vercel, with concrete fixes for each one. Use Ctrl+F to search for the error message you're seeing.
Build Passes Locally but Fails on Vercel
The most common scenario. The root cause is almost always either missing environment variables or checks that only run during next build, such as TypeScript compilation and ESLint.
Missing Environment Variables
Locally, values in .env.local are loaded automatically. On Vercel, you have to add them explicitly in the dashboard — they don't get picked up otherwise.
When a build-time code path reads a missing variable, you get errors like these:
Error: Missing required environment variable: DATABASE_URL
Build failed with errors
Or the variable silently becomes undefined and the build crashes downstream:
TypeError: Cannot read properties of undefined (reading 'split')
Fix: Go to Vercel dashboard → Project → Settings → Environment Variables and add everything from your .env.local.
# Check your local .env.local
cat .env.local
# Every variable here needs to be added to Vercel
DATABASE_URL=postgresql://...
NEXT_PUBLIC_API_URL=https://api.example.com
AUTH_SECRET=...
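To catch a missing variable at the top of the build log instead of in a cryptic `undefined` crash downstream, you can validate required variables where they're first read. A minimal sketch (the helper name and file path are illustrative, not a Vercel or Next.js API):

```typescript
// lib/env.ts (illustrative) — fail fast with a clear message instead of
// letting a missing variable surface as "undefined" somewhere downstream
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: read required variables once, at module top level, so a missing
// value fails the build immediately with the variable's name in the log:
//   export const DATABASE_URL = requireEnv("DATABASE_URL");
```

The error message now names the exact variable, which makes the Vercel build log actionable instead of mysterious.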
After adding environment variables, don't just click "Redeploy" — use "Redeploy without cache" or push a new commit to ensure the build picks up the new values.
TypeScript Type Errors
When running npm run dev, Next.js (with Turbopack) skips full type checking for speed. Vercel runs next build, which does include TypeScript compilation, so type errors that were silently ignored locally will break your Vercel build.
Type error: Property 'name' does not exist on type 'User | null'
Fix: Run the type check locally before pushing.
# Run before every push
npx tsc --noEmit
# Fix the errors, then push
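For the `User | null` error quoted above, the fix is usually a null check that lets TypeScript narrow the union before you access the field. A minimal sketch (the `User` shape here is illustrative):

```typescript
type User = { name: string };

// "Property 'name' does not exist on type 'User | null'" means
// TypeScript wants the null case handled before accessing fields
function greet(user: User | null): string {
  if (user === null) {
    return "Hello, guest";
  }
  // Inside this branch, user is narrowed from User | null to User
  return `Hello, ${user.name}`;
}
```

Narrowing fixes the error at the source, unlike `ignoreBuildErrors`, which merely hides it.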
You can disable type checking in next.config.ts, but this degrades production code quality and isn't recommended.
// next.config.ts (not recommended: disables type checking)
const nextConfig = {
  typescript: {
    ignoreBuildErrors: true, // only if absolutely necessary
  },
};

export default nextConfig;
ESLint Errors
next build runs ESLint as part of the build. Errors that don't surface during npm run dev will break the Vercel build.
./app/components/Button.tsx
ESLint: 'onClick' is defined but never used. (no-unused-vars)
Failed to compile.
Fix: Run lint locally before pushing.
# Check lint before pushing
npm run lint
# Auto-fix what's fixable
npm run lint -- --fix
You can temporarily disable lint during builds in next.config.ts, but treat it as technical debt to fix immediately.
// next.config.ts
const nextConfig = {
  eslint: {
    ignoreDuringBuilds: true, // temporary only
  },
};

export default nextConfig;
Function Timeout Errors
Vercel Functions have a timeout ceiling that depends on your plan. With Fluid Compute (default for all new projects since April 2025), the limits increased substantially:
| Plan | Default Timeout | Maximum Timeout |
|---|---|---|
| Hobby | 300s (5 min) | 300s (5 min) |
| Pro | 300s (5 min) | 800s (13 min) |
| Enterprise | 300s (5 min) | 800s (13 min) |
Error: Task timed out after 300.00 seconds
Finding the Cause
Open the Functions tab in your Vercel dashboard. You can see execution times for each function. Even with 300s available, functions approaching the limit need attention.
Common causes:
- Slow external API calls
- Unoptimized database queries
- Processing large files in memory
- Infinite loops or unintended recursion
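For the slow-external-call case, one mitigation is to cap each upstream call well below your function's limit so a single hung request can't consume the entire budget. A minimal sketch using Promise.race semantics (the helper name and 5-second budget are arbitrary examples, not a Vercel API):

```typescript
// Reject if the wrapped promise doesn't settle within `ms` milliseconds,
// so one hung upstream call can't eat the entire function timeout
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Timed out after ${ms}ms`)),
      ms
    );
    promise.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); }
    );
  });
}

// Usage (illustrative URL): fail fast instead of waiting out the limit
// const res = await withTimeout(fetch("https://api.example.com/slow"), 5000);
```

Failing fast also gives you a meaningful error in the Functions log instead of a generic "Task timed out".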
Fix 1: Set maxDuration
// app/api/heavy-task/route.ts
export const maxDuration = 60; // up to 300s on Hobby, 800s on Pro

export async function GET() {
  const result = await heavyDatabaseQuery();
  return Response.json(result);
}
Fix 2: Move heavy work to a background queue
Don't run slow operations synchronously in API routes. Offload them to background jobs using Vercel Cron Jobs or an external service like Upstash QStash.
// app/api/trigger-job/route.ts
import { Client } from "@upstash/qstash";

const qstash = new Client({ token: process.env.QSTASH_TOKEN! });

export async function POST(request: Request) {
  const body = await request.json();
  // Enqueue the heavy work (VERCEL_URL has no protocol, so prepend https://)
  await qstash.publishJSON({
    url: `https://${process.env.VERCEL_URL}/api/process-job`,
    body: body,
  });
  // Return immediately
  return Response.json({ status: "queued" });
}
Fix 3: Use Node.js Runtime (Edge is now legacy)
Vercel now recommends Node.js over Edge Runtime for most use cases. With Fluid Compute, Node.js functions start quickly and have higher memory/timeout limits. Edge Runtime is considered legacy — it's limited to 25 seconds for the initial response and has a 1–2 MB code size limit.
// app/api/fast-api/route.ts
// Node.js runtime is the default — no config needed
// Only use Edge if you specifically need edge-location execution
export async function GET() {
  return Response.json({ status: "ok" });
}
Out of Memory (OOM) Errors
When a function runs out of memory during build or execution, you'll see something like this:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Or in Vercel logs:
Error: Function exceeded memory limit
With Fluid Compute, the memory limits are: 2 GB / 1 vCPU (Hobby), up to 4 GB / 2 vCPU (Pro/Enterprise).
Build-time Memory Issues
Large projects or apps that statically generate many pages at build time can hit memory limits.
# Increase Node.js heap size for the build
NODE_OPTIONS="--max-old-space-size=4096" npm run build
You can set this as an environment variable in the Vercel dashboard:
# Vercel dashboard > Environment Variables
NODE_OPTIONS = --max-old-space-size=4096
Runtime Memory Issues
API routes that load large datasets into memory can cause OOM errors at runtime.
// Bad: loads 100MB of data into memory at once
export async function GET() {
  const hugeData = await fetchAllRecords();
  return Response.json(hugeData);
}

// Good: stream the response instead
export async function GET() {
  const stream = new ReadableStream({
    async start(controller) {
      const records = await fetchRecordsInBatches();
      for await (const batch of records) {
        controller.enqueue(
          new TextEncoder().encode(JSON.stringify(batch) + "\n")
        );
      }
      controller.close();
    },
  });
  return new Response(stream, {
    // one JSON object per line, so application/x-ndjson is the accurate type
    headers: { "Content-Type": "application/x-ndjson" },
  });
}
404 Errors That Only Appear in Production
Pages that work locally but return 404 in production are usually caused by incomplete static generation or middleware misconfiguration.
Incomplete generateStaticParams
// app/blog/[slug]/page.tsx
export async function generateStaticParams() {
  const posts = await getAllPosts();
  return posts.map((post) => ({
    slug: post.slug,
  }));
}
Any slug not returned by generateStaticParams will 404 unless dynamicParams is true (the default).
// app/blog/[slug]/page.tsx
// dynamicParams defaults to true — ungenerated paths are rendered on-demand
export const dynamicParams = true;
// Setting false means only pre-generated paths work; everything else 404s
next-intl Routing Misconfiguration
If you're using next-intl for i18n, locale config mismatches can cause 404s in production. For a full setup walkthrough, see Complete Guide to Next.js i18n with next-intl.
// middleware.ts
import createMiddleware from "next-intl/middleware";

export default createMiddleware({
  locales: ["ja", "en", "es"],
  defaultLocale: "ja",
  localePrefix: "as-needed", // default locale has no prefix
});

export const config = {
  matcher: ["/((?!api|_next|_vercel|.*\\..*).*)"],
};
Cache Stuck in a Stale State
Vercel's caching behavior can produce unexpected results, especially when mixing fetch cache with ISR (Incremental Static Regeneration). For a deeper dive into how caching works in App Router, see the SSR guide.
// No explicit cache strategy — intent unclear
export async function getArticles() {
  const res = await fetch("https://api.example.com/articles");
  // Next.js 15+ defaults to no-store, but being explicit avoids confusion
  return res.json();
}

// Be explicit about your cache strategy
export async function getArticles() {
  const res = await fetch("https://api.example.com/articles", {
    next: { revalidate: 60 }, // revalidate every 60 seconds
  });
  return res.json();
}

// Or disable cache entirely for real-time data
export async function getLatestPrice() {
  const res = await fetch("https://api.example.com/price", {
    cache: "no-store",
  });
  return res.json();
}
On-Demand Revalidation
To invalidate the cache immediately when content changes, use the Revalidation API.
// app/api/revalidate/route.ts
import { revalidatePath, revalidateTag } from "next/cache";
import { NextRequest } from "next/server";

export async function POST(request: NextRequest) {
  const token = request.nextUrl.searchParams.get("token");
  if (token !== process.env.REVALIDATION_TOKEN) {
    return Response.json({ error: "Invalid token" }, { status: 401 });
  }

  const body = await request.json();
  if (body.path) {
    revalidatePath(body.path);
  }
  if (body.tag) {
    revalidateTag(body.tag);
  }
  return Response.json({ revalidated: true });
}
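A CMS webhook or deploy script can then call this endpoint whenever content changes. A sketch of the calling side, with the request construction kept as a pure function so it's easy to test (the host, token, and path values are placeholders):

```typescript
// Build the revalidation request for a given content path.
// Pure function: no network access, just URL and body construction.
function buildRevalidateRequest(
  host: string,
  token: string,
  path: string
): Request {
  const url = new URL("/api/revalidate", host);
  url.searchParams.set("token", token);
  return new Request(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ path }),
  });
}

// Usage (placeholder values):
// await fetch(
//   buildRevalidateRequest(
//     "https://32blog.com",
//     process.env.REVALIDATION_TOKEN!,
//     "/blog/my-post"
//   )
// );
```

Keeping the token in a query parameter matches the Route Handler above; for stricter setups you could move it to an Authorization header instead.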
CORS Errors That Only Show Up in Production
When your frontend calls an API and only gets CORS errors in production:
Access to fetch at 'https://api.example.com' from origin 'https://32blog.com'
has been blocked by CORS policy
Add CORS headers to your API routes.
// app/api/data/route.ts
export async function GET() {
  const response = await fetchData();
  return Response.json(response, {
    headers: {
      "Access-Control-Allow-Origin": "https://32blog.com",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, Authorization",
    },
  });
}

// Handle preflight requests
export async function OPTIONS() {
  return new Response(null, {
    headers: {
      "Access-Control-Allow-Origin": "https://32blog.com",
      "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type, Authorization",
    },
  });
}
Or configure headers globally in next.config.ts.
// next.config.ts
const nextConfig = {
  async headers() {
    return [
      {
        source: "/api/:path*",
        headers: [
          { key: "Access-Control-Allow-Origin", value: "https://32blog.com" },
          { key: "Access-Control-Allow-Methods", value: "GET,POST,OPTIONS" },
          { key: "Access-Control-Allow-Headers", value: "Content-Type" },
        ],
      },
    ];
  },
};

export default nextConfig;
Access-Control-Allow-Origin: * allows all origins. That's fine for public APIs, but APIs that require authentication should restrict access to specific origins only.
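If more than one trusted frontend needs access, a common pattern is to echo the request's Origin header back only when it appears on an allowlist. A minimal sketch (the allowlisted origins below are examples):

```typescript
// Return CORS headers only when the request origin is explicitly trusted.
// Untrusted origins get no Access-Control-Allow-Origin header at all.
const ALLOWED_ORIGINS = ["https://32blog.com", "https://staging.example.com"];

function corsHeaders(origin: string | null): Record<string, string> {
  if (!origin || !ALLOWED_ORIGINS.includes(origin)) {
    return {};
  }
  return {
    "Access-Control-Allow-Origin": origin,
    // Vary tells caches that the response differs per Origin
    "Vary": "Origin",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
    "Access-Control-Allow-Headers": "Content-Type, Authorization",
  };
}

// Usage in a route handler (sketch):
// return Response.json(data, {
//   headers: corsHeaders(request.headers.get("origin")),
// });
```

The Vary: Origin header matters here: without it, a CDN could cache a response for one origin and serve it to another.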
FAQ
Why does my build pass locally but fail on Vercel?
The most common cause is missing environment variables. Variables in .env.local are loaded automatically in local dev, but Vercel requires them to be added explicitly in the dashboard. The second most common cause is TypeScript type errors — npm run dev with Turbopack skips type checking, but next build on Vercel does not. Run npx tsc --noEmit locally before every push.
What are the current Function timeout limits on Vercel?
With Fluid Compute (default since April 2025), all plans default to 300 seconds. Pro/Enterprise plans can extend to 800 seconds using maxDuration. Legacy projects not on Fluid Compute still use the old limits (Hobby: 10s, Pro: 300s).
Should I use Edge Runtime or Node.js Runtime?
Vercel now recommends Node.js over Edge Runtime for most use cases. With Fluid Compute, Node.js functions have fast cold starts, higher memory limits (up to 4 GB), and longer timeouts (up to 800s). Edge Runtime is limited to 25 seconds for the initial response and 1–2 MB code size. Use Edge only when you specifically need edge-location execution.
How do I fix "Function exceeded memory limit" errors?
The default memory limit is 2 GB with Fluid Compute (up to 4 GB on Pro/Enterprise). For build-time OOM, set NODE_OPTIONS=--max-old-space-size=4096 as an environment variable. For runtime OOM, switch to streaming responses instead of loading everything into memory at once.
Why do I get 404 errors only in production?
Usually caused by incomplete generateStaticParams or middleware misconfiguration. If dynamicParams is false, any path not returned by generateStaticParams will 404. Also check that your next-intl middleware matcher correctly excludes _next, api, and static file paths.
How do I force Vercel to clear its cache?
Use "Redeploy without cache" from the Vercel dashboard, or trigger on-demand revalidation via revalidatePath() or revalidateTag() in a Route Handler. For ISR pages, setting an explicit revalidate interval ensures the cache refreshes automatically.
What changed with Vercel Fluid Compute?
Fluid Compute (default since April 2025) replaced the traditional AWS Lambda model. Key changes: timeout defaults went from 10–15s to 300s, memory increased from 1 GB to 2 GB (4 GB max on Pro), billing shifted to Active CPU time (idle I/O wait doesn't count), and Edge Runtime is now considered legacy.
Wrapping Up
Here's a quick reference for the most common Vercel deployment errors:
| Error | Cause | Fix |
|---|---|---|
| Build failure | Missing env vars, type errors, ESLint errors | Add vars to Vercel. Run tsc --noEmit before pushing |
| Function Timeout | Slow processing, infinite loops | maxDuration (up to 800s Pro), async queue |
| OOM Error | Memory exhaustion (2 GB default) | NODE_OPTIONS=--max-old-space-size=4096, streaming |
| 404 (production only) | Incomplete static params, bad matcher | Check dynamicParams, fix matcher pattern |
| Stale cache | Unclear or implicit fetch cache strategy | Explicitly set revalidate or cache: "no-store" |
| CORS Error | Missing CORS headers | Add headers to API routes or next.config.ts |
"Works locally" is not "works on Vercel." Running tsc --noEmit and npm run lint before every push will eliminate half of all build-related errors. For the rest, Vercel's Functions logs and the environment variables checklist will get you there.