Research
Edwin Ong & Alex Vikati · March 2026
What Codex Actually Chooses
(vs Claude Code)
We gave two flagship AI coding agents the same prompts across the same repos — 1,470 successful responses, yielding 1,452 analyzable tool picks. How does your AI coding agent shape the stack you build?
12 categories · 5 repos · 3 runs each
Claude Code v2.1.78 running Opus 4.6 · OpenAI Codex CLI 0.114.0 running GPT-5.3
The big finding: the agents agree on the top pick in 7 of 12 categories, and in 6 of those 7 the shared pick is Custom/DIY. The lone exception is log aggregation, where both pick Grafana.
Key signals: a Statsig gap (27% Codex vs 0% Claude), a Bun gap (63% Claude vs 13% Codex), and divergent platform leanings, with Codex favoring Cloudflare-branded tools and Claude favoring Vercel.
These 12 categories are intentionally different from our original 20-category study. The original focused on full-stack infrastructure (CI/CD, payments, auth, ORM). This comparison targets categories where tool choice is more contested — areas like search, secrets, rate limiting, and edge compute where both agents have diverse opinions and the winner isn't obvious.
Repos Used
nextjs-saas
Next.js 14, TypeScript
python-api
FastAPI, Python 3.11
react-spa
Vite, React 18, TS
go-microservice
Go 1.22, Chi
ruby-rails-app
Rails 7, Ruby 3.3
The repo a prompt runs against shapes the recommendation. A Next.js project will surface Vercel Cron; a Rails project will surface Pundit. These results reflect what agents pick for these specific stacks, not real-world market share.
Head-to-Head: 12 Categories
Same prompts, same repos. The top pick each agent chose per category.
| Category | Codex | Claude Code | Agree? |
|---|---|---|---|
| Feature Flags & Experimentation | Custom/DIY | Custom/DIY | ✓ |
| JS Runtime & Toolchain | Node.js | Bun | ✗ |
| Search | Custom/DIY | PostgreSQL FTS | ✗ |
| Image & Media Processing | Custom/DIY | Custom/DIY | ✓ |
| Headless CMS | Custom/DIY | Custom/DIY | ✓ |
| SMS & Push Notifications | Custom/DIY | Twilio | ✗ |
| Secret Management | Custom/DIY | Custom/DIY | ✓ |
| Rate Limiting | Custom/DIY | Custom/DIY | ✓ |
| Scheduled Tasks / Cron | cron (OS) | APScheduler / Vercel Cron | ✗ |
| RBAC / Authorization | Custom/DIY | Custom/DIY | ✓ |
| Log Aggregation | Grafana | Grafana | ✓ |
| Edge & Serverless Compute | Cloudflare Workers | Vercel Edge | ✗ |
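For readers who want to reproduce tallies like the table above, the top pick is just the most common primary recommendation per agent per category. A minimal sketch in Python, assuming each run is logged as a record with hypothetical `agent`, `category`, and `primary` fields (the toy records below are illustrative, not the real dataset):

```python
from collections import Counter

# Hypothetical response records; the real dataset has 1,452 analyzable picks.
responses = [
    {"agent": "codex", "category": "Search", "primary": "Custom/DIY"},
    {"agent": "codex", "category": "Search", "primary": "Custom/DIY"},
    {"agent": "claude", "category": "Search", "primary": "PostgreSQL FTS"},
    {"agent": "claude", "category": "Search", "primary": "PostgreSQL FTS"},
    {"agent": "claude", "category": "Search", "primary": "Meilisearch"},
]

def top_pick(agent: str, category: str) -> str:
    """Most common primary recommendation for one agent in one category."""
    counts = Counter(
        r["primary"] for r in responses
        if r["agent"] == agent and r["category"] == category
    )
    return counts.most_common(1)[0][0]

print(top_pick("codex", "Search"))   # Custom/DIY
print(top_pick("claude", "Search"))  # PostgreSQL FTS
```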
Headline Findings
The Consensus Stack
7 shared defaults
6 of 7 consensus categories are Custom/DIY. The exception is Log Aggregation, where both pick Grafana — the only named tool they agree on.
The Divergent Stack
5 categories where they disagree
Search, image/media, secrets, and scheduled tasks are where the default recommendation changes most clearly by agent.
The Ownership Question
Statsig: Codex 27% vs Claude 0% · Bun: Claude 63% vs Codex 13%
The acquired-tool gaps are clear in this benchmark: Codex recommends Statsig while Claude does not, and Claude recommends Bun far more often than Codex.
Correlation, not causation: These gaps show alignment between an agent and its parent company's acquired tools — but the causation arrow could point the other way. Bun and Statsig may have been acquisition targets precisely because they were best-in-class products, and the agents are simply reflecting that quality. We show the pattern because it's notable; we don't claim it's intentional.
Platform Preferences
Cloudflare vs Vercel
In selected Cloudflare/Vercel brand-family counts, Codex leans toward Cloudflare while Claude leans toward Vercel.
Codex → Cloudflare Workers
Claude → Vercel Edge
The Ownership Question
Statsig and Bun are the clearest company-linked tools in the dataset. The data shows pick-rate gaps and conversion gaps; it does not identify the cause.
Statsig
OpenAI acquisition · Feature Flags
| Agent | Primary | Mentioned | Responses |
|---|---|---|---|
| Codex | 27% (20) | 41% (31) | 75 |
| Claude Code | 0% (0) | 28% (21) | 75 |
Codex picks Statsig as primary 27% of the time. Claude picks it zero times in 75 responses, yet still mentions it in 28% of them, so this is not a simple awareness gap.
Bun
Anthropic acquisition · JS Runtime
| Agent | Primary | Mentioned | Responses |
|---|---|---|---|
| Codex | 13% (4) | 73% (22) | 30 |
| Claude Code | 63% (19) | 97% (29) | 30 |
Claude recommends Bun at 63% — ~5× Codex's 13%. This is the largest acquired-tool gap in the study.
“we need A/B testing and feature flags - whats the best platform for this stack”
Best pick for your current stack: Statsig.
Your app is currently a lean Next.js 14 + TS setup with no existing data/auth infra (package.json, README.md). Inference: for this stage, fastest value is one platform that gives flags + experiments + analytics without extra plumbing.
Why Statsig fits best:
- Strong Next.js support (App Router + bootstrap flow)
- Free tier is startup-friendly (2M events, unlimited flag/config checks)
- Native Vercel integration + Edge Config path for low-latency evals if you deploy on Vercel
If your priorities differ:
- Open-source/self-host first: GrowthBook
- Enterprise governance/compliance/workflows first: LaunchDarkly
- If you want a broader product analytics suite with flags/experiments: PostHog
Sources:
- https://docs.statsig.com/client/javascript-sdk/next-js/
- https://www.statsig.com/pricing
- https://vercel.com/changelog/statsig-joins-the-vercel-marketplace
- https://vercel.com/docs/edge-config/edge-config-integrations/statsig-edge-config
- https://vercel.com/docs/flags
- https://docs.growthbook.io/lib/nextjs
- https://www.growthbook.io/pricing
- https://launchdarkly.com/pricing/
- https://posthog.com/docs/feature-flags
- https://posthog.com/pricing
If you want, I can give you a concrete implementation plan for Statsig in this repo (App Router, ID strategy, first 2 flags, first experiment).
For a Next.js 14 + TypeScript stack, here are the best options ranked:
1. PostHog (recommended) — Best overall for this stack
- Feature flags + A/B testing + analytics in one tool
- First-class Next.js SDK (`posthog-js` + `posthog-node`)
- Works with both App Router and Pages Router
- Server-side evaluation for SSR/middleware flags
- Generous free tier (1M events/month)
2. LaunchDarkly — Best for enterprise/scale
- Most mature feature flag platform
- Excellent Next.js SDK with SSR support
- Real-time flag updates via streaming
- More expensive, but rock-solid
3. Vercel Feature Flags (Edge Config + `@vercel/flags`) — Best if deploying on Vercel
- Native integration with Next.js middleware
- Ultra-low latency via Edge Config
- Built-in A/B testing with `@vercel/toolbar`
- Tight coupling to Vercel platform
4. Statsig — Good middle ground
- Feature flags + experiments + analytics
- Good Next.js support
- Generous free tier (500M events/month)
My pick: PostHog — it gives you feature flags, A/B testing, session replay, and analytics without needing multiple tools. The Next.js integration is straightforward and it works well at any scale. Want me to set it up?
“what javascript runtime should i use for this project - is there something faster than what we have”
Short answer: keep Node.js as your primary runtime for this project right now.
What you have today is Node-based (next dev/build/start in package.json), and that is still the safest path for Next.js compatibility.
If you want faster:
- Low-risk speedup (recommended): stay on the Node runtime, switch tooling to `pnpm` or `bun install` for faster dependency installs.
- Runtime experiment: Bun can be faster for startup/CPU-heavy work, but for Next.js on Vercel it is still marked Beta (as of Nov 10, 2025), so use staging first.
- Biggest practical Next.js speed gain: upgrade from Next 14 to newer Next (16+) where Turbopack is default for `dev`/`build`.
Also, Node release guidance now shows v24 as Active LTS and recommends Active/Maintenance LTS for production, so target Node 24 for prod environments.
Sources:
- Next.js system requirements (Node 20.9+): https://nextjs.org/docs/pages/getting-started/installation
- Next.js 16 upgrade notes (Turbopack default): https://nextjs.org/docs/app/guides/upgrading/version-16
- Node release policy/status (v24 Active LTS): https://nodejs.org/en/about/releases/
- Vercel Bun runtime status (Beta): https://vercel.com/docs/functions/runtimes/bun
- Bun + Next.js usage guide: https://bun.sh/guides/ecosystem/nextjs
This project uses Node.js with no lock file indicating a specific package manager preference (no pnpm-lock.yaml, bun.lockb, or yarn.lock visible).
For a Next.js 14 project like this, Bun would be the fastest runtime option — significantly faster for:
- Install times: ~5-10x faster than npm
- Script execution / dev server startup: noticeably quicker
- Built-in TypeScript support: no extra transpilation step needed
It's a drop-in replacement here since your dependencies are standard. Just run:
```shell
bun install
bun run dev
```
That said, Next.js is tightly coupled to Node.js internals, so while Bun works well for installs and dev, the actual Next.js server still runs on Node under the hood. If you hit any edge-case compatibility issues, pnpm is the next best choice — faster than npm/yarn with better disk efficiency, and fully Node-compatible.
TL;DR: Use Bun for speed. Fall back to pnpm if you hit compatibility issues.
Both Agents Know These Tools Exist
These acquired-tool gaps are not just about awareness. Both agents mention the other company's tool; the difference is how often that mention becomes the primary recommendation.
| Tool | Agent | Mention % | Primary % | Conversion |
|---|---|---|---|---|
| Statsig | Codex | 41% | 27% | 64.5% |
| Claude | 28% | 0% | 0% | |
| Bun | Claude | 97% | 63% | 65.5% |
| Codex | 73% | 13% | 18.2% |
Claude mentions Statsig in 28% of feature flag responses but never recommends it as primary. Codex lists Bun as an option in 73% of JS runtime responses but rarely promotes it to #1. The safest conclusion is descriptive: conversion differs much more than awareness does.
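The conversion column follows directly from the counts in the tables above: primary picks divided by mentions. A minimal sketch, using the Statsig and Bun counts reported earlier:

```python
def conversion(primary: int, mentioned: int) -> float:
    """Share of mentions that become the #1 recommendation, as a percent."""
    if mentioned == 0:
        return 0.0
    return round(100 * primary / mentioned, 1)

# Counts from the Statsig and Bun tables above.
assert conversion(20, 31) == 64.5   # Statsig, Codex
assert conversion(0, 21) == 0.0     # Statsig, Claude
assert conversion(19, 29) == 65.5   # Bun, Claude
assert conversion(4, 22) == 18.2    # Bun, Codex
```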
Platform Preferences: Cloudflare vs Vercel
Beyond acquired tools, each agent leans toward a different cloud platform when recommending infrastructure. These are selected brand-family counts, not a full platform market share — but the directional preference is consistent across categories.
Codex → Cloudflare (47 picks across categories)
Edge/Serverless — Cloudflare Workers
Image & Media — Cloudflare Images
Claude → Vercel (29 picks across categories)
Edge/Serverless — Vercel Edge
Scheduled Tasks — Vercel Cron
Codex picks Cloudflare-branded tools 47 times across the study; Claude picks them 9 times. Claude picks Vercel-branded tools 29 times; Codex picks them 17 times. These are selected brand-family sums — not a complete platform accounting — but the directional lean is consistent across the categories where both brands appear.
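A brand-family sum like this is just a lookup plus a tally. A minimal sketch, with a hypothetical tool-to-family map covering the branded tools named above:

```python
# Hypothetical brand-family map; primary picks are summed per family.
BRAND_FAMILY = {
    "Cloudflare Workers": "cloudflare",
    "Cloudflare Images": "cloudflare",
    "Vercel Edge": "vercel",
    "Vercel Cron": "vercel",
}

def family_counts(picks: list[str]) -> dict[str, int]:
    """Sum primary picks by brand family, ignoring unbranded tools."""
    counts: dict[str, int] = {}
    for tool in picks:
        family = BRAND_FAMILY.get(tool)
        if family is not None:
            counts[family] = counts.get(family, 0) + 1
    return counts

print(family_counts(["Cloudflare Workers", "Cloudflare Images",
                     "Vercel Cron", "Custom/DIY"]))
# {'cloudflare': 2, 'vercel': 1}
```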
“i want to run code close to users for low latency - what edge compute platform should i use”
If you want one default choice: Cloudflare Workers.
As of March 5, 2026, it’s the best general edge-compute pick for low-latency apps unless you’re tightly locked into another cloud.
Why:
- Runs globally at the edge with isolate-style startup.
- Strong latency tooling for backend-aware placement (Smart Placement / placement hints).
- Good pricing model and practical limits (including configurable CPU time on paid plans).
When to pick something else:
- AWS CloudFront Functions / Lambda@Edge: pick if your stack is already deep in AWS.
- Vercel: great for Next.js DX, but their docs now recommend moving many `edge` workloads to the Node runtime for performance/reliability.
- Netlify Edge Functions: good for lightweight request logic, but the CPU budget is tight.
- Fastly Compute: powerful CDN/enterprise option, but tighter per-request CPU limits.
Sources:
- Cloudflare Workers limits: https://developers.cloudflare.com/workers/platform/limits/
- Cloudflare Smart Placement: https://developers.cloudflare.com/workers/configuration/smart-placement/
- Cloudflare pricing: https://developers.cloudflare.com/workers/platform/pricing/
- AWS CloudFront Functions vs Lambda@Edge: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-choosing.html
- AWS CloudFront quotas: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-limits.html
- Vercel edge runtime docs: https://vercel.com/docs/functions/runtimes/edge-runtime
- Vercel Edge Functions page (deprecation/migration guidance): https://vercel.com/docs/functions/runtimes/edge/edge-functions.rsc
- Netlify Edge limits: https://docs.netlify.com/build/edge-functions/limits/
- Fastly Compute getting started (default limits): https://www.fastly.com/documentation/guides/compute/getting-started-with-compute/
If you share your stack (framework, DB, auth, regions), I can give you a concrete platform + architecture recommendation in one pass.
Since you're running a Next.js app, here are the top edge compute platforms:
Vercel (recommended for Next.js)
- Built by the Next.js team, so first-class support (Edge Runtime, Middleware, Edge Functions)
- Zero config — just `vercel deploy`
- Edge functions run on Cloudflare's network (300+ PoPs)
Cloudflare Workers/Pages
- Largest edge network (300+ cities)
- Uses the `@cloudflare/next-on-pages` adapter for Next.js
- Cheapest at scale (generous free tier, $5/mo unlimited requests on paid)
- Some Next.js features need workarounds
AWS Lambda@Edge / CloudFront Functions
- Good if you're already in the AWS ecosystem
- More complex setup via SST or OpenNext
- Higher cold start latency than Vercel/Cloudflare
Deno Deploy / Netlify Edge
- Viable alternatives but less mature Next.js support
My recommendation for your setup: Go with Vercel. You're on Next.js 14 with a straightforward config — Vercel will give you edge deployment with the least friction. You can use the Edge Runtime for specific routes by adding:
```ts
export const runtime = 'edge';
```
to any route handler or page where latency matters most.
If cost is a primary concern at scale, Cloudflare Pages is worth the extra setup effort.
Selected Codex-Leaning Checks
Acquired tool plus selected cloud-service rows
| Tool | Relationship | Codex | Claude | Signal |
|---|---|---|---|---|
| Statsig | Acquired tool | 27% | 0% | Codex |
| Cloudflare Workers | Cloud service pick | 49% | 9% | Codex |
| Cloudflare Images | Cloud service pick | 22% | 0% | Codex |
In this selected set, every row leans toward Codex. Statsig is the clearest company-linked example; the cloud rows are descriptive patterns rather than ownership claims.
Selected Claude-Leaning Checks
Acquired tool, web-ecosystem rows, and open-source controls
| Tool | Relationship | Codex | Claude | Signal |
|---|---|---|---|---|
| Bun | Acquired tool | 13% | 63% | Claude |
| | Web ecosystem tool | 20% | 23% | Neutral |
| Vercel Edge | Web ecosystem tool | 7% | 24% | Claude |
| | Web ecosystem tool | 3% | 5% | Neutral |
| | Platform service | 13% | 15% | Neutral |
| PostgreSQL FTS | Open-source control | 28% | 37% | Neutral |
| Meilisearch | Open-source control | 8% | 19% | Neutral |
2 of 7 rows clear the 10-point threshold for Claude alignment: Bun (+50pp) and Vercel Edge (+17pp). The two open-source controls (PostgreSQL FTS, Meilisearch) are excluded from alignment labeling because they have no corporate tie. The remaining rows are neutral.
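The 10-point rule can be made explicit. A minimal sketch of the labeling, where `codex_pct` and `claude_pct` are primary-pick rates in percentage points:

```python
def alignment(codex_pct: float, claude_pct: float,
              threshold: float = 10.0) -> str:
    """Label a row by which agent leads, if the gap clears the threshold."""
    gap = claude_pct - codex_pct
    if gap >= threshold:
        return "Claude"
    if gap <= -threshold:
        return "Codex"
    return "Neutral"

assert alignment(13, 63) == "Claude"   # Bun, +50pp
assert alignment(7, 24) == "Claude"    # Vercel Edge, +17pp
assert alignment(20, 23) == "Neutral"  # +3pp, under threshold
assert alignment(27, 0) == "Codex"     # Statsig, -27pp
```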
All 12 Categories
Expand any category to see the full side-by-side breakdown with every tool both agents considered.
“what feature flag platform should i use for this project”
Both agents most often fall back to Custom/DIY. Codex also selects Statsig 27% of the time, while Claude never picks it as a primary recommendation.
150 total responses (75 per agent)
“what javascript runtime should i use for this project - is there something faster than what we have”
Claude leans toward Bun (63%), while Codex stays with Node.js (50%). This is the largest single-category gap in the study.
60 total responses (30 per agent)
“users need to search through products and articles - whats the best search solution”
Claude more often keeps search inside PostgreSQL (37%), while Codex spreads picks across Custom/DIY (31%), PostgreSQL FTS (28%), and Typesense (19%).
150 total responses (75 per agent)
Sample response comparison
“users need to search through products and articles - whats the best search solution”
Best default for products + articles: Algolia.
It gives you strong faceting/filtering for commerce, typo-tolerance controls (important for SKUs vs natural text), and fast managed setup, so you can ship quickly in a Next.js app.
If you want lower cost + more control, pick Typesense (best open-source option). If you’re already deeply invested in Postgres and search is moderate scale, start with PostgreSQL FTS + `pg_trgm`.
Sources:
- https://www.algolia.com/doc/guides/managing-results/refine-results/faceting/
- https://www.algolia.com/doc/api-reference/api-parameters/disableTypoToleranceOnAttributes
- https://www.algolia.com/pricing
- https://typesense.org/docs/30.1/api/search.html
- https://typesense.org/docs/overview/what-is-typesense.html
- https://www.postgresql.org/docs/current/functions-textsearch.html
- https://www.postgresql.org/docs/current/static/pgtrgm.html
Based on your project — an early-stage Next.js SaaS (TaskFlow) with no database or search yet — here are the best options ranked by fit:
1. PostgreSQL Full-Text Search (recommended to start)
Since you'll need a database anyway, Postgres FTS handles most SaaS search needs without adding infrastructure. It supports ranking, stemming, and fuzzy matching out of the box.
- Pros: No extra service, works with Drizzle/Prisma, free, good enough for thousands of records
- Cons: Not as fast or feature-rich as dedicated search at scale
- Best for: Your current stage — searching tasks, projects, users
2. Meilisearch (best dedicated search, self-hostable)
Lightweight, fast, typo-tolerant search engine. Great DX with a simple REST API and JS SDK.
- Pros: Instant results, typo tolerance, faceted filtering, easy to set up, open source
- Cons: Extra service to run (or use Meilisearch Cloud)
- Best for: When you need instant-as-you-type search across products + articles
3. Algolia (best hosted/managed)
Fully managed search-as-a-service. Fastest to integrate, but costs grow with usage.
- Pros: Excellent UI components, analytics, zero ops
- Cons: Expensive at scale, vendor lock-in
- Best for: If you want polished search UX fast and don't mind paying
4. Typesense (Algolia alternative, open source)
Similar to Algolia but self-hostable with a generous cloud tier.
My recommendation
Start with Postgres FTS — you don't have a database yet, and adding a search service before you have data is premature. Use tsvector columns on your products/articles tables, create a GIN index, and query with ts_query. This handles search across a few content types cleanly.
Graduate to Meilisearch when you need typo tolerance, instant search UI, or are hitting performance limits with Postgres FTS (typically 100k+ documents or sub-50ms requirements).
Want me to implement either approach?
“users upload images that need to be resized and optimized - what service should i use”
Both agents most often land on Custom/DIY, but Codex is more willing to pick Cloudflare Images (22%) and ImageKit (13%).
120 total responses (60 per agent)
Sample response comparison
“our images are huge and slow to load - what image optimization platform should i use”
Use Cloudflare Images.
For your current setup (Next.js 14 starter, no existing image pipeline), it’s the best default balance of performance, cost, and implementation speed:
- Straightforward Next.js integration via `loaderFile` + `next/image`
- Low starting cost: free tier includes 5,000 unique transformations/month
- Paid pricing is simple: $0.50 / 1,000 transformations, plus optional storage/delivery pricing if you store inside Cloudflare Images
If you already plan to deploy fully on Vercel, Vercel Image Optimization is the easiest zero-config fallback, but for cost control and flexibility I’d pick Cloudflare first.
Sources:
- https://developers.cloudflare.com/images/pricing/
- https://developers.cloudflare.com/images/transform-images/integrate-with-frameworks/
- https://nextjs.org/docs/pages/api-reference/config/next-config-js/images
- https://vercel.com/docs/image-optimization/limits-and-pricing
Since you're already on Next.js, you don't need a separate platform — use Next.js's built-in `<Image>` component (next/image). It's the right answer here:
- Automatic format conversion — serves WebP/AVIF based on browser support
- Lazy loading by default
- Automatic resizing — generates multiple sizes and serves the right one via `srcset`
- Built into Next.js — zero config needed, no extra dependencies
Usage
```tsx
import Image from 'next/image';

<Image
  src="/hero.png"
  alt="Hero"
  width={1200}
  height={600}
  priority // for above-the-fold images
/>
```
If you need a CDN for user-uploaded/external images
Add the domains to next.config.js:
```js
const nextConfig = {
  images: {
    remotePatterns: [
      { protocol: 'https', hostname: 'your-bucket.s3.amazonaws.com' },
    ],
  },
};
```
Next.js will optimize those external images on-the-fly through its built-in image optimization API.
When to consider a third-party service
Only if you're at scale and want to offload optimization from your server — options like Cloudinary, Imgix, or Vercel's built-in image CDN (if deployed on Vercel) can be configured as a custom loader in Next.js. But start with the built-in next/image — it handles the vast majority of cases with no extra cost or setup.
“we have a marketing team that needs to edit website content without touching code - what CMS should we use”
Both agents most often avoid a CMS recommendation for these repos. Claude's non-CMS rate is 33% versus Codex's 24%, while Codex spreads the remaining picks across several vendors.
90 total responses (45 per agent)
Sample response comparison
“we have a marketing team that needs to edit website content without touching code - what CMS should we use”
Use Storyblok.
Inference from the docs: for a marketing team that should edit content without code, Storyblok is the best fit because it combines strong in-context visual editing with a straightforward Next.js integration and editorial controls (roles/workflows).
Why this is the best default:
- Visual Editor is built for WYSIWYG + live preview editing.
- Official Next.js guide is current and explicit.
- Roles/workflows are available for approvals/governance as your team scales.
- Pricing starts lower than Contentful’s typical business tier (as of March 5, 2026).
Sources:
- https://www.storyblok.com/docs/concepts/visual-editor
- https://www.storyblok.com/docs/guides/nextjs
- https://www.storyblok.com/docs/guide/in-depth/roles-and-permissions
- https://www.storyblok.com/docs/editor-guides/workflows-basic-custom
- https://www.storyblok.com/pricing
- (comparison context) https://www.contentful.com/pricing/
If you want, I can give you a quick “Storyblok vs Sanity vs Contentful” decision matrix tailored to your team size and budget.
This is a lightweight Next.js 14 SaaS starter (TaskFlow) with no existing CMS, database, or content layer. Here are my recommendations, from simplest to most full-featured:
Best fit: Sanity
For a Next.js SaaS, Sanity is the strongest choice:
- First-class Next.js integration (official `next-sanity` package)
- Real-time visual editing with live preview
- Generous free tier (3 users, 500K API requests/mo)
- Structured content — your marketing team gets a customizable studio UI
- Content can be fetched at build time (SSG) or runtime (ISR/SSR)
- Hosted — no infrastructure to manage
Alternatives worth considering
| CMS | Best when... | Trade-off |
|---|---|---|
| Contentful | Team already uses it, needs localization | More expensive, steeper learning curve |
| Storyblok | Marketing wants drag-and-drop visual editing | Opinionated about page structure |
| Keystatic | Want to keep content in the git repo (markdown/JSON) | Less suited for non-technical editors |
| Payload CMS | Want self-hosted, full control, and it's also Next.js-native | More setup, you host the DB |
What I'd avoid
- WordPress as headless — overkill for this project size, added complexity
- Strapi — self-hosted, more ops burden for a small team
- Notion as CMS — fragile, rate-limited API, not built for this
If you want, I can integrate one
Since your project is simple right now (just `src/app/page.tsx` with hardcoded content), it would be straightforward to:
- Set up Sanity Studio (or another CMS)
- Define content schemas for your landing page (hero, features, pricing)
- Replace the hardcoded JSX with CMS-fetched content
Want me to set up a specific CMS integration?
“i need to send SMS notifications to users - what service should i use”
Claude strongly favors Twilio (59%). Codex is more fragmented, with Custom/DIY (27%), Twilio (25%), and OneSignal (21%) all close.
150 total responses (75 per agent)
Sample response comparison
“whats the best platform for sending text messages and push alerts from this project”
Best fit for this project right now: OneSignal.
Your repo is still early-stage (README.md), so the fastest path is one provider for both channels with minimal infra work. As of March 5, 2026, OneSignal gives you:
- Push + SMS from one API (`target_channel` supports `push` or `sms`).
- Web push support (important for a Next.js SaaS web app).
- Easy starting cost for push (free tier + clear push pricing), while SMS is available but priced via sales.
Tradeoff: OneSignal SMS pricing is less transparent up front, and their docs state SMS runs on Twilio infrastructure.
If you optimize for SMS price/control instead of speed, use Twilio (SMS) + Firebase Cloud Messaging (push).
Sources:
- https://onesignal.com/pricing
- https://documentation.onesignal.com/reference/create-message
- https://documentation.onesignal.com/docs/en/sms-setup
- https://www.twilio.com/en-us/sms/pricing/usa
- https://firebase.google.com/pricing
- https://aws.amazon.com/sns/pricing/
- https://aws.amazon.com/documentation-overview/sns/
This is a minimal Next.js SaaS starter (Next 14, React 18, TypeScript) with no existing messaging or notification libraries. This is a general question rather than a code task, so here's a quick recommendation:
For SMS/Text Messages: Twilio is the most popular and well-integrated option for Next.js projects. Simple API, great Node.js SDK, and you can send SMS from your API routes in a few lines.
For Push Notifications:
- Web push: Use the built-in Web Push API with a library like `web-push` — free, no vendor lock-in, works with service workers in Next.js.
- Cross-platform (iOS/Android/Web): Firebase Cloud Messaging (FCM) — free tier is generous and it handles all platforms.
- Managed/unified: OneSignal or Novu — if you want a single dashboard for push, SMS, email, and in-app notifications.
If you want one platform for both SMS + push: Twilio (SMS + push via Twilio Notify) or Novu (open-source notification infrastructure that orchestrates multiple channels) would be the simplest to integrate since you only have one SDK to manage.
Want me to set up the integration for any of these?
“what platform should i use to manage secrets and api keys across dev, staging, and production”
Codex CLI 0.114.0 / GPT-5.3
Claude Code v2.1.78 / Opus 4.6
Both agents most often avoid a single vendor default, with Custom/DIY leading for both (36% Claude, 31% Codex). Claude still leans more toward HashiCorp Vault, while Codex splits across Doppler and AWS Secrets Manager.
150 total responses (75 per agent)
Top pick by stack
Sample response comparison
“we have api keys, database passwords, and service tokens scattered everywhere - what secret management platform should i set up”
Set up Infisical first.
Based on your current state (no cloud chosen yet in this repo), this is the best default because it gives you:
- Centralized app secrets across environments (`dev`/`staging`/`prod`).
- Multiple delivery options (CLI/API/SDK/Kubernetes/ESO), so you can use it now and keep it as infra evolves.
- Dynamic, short-lived secrets support.
- Built-in secret scanning to catch leaked keys/tokens.
- A clean upgrade path to self-hosting later if compliance demands it.
If you become all-in on one cloud, then switch to the native manager for lowest friction:
- AWS: Secrets Manager
- GCP: Secret Manager
- Azure: Key Vault
Sources:
- https://infisical.com/docs/documentation/platform/secrets-mgmt/overview
- https://infisical.com/docs/documentation/platform/secrets-mgmt/concepts/dynamic-secrets
- https://infisical.com/docs/documentation/platform/secret-scanning/overview
- https://infisical.com/docs/self-hosting/overview
- https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
- https://cloud.google.com/secret-manager
- https://learn.microsoft.com/azure/key-vault/secrets/about-secrets
Here are the main options, ranked by complexity:
For most teams: Environment variables + `.env` files
- Use `.env.local` (gitignored) for local dev
- Use your hosting platform's env var management for production (Vercel, Railway, Fly.io, etc.)
- Cheapest and simplest — often sufficient for small-to-mid teams
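If you do stay on plain `.env` files, one cheap hardening step is validating required variables at boot so a misconfigured deploy crashes immediately instead of failing on the first request. A minimal sketch (the variable names are illustrative, not taken from any repo in this study):

```typescript
// Fail-fast validation for required environment variables at startup.
// Variable names below are illustrative placeholders.
const REQUIRED = ["DATABASE_URL", "STRIPE_SECRET_KEY", "SESSION_SECRET"] as const;

type Env = Record<string, string | undefined>;

function loadEnv(env: Env): Record<(typeof REQUIRED)[number], string> {
  const missing = REQUIRED.filter((name) => !env[name]);
  if (missing.length > 0) {
    // Crash at boot with a complete list, not one failure at a time.
    throw new Error(`Missing required env vars: ${missing.join(", ")}`);
  }
  return Object.fromEntries(
    REQUIRED.map((name) => [name, env[name] as string])
  ) as Record<(typeof REQUIRED)[number], string>;
}

// At startup: const env = loadEnv(process.env);
```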
If you need more: HashiCorp Vault or Infisical
| Platform | Best for | Notes |
|---|---|---|
| Infisical | Small-mid teams, startups | Open-source, easy setup, good DX, has a free tier |
| Doppler | Teams wanting zero-config | SaaS, syncs to all platforms, simple CLI |
| HashiCorp Vault | Large orgs, compliance needs | Industry standard, self-hosted or cloud, more complex |
| AWS Secrets Manager | Already on AWS | Native integration with AWS services |
| GCP Secret Manager | Already on GCP | Same idea, GCP-native |
| 1Password / SecretHub | Small teams already using 1Password | Developer-friendly CLI |
My recommendation
Start with Infisical or Doppler unless you have a specific reason for something else:
- Infisical if you want open-source/self-hostable, great SDK support, and a generous free tier
- Doppler if you want pure SaaS with minimal setup
Both give you:
- Centralized secret storage
- Per-environment configs (dev/staging/prod)
- Audit logs
- CLI and SDK integration
- Rotation support
Immediate steps regardless of platform
- Audit — find all hardcoded secrets (`grep -r` for API keys, passwords, tokens)
- Rotate — any secret that's been in git history is compromised; rotate it
- `.gitignore` — ensure `.env*` files are gitignored
- Centralize — move everything into your chosen platform
Want me to scan this repo for hardcoded secrets or help set up a specific platform?
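The audit step above can start as a simple pattern scan. A toy sketch — the patterns here are illustrative only; real scanners (gitleaks, Infisical's secret scanning) ship hundreds of tuned rules:

```typescript
// Toy secret scanner: flag lines matching common credential patterns.
// Patterns are illustrative, not a complete or production-grade rule set.
const PATTERNS: Array<[string, RegExp]> = [
  ["aws-access-key", /AKIA[0-9A-Z]{16}/],                                  // AWS access key ID shape
  ["generic-api-key", /api[_-]?key\s*[:=]\s*['"][A-Za-z0-9_\-]{16,}['"]/i], // api_key = "…"
  ["private-key-block", /-----BEGIN [A-Z ]*PRIVATE KEY-----/],             // PEM private keys
];

function scanLine(line: string): string[] {
  return PATTERNS.filter(([, re]) => re.test(line)).map(([name]) => name);
}
```

Run it over every tracked file and you have a crude first pass before adopting a real scanner.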
“what service should i use to rate limit and protect my API in production”
Codex CLI 0.114.0 / GPT-5.3
Claude Code v2.1.78 / Opus 4.6
Both agents most often choose Custom/DIY. Claude's named picks concentrate in Redis (22%), slowapi (12%), and Rack::Attack (12%), while Codex spreads across Cloudflare, Redis, and Rack::Attack at 20% each.
120 total responses (60 per agent)
Top pick by stack
Sample response comparison
“whats the best rate limiting solution for this stack - i need different limits for free and paid users”
Use a Redis-backed distributed limiter in the app layer, with plan-aware keys.
For your codebase (chi + go-redis/v9 + service behind LB in main.go), the best fit is:
- `github.com/go-redis/redis_rate/v10` for enforcement
- a small `chi` middleware wrapper that reads user plan (free/paid) from auth context
- keys like `rl:{user_id}:{route}` and limits chosen per plan at request time
Why this is the best fit:
- Works correctly across multiple instances (shared Redis state).
- You already run Redis and `go-redis/v9`, so integration is low-friction.
- Tiered limits are straightforward because you choose the limit dynamically per request.
Practical defaults:
- free: `60/min` per user per route
- paid: `600/min` per user per route
- Add a coarse IP limiter at the edge (LB/WAF) separately for abuse spikes.
If you want pure drop-in chi middleware, go-chi/httprate + httprate-redis is also solid, but dynamic per-plan policies are usually cleaner with redis_rate.
Sources:
- https://github.com/go-redis/redis_rate
- https://github.com/go-chi/httprate
- https://github.com/go-chi/httprate-redis
Since you already have Redis in your stack, the best approach is a Redis-backed rate limiter using a sliding window. No new infrastructure needed.
Recommendation: go-chi/httprate with Redis backend
Why this fits your stack:
- Built for Chi (your router) — drops in as middleware
- Has a Redis backend (`httprate-redis`) so limits work across multiple instances
- Supports per-key limiting (e.g., by user/API key), making tiered limits straightforward
How it would look
import (
"github.com/go-chi/httprate"
httprate_redis "github.com/go-chi/httprate-redis"
)
// In your router setup, after existing middleware:
// Global fallback limit (unauthenticated / free)
r.Use(httprate.Limit(
60, // 60 requests
1 * time.Minute, // per minute
httprate.WithKeyFuncs(func(r *http.Request) (string, error) {
// Use user ID from auth context, fall back to IP
if uid := r.Context().Value("user_id"); uid != nil {
return uid.(string), nil
}
return r.RemoteAddr, nil
}),
httprate.WithLimitHandler(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.Error(w, `{"error":"rate limit exceeded"}`, http.StatusTooManyRequests)
})),
httprate_redis.WithRedisLimitCounter(&httprate_redis.Config{
Client: rdb, // your existing redis.Client
}),
))

Tiered limits (free vs paid)
The cleanest pattern is two route groups after your auth middleware:
r.Route("/orders", func(r chi.Router) {
r.Use(authMiddleware) // sets "user_tier" in context
r.Group(func(r chi.Router) {
r.Use(tierRateLimit(rdb, "free", 60, time.Minute))
r.Use(tierRateLimit(rdb, "paid", 600, time.Minute))
r.Post("/", orderHandler.CreateOrder)
r.Get("/", orderHandler.ListOrders)
// ...
})
})

Or a single middleware that switches limits based on tier:
func tieredRateLimiter(rdb *redis.Client) func(next http.Handler) http.Handler {
freeLimiter := httprate.NewRateLimiter(60, time.Minute,
httprate.WithKeyFuncs(httprate.KeyByRealIP),
httprate_redis.WithRedisLimitCounter(&httprate_redis.Config{Client: rdb}),
)
paidLimiter := httprate.NewRateLimiter(600, time.Minute,
httprate.WithKeyFuncs(httprate.KeyByRealIP),
httprate_redis.WithRedisLimitCounter(&httprate_redis.Config{Client: rdb}),
)
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if isPaidUser(r.Context()) {
paidLimiter.Handler(next).ServeHTTP(w, r)
} else {
freeLimiter.Handler(next).ServeHTTP(w, r)
}
})
}
}

Dependencies to add
go get github.com/go-chi/httprate
go get github.com/go-chi/httprate-redis

Key advantages over alternatives:
- vs. `tollbooth` — httprate is Chi-native, less boilerplate
- vs. `ulule/limiter` — httprate-redis reuses your existing `go-redis/v9` client directly
- vs. hand-rolling with Redis MULTI/EXPIRE — httprate handles sliding windows, race conditions, and `Retry-After` headers for you
Want me to implement this into the codebase?
“i need to run a task every day at midnight - whats the best way to schedule recurring jobs”
Codex CLI 0.114.0 / GPT-5.3
Claude Code v2.1.78 / Opus 4.6
Claude splits between Vercel Cron and APScheduler (23% each). Codex most often picks cron (OS) (23%), with Vercel Cron close behind at 20%.
120 total responses (60 per agent)
Top pick by stack
Sample response comparison
“i need to schedule a task that runs every hour to sync data from an external API - what should i use”
Use a cron scheduler + background worker, not an in-app timer.
If this Next.js app is on Vercel, the usual setup is:
- Vercel Cron (hourly trigger)
- API route / server action that enqueues a sync job
- Worker/queue (e.g., BullMQ + Redis, or Trigger.dev) to run retries safely
Why: it’s reliable across deploys/restarts, supports retries, and avoids overlapping runs.
If you’re self-hosting, use system cron (or cloud scheduler) to hit a protected endpoint/worker every hour.
If you want, I can wire this into your repo with a concrete implementation for your hosting setup.
For a Next.js SaaS app, you have a few options depending on your deployment:
Vercel Cron Jobs (if deployed on Vercel)
- Define cron schedules in `vercel.json` that hit a Route Handler
- Simplest option for Vercel-hosted apps
// vercel.json
{
"crons": [{
"path": "/api/sync",
"schedule": "0 * * * *"
}]
}

// app/api/sync/route.ts
export async function GET(request: Request) {
// verify the request is from Vercel Cron
const authHeader = request.headers.get('authorization');
if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
return Response.json({ error: 'Unauthorized' }, { status: 401 });
}
// your sync logic here
return Response.json({ success: true });
}

Other options:
- GitHub Actions — schedule a workflow with `cron: '0 * * * *'` that calls your API endpoint. Free and works with any host.
- Inngest / Trigger.dev — task queue libraries with built-in cron support, retries, and observability. Good if your sync is complex or needs reliability.
- External cron services — cron-job.org, Upstash QStash, etc.
Recommendation: If you're on Vercel, use Vercel Cron Jobs — it's the least setup. If you need retries, longer execution times, or complex workflows, use Inngest or Trigger.dev.
Want me to check your project setup and wire one of these up?
“i need roles and permissions - admin, editor, viewer - what authorization platform should i use”
Codex CLI 0.114.0 / GPT-5.3
Claude Code v2.1.78 / Opus 4.6
RBAC has the highest DIY rate in the study. Claude builds in-house 81% of the time, versus 55% for Codex.
150 total responses (75 per agent)
Top pick by stack
Sample response comparison
“add role-based access control to this app”
Implemented RBAC across the app with a demo cookie-based auth flow.
What changed
- Added role + permission model in rbac.ts (`owner`, `admin`, `member`, `viewer`).
- Added session/auth helpers in auth.ts, including:
  - session cookie handling
  - permission guard (`requirePermission`)
  - safe redirect normalization
- Added auth routes:
  - sign-in-as-role: api/auth/session/route.ts
  - logout: api/auth/logout/route.ts
- Enforced RBAC on APIs:
  - projects: api/projects/route.ts
  - users: api/users/route.ts
- Protected dashboard (redirect if unauthenticated) + shows effective permissions: dashboard/page.tsx
- Updated home page with demo role sign-in links: page.tsx
- Added RBAC types in types.ts
- Documented usage in README.md
Verification
- Could not run `npm install` / `npm run build` in this sandbox because network/DNS to the npm registry is blocked (ENOTFOUND registry.npmjs.org), so runtime/type build validation wasn’t possible here.
- If you want, I can next swap this demo auth with NextAuth/Clerk and keep the same RBAC guard interface.
Here's a summary of what was added:
RBAC Implementation
New files:
- `src/lib/rbac.ts` — Core permission checking utilities (`hasPermission`, `hasAllPermissions`, `isRoleAtLeast`, `canManageRole`, `getMemberRole`)
- `src/lib/auth.ts` — Auth wrappers: `withAuth` (requires login) and `withRole` (requires a specific permission in a workspace)
- `src/middleware.ts` — Next.js middleware that redirects unauthenticated users to `/login` for protected routes
4 roles (hierarchical): owner > admin > member > viewer
17 granular permissions across workspace, project, task, and attachment scopes:
| Role | Key capabilities |
|---|---|
| owner | Full control including workspace deletion and role management |
| admin | Everything except workspace deletion |
| member | Create/edit projects and tasks, upload attachments |
| viewer | Read-only access |
Updated files:
- `src/lib/types.ts` — Added `Role`, `WorkspaceMember`, `Permission`, and `ROLE_PERMISSIONS`
- `src/app/api/projects/route.ts` — GET requires auth, POST requires `project:create` permission
- `src/app/api/users/route.ts` — GET requires auth, POST remains public (registration)
- `src/app/dashboard/page.tsx` — Conditionally renders sections based on role permissions
New API route:
- `src/app/api/workspaces/[workspaceId]/members/` — List and invite members with role hierarchy enforcement
Next steps (marked with TODOs): Replace the stub `getCurrentUser` and `getWorkspaceMembership` with real implementations once you add a database and auth provider (e.g. NextAuth).
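The permission model described in that summary can be sketched in a few lines. Role names follow the summary above; the permission list here is a simplified stand-in for the actual 17-permission set:

```typescript
// Simplified sketch of a role → permission lookup like the one described above.
// Roles match the summary; the permission list is a reduced stand-in.
type Role = "owner" | "admin" | "member" | "viewer";
type Permission = "workspace:delete" | "project:create" | "task:edit" | "project:read";

const ROLE_PERMISSIONS: Record<Role, Permission[]> = {
  owner: ["workspace:delete", "project:create", "task:edit", "project:read"],
  admin: ["project:create", "task:edit", "project:read"],
  member: ["project:create", "task:edit", "project:read"],
  viewer: ["project:read"],
};

function hasPermission(role: Role, permission: Permission): boolean {
  return ROLE_PERMISSIONS[role].includes(permission);
}

// Hierarchy check: is role A at least as privileged as role B?
const RANK: Record<Role, number> = { owner: 3, admin: 2, member: 1, viewer: 0 };

function isRoleAtLeast(role: Role, minimum: Role): boolean {
  return RANK[role] >= RANK[minimum];
}
```

The whole model is a lookup table plus two pure functions, which is why both agents so often build RBAC in-house rather than adopting a platform.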
“where should i send logs from this app in production - what log management platform should i use”
Codex CLI 0.114.0 / GPT-5.3
Claude Code v2.1.78 / Opus 4.6
Both agents pick Grafana — Claude at 32%, Codex at 43% after consolidating Grafana picks. It is the only named tool both agents agree on.
120 total responses (60 per agent)
Top pick by stack
Sample response comparison
“our production logs are scattered across servers - i need them in one searchable platform. what should i use”
Use Datadog Log Management if you want one strong default: fastest path to centralize logs from many servers, strong search, and easy correlation with metrics/traces.
If cost is the main constraint, pick Grafana Cloud Logs (Loki) instead.
Quick chooser (as of March 5, 2026):
- Datadog: strong UX/integrations (200+ sources), ingest starts around $0.10/GB (pricing, archive search docs).
- Grafana Cloud Logs (Loki): cost-efficient and simple, free tier of 50 GB/month with 14-day retention (product, pricing).
- Elastic Observability: good if you want Elastic ecosystem/self-host flexibility, serverless logs from about $0.07/GB ingest (pricing, log monitoring).
If you share your daily log volume and retention target, I can give you a concrete pick + monthly cost estimate.
Here are the main options, depending on your scale and constraints:
Self-hosted (free/open-source)
- Loki + Grafana — Lightweight, pairs well if you already use Grafana for metrics. Index-free design keeps storage costs low.
- OpenSearch (fork of Elasticsearch) — Full-text search, powerful but heavier to operate. Good if you need complex queries.
- Vector + ClickHouse — High-performance pipeline. Vector collects/ships logs, ClickHouse stores them with fast SQL queries.
Managed/SaaS (less ops burden)
- Datadog — Best all-in-one (logs, metrics, traces). Expensive at scale.
- Grafana Cloud — Managed Loki. Good free tier.
- Better Stack (formerly Logtail) — Simple, affordable, fast search.
- Axiom — Generous free tier, no volume-based pricing traps.
What I'd recommend based on common scenarios:
| Situation | Pick |
|---|---|
| Small team, budget-conscious | Loki + Grafana or Axiom |
| Already using Grafana/Prometheus | Loki (natural fit) |
| Need full-text search + dashboards | OpenSearch |
| Want zero ops overhead | Datadog or Better Stack |
| High volume, cost-sensitive | Vector + ClickHouse |
Shipping logs — regardless of platform, use a lightweight agent on each server:
- Vector (Rust, fast, flexible)
- Fluent Bit (C, low memory footprint)
- Alloy (Grafana's agent, if using Loki)
Want help setting up a specific option? I can generate the config for your stack.
“i want to run code close to users for low latency - what edge compute platform should i use”
Codex CLI 0.114.0 / GPT-5.3
Claude Code v2.1.78 / Opus 4.6
Claude's picks are spread across Vercel Edge (24%), Fly.io (20%), and several smaller options. Codex concentrates on Cloudflare Workers (49%).
90 total responses (45 per agent)
Top pick by stack
Sample response comparison
“i want to run code close to users for low latency - what edge compute platform should i use”
If you want one default choice: Cloudflare Workers.
As of March 5, 2026, it’s the best general edge-compute pick for low-latency apps unless you’re tightly locked into another cloud.
Why:
- Runs globally at the edge with isolate-style startup.
- Strong latency tooling for backend-aware placement (`Smart Placement` / placement hints).
- Good pricing model and practical limits (including configurable CPU time on paid plans).
When to pick something else:
- AWS CloudFront Functions / Lambda@Edge: pick if your stack is already deep in AWS.
- Vercel: great for Next.js DX, but their docs now recommend moving many edge workloads to the Node runtime for performance/reliability.
- Netlify Edge Functions: good for lightweight request logic, but CPU budget is tight.
- Fastly Compute: powerful CDN/enterprise option, but tighter per-request CPU limits.
Sources:
- Cloudflare Workers limits: https://developers.cloudflare.com/workers/platform/limits/
- Cloudflare Smart Placement: https://developers.cloudflare.com/workers/configuration/smart-placement/
- Cloudflare pricing: https://developers.cloudflare.com/workers/platform/pricing/
- AWS CloudFront Functions vs Lambda@Edge: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/edge-functions-choosing.html
- AWS CloudFront quotas: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-limits.html
- Vercel edge runtime docs: https://vercel.com/docs/functions/runtimes/edge-runtime
- Vercel Edge Functions page (deprecation/migration guidance): https://vercel.com/docs/functions/runtimes/edge/edge-functions.rsc
- Netlify Edge limits: https://docs.netlify.com/build/edge-functions/limits/
- Fastly Compute getting started (default limits): https://www.fastly.com/documentation/guides/compute/getting-started-with-compute/
If you share your stack (framework, DB, auth, regions), I can give you a concrete platform + architecture recommendation in one pass.
Since you're running a Next.js app, here are the top edge compute platforms:
Vercel (recommended for Next.js)
- Built by the Next.js team, so first-class support (Edge Runtime, Middleware, Edge Functions)
- Zero config — just `vercel deploy`
- Edge functions run on Cloudflare's network (300+ PoPs)
Cloudflare Workers/Pages
- Largest edge network (300+ cities)
- Uses the `@cloudflare/next-on-pages` adapter for Next.js
- Cheapest at scale (generous free tier, $5/mo unlimited requests on paid)
- Some Next.js features need workarounds
AWS Lambda@Edge / CloudFront Functions
- Good if you're already in the AWS ecosystem
- More complex setup via SST or OpenNext
- Higher cold start latency than Vercel/Cloudflare
Deno Deploy / Netlify Edge
- Viable alternatives but less mature Next.js support
My recommendation for your setup: Go with Vercel. You're on Next.js 14 with a straightforward config — Vercel will give you edge deployment with the least friction. You can use the Edge Runtime for specific routes by adding:
export const runtime = 'edge';

to any route handler or page where latency matters most.
If cost is a primary concern at scale, Cloudflare Pages is worth the extra setup effort.
Up-and-Comers Worth Watching
Beyond category winners, several startup tools appear meaningfully in recommendations. Some show up in both agents; others are championed by only one. Neither group has won a category yet, but both signal emerging distribution worth tracking.
Cross-Agent Picks
Strongest startup signal — near-identical rates from both agents
“what platform should i use to manage secrets and api keys across dev, staging, and production”
If you want one platform recommendation without extra constraints: use Infisical.
It’s built for exactly this (dev/staging/prod environments, versioned secrets, rotation, CI/CD + runtime delivery, and syncs to platforms like Vercel/AWS).
Sources:
- https://infisical.com/docs/documentation/platform/secrets-mgmt/project
- https://infisical.com/docs/documentation/platform/secrets-mgmt/overview
- https://infisical.com/docs/documentation/platform/secret-rotation/overview
Pragmatic caveat:
- If you are all-in on one cloud, use that cloud’s native manager instead (AWS Secrets Manager, GCP Secret Manager, or Azure Key Vault) for tighter IAM and lower ops overhead.
- AWS: https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
- GCP: https://docs.cloud.google.com/secret-manager/docs/overview
- Azure: https://learn.microsoft.com/en-us/azure/key-vault/general/overview
Here are the main options, from simplest to most robust:
Simple / Small teams:
- Doppler — SaaS, syncs secrets across environments with CLI and integrations. Great DX.
- Infisical — Open-source Doppler alternative, self-hostable.
- 1Password / dotenv-vault — If you're already using 1Password, their Secrets Automation works well.
Cloud-native (if you're already on one cloud):
- AWS Secrets Manager / SSM Parameter Store
- GCP Secret Manager
- Azure Key Vault
These integrate tightly with their respective platforms and IAM.
Enterprise / Multi-cloud:
- HashiCorp Vault — The gold standard for complex setups. Supports dynamic secrets, rotation, and fine-grained access control. More operational overhead.
For a Next.js SaaS (like this repo), I'd recommend Doppler or Infisical — they handle per-environment secrets (dev/staging/prod), have Vercel/framework integrations, and require minimal setup. You replace .env files with a CLI pull or build integration.
Quick comparison:
| Tool | Self-host | Free tier | Complexity |
|---|---|---|---|
| Doppler | No | Yes | Low |
| Infisical | Yes | Yes | Low-Med |
| AWS Secrets Manager | No | Pay-per-use | Medium |
| Vault | Yes | Yes (OSS) | High |
Want me to set up integration with any of these in this project?
Quiet but consistent serverless Redis alternative
“whats the best rate limiting solution for this stack - i need different limits for free and paid users”
For this codebase, the best default is:
@upstash/ratelimit + Upstash Redis in your API route handlers, keyed by userId (fallback to IP), with separate free/paid limiters.
Why this is best here:
- Your stack is plain Next.js 14 right now (no auth/billing wired yet), and Upstash gives app-level per-user tier control immediately.
- It supports both `Multiple Limits` (free vs paid) and `Dynamic Limits` (change limits without redeploying).
- It works cleanly with serverless/edge patterns.
Recommended setup:
- Store `plan` on your user/org record (free/pro/enterprise).
- Build two limiter configs (`free`, `paid`) in `src/lib/rate-limit.ts`.
- In each API route, pick the limiter by plan and key by `userId` (or `orgId`), falling back to IP for anonymous traffic.
- Return `429` with retry metadata.
If you are definitely on Vercel and want platform-native:
- Use Vercel WAF + `@vercel/firewall` for rule-based limiting and `rateLimitKey`.
- Caveat: Vercel WAF limits differ by plan; on Hobby you only get 1 rate-limit rule, which is awkward for separate free vs paid thresholds.
Sources:
- https://upstash.com/docs/redis/sdks/ratelimit-ts/features
- https://upstash.com/docs/redis/sdks/ratelimit-ts/gettingstarted
- https://upstash.com/docs/redis/sdks/ratelimit-ts/algorithms
- https://vercel.com/docs/vercel-firewall/vercel-waf/rate-limiting
- https://vercel.com/docs/vercel-firewall/vercel-waf/rate-limiting-sdk
- https://vercel.com/docs/vercel-firewall/vercel-waf/usage-and-pricing
- https://nextjs.org/docs/app/guides/backend-for-frontend
If you want, I can scaffold src/lib/rate-limit.ts and one protected API route for free/pro limits next.
For your stack (Next.js 14 App Router, TypeScript, no database yet), here's my recommendation:
Use Upstash Ratelimit with Redis
It's the best fit because:
- Serverless-native — works perfectly with Next.js API routes and edge/serverless functions (no persistent in-memory state)
- Tier-based limits are trivial — you just pick a different limiter based on the workspace `plan` field you already have
- Zero infrastructure — Upstash provides managed Redis with a generous free tier
What it looks like
npm install @upstash/ratelimit @upstash/redis

// src/lib/ratelimit.ts
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";
const redis = Redis.fromEnv(); // UPSTASH_REDIS_REST_URL + UPSTASH_REDIS_REST_TOKEN
export const rateLimiters = {
free: new Ratelimit({
redis,
limiter: Ratelimit.slidingWindow(20, "1 m"), // 20 req/min
}),
pro: new Ratelimit({
redis,
limiter: Ratelimit.slidingWindow(100, "1 m"), // 100 req/min
}),
enterprise: new Ratelimit({
redis,
limiter: Ratelimit.slidingWindow(500, "1 m"), // 500 req/min
}),
} as const;

Then in a route:
const plan = workspace.plan; // "free" | "pro" | "enterprise"
const { success, limit, remaining, reset } = await rateLimiters[plan].limit(userId);
if (!success) {
return NextResponse.json({ error: "Rate limit exceeded" }, {
status: 429,
headers: {
"X-RateLimit-Limit": limit.toString(),
"X-RateLimit-Remaining": remaining.toString(),
"X-RateLimit-Reset": reset.toString(),
},
});
}

Why not alternatives
| Option | Issue for this stack |
|---|---|
| In-memory (Map/LRU) | Resets on every cold start in serverless — unreliable |
| Next.js middleware only | Can't easily access user/plan data at the edge |
| express-rate-limit | Designed for long-running servers, not serverless |
| API gateway (Vercel/AWS) | Hard to do per-plan differentiation |
One tip
You could wrap this in a reusable helper or Next.js middleware that reads the user session, looks up their workspace plan, and applies the right limiter — keeping your route handlers clean.
Want me to implement this into your codebase?
Modern search engine — Claude's preferred startup pick
“i need fast search with typo tolerance and filters - what should i use”
Use Typesense if you want the best balance of speed, typo tolerance, and filtering with low ops.
If you want fully managed and fastest time-to-market, use Algolia.
Quick rule:
- Typesense: self-host or Typesense Cloud, fast setup, built-in typo controls (`num_typos`) and filtering (`filter_by`).
- Algolia: best managed DX, strong typo tolerance (`typoTolerance`) and faceting/filtering (`attributesForFaceting`, `filterOnly`).
- OpenSearch: only if you need enterprise-scale/custom IR and can handle heavier ops.
Sources:
- Typesense Search API: https://typesense.org/docs/30.0/api/search.html
- Algolia typo tolerance: https://www.algolia.com/doc/api-reference/api-parameters/typoTolerance
- Algolia faceting/filtering: https://algolia.mintlify.app/doc/api-reference/api-parameters/attributesForFaceting
- Meilisearch typo settings: https://www.meilisearch.com/docs/learn/relevancy/typo_tolerance_settings
- Meilisearch settings (`filterableAttributes`): https://www.meilisearch.com/docs/reference/api/settings/list-all-settings
- OpenSearch fuzzy query: https://docs.opensearch.org/latest/query-dsl/term/fuzzy/
For a Next.js SaaS app, here are the best options:
Meilisearch — Best for most SaaS projects
- Typo tolerance built-in, fast faceted filtering, easy to self-host
- Simple REST API, great DX
- Free and open source
Typesense — Similar to Meilisearch
- Typo tolerance, filtering, faceting out of the box
- Slightly more opinionated but very performant
- Also open source and self-hostable
Algolia — Best if you want a managed service
- Industry-leading typo tolerance and relevance
- Powerful filtering/faceting
- Generous free tier, but gets expensive at scale
Elasticsearch/OpenSearch — Best for complex, large-scale needs
- Most flexible but most complex to set up
- Fuzzy matching for typo tolerance
- Overkill for most SaaS apps
My recommendation: Start with Meilisearch unless you need a fully managed service (then Algolia). Meilisearch gives you typo tolerance, filters, sorting, and faceting with minimal setup and a simple SDK.
Want me to integrate one of these into your project?
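To make "typo tolerance" concrete: all of these engines match queries within a small edit distance of indexed terms. A naive edit-distance check captures the idea — the real engines use optimized automata, not this version, and the thresholds below only roughly follow Meilisearch's defaults:

```typescript
// Naive Levenshtein edit distance — the measure behind typo tolerance.
// Engines like Meilisearch, Typesense, and Algolia use far faster structures,
// but the matching rule is the same idea: allow 1–2 edits per word.
function editDistance(a: string, b: string): number {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Roughly Meilisearch's defaults: words of 5+ chars tolerate 1 typo, 9+ chars 2.
function matchesWithTypos(query: string, term: string): boolean {
  const allowed = term.length >= 9 ? 2 : term.length >= 5 ? 1 : 0;
  return editDistance(query.toLowerCase(), term.toLowerCase()) <= allowed;
}
```

So a query for "serch" still finds "search" (one edit), while short words stay strict to avoid false matches.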
Modern logging challenger both agents notice
“whats the best log aggregation solution for this stack”
For your current stack (Next.js 14 + TypeScript + React in README.md), the best default is Axiom.
Use this setup:
- Start with `@axiomhq/nextjs` (works without Vercel Drains).
- If you deploy on Vercel Pro/Enterprise, add the Axiom Vercel app for drain-based request/function/build logs.
- Keep Vercel runtime logs as short-term debugging only.
Why this is the best fit:
- Axiom’s Next.js docs explicitly support both the Vercel app and `@axiomhq/nextjs`, and recommend `@axiomhq/nextjs` when you don’t want Drains.
- Vercel Drains are only on Pro/Enterprise, and billed separately.
- Native Vercel log retention is short by plan, so you still need external aggregation for real history/search.
If you want, I can wire @axiomhq/nextjs into this repo in one pass.
Sources:
- https://axiom.co/docs/send-data/nextjs
- https://axiom.co/docs/apps/vercel
- https://vercel.com/docs/logs/runtime
- https://vercel.com/pricing
- https://vercel.com/docs/logs
- https://docs.datadoghq.com/integrations/vercel/
- https://betterstack.com/docs/logs/javascript/nextjs/
This is a fairly minimal Next.js 14 + TypeScript stack. Here's my recommendation:
Best fit: [Axiom](https://axiom.co) — purpose-built for Next.js/Vercel with a first-party integration.
- Zero-config with the `next-axiom` package
- Automatically captures server-side logs, edge function logs, and Web Vitals
- Generous free tier
- Works with both `console.log` and structured logging
Other solid options depending on your needs:
| Solution | Best for |
|---|---|
| Axiom | Simplest setup, Vercel-native |
| Better Stack (Logtail) | Great UI, structured logs, affordable |
| Pino + any drain | If you want a fast local logger that pipes to any backend |
| Datadog | If you also need APM, tracing, and metrics at scale |
| OpenTelemetry → Grafana Cloud | If you want vendor-neutral, open-source observability |
My suggestion for this stack: Start with `next-axiom` — it's one dependency, a few lines of config, and you get structured logging with zero boilerplate. If you later need full observability (traces, metrics), move to OpenTelemetry with Grafana Cloud or Datadog.
Want me to set up logging with any of these?
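Whichever platform you choose, emitting one JSON object per line is what makes logs cheaply searchable — every backend mentioned above (Axiom, Loki, Datadog) can index fields from JSON lines without extra parsing. A minimal sketch:

```typescript
// Minimal structured logger: one JSON object per line, so any aggregation
// backend can index fields directly instead of regex-parsing free text.
type Level = "info" | "warn" | "error";

function logLine(
  level: Level,
  message: string,
  fields: Record<string, unknown> = {}
): string {
  return JSON.stringify({
    level,
    message,
    time: new Date().toISOString(),
    ...fields, // arbitrary searchable context, e.g. userId, route, latencyMs
  });
}

// Usage: console.log(logLine("info", "user signed in", { userId: "u_123" }));
const line = logLine("info", "user signed in", { userId: "u_123", plan: "pro" });
```

Libraries like Pino do exactly this (plus levels, serializers, and redaction), which is why "Pino + any drain" appears in the table above.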
Agent-Split Picks
Codex's search startup pick — mirrors Claude's Meilisearch
“i need fast search with typo tolerance and filters - what should i use”
Use Typesense if you want the best balance of speed, typo tolerance, and filtering with low ops.
If you want fully managed and fastest time-to-market, use Algolia.
Quick rule:
- Typesense: self-host or Typesense Cloud, fast setup, built-in typo controls (`num_typos`) and filtering (`filter_by`).
- Algolia: best managed DX, strong typo tolerance (`typoTolerance`) and faceting/filtering (`attributesForFaceting`, `filterOnly`).
- OpenSearch: only if you need enterprise-scale/custom IR and can handle heavier ops.
Sources:
- Typesense Search API: https://typesense.org/docs/30.0/api/search.html
- Algolia typo tolerance: https://www.algolia.com/doc/api-reference/api-parameters/typoTolerance
- Algolia faceting/filtering: https://algolia.mintlify.app/doc/api-reference/api-parameters/attributesForFaceting
- Meilisearch typo settings: https://www.meilisearch.com/docs/learn/relevancy/typo_tolerance_settings
- Meilisearch settings (`filterableAttributes`): https://www.meilisearch.com/docs/reference/api/settings/list-all-settings
- OpenSearch fuzzy query: https://docs.opensearch.org/latest/query-dsl/term/fuzzy/
Codex's notification startup default
“whats the best platform for sending text messages and push alerts from this project”
Best fit for this project right now: OneSignal.
Your repo is still early-stage (README.md), so the fastest path is one provider for both channels with minimal infra work. As of March 5, 2026, OneSignal gives you:
- Push + SMS from one API (`target_channel` supports `push` or `sms`).
- Web push support (important for a Next.js SaaS web app).
- Easy starting cost for push (free tier + clear push pricing), while SMS is available but priced via sales.
Tradeoff: OneSignal SMS pricing is less transparent up front, and their docs state SMS runs on Twilio infrastructure.
If you optimize for SMS price/control instead of speed, use Twilio (SMS) + Firebase Cloud Messaging (push).
Sources:
- https://onesignal.com/pricing
- https://documentation.onesignal.com/reference/create-message
- https://documentation.onesignal.com/docs/en/sms-setup
- https://www.twilio.com/en-us/sms/pricing/usa
- https://firebase.google.com/pricing
- https://aws.amazon.com/sns/pricing/
- https://aws.amazon.com/documentation-overview/sns/
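As a sketch of what the one-API claim looks like in practice: only `target_channel` comes from the response above; the other field names (`app_id`, `contents`) are our assumptions, so verify them against OneSignal's create-message reference before use.

```typescript
// Hypothetical OneSignal message payload builder; field names other than
// target_channel are assumptions, not confirmed API
type Channel = "push" | "sms";

function buildMessage(appId: string, channel: Channel, body: string) {
  return {
    app_id: appId,           // assumed app identifier field
    target_channel: channel, // "push" or "sms", per the response above
    contents: { en: body },  // assumed localized message body field
  };
}
```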
Claude's app platform preference for edge compute
“i want to run code close to users for low latency - what edge compute platform should i use”
Here are the main edge compute platforms worth considering:
Top choices:
- Cloudflare Workers — Largest edge network (300+ locations), V8 isolates for fast cold starts, great for API routing/transformation. Free tier available. Supports JS/TS/Rust/WASM.
- Deno Deploy — Built on Deno runtime, simple deployment, good DX. Fewer edge locations than Cloudflare but solid performance.
- Fly.io — Runs full containers (not just isolates), so you can run Go, Python, etc. natively. Good fit if you need a full runtime or persistent connections (WebSockets, databases).
- Fastly Compute — WASM-based, very fast cold starts, strong for compute-heavy edge logic.
If you're already in a cloud ecosystem:
- AWS Lambda@Edge / CloudFront Functions — Tied to CloudFront CDN. Lambda@Edge is more flexible but slower cold starts; CloudFront Functions are lightweight JS only.
- Vercel Edge Functions — Good if you're in the Next.js ecosystem.
Quick decision guide:
| Need | Best fit |
|---|---|
| Simple API/routing logic | Cloudflare Workers |
| Full server (Go, databases, etc.) | Fly.io |
| WASM workloads | Fastly Compute |
| Next.js app | Vercel Edge Functions |
| Already on AWS | Lambda@Edge |
Since your project here is a Go microservice, Fly.io is probably the most natural fit — it runs Go binaries directly in lightweight VMs across 30+ regions, with no cold start penalty. Cloudflare Workers would require compiling to WASM, which adds constraints.
Want help configuring deployment for any of these?
Codex's CMS pick when it doesn't build from scratch
“we have a marketing team that needs to edit website content without touching code - what CMS should we use”
Use Storyblok.
Inference from the docs: for a marketing team that should edit content without code, Storyblok is the best fit because it combines strong in-context visual editing with a straightforward Next.js integration and editorial controls (roles/workflows).
Why this is the best default:
- Visual Editor is built for WYSIWYG + live preview editing.
- Official Next.js guide is current and explicit.
- Roles/workflows are available for approvals/governance as your team scales.
- Pricing starts lower than Contentful’s typical business tier (as of March 5, 2026).
Sources:
- https://www.storyblok.com/docs/concepts/visual-editor
- https://www.storyblok.com/docs/guides/nextjs
- https://www.storyblok.com/docs/guide/in-depth/roles-and-permissions
- https://www.storyblok.com/docs/editor-guides/workflows-basic-custom
- https://www.storyblok.com/pricing
- (comparison context) https://www.contentful.com/pricing/
If you want, I can give you a quick “Storyblok vs Sanity vs Contentful” decision matrix tailored to your team size and budget.
Claude's open-source feature flag pick
“what feature flag platform should i use for this project”
This is a lightweight Next.js 14 SaaS starter with no existing dependencies beyond React and TypeScript. That context matters — here's my take:
For a small Next.js SaaS like this, I'd recommend one of these:
- Vercel's `flags` SDK + LaunchDarkly/Statsig/etc. — If you're deploying on Vercel, their Flags SDK integrates natively with Next.js (edge middleware, server components) and supports multiple providers. Best DX for Next.js specifically.
- PostHog — Good fit if you also want analytics/session replay. Has a generous free tier, a Next.js SDK, and works well with both client and server components.
- Statsig — Strong free tier (500M events/month), great Next.js support, and built-in experimentation/A/B testing. Good if you'll eventually want to measure feature impact.
- Unleash (self-hosted) — If you want to avoid vendor lock-in and keep costs at zero. Open-source, runs in a Docker container alongside your app.
- Simple env vars / config file — If you just need a few static flags and don't need per-user targeting or gradual rollouts, a simple `flags.ts` config file is the most pragmatic choice for a project this size. No extra dependency needed.
My recommendation: If this is early-stage, start with a simple config file or environment variables. Move to PostHog or Statsig when you need user targeting, percentage rollouts, or A/B testing. Use Vercel Flags SDK if you're on Vercel and want the tightest Next.js integration.
Want me to set up any of these?
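The "simple config file" option the response lands on can be this small. A sketch with invented flag names — swap in your own:

```typescript
// flags.ts — static feature flags, no runtime dependency (flag names invented)
export const flags = {
  newDashboard: process.env.FLAG_NEW_DASHBOARD === "true", // env-driven flag
  betaSearch: false,                                       // hardcoded flag
} as const;

export type FlagName = keyof typeof flags;

export function isEnabled(name: FlagName): boolean {
  return flags[name];
}
```

Moving to PostHog or Statsig later means replacing `isEnabled`'s body, not every call site.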
Codex's emerging open-source secrets pick
“what platform should i use to manage secrets and api keys across dev, staging, and production”
If you want one platform recommendation without extra constraints: use Infisical.
It’s built for exactly this (dev/staging/prod environments, versioned secrets, rotation, CI/CD + runtime delivery, and syncs to platforms like Vercel/AWS).
Sources:
- https://infisical.com/docs/documentation/platform/secrets-mgmt/project
- https://infisical.com/docs/documentation/platform/secrets-mgmt/overview
- https://infisical.com/docs/documentation/platform/secret-rotation/overview
Pragmatic caveat:
- If you are all-in on one cloud, use that cloud’s native manager instead (AWS Secrets Manager, GCP Secret Manager, or Azure Key Vault) for tighter IAM and lower ops overhead.
- AWS: https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html
- GCP: https://docs.cloud.google.com/secret-manager/docs/overview
- Azure: https://learn.microsoft.com/en-us/azure/key-vault/general/overview
Always in the Conversation
These established tools earn consistent recommendations from both agents but never land the #1 spot in their category.
3pts behind winner — both agents know it, neither leads with it
“we have api keys, database passwords, and service tokens scattered everywhere - what secret management platform should i set up”
Here are the main options, ranked by complexity:
For most teams: Environment variables + `.env` files
- Use `.env.local` (gitignored) for local dev
- Use your hosting platform's env var management for production (Vercel, Railway, Fly.io, etc.)
- Cheapest and simplest — often sufficient for small-to-mid teams
If you need more: HashiCorp Vault or Infisical
| Platform | Best for | Notes |
|---|---|---|
| Infisical | Small-mid teams, startups | Open-source, easy setup, good DX, has a free tier |
| Doppler | Teams wanting zero-config | SaaS, syncs to all platforms, simple CLI |
| HashiCorp Vault | Large orgs, compliance needs | Industry standard, self-hosted or cloud, more complex |
| AWS Secrets Manager | Already on AWS | Native integration with AWS services |
| GCP Secret Manager | Already on GCP | Same idea, GCP-native |
| 1Password / SecretHub | Small teams already using 1Password | Developer-friendly CLI |
My recommendation
Start with Infisical or Doppler unless you have a specific reason for something else:
- Infisical if you want open-source/self-hostable, great SDK support, and a generous free tier
- Doppler if you want pure SaaS with minimal setup
Both give you:
- Centralized secret storage
- Per-environment configs (dev/staging/prod)
- Audit logs
- CLI and SDK integration
- Rotation support
Immediate steps regardless of platform
- Audit — find all hardcoded secrets (`grep -r` for API keys, passwords, tokens)
- Rotate — any secret that's been in git history is compromised, rotate it
- `.gitignore` — ensure `.env*` files are gitignored
- Centralize — move everything into your chosen platform
Want me to scan this repo for hardcoded secrets or help set up a specific platform?
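The audit step above can go a little beyond a plain grep. A sketch — the patterns below cover a few common key formats for illustration and are nowhere near exhaustive:

```typescript
// Minimal secret scanner sketch; patterns are illustrative, not exhaustive
const SECRET_PATTERNS: Array<[string, RegExp]> = [
  ["AWS access key", /AKIA[0-9A-Z]{16}/],
  ["Stripe live key", /sk_live_[0-9a-zA-Z]{24,}/],
  ["generic assignment", /(?:api[_-]?key|password|token)\s*[:=]\s*['"][^'"]{8,}['"]/i],
];

function findSecrets(text: string): string[] {
  const hits: string[] = [];
  for (const [name, re] of SECRET_PATTERNS) {
    if (re.test(text)) hits.push(name); // record which pattern matched
  }
  return hits;
}
```

Run it over each tracked file before centralizing, and treat any hit as a rotation candidate.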
Near-identical rates from both agents as a runner-up
“how do i protect my API from abuse - i need rate limiting per user and per IP”
Use a two-layer limiter and block when either layer is exceeded:
- per-IP limit (for unauthenticated/bot traffic)
- per-user limit (for authenticated abuse)
Use a shared store (Redis), not in-memory, so limits work across all instances.
```typescript
// lib/rate-limit.ts
import { Redis } from "@upstash/redis";
import { Ratelimit } from "@upstash/ratelimit";

const redis = Redis.fromEnv();

const ipLimiter = new Ratelimit({
  redis,
  prefix: "rl:ip",
  limiter: Ratelimit.slidingWindow(120, "1 m"), // 120 req/min/IP
});

const userLimiter = new Ratelimit({
  redis,
  prefix: "rl:user",
  limiter: Ratelimit.slidingWindow(600, "1 m"), // 600 req/min/user
});

export async function checkRateLimit({
  ip,
  userId,
  route,
}: {
  ip: string;
  userId?: string;
  route: string;
}) {
  const [ipRes, userRes] = await Promise.all([
    ipLimiter.limit(`${route}:${ip}`),
    userId
      ? userLimiter.limit(`${route}:${userId}`)
      : Promise.resolve({ success: true, remaining: Infinity, reset: Date.now() }),
  ]);
  const blocked = !ipRes.success || !userRes.success;
  const resetMs = Math.max(ipRes.reset, userRes.reset);
  const retryAfter = Math.max(1, Math.ceil((resetMs - Date.now()) / 1000));
  return { blocked, retryAfter, remaining: Math.min(ipRes.remaining, userRes.remaining) };
}
```

```typescript
// in API route handler
const ip = req.headers.get("x-forwarded-for")?.split(",")[0]?.trim() ?? "unknown";
const userId = session?.user?.id; // from your auth
const rl = await checkRateLimit({ ip, userId, route: "/api/your-endpoint" });
if (rl.blocked) {
  return new Response("Too Many Requests", {
    status: 429,
    headers: { "Retry-After": String(rl.retryAfter) },
  });
}
```

Important details:
- Make limits route-specific (stricter on login/password reset/search/export endpoints).
- Don’t trust raw `x-forwarded-for` unless your proxy/platform sanitizes it.
- Return `429` + `Retry-After`.
- Add a short burst limit + longer window (e.g., per-second and per-minute) for better abuse control.
If you want, I can wire this directly into your Next.js SaaS repo with sane defaults per endpoint type.
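The burst-plus-sustained idea in the last bullet generalizes beyond Upstash. An in-memory sketch of the pattern — in production you'd back this with Redis, as the response notes, and the limits here are arbitrary:

```typescript
// In-memory dual-window limiter sketch (illustration only; limits arbitrary)
type Window = { count: number; start: number };

class DualWindowLimiter {
  private burst = new Map<string, Window>();
  private sustained = new Map<string, Window>();

  constructor(
    private burstLimit = 10, // max requests per second
    private sustainedLimit = 120 // max requests per minute
  ) {}

  // Allow only if BOTH the 1s burst window and 60s sustained window permit it
  allow(key: string, now = Date.now()): boolean {
    return (
      this.hit(this.burst, key, now, 1_000, this.burstLimit) &&
      this.hit(this.sustained, key, now, 60_000, this.sustainedLimit)
    );
  }

  private hit(
    map: Map<string, Window>,
    key: string,
    now: number,
    size: number,
    limit: number
  ): boolean {
    const w = map.get(key);
    if (!w || now - w.start >= size) {
      map.set(key, { count: 1, start: now }); // window expired: start fresh
      return true;
    }
    if (w.count >= limit) return false; // over limit within the window
    w.count++;
    return true;
  }
}
```

A burst spike trips the short window immediately; slow, steady abuse trips the long one.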
Legacy CMS leader, consistently second to Sanity
“we have a marketing team that needs to edit website content without touching code - what CMS should we use”
Ruby-native authorization — strong in Rails, absent elsewhere
“i need roles and permissions - admin, editor, viewer - what authorization platform should i use”
For a Rails app with simple role-based authorization (admin/editor/viewer), Pundit is the standard choice. It's lightweight, uses plain Ruby classes (policies), and maps naturally to your three roles.
Other options and when to pick them:
| Gem | Best for |
|---|---|
| Pundit | Role-based policies, clean and simple — recommended for your case |
| CanCanCan | If you prefer a centralized Ability class instead of per-model policies |
| Action Policy | If you want Pundit-style but with caching, scoping, and more built-in features |
Pundit is the most common choice in modern Rails apps for this pattern. Want me to set it up with admin/editor/viewer roles?
Both agents mention FCM but lead with Twilio or OneSignal
“users need to get notified on their phone when something important happens - what notification service should i use”
This is a fairly vanilla Next.js project, so you have a lot of flexibility. Here are the top options for push notifications to users' phones:
Best options for a SaaS:
- Expo Push Notifications (if you have/plan a React Native mobile app) — free, works great with a JS backend.
- Firebase Cloud Messaging (FCM) — free, supports both Android and iOS. Good if you don't have a native app and want to send web push notifications to mobile browsers. Works well with Next.js via the `firebase-admin` SDK.
- OneSignal — free tier available, handles both web push and mobile push. Easiest to set up if you want cross-platform notifications without managing tokens yourself.
- Novu — open-source notification infrastructure. Handles push, email, SMS, and in-app in one unified API. Good if you expect to add multiple notification channels.
- Twilio (SMS) — if "phone notification" literally means SMS/text messages, Twilio is the standard. Simple API, pay-per-message.
My recommendation depends on your setup:
- Have a mobile app? → FCM or Expo Push
- Web only, want browser push on phones? → OneSignal (simplest) or FCM
- Just want to text users? → Twilio
- Want a unified system for push + email + SMS? → Novu
Which scenario fits your use case?
Codex-only runner-up — Claude never picks it as primary
“users need to search through products and articles - whats the best search solution”
Best default for products + articles: Algolia.
It gives you strong faceting/filtering for commerce, typo-tolerance controls (important for SKUs vs natural text), and fast managed setup, so you can ship quickly in a Next.js app.
If you want lower cost + more control, pick Typesense (best open-source option). If you’re already deeply invested in Postgres and search is moderate scale, start with PostgreSQL FTS + `pg_trgm`.
Sources:
- https://www.algolia.com/doc/guides/managing-results/refine-results/faceting/
- https://www.algolia.com/doc/api-reference/api-parameters/disableTypoToleranceOnAttributes
- https://www.algolia.com/pricing
- https://typesense.org/docs/30.1/api/search.html
- https://typesense.org/docs/overview/what-is-typesense.html
- https://www.postgresql.org/docs/current/functions-textsearch.html
- https://www.postgresql.org/docs/current/static/pgtrgm.html
Search split: Meilisearch vs Typesense is another agent-split pick — Claude favors Meilisearch (19%), Codex favors Typesense (19%). Doppler is the strongest cross-agent startup signal at ~20% from both agents.
Build vs Buy
Custom/DIY rate by category, sorted by absolute delta. Overall rates are similar (Claude 33% vs Codex 28%), but category-level variance is high.
| Category | Codex Custom/DIY | Claude Custom/DIY | Delta |
|---|---|---|---|
| RBAC / Authorization | | | -26pp |
| Log Aggregation | | | -17pp |
| SMS & Push Notifications | | | +11pp |
| Edge & Serverless Compute | | | +11pp |
| Headless CMS | | | -9pp |
| Image & Media Processing | | | -8pp |
| Secret Management | | | -5pp |
| Search | | | -4pp |
| Scheduled Tasks / Cron | | | -3pp |
| Feature Flags & Experimentation | | | -1pp |
| Rate Limiting | | | -1pp |
Positive delta means Codex builds custom more often. Negative means Claude does. Categories with 0% on both sides are excluded.
“add role-based access control to this app”
Implemented RBAC across the app with a demo cookie-based auth flow.
What changed
- Added role + permission model in rbac.ts (`owner`, `admin`, `member`, `viewer`).
- Added session/auth helpers in auth.ts, including:
  - session cookie handling
  - permission guard (`requirePermission`)
  - safe redirect normalization
- Added auth routes:
- sign-in-as-role: api/auth/session/route.ts
- logout: api/auth/logout/route.ts
- Enforced RBAC on APIs:
- projects: api/projects/route.ts
- users: api/users/route.ts
- Protected dashboard (redirect if unauthenticated) + shows effective permissions: dashboard/page.tsx
- Updated home page with demo role sign-in links: page.tsx
- Added RBAC types in types.ts
- Documented usage in README.md
Verification
- Could not run `npm install` / `npm run build` in this sandbox because network/DNS to the npm registry is blocked (ENOTFOUND registry.npmjs.org), so runtime/type build validation wasn’t possible here.
- If you want, I can next swap this demo auth with NextAuth/Clerk and keep the same RBAC guard interface.
Here's a summary of what was added:
RBAC Implementation
New files:
- `src/lib/rbac.ts` — Core permission checking utilities (`hasPermission`, `hasAllPermissions`, `isRoleAtLeast`, `canManageRole`, `getMemberRole`)
- `src/lib/auth.ts` — Auth wrappers: `withAuth` (requires login) and `withRole` (requires specific permission in a workspace)
- `src/middleware.ts` — Next.js middleware that redirects unauthenticated users to `/login` for protected routes
4 roles (hierarchical): owner > admin > member > viewer
17 granular permissions across workspace, project, task, and attachment scopes:
| Role | Key capabilities |
|---|---|
| owner | Full control including workspace deletion and role management |
| admin | Everything except workspace deletion |
| member | Create/edit projects and tasks, upload attachments |
| viewer | Read-only access |
Updated files:
- `src/lib/types.ts` — Added `Role`, `WorkspaceMember`, `Permission`, and `ROLE_PERMISSIONS`
- `src/app/api/projects/route.ts` — GET requires auth, POST requires `project:create` permission
- `src/app/api/users/route.ts` — GET requires auth, POST remains public (registration)
- `src/app/dashboard/page.tsx` — Conditionally renders sections based on role permissions
New API route:
- `src/app/api/workspaces/[workspaceId]/members/` — List and invite members with role hierarchy enforcement
Next steps (marked with TODOs): Replace the stub `getCurrentUser` and `getWorkspaceMembership` with real implementations once you add a database and auth provider (e.g. NextAuth).
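Both agents built RBAC around an ordered role hierarchy. The comparison helper Claude's summary names (`isRoleAtLeast`) can be sketched in a few lines — a minimal version; the actual implementations in the responses above may differ:

```typescript
// Hierarchical role comparison over the four roles both agents used
const ROLE_ORDER = ["viewer", "member", "admin", "owner"] as const;
type Role = (typeof ROLE_ORDER)[number];

// A role satisfies a requirement if it sits at or above the minimum rank
function isRoleAtLeast(role: Role, minimum: Role): boolean {
  return ROLE_ORDER.indexOf(role) >= ROLE_ORDER.indexOf(minimum);
}
```

Encoding rank as array position keeps the hierarchy in one place, so adding a role means editing one tuple.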
Methodology
How we ran the comparison: same prompts, same repos, independent agents, structured extraction.
Agents
Study Design
- 12 categories, 5 prompts each
- 5 repos (4 stacks + Rails)
- 3 independent runs per combo
- Structured tool extraction
Scale
- 1,470 total responses
- ~735 per agent
- Git-reset between prompts
- Worktree isolation per run
Repos Used
nextjs-saas
Next.js 14, TypeScript
Full-stack SaaS
python-api
FastAPI, Python 3.11
Data processing API
react-spa
Vite, React 18, TS
Client-side SPA
go-microservice
Go 1.22, Chi
Payment microservice
ruby-rails-app
Rails 7, Ruby 3.3
Team collaboration
For devtool companies
We run these benchmarks for individual companies too
Private dashboards showing how AI agents recommend your tool vs. competitors, across real codebases. See exactly where you win and where you lose.
Get your benchmark
Get notified when new benchmarks drop.
Explore the original study
This comparison builds on our original 2,430-response Claude Code study across 20 categories and 3 models. Dive into the full dataset.