Build a real website with Contentstack and AI
AI is changing how websites get built.
Not in a vague, future-looking way. Right now, in day-to-day development work, tools like Cursor, ChatGPT, Claude, Gemini, and GitHub Copilot are helping teams scaffold applications, wire up SDKs, generate components, debug errors, and move from idea to working implementation much faster than before.
That speed is useful, but only if the workflow is grounded. If you drop an AI assistant into a project without enough context, it tends to produce generic code, shaky assumptions, and patterns that look plausible but fall apart once you try to use them in a real codebase.
That is exactly where Contentstack comes in.
Contentstack gives you a structured content foundation: clearly defined content types, published entries, environment-aware delivery, and APIs that your front end can depend on. Pair that with a good AI workflow, and you can move much faster without losing control of architecture, content modeling, or implementation quality.
This guide shows you how to do that in a practical way. We are not going to stay in the abstract. We are going to use a realistic website example, explain why each step matters, and show how to prompt an AI assistant so the output is actually useful.
Why this guide exists
A lot of documentation about AI-assisted coding jumps straight into prompts and generated files. A lot of CMS documentation assumes you already understand how headless content modeling maps to modern frontend architecture. When those two worlds meet, there is often a gap.
Developers new to Contentstack may not yet understand stacks, environments, tokens, delivery APIs, or content type UIDs. Developers new to AI-assisted coding may not yet understand why structure matters so much in prompting, or why the same assistant can give you either excellent code or completely unhelpful code depending on how you frame the request.
This guide is designed to close that gap.
By the end, you should understand not just how to connect Contentstack to a Next.js website, but also how to work with AI in a way that is repeatable, predictable, and useful outside of a one-off demo.
The use case we are building
To keep this grounded, we will use a service-based company website as our reference implementation.
It includes:
a homepage with a hero section
service landing pages
a blog
a team page
a navigation menu managed in the CMS
This is a good example because it mirrors the kinds of projects developers build all the time: product marketing sites, agency sites, service-company sites, and content-led websites where editors need to manage content without developers changing code for every update.
In other words, this is a practical baseline. Once you understand this pattern, you can extend it to product catalogs, campaign microsites, knowledge bases, or multi-site architectures.
Who this is for
This guide is for:
developers who are new to Contentstack
teams already using Contentstack and exploring AI-first workflows
developers using Cursor, ChatGPT, Claude, Gemini, or GitHub Copilot in their daily work
You do not need to be an expert in either Contentstack or AI-assisted coding to get value from this guide. But you do need a clear mental model, and that is where we will start.
What you will build
By the end of this guide, you will have a working Next.js website connected to Contentstack through the Content Delivery API using the TypeScript Delivery SDK. Content will live in Contentstack, and the front end will fetch and render published entries.
That means:
editors can create or update content in the CMS
the website can fetch published content through the SDK
new pages and posts can appear without frontend code changes
deployment can happen through Contentstack Launch
Conceptually, the system looks like this:
[Diagram] Editors publish content in Contentstack → the Next.js front end fetches published entries through the Content Delivery API (via the TypeScript Delivery SDK) → the site is deployed and hosted on Contentstack Launch.
That flow is the foundation for everything else in this guide.
Before you start: how to use AI properly
The biggest mistake developers make with AI coding assistants is treating them like magical autocomplete.
That works for small tasks. It does not work well for architecture, SDK integration, or CMS-driven applications where the assistant needs to understand structure. If you want good output, you need to give the model enough context to reason about your goal, your stack, and your data model.
Start by asking for a plan
Before asking an AI assistant to write code, ask it to create a plan first.
That might sound slower, but in practice it is faster because it prevents drift. A plan gives you a chance to catch wrong assumptions before the assistant starts generating multiple files.
For example:
I am building a content-driven marketing website with Next.js and Contentstack. Before writing code, create a step-by-step implementation plan. Assume I need a homepage, service pages, a blog, and a team page. Use the Contentstack TypeScript Delivery SDK and Next.js App Router. Call out what should be modeled in Contentstack first and what should be built in code after that.
This kind of request gives the assistant a goal, constraints, and an order of operations. That usually produces much better implementation work than going straight to “build this page.”
Set a clear goal
The AI needs to know what kind of website you are building and who it is for.
“Build me a website” is too vague. A stronger prompt gives the assistant a target:
I am building a marketing website for a B2B SaaS company. The site needs a homepage, service landing pages, blog posts, and a team page. Content is managed in Contentstack, and the frontend is built in Next.js App Router using TypeScript.
This matters because architecture follows intent. A SaaS website, a product catalog, and a newsroom may all use the same CMS, but the content model and generated UI are not the same.
Define point of view
It also helps to define who you are and who you want the assistant to be.
For example:
You are a senior frontend engineer experienced with Next.js, headless CMS integration, and TypeScript. I am a developer implementing a content-driven website and I want production-friendly code.
This changes the quality of output more than most people expect. It nudges the assistant away from generic explanation and toward implementation choices that are more aligned with how experienced engineers think about maintainability and reuse.
Give real context, not guesses
When working with Contentstack, this is especially important.
Do not ask the AI to “fetch blog posts” unless you also tell it the content type UID. Do not ask it to render a hero image unless you tell it the field UID. And if you have a sample API response, include it. The less guessing the assistant has to do, the better the code will be.
A practical prompt almost always includes:
the framework and version
the Contentstack content type UID
the relevant field UIDs
the environment variable names
the expected output format
Iterate instead of restarting
AI-assisted coding works best when you keep refining the thread.
If the assistant generates something close but not correct, do not start from scratch. Paste the broken file back in, describe the problem, and ask for a fix. That preserves context and usually leads to a better result in less time.
Recommended developer setup
If you are going to build with AI effectively, your tooling matters.
Recommended IDE setup
Cursor is the best fit for this style of workflow because it can reason across multiple files, use your existing codebase as context, and handle larger implementation tasks better than simple inline autocomplete.
A good baseline setup in Cursor looks like this:
use chat for planning and larger implementation prompts
use inline edits for targeted changes
use @file references to include source files in context
keep your prompts structured and explicit
If you prefer VS Code, GitHub Copilot can still be useful, but it tends to be stronger for inline completion than for full multi-file scaffolding. Browser-based assistants like ChatGPT and Claude are also useful, especially when you want to reason through architecture or debug an issue by pasting code and errors directly.
Recommended Contentstack tools and packages
For this guide, the main tools to use are:
@contentstack/delivery-sdk for content delivery in TypeScript
@contentstack/utils for rich text rendering utilities
Contentstack Launch for deployment and hosting
These give you a clean, modern baseline and help avoid older patterns that may still show up in low-quality AI output.
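Assuming an existing Next.js project and npm as your package manager, installing the two packages listed above would look something like this:

```shell
# Install the TypeScript Delivery SDK and the rich text utilities
npm install @contentstack/delivery-sdk @contentstack/utils
```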
Understanding Contentstack before you write code
Before you start generating files, it helps to understand the mental model behind Contentstack.
Contentstack is a headless CMS. That means it manages content, not presentation. Your frontend is responsible for rendering the content, and Contentstack is responsible for storing it, structuring it, and delivering it through APIs.
If you are coming from a traditional CMS, that separation can feel unfamiliar at first. But it is one of the reasons headless architecture works so well with modern frameworks and AI-assisted workflows. The content model is structured. The frontend is decoupled. The integration surface is clear.
Here is the basic model:
[Diagram] Stack → content types (schemas) → entries and assets → environments → Content Delivery API → your front end.
Stack
A stack is the main project container inside Contentstack. It holds your content models, entries, assets, environments, and settings. If you are building a website project, the stack is the place where that project’s content structure lives.
Content types
A content type defines the structure of a piece of content. You can think of it as a schema. A blog_post content type might include fields such as title, slug, body, cover_image, and published_date.
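As a sketch, the blog_post schema above could map to a TypeScript shape like the one below. The field names mirror the example field UIDs in this section; the exact image and rich text types are assumptions you should verify against your own content type.

```typescript
// Sketch of an entry shape for the blog_post content type described above.
// The cover_image and body types are assumptions; check your actual field configuration.
interface BlogPost {
  title: string;
  slug: string;
  body: string; // may be structured JSON if you use the JSON rich text editor
  cover_image?: { url: string };
  published_date: string; // ISO date string
}

// A sample entry conforming to the shape:
const post: BlogPost = {
  title: "Hello from Contentstack",
  slug: "hello-from-contentstack",
  body: "<p>First post</p>",
  published_date: "2025-01-15",
};

console.log(post.slug);
```

Defining a shape like this early also gives your AI assistant something concrete to generate components against.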
Entries
An entry is an instance of a content type. If blog_post is the model, then each article is an entry.
Assets
Assets are uploaded files such as images or PDFs. These are stored on Contentstack’s CDN and referenced from entries.
Environments
Environments let you control where content is published. A frontend application usually targets one environment at a time, such as development, staging, or production.
APIs and tokens
For this guide, the main API to understand is the Content Delivery API, or CDA. This is the read-only API your website uses to fetch published content. In code, we will interact with it through the TypeScript Delivery SDK.
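To make the CDA concrete, here is a small sketch of the kind of request the SDK makes for you. The /v3/content_types/{uid}/entries path and the environment query parameter follow Contentstack's documented Delivery API pattern, but the cdn.contentstack.io host is region dependent, so treat it as an assumption to verify for your stack.

```typescript
// Sketch: the HTTP request the Delivery SDK wraps when fetching entries.
// Host shown is the default region; EU and Azure stacks use different base URLs.
function buildEntriesUrl(contentTypeUid: string, environment: string): string {
  const base = "https://cdn.contentstack.io/v3";
  return `${base}/content_types/${contentTypeUid}/entries?environment=${encodeURIComponent(environment)}`;
}

// The real request would also send the stack api_key and the delivery token as headers.
const url = buildEntriesUrl("blog_post", "development");
console.log(url);
```

You will not write this by hand in the project; the Delivery SDK handles it. But knowing the shape of the request makes SDK errors much easier to debug.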
Frequently asked questions
What will I build by the end of this guide?
A working Next.js (App Router) site that fetches and renders published Contentstack entries via the Content Delivery API using the TypeScript Delivery SDK.
What Contentstack setup is required before writing frontend code?
Create a stack, define content types and field UIDs, create an environment, generate a delivery token, and add sample entries so the frontend can target real data.
Which environment variables are required for the Delivery SDK setup?
At minimum: CONTENTSTACK_API_KEY, CONTENTSTACK_DELIVERY_TOKEN, and CONTENTSTACK_ENVIRONMENT. Store them in .env.local (and in Launch for deployment).
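For example, a minimal .env.local might look like this, with placeholder values you replace from your stack settings:

```shell
# .env.local — placeholder values; copy real ones from your stack settings
CONTENTSTACK_API_KEY=your_stack_api_key
CONTENTSTACK_DELIVERY_TOKEN=your_delivery_token
CONTENTSTACK_ENVIRONMENT=development
```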
Why does the guide recommend lazy SDK initialization (getStack())?
Initializing the SDK at module import time can break builds when environment variables are missing in the deployment environment. A getStack() function defers initialization until the first call, so a missing variable surfaces as a clear runtime error instead of a brittle build-time failure.
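A minimal sketch of that pattern, with the SDK call stubbed out (the real version would pass the validated config to contentstack.stack() from @contentstack/delivery-sdk):

```typescript
// Lazy initialization sketch: no work happens at module import time.
// StackConfig stands in for the real SDK stack object in this illustration.
type StackConfig = { apiKey: string; deliveryToken: string; environment: string };

let cachedConfig: StackConfig | null = null;

function getStack(): StackConfig {
  if (cachedConfig) return cachedConfig;

  const apiKey = process.env.CONTENTSTACK_API_KEY;
  const deliveryToken = process.env.CONTENTSTACK_DELIVERY_TOKEN;
  const environment = process.env.CONTENTSTACK_ENVIRONMENT;

  if (!apiKey || !deliveryToken || !environment) {
    // Fails at request time with a clear message instead of crashing the build
    throw new Error("Missing Contentstack environment variables");
  }

  cachedConfig = { apiKey, deliveryToken, environment };
  // Real implementation: return contentstack.stack(cachedConfig)
  return cachedConfig;
}
```

Because the check runs inside the function, pages that never touch Contentstack still build and render even when the variables are absent.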
How should Contentstack rich text be rendered in Next.js?
Treat rich text as structured JSON and render it using @contentstack/utils (for example jsonToHtml), rather than assuming it is a plain HTML string.
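To illustrate why the JSON structure matters, here is a tiny text-extraction walker over that kind of tree. The node shape (nested children arrays with text leaves) is an assumption based on Contentstack's JSON rich text format; real rendering should go through jsonToHtml from @contentstack/utils rather than a hand-rolled walker like this.

```typescript
// Illustrative only: a minimal walker over a JSON rich text tree.
// In a real app, prefer jsonToHtml from @contentstack/utils.
type RteNode = { text?: string; children?: RteNode[] };

function extractText(node: RteNode): string {
  // Leaf nodes carry text; container nodes carry children.
  if (node.text !== undefined) return node.text;
  return (node.children ?? []).map(extractText).join("");
}

const doc: RteNode = {
  children: [{ children: [{ text: "Hello " }, { text: "world" }] }],
};

console.log(extractText(doc)); // → "Hello world"
```

The point: rich text arrives as a tree you can traverse, transform, and render safely, not as a pre-baked HTML string.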