# Build a real website with Contentstack and AI

- **Authors:** Tim Benniks
- **Published:** 2026-04-16T14:40:02.700Z
- **Updated:** 2026-04-16T14:59:37.162Z
- **Tags:** headless, ai, sdk, api, content_modeling, automation
- **Chapters:** 4
- **Source:** [https://developers.contentstack.com/guides/build-a-real-website-with-contentstack-and-ai](https://developers.contentstack.com/guides/build-a-real-website-with-contentstack-and-ai)

---

## Table of Contents

1. [Part 1: set up Contentstack](#part-1-set-up-contentstack)
2. [Part 2: build the frontend with AI](#part-2-build-the-frontend-with-ai)
3. [Part 3: common real-world workflows](#part-3-common-real-world-workflows)
4. [Part 4: deployment with Contentstack Launch](#part-4-deployment-with-contentstack-launch)

AI is changing how websites get built.

Not in a vague, future-looking way. Right now, in day-to-day development work, tools like Cursor, ChatGPT, Claude, Gemini, and GitHub Copilot are helping teams scaffold applications, wire up SDKs, generate components, debug errors, and move from idea to working implementation much faster than before.

That speed is useful, but only if the workflow is grounded. If you drop an AI assistant into a project without enough context, it tends to produce generic code, shaky assumptions, and patterns that look plausible but fall apart once you try to use them in a real codebase.

That is exactly where Contentstack comes in.

Contentstack gives you a structured content foundation: clearly defined content types, published entries, environment-aware delivery, and APIs that your front end can depend on. Pair that with a good AI workflow, and you can move much faster without losing control of architecture, content modeling, or implementation quality.

This guide shows you how to do that in a practical way. We are not going to stay in the abstract. We are going to use a realistic website example, explain why each step matters, and show how to prompt an AI assistant so the output is actually useful.

## Why this guide exists

A lot of documentation about AI-assisted coding jumps straight into prompts and generated files. A lot of CMS documentation assumes you already understand how headless content modeling maps to modern frontend architecture. When those two worlds meet, there is often a gap.

Developers new to Contentstack may not yet understand stacks, environments, tokens, delivery APIs, or content type UIDs. Developers new to AI-assisted coding may not yet understand why structure matters so much in prompting, or why the same assistant can give you either excellent code or completely unhelpful code depending on how you frame the request.

This guide is designed to close that gap.

By the end, you should understand not just how to connect Contentstack to a Next.js website, but also how to work with AI in a way that is repeatable, predictable, and useful outside of a one-off demo.

## The use case we are building

To keep this grounded, we will use a service-based company website as our reference implementation.

It includes:

- a homepage with a hero section
- service landing pages
- a blog
- a team page
- a navigation menu managed in the CMS

This is a good example because it mirrors the kinds of projects developers build all the time: product marketing sites, agency sites, service-company sites, and content-led websites where editors need to manage content without developers changing code for every update.

In other words, this is a practical baseline. Once you understand this pattern, you can extend it to product catalogs, campaign microsites, knowledge bases, or multi-site architectures.

## Who this is for

This guide is for:

- developers who are new to Contentstack
- teams already using Contentstack and exploring AI-first workflows
- developers using Cursor, ChatGPT, Claude, Gemini, or GitHub Copilot in their daily work

You do not need to be an expert in either Contentstack or AI-assisted coding to get value from this guide. But you do need a clear mental model, and that is where we will start.

## What you will build

By the end of this guide, you will have a working Next.js website connected to Contentstack through the Content Delivery API using the TypeScript Delivery SDK. Content will live in Contentstack, and the front end will fetch and render published entries.

That means:

- editors can create or update content in the CMS
- the website can fetch published content through the SDK
- new pages and posts can appear without frontend code changes
- deployment can happen through Contentstack Launch

Conceptually, the system looks like this:

```mermaid
flowchart LR
  A[Content models in Contentstack] --> B[Editors create entries]
  B --> C[Entries published to an environment]
  C --> D[Content Delivery API / Delivery SDK]
  D --> E[Next.js application]
  E --> F[Rendered website]
```

That flow is the foundation for everything else in this guide.

## Before you start: how to use AI properly

The biggest mistake developers make with AI coding assistants is treating them like magical autocomplete.

That works for small tasks. It does not work well for architecture, SDK integration, or CMS-driven applications where the assistant needs to understand structure. If you want good output, you need to give the model enough context to reason about your goal, your stack, and your data model.

### Start by asking for a plan

Before asking an AI assistant to write code, ask it to create a plan first.

That might sound slower, but in practice it is faster because it prevents drift. A plan gives you a chance to catch wrong assumptions before the assistant starts generating multiple files.

For example:

I am building a content-driven marketing website with Next.js and Contentstack. Before writing code, create a step-by-step implementation plan. Assume I need a homepage, service pages, a blog, and a team page. Use the Contentstack TypeScript Delivery SDK and Next.js App Router. Call out what should be modeled in Contentstack first and what should be built in code after that.

This kind of request gives the assistant a goal, constraints, and an order of operations. That usually produces much better implementation work than going straight to “build this page.”

### Set a clear goal

The AI needs to know what kind of website you are building and who it is for.

“Build me a website” is too vague. A stronger prompt gives the assistant a target:

I am building a marketing website for a B2B SaaS company. The site needs a homepage, service landing pages, blog posts, and a team page. Content is managed in Contentstack, and the frontend is built in Next.js App Router using TypeScript.

This matters because architecture follows intent. A SaaS website, a product catalog, and a newsroom may all use the same CMS, but the content model and generated UI are not the same.

### Define point of view

It also helps to define who you are and who you want the assistant to be.

For example:

You are a senior frontend engineer experienced with Next.js, headless CMS integration, and TypeScript. I am a developer implementing a content-driven website and I want production-friendly code.

This changes the quality of output more than most people expect. It nudges the assistant away from generic explanation and toward implementation choices that are more aligned with how experienced engineers think about maintainability and reuse.

### Give real context, not guesses

When working with Contentstack, this is especially important.

Do not ask the AI to “fetch blog posts” unless you also tell it the content type UID. Do not ask it to render a hero image unless you tell it the field UID. And if you have a sample API response, include it. The less guessing the assistant has to do, the better the code will be.

A practical prompt almost always includes:

- the framework and version
- the Contentstack content type UID
- the relevant field UIDs
- the environment variable names
- the expected output format

### Iterate instead of restarting

AI-assisted coding works best when you keep refining the thread.

If the assistant generates something close but not correct, do not start from scratch. Paste the broken file back in, describe the problem, and ask for a fix. That preserves context and usually leads to a better result in less time.

## Recommended developer setup

If you are going to build with AI effectively, your tooling matters.

### Recommended IDE setup

Cursor is the best fit for this style of workflow because it can reason across multiple files, use your existing codebase as context, and handle larger implementation tasks better than a simple inline autocomplete workflow.

A good baseline setup in Cursor looks like this:

- use chat for planning and larger implementation prompts
- use inline edits for targeted changes
- use @file references to include source files in context
- keep your prompts structured and explicit

If you prefer VS Code, GitHub Copilot can still be useful, but it tends to be stronger for inline completion than for full multi-file scaffolding. Browser-based assistants like ChatGPT and Claude are also useful, especially when you want to reason through architecture or debug an issue by pasting code and errors directly.

### Recommended Contentstack tools and packages

For this guide, the main tools to use are:

- @contentstack/delivery-sdk for content delivery in TypeScript
- @contentstack/utils for rich text rendering utilities
- Contentstack Launch for deployment and hosting

These give you a clean, modern baseline and help avoid older patterns that may still show up in low-quality AI output.

## Understanding Contentstack before you write code

Before you start generating files, it helps to understand the mental model behind Contentstack.

Contentstack is a headless CMS. That means it manages content, not presentation. Your frontend is responsible for rendering the content, and Contentstack is responsible for storing it, structuring it, and delivering it through APIs.

If you are coming from a traditional CMS, that separation can feel unfamiliar at first. But it is one of the reasons headless architecture works so well with modern frameworks and AI-assisted workflows. The content model is structured. The frontend is decoupled. The integration surface is clear.

Here is the basic model:

```mermaid
flowchart TD
  A[Stack] --> B[Content types]
  B --> C[Entries]
  A --> D[Assets]
  A --> E[Environments]
  E --> F[Published content]
  F --> G[Frontend fetches content]
```

### Stack

A stack is the main project container inside Contentstack. It holds your content models, entries, assets, environments, and settings. If you are building a website project, the stack is the place where that project’s content structure lives.

### Content types

A content type defines the structure of a piece of content. You can think of it as a schema. A blog_post content type might include fields such as title, slug, body, cover_image, and published_date.
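
A TypeScript shape makes the "content type as schema" idea concrete. This is an illustrative sketch; match the property names to the exact field UIDs you define in your stack.

```typescript
// Illustrative shape for a blog_post content type. The field names here
// are assumptions; use the actual field UIDs from your stack.
interface BlogPostEntry {
  title: string;
  slug: string;
  body: unknown; // rich text fields are stored as structured JSON
  cover_image?: { url?: string };
  published_date?: string; // ISO 8601 date string
}

// Each entry is one instance of the content type:
const examplePost: BlogPostEntry = {
  title: "Hello from Contentstack",
  slug: "hello-from-contentstack",
  body: { type: "doc", children: [] },
  published_date: "2025-01-15",
};

console.log(examplePost.slug); // hello-from-contentstack
```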

### Entries

An entry is an instance of a content type. If blog_post is the model, then each article is an entry.

### Assets

Assets are uploaded files such as images or PDFs. These are stored on Contentstack’s CDN and referenced from entries.

### Environments

Environments let you control where content is published. A frontend application usually targets one environment at a time, such as development, staging, or production.

### APIs and tokens

For this guide, the main API to understand is the Content Delivery API, or CDA. This is the read-only API your website uses to fetch published content. In code, we will interact with it through the TypeScript Delivery SDK.




---

## Part 1: set up Contentstack

Before you write frontend code, set up the content side properly.

### Step 1: create your account and stack

Create a Contentstack account and create a new stack.

If you are using a free account, pay attention to the region your stack is assigned to. Some free accounts default to the EU region, and that matters when configuring SDKs and content delivery settings.

### Step 2: design your content model

For this example website, use a small but practical content model.

| Content type | Purpose | Fields |
| --- | --- | --- |
| homepage | Main landing page hero | hero_title, hero_subtitle, hero_image, cta_text, cta_link |
| landing_page | Service detail pages | page_title, slug, main_content, hero_image, service_price, launch_date |
| blog_post | Blog content | blog_post_title, slug, body, cover_image, published_date |
| team_member | Team page cards | name, role, photo, bio, linkedin_url |
| navigation_item | Global navigation | label, link, order |

This is where the practical use case matters. We are not modeling random content for the sake of a tutorial. We are modeling a real site structure that maps cleanly to pages developers often need to build.

### Step 3: create an environment

Create a development environment and point it at your local site URL, typically http://localhost:3000.

### Step 4: create API credentials

You will need:

- your stack API key
- a delivery token scoped to the environment you want to fetch from

Store them in environment variables. Never hardcode them in files that could end up in version control.

```bash
CONTENTSTACK_API_KEY=your_stack_api_key
CONTENTSTACK_DELIVERY_TOKEN=your_delivery_token
CONTENTSTACK_ENVIRONMENT=development
```

### Step 5: add sample content

Before you ask AI to generate frontend code, create a few real entries.

This is more important than it sounds. AI is much more effective when you can point it to actual field structures and real content instead of hypothetical data. Create at least:

- one homepage entry
- two or three service pages
- two or three blog posts
- several navigation items
- a few team members

---

## Part 2: build the frontend with AI

Now that the content side is ready, you can start building the application.

### Step 6: create the Next.js project

For this guide, we will use Next.js with the App Router and TypeScript.

```bash
npx create-next-app@latest my-contentstack-site
cd my-contentstack-site
npm install @contentstack/delivery-sdk @contentstack/utils
```

Then create a .env.local file:

```bash
CONTENTSTACK_API_KEY=your_stack_api_key
CONTENTSTACK_DELIVERY_TOKEN=your_delivery_token
CONTENTSTACK_ENVIRONMENT=development
CONTENTSTACK_WEBHOOK_SECRET=your_webhook_secret
```

### Step 7: prompt the AI with enough structure

At this point, you are ready to start using your assistant for implementation tasks. The important thing is not just to ask for a file. It is to provide enough constraints that the assistant can make good decisions.

Here is a strong example prompt:

```text
I am building a content-driven website with Next.js App Router and Contentstack.
Use TypeScript and @contentstack/delivery-sdk.
Initialize the stack using apiKey, deliveryToken, and environment from process.env.
I have a content type with UID landing_page and these fields:
- page_title
- slug
- main_content
- hero_image
- service_price
- launch_date

Create a reusable helper in lib/contentstack.ts and then create a page at app/services/[slug]/page.tsx.
Fetch a single landing_page entry where slug matches the route parameter.
Return a typed Server Component and render the title, hero image, rich text body, price, and date.
```

Notice what this prompt does well:

- it sets the goal
- it defines the tech stack
- it names the exact content type
- it names the fields
- it describes the desired output

That is the difference between useful AI output and generic filler.

### Step 8: create a reusable SDK layer

You should not scatter SDK initialization across random pages. Centralize it in one file and reuse helpers everywhere.

Here is a clean baseline:

```typescript
// lib/contentstack.ts
import contentstack, { BaseEntry, QueryOperation } from "@contentstack/delivery-sdk";

function getStack() {
  const {
    CONTENTSTACK_API_KEY,
    CONTENTSTACK_DELIVERY_TOKEN,
    CONTENTSTACK_ENVIRONMENT,
  } = process.env;

  if (!CONTENTSTACK_API_KEY || !CONTENTSTACK_DELIVERY_TOKEN || !CONTENTSTACK_ENVIRONMENT) {
    return null;
  }

  return contentstack.stack({
    apiKey: CONTENTSTACK_API_KEY,
    deliveryToken: CONTENTSTACK_DELIVERY_TOKEN,
    environment: CONTENTSTACK_ENVIRONMENT,
  });
}

export async function getEntries<T extends BaseEntry>(contentTypeUid: string): Promise<T[]> {
  const stack = getStack();
  if (!stack) return [];

  const { entries } = await stack
    .contentType(contentTypeUid)
    .entry()
    .query()
    .find<T>();

  return entries ?? [];
}

export async function getEntryBySlug<T extends BaseEntry>(
  contentTypeUid: string,
  slug: string
): Promise<T | null> {
  const stack = getStack();
  if (!stack) return null;

  const { entries } = await stack
    .contentType(contentTypeUid)
    .entry()
    .query()
    .where("slug", QueryOperation.EQUALS, slug)
    .find<T>();

  return entries?.[0] ?? null;
}
```

There are two things worth calling out here.

First, this uses a lazy getStack() function instead of initializing the SDK at module load time. That matters later when you deploy to Contentstack Launch, because missing environment variables during build will otherwise cause brittle failures.

Second, these helpers give your AI assistant a stable implementation surface. Once this file exists, you can reference it in prompts and ask the assistant to build pages on top of it rather than regenerating connection logic every time.

### Step 9: build pages one by one

Now the workflow becomes more natural. You can ask the AI to generate each page against your existing content layer.

For example, a services list page can fetch all landing_page entries and render them as cards. A blog detail page can fetch a blog_post by slug. A homepage can fetch a single homepage entry and render a hero.

Here is a simplified example for a homepage server component:

```typescript
// app/page.tsx
import { getEntries } from "@/lib/contentstack";

interface HomepageEntry {
  hero_title?: string;
  hero_subtitle?: string;
  cta_text?: string;
  cta_link?: string;
  hero_image?: { url?: string };
}

export default async function HomePage() {
  const entries = await getEntries<HomepageEntry>("homepage");
  const homepage = entries[0];

  if (!homepage) {
    return <main className="p-10">No homepage content found.</main>;
  }

  return (
    <main className="relative min-h-[70vh] overflow-hidden">
      {homepage.hero_image?.url ? (
        <img
          src={`${homepage.hero_image.url}?width=1600`}
          alt={homepage.hero_title ?? "Hero image"}
          className="absolute inset-0 h-full w-full object-cover"
        />
      ) : null}
      <div className="relative z-10 mx-auto flex min-h-[70vh] max-w-5xl flex-col items-center justify-center px-6 text-center text-white">
        <h1 className="text-4xl font-semibold md:text-6xl">{homepage.hero_title}</h1>
        <p className="mt-4 max-w-2xl text-lg md:text-xl">{homepage.hero_subtitle}</p>
        {homepage.cta_link && homepage.cta_text ? (
          <a
            href={homepage.cta_link}
            className="mt-8 rounded-md bg-white px-6 py-3 font-medium text-black"
          >
            {homepage.cta_text}
          </a>
        ) : null}
      </div>
    </main>
  );
}
```

This is a good example of how AI can save time without taking away architectural ownership. You define the content model and helper layer. The assistant helps you move faster on implementation.
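
The navigation menu follows the same pattern: fetch the navigation_item entries and sort them by their order field before rendering. Keeping that logic in a small pure helper makes it easy to test; this is a sketch assuming the label, link, and order field UIDs from the model above.

```typescript
// Sort navigation_item entries by their numeric order field.
// Items without an order value sink to the end of the menu.
interface NavigationItem {
  label: string;
  link: string;
  order?: number;
}

export function sortNavigation(items: NavigationItem[]): NavigationItem[] {
  return [...items].sort(
    (a, b) => (a.order ?? Number.MAX_SAFE_INTEGER) - (b.order ?? Number.MAX_SAFE_INTEGER)
  );
}

const nav = sortNavigation([
  { label: "Blog", link: "/blog", order: 3 },
  { label: "Home", link: "/", order: 1 },
  { label: "Services", link: "/services", order: 2 },
]);

console.log(nav.map((n) => n.label)); // [ 'Home', 'Services', 'Blog' ]
```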

### Step 10: handle rich text correctly

Rich text is one of the places where generic AI output often goes wrong. Contentstack rich text fields are stored as structured JSON, so you should use the appropriate utility package rather than treating them like plain HTML strings.

A reusable renderer can look like this:

```typescript
// components/RichTextRenderer.tsx
import { jsonToHTML } from "@contentstack/utils";

// jsonToHTML mutates the entry, replacing the JSON rich text value at
// each listed path with an HTML string.
export function RichTextRenderer({ entry, path }: { entry: Record<string, unknown>; path: string }) {
  if (!entry?.[path]) return null;

  // Skip conversion if the field was already converted on a previous render.
  if (typeof entry[path] !== "string") {
    jsonToHTML({ entry, paths: [path] });
  }

  return <div dangerouslySetInnerHTML={{ __html: String(entry[path]) }} />;
}
```

If you need stricter sanitization or custom rendering rules, you can extend this later. The important thing is to establish the correct baseline.

### Step 11: handle images from Contentstack

Image fields typically return an object with a url property. In Next.js, you will usually want to pass that URL into next/image and allow the relevant Contentstack image host in your Next.js config.

```typescript
// components/ContentstackImage.tsx
import Image from "next/image";

export function ContentstackImage({
  src,
  alt,
}: {
  src?: string;
  alt: string;
}) {
  if (!src) return null;

  return (
    <Image
      src={`${src}?width=800`}
      alt={alt}
      width={800}
      height={450}
      className="h-auto w-full rounded-lg object-cover"
    />
  );
}
```
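
For next/image to load these URLs, the Contentstack image host must be allowed in your Next.js config. The hostname below is an assumption; check the asset URLs in your own stack (they vary by region) and allow that host instead:

```typescript
// next.config.ts — allow Contentstack-hosted images for next/image.
// "images.contentstack.io" is an assumption; verify against your stack's
// actual asset URLs, which differ by region.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  images: {
    remotePatterns: [{ protocol: "https", hostname: "images.contentstack.io" }],
  },
};

export default nextConfig;
```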




---

## Part 3: common real-world workflows

Once the site is up and connected, AI becomes most useful in iterative feature work.

### Adding a new content type

This is one of the clearest examples of an effective AI workflow.

The sequence looks like this:

```mermaid
sequenceDiagram
    participant Dev as Developer
    participant CS as Contentstack
    participant AI as AI assistant
    participant App as Next.js app

    Dev->>CS: Create content type and fields
    Dev->>AI: Share UID, field UIDs, and page goal
    AI->>App: Generate page/component code
    Dev->>App: Review and refine
    App->>CS: Fetch published entries
```

In practice, the best approach is:

- create the content type in Contentstack first
- note the exact content type UID and field UIDs
- ask the AI to build the page using your existing helper layer
- review the output and refine it

### Adding search

Search is a good example of where AI can help with implementation details while you stay focused on behavior and UX.

Instead of asking for “search,” ask for something specific:


```text
Create a blog search page in Next.js App Router. Accept a q query parameter. 
Search the blog_post content type using the Delivery SDK. Render matching results as cards with title, image, and link. 
If no results match, show a No results found state.
```
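
If you want a starting point before reaching for API-level search, a simple approach is to fetch entries with the existing getEntries helper and filter them in code. This sketch assumes the blog_post_title field UID from the model above; it works fine for small content sets, while larger sites should filter via the Delivery API instead.

```typescript
// Naive in-memory search over fetched blog entries. Suitable for small
// content sets; query the Delivery API directly for larger ones.
interface SearchablePost {
  blog_post_title?: string;
  slug?: string;
}

export function filterBySearchTerm<T extends SearchablePost>(entries: T[], q: string): T[] {
  const term = q.trim().toLowerCase();
  if (!term) return entries;
  return entries.filter((e) => (e.blog_post_title ?? "").toLowerCase().includes(term));
}

const posts = [
  { blog_post_title: "Scaling content models", slug: "scaling-content-models" },
  { blog_post_title: "AI prompting basics", slug: "ai-prompting-basics" },
];

console.log(filterBySearchTerm(posts, "ai").map((p) => p.slug)); // [ 'ai-prompting-basics' ]
```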

### Adding pagination

Pagination is another good feature to hand off in a structured prompt. When you specify the page size, URL pattern, and rendering expectation, AI tools usually do well with the implementation.
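
The Content Delivery API paginates with skip and limit parameters, which the Delivery SDK exposes as query methods. The math is simple enough to isolate in a helper; this is a sketch, and the page size and URL pattern are up to you.

```typescript
// Translate a 1-based page number into CDA-style skip/limit values.
// Pass the result into your Delivery SDK query (e.g. .skip(...).limit(...)).
export function getPageParams(page: number, pageSize = 10): { skip: number; limit: number } {
  const safePage = Math.max(1, Math.floor(page)); // clamp to page 1 minimum
  return { skip: (safePage - 1) * pageSize, limit: pageSize };
}

console.log(getPageParams(3, 10)); // { skip: 20, limit: 10 }
```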



---

## Part 4: deployment with Contentstack Launch

Once the local app works, the next step is deployment.

Contentstack Launch is designed for frontend hosting and deployment in Contentstack-powered projects. The important thing to understand is that deployment environments behave differently from your local machine. A project that works perfectly with .env.local can still fail at build time if those same variables are not configured in Launch.

That is why the lazy getStack() pattern matters so much.

Without it, your build can fail early during page data collection because the SDK tries to initialize before required variables are present. With it, the app can fail more gracefully and avoid crashing during module import.

A simplified deployment flow looks like this:

```mermaid
flowchart LR
    A[Local Next.js project] --> B[Push to GitHub]
    B --> C[Create Launch project]
    C --> D[Set environment variables]
    D --> E[Build and deploy]
    E --> F[Live site]
```

At minimum, configure these environment variables in Launch:

```bash
CONTENTSTACK_API_KEY=your_stack_api_key
CONTENTSTACK_DELIVERY_TOKEN=your_delivery_token
CONTENTSTACK_ENVIRONMENT=development
CONTENTSTACK_WEBHOOK_SECRET=your_webhook_secret
```

If you are using webhook-driven revalidation, the webhook secret is required. Otherwise it can be added later.
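
If you do wire up webhook-driven revalidation, the receiving side can be a small App Router route handler. This is a hedged sketch: the x-contentstack-webhook-secret header name is an assumption — use whatever custom header you configure on the webhook in Contentstack — and a real implementation would usually inspect the payload to revalidate only the affected paths.

```typescript
// app/api/revalidate/route.ts — sketch of a webhook receiver that
// triggers on-demand revalidation. The header name is an assumption;
// match it to the custom header you configure on the Contentstack webhook.
import { revalidatePath } from "next/cache";

export async function POST(request: Request) {
  const secret = request.headers.get("x-contentstack-webhook-secret");

  if (!secret || secret !== process.env.CONTENTSTACK_WEBHOOK_SECRET) {
    return Response.json({ message: "Invalid secret" }, { status: 401 });
  }

  // Broad revalidation; a real handler could read the webhook payload
  // and revalidate only the routes the published entry affects.
  revalidatePath("/", "layout");

  return Response.json({ revalidated: true });
}
```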

## Best practices for prompting throughout the project

The shape of your prompt is often the difference between good code and cleanup work.

A strong prompt usually answers five things:

- what you are building
- who it is for
- what stack you are using
- what data model exists
- what output you want back

Here is a practical template you can reuse:

- Goal: I am building a [type of website] for [audience or business type].
- Role: You are a senior engineer experienced with Next.js, TypeScript, and Contentstack.
- Context: I am using Contentstack Headless CMS with the TypeScript Delivery SDK. The content type UID is [content_type_uid]. The relevant fields are [field_uid_1], [field_uid_2], [field_uid_3]. My environment variables are stored in process.env.
- Task: Create [page/component/helper] using Next.js App Router. Use my existing helper in @/lib/contentstack.ts where appropriate.
- Output: Return production-friendly TypeScript code and explain any assumptions.

The reason this works is simple: good prompts reduce ambiguity. They help the assistant reason about the system you are building instead of filling in missing details with guesses.

## The real takeaway

The point of this guide is not just that you can build a website faster with Contentstack and AI.

It is that Contentstack gives AI a better operating environment.

When your content is structured, your fields are named, your environments are defined, and your integration layer is clean, AI has something solid to work with. That is what turns AI-assisted coding from a novelty into a practical workflow.

If you treat the assistant like an all-knowing generator, you will get uneven output. If you treat it like a capable junior engineer working inside a well-structured system, you will move much faster and get better results.

That is the mindset shift.

You are not just asking AI to write code. You are giving it architecture, constraints, and intent. Contentstack makes that easier because the content model itself becomes part of the implementation context.

## Where to go next

Once you have this baseline working, you can extend it in several useful directions:

- add preview workflows
- add search and filtering
- add cache revalidation through webhooks
- model more complex references and relationships
- deploy multiple environments through Launch

At that point, you are no longer working through a tutorial. You are using a repeatable development workflow that can scale to real production projects.

And that is the real goal.



---

## Frequently asked questions

### What will I build by the end of this guide?

A working Next.js (App Router) site that fetches and renders published Contentstack entries via the Content Delivery API using the TypeScript Delivery SDK.

### What Contentstack setup is required before writing frontend code?

Create a stack, define content types and field UIDs, create an environment, generate a delivery token, and add sample entries so the frontend can target real data.

### Which environment variables are required for the Delivery SDK setup?

At minimum: CONTENTSTACK_API_KEY, CONTENTSTACK_DELIVERY_TOKEN, and CONTENTSTACK_ENVIRONMENT. Store them in .env.local (and in Launch for deployment).

### Why does the guide recommend lazy SDK initialization (getStack())?

It avoids brittle build-time failures when environment variables are missing in deployment environments by preventing SDK initialization during module import.

### How should Contentstack rich text be rendered in Next.js?

Treat rich text as structured JSON and render it using @contentstack/utils (for example jsonToHtml), rather than assuming it is a plain HTML string.

