Writing Better Prompts for Code Generation

The gap between developers who get mediocre results from AI and those who get great results almost always comes down to one thing: how they write their prompts. I’ve been using AI for code generation daily for over a year now, and I’ve learned that prompt quality is the single biggest factor in output quality.
Here’s what I’ve figured out about writing prompts that actually produce usable code.
Why Prompt Quality Matters
A vague prompt produces vague code. Ask “build me a form” and you’ll get a generic form with generic fields and generic validation. Ask precisely and you’ll get something you can actually ship. The AI isn’t being lazy with vague prompts. It’s making assumptions. And assumptions are where bugs come from.
The Anatomy of a Good Code Prompt
Every effective code prompt I write has four parts:
- Context: What’s the project, stack, and existing patterns?
- Task: What exactly do you want built?
- Constraints: What rules must the code follow?
- Output format: How should the response be structured?
Let me show you the difference this makes.
Bad Prompt
Write a React hook for fetching data.
This will produce a generic useFetch hook that probably doesn’t match your project’s patterns, error handling approach, or TypeScript conventions.
Good Prompt
Write a custom React hook called useApiQuery for our TypeScript + React 19
project. It should:
- Accept a URL string and an optional config object
- Return { data, error, isLoading, refetch }
- Use the native fetch API (no axios)
- Handle AbortController cleanup on unmount
- Type the response generically: useApiQuery<User>('/api/users')
- Follow this error pattern we use elsewhere:
type ApiError = { message: string; status: number }
- Don't cache responses (we handle caching at a different layer)
Existing pattern for reference:
export function useAuth() {
  const [user, setUser] = useState<User | null>(null);
  // ... we use this state pattern throughout
}
The second prompt gives the AI everything it needs to produce code that fits your project. Notice the explicit constraints: “no axios,” “don’t cache.” Without those, the AI might add features you don’t want.
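For reference, here's a minimal sketch of the kind of hook that prompt tends to produce. Treat it as an illustration, not a drop-in implementation; it assumes the ApiError type from the prompt and that callers memoize any config object they pass in.

import { useCallback, useEffect, useState } from 'react';

type ApiError = { message: string; status: number };

export function useApiQuery<T>(url: string, config?: RequestInit) {
  const [data, setData] = useState<T | null>(null);
  const [error, setError] = useState<ApiError | null>(null);
  const [isLoading, setIsLoading] = useState(false);

  // Note: pass a memoized config, or this callback is recreated every render.
  const execute = useCallback(async (signal?: AbortSignal) => {
    setIsLoading(true);
    try {
      const res = await fetch(url, { ...config, signal });
      if (!res.ok) {
        setError({ message: res.statusText, status: res.status });
        return;
      }
      setData((await res.json()) as T);
      setError(null);
    } catch (e) {
      // Swallow the abort triggered by unmount cleanup; surface real failures
      if ((e as Error).name !== 'AbortError') {
        setError({ message: (e as Error).message, status: 0 });
      }
    } finally {
      setIsLoading(false);
    }
  }, [url, config]);

  useEffect(() => {
    const controller = new AbortController();
    execute(controller.signal);
    return () => controller.abort();
  }, [execute]);

  const refetch = useCallback(() => execute(), [execute]);

  return { data, error, isLoading, refetch };
}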
Providing Context Effectively
Context doesn’t mean pasting your entire codebase. It means giving the AI just enough to understand your patterns. I typically include:
- The tech stack with version numbers
- One or two existing code samples that show the project’s style
- Any relevant type definitions
- The file this code will live in (so the AI understands the module context)
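Put together, a context preamble might look like this (the versions and file path here are illustrative):

Context: TypeScript 5 + React 19 app, Vite, strict mode enabled.
This hook will live in src/hooks/useApiQuery.ts.
Relevant type: type ApiError = { message: string; status: number }
Existing pattern that shows our state conventions: the useAuth hook, pasted above.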
Specifying Constraints
Constraints are the most underused part of prompting. Every project has rules, either explicit or implicit. Tell the AI about them:
Constraints:
- No default exports (we use named exports everywhere)
- Props interfaces go in the same file, not a separate types file
- Use Tailwind classes, not CSS modules
- Error boundaries are handled at the route level, don't add them
- All text must support i18n (use t() function from our i18n hook)
- No inline styles
Without constraints, you’ll spend more time editing the generated code than you saved by generating it.
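To make that concrete, here's what a small component generated under those constraints might look like. UserBadge, the translation key, and the useI18n hook are hypothetical stand-ins:

import { useI18n } from '../hooks/useI18n'; // hypothetical hook exposing t()

interface UserBadgeProps {
  name: string;
  isAdmin: boolean;
}

// Named export, props interface in the same file, Tailwind classes, i18n text
export function UserBadge({ name, isAdmin }: UserBadgeProps) {
  const { t } = useI18n();
  return (
    <span className="inline-flex items-center gap-1 rounded bg-gray-100 px-2 py-1 text-sm">
      {name}
      {isAdmin && <em className="text-xs text-gray-500">{t('user.adminLabel')}</em>}
    </span>
  );
}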
Bad vs Good Prompts: More Examples
Bad: “Write tests for this component.”
Good: “Write Vitest + React Testing Library tests for this UserCard component. Test: default render with required props, conditional rendering when user.avatar is null, click handler on the edit button. We mock API calls with MSW. Use our test pattern:”
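That prompt would continue with a pasted example test. Even before the pattern is attached, the specificity pays off; the output tends to land close to something like this sketch (UserCard and its props are hypothetical, and the MSW setup is omitted for brevity):

import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import { describe, expect, it, vi } from 'vitest';
import { UserCard } from './UserCard'; // hypothetical component under test

// Assumes @testing-library/jest-dom matchers are registered in the Vitest setup file.
const baseUser = { name: 'Ada Lovelace', avatar: '/ada.png' };

describe('UserCard', () => {
  it('renders with required props', () => {
    render(<UserCard user={baseUser} onEdit={vi.fn()} />);
    expect(screen.getByText('Ada Lovelace')).toBeInTheDocument();
  });

  it('hides the avatar image when user.avatar is null', () => {
    render(<UserCard user={{ ...baseUser, avatar: null }} onEdit={vi.fn()} />);
    expect(screen.queryByRole('img')).not.toBeInTheDocument();
  });

  it('calls the click handler on the edit button', async () => {
    const onEdit = vi.fn();
    render(<UserCard user={baseUser} onEdit={onEdit} />);
    await userEvent.click(screen.getByRole('button', { name: /edit/i }));
    expect(onEdit).toHaveBeenCalledOnce();
  });
});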
Bad: “Convert this to TypeScript.”
Good: “Add TypeScript types to this JavaScript module. Use strict types (no any). Prefer interfaces over type aliases for object shapes. Union types for string literals. The function parameters should use the existing UserDTO type from ‘../types/api’.”
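Applied to a small module, those instructions might produce something in this direction. The sortUsers function is a hypothetical example; only the UserDTO import path comes from the prompt:

import type { UserDTO } from '../types/api';

// Union type for the string literals, per the prompt
type SortDirection = 'asc' | 'desc';

// Interface preferred over a type alias for object shapes
interface SortOptions {
  field: keyof UserDTO;
  direction: SortDirection;
}

export function sortUsers(users: UserDTO[], options: SortOptions): UserDTO[] {
  const factor = options.direction === 'asc' ? 1 : -1;
  return [...users].sort(
    (a, b) => String(a[options.field]).localeCompare(String(b[options.field])) * factor,
  );
}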
Advanced Techniques
Few-Shot Prompting
Show the AI an example of what you want, then ask for more of the same:
Here's how we define API endpoints in our project:
export const getUsers = createEndpoint({
  method: 'GET',
  path: '/users',
  response: z.array(UserSchema),
});
Now create similar endpoint definitions for:
- GET /products (returns array of Product)
- GET /products/:id (returns single Product)
- POST /products (accepts CreateProductInput, returns Product)
- DELETE /products/:id (returns { success: boolean })
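The output usually mirrors the example almost line for line, roughly like the sketch below. ProductSchema and CreateProductInput are assumed to exist alongside UserSchema, and the body field on the POST endpoint is a guess at how your createEndpoint attaches request bodies:

export const getProducts = createEndpoint({
  method: 'GET',
  path: '/products',
  response: z.array(ProductSchema),
});

export const getProduct = createEndpoint({
  method: 'GET',
  path: '/products/:id',
  response: ProductSchema,
});

export const createProduct = createEndpoint({
  method: 'POST',
  path: '/products',
  body: CreateProductInput, // assumption: depends on your createEndpoint signature
  response: ProductSchema,
});

export const deleteProduct = createEndpoint({
  method: 'DELETE',
  path: '/products/:id',
  response: z.object({ success: z.boolean() }),
});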
Few-shot examples are the fastest way to get consistent output that matches your codebase style.
Chain-of-Thought
For complex logic, ask the AI to think through the problem before coding:
I need a function that calculates shipping costs based on weight,
distance, and package dimensions. Before writing the code, first
outline the calculation steps and edge cases we need to handle.
Then implement it.
This produces better code because the AI identifies edge cases during the thinking phase rather than missing them during implementation.
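As a sketch of where that lands: the outline phase typically surfaces edge cases like dimensional weight and minimum charges. Every rate and threshold below is a made-up placeholder:

interface Package {
  weightKg: number;
  lengthCm: number;
  widthCm: number;
  heightCm: number;
}

// All rates, divisors, and minimums are illustrative placeholders.
const RATE_PER_KG_KM = 0.0004;
const DIM_WEIGHT_DIVISOR = 5000; // common volumetric convention: cm^3 / 5000
const MINIMUM_CHARGE = 3.5;

export function calculateShippingCost(pkg: Package, distanceKm: number): number {
  // Edge case: reject nonsense inputs instead of returning a negative cost
  if (distanceKm <= 0 || pkg.weightKg <= 0) {
    throw new Error('Distance and weight must be positive');
  }
  // Edge case: bulky-but-light packages are billed by dimensional weight
  const dimWeightKg =
    (pkg.lengthCm * pkg.widthCm * pkg.heightCm) / DIM_WEIGHT_DIVISOR;
  const billableKg = Math.max(pkg.weightKg, dimWeightKg);
  const raw = billableKg * distanceKm * RATE_PER_KG_KM;
  // Edge case: very short or very light shipments fall back to a minimum charge
  return Math.round(Math.max(raw, MINIMUM_CHARGE) * 100) / 100;
}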
The Meta-Lesson
Good prompts are just good communication. The same skills that make you effective in code reviews, ticket writing, and technical documentation make you effective at prompting. Be specific. Be explicit about constraints. Show, don't tell. And remember: the five minutes you spend writing a detailed prompt will save you twenty minutes of editing mediocre output.
Written by Adrian Saycon
A developer with a passion for emerging technologies, Adrian Saycon focuses on transforming the latest tech trends into great, functional products.