Most MCP developers learn about tools and resources and stop there, treating prompts as a nice-to-have. This is a mistake. Prompts are the mechanism that turns a raw capability server into a polished, user-facing product. They let you bake your best workflows into the server itself, expose them through any MCP-compatible host, and guarantee that users get the same high-quality prompt structure regardless of which host they use. Think of prompts as the “saved queries” of the AI world.

What Prompts Are and Why They Matter
An MCP prompt is a named, reusable prompt template that the server exposes for clients to use. When a client calls prompts/get with a prompt name and arguments, the server returns a list of messages ready to be sent to an LLM. The messages can reference resources (to inject dynamic content), contain multi-turn conversation history, and include both user and assistant roles.
The key difference from tools: prompts are human-initiated workflows. A user explicitly selects a prompt from the host UI (“Code Review”, “Summarise Document”, “Translate to French”). Tools are model-initiated – the LLM decides to call them based on context. Prompts are the programmatic equivalent of slash commands.
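On the wire, this is an ordinary JSON-RPC exchange. A sketch of what a hypothetical client request and server response look like for the code_review prompt defined below (field names follow the MCP spec; the rendered text is abbreviated here):

```typescript
// prompts/get request a client sends when the user picks a prompt and fills in arguments
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'prompts/get',
  params: {
    name: 'code_review',
    arguments: { code: 'const x = 1;', language: 'javascript', focus: 'style' },
  },
};

// The server responds with fully rendered messages, ready to hand to the host's LLM
const response = {
  jsonrpc: '2.0',
  id: 1,
  result: {
    messages: [
      {
        role: 'user',
        content: { type: 'text', text: 'Please review the following javascript code...' },
      },
    ],
  },
};
```

The server does all the template rendering; the client never sees the template itself, only the finished messages.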
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';

const server = new McpServer({ name: 'dev-assistant', version: '1.0.0' });

// Simple prompt with arguments
server.prompt(
  'code_review',
  'Review code for quality, security, and best practices',
  {
    code: z.string().describe('The code to review'),
    language: z.string().describe('Programming language (e.g. javascript, python, rust)'),
    focus: z
      .enum(['security', 'performance', 'style', 'all'])
      .default('all')
      .describe('What aspect to focus the review on'),
  },
  async ({ code, language, focus }) => ({
    messages: [
      {
        role: 'user',
        content: {
          type: 'text',
          text: `Please review the following ${language} code with a focus on ${focus}:\n\n\`\`\`${language}\n${code}\n\`\`\`\n\nProvide specific, actionable feedback with examples.`,
        },
      },
    ],
  })
);
“Prompts enable servers to define reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a way to standardize and share common LLM interactions.” – MCP Documentation, Prompts
Prompts with Resource Embedding
Prompts can embed resources directly into messages. An embedded resource content block carries the resource's URI together with its contents (a mimeType plus text or blob), which the server fills in when prompts/get is called; the client then injects that content into the conversation it sends to the LLM.
server.prompt(
  'analyse_file',
  'Analyse the contents of a file',
  { file_uri: z.string().describe('The URI of the file to analyse') },
  async ({ file_uri }) => ({
    messages: [
      // Each prompt message carries a single content block
      {
        role: 'user',
        content: {
          type: 'text',
          text: 'Please analyse the following file and provide a summary of its contents, structure, and any notable patterns:',
        },
      },
      {
        role: 'user',
        content: {
          type: 'resource',
          // Embedded resources include their contents; readResourceText is a
          // hypothetical helper that loads the file behind the URI
          resource: {
            uri: file_uri,
            mimeType: 'text/plain',
            text: await readResourceText(file_uri),
          },
        },
      },
    ],
  })
);
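As a side note on how hosts consume these messages: before calling its LLM, a host has to flatten each content block into plain chat text. A minimal sketch of that step, using a simplified illustrative message shape (renderMessages is a hypothetical helper, not an SDK API):

```typescript
// Simplified shapes for the two content block kinds used in prompt messages
type PromptContent =
  | { type: 'text'; text: string }
  | { type: 'resource'; resource: { uri: string; mimeType?: string; text?: string } };

interface PromptMessage {
  role: 'user' | 'assistant';
  content: PromptContent;
}

// Flatten prompt messages into plain strings for an LLM chat API:
// text blocks pass through, embedded resources contribute their contents
function renderMessages(messages: PromptMessage[]): { role: string; text: string }[] {
  return messages.map(({ role, content }) => ({
    role,
    text:
      content.type === 'text'
        ? content.text
        : content.resource.text ?? `[resource: ${content.resource.uri}]`,
  }));
}

const rendered = renderMessages([
  { role: 'user', content: { type: 'text', text: 'Please analyse this file:' } },
  { role: 'user', content: { type: 'resource', resource: { uri: 'file:///notes.md', text: '# Notes' } } },
]);
```

The URI fallback in the last line is defensive: if a resource block arrives without contents, the host still produces something sensible rather than crashing.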
// Multi-turn prompt with context
server.prompt(
  'debug_error',
  'Debug an error with context',
  {
    error_message: z.string(),
    stack_trace: z.string().optional(),
    context: z.string().optional().describe('Additional context about what you were doing'),
  },
  async ({ error_message, stack_trace, context }) => ({
    messages: [
      {
        role: 'user',
        content: { type: 'text', text: 'I am getting the following error:' },
      },
      {
        role: 'user',
        content: {
          type: 'text',
          text: `Error: ${error_message}${stack_trace ? `\n\nStack trace:\n${stack_trace}` : ''}${context ? `\n\nContext: ${context}` : ''}`,
        },
      },
      {
        role: 'assistant',
        content: { type: 'text', text: 'I can help debug this. Let me analyse the error...' },
      },
      {
        role: 'user',
        content: { type: 'text', text: 'What is causing this error and how do I fix it?' },
      },
    ],
  })
);
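The optional-argument handling in that template is easy to get wrong, so it can help to pull the string assembly out into a pure function you can test in isolation. A hypothetical refactor of the same logic (not part of the SDK):

```typescript
// Build the error-description text exactly as the debug_error handler does,
// appending the optional stack trace and context only when they are provided
function buildErrorText(errorMessage: string, stackTrace?: string, context?: string): string {
  return (
    `Error: ${errorMessage}` +
    (stackTrace ? `\n\nStack trace:\n${stackTrace}` : '') +
    (context ? `\n\nContext: ${context}` : '')
  );
}

const withAll = buildErrorText('ENOENT', 'at fs.open', 'running the build');
const minimal = buildErrorText('ENOENT');
```

The handler then becomes a thin wrapper that drops `buildErrorText(...)` into the second message.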

Failure Modes with Prompts
Case 1: Putting LLM Logic Inside the Prompt Handler
A prompt handler should assemble and return messages. It should not call an LLM. Calling an LLM inside a prompt handler breaks the separation between prompt construction (server’s job) and prompt execution (host’s job). It also makes your server non-deterministic and slow.
// WRONG: Calling an LLM inside the prompt handler
server.prompt('summarise', '...', { text: z.string() }, async ({ text }) => {
  const openai = new OpenAI();
  const summary = await openai.chat.completions.create({ ... }); // WRONG
  return { messages: [{ role: 'user', content: { type: 'text', text: summary } }] };
});

// CORRECT: Return the prompt; let the host's LLM execute it
server.prompt('summarise', '...', { text: z.string() }, async ({ text }) => ({
  messages: [
    {
      role: 'user',
      content: { type: 'text', text: `Please summarise the following text in 3 bullet points:\n\n${text}` },
    },
  ],
}));
Case 2: Hardcoding Content That Should Be a Resource Reference
If your prompt inlines large amounts of data (a whole document, a database dump), that data is frozen at the moment you wrote the handler and goes stale when the underlying source changes. Reference a resource instead, so fresh content is pulled in each time the prompt is requested.
// BAD: Hardcoded data goes stale
server.prompt('analyse_policy', '...', {}, async () => ({
  messages: [{ role: 'user', content: { type: 'text', text: ENTIRE_POLICY_TEXT_INLINED } }],
}));

// GOOD: Resource reference - contents are loaded fresh on every prompts/get
server.prompt('analyse_policy', '...', {}, async () => ({
  messages: [
    {
      role: 'user',
      content: { type: 'text', text: 'Please analyse our current company policy for compliance issues:' },
    },
    {
      role: 'user',
      content: {
        type: 'resource',
        // readResourceText is a hypothetical helper that fetches the current document
        resource: {
          uri: 'docs://company/policy-current',
          mimeType: 'text/markdown',
          text: await readResourceText('docs://company/policy-current'),
        },
      },
    },
  ],
}));
What to Check Right Now
- Identify your power workflows – what are the 3-5 most common things your users ask the AI to do? Each one is a prompt candidate.
- Test prompts in the Inspector – the Inspector shows prompts in a dedicated tab. Fill in arguments and render the messages to verify the output before integrating with an LLM.
- Use resource references for dynamic content – never inline large or frequently-changing data in prompt text. Reference it by URI.
- Notify on changes – if your prompts change (updated templates, new prompts added), send notifications/prompts/list_changed so clients can refresh their prompt catalogues.
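That notification is deliberately tiny: a method name with no params and no id, since JSON-RPC notifications expect no response. A sketch of the message a server emits:

```typescript
// JSON-RPC notification a server sends after its prompt list changes;
// clients react by re-fetching prompts/list
const notification = {
  jsonrpc: '2.0',
  method: 'notifications/prompts/list_changed',
};
```

With the TypeScript SDK you normally do not build this by hand; assuming a recent SDK version, the underlying Server exposes a sendPromptListChanged() helper that emits it for you.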
nJoy 😉
