# Context API

Request motivation-aware context before responding to users.

## The getContext() Call

The Context API is the core of AMP. Before your agent responds to a user request, call `getContext()` to fetch personalised recommendations.
```typescript
const context = await amp.getContext({
  userId: "user_123",
  task: "build a login page",
  complexity: "medium",
  metadata: {
    source: "chat",
    timestamp: Date.now()
  }
});
```

## Request Parameters
### Required Parameters

| Parameter | Type | Description |
|---|---|---|
| userId | string | Unique identifier for the user |
| task | string | Description of what the user wants to accomplish |
### Optional Parameters

| Parameter | Type | Description |
|---|---|---|
| complexity | string | `"low"`, `"medium"`, or `"high"`; helps AMP tailor recommendations |
| taskType | string | Category of task (e.g., "coding", "writing", "debugging") |
| metadata | object | Additional context (source, timestamp, session info) |
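Taken together, the request shape can be sketched as a TypeScript interface. The name `GetContextRequest` is illustrative, not an official SDK export; the fields follow the tables above:

```typescript
// Illustrative shape of a getContext() request.
// Field names follow the parameter tables; the interface name is hypothetical.
interface GetContextRequest {
  userId: string;                          // required: unique user identifier
  task: string;                            // required: what the user wants to accomplish
  complexity?: "low" | "medium" | "high";  // optional: helps AMP tailor recommendations
  taskType?: string;                       // optional: e.g. "coding", "writing", "debugging"
  metadata?: Record<string, unknown>;      // optional: source, timestamp, session info
}

const request: GetContextRequest = {
  userId: "user_123",
  task: "build a login page",
  complexity: "medium"
};
```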
## Response Structure

The `getContext()` call returns a rich context object:
```typescript
{
  // Unique ID for this request (use in reportOutcome)
  requestId: "req_abc123",

  // Recommended framing approach
  suggestedFraming: "micro_task" | "achievement" | "learning" | "challenge",

  // Communication style to use
  communicationStyle: "brief_directive" | "detailed_explanatory" | "conversational" | "technical",

  // How to handle complexity
  complexity: "full_solution" | "break_into_steps" | "hints_only" | "high_level",

  // Level of encouragement
  encouragement: "high" | "moderate" | "minimal" | "none",

  // Confidence in these recommendations (0-1)
  confidence: 0.87,

  // Why these recommendations were made
  rationale: "User has 85% completion rate with step-by-step guidance",

  // Additional context
  metadata: {
    profilePhase: "optimised", // cold_start | learning | optimised
    interactionCount: 47,
    explorationMode: false
  }
}
```

## Using Context in Your Agent
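For type-safe handling in the approaches below, the context object shown above can be modelled as a TypeScript type. The name `AmpContext` and the exact field types are assumptions drawn from the documented response fields:

```typescript
// Hypothetical type mirroring the documented response structure.
type AmpContext = {
  requestId: string;
  suggestedFraming: "micro_task" | "achievement" | "learning" | "challenge";
  communicationStyle: "brief_directive" | "detailed_explanatory" | "conversational" | "technical";
  complexity: "full_solution" | "break_into_steps" | "hints_only" | "high_level";
  encouragement: "high" | "moderate" | "minimal" | "none";
  confidence: number;   // 0-1
  rationale: string;
  metadata: {
    profilePhase: "cold_start" | "learning" | "optimised";
    interactionCount: number;
    explorationMode: boolean;
  };
};

// Sample value matching the documented example
const exampleContext: AmpContext = {
  requestId: "req_abc123",
  suggestedFraming: "micro_task",
  communicationStyle: "brief_directive",
  complexity: "break_into_steps",
  encouragement: "high",
  confidence: 0.87,
  rationale: "User has 85% completion rate with step-by-step guidance",
  metadata: { profilePhase: "optimised", interactionCount: 47, explorationMode: false }
};
```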
### Approach 1: System Prompt Adaptation

The simplest approach is to inject context into your system prompt:
```typescript
const context = await amp.getContext({
  userId: user.id,
  task: userQuery
});

const systemPrompt = `You are a helpful coding assistant.

Communication Style: Use ${context.communicationStyle} communication.
Task Framing: Frame this as a ${context.suggestedFraming}.
Complexity: Provide ${context.complexity} level of detail.
Encouragement: ${context.encouragement} level of positive reinforcement.

Adapt your response accordingly.`;

const response = await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: userQuery }
  ]
});
```

### Approach 2: Programmatic Adaptation
For more control, handle context programmatically:
```typescript
const context = await amp.getContext({
  userId: user.id,
  task: userQuery
});

// Adapt based on complexity preference
let response: string;
if (context.complexity === "break_into_steps") {
  // Generate step-by-step guide
  const steps = await generateSteps(userQuery);
  response = formatAsSteps(steps, context.communicationStyle);
} else if (context.complexity === "full_solution") {
  // Provide complete solution
  response = await generateCompleteSolution(userQuery);
} else if (context.complexity === "hints_only") {
  // Give hints without direct solution
  response = await generateHints(userQuery);
} else {
  // "high_level": summarise the approach without implementation detail
  // (generateHighLevelOverview is a placeholder for your own helper)
  response = await generateHighLevelOverview(userQuery);
}

// Add encouragement if needed
if (context.encouragement === "high") {
  response = addEncouragement(response);
}
```

### Approach 3: Hybrid
Combine both approaches for maximum flexibility:
```typescript
const context = await amp.getContext({
  userId: user.id,
  task: userQuery
});

// Build adaptive system prompt
const systemPrompt = buildSystemPrompt(context);

// Choose response strategy
const strategy = selectStrategy(context);

// Generate response
const rawResponse = await llm.generate({
  systemPrompt,
  userQuery,
  temperature: strategy.temperature
});

// Post-process based on context
const finalResponse = postProcess(rawResponse, context);
```

## Context-Aware Templates
Create templates for different context combinations:
```typescript
const templates = {
  // Brief + Steps
  "brief_directive__break_into_steps": (task) => `
1. [First step]
2. [Second step]
3. [Final step]
`,

  // Detailed + Full Solution
  "detailed_explanatory__full_solution": (task) => `
Here's a complete solution with explanation:

[Full code with detailed comments]

How it works:
[Step-by-step explanation]
`,

  // Conversational + Hints
  "conversational__hints_only": (task) => `
Great question! Here are some hints to guide you:

🤔 Consider [hint 1]
💡 Think about [hint 2]
✨ Remember [hint 3]
`
};

// Select template based on context
const templateKey = `${context.communicationStyle}__${context.complexity}`;
const template = templates[templateKey];
// Not every combination needs its own template; fall back when none matches
// (defaultTemplate is a placeholder for your own generic fallback)
const response = template ? template(userQuery) : defaultTemplate(userQuery);
```

## Handling Low Confidence
When AMP's confidence is low, you might want to hedge your bets:
```typescript
const context = await amp.getContext({
  userId: user.id,
  task: userQuery
});

let response: string;
if (context.confidence < 0.5) {
  // Low confidence - try multiple approaches
  console.log("Low confidence, using hybrid approach");

  // Provide both steps AND full solution
  response = `
Here's a step-by-step approach:
1. [Step 1]
2. [Step 2]

Or if you prefer, here's the complete solution:
[Full code]
`;
} else {
  // High confidence - use recommended approach
  response = await generateResponse(context);
}
```

## Task Type Specialisation
Different task types may need different handling:
```typescript
const context = await amp.getContext({
  userId: user.id,
  task: userQuery,
  taskType: "debugging" // vs "coding", "learning", etc.
});

// AMP may return different recommendations for debugging vs coding,
// even for the same user
if (context.metadata.taskType === "debugging") {
  // Debugging often benefits from systematic approaches
  response = generateSystematicDebugGuide(userQuery);
} else if (context.metadata.taskType === "learning") {
  // Learning tasks benefit from exploration
  response = generateExploratoryResponse(userQuery);
}
```

## Caching & Performance
For high-traffic applications, consider caching strategies:
```typescript
// Cache profile lookups for 5 minutes
// (TTLCache here stands for any short-lived in-memory cache utility)
const contextCache = new TTLCache({ ttl: 300000 });

async function getCachedContext(userId: string, task: string) {
  const cacheKey = `${userId}:${hashTask(task)}`;

  let context = contextCache.get(cacheKey);
  if (!context) {
    context = await amp.getContext({ userId, task });
    contextCache.set(cacheKey, context);
  }
  return context;
}
```

⚠️ Caching Warning: Only cache for short durations. Profiles update frequently, and stale recommendations reduce effectiveness.
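The `TTLCache` above is assumed to come from a utility library; if you don't have one to hand, a minimal Map-based sketch with lazy expiry is enough for the pattern:

```typescript
// Minimal TTL cache sketch: entries expire after `ttl` milliseconds.
// Expired entries are evicted lazily, on the next get().
class TTLCache<V = unknown> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private opts: { ttl: number }) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // stale: drop and report a miss
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.opts.ttl });
  }
}
```

For production traffic, an off-the-shelf cache with a size bound (e.g. an LRU) is safer than this unbounded sketch.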
## Error Handling

Always handle potential errors gracefully:
```typescript
let context;
try {
  context = await amp.getContext({
    userId: user.id,
    task: userQuery
  });
  // Use context...
} catch (error) {
  if (error.code === 'RATE_LIMIT_EXCEEDED') {
    // Fall back to default behaviour
    console.warn('AMP rate limit hit, using defaults');
    context = getDefaultContext();
  } else if (error.code === 'PROFILE_NOT_FOUND') {
    // New user - AMP will create the profile, so this shouldn't
    // happen, but handle it gracefully anyway
    context = getDefaultContext();
  } else {
    // Unknown error - log and use defaults
    console.error('AMP error:', error);
    context = getDefaultContext();
  }
}
```

## Best Practices
- Always call `getContext()` before generating responses
- Use the full context object, not just one field
- Respect confidence scores: hedge when confidence is low
- Provide task descriptions that help AMP learn patterns
- Don't override recommendations without good reason

💡 Pro Tip: The `rationale` field explains why AMP made its recommendations. Use this for debugging and understanding user patterns.
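The error-handling section above falls back to `getDefaultContext()`, which is yours to define. One reasonable neutral default, with field values chosen as assumptions rather than SDK output:

```typescript
// Hypothetical neutral fallback used when getContext() fails.
// Values are deliberately middle-of-the-road; tune them to your product.
function getDefaultContext() {
  return {
    requestId: null,                  // no AMP request to report against
    suggestedFraming: "micro_task",
    communicationStyle: "conversational",
    complexity: "full_solution",
    encouragement: "moderate",
    confidence: 0,                    // signals "no personalisation applied"
    rationale: "AMP unavailable; using defaults",
    metadata: { profilePhase: "cold_start", interactionCount: 0, explorationMode: false }
  };
}
```

Downstream code can then treat a `confidence` of 0 as "behave generically" without a separate error flag.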