Available Tools
All Converra MCP tools with examples and common use cases.
Prompts
list_prompts
List all your prompts.
You say: "Show me my prompts"
Response:
Found 3 prompts:
- Customer Support (gpt-4o) - Active
- Sales Assistant (gpt-4o) - Active
- Code Review (claude-3-5-sonnet) - Draft
create_prompt
Create a new prompt. Requires: name, content, llmModel.
You say: "Create a customer support prompt for gpt-4o"
Example with full content:
Create a prompt with:
- name: "Customer Support Agent"
- llmModel: "gpt-4o"
- content: "You are a helpful customer support agent. Be friendly,
concise, and always try to resolve issues on first contact."
- description: "Main support chatbot"
- tags: ["support", "production"]
Supported models: gpt-4o, gpt-4, gpt-3.5-turbo, claude-3-5-sonnet, claude-3-opus, o1, o3, gemini-1.5-pro
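Under the hood, each of these is an MCP tool call. A minimal sketch with the official TypeScript SDK, assuming a stdio-launched Converra server (the `converra-mcp` launch command is a placeholder; use the command from your own MCP configuration):

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Placeholder launch command -- substitute your actual Converra MCP server command.
const transport = new StdioClientTransport({ command: "npx", args: ["-y", "converra-mcp"] });
const client = new Client({ name: "converra-example", version: "1.0.0" });
await client.connect(transport);

// Same arguments as the example above.
const result = await client.callTool({
  name: "create_prompt",
  arguments: {
    name: "Customer Support Agent",
    llmModel: "gpt-4o",
    content:
      "You are a helpful customer support agent. Be friendly, " +
      "concise, and always try to resolve issues on first contact.",
    description: "Main support chatbot",
    tags: ["support", "production"],
  },
});
console.log(result.content);
```

Later sketches on this page reuse this connected `client`.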
update_prompt
Update an existing prompt.
You say: "Update my support prompt to be more friendly"
Example:
Update prompt abc123:
- content: "You are an exceptionally friendly customer support agent..."
get_prompt_status
Get details about a specific prompt including performance metrics.
You say: "Show details for my support prompt"
Response:
Prompt: Customer Support Agent
Model: gpt-4o
Status: Active
Conversations: 1,247
Last optimized: 3 days ago
Performance: 87% task completion
Optimization
trigger_optimization
Start an optimization to improve your prompt.
You say: "Optimize my support prompt with 3 variants"
Full example:
Optimize prompt abc123:
- mode: "exploratory" (or "validation" for statistical rigor)
- variantCount: 3
- intent:
  - targetImprovements: ["clarity", "task completion"]
  - hypothesis: "Adding examples will help users understand better"
What happens:
- Converra generates variant prompts
- Simulates conversations with AI personas
- Evaluates which variant performs best
- Reports results with improvement percentages
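The same request as a tool call, reusing the connected `client` from the create_prompt sketch (the `promptId` argument name is an assumption based on the create_conversation example below; confirm it against the tool's schema):

```ts
// Assumes the connected `client` from the create_prompt example.
const optimization = await client.callTool({
  name: "trigger_optimization",
  arguments: {
    promptId: "abc123",
    mode: "exploratory", // or "validation" for statistical rigor
    variantCount: 3,
    intent: {
      targetImprovements: ["clarity", "task completion"],
      hypothesis: "Adding examples will help users understand better",
    },
  },
});
```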
get_optimization_details
Check progress and results of an optimization.
You say: "How's my optimization going?"
Response (in progress):
Optimization abc123
Status: Running (Iteration 2/5)
Progress: Simulating conversations...
Variants: 3 being tested
Response (complete):
Optimization abc123
Status: Complete
Winner: Variant B
Improvement: +23% task completion, +15% clarity
Recommendation: Apply Variant B
list_optimizations
See recent optimization runs.
You say: "Show my recent optimizations"
Response:
Recent optimizations:
1. Customer Support - Completed 2h ago - Variant B won (+23%)
2. Sales Assistant - Running - Iteration 3/5
3. Code Review - Completed yesterday - No clear winner
get_variant_details
Compare variants from an optimization.
You say: "Show me the variants from my last optimization"
Response:
Variant A (Control):
- Task completion: 72%
- Clarity: 68%
Variant B (Winner):
- Task completion: 89% (+17%)
- Clarity: 83% (+15%)
- Key change: Added step-by-step instructions
Variant C:
- Task completion: 75% (+3%)
- Clarity: 71% (+3%)
apply_variant
Deploy a winning variant to your prompt.
You say: "Apply the winning variant"
Response:
Applied Variant B to "Customer Support Agent"
Previous version saved. You can revert anytime.
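A sketch of checking an optimization and then applying its winner, reusing the connected `client` from the create_prompt example; `optimizationId` and `variantId` are hypothetical argument names here, so check the schemas your server advertises before relying on them:

```ts
// Assumes the connected `client` from the create_prompt example.
// `optimizationId` and `variantId` are hypothetical argument names.
const details = await client.callTool({
  name: "get_optimization_details",
  arguments: { optimizationId: "abc123" },
});

// Once the status is Complete and a winner is reported:
await client.callTool({
  name: "apply_variant",
  arguments: { optimizationId: "abc123", variantId: "B" },
});
```

stop_optimization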
Stop a running optimization.
You say: "Stop the current optimization"
Insights
get_insights
Get performance insights for a prompt based on logged conversations.
You say: "How is my support prompt performing?"
Response:
Insights for Customer Support (last 30 days):
- Task completion: 87%
- Avg sentiment: Positive
- Common topics: order status, refunds, shipping
- Improvement opportunity: Users often confused about return policy
Conversations
list_conversations
List logged conversations for a prompt.
You say: "Show recent conversations for my support prompt"
get_conversation
Get details of a specific conversation including insights.
You say: "Show me conversation xyz789"
create_conversation
Log a conversation for analysis.
Example:
Log conversation:
- promptId: "abc123"
- content: "User: I need help with my order\nAI: Happy to help! What's your order number?"
- status: "completed"
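A sketch of the same log call through the MCP TypeScript SDK, reusing the connected `client` from the create_prompt example:

```ts
await client.callTool({
  name: "create_conversation",
  arguments: {
    promptId: "abc123",
    content: "User: I need help with my order\nAI: Happy to help! What's your order number?",
    status: "completed",
  },
});
```

Personas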
list_personas
List simulation personas for testing.
You say: "What personas are available for testing?"
Response:
Available personas:
- Frustrated Customer (impatient, had bad experiences)
- Enterprise Buyer (technical, detail-oriented)
- First-time User (needs guidance, asks basic questions)
- Power User (efficient, knows what they want)
create_persona
Create a custom persona for simulations.
Example:
Create persona:
- name: "Confused Senior"
- description: "An elderly user unfamiliar with technology,
needs patient explanations, may ask the same thing twice"
- tags: ["senior", "patience-test"]
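The same persona, sketched as a tool call with the connected `client` from the create_prompt example:

```ts
await client.callTool({
  name: "create_persona",
  arguments: {
    name: "Confused Senior",
    description:
      "An elderly user unfamiliar with technology, needs patient explanations, " +
      "may ask the same thing twice",
    tags: ["senior", "patience-test"],
  },
});
```

Simulation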
simulate_prompt
Test your prompt against personas without optimization.
You say: "Test my support prompt against 5 personas"
Response:
Simulation complete:
- 5 conversations generated
- Avg task completion: 78%
- Issues found: Struggled with technical users
- Recommendation: Add more technical details
analyze_prompt
Get structural analysis and improvement recommendations.
You say: "Analyze my support prompt for weaknesses"
Response:
Analysis of Customer Support:
Strengths:
- Clear role definition
- Good tone instructions
Weaknesses:
- No examples provided
- Missing edge case handling
- Could be more concise
Recommendations:
1. Add 2-3 example interactions
2. Add instructions for handling complaints
3. Remove redundant phrases
run_head_to_head
Compare two prompts directly.
You say: "Compare my old and new support prompts"
Account
get_account
Get account info and usage.
You say: "What's my Converra usage?"
get_settings
Get optimization settings.
update_settings
Update default settings.
Webhooks
list_webhooks
List configured webhooks.
create_webhook
Create a webhook for events.
Example:
Create webhook:
- url: "https://myapp.com/converra-webhook"
- events: ["optimization.completed", "prompt.updated"]
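As a sketch with the connected `client` from the create_prompt example:

```ts
await client.callTool({
  name: "create_webhook",
  arguments: {
    url: "https://myapp.com/converra-webhook",
    events: ["optimization.completed", "prompt.updated"],
  },
});
```

delete_webhook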
Remove a webhook.
Common Workflows
New User Setup
1. "Create a prompt for [your use case] using gpt-4o"
2. "Analyze this prompt for weaknesses"
3. "Run an optimization with 3 variants"
4. "Show me the results"
5. "Apply the winner"
Continuous Improvement
1. "How is my support prompt performing?"
2. "What are the common issues?"
3. "Optimize focusing on [specific issue]"
4. "Apply the improvement"
Testing Before Production
1. "Simulate my prompt against 10 personas"
2. "What issues were found?"
3. "Update the prompt to address [issue]"
4. "Simulate again to verify"
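This loop also scripts cleanly. A sketch reusing the connected `client` from the create_prompt example; `personaCount` is a hypothetical argument name, so verify it against the simulate_prompt schema:

```ts
// Assumes the connected `client` from the create_prompt example.
// `personaCount` is a hypothetical argument name -- check the tool schema.
const firstRun = await client.callTool({
  name: "simulate_prompt",
  arguments: { promptId: "abc123", personaCount: 10 },
});
console.log(firstRun.content); // review the issues found

// Revise the prompt to address them, then verify with a second run.
await client.callTool({
  name: "update_prompt",
  arguments: { promptId: "abc123", content: "...revised prompt content..." },
});
const secondRun = await client.callTool({
  name: "simulate_prompt",
  arguments: { promptId: "abc123", personaCount: 10 },
});
```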