Core Concepts

Understanding the key concepts behind Converra's AI optimization platform.

Prompts

A prompt is the system instruction that guides your AI's behavior. In Converra, prompts are:

  • Versioned - Every change is tracked
  • Optimizable - Can be improved through automated testing
  • Measurable - Performance metrics are collected from real conversations
```typescript
// Example prompt structure
const prompt = {
  name: "Customer Support Agent",
  content: "You are a helpful customer support agent...",
  llmModel: "gpt-4o",
  tags: ["support", "production"],
};
```

Agent Systems

An agent system is a set of prompts that work together as a multi-step flow (for example: an entry/router prompt handing off to specialist prompts).

Converra can auto-discover agent systems from imported traces and show (see the sketch after this list):

  • the entry prompt
  • the most common paths (prompt sequences) and their frequencies
  • the weakest link (lowest-performing prompt in the system)
  • a diagnostic, weighted “system score”
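
As a rough mental model, a discovered system could be summarized in a record like the sketch below. The interface and field names are illustrative assumptions, not Converra's actual API:

```typescript
// Hypothetical shape of a discovered agent system (illustrative only).
interface DiscoveredAgentSystem {
  entryPromptId: string; // the entry/router prompt
  paths: Array<{
    promptIds: string[]; // a prompt sequence observed in traces
    frequency: number;   // share of runs that took this path (0-1)
  }>;
  weakestLink: string;   // id of the lowest-performing prompt in the system
  systemScore: number;   // diagnostic score, weighted by path frequency
}
```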

Flow constraints (what you should expect)

For reliable, bounded simulation, Converra models discovered agent systems with a constrained flow:

  • Branching between steps is supported (based on what we observe in traces).
  • Each run records the path taken so comparisons are apples-to-apples.
  • Some patterns (like unbounded loops/retries or complex parallelism) may not be supported in early versions; in those cases Converra falls back to individual prompt optimization.

These constraints apply to Converra’s simulation model, not your production code.
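
Because every run records its path, results can be grouped by path before variants are compared. Here is a minimal sketch of that idea; the types and the grouping helper are invented for illustration:

```typescript
// Hypothetical per-run record (field names are illustrative).
interface SimulationRun {
  runId: string;
  path: string[]; // ordered prompt ids actually traversed in this run
  score: number;  // evaluation score for the run
}

// Compare variants only across runs that took the same path ("apples-to-apples").
function groupByPath(runs: SimulationRun[]): Map<string, SimulationRun[]> {
  const groups = new Map<string, SimulationRun[]>();
  for (const run of runs) {
    const key = run.path.join(" -> ");
    const bucket = groups.get(key) ?? [];
    bucket.push(run);
    groups.set(key, bucket);
  }
  return groups;
}
```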

Optimization

Optimization is the process of improving your prompts through simulation testing. It starts from wherever your prompts already live and runs as a four-step loop (sketched after this list):

  1. Import - Pull prompts from LangSmith or your API, or paste them in manually
  2. Analyze & Generate - AI creates alternative versions of your prompt
  3. Simulate & Evaluate - Each variant is tested against diverse personas
  4. Select & Deploy - The best-performing variant goes back to production
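
Put together, the loop might look like the sketch below. Every function here is a hypothetical stand-in (declared but not implemented), not Converra's real API:

```typescript
// All names below are hypothetical stand-ins for illustration, not Converra's API.
type Prompt = { id: string; content: string };

declare function importPrompt(id: string): Promise<Prompt>;           // 1. Import
declare function generateVariants(p: Prompt): Promise<Prompt[]>;      // 2. Analyze & Generate
declare function simulateAgainstPersonas(p: Prompt): Promise<number>; // 3. Simulate & Evaluate
declare function deploy(p: Prompt): Promise<void>;                    // 4. Select & Deploy

async function optimize(id: string): Promise<void> {
  const original = await importPrompt(id);
  const candidates = [original, ...(await generateVariants(original))];
  const scored = await Promise.all(
    candidates.map(async (p) => ({ p, score: await simulateAgainstPersonas(p) })),
  );
  scored.sort((a, b) => b.score - a.score);
  await deploy(scored[0].p); // best-performing variant goes back to production
}
```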

Optimization Modes

| Mode        | Use Case                                      |
| ----------- | --------------------------------------------- |
| Exploratory | Quick iteration, finding improvements fast    |
| Validation  | Statistical rigor, production-ready decisions |
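
If you drive optimization programmatically, choosing a mode might look like this snippet; the option names are assumptions for illustration:

```typescript
// Hypothetical run configuration; names are illustrative, not Converra's API.
type OptimizationMode = "exploratory" | "validation";

const config: { promptId: string; mode: OptimizationMode } = {
  promptId: "prompt_123",
  mode: "exploratory", // fast iteration; switch to "validation" before shipping
};
```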

Conversations

A conversation is a logged interaction between a user and your AI (a sample payload is sketched after this list). Logging conversations enables:

  • Insights generation - Understanding what's working and what isn't
  • Performance tracking - Measuring prompt effectiveness over time
  • Optimization fuel - Real data to guide improvements
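
A logged conversation might carry a payload shaped like this minimal sketch; the field names are assumptions rather than Converra's actual schema:

```typescript
// Hypothetical logged-conversation payload (field names are illustrative).
const conversation = {
  promptId: "prompt_123", // which prompt version handled the conversation
  messages: [
    { role: "user", content: "My order never arrived." },
    { role: "assistant", content: "I'm sorry to hear that. Let me look into it." },
  ],
  metadata: { channel: "web", durationMs: 48_000 },
};
```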

Personas

Personas are simulated users that test your prompts:

  • Frustrated Customer - Tests patience and de-escalation
  • Enterprise Buyer - Tests technical depth
  • First-time User - Tests clarity and onboarding
  • Power User - Tests efficiency

You can create custom personas to match your specific user base.
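
A custom persona can usually be captured in a handful of fields. The shape below is an illustrative assumption, not Converra's actual schema:

```typescript
// Hypothetical custom persona definition (illustrative only).
const persona = {
  name: "Skeptical CFO",
  description: "Budget-conscious executive who questions every claim",
  goals: ["Get a concrete ROI estimate", "Understand pricing tiers"],
  temperament: "terse, numbers-driven, quick to disengage",
};
```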

Variants

A variant is an alternative version of your prompt created during optimization:

  • Variants compete against your original prompt
  • The winner can be applied with one click
  • Previous versions are always preserved
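
One way to picture a variant is as a record that points back at its parent prompt. The fields below are illustrative assumptions:

```typescript
// Hypothetical variant record (fields are illustrative).
interface Variant {
  id: string;
  parentPromptId: string; // the original prompt it competes against
  content: string;        // the alternative system instruction
  score?: number;         // filled in after simulation
  applied: boolean;       // true once promoted; prior versions stay preserved
}
```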

Insights

Insights are AI-generated analyses of your prompt's performance (a sample report shape is sketched after this list):

  • Task completion rates
  • Sentiment analysis
  • Common topics and issues
  • Improvement recommendations
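
Mapped onto a data shape, an insights report might look like this sketch, with fields mirroring the list above (names are assumptions):

```typescript
// Hypothetical insights report shape (field names are illustrative).
interface InsightsReport {
  taskCompletionRate: number; // e.g. 0.87
  sentiment: { positive: number; neutral: number; negative: number };
  topTopics: string[];        // common topics and issues
  recommendations: string[];  // suggested prompt improvements
}
```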

Next Steps