Orchestra Documentation

Coordinate multiple LLMs like a symphony conductor. Build consensus, run debates, and orchestrate AI models at scale.

🎯 Why Orchestra?

In the rapidly evolving world of AI, relying on a single LLM is like listening to a solo performance when you could have a full symphony. Orchestra brings together the best of all models, creating a powerful orchestration platform that coordinates multiple language models to work together harmoniously.

🤝 Consensus Building

Get validated answers through multi-model agreement. Cross-checking responses catches errors and hallucinations that any single model can miss.

🎭 Structured Debates

Let models debate complex topics through multiple rounds for nuanced conclusions.

🔌 Provider Agnostic

Support for unlimited LLM providers with hot-swappable configurations.

⚡ High Performance

Parallel processing, intelligent caching, and optimized token usage.

🚀 Quick Start

Get up and running with Orchestra in under 5 minutes. Follow these simple steps to start orchestrating multiple LLMs.

Installation

Install Orchestra using your preferred package manager:

# Using npm
npm install @orchestra-llm/core

# Using yarn
yarn add @orchestra-llm/core

# Using pnpm
pnpm add @orchestra-llm/core

Basic Example

Here's a simple example to get you started with Orchestra:

import { Orchestra } from '@orchestra-llm/core'

// Initialize Orchestra with your providers
const orchestra = new Orchestra({
  providers: {
    openai: { 
      apiKey: process.env.OPENAI_API_KEY 
    },
    anthropic: { 
      apiKey: process.env.ANTHROPIC_API_KEY 
    },
    google: { 
      apiKey: process.env.GOOGLE_API_KEY 
    }
  }
})

// Get consensus from multiple models
const consensus = await orchestra.consensus(
  'What is the best approach for handling user authentication?'
)

console.log('Result:', consensus.result)
console.log('Confidence:', consensus.confidence)
console.log('Agreement:', consensus.agreement)

🤝 Consensus Building

Consensus is Orchestra's fundamental mechanism for getting validated, reliable answers from multiple models. By combining responses from different LLMs, Orchestra significantly reduces errors and hallucinations.

How It Works

Consensus Modes

// Democratic - Equal weight for all models
const democratic = await orchestra.consensus(prompt, {
  mode: 'democratic'
})

// Weighted - Votes weighted by per-provider expertise
const weighted = await orchestra.consensus(prompt, {
  mode: 'weighted',
  weights: {
    openai: 2,
    anthropic: 1.5,
    google: 1
  }
})

// Hierarchical - Tiered decision making: higher tiers decide first
const hierarchical = await orchestra.consensus(prompt, {
  mode: 'hierarchical',
  hierarchy: {
    tier1: ['openai'],
    tier2: ['anthropic'],
    tier3: ['google']
  }
})

🎭 Structured Debates

Debates allow models to challenge each other iteratively, leading to more nuanced and well-reasoned conclusions. Models can see each other's responses and revise their positions.

const debate = await orchestra.debate(
  'Should startups use microservices or monolithic architecture?',
  {
    maxRounds: 3,
    threshold: 0.8,
    style: 'adversarial'
  }
)

// Examine the debate process
debate.rounds.forEach((round, index) => {
  console.log(`Round ${index + 1}:`)
  round.arguments.forEach(arg => {
    console.log(`${arg.provider}: ${arg.content}`)
  })
})

🔌 Provider Configuration

Orchestra supports a growing ecosystem of LLM providers. Each provider can be configured with specific parameters and capabilities.

Supported Providers
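Each provider entry accepts provider-specific options alongside its API key. The option names below (`model`, `temperature`, `maxTokens`) and the model identifiers are illustrative examples, not an authoritative list; check each provider's own reference for the options it actually supports.

```javascript
// Illustrative per-provider configuration. Field names beyond apiKey
// are examples and may differ from the options a given provider accepts.
const providers = {
  openai: {
    apiKey: process.env.OPENAI_API_KEY,
    model: 'gpt-4o',
    temperature: 0.2
  },
  anthropic: {
    apiKey: process.env.ANTHROPIC_API_KEY,
    model: 'claude-sonnet-4',
    maxTokens: 1024
  },
  google: {
    apiKey: process.env.GOOGLE_API_KEY,
    model: 'gemini-1.5-pro'
  }
}

console.log('Configured providers:', Object.keys(providers))
```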

🔧 Advanced Features

Orchestra includes powerful features for production use:

Automatic Failover

const orchestra = new Orchestra({
  fallback: {
    enabled: true,
    strategy: 'priority',
    maxAttempts: 3
  }
})

Response Streaming

const stream = await orchestra.stream(prompt)
for await (const chunk of stream) {
  process.stdout.write(chunk.content)
}

Custom Providers

import { BaseProvider } from '@orchestra-llm/core'

class CustomProvider extends BaseProvider {
  async complete(prompt, options) {
    // Call your model's API here (callMyModel is a placeholder for
    // your own implementation) and map its output to the response
    // shape Orchestra expects.
    const text = await callMyModel(prompt, options)
    return { content: text }
  }
}

orchestra.addProvider('custom', new CustomProvider())

📊 Performance & Monitoring

Track and optimize your orchestration performance with built-in metrics:

// Listen to events
orchestra.on('consensus:complete', (result) => {
  console.log('Consensus reached:', result)
})

// Get statistics
const stats = orchestra.getStats()
console.log('Total requests:', stats.totalRequests)
console.log('Average latency:', stats.averageLatency)
console.log('Provider usage:', stats.providerUsage)

🤝 Contributing & Feature Requests

Orchestra is open source and we welcome contributions from the community! Whether you want to request a feature, report a bug, or contribute code, we'd love to hear from you.

🚀 Request a Feature

Have an idea to make Orchestra better? Submit a feature request and let us know what you need.

Submit Feature Request →

🐛 Report a Bug

Found something that isn't working? Help us fix it by reporting the issue with details.

Report Bug →

🔌 Request Provider

Need support for a specific LLM provider? Let us know which provider you'd like to see added.

Request Provider →

💻 Contribute Code

Want to contribute directly? Check out our open issues and submit a pull request.

Contribution Guide →

How to Submit a Good Feature Request

A clear request is faster to act on. Include:

- The problem you're trying to solve, not just the proposed API
- A minimal code sketch showing how you'd like to use the feature
- Any workarounds you've tried and why they fall short

🚢 Ready to Ship?

Orchestra is production-ready with comprehensive features for building reliable AI applications. Join thousands of developers who are already orchestrating their LLMs.