
Providers

ThinkLang is model-agnostic. It supports multiple LLM providers out of the box, and you can plug in custom providers of your own.

Supported Providers

| Provider | Package | Env Var | Default Model |
| --- | --- | --- | --- |
| Anthropic | `@anthropic-ai/sdk` (bundled) | `ANTHROPIC_API_KEY` | `claude-opus-4-6` |
| OpenAI | `openai` (optional peer dep) | `OPENAI_API_KEY` | `gpt-4o` |
| Google Gemini | `@google/generative-ai` (optional peer dep) | `GEMINI_API_KEY` | `gemini-2.0-flash` |
| Ollama | none (uses `fetch`) | `OLLAMA_BASE_URL` | `llama3` |
| Groq | none (uses `fetch`) | `GROQ_API_KEY` | `llama-3.3-70b-versatile` |
| DeepSeek | none (uses `fetch`) | `DEEPSEEK_API_KEY` | `deepseek-chat` |
| Mistral | none (uses `fetch`) | `MISTRAL_API_KEY` | `mistral-large-latest` |
| Together | none (uses `fetch`) | `TOGETHER_API_KEY` | `meta-llama/Llama-3.3-70B-Instruct-Turbo` |
| OpenRouter | none (uses `fetch`) | `OPENROUTER_API_KEY` | `anthropic/claude-sonnet-4` |

Anthropic's SDK is bundled with ThinkLang. For OpenAI or Google Gemini, install the corresponding SDK as a peer dependency:

```bash
npm install openai                    # for OpenAI
npm install @google/generative-ai     # for Google Gemini
```

Ollama, Groq, DeepSeek, Mistral, Together, and OpenRouter require no extra package --- they use fetch() directly with the OpenAI-compatible chat completions API.

Auto-Detection

ThinkLang automatically detects which provider to use. No init() call is needed if you have the right environment variable set.

The runtime checks environment variables in this order:

  1. ANTHROPIC_API_KEY --- selects Anthropic
  2. OPENAI_API_KEY --- selects OpenAI
  3. GEMINI_API_KEY --- selects Google Gemini
  4. OLLAMA_BASE_URL --- selects Ollama
  5. GROQ_API_KEY --- selects Groq
  6. DEEPSEEK_API_KEY --- selects DeepSeek
  7. MISTRAL_API_KEY --- selects Mistral
  8. TOGETHER_API_KEY --- selects Together
  9. OPENROUTER_API_KEY --- selects OpenRouter

The first match wins. If you have multiple API keys set, the one earliest in this list takes priority.
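The detection order above can be sketched as a simple ordered scan. This is an illustration of the documented behavior, not ThinkLang's actual source; the function name `detectProviderFromEnv` is hypothetical.

```typescript
// Ordered (envVar, provider) pairs, mirroring the priority list above.
const DETECTION_ORDER: Array<[string, string]> = [
  ["ANTHROPIC_API_KEY", "anthropic"],
  ["OPENAI_API_KEY", "openai"],
  ["GEMINI_API_KEY", "gemini"],
  ["OLLAMA_BASE_URL", "ollama"],
  ["GROQ_API_KEY", "groq"],
  ["DEEPSEEK_API_KEY", "deepseek"],
  ["MISTRAL_API_KEY", "mistral"],
  ["TOGETHER_API_KEY", "together"],
  ["OPENROUTER_API_KEY", "openrouter"],
];

// Return the first provider whose environment variable is set;
// fall back to the documented default (Anthropic) when none match.
function detectProviderFromEnv(env: Record<string, string | undefined>): string {
  for (const [envVar, provider] of DETECTION_ORDER) {
    if (env[envVar]) return provider;
  }
  return "anthropic";
}
```

For example, with both `OPENAI_API_KEY` and `GROQ_API_KEY` set, OpenAI wins because it appears earlier in the list.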

When you pass an API key directly (via the library API), the runtime also detects the provider from the key's prefix:

| Prefix | Provider |
| --- | --- |
| `sk-ant-` | Anthropic |
| `sk-` | OpenAI |
| `AI` | Google Gemini |

If no provider can be detected, Anthropic is used as the default.
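Prefix detection can be sketched as follows (again an illustration, not ThinkLang's internals). Note that the most specific prefix must be checked first: `sk-ant-` would otherwise also match the `sk-` rule for OpenAI.

```typescript
// Hypothetical helper mirroring the prefix table above.
function detectProviderFromKey(apiKey: string): string {
  if (apiKey.startsWith("sk-ant-")) return "anthropic"; // check before "sk-"
  if (apiKey.startsWith("sk-")) return "openai";
  if (apiKey.startsWith("AI")) return "gemini";
  return "anthropic"; // documented default when nothing matches
}
```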

Explicit Configuration

Use init() when you need to configure the provider, API key, or model explicitly:

```typescript
import { init, think } from "thinklang";

// Anthropic (default)
init({ apiKey: "sk-ant-...", model: "claude-sonnet-4-20250514" });

// OpenAI
init({ provider: "openai", apiKey: "sk-..." });

// Gemini
init({ provider: "gemini", apiKey: "AI...", model: "gemini-2.5-pro" });

// Ollama (no API key)
init({ provider: "ollama", baseUrl: "http://my-server:11434", model: "mistral" });

// Groq
init({ provider: "groq", apiKey: "gsk_..." });

// DeepSeek
init({ provider: "deepseek", apiKey: "sk-..." });

// Mistral
init({ provider: "mistral", apiKey: "..." });

// Together
init({ provider: "together", apiKey: "..." });

// OpenRouter
init({ provider: "openrouter", apiKey: "sk-or-..." });

// Custom ModelProvider instance
init({ provider: myCustomProvider });
```

If you don't call init(), the runtime auto-initializes from environment variables on first use:

```typescript
import { think } from "thinklang";

// Just set ANTHROPIC_API_KEY (or OPENAI_API_KEY, etc.) in your environment
const result = await think<string>({
  prompt: "This works with zero configuration",
  jsonSchema: { type: "string" },
});
```

OpenAI-Compatible Providers

Groq, DeepSeek, Mistral, Together, and OpenRouter all extend OpenAICompatibleProvider — a fetch-based base class that works with any OpenAI-compatible chat completions API. You can use this base class to connect to any OpenAI-compatible endpoint:

```typescript
import { OpenAICompatibleProvider, setProvider } from "thinklang";

const provider = new OpenAICompatibleProvider({
  name: "my-provider",
  baseUrl: "https://my-llm-api.example.com/v1",
  defaultModel: "my-model",
  apiKey: "my-key",
  // Optional: extra headers for authentication
  extraHeaders: { "X-Custom-Header": "value" },
  // Optional: set to false if the API doesn't support JSON Schema response format
  supportsJsonSchema: true,
});

setProvider(provider);
```

This is the easiest way to connect to LLM services like vLLM, LiteLLM, Anyscale, or any self-hosted OpenAI-compatible API.
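For context, a request to an OpenAI-compatible chat completions endpoint has a well-known JSON shape. The sketch below shows that shape only; ThinkLang's internal request construction may differ, and `buildChatRequest` is a hypothetical name.

```typescript
// Build the JSON body an OpenAI-compatible /chat/completions endpoint expects:
// `model`, a `messages` array, and (when the endpoint supports JSON Schema
// structured output) a `response_format` entry.
function buildChatRequest(opts: {
  model: string;
  systemPrompt: string;
  userMessage: string;
  jsonSchema?: object;
}) {
  return {
    model: opts.model,
    messages: [
      { role: "system", content: opts.systemPrompt },
      { role: "user", content: opts.userMessage },
    ],
    // Only attach response_format when a schema is provided.
    ...(opts.jsonSchema && {
      response_format: {
        type: "json_schema",
        json_schema: { name: "result", schema: opts.jsonSchema },
      },
    }),
  };
}
```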

Custom Providers

You can implement the ModelProvider interface to use any LLM backend:

```typescript
import {
  setProvider,
  think,
  type ModelProvider,
  type CompleteOptions,
  type CompleteResult,
} from "thinklang";

class MyProvider implements ModelProvider {
  async complete(options: CompleteOptions): Promise<CompleteResult> {
    // options.systemPrompt  — system prompt string
    // options.userMessage   — the user's prompt
    // options.jsonSchema    — JSON Schema for structured output (optional)
    // options.schemaName    — name for the schema (optional)
    // options.model         — model override (optional)
    // options.maxTokens     — token limit (optional)
    // options.tools         — tool definitions for agent mode (optional)
    // options.messages      — conversation history for agent mode (optional)

    const data = await myLLM.generate(options.userMessage, options.jsonSchema);

    return {
      data,
      usage: { inputTokens: 0, outputTokens: 0 },
      model: "my-model",
      // For agent tool calling support, also return:
      // toolCalls: [{ id, name, input }],
      // stopReason: "end_turn" | "tool_use" | "max_tokens",
    };
  }
}

setProvider(new MyProvider());

const result = await think<string>({
  prompt: "Hello from my custom provider",
  jsonSchema: { type: "string" },
});
```

Registry-Based Approach

Register a provider factory so it can be referenced by name:

```typescript
import { registerProvider, init } from "thinklang";

registerProvider("my-llm", (options) => {
  return new MyProvider(options.apiKey, options.model);
});

// Now you can use it by name
init({ provider: "my-llm", apiKey: "my-key" });
```

The factory receives a ProviderOptions object with apiKey, model, and baseUrl fields. Once registered, the provider name works everywhere --- including in init() calls.
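A name-keyed registry like this is typically a map of factories invoked lazily at `init()` time. The sketch below illustrates the pattern under that assumption; it is not ThinkLang's implementation, and the `register`/`create` names are hypothetical.

```typescript
// Options mirroring the documented ProviderOptions fields.
interface ProviderOptions {
  apiKey?: string;
  model?: string;
  baseUrl?: string;
}
type ProviderFactory = (options: ProviderOptions) => unknown;

// Factories are stored by name and only invoked when the name is used.
const registry = new Map<string, ProviderFactory>();

function register(name: string, factory: ProviderFactory): void {
  registry.set(name, factory);
}

function create(name: string, options: ProviderOptions): unknown {
  const factory = registry.get(name);
  if (!factory) throw new Error(`Unknown provider: ${name}`);
  return factory(options);
}
```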

Custom Pricing

ThinkLang's cost tracking includes built-in pricing for models from Anthropic, OpenAI, and Google. If you use a custom model or a model not in the built-in table, register pricing so costs are tracked accurately:

```typescript
import { registerPricing } from "thinklang";

// Pricing is per million tokens (USD)
registerPricing("my-custom-model", { input: 5, output: 20 });
```

This means the model costs $5 per million input tokens and $20 per million output tokens.
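The arithmetic behind per-million-token pricing is straightforward. As a worked example (the `estimateCost` helper is illustrative, not part of ThinkLang's API): a call using 10,000 input tokens and 2,000 output tokens against the pricing registered above costs 10,000/1,000,000 × $5 + 2,000/1,000,000 × $20 = $0.05 + $0.04 = $0.09.

```typescript
// Cost in USD for per-million-token pricing.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  pricing: { input: number; output: number },
): number {
  return (
    (inputTokens / 1_000_000) * pricing.input +
    (outputTokens / 1_000_000) * pricing.output
  );
}
```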

Built-in Pricing Table

For reference, here are the models with built-in pricing:

Anthropic

| Model | Input ($/M tokens) | Output ($/M tokens) |
| --- | --- | --- |
| `claude-opus-4-6` | $15 | $75 |
| `claude-sonnet-4-5-20250929` | $3 | $15 |
| `claude-haiku-4-5-20251001` | $0.80 | $4 |

OpenAI

| Model | Input ($/M tokens) | Output ($/M tokens) |
| --- | --- | --- |
| `gpt-4o` | $2.50 | $10 |
| `gpt-4o-mini` | $0.15 | $0.60 |
| `gpt-4.1` | $2 | $8 |
| `gpt-4.1-mini` | $0.40 | $1.60 |
| `gpt-4.1-nano` | $0.10 | $0.40 |
| `o3` | $10 | $40 |
| `o4-mini` | $1.10 | $4.40 |

Google

| Model | Input ($/M tokens) | Output ($/M tokens) |
| --- | --- | --- |
| `gemini-2.0-flash` | $0.10 | $0.40 |
| `gemini-2.5-pro` | $1.25 | $10 |
| `gemini-2.5-flash` | $0.15 | $0.60 |

Groq

| Model | Input ($/M tokens) | Output ($/M tokens) |
| --- | --- | --- |
| `llama-3.3-70b-versatile` | $0.59 | $0.79 |
| `llama-3.1-8b-instant` | $0.05 | $0.08 |

DeepSeek

| Model | Input ($/M tokens) | Output ($/M tokens) |
| --- | --- | --- |
| `deepseek-chat` | $0.14 | $0.28 |
| `deepseek-reasoner` | $0.55 | $2.19 |

Mistral

| Model | Input ($/M tokens) | Output ($/M tokens) |
| --- | --- | --- |
| `mistral-large-latest` | $2 | $6 |
| `mistral-small-latest` | $0.10 | $0.30 |

Together

| Model | Input ($/M tokens) | Output ($/M tokens) |
| --- | --- | --- |
| `meta-llama/Llama-3.3-70B-Instruct-Turbo` | $0.88 | $0.88 |

INFO

Models not in this table use a default estimate of $3/$15 per million tokens (input/output). Register custom pricing with registerPricing() to get accurate cost reports.

Provider Feature Support

All nine built-in providers support the full ThinkLang feature set:

| Feature | Anthropic | OpenAI | Gemini | Ollama | Groq | DeepSeek | Mistral | Together | OpenRouter |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Structured output (JSON Schema) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Tool calling (agents) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Cost tracking | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |

Ollama note

Ollama's structured output support depends on the specific model you run. Most recent models (Llama 3, Mistral, etc.) support JSON mode.
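To check whether your local model honors structured output, you can hit Ollama's API directly. Ollama's `/api/chat` endpoint accepts a `format` field: the string `"json"` for plain JSON mode, or a JSON Schema object for structured outputs. The request-builder sketch below shows that body shape; the `buildOllamaChatBody` name is illustrative and unrelated to ThinkLang's internals.

```typescript
// Build the JSON body for a POST to `${OLLAMA_BASE_URL}/api/chat`.
// With no schema, fall back to plain JSON mode (`format: "json"`).
function buildOllamaChatBody(model: string, prompt: string, schema?: object) {
  return {
    model,
    stream: false,
    messages: [{ role: "user", content: prompt }],
    format: schema ?? "json",
  };
}
```

If the model ignores the `format` hint, the response content will not parse as JSON, which is a quick way to rule a model out for ThinkLang's structured output.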

Environment Configuration (CLI)

If you also use the ThinkLang CLI to run .tl files, provider selection works the same way --- set the appropriate environment variable:

```bash
export ANTHROPIC_API_KEY=sk-ant-...
thinklang run app.tl

# Override the default model
export THINKLANG_MODEL=gpt-4.1
thinklang run app.tl
```
