Why ThinkLang?

If you're building with LLMs in TypeScript, you've probably written a lot of boilerplate: constructing API clients, defining JSON schemas by hand, parsing responses, handling errors, switching providers. ThinkLang wraps all of that behind a type-safe, provider-agnostic interface.

Code Comparison

Here's the same task — structured sentiment analysis — implemented three ways.

Raw OpenAI SDK

typescript
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: "Analyze the sentiment of this review: " + review,
    },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "Sentiment",
      strict: true, // required for guaranteed schema adherence
      schema: {
        type: "object",
        properties: {
          label: {
            type: "string",
            enum: ["positive", "negative", "neutral"],
          },
          score: { type: "number" },
          explanation: { type: "string" },
        },
        required: ["label", "score", "explanation"],
        additionalProperties: false, // strict mode requires this
      },
    },
  },
});

const result = JSON.parse(response.choices[0].message.content!);
// result is `any` — no type safety

ThinkLang (Library)

typescript
import { think, zodSchema } from "thinklang";
import { z } from "zod";

const Sentiment = z.object({
  label: z.enum(["positive", "negative", "neutral"]),
  score: z.number(),
  explanation: z.string(),
});

const result = await think<z.infer<typeof Sentiment>>({
  prompt: "Analyze the sentiment of this review",
  ...zodSchema(Sentiment),
  context: { review },
});
// result is typed as { label: "positive" | "negative" | "neutral"; score: number; explanation: string }

ThinkLang (Language)

thinklang
type Sentiment {
  @description("positive, negative, or neutral")
  label: string
  score: float
  explanation: string
}

let result = think<Sentiment>("Analyze the sentiment of this review")
  with context: review

Feature Comparison

| Feature | ThinkLang | OpenAI SDK | LangChain | Vercel AI SDK |
| --- | --- | --- | --- | --- |
| Structured output | Zod schemas or language types | Manual JSON schema | Output parsers | Zod schemas |
| Full TypeScript types | Yes (think&lt;T&gt;()) | No (any) | Partial | Yes |
| Multi-provider | Anthropic, OpenAI, Gemini, Ollama | OpenAI only | Yes | Yes |
| Agent loops | Built-in agent() | Manual implementation | Yes | Yes |
| Tool definitions | defineTool() with Zod | Manual function calling | Yes | Yes |
| Cost tracking | Built-in per-call tracking | No | Callbacks | No |
| Batch processing | batch(), mapThink(), Dataset | No | No | No |
| Output guards | guards with auto-retry | No | No | No |
| Confidence tracking | Confident&lt;T&gt; | No | No | No |
| Zero-config init | Auto-detects from env vars | No | No | No |
| Also a language | Yes (.tl files with compiler) | No | No | No |

Key Differentiators

  • Provider-agnostic: Write your code once, swap between Anthropic, OpenAI, Gemini, or Ollama with a single environment variable. No code changes.

  • Zod-native schemas: Define your output structure with Zod and spread it into any call with zodSchema(). Full TypeScript inference, no manual JSON schema.

  • Built-in cost tracking: Every think(), infer(), reason(), and agent() call is tracked. Call globalCostTracker.getSummary() to see total tokens and estimated cost.

  • Guards for output validation: Constrain AI output with rules (length, pattern, content) and retry automatically on failure — no manual validation loops.

  • Batch and scale: Process thousands of items with mapThink(), reduceThink(), and Dataset pipelines. Built-in concurrency control and cost budgets.

  • Also a language: For teams that want deeper integration, ThinkLang is a full programming language where AI primitives are keywords. The compiler catches type errors before you hit the API.
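ThinkLang's guard internals aren't shown here, but the validate-and-retry pattern behind guards can be sketched in plain TypeScript. Everything below (the `Guard` type, `withGuards`, the `feedback` parameter) is a hypothetical illustration of the pattern, not ThinkLang's actual API:

```typescript
// A guard inspects the output and returns null on success,
// or a failure reason that is fed into the retry.
type Guard<T> = (output: T) => string | null;

async function withGuards<T>(
  generate: (feedback?: string) => Promise<T>,
  guards: Guard<T>[],
  maxRetries = 3,
): Promise<T> {
  let feedback: string | undefined;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const output = await generate(feedback);
    // First guard that rejects wins; null means every guard passed.
    const failure = guards.map((g) => g(output)).find((r) => r !== null);
    if (failure == null) return output;
    feedback = failure; // fold the failure reason into the next attempt
  }
  throw new Error("Output failed guards after retries");
}
```

The key design point is that the failure reason flows back into the next generation attempt, so the model gets a chance to correct itself instead of blindly re-rolling.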
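Per-call cost tracking amounts to accumulating token usage and multiplying by per-token provider rates. A minimal sketch of that idea, with made-up example rates and a hypothetical `CostTracker` class (not ThinkLang's real implementation):

```typescript
interface Usage {
  inputTokens: number;
  outputTokens: number;
}

class CostTracker {
  private calls: Usage[] = [];

  // Rates are illustrative placeholders, in USD per 1M tokens.
  constructor(private inputRate = 3, private outputRate = 15) {}

  record(usage: Usage): void {
    this.calls.push(usage);
  }

  getSummary() {
    const inputTokens = this.calls.reduce((n, c) => n + c.inputTokens, 0);
    const outputTokens = this.calls.reduce((n, c) => n + c.outputTokens, 0);
    const estimatedCostUsd =
      (inputTokens * this.inputRate + outputTokens * this.outputRate) / 1_000_000;
    return { calls: this.calls.length, inputTokens, outputTokens, estimatedCostUsd };
  }
}
```

In ThinkLang this bookkeeping happens automatically on every tracked call; the sketch just shows why a summary of totals and estimated cost is cheap to maintain.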
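Concurrency control for batch helpers like mapThink() typically means a fixed pool of workers draining a shared queue of items, so no more than N requests are in flight at once. A self-contained sketch of that pattern (the `mapWithConcurrency` helper is illustrative, not part of ThinkLang):

```typescript
async function mapWithConcurrency<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  concurrency = 5,
): Promise<R[]> {
  const results = new Array<R>(items.length);
  let next = 0; // shared index; safe because JS is single-threaded between awaits

  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }

  // Spawn at most `concurrency` workers, each pulling the next unclaimed item.
  const workers = Array.from(
    { length: Math.min(concurrency, items.length) },
    worker,
  );
  await Promise.all(workers);
  return results;
}
```

Results are written back by index, so output order matches input order even though items complete out of order.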

Next Steps