# Why ThinkLang?
If you're building with LLMs in TypeScript, you've probably written a lot of boilerplate: constructing API clients, defining JSON schemas by hand, parsing responses, handling errors, switching providers. ThinkLang wraps all of that behind a type-safe, provider-agnostic interface.
## Code Comparison
Here's the same task — structured sentiment analysis — implemented three ways.
### Raw OpenAI SDK
```typescript
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "user",
      content: "Analyze the sentiment of this review: " + review,
    },
  ],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "Sentiment",
      schema: {
        type: "object",
        properties: {
          label: {
            type: "string",
            enum: ["positive", "negative", "neutral"],
          },
          score: { type: "number" },
          explanation: { type: "string" },
        },
        required: ["label", "score", "explanation"],
      },
    },
  },
});

const result = JSON.parse(response.choices[0].message.content!);
// result is `any` — no type safety
```

### ThinkLang (Library)
```typescript
import { think, zodSchema } from "thinklang";
import { z } from "zod";

const Sentiment = z.object({
  label: z.enum(["positive", "negative", "neutral"]),
  score: z.number(),
  explanation: z.string(),
});

const result = await think<z.infer<typeof Sentiment>>({
  prompt: "Analyze the sentiment of this review",
  ...zodSchema(Sentiment),
  context: { review },
});
// result is typed as { label: "positive" | "negative" | "neutral"; score: number; explanation: string }
```

### ThinkLang (Language)
```
type Sentiment {
  @description("positive, negative, or neutral")
  label: string
  score: float
  explanation: string
}

let result = think<Sentiment>("Analyze the sentiment of this review")
  with context: review
```

## Feature Comparison
| Feature | ThinkLang | OpenAI SDK | LangChain | Vercel AI SDK |
|---|---|---|---|---|
| Structured output | Zod schemas or language types | Manual JSON schema | Output parsers | Zod schemas |
| Full TypeScript types | Yes (`think<T>()`) | No (`any`) | Partial | Yes |
| Multi-provider | Anthropic, OpenAI, Gemini, Ollama | OpenAI only | Yes | Yes |
| Agent loops | Built-in `agent()` | Manual implementation | Yes | Yes |
| Tool definitions | `defineTool()` with Zod | Manual function calling | Yes | Yes |
| Cost tracking | Built-in per-call tracking | No | Callbacks | No |
| Batch processing | `batch()`, `mapThink()`, `Dataset` | No | No | No |
| Output guards | Guards with auto-retry | No | No | No |
| Confidence tracking | `Confident<T>` | No | No | No |
| Zero-config init | Auto-detects from env vars | No | No | No |
| Also a language | Yes (`.tl` files with compiler) | No | No | No |
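To make the batch-processing row above concrete, here is a generic concurrency-limited map in TypeScript. It is a sketch of the worker-pool pattern that batch helpers like `mapThink()` typically build on, not ThinkLang source; the `mapLimited` name, signature, and mock classifier are illustrative only.

```typescript
// Generic concurrency-limited async map: processes `items` with at most
// `concurrency` calls in flight at once. Sketch of the pattern only.
async function mapLimited<T, R>(
  items: T[],
  fn: (item: T) => Promise<R>,
  concurrency = 5,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  // Each worker repeatedly claims the next unprocessed index.
  // Safe without locks: JS is single-threaded, and `next++` happens
  // synchronously before the await.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }

  const workers = Array.from(
    { length: Math.min(concurrency, items.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}

// Example: classify reviews with a mock async "model".
const reviews = ["great!", "terrible", "fine"];
const labels = await mapLimited(reviews, async (r) =>
  r.includes("!") ? "positive" : r === "terrible" ? "negative" : "neutral",
);
console.log(labels); // ["positive", "negative", "neutral"]
```

A real batch helper would add what the table promises on top of this loop: per-call cost accounting and a budget check before each dispatch.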
## Key Differentiators
- **Provider-agnostic:** Write your code once, swap between Anthropic, OpenAI, Gemini, or Ollama with a single environment variable. No code changes.
- **Zod-native schemas:** Define your output structure with Zod and spread it into any call with `zodSchema()`. Full TypeScript inference, no manual JSON schema.
- **Built-in cost tracking:** Every `think()`, `infer()`, `reason()`, and `agent()` call is tracked. Call `globalCostTracker.getSummary()` to see total tokens and estimated cost.
- **Guards for output validation:** Constrain AI output with rules (length, pattern, content) and retry automatically on failure — no manual validation loops.
- **Batch and scale:** Process thousands of items with `mapThink()`, `reduceThink()`, and `Dataset` pipelines. Built-in concurrency control and cost budgets.
- **Also a language:** For teams that want deeper integration, ThinkLang is a full programming language where AI primitives are keywords. The compiler catches type errors before you hit the API.
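The guard-with-auto-retry idea can be sketched generically in TypeScript. This is the underlying pattern, not ThinkLang's actual guards API: the `Guard` type, `withGuards` helper, and mock model below are all illustrative names of my own.

```typescript
// A guard is any predicate over the model's output.
type Guard = (output: string) => boolean;

// Run a generator, validate its output against every guard, and
// retry up to `maxRetries` additional times before giving up.
async function withGuards(
  generate: () => Promise<string>,
  guards: Guard[],
  maxRetries = 3,
): Promise<string> {
  let lastOutput = "";
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    lastOutput = await generate();
    if (guards.every((guard) => guard(lastOutput))) return lastOutput;
  }
  throw new Error(`Output failed guards after ${maxRetries + 1} attempts`);
}

// Example rules: a length cap and a non-empty check.
const maxLength = (n: number): Guard => (s) => s.length <= n;
const nonEmpty: Guard = (s) => s.trim().length > 0;

// Mock "model" that returns an empty string on the first call,
// then a valid answer, exercising exactly one retry.
let calls = 0;
const flakyModel = async () => (++calls < 2 ? "" : "positive");

const answer = await withGuards(flakyModel, [nonEmpty, maxLength(40)]);
console.log(answer); // "positive"
```

In a real integration the retry would typically also feed the failure reason back into the next prompt, so the model knows which rule it violated.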
## Next Steps
- Quick Start — get running in under a minute
- Core Functions — think, infer, and reason in detail
- Agents & Tools — build agentic workflows