
# Core Functions

ThinkLang provides three core AI functions for different tasks. Each function sends a prompt to a configured LLM provider and returns structured, typed data.

- `think` -- general-purpose LLM call
- `infer` -- lightweight classification or interpretation of an existing value
- `reason` -- multi-step chain-of-thought reasoning

## `think<T>(options): Promise<T>`

General-purpose LLM call. Send a prompt, get structured data back.

### Basic usage

```typescript
import { think } from "thinklang";

const greeting = await think<string>({
  prompt: "Say hello",
  jsonSchema: { type: "string" },
});
```

### With Zod schema

```typescript
import { z } from "zod";
import { think, zodSchema } from "thinklang";

const Sentiment = z.object({
  label: z.enum(["positive", "negative", "neutral"]),
  score: z.number(),
});

const result = await think<z.infer<typeof Sentiment>>({
  prompt: "Analyze the sentiment of this review",
  ...zodSchema(Sentiment),
  context: { review: "Great product!" },
});
```

### With context

```typescript
const category = await think<{ label: string; confidence: number }>({
  prompt: "Classify this support ticket",
  jsonSchema: {
    type: "object",
    properties: {
      label: { type: "string" },
      confidence: { type: "number" },
    },
    required: ["label", "confidence"],
    additionalProperties: false,
  },
  context: { ticket: "My order hasn't arrived after 2 weeks" },
});
```

### Options

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `prompt` | `string` | Yes | The prompt sent to the LLM |
| `jsonSchema` | `object` | Yes | JSON Schema for the expected output |
| `context` | `object` | No | Context data made available to the LLM |
| `withoutKeys` | `string[]` | No | Keys to exclude from context |
| `guards` | `GuardRule[]` | No | Validation rules applied to the result |
| `retryCount` | `number` | No | Number of retry attempts on failure |
| `fallback` | `() => unknown` | No | Fallback value if all retries fail |
| `schemaName` | `string` | No | Optional name for the schema |
| `model` | `string` | No | Override the default model |
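Several options in this table (`withoutKeys`, `guards`, `retryCount`, `fallback`) do not appear in the examples above. As one illustration, `withoutKeys` implies that context is filtered before it reaches the model; a minimal sketch of that filtering, with hypothetical names that are not part of ThinkLang's API:

```typescript
// Hypothetical sketch of how `withoutKeys` might filter context before it is
// serialized into the prompt. `stripContextKeys` is illustrative only.
function stripContextKeys(
  context: Record<string, unknown>,
  withoutKeys: string[] = [],
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(context).filter(([key]) => !withoutKeys.includes(key)),
  );
}

// Example: hide a sensitive field from the model while keeping the rest.
const visibleContext = stripContextKeys(
  { ticket: "My order hasn't arrived", customerEmail: "jane@example.com" },
  ["customerEmail"],
);
// visibleContext → { ticket: "My order hasn't arrived" }
```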

### Behavior

  1. Checks cache for identical prior call. Returns cached result on hit.
  2. Builds system prompt and user message from prompt and context.
  3. Calls the configured ModelProvider.
  4. Records usage in global CostTracker.
  5. Evaluates guard rules (if any). Throws GuardFailed on violation.
  6. Stores result in cache.
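The control flow above (cache, call, guards, retries, fallback) can be sketched as a self-contained mock. Everything here is illustrative: the real provider, cache key, and `GuardFailed` error are internal to ThinkLang, and treating a guard violation as a retryable failure is an assumption:

```typescript
// Mock of the documented pipeline. All names are illustrative stand-ins.
type Guard<T> = (result: T) => boolean;

async function thinkSketch<T>(opts: {
  prompt: string;
  call: () => Promise<T>; // stands in for the configured ModelProvider
  guards?: Guard<T>[];
  retryCount?: number;
  fallback?: () => T;
  cache: Map<string, unknown>;
}): Promise<T> {
  // 1. Return the cached result for an identical prior call.
  //    (Keyed on prompt alone here; the real key presumably also covers
  //    schema and context.)
  if (opts.cache.has(opts.prompt)) return opts.cache.get(opts.prompt) as T;

  const attempts = 1 + (opts.retryCount ?? 0);
  for (let i = 0; i < attempts; i++) {
    try {
      // 2-4. Build messages, call the provider, record usage (elided).
      const result = await opts.call();
      // 5. Evaluate guard rules; here a violation counts as a failed attempt.
      if ((opts.guards ?? []).every((guard) => guard(result))) {
        opts.cache.set(opts.prompt, result); // 6. Store in cache.
        return result;
      }
    } catch {
      // Provider error: fall through to the next attempt.
    }
  }
  // All attempts exhausted: use the fallback if one was given.
  if (opts.fallback) return opts.fallback();
  throw new Error("GuardFailed"); // stand-in for ThinkLang's GuardFailed
}

// Usage: a provider that always fails, so the fallback is used.
const cache = new Map<string, unknown>();
const label = await thinkSketch<string>({
  prompt: "Classify this ticket",
  call: async () => {
    throw new Error("provider down");
  },
  retryCount: 2,
  fallback: () => "unknown",
  cache,
});
// label → "unknown"
```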

## `infer<T>(options): Promise<T>`

Lightweight inference -- give it a value, get a typed interpretation.

```typescript
import { infer } from "thinklang";

const parsed = await infer<{ iso: string }>({
  value: "Jan 5th 2025",
  hint: "Parse this into an ISO date",
  jsonSchema: {
    type: "object",
    properties: { iso: { type: "string" } },
    required: ["iso"],
    additionalProperties: false,
  },
});
// parsed.iso → "2025-01-05"
```

Another example:

```typescript
const priority = await infer<string>({
  value: "urgent: server is down!",
  hint: "Classify priority as low, medium, high, or critical",
  jsonSchema: { type: "string" },
});
```

### Options

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `value` | `unknown` | Yes | The input value to transform/classify |
| `hint` | `string` | No | Hint describing the desired transformation |
| `jsonSchema` | `object` | Yes | JSON Schema for the expected output |
| `context` | `object` | No | Additional context |
| `withoutKeys` | `string[]` | No | Keys to exclude from context |
| `guards` | `GuardRule[]` | No | Validation rules |
| `retryCount` | `number` | No | Retry attempts |
| `fallback` | `() => unknown` | No | Fallback value |
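Judging from these options, `infer` behaves like a `think` call whose prompt is assembled from `value` and `hint`. A hypothetical sketch of that assembly (the real prompt template is internal to ThinkLang; `buildInferPrompt` is illustrative):

```typescript
// Hypothetical: fold infer's value and hint into a single prompt string.
// Illustrative only, not ThinkLang's actual template.
function buildInferPrompt(value: unknown, hint?: string): string {
  const lines = [
    hint ?? "Interpret the following value.",
    `Value: ${JSON.stringify(value)}`,
  ];
  return lines.join("\n");
}

const inferPrompt = buildInferPrompt("Jan 5th 2025", "Parse this into an ISO date");
// → 'Parse this into an ISO date\nValue: "Jan 5th 2025"'
```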

## `reason<T>(options): Promise<T>`

Multi-step chain-of-thought reasoning. Guide the LLM through explicit steps.

```typescript
import { reason, zodSchema } from "thinklang";
import { z } from "zod";

const Analysis = z.object({
  recommendation: z.string(),
  risk: z.string(),
});

const analysis = await reason<z.infer<typeof Analysis>>({
  goal: "Analyze this investment portfolio",
  steps: [
    { number: 1, description: "Evaluate current allocation" },
    { number: 2, description: "Assess market conditions" },
    { number: 3, description: "Identify risks" },
    { number: 4, description: "Formulate recommendation" },
  ],
  ...zodSchema(Analysis),
  context: { portfolio: { stocks: 60, bonds: 30, cash: 10 } },
});
```

### Options

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `goal` | `string` | Yes | The reasoning objective |
| `steps` | `ReasonStep[]` | Yes | Ordered steps for the LLM to follow |
| `jsonSchema` | `object` | Yes | JSON Schema for the expected output |
| `context` | `object` | No | Context data |
| `withoutKeys` | `string[]` | No | Keys to exclude |
| `guards` | `GuardRule[]` | No | Validation rules |
| `retryCount` | `number` | No | Retry attempts |
| `fallback` | `() => unknown` | No | Fallback value |

```typescript
type ReasonStep = { number: number; description: string };
```
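The steps are presumably rendered into a numbered plan inside the prompt. A hypothetical sketch of that rendering (`renderReasoningPlan` is illustrative, not part of ThinkLang's API; `ReasonStep` is redeclared to keep the example self-contained):

```typescript
type ReasonStep = { number: number; description: string };

// Hypothetical: format the goal and ordered steps into prompt text.
// Illustrative only; ThinkLang's real template may differ.
function renderReasoningPlan(goal: string, steps: ReasonStep[]): string {
  const plan = steps
    .slice()
    .sort((a, b) => a.number - b.number) // respect the declared step order
    .map((s) => `${s.number}. ${s.description}`)
    .join("\n");
  return `Goal: ${goal}\nFollow these steps:\n${plan}`;
}

const planText = renderReasoningPlan("Analyze this investment portfolio", [
  { number: 1, description: "Evaluate current allocation" },
  { number: 2, description: "Identify risks" },
]);
// → "Goal: Analyze this investment portfolio\nFollow these steps:\n1. ..."
```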

## Comparison

| Function | Use case | Input |
| --- | --- | --- |
| `think` | Generate structured data from a prompt | Prompt + optional context |
| `infer` | Classify or interpret an existing value | Value + optional hint |
| `reason` | Complex analysis with explicit steps | Goal + numbered steps |

All three functions support context, guards, retry, and fallback options.

## Next Steps