Vercel AI SDK
The Maniac platform lets you collect LLM completions for training and evaluation. This guide walks through two ways to integrate Maniac into a project that uses the Vercel AI SDK:
Option A -- OpenAI-compatible proxy: Maniac acts as a proxy between your app and the underlying model. Inference and data collection happen in a single call with zero extra wiring.
Option B -- Register completions: You run inference through any provider you like (Anthropic, OpenAI, etc.) and then POST the completion to Maniac after the fact. This is more flexible and works with streaming, custom pipelines, or providers that Maniac does not proxy.
Both options start the same way: you create a container on the Maniac platform.
Prerequisites
Before you begin, make sure you have:
- A Maniac API key (set it as MANIAC_API_KEY in your environment).
- Node.js 18+ and an existing Vercel AI SDK project (or a new one).
- The following packages installed:
  - ai (the Vercel AI SDK core)
  - @ai-sdk/openai-compatible (needed for Option A)
  - A provider package such as @ai-sdk/anthropic or @ai-sdk/openai (needed for Option B)
Step 1 -- Create a Maniac Container
A container is a logical namespace on the Maniac platform that is tied to a specific base model. Every completion you send to that container -- whether proxied or registered -- is stored and can later be used for fine-tuning, evaluation, or analytics.
You only need to create a container once. After that, you reference it by its label in all subsequent calls.
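Container creation is a one-time API call. The sketch below is illustrative: the base URL and the /containers route are assumptions (substitute the endpoint from your Maniac dashboard or API reference), and the label and model values are examples.

```ts
// Sketch: create a container once, e.g. from a setup script.
// MANIAC_BASE_URL and the /containers route are placeholders -- use the
// endpoint documented in your Maniac dashboard.
const MANIAC_BASE_URL = process.env.MANIAC_BASE_URL ?? "https://api.maniac.dev/v1";

async function createContainer(label: string, initialModel: string) {
  const res = await fetch(`${MANIAC_BASE_URL}/containers`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MANIAC_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ label, initialModel }),
  });
  if (!res.ok) {
    throw new Error(`Failed to create container: ${res.status} ${await res.text()}`);
  }
  return res.json();
}

// Example: a container backed by Claude Sonnet 4.5.
await createContainer("support-bot", "anthropic/claude-sonnet-4.5");
```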
The label must be unique within your project. The initialModel field should match the model you intend to use for inference (e.g. anthropic/claude-sonnet-4.5, openai/gpt-4o-mini). Maniac uses this to configure the container's proxy.
Step 2 -- Choose an Integration Path
The sections below cover each option in detail. You can mix and match -- for example, use the proxy in development for convenience and switch to registered completions in production for more control.
Option A: OpenAI-Compatible Proxy
This is the simplest integration path. The Vercel AI SDK's createOpenAICompatible helper lets you point your model calls at any OpenAI-compatible endpoint. When you point it at Maniac, every request is forwarded to the underlying model and the completion is automatically recorded in your container.
The flow looks like this:
1. Your app calls generateText (or streamText) with the Maniac provider.
2. Maniac forwards the request to the base model configured on the container.
3. The model response is returned to your app and stored on the platform.
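A minimal setup looks roughly like the sketch below. The baseURL is an assumption (use the proxy URL Maniac gives you for the container), and support-bot stands in for whatever label you chose in Step 1.

```ts
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
import { generateText } from "ai";

// Point an OpenAI-compatible provider at the Maniac proxy.
// The baseURL is a placeholder -- use the proxy URL from your dashboard.
const maniac = createOpenAICompatible({
  name: "maniac",
  baseURL: process.env.MANIAC_PROXY_URL ?? "https://api.maniac.dev/v1",
  headers: {
    Authorization: `Bearer ${process.env.MANIAC_API_KEY}`,
  },
});

const { text } = await generateText({
  // "support-bot" is the container label created in Step 1.
  model: maniac("maniac:support-bot"),
  prompt: "Summarize the latest support ticket.",
});
```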
The model string follows the format maniac:<container-label>. The maniac: prefix tells the platform to route the request to the container you created, which already knows which underlying model to call.
Option B: Register Completions
If the OpenAI-compatible proxy does not fit your use case -- for example, you need to call a provider directly, use streaming, or run completions through a custom pipeline -- you can still feed data into a Maniac container by registering completions after the fact.
The registration endpoint is:
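The exact URL comes from your Maniac dashboard or API reference; the sketches in this section assume it has roughly the following shape, with <container-label> replaced by the label from Step 1:

```
POST https://api.maniac.dev/v1/containers/<container-label>/completions
```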
Each item you register must include an input and an output, both formatted according to the OpenAI Chat Completions schema. This means:
- input contains a messages array (each message has a role and content).
- output contains a choices array (each choice has a message with role and content).
Here is a minimal helper function:
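(The sketch assumes the endpoint shape shown above and MANIAC_API_KEY in the environment; adjust the URL to your container's actual registration endpoint.)

```ts
// Sketch of a registration helper. The URL shape is an assumption --
// substitute the registration endpoint for your container.
export type ChatMessage = { role: string; content: string };

const MANIAC_BASE_URL = process.env.MANIAC_BASE_URL ?? "https://api.maniac.dev/v1";

export async function registerCompletion(
  containerLabel: string,
  messages: ChatMessage[],
  completionText: string,
) {
  const res = await fetch(
    `${MANIAC_BASE_URL}/containers/${containerLabel}/completions`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.MANIAC_API_KEY}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        items: [
          {
            // Input and output follow the OpenAI Chat Completions schema.
            input: { messages },
            output: {
              choices: [
                { message: { role: "assistant", content: completionText } },
              ],
            },
          },
        ],
      }),
    },
  );
  if (!res.ok) {
    throw new Error(`Failed to register completion: ${res.status}`);
  }
}
```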
The items field is an array, so you can batch multiple completions in a single request if needed.
There are several ways to hook this into the Vercel AI SDK. The two most common patterns are middleware and onFinish callbacks.
Using Middleware
The Vercel AI SDK supports language-model middleware -- functions that wrap every call to a model. This is the cleanest approach when you want every completion to be registered automatically, regardless of where in your codebase the call originates.
The middleware below intercepts generateText calls via wrapGenerate. After the underlying model returns a result, it extracts the text content, formats it into the OpenAI chat-completions schema, and fires off a registration request in the background.
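A sketch of such a middleware, written against the AI SDK 4-style LanguageModelV1Middleware interface (field names differ slightly in other major versions); the registerCompletion import and the support-bot label are the assumptions from earlier:

```ts
import type { LanguageModelV1Middleware } from "ai";
import { registerCompletion } from "./register-completion"; // the helper above

export const maniacMiddleware: LanguageModelV1Middleware = {
  wrapGenerate: async ({ doGenerate, params }) => {
    const result = await doGenerate();

    // Convert the SDK's internal prompt format into OpenAI-style messages.
    const messages = params.prompt.map((message) => ({
      role: message.role,
      content:
        typeof message.content === "string"
          ? message.content
          : message.content
              .map((part) => ("text" in part ? part.text : ""))
              .join(""),
    }));

    // Register in the background -- don't block the response on it.
    void registerCompletion("support-bot", messages, result.text ?? "").catch(
      (error) => console.error("Failed to register completion", error),
    );

    return result;
  },
};
```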
To use it, wrap your model with wrapLanguageModel and pass in the middleware. From that point on, every generateText call made with this wrapped model will automatically register completions to your container.
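For example (a sketch using the Anthropic provider, but any provider works; the model id is illustrative):

```ts
import { anthropic } from "@ai-sdk/anthropic";
import { generateText, wrapLanguageModel } from "ai";
import { maniacMiddleware } from "./maniac-middleware";

// Wrap the base model once and reuse the wrapped model everywhere.
const model = wrapLanguageModel({
  model: anthropic("claude-sonnet-4-5"),
  middleware: maniacMiddleware,
});

const { text } = await generateText({
  model,
  prompt: "Draft a reply to the customer.",
});
```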
Using onFinish
If you only need to register completions in one or two places, the onFinish callback is a lighter-weight alternative that avoids the overhead of setting up middleware. The callback fires once the model has finished generating, giving you access to the final text.
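A sketch using streamText (the callback shape follows the AI SDK 4-style API; the container label and model id are the same assumptions as above):

```ts
import { anthropic } from "@ai-sdk/anthropic";
import { streamText } from "ai";
import { registerCompletion } from "./register-completion";

const messages = [
  { role: "user" as const, content: "Explain our refund policy." },
];

const result = streamText({
  model: anthropic("claude-sonnet-4-5"),
  messages,
  // Fires once the stream has finished; `text` is the full completion.
  onFinish: async ({ text }) => {
    await registerCompletion("support-bot", messages, text);
  },
});

// Consume the stream as usual, e.g. in a route handler:
// return result.toTextStreamResponse();
```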
Which option should I use?
Option A (OpenAI-compatible proxy)
- Best for: Quick setup, prototyping, or any situation where you are happy to route inference through Maniac.
- Zero extra code beyond createOpenAICompatible.
- Completions are captured automatically.
- You are limited to models and providers that Maniac supports as a proxy.

Option B (register completions)
- Best for: Production workloads, custom pipelines, or when you need full control over the inference provider.
- Works with any model or provider.
- Supports streaming via onFinish or middleware.
- Requires a small amount of extra wiring (the registerCompletion call).
- Can batch-register multiple completions in a single request.