Maniac: Continually optimizing models from your LLM telemetry and evals.
Maniac is an enterprise AI platform that makes it easy to replace existing LLM API calls with fine-tuned, task-specific models. Drop in Maniac with one line of code to:
Capture and structure production LLM traffic
Automatically fine-tune and evaluate Small Language Models (SLMs) on your tasks
Replace over-generalized LLM calls with higher-performance, lower-latency models built for exactly what you need
Focus engineering time where it matters most: building and refining high-quality model evaluations, not managing infrastructure, hyperparameters, or bespoke fine-tuning pipelines
All with virtually no changes to your existing codebase.
Getting started
Sign up for Maniac
Head over to https://app.maniac.ai/auth/register
Create a new Organization
Organizations house multiple projects.
Add a Project
All your work (containers, evals, and deployments) lives here.
Generate an API key
From your project settings
Dropping Maniac into your Codebase
For an agentic setup, copy this prompt and give it to your preferred coding agent:
Install the library
Initialize client
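Initialization might look like the following sketch. The package name, the `Maniac` class, and the `MANIAC_API_KEY` variable are assumptions for illustration (a stub stands in for the real client), not the actual SDK surface:

```python
import os
from dataclasses import dataclass

# Hypothetical stub standing in for the Maniac client; the real SDK's
# names and signatures may differ.
@dataclass
class Maniac:
    api_key: str

# Read the API key generated in your project settings from the environment.
client = Maniac(api_key=os.environ.get("MANIAC_API_KEY", "demo-key"))
print(type(client).__name__)
```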
Create a container
Containers log inference and automatically build datasets for fine-tuning and evaluation. initial_model sets the model used in that container until a Maniac model is deployed in its place.
Log Completions
Now that you've made a container, let's add some data to it.
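For instance, each logged request/response pair accumulates into the container's dataset. This is again a stub: `log_completion` is a hypothetical name for whatever the SDK exposes, shown only to illustrate the idea that production inferences become structured training examples:

```python
# Hypothetical stand-in for logging completions into a container's dataset.
dataset = []

def log_completion(prompt: str, completion: str) -> None:
    # Each production inference is captured as a structured training example.
    dataset.append({"prompt": prompt, "completion": completion})

log_completion("Classify: 'refund please'", "billing")
log_completion("Classify: 'app crashes on login'", "bug")
print(len(dataset))  # 2 examples captured
```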
Optimizing your model
The inference logs in your container now serve as training data for a new SLM: fully yours, lower latency, more cost-effective, and optimized specifically for your task.
Deploy
Optimized models can be deployed into a container from the Models tab. Once deployed, you can chat with your generated models, and inference requests are routed through the Maniac model instead of the initial_model.
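Conceptually, deployment flips the routing target without any code change on your side. A minimal stub of that switch (all names here are assumed, not the real platform internals):

```python
# Hypothetical routing stub: once a Maniac model is deployed into the
# container, requests target it instead of initial_model.
class Router:
    def __init__(self, initial_model: str):
        self.initial_model = initial_model
        self.deployed_model = None

    def deploy(self, model_id: str) -> None:
        self.deployed_model = model_id

    def target(self) -> str:
        return self.deployed_model or self.initial_model

router = Router(initial_model="gpt-4o")
before = router.target()           # initial_model before deployment
router.deploy("maniac-slm-v1")
after = router.target()            # Maniac model after deployment
print(before, "->", after)
```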

Need help?
📧 Email us at [email protected]
We'll get back to you within a day.