Runwita uses AI for two very different shapes of work, and the two have very different cost profiles. Rather than ask one model to do both well, the app splits them into two independently configurable tiers.
The two tiers at a glance
| Tier | Role | When it runs | Default model |
|---|---|---|---|
| Frontier | Slow reasoning over a whole journey | Rarely, 1 to 3 times a day per journey | Claude Haiku 4.5 (you can switch to Opus, gpt-5, etc.) |
| Workhorse | Fast extraction and chat | Often, every captured note | gpt-5.4-nano (or Claude Haiku, Ollama, custom) |
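Since the tiers are configured independently, a setup might be expressed as something like the following fragment. The keys and values shown are illustrative assumptions, not Runwita's actual settings format:

```json
{
  "frontier": {
    "provider": "claude",
    "model": "claude-haiku-4.5"
  },
  "workhorse": {
    "provider": "openai",
    "model": "gpt-5.4-nano"
  }
}
```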
Frontier (the intelligence layer)
The Frontier tier handles work that needs to read the entire journey (every engagement, every topic, every decision) and synthesise something coherent across it. This is the kind of work where output quality compounds with model capability.
What runs on Frontier:
- Deal stage detection. Reads the journey’s history and decides where in the customer lifecycle this is (Discovery, Qualification, Build, Go-live, Renewal, Churn Risk).
- Stakeholder analysis. Identifies the influence map across attendees over time.
- Sentiment analysis. How the relationship is trending, mood-wise.
- Deal health. A composite score with reasoning.
- Meeting brief. Pre-meeting prep: what was the last conversation, what’s open, what to land in this one.
- Objection detection. What concerns has this stakeholder raised that haven’t been addressed.
- Commitment gap detection. What did we say we’d do, and haven’t.
- Stale journey flagging. Has this gone quiet in a way that should worry us.
- Executive summary. A board-ready paragraph distilling the journey.
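Deal health is described above as "a composite score with reasoning". A minimal sketch of how such a composite might be assembled, assuming the model has already scored each dimension in [0, 1]; the signal names, equal default weights, and output shape are illustrative assumptions, not Runwita's actual formula:

```python
def deal_health(signals, weights=None):
    """Weighted average of per-dimension signals in [0, 1], plus one
    reasoning line per dimension so the score is explainable."""
    weights = weights or {k: 1.0 for k in signals}  # equal weights by default
    total = sum(weights[k] for k in signals)
    score = sum(signals[k] * weights[k] for k in signals) / total
    reasoning = [f"{k}: {signals[k]:.2f}" for k in sorted(signals)]
    return round(score, 2), reasoning

score, why = deal_health({"sentiment": 0.8, "momentum": 0.4, "coverage": 0.6})
```

The reasoning lines are what distinguish this from a bare number: a Frontier-tier model would generate prose for each dimension rather than the placeholder strings shown here.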
Workhorse (extraction and chat)
The Workhorse tier handles the high-frequency, mostly mechanical work:
- Meeting extraction. The big one. Every transcript, every set of notes, every email goes through this. Title, date, summary, sections, decisions, actions, attendees.
- Journey matching. When you save an engagement, picking which journey it belongs to.
- Topic matching. When you save an engagement, deciding which topics each section belongs to (or whether to create new ones).
- Chat. The chatbot UI for asking questions about a journey.
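Journey matching is essentially a pick-one-with-confidence problem. A hypothetical sketch of the selection step, assuming the Workhorse model has already scored each candidate journey; the threshold value and the idea of returning None for a low-confidence match are assumptions for illustration:

```python
def match_journey(scores, threshold=0.5):
    """Pick the highest-scoring candidate journey, or None when even the
    best candidate falls below the confidence threshold (no good match)."""
    if not scores:
        return None
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

A None result is where the "wrong or low-confidence too often" symptom in the next section shows up: a weak model produces flat, ambiguous scores.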
When to upgrade which tier
A few rules of thumb:

| Symptom | Likely fix |
|---|---|
| Journey matches are wrong or low-confidence too often | Upgrade Workhorse to Haiku 4.5 or gpt-5 |
| Topics are over-fragmenting (same thing as 3 separate topics) | Upgrade Workhorse |
| Topics are over-merging (different things on same topic) | Upgrade Workhorse |
| Extracted sections feel shallow or miss substance | Upgrade Workhorse (or switch model entirely) |
| Deal stage detection is consistently wrong | Upgrade Frontier |
| Executive summary feels generic | Upgrade Frontier |
| Stakeholder analysis misses obvious dynamics | Upgrade Frontier |
| Meeting brief reads like a regurgitation of one meeting, not synthesis | Upgrade Frontier |
Provider options for each tier
Both tiers support the same four providers:
- Claude (Anthropic). Best output quality at every price point in Runwita’s experience. Fast streaming.
- OpenAI. Strong on the workhorse tier especially. gpt-5.4-nano is the cheapest credible option. gpt-5 and gpt-4.1 work too.
- Ollama (local). Run a model on your own machine, zero API cost, zero data leaves your laptop. Slower and less capable than cloud options. Qwen3-8B is the recommended default if you go this route.
- Custom. Any OpenAI-compatible endpoint. LiteLLM, vLLM, Together, OpenRouter, your own proxy, all work. You set the base URL and model name yourself.
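Because the Custom provider is any OpenAI-compatible endpoint, a call is just a POST to `{base_url}/chat/completions` with a bearer token. A minimal sketch of assembling such a request; the base URL, model name, and API key are placeholders you would set yourself:

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt, api_key="sk-placeholder"):
    """Assemble (but do not send) an OpenAI-compatible chat completion request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("http://localhost:4000/v1", "qwen3:8b", "Summarise this meeting.")
```

This same shape is what LiteLLM, vLLM, Together, and OpenRouter all accept, which is why one "Custom" option covers them all.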
For Claude and OpenAI, the model picker is populated from the provider’s /v1/models endpoint, so you always see what your API key actually has access to. For Ollama and Custom, you type the model name (or pick from /api/tags for Ollama).
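The /v1/models response is a list object with the models under a `data` key; pulling the IDs out for a picker looks roughly like this (the sample payload is illustrative):

```python
def list_model_ids(models_response):
    """Extract sorted model IDs from an OpenAI-style /v1/models response body."""
    return sorted(item["id"] for item in models_response.get("data", []))

sample = {"object": "list", "data": [{"id": "gpt-5.4-nano"}, {"id": "gpt-5"}]}
```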
Privacy implications
Cloud providers (Claude, OpenAI) see the text being processed on each call. That’s the transcript or notes for an extraction, the journey context for a Frontier analysis. They don’t see your full database, just the per-call payload. None of it is used for training (per their respective enterprise terms).

Ollama keeps everything on your machine. Choose Ollama on both tiers if you want zero data leaving your laptop. It’s slower and the output is less polished, but the privacy gain is total.

Custom providers (LiteLLM proxy, OpenRouter, etc.) inherit the privacy properties of whatever sits behind your endpoint.
What’s next
Settings: models
The full model picker, provider by provider.
Troubleshooting: extraction errors
What to do when an extraction fails.

