Two models added to Database AI Suite this week: GPT-5.1 and KIMI 2 Thinking
Published: November 18, 2025
KIMI 2 Thinking comes close to Gemini 2.5 Flash in quality, but its costs are higher and more volatile. GPT-5.1 is 20% cheaper than GPT-5 for database execution: token bloat is down, and the default reasoning level is now none instead of medium.
For advanced analysis planning (single API call, reasoning step)
- Gemini 2.5 Flash: strong quality, predictable costs
- KIMI 2: similar quality, costs are higher and more volatile
- GPT-5.1: high quality, high cost
- Claude Sonnet 4.5: highest quality, highest cost
- GPT-5 / Gemini 2.5 Pro: avoid on API calls; major token and cost bloat
For database execution (where 80% of cost sits)
- GPT-4.1-mini: my current workhorse for simple to medium work, very cost effective
- GPT-4.1: complex queries, strong at debugging, 30% cheaper than GPT-5.1
- GPT-4o-mini: high volume work, half the cost of GPT-4.1-mini
GPT-5-mini still has token bloat, and its default reasoning level is medium. For regular work, GPT-4.1-mini is the better choice. For more firepower, ramp up stepwise: GPT-4.1-mini → GPT-4.1 → GPT-5.1.
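The stepwise ramp-up can be sketched as a small routing helper. This is a sketch under assumptions: the model names are tiers from the list above, and the complexity labels and their mapping are illustrative, not part of DATS-4.

```python
# Hypothetical stepwise model ladder: cheapest model first,
# escalate only when the task demands more firepower.
MODEL_LADDER = ["gpt-4.1-mini", "gpt-4.1", "gpt-5.1"]

def pick_execution_model(complexity: str) -> str:
    """Map a rough task-complexity label to a model tier.

    complexity: one of "simple", "medium", "complex", "hardest".
    Unknown labels default to the cheapest tier.
    """
    tiers = {"simple": 0, "medium": 0, "complex": 1, "hardest": 2}
    return MODEL_LADDER[tiers.get(complexity, 0)]

# Simple and medium work stays on the cost-effective workhorse;
# only harder tasks escalate up the ladder.
print(pick_execution_model("medium"))   # gpt-4.1-mini
print(pick_execution_model("complex"))  # gpt-4.1
print(pick_execution_model("hardest"))  # gpt-5.1
```

Routing by a cheap upfront complexity guess keeps the bulk of queries on the low-cost tier, which matters because execution is where most of the spend sits.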
Planning vs execution costs
In the multi-step advanced analysis workflow, the planner runs once per iteration, while the execution agent runs 7 to 10 queries and handles debugging. Planning accounts for roughly 20% of the cost; execution for the remaining 80%.
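The 20/80 split falls out of the call counts. A minimal sketch, assuming illustrative per-call prices (not actual API rates):

```python
# Illustrative per-call costs; assumptions, not real API prices.
planner_cost_per_call = 0.02    # one reasoning-heavy planning call
executor_cost_per_query = 0.01  # one execution/debug query
queries_per_iteration = 8       # within the 7-10 range quoted

planning = 1 * planner_cost_per_call
execution = queries_per_iteration * executor_cost_per_query
total = planning + execution

print(f"planning share:  {planning / total:.0%}")   # prints 20%
print(f"execution share: {execution / total:.0%}")  # prints 80%
```

Because the executor is called 7 to 10 times for every single planner call, cheap execution models move the total bill far more than the choice of planner.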
Avoid multi-step unless needed
Multi-step workflows multiply costs fast. Single step: approx. $0.40 per 100 questions with GPT-4.1-mini. Advanced multi-step analysis: approx. $15 per 100 questions, depending on the LLM. Use multi-step multi-agent workflows only when needed.
DATS-4 (Database AI Suite v4)
A database AI app that connects to Postgres or MySQL. Two workflows: simple text-to-SQL and advanced multi-step analysis. Supports Python charts, table uploads, exports, and PDF reports.
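A minimal sketch of what a simple text-to-SQL round trip looks like. The prompt wording, the `text_to_sql` helper, and the stubbed LLM callable are all assumptions for illustration, not DATS-4's implementation; SQLite stands in for Postgres/MySQL so the example runs self-contained.

```python
import sqlite3

def text_to_sql(question: str, schema: str, llm) -> str:
    """Ask an LLM (injected as a callable) to translate a question into SQL."""
    prompt = (
        "Given this schema:\n" + schema +
        "\nWrite a single SQL query answering: " + question
    )
    return llm(prompt).strip()

# Stub LLM so the sketch runs without an API key; a real app would
# call an execution model such as GPT-4.1-mini here.
def fake_llm(prompt: str) -> str:
    return "SELECT COUNT(*) FROM users;"

# In-memory test database with a tiny users table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("a",), ("b",)])

sql = text_to_sql("How many users are there?", "users(id, name)", fake_llm)
count = conn.execute(sql).fetchone()[0]
print(count)  # 2
```

Injecting the LLM as a plain callable keeps the SQL-generation step swappable, which is exactly what makes the stepwise model escalation above cheap to apply.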
Try it
The Sample button uploads test data to a temporary Postgres instance; use it with the sample prompts. Alternatively, upload your own files to a temp DB or connect to your own database.
Note
The public app routes through my backend, so use it for sandbox testing only. For production, deploy it on your own servers.