# BRIQ - In-Browser DuckDB Analytics

Natural language to SQL with DuckDB running entirely in the browser. Data stays local - nothing uploaded to servers. Supports CSV, Parquet, JSON, TSV, pipe-delimited, and DuckDB database files.

## Links
- App: https://briq.tigzig.com
- Docs: https://tigzig.com/app-documentation/briq.html
- GitHub: https://github.com/amararun/shared-sql-rooms-tigzig-new

## Tags
database-ai, duckdb, duckdb-wasm, in-browser, text-to-sql

## Architecture

Built on SQLRooms (https://github.com/sqlrooms/sqlrooms), an open-source React toolkit for AI-powered data analytics. Uses DuckDB-WASM for in-browser SQL execution.

```
Browser
├── React UI (SQLRooms toolkit)
├── DuckDB-WASM (SQL engine, runs locally)
├── AI Query Generation → OpenAI / Anthropic / Google / Ollama
└── Vega Charts (visualization)
```

All data processing happens in the browser. Only AI API calls leave the browser: natural-language questions, schema info, up to 10 sample rows per table, and up to 100 rows of query results.

### Custom Additions (over base SQLRooms)
- DDL/DML query support (CREATE TABLE, INSERT, UPDATE, DELETE)
- Table export to CSV, Parquet, and Excel formats
- Database import for .db/.duckdb files with schema management
- Enhanced error recovery with retry logic and "same error twice" detection
- Out-of-memory detection
- Single-file build option for offline use
- Auto-delimiter detection for .txt files (tab, comma, pipe)
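The auto-delimiter detection for `.txt` files can be sketched as follows. This is a hypothetical helper (the function name and approach are illustrative, not BRIQ's actual code): count each candidate delimiter in the first line and pick the most frequent.

```typescript
// Hypothetical sketch of auto-delimiter detection for .txt files.
// Counts candidate delimiters (tab, comma, pipe, as listed above)
// in the first line and returns the most frequent one.
function detectDelimiter(firstLine: string): string {
  const candidates = ["\t", ",", "|"];
  let best = ","; // default to comma when nothing matches
  let bestCount = 0;
  for (const d of candidates) {
    const count = firstLine.split(d).length - 1;
    if (count > bestCount) {
      bestCount = count;
      best = d;
    }
  }
  return best;
}
```

A more robust detector would sample several lines and check that the delimiter count is consistent across them, but the single-line heuristic conveys the idea.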

### AI Provider Support
- OpenAI: gpt-4o-mini, gpt-4.1-nano (requires paid account)
- Google Gemini: gemini-2.0-flash, gemini-2.5-flash-lite, gemini-2.5-flash (free tier available)
- Anthropic: claude-sonnet-4-5 (requires paid account)
- Ollama: local model support

## Data Import/Export

### Import
- CSV, TSV, pipe-delimited (.pipe, .psv), TXT (auto-delimiter detection), Parquet, JSON
- DuckDB database files (.db, .duckdb) imported as schemas
- All data stays in browser memory

### Export
- Individual tables: CSV, pipe-delimited TXT, Parquet
- Full database: ZIP archive with schema.sql + Parquet files + README
- Parquet format is typically 80-90% smaller than CSV
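A minimal sketch of the CSV/pipe-delimited export path (a hypothetical helper, not BRIQ's actual code): serialize rows to delimited text, quoting fields that contain the delimiter, quotes, or newlines per RFC 4180 conventions.

```typescript
// Hypothetical sketch of table export to CSV or pipe-delimited text.
// Fields containing the delimiter, double quotes, or newlines are
// wrapped in quotes, with embedded quotes doubled (RFC 4180 style).
function toDelimitedText(
  rows: Record<string, unknown>[],
  delimiter = ","
): string {
  if (rows.length === 0) return "";
  const headers = Object.keys(rows[0]);
  const escape = (v: unknown): string => {
    const s = String(v ?? "");
    return s.includes(delimiter) || s.includes('"') || s.includes("\n")
      ? `"${s.replace(/"/g, '""')}"`
      : s;
  };
  const lines = [headers.map(escape).join(delimiter)];
  for (const row of rows) {
    lines.push(headers.map((h) => escape(row[h])).join(delimiter));
  }
  return lines.join("\n");
}
```

Parquet export goes through DuckDB itself (e.g. a `COPY ... TO ... (FORMAT PARQUET)` statement), which is why it achieves the much smaller file sizes noted above.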

## Session Management
- Last 10 sessions with content are preserved
- Sessions persist across browser restarts
- Example sessions are always preserved
- Sessions auto-save as you work
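The retention rule above can be sketched as follows. The field names and shape are assumptions for illustration, not BRIQ's actual data model: example sessions are always kept, plus the 10 most recent sessions that have content.

```typescript
// Hypothetical sketch of the session-retention rule: keep all example
// sessions, plus the `limit` most recently modified sessions with content.
interface Session {
  id: string;
  isExample: boolean;
  hasContent: boolean;
  lastModified: number; // epoch ms
}

function sessionsToKeep(sessions: Session[], limit = 10): Session[] {
  const examples = sessions.filter((s) => s.isExample);
  const recent = sessions
    .filter((s) => !s.isExample && s.hasContent)
    .sort((a, b) => b.lastModified - a.lastModified)
    .slice(0, limit);
  return [...examples, ...recent];
}
```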

## Data Privacy
- Data stays in browser (DuckDB-WASM)
- No data uploaded to servers
- Files stored in browser memory only
- AI receives: questions, schema, sample rows (10/table), query results (100 rows max)
- AI does NOT receive: complete datasets, individual files, full table contents
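The payload limits above can be expressed as a small sketch (a hypothetical helper; the shape of the context object is an assumption): sample rows are capped at 10 per table and query results at 100 rows before anything is sent to the AI provider.

```typescript
// Hypothetical sketch of the AI-payload caps described above:
// at most 10 sample rows per table, at most 100 result rows.
const MAX_SAMPLE_ROWS = 10;
const MAX_RESULT_ROWS = 100;

function buildAiContext(
  question: string,
  schema: string,
  samples: Record<string, unknown[]>, // table name -> sample rows
  resultRows: unknown[]
) {
  const cappedSamples: Record<string, unknown[]> = {};
  for (const [table, rows] of Object.entries(samples)) {
    cappedSamples[table] = rows.slice(0, MAX_SAMPLE_ROWS);
  }
  return {
    question,
    schema,
    samples: cappedSamples,
    results: resultRows.slice(0, MAX_RESULT_ROWS),
  };
}
```

The full tables never enter this object, which is what keeps complete datasets from ever leaving the browser.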

## User Guide

### Quick Start
1. Click the API Keys button and enter a key for at least one provider (Google Gemini free tier recommended)
2. Select AI model from dropdown
3. Import data files or use saved example session "Tour de France cycling analysis"
4. Type questions in natural language; the AI generates SQL, executes it, and shows the results

### Interface
- Data Sources panel (left): table structure, schemas, column types, row counts, import/export controls
- Chat area (center): natural language queries and results
- SQL Editor (terminal icon): direct SQL execution, DDL operations
- History dropdown: session management, past 10 sessions

### Tips
- Start with "SUMMARIZE tablename" or "Show first 20 rows" for new datasets
- Upload a data dictionary file for better AI understanding of your data
- Use schema.table syntax for imported databases
- Switch models if results aren't satisfactory

### Chart Types
Line charts, bar charts, scatter plots, and more - just ask in natural language.
