Live Portfolio Analytics - Powered by MCP Servers - Open Source
- Amar Harolikar
Analyze any stock, crypto, metal, or oil symbol vs benchmark using 70+ KPIs & 15+ Charts, AI-powered technicals, and clean PDF + web reports.
Live across 6 interfaces: web apps, ChatGPT, chat agents, Excel (xlwings Lite), and forms.
Try it → quantstats.tigzig.com
Each interface serves a different use case - from rapid-deploy to full-featured agents. Modular architecture makes it easy to plug components into any flow - agentic or not.
Built the backend MCP servers, agent flows, and user interfaces as reusable modular components - easy to plug into different use cases and mix across stacks.
Performance stats powered by Python QuantStats package (by Ran Aroussi, creator of yfinance). Technical chart analysis with Python Finta and Gemini Vision. MCP servers built with Tadata FastAPI-MCP package, web apps with React and NextJS, and MCP enabled agents on n8n & Flowise.
Fully modular and live - clone it, remix it, set up your own stack
MCP Servers are public
- QuantStats MCP
- YFinance MCP
- Technical Analysis MCP
Full build breakdown below
1. WHAT IT DOES - Live Portfolio Analytics Stack
This is a working analytics stack delivering live performance reports, AI-driven technicals, and financial data pulls - powered by MCP, FastAPI, and modular agents.
1. QuantStats Performance vs. Benchmark: For any Yahoo Finance symbol - stocks, crypto, oil, metals. Over 70 metrics, 15+ charts: drawdowns, Sharpe/Sortino ratios, return distributions, correlations - delivered in clean HTML
2. AI Technical Analysis: Chart analysis across daily and weekly timeframes via the Gemini Vision API, delivered as PDF and web reports with structured tables.
3. Finance Data Pull: Extract prices, profiles, and full financials from Yahoo Finance - 150+ fields of profile info, P&L, balance sheet, cash flow. Excel integration via xlwings Lite
2. HOW TO USE - 6 Live Interfaces
The system runs across 6 interfaces - each tailored for different use cases
1. Agentic - Custom UI (NextJS + n8n)
2. Agentic - ChatGPT (Custom GPT)
3. Agentic - Advanced (React-based with full analysis tools)
4. Agentic - Rapid Deploy (Flowise Native)
5. Excel - xlwings Lite
6. Form UI - HTML-JS-Jinja2
Try them here → QuantStats Portfolio Analytics
Ask the agent for guidance or start with a prebuilt prompt.
All support the same analytics setup - just different frontends and feature layers. See next section on interface strategy.
3. INTERFACE STRATEGY - Why 6, and Why Modular
The setup is designed around modular Gen AI-powered components - backend, agent layer, and UI - each one reusable and configurable depending on the use case. Once core processing is in place, it's easy to plug into different interfaces without rebuilding the logic.
The six interfaces aren't just demos - they show real deployment options, from lightweight forms to full-stack apps and AI agents. They support a wide range of use cases - Section 6 on User Interfaces goes into detail on when to use which and the trade-offs involved.
To support this, I built three MCP-FastAPI servers and one standalone FastAPI server. These connect to agents running on n8n and Flowise, and frontends on React and Next.js. All components are connected via standard APIs, making them portable across tools - including third-party platforms.
In practice, modularity isn't always necessary. I sometimes deliver integrated solutions where the UI, logic, and agent live in the same build - faster for simpler use cases. But where reusability or scale is a factor, modular saves time, simplifies updates, and isolates risk.
This isn't about MCP or agents - it's about building practical, reusable analytics solutions that can plug into any interface or automation flow.
All live. Test it, clone it, build your own.
4. ARCHITECTURE - Modular, Component based, Reusable
The full stack is built around reusable components - each layer (frontend, agents, backend) is designed to plug into others with minimal setup. Here's how the architecture breaks down across interfaces and agents.
4.1. Modular - Component based: Why and When?
There are cases where a non-modular setup makes more sense. For example, in one client project I built a lightweight HTML-JS tool for Excel file import and manipulation, bundled with a browser-based SQLite agent and a simple NL-to-SQL chat interface - all integrated in a single app. In that setup, modularity would've just added unnecessary complexity.
But when I see components - UI, backend logic, agents - that can be reused, I default to separating them out. In another case, I built a custom UI connected to a Flowise agent and backend SQL service. Later, the client needed a second agent setup pointing to a different database. All I had to do was update the agent config and env variable - no UI rebuild, no backend changes.
Modular setups also help with debugging, iteration, and access control. I can isolate issues, restrict backend exposure, and upgrade parts independently.
I started building my technical analysis stack directly inside Excel with xlwings Lite. As it evolved, I split core processing into an MCP-FastAPI server - now the same logic runs across all UIs: web, forms, agents, GPT, Excel.
None of this is new - tech has done it for decades. Just sharing how modularity speeds up my own analytics builds, and when I choose to keep it simple.
4.2. Frontend options
Covered in detail in a separate section with notes on when to use what, and trade-offs based on my experience.
UI options include: Next.js, React, ChatGPT, form-based UI, Flowise native UI, and Excel
React and NextJS UIs are set up as reusable modules - they can connect to agent flows on Flowise, n8n, or any API-accessible setup. Just update the API endpoint in the env variable, match input/output formats, and it's live.
In the current setup, the n8n agent connects to the NextJS UI, and the Flowise agent connects to the React app. Both agents are MCP enabled and are in turn connected to MCP Servers.
4.3. Agent setups
The agent layer acts as the glue between UIs and backend MCP servers - handling workflows, orchestration, and logic routing. Here's how the n8n and Flowise setups are configured.
n8n and Flowise AI support webhooks and API endpoints - easy to plug into any interface and great for keeping the Agent layer separate.
Both include MCP Clients with SSE support for remote MCP Servers. Just drop in the MCP Server URL and you're set. In the current setup, agents connect to multiple MCP Servers and database tools (Flowise).
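In the live setup, n8n and Flowise handle this connection through their native MCP Client nodes. For anyone wiring it up outside those tools, here is a rough sketch using the official Python MCP SDK - the SDK usage and placeholder URL are my own illustration, not part of the actual agent flows:

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

# Placeholder - substitute the public SSE endpoint of any of the MCP servers
MCP_SSE_URL = "https://your-mcp-server.example.com/mcp"

async def list_server_tools():
    # Open an SSE transport to the remote MCP server
    async with sse_client(MCP_SSE_URL) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(list_server_tools())
```

Conceptually, this is what the n8n and Flowise MCP Client nodes do: connect over SSE, list the server's tools, and expose them to the agent.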
Both are production-grade tools built for complex agentic workflows (and, in n8n's case, non-agentic ones too). Flowise supports Sequential Agents with LangGraph - enabling advanced orchestration with routing and step-wise execution. n8n is superb with its Agent node, wide platform integration, HTTP node for API calls, and a solid set of routing and processing nodes.
4.4. Core engine
This is the processing brain of the system - everything from calculations to report formatting runs through this Python-based backend, wrapped in modular MCP-FastAPI services.
QuantStats by Ran Aroussi (creator of yfinance)
yfinance for market data
Finta for technical indicators
Matplotlib for charts
Gemini Vision API for visual chart analysis
ReportLab for PDF formatting
4.5. Backend
Three integrated MCP + FastAPI servers and one standalone FastAPI server (details in next section). All Python logic is wrapped in FastAPI and mounted on an MCP Server in a single deployment - which also serves the form UI.
I keep all reusable logic on FastAPI - easy to automate and connect across UIs, from ChatGPT to Excel to any custom UI. Tadata's FastAPI-MCP package makes it simple to mount MCP on any existing FastAPI setup.
Connections
n8n and Flowise agents connect to the MCP server via their native MCP Client nodes
React and NextJS UIs connect to agents via API
ChatGPT connects to FastAPI endpoints on the same integrated MCP-FastAPI server via a Custom Actions OpenAPI schema
Form UI connects to FastAPI endpoints on the same MCP-FastAPI server
Excel connects to FastAPI endpoints through xlwings Lite
5. MCP SERVERS
The backend is split into four focused processing services - each one handles a specific piece of the analytics workflow, from financial data to report generation. All are exposed via MCP or API, built for reuse and quick integration.
There are three custom MCP Servers and one standalone FastAPI Server. The MCP servers are public - just plug and play. Add the URL to any SSE-enabled MCP client. Both n8n and Flowise have native nodes for this.
The servers are integrated MCP-FastAPI servers. I used Tadata's FastAPI-MCP package to mount MCP on top of FastAPI - just a few lines of code - brilliant package. A single deployment runs MCP + FastAPI + Form UI (HTML-JS-Jinja2). Works cleanly with both agentic and non-agentic setups.
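For context, here is roughly what that mounting looks like - a minimal sketch based on the FastAPI-MCP package's documented pattern, with illustrative names rather than the exact server code:

```python
from fastapi import FastAPI
from fastapi_mcp import FastApiMCP

app = FastAPI(title="QuantStats Analytics")

@app.get("/health")
async def health():
    # Regular FastAPI endpoints are exposed as MCP tools once mounted below
    return {"status": "ok"}

# Wrap the existing FastAPI app and serve MCP from the same deployment
mcp = FastApiMCP(app, name="QuantStats MCP")
mcp.mount()
```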
All the user interfaces connect to these integrated MCP-FastAPI servers - explained in the User Interfaces section
5.1. QuantStats MCP Server
This is an MCP-FastAPI wrapper over the QuantStats package - connects to any frontend via MCP or API. Takes two Yahoo Finance symbols (one for performance, one for benchmark) plus a time range. Returns a formatted HTML report using the QuantStats package. Tables and charts are auto-generated directly by the package.
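Under the hood, the heavy lifting is done by the QuantStats package itself. A minimal sketch of the kind of call the server wraps - symbols, dates, and output path are illustrative:

```python
import quantstats as qs

# Daily returns for the target symbol (any Yahoo Finance ticker works)
returns = qs.utils.download_returns("AAPL")
returns = returns.loc["2023-01-01":"2024-12-31"]

# Full HTML tear sheet: 70+ metrics and 15+ charts vs. the benchmark
qs.reports.html(returns, benchmark="SPY", output="aapl_vs_spy.html", title="AAPL vs SPY")
```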
The MCP-enabled agents connect to the MCP server; the rest of the UIs connect to the FastAPI endpoints.
Detailed info, docs, and source: QuantStats MCP Server
5.2. Technical Analysis MCP Server
This is a processing server that runs a multi-step workflow to generate technical analysis reports in PDF and web format. Takes a Yahoo Finance symbol and time range, returns AI-generated reports.
Workflow steps:
Connects to Yahoo Finance MCP-FastAPI server to pull price data
Converts daily prices to weekly dataframe
Calculates technical indicators using Finta
Connects to Gemini Vision API to get chart and technical analysis
Connects to ReportLab FastAPI server for generating the final PDF and Web reports.
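A condensed sketch of the middle steps - the daily-to-weekly resample and the Finta indicator pass. Column handling and the indicator choices here are illustrative; the actual server computes a wider set:

```python
import yfinance as yf
from finta import TA

# Pull daily OHLCV bars for the symbol
df = yf.Ticker("AAPL").history(start="2024-01-01", end="2024-12-31")
df.columns = [c.lower() for c in df.columns]  # Finta expects lowercase ohlcv column names

# Resample daily bars into weekly bars
weekly = df.resample("W").agg(
    {"open": "first", "high": "max", "low": "min", "close": "last", "volume": "sum"}
)

# Example indicators via Finta
weekly["sma_20"] = TA.SMA(weekly, 20)
weekly["ema_50"] = TA.EMA(weekly, 50)
weekly["rsi_14"] = TA.RSI(weekly)
```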
The MCP-enabled agents connect to the MCP server; the rest of the UIs connect to the FastAPI endpoints.
Detailed info, docs and source: Technical Analysis MCP Server
5.3. Yahoo Finance MCP Server
This is an MCP-FastAPI wrapper over the yfinance package. Connects to any frontend via MCP/ API. Takes a Yahoo Finance symbol and returns:
Price data (JSON) - requires date range
Company profile with 150+ fields
Full financials: P&L, balance sheet, cash flow, and quarterly breakdowns
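A minimal sketch of the yfinance calls this server wraps (the symbol is illustrative):

```python
import yfinance as yf

ticker = yf.Ticker("MSFT")

# Price data for a date range (served back as JSON)
prices = ticker.history(start="2024-01-01", end="2024-06-30")

# Company profile - the 150+ descriptive fields
profile = ticker.info

# Full financials, including quarterly breakdowns
income = ticker.financials
balance_sheet = ticker.balance_sheet
cash_flow = ticker.cashflow
quarterly_income = ticker.quarterly_financials
```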
The MCP-enabled agents connect to the MCP server; the rest of the UIs connect to the FastAPI endpoints.
Detailed info, docs and source: Yahoo Finance MCP Server
5.4. ReportLab Markdown to PDF-HTML FastAPI Server
This is a FastAPI processing server for generating custom formatted reports. PDF and HTML outputs are customized for this use case but can be adapted for others. ReportLab offers deep customization for PDFs, easily replicated in web reports using standard HTML-JS. Workflow:
Convert Markdown to HTML (using markdown)
Parse HTML with BeautifulSoup for structure
Use ReportLab to build styled PDF
Style HTML output to match PDF
Reference charts from static folder
Auto-clean old files (older than 24 hrs) using FastAPI startup events + Starlette background tasks
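A stripped-down sketch of the markdown-to-PDF core of this pipeline. File names, styles, and element handling are illustrative - the real server adds custom styling, chart references, and the matching HTML output:

```python
import markdown
from bs4 import BeautifulSoup
from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import Paragraph, SimpleDocTemplate, Spacer

def markdown_to_pdf(md_text: str, pdf_path: str) -> None:
    # Step 1: Markdown -> HTML
    html = markdown.markdown(md_text, extensions=["tables"])

    # Step 2: parse the HTML for structure
    soup = BeautifulSoup(html, "html.parser")

    # Step 3: map each element to a styled ReportLab flowable
    styles = getSampleStyleSheet()
    flowables = []
    for element in soup.find_all(["h1", "h2", "h3", "p", "li"]):
        if element.name == "h1":
            style = styles["Heading1"]
        elif element.name in ("h2", "h3"):
            style = styles["Heading2"]
        else:
            style = styles["BodyText"]
        flowables.append(Paragraph(element.get_text(), style))
        flowables.append(Spacer(1, 8))

    # Step 4: build the styled PDF
    SimpleDocTemplate(pdf_path, pagesize=A4).build(flowables)

markdown_to_pdf("# Sample Report\n\nHello from the pipeline.", "report.pdf")
```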
I've set up a separate endpoint for technical analysis, with custom formatting for both PDF and HTML outputs. The same FastAPI server also includes a generic endpoint that takes Markdown content and returns a PDF with simpler formatting. It's deployed as an HTML-JS-Tailwind form UI that calls FastAPI endpoints - all served from a unified FastAPI server using Jinja2 templates. So it's a single FastAPI server supporting a custom format as well as a generic markdown to PDF converter: Markdown to PDF Converter
6. USER INTERFACES
Each UI is connected to the same backend and agent layer - just optimized for different workflows, tools, or user preferences. All of these are live, working apps built on top of the same backend stack. The UI is just the entry point: some are lightweight and fast to deploy, others offer more control or customization.
In the section below, I've shared my experience working with each type of interface - when to pick which and the trade-offs involved. Everything is live at rex.tigzig.com, along with source code and build guides.
6.1. Custom GPT
Custom GPT is usually my first choice when a UI is needed. You get an out-of-the-box interface, embedded agent, and Python code execution. Just supply a JSON OpenAPI schema to connect to any backend. Faster and cleaner than building even basic HTML-JS forms.
Limitations: no custom UI, and the feature set is narrower than a React or NextJS build. You'll need a Plus account ($20/month) to create a Custom GPT, though free users can still access it with rate limits.
Live setup connects to QuantStats and Technical Analysis MCP-FastAPI servers via FastAPI endpoints.
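FastAPI auto-generates the OpenAPI schema that Custom GPT Actions consume, so wiring a GPT to one of these servers is mostly copy-paste. A small sketch of pulling the schema - the base URL is a placeholder:

```python
import json
import requests

# Placeholder - point at the public MCP-FastAPI deployment
BASE_URL = "https://your-quantstats-server.example.com"

# FastAPI serves its OpenAPI schema at /openapi.json by default
schema = requests.get(f"{BASE_URL}/openapi.json", timeout=30).json()

# Save it, then paste into the Custom GPT "Actions" schema editor
with open("quantstats_actions_schema.json", "w") as f:
    json.dump(schema, f, indent=2)
```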
QuantStats GPT link + Schema: Custom GPT for Portfolio Analysis
Modular setups: See more live and open-source examples of Custom GPT setups in the Custom GPT section on rex.tigzig.com - all of them connected to reusable backend components. These include GPTs connected to Supabase, extractors for Yahoo Finance data, and automated workflows using n8n (PDF and Slides generator), as well as Custom GPTs that connect to any DB by providing credentials and run custom Python code inside GPT for automated file processing. As an example, the DB connector GPTs connect to databases via a shared FastAPI connector layer that handles SQL execution. This connector is designed as a reusable backend service and is used across all TEXT2SQL and natural language to SQL apps on the REX site. It separates DB access and logic from the GPT layer - making it easy to plug into different UIs and reuse across multiple apps.
6.2. Flowise native UI
Flowise native UI is a solid option when GPT isn't feasible. No UI build required. You get a ready-to-use chat interface that supports complex agent flows using LangGraph Sequential Agents, custom tools, and APIs. n8n too provides a similar native UI.
The QuantStats Agent is connected to the same MCP-FastAPI servers via the MCP Client node. The agent also supports remote Postgres/MySQL DB access (via a custom FastAPI connector I built), Python charting, and statistical analysis through the e2B Code Interpreter.
Live Agent: Flowise Agent UI
Schemas and source
Flowise schema → docs folder in NextJS repo
Full tool schemas and DB server code → rex.tigzig.com → Agents → REX — Advanced Analyzer → Docs
Modular setup: The Flowise (and n8n) agent setup works well as a modular component to keep the agent layer separate. It can run with its native UI or be easily connected to any custom frontend. In the QuantStats example, I've connected this exact same agent to the REX React app (shared below). Another example is the DeepSeek multi-step advanced database analyst agent - available both with the native Flowise UI and as part of the REX Advanced Analyzer React app. All open source, all live on rex.tigzig.com under the AI Agents section. Source code available on the site under Docs.
6.3. Excel integration - xlwings Lite
xlwings Lite (by Felix Zumstein, creator of xlwings) is a lightweight Excel add-in that runs full Python workflows inside Excel - no local Python install needed. It comes with a built-in editor, console, environment variables, and deep Excel integration.
One of the big benefits is that it allows easy connectivity to any CORS-enabled backend API service. This fits well with my setup - since I use FastAPI servers extensively - and also makes it easy to connect to LLM/AI endpoints and third-party server providers.
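As a rough illustration of that pattern, here is a sketch of an xlwings Lite script calling a backend API and writing the result into the sheet. I'm assuming xlwings Lite's script decorator and its bundled requests support here, and the endpoint and response shape are hypothetical:

```python
import requests
import xlwings as xw
from xlwings import script

# Hypothetical endpoint - the actual Yahoo Finance FastAPI server URL would go here
PRICE_ENDPOINT = "https://your-yfinance-server.example.com/prices"

@script
def pull_prices(book: xw.Book):
    # Read the symbol typed into the active sheet
    sheet = book.sheets.active
    symbol = sheet["A1"].value

    # Call the CORS-enabled FastAPI backend
    resp = requests.get(PRICE_ENDPOINT, params={"symbol": symbol}, timeout=60)
    rows = resp.json()  # assumed to be a list of {"date": ..., "close": ...} records

    # Write the results back into Excel
    sheet["B1"].value = [["Date", "Close"]] + [[r["date"], r["close"]] for r in rows]
```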
Live Excel app: xlwings Lite — Technical Analysis
The current app supports technical analysis; QuantStats integration is planned. It connects to FastAPI endpoints via xlwings Lite, with parts of the technical analysis logic embedded directly for in-Excel processing.
This app connects to two FastAPI endpoints: the Yahoo Finance MCP-FastAPI server and the ReportLab FastAPI server. The technical analysis code was originally embedded in the app; I later pulled it out and consolidated it into a standalone MCP-FastAPI server so it could be used across other applications as well.
Connect to Databases from Excel with xlwings Lite: This modular Excel app connects to any remote Postgres or MySQL database via the same FastAPI SQL connector layer mentioned earlier. It supports pulling full tables, table metadata, random N records, first N records, and running custom SQL queries - right inside Excel. All of this is powered by a reusable SQL connector component used across all my apps. Excel Database Connector App - xlwings Lite
Yahoo Finance Data Extractor: Another modular setup. Extract detailed profile, price, and financial info from Yahoo Finance directly into Excel - no API key needed. This app connects to the same backend Yahoo Finance MCP-FastAPI server via standard API endpoints. Live app and detailed docs available at: Yahoo Finance Data Extractor
6.4. Simple Forms
In many cases, a simple form works best. I typically use Jinja2 templates (HTML-JS-Tailwind) for tight FastAPI integration. Sometimes Flask. Works well for fairly complex UIs too, with full JavaScript access and server-side rendering - env vars and routes stay hidden. For lightweight use cases, I also use single-page HTML-JS forms. Codebase stays simple and easy to maintain.
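A minimal sketch of the FastAPI-plus-Jinja2 pattern these form UIs follow - the template name, route, and form fields are illustrative:

```python
from fastapi import FastAPI, Request
from fastapi.templating import Jinja2Templates

app = FastAPI()
templates = Jinja2Templates(directory="templates")

@app.get("/")
async def show_form(request: Request):
    # Server-side render of the HTML-JS-Tailwind form; env vars and routes stay on the server
    return templates.TemplateResponse("quantstats_form.html", {"request": request})

@app.post("/analyze")
async def analyze(request: Request):
    # The form posts symbols and a date range; the same app runs the analytics
    form = await request.form()
    return {"symbol": form.get("symbol"), "benchmark": form.get("benchmark")}  # placeholder
```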
QuantStats Form: QuantStats Form App
Technical Analysis Form: Technical Analysis Form App
Both of these form UIs are built with HTML-JS-Tailwind and served using Jinja2 templates. Best part: a single MCP-FastAPI deployment runs all three components - MCP server, FastAPI server, and the form UI. Separate endpoints handle API and MCP access, and the UI is deployed right on the root. Source code is on the app pages - same repo as the corresponding MCP servers (deployed together).
The Data section on rex.tigzig.com includes a range of other form-based UIs - both AI-enabled and non-AI processors. Some are built with simple HTML-JS, others with Jinja2/Flask. These include: an AI-enabled mutual fund portfolio processor, a file-to-text converter (MarkitDown for LLM input), PDF to Markdown, and Markdown to PDF tools. Some are non-modular, but most are modular with a separate FastAPI layer that can be connected to any other application. All are live, open source, and have full source code available on the site under Docs.
6.5. Going full stack? React / NextJS
For complex apps and a polished UI, I use React or NextJS. The REX-3 Co-Analyst app has the QuantStats Agent on a separate tab, connected to the same Flowise agent from earlier - just wrapped inside a React interface.
REX-3 app includes:
Connect to any Postgres or MySQL DB
6 advanced multi-step reasoning agents
TEXT2SQL (natural language to SQL)
Temporary Postgres DBs
Python charts and stats
CSV/TXT upload (direct to DB)
Export from DB in CSV/TXT format
Data structure + quick analysis
Doc updates and process logs
Simple auth
Upload sample files and run advanced analysis
Modular Design: The REX-3 React app is connected to 7 separate agents on Flowise. These agents can be connected to any API-accessible backend - whether Flowise, n8n, a third-party platform, or a custom build. All that's required is to update the agent API endpoint in the environment variable and match the input/output formats.
Trade-off: React/JS frameworks offer immense flexibility, but with higher build effort.
Live app: REX-3 Full App. Source code on Docs section on the app site.
6.6. NextJS
The biggest benefit of NextJS: env vars and API routes stay private, and it supports server-side rendering. That's a major security benefit. The NextJS portfolio analysis agent uses a leaner UI, but it's still set up in a modular way - it can be connected to any API-enabled agent backend by just changing the environment variable.
Trade-offs are the same as with React apps.
Note: Vercel serverless functions time out at 60 seconds (300s on pro plan), so longer API calls need workarounds.
App and code: NextJS QuantStats Agent
7. DEPLOYMENTS
NextJS and React apps are deployed on Vercel. All MCP + FastAPI servers, along with n8n and Flowise, run on a Dockerized setup via Coolify, deployed on a Hetzner VPS. Here's my post on how I use it, along with resources:
Top Resources for Gen AI Applications: Coolify - Self Hosted alternative to Render / Heroku
8. AI CODER: CURSOR
I use Cursor as my AI coding assistant across all builds - including this one. Every part of this stack - from UIs to FastAPI servers - was written and iterated using Cursor. My post on how I use it, along with resources:
Top Resources for Gen AI Applications - AI Coder: Cursor
Fine print: This is not investment research or financial advice. It's a live working example showing how to stitch together AI, analytics, and infra into real outputs. The logic and analysis structure is based on a general-use setup - fully modifiable to fit your own requirements. Source code, backend, and app stack are open and adaptable. AI and humans can both make mistakes - always validate results.