---
title: "Google - The old edge is back. By Dec '24, in AI, I had written Google off. Now, the balance has shifted"
slug: google-the-old-edge-is-back-by-dec-24-in-ai-i-had-written-google-off-now-the-balance-has-shif
date_published: 2025-09-23T13:13:57.341Z
original_url: https://www.tigzig.com/post/google-the-old-edge-is-back-by-dec-24-in-ai-i-had-written-google-off-now-the-balance-has-shif
source: migrated
processed_at: 2025-12-03T13:00:00.000Z
---

# Google - The old edge is back. By Dec '24, in AI, I had written Google off. Now, the balance has shifted

Horrible models. Gemini 2.0 Flash Experimental was a disaster – a more-than-50% error rate in my schema-detection and structured-output tests.

Then in Jan '25, they pushed it into production as Gemini-2.0-Flash-001.

I laughed. Tested it anyway.

* First run: 100% accuracy → luck.
* Second run: 100% → coincidence.
* Third run: 100% → that's a trend.
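For context, the schema-detection tests above amount to asking the model for strictly structured JSON. A minimal sketch of what such a request body looks like against the Gemini REST API's `generateContent` endpoint – the prompt and column schema here are illustrative placeholders, not my actual test suite:

```python
import json

# Hypothetical column schema the model must fill in for an uploaded data
# sample. Field names are illustrative, not the real test suite's schema.
response_schema = {
    "type": "OBJECT",
    "properties": {
        "columns": {
            "type": "ARRAY",
            "items": {
                "type": "OBJECT",
                "properties": {
                    "name": {"type": "STRING"},
                    "dtype": {"type": "STRING"},  # e.g. "string", "float", "date"
                },
                "required": ["name", "dtype"],
            },
        },
    },
    "required": ["columns"],
}

# Request body for POST .../models/gemini-2.0-flash-001:generateContent
payload = {
    "contents": [{
        "role": "user",
        "parts": [{"text": "Detect the column schema of this CSV sample:\n"
                           "scheme,units,nav\nABC Fund,102.5,54.31"}],
    }],
    "generationConfig": {
        # Constrains the model to emit JSON matching response_schema.
        "responseMimeType": "application/json",
        "responseSchema": response_schema,
    },
}

body = json.dumps(payload)  # ready to send with any HTTP client
```

The point of the `responseSchema` constraint is that scoring becomes binary: the reply either parses and matches, or the run counts as an error.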

**That was the instant that Google changed.**

I even published my first post on it: [LinkedIn post](https://www.linkedin.com/posts/amarharolikar_gemini-ai-schema-detection-activity-7270621175974062080-9d6m)

And just for context – this wasn't casual testing, and I've used Google from day one: my first investment-analysis webpage went live the same month Google itself did.

Today, approx. 20% of my workflow is Google-powered.

## Can Gemini become the top LLM, beating Claude Sonnet-4?

A year back I would have laughed. Now – very likely.

If Google gets this right, it might reign over AI the way it does over Search – lawsuits and congressional committees notwithstanding.

**The old Google is back.**

As much as I love them, I wouldn't want to be in Microsoft, OpenAI, or Anthropic's shoes at this point in time.

## 2.0-Flash is like the old Nokia 3310

I've tested it across workflows. Schema detection + structured output in my live Mutual Fund Portfolio Processor (India) still runs on Gemini 2.0 Flash. GPT-4.1 and Sonnet-4 both miss at times. Flash hasn't failed.

Try it live: [app.tigzig.com/mf-files-ai](https://app.tigzig.com/mf-files-ai)
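Part of why "hasn't failed" is easy to measure here is that structured output either parses and matches the expected shape or it doesn't. A hedged sketch of that kind of check – the `columns`/`name`/`dtype` fields are hypothetical, not the processor's real scoring harness:

```python
import json

def schema_output_ok(raw: str, required=("name", "dtype")) -> bool:
    """Return True if the model's reply parses as JSON and every detected
    column carries the required fields. Illustrative check only."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    cols = data.get("columns")
    if not isinstance(cols, list) or not cols:
        return False
    return all(isinstance(c, dict) and all(k in c for k in required)
               for c in cols)

good = '{"columns": [{"name": "nav", "dtype": "float"}]}'
bad = '{"columns": "nav, units"}'
print(schema_output_ok(good), schema_output_ok(bad))  # True False
```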

This model is like the old Nokia 3310 – low-cost, high-performance. Break it, drown it, throw anything at it: it keeps working. And it has a huge free tier.

## Google kept shipping

* **AI Studio** → I built 70% of my Quant Reporting Suite UI there (rest with Cursor). Live at [quants-suite.tigzig.com](https://quants-suite.tigzig.com/)
* **Gemini CLI** → 75K GitHub stars. I use it for FastAPI servers, xlwings Lite, simple web apps. [github.com/google-gemini/gemini-cli](https://github.com/google-gemini/gemini-cli)
* **Flash-2.5 & Pro-2.5** → solid upgrades (Pro is a token guzzler).
* **NotebookLM** → my YouTube time-saver (auto action items + guides). [notebooklm.google](https://notebooklm.google/)
* **Google Search AI Mode** → very new. Has replaced Perplexity & ChatGPT for my research tasks. Blazing fast.
* **Opal** → for mini-apps + n8n-like workflows. [opal.withgoogle.com](https://opal.withgoogle.com/landing/)
* **Database Toolbox** → web layer to connect agents to DBs. This could replace my custom connector. [Google Cloud Blog](https://cloud.google.com/blog/products/ai-machine-learning/announcing-gen-ai-toolbox-for-databases-get-started-today)
* **Code Interpreter** → sandboxed Python for LLM. [AI Studio Code Interpreter](https://aistudio.google.com/app/prompts/new_chat/code_interpreter)
* **URL Context** → fetch and analyze live pages. [AI Studio URL Context](https://aistudio.google.com/app/prompts/new_chat/url_context)
* **Google ADK** (Agent Development Kit)
* **Google LangExtract** → structured output from long docs. [Vertex AI LangExtract](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/langextract/overview)
* **Gemini with browser use** → [AI Studio Browser](https://aistudio.google.com/app/prompts/new_chat/browser)

Performance charts, technicals & CAGR reports → built with my open-source Portfolio Analysis tools at [quants.tigzig.com](https://quants.tigzig.com/). Valuation metrics → Yahoo Finance.
