
GenAI App | LLM Analytics Assistant: Simplifying Data Transformation & Insights. AWS & Azure MySQL DW Example

Updated: Jan 23




NEW 

My open-source platform with a ton of micro-apps and tooling for AI-driven analytics

Text-to-SQL / connect to ANY data warehouse on the fly / direct file upload to a data-warehouse table / create a temporary database on the fly / Python charts / statistical analysis

Real-time voice connected to the database - OpenAI's new WebRTC API & Eleven Labs

And more ....


3rd part of the series on LLM Analytics Assistant Apps


Demonstrating data transformation and analysis on AWS MySQL via an LLM app. The app is deployed on my public website (outside the GPT Store, in an access-controlled section).


I cover 3 areas:


𝗟𝗟𝗠 𝗔𝗣𝗣 𝗗𝗘𝗠𝗢

𝗗𝗮𝘁𝗮 𝗪𝗿𝗮𝗻𝗴𝗹𝗶𝗻𝗴 & 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀: prototype customer and transaction tables with 1 to 10 million records, creating summaries and merging data into new tables with additional variables... analyzing and building customer profiles. All instructions in natural language... sometimes fuzzy and unclear... and sometimes with typos...
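To make the summarize-and-merge step concrete, here is a minimal sketch of the kind of SQL an LLM might generate from a fuzzy instruction like "build a customer profile table with total spend". The table and column names are illustrative, not the actual schema from the demo, and SQLite stands in for the AWS/Azure MySQL warehouse:

```python
import sqlite3

# Hypothetical schema standing in for the prototype customer/transaction tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE transactions (txn_id INTEGER, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'EU'), (2, 'US');
INSERT INTO transactions VALUES (10, 1, 25.0), (11, 1, 75.0), (12, 2, 40.0);
""")

# Summary-and-merge SQL of the sort an LLM could emit from a natural-language
# instruction: aggregate transactions per customer into a new profile table.
cur.executescript("""
CREATE TABLE customer_profiles AS
SELECT c.customer_id,
       c.region,
       COUNT(t.txn_id)            AS txn_count,
       COALESCE(SUM(t.amount), 0) AS total_spend
FROM customers c
LEFT JOIN transactions t ON t.customer_id = c.customer_id
GROUP BY c.customer_id, c.region;
""")

rows = cur.execute(
    "SELECT customer_id, txn_count, total_spend FROM customer_profiles "
    "ORDER BY customer_id"
).fetchall()
print(rows)  # [(1, 2, 100.0), (2, 1, 40.0)]
```

The same pattern scales to the million-row tables in the demo, since the aggregation runs inside the database rather than in the app.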



𝗕𝗔𝗦𝗜𝗖 𝗔𝗥𝗖𝗛𝗜𝗧𝗘𝗖𝗧𝗨𝗥𝗘

Similar to the one I am currently using on a live client project.


𝗟𝗟𝗠 𝗔𝗽𝗽 𝗕𝘂𝗶𝗹𝗱 𝗮𝗻𝗱 𝗨𝗜: using Flowise AI. Open-source. Allows for rapid deployment. Powerful capabilities. Many other options - e.g. a custom build with React/Next.js that can link up to company SSO and authentication.


𝗠𝗼𝗱𝗲𝗹 𝗖𝗵𝗼𝗶𝗰𝗲: trade-offs between pricing, speed, response quality, and security/privacy. Premium model vs. open-source on-prem solution.


𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝗙𝗹𝗲𝘅𝗶𝗯𝗶𝗹𝗶𝘁𝘆: FastAPI processing server. Separate from the main system, making it reusable with different UI apps and backend databases.


𝗖𝗢𝗦𝗧 𝗖𝗢𝗡𝗦𝗜𝗗𝗘𝗥𝗔𝗧𝗜𝗢𝗡𝗦

𝗖𝗼𝘀𝘁 𝗘𝘅𝗮𝗺𝗽𝗹𝗲: ran 478 API requests/queries over 10 hours with GPT-3.5, costing around $1... working with the 1-10 million record dataset referred to above... I also discuss optimization strategies...


𝗖𝗵𝗼𝗼𝘀𝗶𝗻𝗴 𝗟𝗟𝗠 𝗺𝗼𝗱𝗲𝗹𝘀: depends on the use case. E.g. a multi-LLM option... for difficult tasks, use an expensive model; for simpler tasks, use a lower-cost model... or an on-prem solution for specific use cases.
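The multi-LLM idea can be sketched as a simple router. The model names and the keyword-based difficulty heuristic are placeholders of my own, not the post's actual implementation (a production router might use a classifier or the cheap model's own confidence instead):

```python
# Minimal multi-LLM router sketch: send complex analytical tasks to an
# expensive model and simple lookups to a low-cost one.
CHEAP_MODEL = "small-model"      # placeholder name
PREMIUM_MODEL = "large-model"    # placeholder name

# Crude difficulty heuristic -- purely illustrative.
HARD_HINTS = ("join", "window function", "cohort", "forecast")

def pick_model(task: str) -> str:
    lowered = task.lower()
    if any(hint in lowered for hint in HARD_HINTS):
        return PREMIUM_MODEL
    return CHEAP_MODEL

print(pick_model("count rows in customers"))         # small-model
print(pick_model("build a cohort retention table"))  # large-model
```

The routing decision happens before any tokens are spent, so easy queries never touch the premium model's pricing.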


𝗙𝘂𝗹𝗹 𝗗𝗮𝘁𝗮 𝗜𝗻𝗴𝗲𝘀𝘁𝗶𝗼𝗻 by the LLM is not always necessary... it can significantly increase costs... potentially by 100 times or more. For many use cases, processing can be done separately, with the LLM only passing SQL queries/Python commands.
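A sketch of that "LLM passes SQL only" pattern: the model never sees the raw rows, only the question and the one-row answer. The `llm_generate_sql` stub stands in for a real model call, and the schema is a made-up example:

```python
import sqlite3

def llm_generate_sql(question: str) -> str:
    # In the real app an LLM produces this from the question;
    # hardcoded here purely for illustration.
    return "SELECT COUNT(*), AVG(amount) FROM transactions"

def answer(question: str, conn) -> tuple:
    sql = llm_generate_sql(question)
    # Only the SQL string and the tiny result cross the LLM boundary;
    # the million-row table stays inside the database.
    return conn.execute(sql).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (amount REAL)")
conn.executemany("INSERT INTO transactions VALUES (?)", [(10.0,), (30.0,)])
print(answer("How many transactions, and their average value?", conn))
# (2, 20.0)
```

Whether the table holds two rows or ten million, the token cost is the same, which is where the 100x savings comes from.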


𝗦𝗽𝗹𝗶𝘁 𝗪𝗼𝗿𝗸𝗳𝗹𝗼𝘄 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵: for scenarios requiring full data ingestion, split the workflow into multiple modules. The LLM directly ingests only the smallest necessary slice of data... the rest is processed separately.


𝗨𝗣𝗖𝗢𝗠𝗜𝗡𝗚 𝗩𝗜𝗗𝗘𝗢𝗦 𝗔𝗡𝗗 𝗣𝗢𝗦𝗧𝗦

Currently preparing detailed tutorials and step-by-step guides covering code, tips, and leveraging GPTs to develop apps. In future videos and posts, I will also cover areas like: processing with on-prem solutions, multiple-LLM approaches, segregation of Python processing vs. MySQL processing, machine learning model builds, selective access, and more.




 
 