Internal teams needed advanced insights from console usage data, but existing tools created significant barriers. Accessing analytics required technical knowledge of SQL, data engineering, and complex query construction. The legacy tool was restrictive and couldn't support complex analysis requiring sophisticated data aggregations. This bottleneck prevented product managers, designers, and leadership from independently tracking feature usage and making data-driven decisions.
Goal: Build a conversational AI-powered analytics tool that enables anyone to explore console usage data through natural dialogue, progressively refining their questions and discovering insights they wouldn't have known to ask for initially—all powered by a proprietary semantic data model that achieves 90% query accuracy.
Product managers, design leadership, and designers needed specific console usage insights to track feature performance and validate product decisions. However, the existing analysis tools created insurmountable barriers that prevented non-technical team members from accessing the data they needed. Early attempts at AI-powered solutions achieved less than 20% query accuracy, making them unreliable. Even when technical barriers could be overcome, users struggled to formulate specific enough questions to get meaningful insights.
Accessing console usage data required proficiency in SQL query writing, understanding of complex database schemas with non-intuitive labeling, and knowledge of data engineering principles. The database had parent-child relationships between fields and required combining multiple fields to generate valuable insights. Non-technical team members were completely blocked from independent analysis.
Even when AI translation was attempted, users struggled to formulate specific enough questions upfront. Most people didn't know how to describe the data they wanted with sufficient precision or weren't familiar with what data was available. This created barriers even when the technical SQL translation worked correctly.
Teams depended on data engineering resources for routine analysis requests. This created delays in getting critical insights and slowed down product iteration cycles, particularly when tracking post-launch feature usage.
Initial attempts at AI-powered natural language to SQL translation achieved less than 20% accuracy. The complex database schema with non-obvious field relationships made it nearly impossible for AI systems to generate correct queries without deep semantic understanding of the data structure.
Why it mattered: Data-driven decision making requires democratized access to insights. When only technical specialists can access analytics, product teams lose agility and rely on intuition rather than evidence. The lack of self-service capabilities created a fundamental barrier to informed product development.
As a solo project with full ownership from concept to implementation, I approached this challenge by first deeply understanding the user friction, then designing an intuitive interface that leverages AI to eliminate technical barriers entirely.
I conducted a self-study by documenting my own attempts to use the available data analysis tools. This hands-on approach let me map the complete process required to obtain advanced analysis, identify the exact pain points at each step, and determine which types of analysis were impossible with existing tools. Initial testing of a one-shot query approach surfaced a critical insight: users struggled to formulate questions specific enough to get the data they needed. Most users didn't know how to describe what they wanted to see with sufficient precision.
Based on research findings, I made the strategic decision to design a conversational AI experience rather than a one-shot query system. This approach allows users to explore data and insights beyond their initial question, creating opportunities for discovery and deeper analysis. The conversational model addresses a fundamental user need—most people don't know exactly how to ask for data upfront, but they can recognize valuable insights when they see them and ask intelligent follow-up questions. Rather than focusing solely on query accuracy, I designed the experience around the narrative that emerges from the data—what story does this information tell?
Built using a three-layer architecture: (1) Conversational Interface Layer using React with multi-turn context maintenance, (2) AI Processing Layer using Amazon Bedrock with a carefully engineered system prompt containing the semantic data model, and (3) Data Layer using Amazon Redshift with optimized query execution. The breakthrough was developing a comprehensive semantic data model through a three-step process: running discovery queries against the database, creating translated semantic metadata that maps natural language concepts to database elements, and integrating this knowledge base into the system prompt.
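The multi-turn flow between the interface layer and the AI layer can be sketched as below, assuming Amazon Bedrock's Converse API request shape. The model ID, the abbreviated SEMANTIC_MODEL string, and the build_converse_request helper are all illustrative, not the project's actual code.

```python
# Sketch: maintaining conversation context across turns and injecting the
# semantic data model into the system prompt (Converse API request shape).

# Abbreviated stand-in for the real semantic metadata (an assumption):
SEMANTIC_MODEL = "table `console_events`: field `feature_id` means 'feature'; ..."

def build_converse_request(history, user_question,
                           model_id="anthropic.claude-3-sonnet-20240229-v1:0"):
    """Append the new question to the running history and build a request
    dict, so context from earlier turns survives into the next one."""
    messages = history + [{"role": "user", "content": [{"text": user_question}]}]
    return {
        "modelId": model_id,
        "system": [{"text": "You translate questions about console usage "
                            "into SQL.\n" + SEMANTIC_MODEL}],
        "messages": messages,
    }

# In production this dict would be passed to the bedrock-runtime client:
#   response = boto3.client("bedrock-runtime").converse(**request)
history = []
request = build_converse_request(history, "Which features were used most last week?")
```

The key point is that the semantic model rides along in the system prompt on every turn, while the growing `messages` list carries the dialogue context.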
Testing focused on validating query accuracy across different types of questions and ensuring the AI-generated summaries correctly interpreted the data. The iterative refinement process revealed dramatic improvements from the semantic data model implementation—query accuracy jumped from less than 20% to 90%. The agent also developed self-correction capabilities, allowing it to recognize when a query didn't return expected results and reformulate the approach. I validated the conversational flow with target users, confirming that the multi-turn interaction pattern successfully enabled discovery of insights that users wouldn't have known to ask for initially.
The tool transforms how teams access console usage insights through a conversational AI experience that enables multi-turn dialogue, progressive refinement, and discovery of unexpected insights. Powered by a proprietary semantic data model achieving 90% query accuracy, the system allows anyone to explore data through natural conversation, ask follow-up questions, and receive narrative summaries that tell the story of what the data reveals—all without SQL knowledge or technical expertise.
Engage in natural dialogue with context maintained across questions, enabling progressive refinement.
Start broad and refine through dialogue—uncover insights you wouldn't have known to ask for initially.
Proprietary semantic data model achieves 90% query accuracy with self-correction capabilities.
Narrative summaries highlight patterns, trends, and actionable insights—not just raw data.
Users engage in natural conversations about console usage data, asking questions in plain English and receiving guided prompts for follow-up exploration. The interface maintains context across multiple turns, allowing progressive refinement from broad questions to specific insights. Example prompts and conversation starters guide users who don't know exactly what to ask initially, enabling discovery of insights they wouldn't have thought to request.
A proprietary semantic data model serves as the critical translation layer between natural language concepts and complex database schema. Through a three-step process (discovery queries, semantic metadata creation, and system prompt integration), the model maps business terms to database elements, handling non-intuitive field relationships and parent-child data structures. This breakthrough achieved 90% query accuracy (from initial <20%) and enabled self-correction capabilities—the agent can recognize when queries return unexpected results and autonomously reformulate its approach.
Rather than presenting raw data tables, the system automatically transforms query results into narrative summaries that tell the story of what the data reveals. The AI identifies patterns, trends, and actionable insights relevant to the user's question, highlighting what matters most and suggesting related avenues for exploration. This approach recognizes that users ask questions because they want answers and understanding—not because they want to manually analyze spreadsheets. The combination of accurate data retrieval and intelligent interpretation creates true value.
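One way to picture the storytelling step: render the query results as compact text and prompt the model for a narrative rather than a table. This is a minimal sketch; the function name, prompt wording, and column names are assumptions for illustration.

```python
# Sketch: building a "data storytelling" prompt from raw query results,
# asking for patterns, trends, and follow-up questions instead of rows.

def build_summary_prompt(question, columns, rows):
    """Flatten result rows into labeled text and ask the model to narrate
    what the data reveals, not to restate the numbers."""
    table = "\n".join(
        ", ".join(f"{col}={val}" for col, val in zip(columns, row))
        for row in rows
    )
    return (
        f"The user asked: {question}\n"
        f"Query results:\n{table}\n\n"
        "Summarize the story these results tell: key patterns, notable "
        "trends, and two follow-up questions worth exploring. "
        "Do not restate every row."
    )

prompt = build_summary_prompt(
    "Which features grew fastest this month?",
    ["feature", "growth_pct"],
    [("query_editor", 42), ("dashboards", 17)],
)
```

The explicit ask for follow-up questions is what feeds the guided-exploration behavior described above.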
The proprietary semantic data model is the core innovation that makes accurate natural language to SQL translation possible. This translation layer maps how users think about data (in business terms) to how data is actually stored (in technical database structures).
Generated exploratory queries against the database and exported results to understand the actual data structure, relationships, and field meanings.
Created a comprehensive semantic metadata file mapping natural language concepts to database elements, including table definitions, field descriptions, parent-child relationships, and business logic rules.
Incorporated the semantic metadata file and sample queries into the system prompt powering the conversational agent, giving it robust knowledge for understanding user requests.
Impact: Query accuracy improved from less than 20% to 90%, and the agent gained self-correction capabilities—it can now recognize when a query doesn't return expected results and autonomously reformulate its approach. This semantic layer is what enables non-technical users to access technical data.
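A minimal sketch of what one entry in such a semantic metadata file might look like, with a synonym lookup from business terms to database fields. All table, field, and synonym names here are invented for illustration; the real metadata is proprietary.

```python
# Sketch: semantic metadata mapping business vocabulary to schema elements,
# including the parent-child relationship noted in the write-up.

SEMANTIC_METADATA = {
    "console_events": {
        "description": "One row per user interaction with a console feature.",
        "fields": {
            "feature_id": {
                "synonyms": ["feature", "tool", "capability"],
                "notes": "Join to feature_catalog.id for readable names.",
            },
            "parent_feature_id": {
                "synonyms": ["feature area", "product area"],
                "notes": "Parent in the feature hierarchy; NULL at top level.",
            },
        },
    },
}

def resolve_term(term):
    """Map a business term to (table, field) via the synonym lists."""
    for table, meta in SEMANTIC_METADATA.items():
        for field, spec in meta["fields"].items():
            if term.lower() == field or term.lower() in spec["synonyms"]:
                return table, field
    return None

print(resolve_term("product area"))  # ('console_events', 'parent_feature_id')
```

Embedded in the system prompt, entries like these are what let the agent pick the right table and field even when users speak in product terms rather than schema terms.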
React-based chat interface that maintains conversation context across multiple turns and displays both raw data and AI-generated narrative insights
Amazon Bedrock agent with carefully engineered system prompt containing the semantic data model, enabling intent detection, SQL generation, and self-correction
Amazon Redshift with optimized query execution and result formatting, mapped through the semantic data model for accuracy
Technologies: React, Amazon Bedrock, Amazon Redshift, SQL, Semantic metadata architecture, System prompt engineering
Design Impact: By combining conversational AI, a proprietary semantic data model achieving 90% accuracy, and intelligent data storytelling, the tool eliminates every technical barrier that previously prevented non-technical team members from accessing console usage insights. The result is true democratization of data analytics through discovery-based exploration.
The conversational AI analytics tool fundamentally transformed how teams access and use console usage data, achieving 90% query accuracy through semantic intelligence and enabling data-driven decision making across the organization through discovery-based exploration.
90% Query Accuracy: Through semantic data model implementation, query accuracy improved from less than 20% to 90%—a dramatic breakthrough that made the tool reliable and trustworthy for business-critical insights.
Self-Correction Capabilities: The agent developed intelligence to recognize when queries don't return expected results and autonomously reformulate its approach, continuously improving accuracy and handling edge cases without human intervention.
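The self-correction loop can be sketched as generate-execute-retry, where failures or empty results are fed back into the next generation attempt. The callables and retry policy below are stand-ins for the actual Bedrock and Redshift calls, not the production implementation.

```python
# Sketch: self-correcting query loop. generate_sql and run_sql are injected
# so the control flow can be shown without real AWS calls.

def answer_with_retries(question, generate_sql, run_sql, max_attempts=3):
    feedback = ""
    sql = ""
    for _ in range(max_attempts):
        sql = generate_sql(question, feedback)
        try:
            rows = run_sql(sql)
        except Exception as err:
            # Execution error: tell the model what broke and retry.
            feedback = f"Previous query failed with: {err}. Reformulate it."
            continue
        if rows:
            return sql, rows
        # Empty result is treated as a signal to reformulate, too.
        feedback = "Previous query returned no rows; check joins and filters."
    return sql, []

# Tiny demonstration with fake callables:
attempts = []
def fake_generate(question, feedback):
    attempts.append(feedback)
    return "SELECT 1" if feedback else "SELECT broken"
def fake_run(sql):
    if "broken" in sql:
        raise ValueError("column does not exist")
    return [(1,)]

sql, rows = answer_with_retries("how many users?", fake_generate, fake_run)
```

Feeding the error text back into the prompt is what lets the agent "recognize" a failed query and reformulate rather than returning the failure to the user.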
The conversational approach allows users to discover insights they wouldn't have known to ask for initially. Through multi-turn dialogue, guided prompts, and progressive refinement, users start with broad questions and uncover specific patterns and opportunities through natural exploration. This capability transforms data access from answering known questions to enabling genuine discovery.
Advanced analyses that were previously impossible or extremely difficult became accessible to all team members, removing technical barriers entirely. Work that would have required data engineering support, or was simply infeasible with legacy tools, became a routine self-service task handled through conversational requests.
Teams can now quickly validate hypotheses and track feature usage post-launch, enabling evidence-based product decisions. The ability to instantly explore data patterns and test assumptions fundamentally changed how product teams approach iteration and optimization.
Bottom Line: By combining conversational AI with a proprietary semantic data model achieving 90% accuracy, the tool transformed data from an exclusive resource requiring specialized skills into a universally accessible foundation for informed decision making. The conversational approach enables discovery of insights users wouldn't have known to ask for, while the semantic intelligence ensures reliability. The democratization of analytics capabilities through discovery-based exploration has compounding benefits across the organization.
The core challenge was building a semantic data model that could accurately translate natural language requests into correct SQL queries. The database schema was complex with non-obvious labeling and relationships between fields. Parent-child relationships existed between data fields, and multiple fields often needed to be combined to generate valuable insights. The labeling conventions weren't intuitive, making it difficult for an AI agent to understand what users were asking for and which tables/fields to query. Initial query accuracy was less than 20%, making the tool unreliable.
I developed a comprehensive three-step process to build the semantic data model:
This semantic layer acts as the critical translation mechanism between how users think about data (in business terms) and how data is actually stored (in technical database structures). The impact was dramatic—query accuracy improved from <20% to 90%, and the agent gained self-correction capabilities, allowing it to recognize and fix query errors autonomously.
Early testing with a one-shot query approach revealed that users didn't know how to describe the data they wanted with enough specificity. Most users couldn't formulate precise questions upfront because they weren't familiar with what data was available or how to articulate complex analytical requests. This created a barrier even when the technical SQL translation worked correctly.
I pivoted to a conversational AI experience that prioritizes exploration over precision. Instead of requiring users to ask perfect questions, the tool allows them to start broad and refine through dialogue. The agent provides follow-up questions and recommendations based on initial queries, guiding users toward deeper insights. The key design innovation was shifting focus from "query accuracy" to "data storytelling"—what narrative emerges from the results? I created easy-to-read summarized reports that not only answer the user's question but also highlight related patterns and suggest additional avenues for exploration. This conversational approach enables users to discover insights they wouldn't have known to ask for, making previously impossible analyses accessible to non-technical team members.
The system needed an accurate understanding of how the data was actually stored in order to generate queries that returned correct results. The complexity of the data structure meant that incorrect schema interpretation would lead to inaccurate insights, undermining trust in the entire system.
I created comprehensive documentation of the database schema and established clear mappings between natural language concepts and database tables/fields through the semantic data model. This involved deep analysis of the data structure to understand relationships, creating a knowledge base that the AI could reference, and implementing validation checks to ensure generated queries aligned with the actual data architecture. The semantic metadata file became the single source of truth for how the agent interprets and queries the database.
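One such validation check might confirm, before execution, that a generated query only references tables known to the semantic model. This is a deliberately naive sketch; the table names and the regex-based extraction are assumptions, not the project's actual validator.

```python
# Sketch: pre-execution check that generated SQL stays within the schema
# described by the semantic metadata. Table names are invented examples.
import re

KNOWN_TABLES = {"console_events", "feature_catalog", "user_sessions"}

def referenced_tables(sql):
    """Naive extraction of table names following FROM/JOIN keywords."""
    return set(re.findall(r"(?:from|join)\s+([a-z_]+)", sql, flags=re.IGNORECASE))

def validate_query(sql):
    """Raise if the query references a table the semantic model doesn't know."""
    unknown = referenced_tables(sql) - KNOWN_TABLES
    if unknown:
        raise ValueError(f"Query references unknown tables: {sorted(unknown)}")
    return True

validate_query(
    "SELECT f.name FROM console_events e "
    "JOIN feature_catalog f ON e.feature_id = f.id"
)
```

A check like this catches hallucinated table names cheaply, before a bad query ever reaches the warehouse.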
Building an accurate semantic data model that maps natural language to database schema is critical for AI-powered query tools—this translation layer is what enables non-technical users to access technical data. Without proper semantic understanding of how data is structured and how business concepts map to database elements, AI systems will generate incorrect queries. The investment in building a comprehensive semantic metadata file was the breakthrough that improved query accuracy from less than 20% to 90%.
Users don't know what they don't know. A conversational approach that supports exploration and progressive refinement is far more effective than requiring precise upfront queries, especially for non-technical users. Early testing proved that one-shot query systems fail because users can't formulate specific enough questions initially. The conversational model enables users to start broad, see what's possible, and refine through dialogue—discovering insights they wouldn't have thought to ask for.
The combination of accurate data retrieval and intelligent narrative summarization is essential—raw data alone doesn't provide value without interpretation that highlights patterns and suggests next steps. Users ask questions because they want answers and understanding, not spreadsheets to analyze. Shifting the design focus from "query accuracy" to "data storytelling" transformed the tool from a technical SQL translator into a discovery partner that helps users understand what their data reveals.
The semantic data model required iterative refinement through discovery queries, metadata translation, and continuous testing, but the result was a dramatic improvement in query accuracy from less than 20% to 90%. This journey demonstrated that building reliable AI-powered analytics tools requires patience, systematic methodology, and willingness to completely rebuild core components when initial approaches don't achieve acceptable accuracy. The three-step semantic modeling process became replicable for other complex data sources.
True democratization of analytics means removing every technical barrier between a user's question and a data-driven answer. When exploration becomes conversational and semantic intelligence ensures accuracy, data transforms from an exclusive resource into a universal foundation for discovery-driven decisions.
This project demonstrates the powerful impact of combining conversational AI with semantic intelligence and thoughtful UX design to break down technical barriers. By building a tool that enables multi-turn dialogue, achieves 90% query accuracy through proprietary semantic data modeling, and delivers narrative insights rather than raw data, I enabled true discovery-based exploration and data-driven decision making for teams that were previously excluded from accessing console usage insights.