AI-Powered Product Design

Democratizing Data Analytics Through Conversational AI and Semantic Intelligence

Internal teams needed advanced insights from console usage data, but existing tools created significant barriers. Accessing analytics required technical knowledge of SQL, data engineering, and complex query construction. The legacy tool was restrictive and couldn't support complex analysis requiring sophisticated data aggregations. This bottleneck prevented product managers, designers, and leadership from independently tracking feature usage and making data-driven decisions.

Goal: Build a conversational AI-powered analytics tool that enables anyone to explore console usage data through natural dialogue, progressively refining their questions and discovering insights they wouldn't have known to ask for initially—all powered by a proprietary semantic data model that achieves 90% query accuracy.

Role: Lead Product Designer
Timeline: 6 weeks
Team: Solo project
Platform: Internal web application

The Challenge

Technical Barriers Blocking Data-Driven Decision Making

Product managers, design leadership, and designers needed specific console usage insights to track feature performance and validate product decisions. However, the existing analysis tools created insurmountable barriers that prevented non-technical team members from accessing the data they needed. Early attempts at AI-powered solutions achieved less than 20% query accuracy, making them unreliable. Even when technical barriers could be overcome, users struggled to formulate specific enough questions to get meaningful insights.

Technical Knowledge Required

Accessing console usage data required proficiency in SQL query writing, understanding of complex database schemas with non-intuitive labeling, and knowledge of data engineering principles. The database had parent-child relationships between fields and required combining multiple fields to generate valuable insights. Non-technical team members were completely blocked from independent analysis.

Users Didn't Know What to Ask

Even when AI translation was attempted, users struggled to formulate specific enough questions upfront. Most people didn't know how to describe the data they wanted with sufficient precision or weren't familiar with what data was available. This created barriers even when the technical SQL translation worked correctly.

Resource Bottlenecks

Teams depended on data engineering resources for routine analysis requests. This created delays in getting critical insights and slowed down product iteration cycles, particularly when tracking post-launch feature usage.

Low Query Accuracy

Initial attempts at AI-powered natural language to SQL translation achieved less than 20% accuracy. The complex database schema with non-obvious field relationships made it nearly impossible for AI systems to generate correct queries without deep semantic understanding of the data structure.

Why it mattered: Data-driven decision making requires democratized access to insights. When only technical specialists can access analytics, product teams lose agility and rely on intuition rather than evidence. The lack of self-service capabilities created a fundamental barrier to informed product development.

Our Approach

End-to-End Design and Development

As a solo project with full ownership from concept to implementation, I approached this challenge by first deeply understanding the user friction, then designing an intuitive interface that leverages AI to eliminate technical barriers entirely.

1. Research & Discovery

I conducted a self-study by documenting my own attempts to use the available data analysis tools. This hands-on approach allowed me to map out the complete process required to obtain advanced analysis, identifying exact pain points at each step and determining which types of analysis were impossible with existing tools. Through initial testing of a single-shot query approach, I discovered a critical insight: users struggled to formulate specific enough questions to get the data they needed. Most users didn't know how to describe what they wanted to see with sufficient precision.

  • Mapped the end-to-end workflow for obtaining data insights using current tools
  • Identified specific friction points preventing non-technical users from accessing data
  • Tested one-shot query approach and discovered users couldn't formulate precise questions upfront
  • Documented types of analysis that were completely blocked by existing tool limitations

2. Ideation & Design

Based on research findings, I made the strategic decision to design a conversational AI experience rather than a one-shot query system. This approach allows users to explore data and insights beyond their initial question, creating opportunities for discovery and deeper analysis. The conversational model addresses a fundamental user need—most people don't know exactly how to ask for data upfront, but they can recognize valuable insights when they see them and ask intelligent follow-up questions. Rather than focusing solely on query accuracy, I designed the experience around the narrative that emerges from the data—what story does this information tell?

  • Pivoted from one-shot queries to conversational AI enabling progressive refinement
  • Designed for discovery over precision—users start broad and refine through dialogue
  • Shifted focus from "query accuracy" to "data storytelling" and narrative insights
  • Created example prompts and conversation starters to guide users

3. Development & Implementation

I built the tool on a three-layer architecture: (1) a conversational interface layer using React with multi-turn context maintenance, (2) an AI processing layer using Amazon Bedrock with a carefully engineered system prompt containing the semantic data model, and (3) a data layer using Amazon Redshift with optimized query execution. The breakthrough was developing a comprehensive semantic data model through a three-step process: running discovery queries against the database, creating translated semantic metadata that maps natural language concepts to database elements, and integrating this knowledge base into the system prompt.

  • Built React chat interface maintaining conversation context across multiple turns
  • Developed proprietary semantic data model mapping business terms to database schema
  • Engineered system prompt with semantic metadata file and sample queries as knowledge base
  • Enabled agent to reference previous queries and suggest related analyses
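
The multi-turn context handling described above can be sketched in a few lines. This is a minimal illustration, not the production code: the Bedrock call is stubbed out, and `AnalyticsConversation`, `SEMANTIC_MODEL`, and the table fields are hypothetical names invented for the example.

```python
# Minimal sketch of multi-turn context maintenance for a conversational
# analytics agent. The model call is stubbed; in production the system
# prompt and full message history would be sent via the AWS SDK.

SEMANTIC_MODEL = """Table console_events: one row per console interaction.
- feature_name: human-readable feature label (maps to 'feature usage')
- user_role: PM | designer | engineer
- event_ts: event timestamp (UTC)"""  # illustrative excerpt only

class AnalyticsConversation:
    """Keeps the full turn history so follow-up questions can resolve
    references like 'break that down by role'."""

    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.messages: list[dict] = []  # alternating user/assistant turns

    def ask(self, question: str) -> dict:
        self.messages.append({"role": "user", "content": question})
        # In production: send system prompt + messages to the model and
        # parse SQL plus a narrative answer out of the reply.
        payload = {
            "system": self.system_prompt,
            "messages": list(self.messages),
        }
        reply = {"role": "assistant", "content": f"(answer to: {question})"}
        self.messages.append(reply)
        return payload

convo = AnalyticsConversation(system_prompt=SEMANTIC_MODEL)
first = convo.ask("Which features were used most last week?")
second = convo.ask("Break that down by user role.")
# The second payload carries the earlier turns, so the model can
# resolve "that" back to the feature-usage question.
assert len(second["messages"]) == 3
```

Because the assembled payload always includes the prior turns alongside the system prompt, each request gives the model everything it needs to interpret a vague follow-up in context.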

4. Testing & Refinement

Testing focused on validating query accuracy across different types of questions and ensuring the AI-generated summaries correctly interpreted the data. The iterative refinement process revealed dramatic improvements from the semantic data model implementation—query accuracy jumped from less than 20% to 90%. The agent also developed self-correction capabilities, allowing it to recognize when a query didn't return expected results and reformulate the approach. I validated the conversational flow with target users, confirming that the multi-turn interaction pattern successfully enabled discovery of insights that users wouldn't have known to ask for initially.

  • Achieved 90% query accuracy through semantic data model iteration (from <20%)
  • Validated agent's self-correction capabilities for recognizing and fixing query errors
  • Confirmed conversational flow enables discovery of insights users wouldn't initially ask for
  • Tested various natural language phrasings to improve intent understanding

The Solution

Conversational AI Analytics Powered by Semantic Intelligence

The tool transforms how teams access console usage insights through a conversational AI experience that enables multi-turn dialogue, progressive refinement, and discovery of unexpected insights. Powered by a proprietary semantic data model achieving 90% query accuracy, the system allows anyone to explore data through natural conversation, ask follow-up questions, and receive narrative summaries that tell the story of what the data reveals—all without SQL knowledge or technical expertise.

Multi-Turn Conversations

Engage in natural dialogue with context maintained across questions, enabling progressive refinement.

Discovery-Based Exploration

Start broad and refine through dialogue—uncover insights you wouldn't have known to ask for initially.

Semantic Intelligence

Proprietary semantic data model achieves 90% query accuracy with self-correction capabilities.

Data Storytelling

Narrative summaries highlight patterns, trends, and actionable insights—not just raw data.

Key Features

1. Conversational Query Interface with Multi-Turn Dialogue

Users engage in natural conversations about console usage data, asking questions in plain English and receiving guided prompts for follow-up exploration. The interface maintains context across multiple turns, allowing progressive refinement from broad questions to specific insights. Example prompts and conversation starters guide users who don't know exactly what to ask initially, enabling discovery of insights they wouldn't have thought to request.

Conversational Query Interface showing example prompts modal and multi-turn dialogue demonstrating how the interface maintains context across questions

2. Semantic Data Model with 90% Query Accuracy

A proprietary semantic data model serves as the critical translation layer between natural language concepts and complex database schema. Through a three-step process (discovery queries, semantic metadata creation, and system prompt integration), the model maps business terms to database elements, handling non-intuitive field relationships and parent-child data structures. This breakthrough achieved 90% query accuracy (from initial <20%) and enabled self-correction capabilities—the agent can recognize when queries return unexpected results and autonomously reformulate its approach.

Semantic Data Model showing the translation layer between natural language concepts and database schema with query generation process
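
The self-correction behavior can be sketched as a recognize-and-retry loop. Everything here is a stand-in: `run_query`, `looks_wrong`, and `reformulate` are hypothetical stubs for the real agent calls, and the failure heuristic is deliberately simple.

```python
# Sketch of the agent's self-correction loop: run the generated SQL,
# check whether the result looks plausible, and reformulate if not.

def run_query(sql: str) -> list[dict]:
    # Stub: the first attempt "fails" by returning no rows.
    return [] if "wrong_column" in sql else [{"feature": "search", "events": 120}]

def looks_wrong(rows: list[dict]) -> bool:
    # Heuristic check: an empty result for a usage question is suspicious.
    return len(rows) == 0

def reformulate(sql: str) -> str:
    # Stand-in for asking the model to retry using the semantic model's
    # correct field mapping.
    return sql.replace("wrong_column", "feature_name")

def answer_with_retry(sql: str, max_attempts: int = 3) -> list[dict]:
    rows: list[dict] = []
    for _ in range(max_attempts):
        rows = run_query(sql)
        if not looks_wrong(rows):
            return rows
        sql = reformulate(sql)
    return rows

rows = answer_with_retry("SELECT wrong_column, COUNT(*) FROM console_events")
assert rows  # the corrected query returns data
```

The key design point is that the plausibility check sits between query execution and the user-facing answer, so a malformed query gets another pass through the model instead of surfacing an empty or misleading result.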

3. Intelligent Data Storytelling with Narrative Summaries

Rather than presenting raw data tables, the system automatically transforms query results into narrative summaries that tell the story of what the data reveals. The AI identifies patterns, trends, and actionable insights relevant to the user's question, highlighting what matters most and suggesting related avenues for exploration. This approach recognizes that users ask questions because they want answers and understanding—not because they want to manually analyze spreadsheets. The combination of accurate data retrieval and intelligent interpretation creates true value.

Intelligent Data Storytelling showing AI-generated narrative summaries with key insights, metrics, and trend analysis alongside raw data view
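
One way to picture the storytelling step: the raw rows are folded into a summarization request rather than shown directly. This is a hedged sketch with illustrative field names; the actual prompt wording in the tool may differ.

```python
# Sketch of turning raw query results into a narrative prompt.
# In production the summary is produced by the model; here we just
# assemble the summarization request.

def build_story_prompt(question: str, rows: list[dict]) -> str:
    # Flatten each result row into a readable line for the model.
    table = "\n".join(
        ", ".join(f"{k}={v}" for k, v in row.items()) for row in rows
    )
    return (
        f"The user asked: {question}\n"
        f"Query results:\n{table}\n"
        "Summarize the story these numbers tell: call out the dominant "
        "pattern, any notable trend, and one follow-up question worth asking."
    )

prompt = build_story_prompt(
    "How is the new dashboard feature performing?",
    [{"week": "W1", "users": 310}, {"week": "W2", "users": 540}],
)
assert "W2" in prompt and "follow-up" in prompt
```

Asking the model explicitly for patterns, trends, and a suggested follow-up is what turns a result set into a narrative and keeps the conversation moving toward discovery.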

The Semantic Data Model Breakthrough

From 20% to 90% Query Accuracy

The proprietary semantic data model is the core innovation that makes accurate natural language to SQL translation possible. This translation layer maps how users think about data (in business terms) to how data is actually stored (in technical database structures).

Three-Step Development Process:
1. Discovery Queries

Generated exploratory queries against the database and exported results to understand the actual data structure, relationships, and field meanings.

2. Semantic Metadata Translation

Created a comprehensive semantic metadata file mapping natural language concepts to database elements, including table definitions, field descriptions, parent-child relationships, and business logic rules.

3. Knowledge Base Integration

Incorporated the semantic metadata file and sample queries into the system prompt powering the conversational agent, giving it robust knowledge for understanding user requests.
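
As an illustration of what such semantic metadata might look like, here is a hypothetical excerpt expressed as a Python structure, together with a term-resolution lookup. All table, field, and rule names are invented for the example; the real metadata file's format and contents are internal.

```python
# Illustrative shape of the semantic metadata that feeds the system
# prompt: business terms mapped to tables, columns, parent-child
# relationships, and business logic rules.

SEMANTIC_METADATA = {
    "tables": {
        "console_events": {
            "description": "One row per console interaction.",
            "fields": {
                "feat_nm_cd": {
                    "means": "feature name",
                    "synonyms": ["feature", "feature usage", "capability"],
                },
                "evt_ts": {"means": "event timestamp (UTC)"},
            },
            "parent": None,
        },
        "console_event_details": {
            "description": "Child rows with per-event attributes.",
            "parent": "console_events",  # parent-child relationship
            "join_on": "event_id",
        },
    },
    "rules": [
        "Weekly metrics use ISO weeks starting Monday.",
        "Feature usage requires joining events to their detail rows.",
    ],
}

def resolve_term(term: str) -> "tuple[str, str] | None":
    """Map a business term to (table, column) via the metadata."""
    for table, spec in SEMANTIC_METADATA["tables"].items():
        for column, meta in spec.get("fields", {}).items():
            if term == meta.get("means") or term in meta.get("synonyms", []):
                return table, column
    return None

assert resolve_term("feature usage") == ("console_events", "feat_nm_cd")
```

The synonym lists are what absorb the non-intuitive column labels: a user can say "feature usage" and the agent still lands on the cryptically named column, which is exactly the translation gap the metadata exists to close.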

Impact: Query accuracy improved from less than 20% to 90%, and the agent gained self-correction capabilities—it can now recognize when a query doesn't return expected results and autonomously reformulate its approach. This semantic layer is what enables non-technical users to access technical data.

Technical Architecture

Conversational Interface Layer

React-based chat interface that maintains conversation context across multiple turns and displays both raw data and AI-generated narrative insights

AI Processing Layer

Amazon Bedrock agent with carefully engineered system prompt containing the semantic data model, enabling intent detection, SQL generation, and self-correction

Data Layer

Amazon Redshift with optimized query execution and result formatting, mapped through the semantic data model for accuracy

Technologies: React, Amazon Bedrock, Amazon Redshift, SQL, Semantic metadata architecture, System prompt engineering

Design Impact: By combining conversational AI, a proprietary semantic data model achieving 90% accuracy, and intelligent data storytelling, the tool eliminates every technical barrier that previously prevented non-technical team members from accessing console usage insights. The result is true democratization of data analytics through discovery-based exploration.

Impact

Unlocking Previously Impossible Capabilities

The conversational AI analytics tool fundamentally transformed how teams access and use console usage data, achieving 90% query accuracy through semantic intelligence and enabling data-driven decision making across the organization through discovery-based exploration.

Query Accuracy & Agent Intelligence

90% Query Accuracy: Through semantic data model implementation, query accuracy improved from less than 20% to 90%—a dramatic breakthrough that made the tool reliable and trustworthy for business-critical insights.

Self-Correction Capabilities: The agent developed intelligence to recognize when queries don't return expected results and autonomously reformulate its approach, continuously improving accuracy and handling edge cases without human intervention.

Discovery Enablement

The conversational approach allows users to discover insights they wouldn't have known to ask for initially. Through multi-turn dialogue, guided prompts, and progressive refinement, users start with broad questions and uncover specific patterns and opportunities through natural exploration. This capability transforms data access from answering known questions to enabling genuine discovery.

Capability Unlock

Made previously impossible or extremely difficult advanced analysis accessible to all team members, removing technical barriers entirely. Analyses that would have required data engineering support or were simply infeasible with legacy tools became routine self-service tasks through conversational requests.

Democratized Data Access

  • Product managers can now independently explore console usage patterns and track feature performance post-launch
  • Design leaders access real usage data to validate design decisions and identify opportunities
  • Designers explore user behavior patterns without requiring technical support or SQL knowledge
  • Cross-functional teams make data-informed decisions without engineering dependencies

Operational Efficiency

  • Eliminated dependency on technical resources for routine data analysis requests
  • Reduced time from question to insight from days (with engineering support) to minutes (self-service)
  • Accelerated product iteration cycles through immediate access to post-launch usage metrics
  • Freed data engineering resources to focus on strategic initiatives rather than routine queries

Improved Decision Making

Teams can now quickly validate hypotheses and track feature usage post-launch, enabling evidence-based product decisions. The ability to instantly explore data patterns and test assumptions fundamentally changed how product teams approach iteration and optimization.

Bottom Line: By combining conversational AI with a proprietary semantic data model achieving 90% accuracy, the tool transformed data from an exclusive resource requiring specialized skills into a universally accessible foundation for informed decision making. The conversational approach enables discovery of insights users wouldn't have known to ask for, while the semantic intelligence ensures reliability. The democratization of analytics capabilities through discovery-based exploration has compounding benefits across the organization.

Challenges & Solutions

Overcoming Technical Complexity

Challenge 1: Creating an Accurate Semantic Data Model

Problem:

The core challenge was building a semantic data model that could accurately translate natural language requests into correct SQL queries. The database schema was complex with non-obvious labeling and relationships between fields. Parent-child relationships existed between data fields, and multiple fields often needed to be combined to generate valuable insights. The labeling conventions weren't intuitive, making it difficult for an AI agent to understand what users were asking for and which tables/fields to query. Initial query accuracy was less than 20%, making the tool unreliable.

Solution:

I developed a comprehensive three-step process to build the semantic data model:

  1. Discovery Queries: Generated exploratory queries against the database and exported the results to understand the actual data structure and relationships
  2. Metadata Translation: Created a translated semantic metadata file that mapped natural language concepts to database elements, including table definitions, field descriptions, relationships, and business logic rules
  3. Knowledge Base Integration: Incorporated the semantic metadata file and sample queries into the main system prompt that powers the conversational agent, giving it a robust knowledge base for understanding user requests

This semantic layer acts as the critical translation mechanism between how users think about data (in business terms) and how data is actually stored (in technical database structures). The impact was dramatic—query accuracy improved from <20% to 90%, and the agent gained self-correction capabilities, allowing it to recognize and fix query errors autonomously.

Challenge 2: Designing for User Discovery vs. Precision

Problem:

Early testing with a one-shot query approach revealed that users didn't know how to describe the data they wanted with enough specificity. Most users couldn't formulate precise questions upfront because they weren't familiar with what data was available or how to articulate complex analytical requests. This created a barrier even when the technical SQL translation worked correctly.

Solution:

I pivoted to a conversational AI experience that prioritizes exploration over precision. Instead of requiring users to ask perfect questions, the tool allows them to start broad and refine through dialogue. The agent provides follow-up questions and recommendations based on initial queries, guiding users toward deeper insights. The key design innovation was shifting focus from "query accuracy" to "data storytelling"—what narrative emerges from the results? I created easy-to-read summarized reports that not only answer the user's question but also highlight related patterns and suggest additional avenues for exploration. This conversational approach enables users to discover insights they wouldn't have known to ask for, making previously impossible analyses accessible to non-technical team members.

Challenge 3: Understanding Database Schema and Data Storage Patterns

Problem:

The system needed to understand how data was stored in order to generate queries that returned correct results. The complexity of the data structure meant that incorrect schema interpretation would lead to inaccurate insights, undermining trust in the entire system.

Solution:

I created comprehensive documentation of the database schema and established clear mappings between natural language concepts and database tables/fields through the semantic data model. This involved deep analysis of the data structure to understand relationships, creating a knowledge base that the AI could reference, and implementing validation checks to ensure generated queries aligned with the actual data architecture. The semantic metadata file became the single source of truth for how the agent interprets and queries the database.
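
A validation check of this kind can be sketched as a scan of the generated SQL against the known schema. This is an illustrative token-level check, not a real SQL parser, and the table and field names are hypothetical.

```python
# Sketch of a pre-execution validation check: every identifier the
# generated query references must exist in the semantic metadata.
# A simple token scan for illustration, not a full SQL parser.
import re

KNOWN_TABLES = {"console_events", "console_event_details"}
KNOWN_FIELDS = {"feat_nm_cd", "evt_ts", "event_id"}

def unknown_identifiers(sql: str) -> set[str]:
    # Pull out all word-like tokens, drop SQL keywords, and flag
    # anything left that the schema does not know about.
    tokens = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", sql))
    keywords = {"SELECT", "FROM", "WHERE", "GROUP", "BY",
                "COUNT", "AS", "JOIN", "ON"}
    candidates = {t for t in tokens if t.upper() not in keywords}
    return {t for t in candidates if t not in KNOWN_TABLES | KNOWN_FIELDS}

good = "SELECT feat_nm_cd, COUNT(event_id) FROM console_events GROUP BY feat_nm_cd"
bad = "SELECT feature_name FROM usage_table"
assert unknown_identifiers(good) == set()
assert "usage_table" in unknown_identifiers(bad)
```

Catching an unknown table or column before execution means a hallucinated identifier produces a retry rather than an empty or wrong answer, which is one concrete way generated queries can be kept aligned with the actual data architecture.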

Key Takeaways

Lessons in AI-Powered Design

Semantic Modeling is the Foundation

Building an accurate semantic data model that maps natural language to database schema is critical for AI-powered query tools—this translation layer is what enables non-technical users to access technical data. Without proper semantic understanding of how data is structured and how business concepts map to database elements, AI systems will generate incorrect queries. The investment in building a comprehensive semantic metadata file was the breakthrough that improved query accuracy from less than 20% to 90%.

Conversational Beats One-Shot

Users don't know what they don't know. A conversational approach that supports exploration and progressive refinement is far more effective than requiring precise upfront queries, especially for non-technical users. Early testing proved that one-shot query systems fail because users can't formulate specific enough questions initially. The conversational model enables users to start broad, see what's possible, and refine through dialogue—discovering insights they wouldn't have thought to ask for.

Focus on Storytelling, Not Just Data

The combination of accurate data retrieval and intelligent narrative summarization is essential—raw data alone doesn't provide value without interpretation that highlights patterns and suggests next steps. Users ask questions because they want answers and understanding, not spreadsheets to analyze. Shifting the design focus from "query accuracy" to "data storytelling" transformed the tool from a technical SQL translator into a discovery partner that helps users understand what their data reveals.

Iteration Drives Accuracy

The semantic data model required iterative refinement through discovery queries, metadata translation, and continuous testing, but the result was a dramatic improvement in query accuracy from less than 20% to 90%. This journey demonstrated that building reliable AI-powered analytics tools requires patience, systematic methodology, and willingness to completely rebuild core components when initial approaches don't achieve acceptable accuracy. The three-step semantic modeling process became replicable for other complex data sources.

True democratization of analytics means removing every technical barrier between a user's question and a data-driven answer. When exploration becomes conversational and semantic intelligence ensures accuracy, data transforms from an exclusive resource into a universal foundation for discovery-driven decisions.

Conclusion

This project demonstrates the powerful impact of combining conversational AI with semantic intelligence and thoughtful UX design to break down technical barriers. By building a tool that enables multi-turn dialogue, achieves 90% query accuracy through proprietary semantic data modeling, and delivers narrative insights rather than raw data, I enabled true discovery-based exploration and data-driven decision making for teams that were previously excluded from accessing console usage insights.

Future Enhancements

  • Saved conversation history allowing users to resume explorations and track insights over time
  • Automated scheduling for recurring reports and trend tracking
  • Collaborative features for sharing insights and conversation threads across teams
  • Expansion of semantic model to support additional data sources beyond console usage, making the methodology replicable