A reusable, cross-workflow platform that bridges SQL mental models and time-series data modeling for Amazon Timestream, a serverless time-series database. This platform-level component helps developers—particularly those migrating from SQL databases—understand, validate, and configure correct Timestream data models. It reduced setup time from 90+ days to a few hours, and now powers multiple cross-service workflows including Scheduled Queries, Batch Load, and Custom Partition Keys.
Goal: Bridge the gap between SQL mental models and time-series data structure, eliminate the 90+ day setup cycle caused by incorrect data modeling, increase workflow completion rates, and create a scalable platform component that could power multiple cross-service workflows.
Developers with SQL backgrounds brought the wrong mental model to Timestream. They assumed it worked like a relational database—with metrics in fixed table columns, attributes modeled like typical SQL fields, and schema defined once at creation. This fundamental mismatch resulted in inefficient schemas, queries scanning excessive data, ingestion failures, and a 90+ day setup cycle. Customers frequently re-ingested data after realizing their model was incorrect, and some abandoned onboarding entirely.
SQL users tried to apply relational thinking to a schemaless time-series database. They expected metrics to live in fixed columns, attributes to behave like SQL fields, and schema to be defined once at table creation—all incorrect assumptions that led to structural failures.
The 90+ day setup wasn't a single linear process. Customers would ingest data, discover modeling errors during queries, then have to restart completely. Some went through multiple cycles; others abandoned Timestream altogether.
Best-practice data model patterns existed but were buried deep in documentation or passed only verbally through Solution Architects. New customers had no way to discover validated starting points for common use cases.
Incorrect data models caused failures in critical workflows: Batch Load (data import) and Scheduled Queries (a key retention driver). This directly impacted service adoption and customer retention rates.
Why it mattered: This friction was significantly impacting service adoption and customer retention. The 90+ day setup time wasn't due to technical complexity alone—it was cognitive overload from trying to apply the wrong mental model to a fundamentally different data structure.
The approach centered on three key principles: match the customer's existing mental model (SQL databases), use the interface as the teaching tool for time-series concepts, and build a scalable, cross-workflow platform from the beginning. Rather than forcing customers to abandon their existing knowledge, I designed a bridge to new concepts.
Core Design Principle: "Bridge the gap—don't force the change." Let users start with what they already know (table columns, column types, schema-first thinking), then progressively translate these concepts into Timestream attributes (dimensions, measures, timestamps, multi-measure records, partition keys).
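To make that translation concrete, here is a minimal sketch (using the AWS SDK for Python) of how a row a SQL user would model as fixed columns maps onto Timestream concepts: identifying columns become dimensions, numeric columns become measures grouped into a single multi-measure record, and the timestamp becomes a first-class time attribute. Database, table, and column names are illustrative and not taken from this project.

```python
import time
import boto3

# A row a SQL user might model as fixed columns:
#   region | host_id | cpu_util | mem_used | ts
sql_row = {"region": "us-east-1", "host_id": "i-123",
           "cpu_util": 73.2, "mem_used": 61.5, "ts": time.time()}

timestream = boto3.client("timestream-write")

timestream.write_records(
    DatabaseName="demo_db",        # illustrative names
    TableName="host_metrics",
    Records=[{
        # Identifying columns -> dimensions (metadata that describes the series)
        "Dimensions": [
            {"Name": "region", "Value": sql_row["region"]},
            {"Name": "host_id", "Value": sql_row["host_id"]},
        ],
        # Numeric columns -> measures, grouped into one multi-measure record
        "MeasureName": "host_stats",
        "MeasureValueType": "MULTI",
        "MeasureValues": [
            {"Name": "cpu_util", "Value": str(sql_row["cpu_util"]), "Type": "DOUBLE"},
            {"Name": "mem_used", "Value": str(sql_row["mem_used"]), "Type": "DOUBLE"},
        ],
        # Timestamp column -> the record's time attribute
        "Time": str(int(sql_row["ts"] * 1000)),
        "TimeUnit": "MILLISECONDS",
    }],
)
```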
I investigated the root cause of low adoption rates through support ticket analysis, regular meetings with Solution Architects who worked directly with customers, and customer journey mapping of the typical 90+ day setup process.
I chose a visual builder approach over alternatives (improved documentation, tutorials, CLI tools) because it could simultaneously guide and educate users. Critically, I designed it as a reusable schema-mapping system—not a single-use feature—recognizing early that multiple workflows needed data model configuration. Using Figma and the AWS Console Design System library, I created prototypes that made abstract concepts concrete and could scale across workflows.
I collaborated closely with backend engineers to build schema inference and validation logic for both Scheduled Queries and Batch Load workflows, console engineers to implement a performant and reusable visual component, and Solution Architects to ensure the system addressed real customer needs across multiple use cases.
Internal testing with Solution Architects revealed that the visual relationship mapping needed to scale effectively for large schemas and that template categorization needed refinement based on common use cases.
The Data Model Visual Builder transforms abstract time-series concepts into concrete, understandable structures while preventing common errors. Rather than relying on external documentation, it embeds contextual learning directly into the workflow. As a platform-level component, it now powers Scheduled Queries, Batch Load, and Custom Partition Keys—with each workflow benefiting from the same cognitive bridge between SQL and time-series thinking.
Familiar table interface that maps columns to Timestream attributes, matching SQL users' mental models while teaching time-series concepts.
Automatically generates data model mappings from query validation or S3 source files, eliminating manual configuration and reducing errors.
Inline guidance and validation that explains unfamiliar terms, surfaces best practices, and warns about inefficient schema choices.
The heart of the system is an editable table interface that matches how SQL users already think about data. Users see their source columns (from queries or S3 files) and map them to Timestream attribute types (Timestamp, Dimension, Measure, Measure_Name) with corresponding data types (String, Integer, Double, etc.).
Why a table? SQL users instinctively understand tables. The familiar "columns → roles → datatypes" workflow provides a bridge to time-series concepts without requiring graph visualizations or abstract diagrams. The table itself is the relationship visualization, showing at a glance how source data maps to Timestream's schemaless structure.
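A simplified sketch of the kind of mapping each row of the builder's table could represent is shown below; the field names and types are illustrative, not the production component's data model.

```python
from dataclasses import dataclass
from enum import Enum

class AttributeType(Enum):
    TIMESTAMP = "TIMESTAMP"
    DIMENSION = "DIMENSION"
    MEASURE = "MEASURE"
    MEASURE_NAME = "MEASURE_NAME"

@dataclass
class ColumnMapping:
    source_column: str              # column name from the query result or S3 file
    attribute_type: AttributeType   # role the column plays in the Timestream model
    data_type: str                  # e.g. "VARCHAR", "DOUBLE", "BIGINT"

# One row of the builder's table per source column
data_model = [
    ColumnMapping("ts", AttributeType.TIMESTAMP, "TIMESTAMP"),
    ColumnMapping("region", AttributeType.DIMENSION, "VARCHAR"),
    ColumnMapping("host_id", AttributeType.DIMENSION, "VARCHAR"),
    ColumnMapping("cpu_util", AttributeType.MEASURE, "DOUBLE"),
]
```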
Demo available to view on desktop.
The platform automatically generates data model mappings from different sources depending on the workflow: query validation results for Scheduled Queries, and S3 source file analysis for Batch Load.
This automation eliminates the most error-prone manual configuration tasks and creates a consistent schema experience across both ingestion and query workflows.
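The heuristics below are a rough, hypothetical illustration of this kind of inference over a sampled CSV export; they are not the service's actual logic, and users would still review and adjust the suggested roles in the builder.

```python
import csv
from datetime import datetime

def infer_column_roles(csv_path, sample_size=100):
    """Guess a role and data type for each column from a sample of a CSV export.

    Illustrative heuristics only: time-like values -> TIMESTAMP,
    numeric values -> MEASURE, everything else -> DIMENSION.
    """
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        rows = [row for _, row in zip(range(sample_size), reader)]
        columns = reader.fieldnames or []

    inferred = {}
    for column in columns:
        values = [r[column] for r in rows if r[column]]
        if not values:
            inferred[column] = ("DIMENSION", "VARCHAR")
        elif all(_looks_like_timestamp(v) for v in values):
            inferred[column] = ("TIMESTAMP", "TIMESTAMP")
        elif all(_is_number(v) for v in values):
            inferred[column] = ("MEASURE", "DOUBLE")
        else:
            inferred[column] = ("DIMENSION", "VARCHAR")
    return inferred

def _is_number(value):
    try:
        float(value)
        return True
    except ValueError:
        return False

def _looks_like_timestamp(value):
    for fmt in ("%Y-%m-%dT%H:%M:%S", "%Y-%m-%d %H:%M:%S"):
        try:
            datetime.strptime(value, fmt)
            return True
        except ValueError:
            pass
    return False
```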
Demo available to view on desktop.
Every interaction helps users learn by doing. The interface explains unfamiliar terms inline, surfaces best practices contextually, and warns users when schema choices might lead to inefficiencies. This addresses the knowledge gap directly inside the workflow rather than forcing users to read external documentation.
Example guidance includes inline definitions of unfamiliar terms such as dimensions and measures, contextually surfaced best practices, and warnings when a schema choice would cause queries to scan excessive data.
Real-time validation prevents common mistakes before data ingestion, and progressive disclosure patterns maintain usability even with complex schemas containing dozens of dimensions and measures.
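Building on the ColumnMapping and AttributeType sketch above, the example below illustrates the kind of real-time checks such validation could run; the rules and messages are hypothetical, not the console's actual validation logic.

```python
def validate_data_model(mappings):
    """Return human-readable warnings for a proposed data model (illustrative rules only)."""
    warnings = []
    roles = [m.attribute_type for m in mappings]

    if AttributeType.TIMESTAMP not in roles:
        warnings.append("No column is mapped as the timestamp; every record needs a time attribute.")
    if roles.count(AttributeType.TIMESTAMP) > 1:
        warnings.append("Only one column can serve as the record timestamp.")
    if AttributeType.MEASURE not in roles:
        warnings.append("No measures are mapped; at least one column should hold the values you want to query.")
    if AttributeType.DIMENSION not in roles:
        warnings.append("No dimensions are mapped; queries will not be able to filter by series metadata.")

    return warnings

# Surfacing guidance inline as the user edits the mapping table
for message in validate_data_model(data_model):
    print("Warning:", message)
```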
Demo available to view on desktop.
Design Impact: The Data Model Visual Builder doesn't just enable configuration—it teaches time-series best practices while users work. By embedding education directly into the workflow and building it as a platform component from the start, this solution transformed multiple customer experiences simultaneously. One well-designed component now unifies schema configuration across Scheduled Queries, Batch Load, and Custom Partition Keys, with future workflows able to leverage the same foundation.
The Data Model Visual Builder transformed the Timestream onboarding experience, reducing setup time from 90+ days to a few hours and improving completion rates for critical workflows. Because it was designed as a platform-level component from the start, its impact extended far beyond the original scope: it now powers multiple cross-service workflows with the same cognitive bridge between SQL and time-series thinking.
Bottom Line: This project demonstrates how thoughtful UX design can address not just usability issues, but fundamental adoption barriers rooted in knowledge gaps and misconceptions. By focusing on the customer's mental model and designing systems rather than single-use features, we created a bridge to new technology that now serves multiple workflows—multiplying the impact far beyond the original scope.
SQL users needed a bridge, not a paradigm shift. Understanding customers' existing mental models (relational databases, table columns, schema-first thinking) and their context (data migration) allowed me to design an experience that translated familiar concepts to new ones rather than forcing users to abandon their existing knowledge. This cognitive bridge was the key to eliminating the 90+ day setup cycle.
Contextual guidance beats external documentation. Rather than relying on separate tutorials or help pages, embedding education directly into the workflow ensured customers learned the right patterns while completing real tasks. The interface itself became the teaching tool—explaining unfamiliar terms inline, surfacing best practices contextually, and validating choices in real-time.
One well-designed component unified multiple workflows. By recognizing early that Scheduled Queries, Batch Load, and Custom Partition Keys all needed schema configuration, I designed a reusable platform component rather than a single-use feature. This architectural approach multiplied the impact—the same cognitive bridge and auto-detection logic now serves multiple customer journeys, and future workflows can leverage this foundation without rebuilding core functionality.
Schema inference removed the most error-prone tasks. By automatically detecting structure from query validation results or S3 source files, the system eliminated manual configuration wherever possible. Users could review and adjust if needed, but automation handled the heavy lifting—drastically reducing cognitive load and preventing common mistakes before they happened.
This project reinforced the importance of understanding not just what customers are trying to do, but how they think about the problem. By designing a platform that met customers at their existing level of understanding (SQL) and guided them to new concepts (time-series), we eliminated a critical adoption barrier across multiple workflows simultaneously. The architectural decision to build a reusable system rather than a single-use feature multiplied the long-term value, with the same component now serving Scheduled Queries, Batch Load, Custom Partition Keys, and future API ingestion workflows.