Database Design Tool

Amazon Timestream Data Model Visual Builder

A reusable, cross-workflow platform that bridges SQL mental models and time-series data modeling for Amazon Timestream, a serverless time-series database. This platform-level component helps developers—particularly those migrating from SQL databases—understand, validate, and configure correct Timestream data models. It reduced setup time from 90+ days to a few hours, and now powers multiple cross-service workflows including Scheduled Queries, Batch Load, and Custom Partition Keys.

Goal: Bridge the gap between SQL mental models and time-series data structure, eliminate the 90+ day setup cycle caused by incorrect data modeling, increase workflow completion rates, and create a scalable platform component that could power multiple cross-service workflows.

Role: Lead Designer
Timeline: 6 weeks
Team: 6 (Engineers, PM, Architect)
Platform: AWS Console

The Challenge

The Problem Wasn't Technical Complexity—It Was Cognitive Overload

Developers with SQL backgrounds brought the wrong mental model to Timestream. They assumed it worked like a relational database—with metrics in fixed table columns, attributes modeled like typical SQL fields, and schema defined once at creation. This fundamental mismatch resulted in inefficient schemas, queries scanning excessive data, ingestion failures, and a 90+ day setup cycle. Customers frequently re-ingested data after realizing their model was incorrect, and some abandoned onboarding entirely.

Misaligned Mental Models

SQL users tried to apply relational thinking to a schemaless time-series database. They expected metrics to live in fixed columns, attributes to behave like SQL fields, and schema to be defined once at table creation—all incorrect assumptions that led to structural failures.

Costly Restart Cycles

The 90+ day setup wasn't a single linear process. Customers would ingest data, discover modeling errors during queries, then have to restart completely. Some went through multiple cycles; others abandoned Timestream altogether.

Hidden Best Practices

Best-practice data model patterns existed but were buried deep in documentation or passed only verbally through Solution Architects. New customers had no way to discover validated starting points for common use cases.

Workflow Abandonment

Incorrect data models caused failures in critical workflows: Batch Load (data import) and Scheduled Queries (a key retention driver). This directly impacted service adoption and customer retention rates.

Why it mattered: This friction was significantly impacting service adoption and customer retention. The 90+ day setup time wasn't due to technical complexity alone—it was cognitive overload from trying to apply the wrong mental model to a fundamentally different data structure.

Our Approach

Bridge the Gap, Don't Force the Change

The approach centered on three key principles: match the customer's existing mental model (SQL databases), use the interface as the teaching tool for time-series concepts, and build a scalable, cross-workflow platform from the beginning. Rather than forcing customers to abandon their existing knowledge, I designed a bridge to new concepts.

Core Design Principle: "Bridge the gap—don't force the change." Let users start with what they already know (table columns, column types, schema-first thinking), then progressively translate these concepts into Timestream attributes (dimensions, measures, timestamps, multi-measure records, partition keys).
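To make the bridge concrete, here is a minimal TypeScript sketch of that translation; the type names, column names, and sample table are illustrative, not the production console's data model:

```typescript
// Illustrative sketch of the SQL-to-Timestream bridge: each familiar source
// column is assigned a Timestream attribute role and a scalar type.
// Names below are hypothetical, not the production data model.
type AttributeRole = "TIMESTAMP" | "DIMENSION" | "MEASURE" | "MEASURE_NAME";
type ScalarType = "VARCHAR" | "BIGINT" | "DOUBLE" | "BOOLEAN" | "TIMESTAMP";

interface ColumnMapping {
  sourceColumn: string; // what SQL users already know: a column name
  role: AttributeRole;  // what Timestream needs: the column's time-series role
  dataType: ScalarType;
}

// Example: a familiar DevOps metrics table, re-expressed in Timestream terms.
const deviceMetrics: ColumnMapping[] = [
  { sourceColumn: "time",      role: "TIMESTAMP", dataType: "TIMESTAMP" },
  { sourceColumn: "hostname",  role: "DIMENSION", dataType: "VARCHAR" },
  { sourceColumn: "region",    role: "DIMENSION", dataType: "VARCHAR" },
  { sourceColumn: "cpu_usage", role: "MEASURE",   dataType: "DOUBLE" },
  { sourceColumn: "mem_usage", role: "MEASURE",   dataType: "DOUBLE" },
];
```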

1. Research & Discovery

I investigated the root cause of low adoption rates through support ticket analysis, regular meetings with Solution Architects who worked directly with customers, and customer journey mapping of the typical 90+ day setup process.

  • Analyzed patterns in customer struggles and common misconceptions
  • Identified friction points in the onboarding journey
  • Discovered customers were applying SQL principles to schemaless structures

2. Ideation & Platform Design

I chose a visual builder approach over alternatives (improved documentation, tutorials, CLI tools) because it could simultaneously guide and educate users. Critically, I designed it as a reusable schema-mapping system—not a single-use feature—recognizing early that multiple workflows needed data model configuration. Using Figma and the AWS Console Design System library, I created prototypes that made abstract concepts concrete and could scale across workflows; a sketch of the workflow-agnostic contract this implies appears after the list below.

  • Designed table-based visual mapping matching SQL mental models
  • Created auto-detection system for Scheduled Queries and Batch Load workflows
  • Built contextual education and validation directly into the interface
  • Identified Custom Partition Keys as another workflow needing this component
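As a hedged sketch of that reusability (the prop and type names are hypothetical), the component's contract might look like this, letting any workflow feed in a detected schema and receive a validated mapping back:

```typescript
// Hypothetical contract for the reusable builder component. Each workflow
// supplies a detected source schema and receives a validated mapping back;
// no single workflow owns the mapping UI itself. Names are illustrative.
type Workflow = "SCHEDULED_QUERY" | "BATCH_LOAD" | "CUSTOM_PARTITION_KEY";

interface DetectedColumn {
  name: string;
  inferredType: string; // e.g. "DOUBLE", from query validation or file parsing
}

interface MappedColumn {
  sourceColumn: string;
  role: "TIMESTAMP" | "DIMENSION" | "MEASURE" | "MEASURE_NAME";
  dataType: string;
}

interface DataModelBuilderProps {
  workflow: Workflow;               // adjusts guidance and validation rules
  detectedSchema: DetectedColumn[]; // auto-detected starting point
  partitionKey?: string;            // set when the destination table has one
  onChange: (mapping: MappedColumn[], isValid: boolean) => void;
}
```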

3. Development & Implementation

I collaborated closely with backend engineers to build schema inference and validation logic for both Scheduled Queries and Batch Load workflows, console engineers to implement a performant and reusable visual component, and Solution Architects to ensure the system addressed real customer needs across multiple use cases.

  • Built reusable component architecture supporting multiple workflow integrations
  • Implemented auto-detection for query validation and S3 source introspection
  • Ensured performance and scalability with complex data models

4. Testing & Refinement

Internal testing with Solution Architects revealed that the visual relationship mapping needed to scale effectively for large schemas and that template categorization needed refinement based on common use cases.

  • Optimized rendering for complex data models with many dimensions
  • Accounted for various data model formats customers might use
  • Refined template organization based on real use cases

The Solution

A Platform That Teaches While Users Build

The Data Model Visual Builder transforms abstract time-series concepts into concrete, understandable structures while preventing common errors. Rather than relying on external documentation, it embeds contextual learning directly into the workflow. As a platform-level component, it now powers Scheduled Queries, Batch Load, and Custom Partition Keys—with each workflow benefiting from the same cognitive bridge between SQL and time-series thinking.

Table-Based Visual Mapping

Familiar table interface that maps columns to Timestream attributes, matching SQL users' mental models while teaching time-series concepts.

Auto Schema Detection

Automatically generates data model mappings from query validation or S3 source files, eliminating manual configuration and reducing errors.

Contextual Education

Inline guidance and validation that explains unfamiliar terms, surfaces best practices, and warns about inefficient schema choices.

Key Capabilities

1. Table-Based Visual Mapping

The heart of the system is an editable table interface that matches how SQL users already think about data. Users see their source columns (from queries or S3 files) and map them to Timestream attribute types (Timestamp, Dimension, Measure, Measure_Name) with corresponding data types (String, Integer, Double, etc.).

Why a table? SQL users instinctively understand tables. The familiar "columns → roles → datatypes" workflow provides a bridge to time-series concepts without requiring graph visualizations or abstract diagrams. The table itself is the relationship visualization, showing at a glance how source data maps to Timestream's schemaless structure.
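To illustrate what the table is teaching, here is a rough sketch of what one mapped source row becomes on the Timestream side; the record shape approximates Timestream's multi-measure write format, and all values are invented for illustration:

```typescript
// Rough sketch: one mapped source row expressed as a Timestream-style
// multi-measure record. The shape approximates the Timestream Write API's
// record format; values are invented for illustration.
const record = {
  Time: "1700000000000",                      // mapped TIMESTAMP column
  TimeUnit: "MILLISECONDS",
  Dimensions: [
    { Name: "hostname", Value: "host-24" },   // mapped as DIMENSION
    { Name: "region", Value: "us-east-1" },   // mapped as DIMENSION
  ],
  MeasureName: "server_metrics",              // mapped MEASURE_NAME value
  MeasureValueType: "MULTI",                  // multi-measure record
  MeasureValues: [
    { Name: "cpu_usage", Value: "82.5", Type: "DOUBLE" }, // mapped as MEASURE
    { Name: "mem_usage", Value: "64.1", Type: "DOUBLE" },
  ],
};
```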

2. Auto Schema Detection Across Workflows

The platform automatically generates data model mappings from different sources depending on the workflow:

  • Scheduled Queries: During "Validate Query," the backend returns column names and inferred attribute types, allowing the builder to auto-generate the entire mapping. Users can review and edit if needed, but rarely need to adjust.
  • Batch Load: Upon connecting to S3 source files (CSV/Parquet), file headers are parsed, datatypes inferred, and recommended mappings applied automatically.
  • Custom Partition Keys: When destination tables have custom partition keys, the builder detects this configuration and guides users to map the correct attribute, preventing silent ingestion errors.

This automation eliminates the most error-prone manual configuration tasks and creates a consistent schema experience across both ingestion and query workflows.
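As an illustration of the Batch Load path, here is a simplified sketch of header-based role inference; the heuristics shown are assumptions for illustration, not the service's actual inference rules:

```typescript
// Simplified sketch of role inference over parsed file headers and a sample
// row. The heuristics are illustrative assumptions, not the service's rules.
type Inferred = { role: "TIMESTAMP" | "DIMENSION" | "MEASURE"; dataType: string };

function inferRole(name: string, sample: string): Inferred {
  const lower = name.toLowerCase();
  if (lower.includes("time") || lower.endsWith("_ts")) {
    return { role: "TIMESTAMP", dataType: "TIMESTAMP" }; // name suggests a timestamp
  }
  if (sample !== "" && !Number.isNaN(Number(sample))) {
    return { role: "MEASURE", dataType: "DOUBLE" };      // numeric fields default to measures
  }
  if (!Number.isNaN(Date.parse(sample))) {
    return { role: "TIMESTAMP", dataType: "TIMESTAMP" }; // ISO-style date strings
  }
  return { role: "DIMENSION", dataType: "VARCHAR" };     // everything else: filter/group keys
}

// Example: headers plus the first data row from a parsed CSV file.
const headers = ["time", "hostname", "cpu_usage"];
const firstRow = ["2024-01-01T00:00:00Z", "host-24", "82.5"];
const recommended = headers.map((h, i) => ({ column: h, ...inferRole(h, firstRow[i]) }));
```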

3. Contextual Education & Validation

Every interaction helps users learn by doing. The interface explains unfamiliar terms inline, surfaces best practices contextually, and warns users when schema choices might lead to inefficiencies. This addresses the knowledge gap directly inside the workflow rather than forcing users to read external documentation.

Example guidance:

  • "This looks like a dimension; dimensions are used for filtering and grouping queries."
  • "Numeric fields can be measures; measures represent metric values that change over time."
  • "This table uses a custom partition key — review mapping guidance to ensure correct attribute assignment."
  • "Consider using multi-measure records for related metrics to optimize query performance."

Real-time validation prevents common mistakes before data ingestion, and progressive disclosure patterns maintain usability even with complex schemas containing dozens of dimensions and measures.
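A minimal sketch of the kind of checks behind that guidance follows; the rules and messages are illustrative simplifications, not the production validation logic:

```typescript
// Minimal sketch of real-time mapping validation. Rules and messages are
// illustrative simplifications of the inline guidance quoted above.
interface MappingRow {
  column: string;
  role: "TIMESTAMP" | "DIMENSION" | "MEASURE" | "MEASURE_NAME";
}

function validateMapping(rows: MappingRow[], partitionKey?: string): string[] {
  const warnings: string[] = [];
  const measures = rows.filter((r) => r.role === "MEASURE");

  if (rows.filter((r) => r.role === "TIMESTAMP").length !== 1) {
    warnings.push("A data model needs exactly one timestamp attribute.");
  }
  if (measures.length === 0) {
    warnings.push("No measures mapped; measures represent metric values that change over time.");
  }
  if (measures.length > 1) {
    warnings.push("Consider multi-measure records for related metrics to optimize query performance.");
  }
  if (partitionKey && !rows.some((r) => r.column === partitionKey)) {
    warnings.push("This table uses a custom partition key; map the matching attribute to avoid silent ingestion errors.");
  }
  return warnings; // surfaced inline, next to the offending table rows
}
```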

Design Impact: The Data Model Visual Builder doesn't just enable configuration—it teaches time-series best practices while users work. By embedding education directly into the workflow and building it as a platform component from the start, this solution transformed multiple customer experiences simultaneously. One well-designed component now unifies schema configuration across Scheduled Queries, Batch Load, and Custom Partition Keys, with future workflows able to leverage the same foundation.

Impact

From 90+ Days to Hours: Measurable Transformation

The Data Model Visual Builder transformed the Timestream onboarding experience, dramatically reducing setup time and improving completion rates for critical workflows. By designing it as a platform-level component from the start, the impact extended far beyond the original scope—now powering multiple cross-service workflows with the same cognitive bridge between SQL and time-series thinking.

  • 51%: Completion rate for new customers (0-7 days old) using the visual builder
  • 70%: Increase in total active users over one year (alongside cross-service integrations)
  • 90+ days → hours: Reduction in setup time, eliminating costly restart cycles

Customer Results

  • Enabled new customers to successfully import data on first attempt through Batch Load workflow
  • Increased completion rates for Scheduled Queries workflow, a key retention driver
  • Accelerated POC development: "Setting up and testing to do a POC much easier and faster" (customer feedback)
  • Eliminated the 90+ day cycle of discovering incorrect data models only after ingestion

Technical Results

  • Scaled effectively for complex data models with dozens of dimensions and measures
  • Supported multiple data model formats (IoT, DevOps, application metrics) without overwhelming users
  • Real-time validation prevented query inefficiencies before data ingestion
  • Progressive disclosure patterns maintained usability with large schemas

Business Results

  • Addressed a critical adoption barrier that was significantly impacting service growth
  • Improved customer retention through higher workflow completion rates
  • Reduced support burden by preventing common data modeling mistakes
  • Enabled faster time-to-value, improving competitive positioning

Platform Impact — Beyond Original Scope

  • Became a system-level component powering Scheduled Queries, Batch Load, and Custom Partition Keys
  • Unified schema configuration experience across multiple workflows, creating consistency for users
  • Future API ingestion workflows can leverage the same foundation without rebuilding core functionality
  • One well-designed component multiplied impact across the entire service ecosystem

Bottom Line: This project demonstrates how thoughtful UX design can address not just usability issues, but fundamental adoption barriers rooted in knowledge gaps and misconceptions. By focusing on the customer's mental model and designing systems rather than single-use features, we created a bridge to new technology that now serves multiple workflows—multiplying the impact far beyond the original scope.

Key Takeaways

Lessons in Cognitive Design

Meet Users Where They Are

SQL users needed a bridge, not a paradigm shift. Understanding customers' existing mental models (relational databases, table columns, schema-first thinking) and their context (data migration) allowed me to design an experience that translated familiar concepts to new ones rather than forcing users to abandon their existing knowledge. This cognitive bridge was the key to eliminating the 90+ day setup cycle.

Use the Product as the Learning Surface

Contextual guidance beats external documentation. Rather than relying on separate tutorials or help pages, embedding education directly into the workflow ensured customers learned the right patterns while completing real tasks. The interface itself became the teaching tool—explaining unfamiliar terms inline, surfacing best practices contextually, and validating choices in real-time.

Design Systems, Not Screens

One well-designed component unified multiple workflows. By recognizing early that Scheduled Queries, Batch Load, and Custom Partition Keys all needed schema configuration, I designed a reusable platform component rather than a single-use feature. This architectural approach multiplied the impact—the same cognitive bridge and auto-detection logic now serves multiple customer journeys, and future workflows can leverage this foundation without rebuilding core functionality.

Automate Wherever Possible

Schema inference removed the most error-prone tasks. By automatically detecting structure from query validation results or S3 source files, the system eliminated manual configuration wherever possible. Users could review and adjust if needed, but automation handled the heavy lifting—drastically reducing cognitive load and preventing common mistakes before they happened.

Conclusion

This project reinforced the importance of understanding not just what customers are trying to do, but how they think about the problem. By designing a platform that met customers at their existing level of understanding (SQL) and guided them to new concepts (time-series), we eliminated a critical adoption barrier across multiple workflows simultaneously. The architectural decision to build a reusable system rather than a single-use feature multiplied the long-term value, with the same component now serving Scheduled Queries, Batch Load, Custom Partition Keys, and future API ingestion workflows.