Case study · Steady

Building a cycling coaching app from zero.

Designing and building Steady — an AI cycling coaching app — from idea to product with users, in a two-person team.

Duration January – April 2026
Role Product design (IA, UX, UI), front-end build with Claude Code
Team Me, my co-founder
Method Iterative design, AI-assisted build, PostHog-led learning

Steady is an AI cycling coaching app for serious cyclists. It builds science-based training plans, reviews every ride, and adapts every week. I co-founded it with my husband — he built the backend, training logic, and cycling science. I designed and built the full product experience.

Steady is positioned as a coaching product: the reference point for messaging and pricing is a human cycling coach, not other training apps. The app was designed to feel like a coach rather than a training plan with a dashboard: coaching text over raw numbers, explanations over an overwhelm of data. Calm tones, minimal data, and a UI that lets the coaching text stand out.

We started with a web app to ship faster and get to real user feedback quickly. A native app will follow once we've learned enough from early usage.

The app is made up of four main pages — Today, Schedule, Overview (the training block), and Progress — plus a persistent chat as the main coaching surface.

The four sections

  • Today — today's workout and weekly snapshot
  • Schedule — weekly view with coach feedback after each ride
  • Overview — the training block, based on Stephen Seiler's block periodization
  • Progress — coaching narrative and dimension tracking

The chat is the primary surface for a two-way conversation, so it is key to the "coaching" aspect of the app.

  • Workout & weekly reviews. Coach-initiated reviews after every synced ride and at the end of each week (and training block). These are the core engagement loop.
  • Plan adjustments. User-initiated changes to the plan, such as "I'm sick today" or "There's a sportive on Sunday." The coach responds with modifications and explains why.
  • Training questions. User-initiated questions the coach can answer from everything it knows about training science and the user's individual goal, training plan, and metrics.

Chat use cases

Workout review — the coach delivers a ride review after a synced ride.
Plan adjustment
Training questions

The initial desktop layout had chat permanently open on the right. We moved away from this for two main reasons.

1. A persistent panel doesn't work well as a notification system. The coach needs to initiate conversations (e.g., "How did that ride feel?"), which doesn't work if the chat is always visible.

2. An always-open chat invites overuse, which is costly in AI usage and not necessarily better for the user.

The progress page answers "How am I doing?" and "Am I on track?" Steady determines the metrics that matter for a user's specific goal, calculates where they are physiologically, and sets target numbers. Then we generate a targeted plan to help the user close that gap.

The underlying metrics move slowly, sometimes not visibly for weeks. Designing for this without manufacturing false momentum was a hard problem to solve, and one we're still iterating on.

We tried several approaches: readiness scores, gap-closed percentages, estimated weeks to close, and starting every metric at zero. Each one looked good in a mockup but would eventually mislead the user, either by creating false confidence, ignoring real physiology, or eroding trust with inaccurate estimates.

Readiness score — false certainty. Made users look closer to ready than they were.
Gap-closed bars — 83% sounds close. It could take months.
Weeks to close — too many variables. A wrong estimate erodes trust faster than no estimate.

The coaching narrative is the main way users understand how they are doing. Like any good coaching relationship, it is nuanced and explains where the user actually is.

For at-a-glance understanding, we show a brief snapshot and then rely on trend graphs, even if they move slowly. (Building these graphs with real data also made me appreciate the details and edge cases involved, from calculating the data to deciding what happens when it doesn't exist or only partially exists.)

We added weekly training load (TSS) because it moves every week — giving users a sense of momentum even when the physiological metrics haven't budged.
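As a sketch of that load number: TSS is the standard TrainingPeaks formula, under which an hour ridden exactly at FTP scores 100. The types and helper names below are illustrative assumptions, not Steady's actual code.

```typescript
// Illustrative only: field names and types are assumptions, not Steady's code.
interface Ride {
  durationSec: number;     // moving time in seconds
  normalizedPower: number; // normalized power in watts
}

// Standard TrainingPeaks TSS: (seconds * NP * IF) / (FTP * 3600) * 100,
// where intensity factor IF = NP / FTP.
function rideTss(ride: Ride, ftp: number): number {
  const intensityFactor = ride.normalizedPower / ftp;
  return (
    (ride.durationSec * ride.normalizedPower * intensityFactor) /
    (ftp * 3600)
  ) * 100;
}

// Weekly load is simply the sum over the week's synced rides.
function weeklyTss(rides: Ride[], ftp: number): number {
  return rides.reduce((sum, r) => sum + rideTss(r, ftp), 0);
}

// One hour ridden exactly at FTP scores 100 TSS by definition.
console.log(rideTss({ durationSec: 3600, normalizedPower: 250 }, 250)); // 100
```

Unlike the physiological metrics, this number moves with every ride, which is exactly why it can give a sense of weekly momentum.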

Progress — coaching narrative
Progress — dimensions, quick facts and trends
Progress — weekly load

We use PostHog to track early usage. Over time it shows us where we have bugs and where we need to add capabilities.

  • FTP is personal. Users want to override it and see the consequences in their plan. We made it manually adjustable.
  • Plans change. Users want recurring adjustments ("Sundays are always group rides") without breaking training logic. We need to allow long-term preferences while still guarding against overtraining.
  • Reviews are engaging. Asking "how did it feel?" before delivering the review — rather than just pushing it immediately — made the feedback better and created a two-way conversation. Users are willing to share more than we expected.
  • AI makes common-sense mistakes. Things a human would never get wrong. When asked to change a workout, it changed this Monday (two days ago) instead of next Monday because it has no concept of time. Watching PostHog helped us catch and fix bugs like this quickly.

Honest data over impressive data

It's tempting to show readiness scores and closing percentages — users want certainty. But in coaching, things are more nuanced than a number can capture. We resisted the impulse to simplify and let the coaching voice do the work a single metric can't.

Building what you design changes how you design

Working in code with real data surfaces thinking that doesn't exist in prototypes. How trends are calculated, what happens with a single data point, how to display power zones in a repeatable way. It takes more time to handle every edge case, but the loop from idea to shipped result is hours, not sprints.
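To make the single-data-point case concrete, here is a hypothetical trend helper; the function name, threshold, and return type are my illustration, not Steady's actual code. The design choice it encodes: refuse to fabricate a trend from too little data.

```typescript
// Hypothetical sketch, not Steady's code: deriving a trend arrow
// without inventing one when the data can't support it.
type Trend = "up" | "down" | "flat" | null;

function trendOf(values: number[], flatThreshold = 0.01): Trend {
  // Drop gaps in the series (missing weeks arrive as NaN here).
  const clean = values.filter((v) => Number.isFinite(v));
  // A single data point has a value but no direction: render no arrow.
  if (clean.length < 2) return null;
  const first = clean[0];
  const last = clean[clean.length - 1];
  if (first === 0) return last === 0 ? "flat" : last > 0 ? "up" : "down";
  const change = (last - first) / Math.abs(first);
  // Slow-moving metrics mostly land here: a visible "flat" is honest.
  if (Math.abs(change) < flatThreshold) return "flat";
  return change > 0 ? "up" : "down";
}
```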

Building from zero is a lot of fun

After years of design leadership roles within existing products and organizations, building something from scratch (especially enabled by new tools) was the most fun I've had recently. A lot of what normally slows progress is cut out of the loop, and the focus stays on problem-solving.

AI builds fast — including technical debt

When the backend was built with Claude Code without strict architectural constraints, we accumulated tech debt very quickly. AI doesn't proactively do systems thinking — componentization, data architecture, domain-driven design. Building well with AI still requires discipline.