Growth.Talent
Episode Insight · onboarding · activation · retention

Lauryn Isford on Why Your Activation Metric Should Be in the Single Digits

Most teams pick activation metrics that are too easy to hit. Lauryn Isford explains why single-digit activation rates correlate with long-term retention, and how Airtable's team drove 20% activation growth by rethinking onboarding.

Apr 11, 2026 | 4 min read | By Growth.Talent

Pick an Activation Metric That's Hard to Hit

Most growth teams celebrate when 40% of new users hit their activation milestone. Lauryn Isford thinks that's a red flag.

When she led growth at Airtable, her team focused on a metric that only 5-15% of users reached: week 4 multi-user active. That means more than one person on a team actively contributing to a workflow in the fourth week after signup.

An activation rate that falls in a lower percentage range, maybe for most companies, 5 to 15%, is better than one that falls in a higher percentage range because it means that there's likely much higher correlation with long-term retention. And you're really working hard to get most of your users to reach a state that they're not reaching today.

— Lauryn Isford

The logic is simple: if 40% of users are week 4 active, only a fraction will still be around at month 12 or 24. But if 5% reach a more demanding bar, those users stick around. Moving from 5% to 6% has massive downstream effects on retention and revenue.
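The arithmetic behind this claim is easy to make concrete. A minimal sketch with purely illustrative numbers (the long-term retention rates below are assumptions for the example, not Airtable's data):

```python
import math

# Illustrative only: all rates here are made-up assumptions,
# not figures reported in the episode.

def retained_at_month_12(signups, activation_rate, retention_given_activated):
    """Users still around at month 12 = signups who activate
    times long-term retention among those who activated."""
    return signups * activation_rate * retention_given_activated

signups = 10_000

# An easy bar: 40% of users hit it, but hitting it only weakly
# predicts sticking around (assume 10% of them retain).
easy = retained_at_month_12(signups, 0.40, retention_given_activated=0.10)

# A demanding bar like week 4 multi-user active: only 5% hit it,
# but assume those users almost all stay (80% retain).
hard = retained_at_month_12(signups, 0.05, retention_given_activated=0.80)

# Nudging the hard bar from 5% to 6% is a 20% relative lift,
# which flows straight through to retained users.
hard_plus_one = retained_at_month_12(signups, 0.06, retention_given_activated=0.80)

print(easy, hard, hard_plus_one)
```

Under these toy assumptions the two bars retain the same number of users at month 12, but only the demanding bar gives the team a metric where each point of improvement translates directly into retained, revenue-generating users.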

Stop Running So Many Experiments

Isford has a contrarian take for someone who spent years at Facebook and Airtable: growth teams experiment too much.

She sees two reasons teams run A/B tests. One is precision—wanting to know if activation moved 6% or 7%. The other is risk mitigation when making dramatic changes. Most of the time, you don't need the precision, and the cost is higher than you think.

Experiments can be expensive. Let's say activation rates go up 6% versus 7%. That precision actually doesn't help all that much beyond being able to say in your performance review, hey, I increased activation by 7%.

— Lauryn Isford

Engineers, analysts, and PMs spend days interpreting experiment results instead of shipping or doing foundational analysis. Her advice: experiment when you need to mitigate risk, but otherwise invest in rigorous customer research, get mocks in front of users, and ship with conviction.

When Airtable launched a feature letting form submitters request a copy of their responses, the team didn't run an experiment. They knew from customer research that people wanted it. They turned it on, watched the top-line metrics, and used attribution to learn afterward. It moved the needle without the overhead.

Building a Culture That Doesn't Require Experiments for Credit

The challenge is performance reviews. In most growth orgs, engineers and PMs get rewarded for the percentage lift their experiments drove. Escaping that trap requires intentional culture building around doing right by customers and measuring impact through qualitative feedback, deals closed, or deals saved—not just A/B test readouts.

How Airtable Drove 20% Activation Growth

Isford's activation team rebuilt Airtable's onboarding over 12-18 months. Three big efforts drove a combined 20% lift in activation: guided onboarding, personalization by use case, and ongoing education.

The biggest win was the guided onboarding wizard. Instead of tooltips pointing users to features, Airtable built an immersive step-by-step experience. Users answer questions on the left side of the screen—what kind of project are you working on, what do you want to track—and watch their workflow visualize on the right as they make selections.

The key insight: meet users where they are. Airtable has powerful features like automations, but new users don't need advanced stuff on day one. The team prioritized what users actually needed over what the business wanted to show off.

We really worked hard to prioritize what the user actually needed and to consider what was necessary education versus what could be ongoing education and building it out.

— Lauryn Isford

For personalization, Airtable segmented by learning style and building style, not job function. Someone technical who likes to build from scratch needs different onboarding than someone exploring a tool recommended by a colleague—even if they're both in marketing. Templates can inspire, but teaching someone how to think about databases is more effective than showing them a project management template.

Break Your Activation Metric Into Components

Week 4 multi-user active sounds simple, but Isford's team decomposed it into every subcomponent: How many teams sign up versus individuals? How many make it to week 4? How many have 2+ people invited? 2+ people ever active? 2+ active in week 4 specifically? What counts as "active"?

That forensic breakdown revealed which levers had the most impact. But the team didn't stop there. They added two more metrics alongside activation: week 2 and week 4 retention for individuals, and "Build"—a sophistication score measuring whether users reached intermediate usage levels.

Why add metrics when everyone says pick one North Star? Because a single metric didn't capture the full picture. Some onboarding changes helped users build more sophisticated workflows but didn't increase teammate invites. Others boosted retention but not sophistication. Tracking all three let the team celebrate wins across multiple dimensions and stay well-rounded.

Isford has one more spicy take: if you're working on the same metric forever, you're probably leaving impact on the table. Growth teams should be agile. Once team activation looks healthy, maybe user retention is the next opportunity. Or conversion. Or a different retention signal. Stability helps with momentum, but don't over-optimize finding the perfect metric. By the time you do, it's probably time to move on.

Source Episode

Mastering Onboarding

Lenny's Podcast · 64 min
