The Contrarian Math of Activation
Lauryn Isford believes that an activation rate of 5 to 15% is often better than one well above that range. Not because lower numbers are easier to celebrate in performance reviews, but because they signal something most growth teams miss: you've defined activation as something that actually predicts long-term retention. The logic is simple and ruthless. If your activation rate sits at 60%, you've probably set the bar too low. You're measuring a vanity milestone (opening the app, clicking a button) rather than the moment a user crosses into meaningful value. A lower rate means the bar is set high: you're working to get users to a state they're not reaching today, one that correlates with sticking around.
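The selection logic here can be made concrete. A rough sketch of how a team might compare candidate activation milestones, using entirely made-up synthetic data (the milestone names, rates, and retention numbers below are illustrative assumptions, not Airtable's):

```python
import random

random.seed(0)

# Synthetic cohort: a shallow milestone ("opened the app") is common but
# weakly tied to retention; a deeper one ("built a working base") is
# rarer but strongly tied. All probabilities are illustrative.
cohort = []
for _ in range(10_000):
    deep = random.random() < 0.10            # ~10% hit the deep milestone
    shallow = deep or random.random() < 0.55  # shallow milestone is easy
    retained = random.random() < (0.60 if deep else 0.05)
    cohort.append({"opened_app": shallow,
                   "built_working_base": deep,
                   "retained_w8": retained})

def evaluate(milestone):
    """Return (activation rate, retention if hit, retention if missed)."""
    hit = [u for u in cohort if u[milestone]]
    miss = [u for u in cohort if not u[milestone]]
    rate = len(hit) / len(cohort)
    ret_hit = sum(u["retained_w8"] for u in hit) / len(hit)
    ret_miss = sum(u["retained_w8"] for u in miss) / len(miss)
    return rate, ret_hit, ret_miss

for m in ("opened_app", "built_working_base"):
    rate, ret_hit, ret_miss = evaluate(m)
    print(f"{m}: activation {rate:.0%}, "
          f"retention {ret_hit:.0%} if hit vs {ret_miss:.0%} if missed")
```

On this toy data, the shallow milestone produces a flattering activation rate with little retention separation, while the deep milestone lands in Isford's 5 to 15% range and cleanly splits retained from churned users, which is exactly the property she argues a real activation metric should have.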
This framing flips the usual growth playbook. At scale companies—Meta, where Isford led product growth for Facebook in India and internet.org, or Airtable, where she ran growth—teams optimize relentlessly for percentage-point lifts. But Isford's thesis is that precision can be a trap. The difference between a 6% and 7% activation bump might matter for your performance review, but it rarely matters for the business. What matters is whether you're solving the right problem.
When Isford's team at Airtable rebuilt onboarding over 12 to 18 months, they drove a 20% lift in activation. That wasn't from A/B testing button colors. It came from immersive, guided experiences that reduced cognitive load and met users where they were. The work included a step-by-step wizard, personalized use-case scaffolding, and what the team nicknamed "the Mole"—a design pattern that pops up from the bottom of the screen to deliver ongoing education. Tooltips, once central to Airtable's onboarding, were retired.
An activation rate that falls in a lower percentage range, maybe for most companies, 5 to 15%, is better than one that falls in a higher percentage range because it means that there's likely much higher correlation with long-term retention.
— Lauryn Isford
Experiments Are Expensive (and Often Unnecessary)
Isford has a spicy take for anyone steeped in growth hacking culture: you probably run too many experiments. This from someone who spent years at Facebook, the temple of experimentation, and then led growth at Airtable, a product-led growth darling. But her argument isn't anti-rigor. It's anti-waste.
She sorts the reasons teams experiment into two buckets. One: to understand metric impact with precision. Two: to mitigate risk when making dramatic changes. The problem is that the first reason, precision, often doesn't justify the cost. Engineers, analysts, and PMs spend hours interpreting experiment results when they could be roadmapping, talking to customers, or shipping. Experiments, Isford argues, should primarily be a risk-mitigation tactic, not the default.
Her example: Airtable Forms, a feature that lets users create forms and collect submissions from non-Airtable users. The team noticed a gap—submitters couldn't request a copy of their submission via email. So they built it. The feature required users to create an account, which meant it could shift signup metrics and activation mix. But instead of running an A/B test, they just turned it on. The product team had done rigorous customer analysis. They knew it would bring value. And even if it didn't move certain top-line metrics in the "right" direction, it was the right thing for customers using forms.
The result? A big enough impact on signups that they saw it at the top line without needing an experiment. They used post-launch attribution to analyze what happened. No weeks lost to experiment design. No engineering overhead. Just conviction and speed.
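The article doesn't detail the mechanics of that post-launch attribution, but the simplest version is a before/after comparison of the top-line metric around the launch date. A minimal sketch with invented daily signup counts (real attribution would also control for seasonality, marketing pushes, and other concurrent launches):

```python
from statistics import mean

# Hypothetical daily signups in the week before and after a feature
# launch. These numbers are illustrative, not Airtable's.
pre_launch = [980, 1010, 995, 1005, 990, 1000, 1020]
post_launch = [1150, 1180, 1160, 1200, 1175, 1190, 1210]

baseline = mean(pre_launch)    # expected signups/day without the feature
observed = mean(post_launch)   # signups/day after shipping
lift = (observed - baseline) / baseline

print(f"baseline {baseline:.0f}/day, post-launch {observed:.0f}/day, "
      f"lift {lift:.1%}")
```

The trade-off is exactly the one Isford describes: this reads noisier than a controlled A/B test, but it costs nothing up front, and when the effect is large enough to show up at the top line, that precision was never needed.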
Experiments can be expensive. Sometimes if the business—let's say activation rates go up 6% versus 7%—that precision actually doesn't help all that much beyond being able to say in your performance review, hey, I increased activation by 7%.
— Lauryn Isford
Isford acknowledges the cultural headwind here. At companies like Airbnb, your ability to point to a 14% metric lift from experiments is how you prove impact, earn promotion, and secure resources. But she believes you can build a different culture—one where rigor comes from understanding customer problems deeply, not from statistical significance. That means rewarding teams for qualitative feedback, deals closed, and doing right by users, not just for experiment velocity.
The Guided Onboarding Wizard That Killed Tooltips
The most impactful piece of Airtable's onboarding overhaul wasn't personalization or ongoing education. It was the guided onboarding wizard—an immersive, step-by-step experience that asked users questions, let them select from easy-to-click buttons, and visualized their workflow coming to life on the right half of the screen as they made choices on the left.
This wasn't a tooltip tour. Tooltips, Isford explains, had been central to Airtable's onboarding for years. But they don't reduce cognitive load. They add to it. Users faced with a blank canvas and a complex product needed scaffolding, not pointers. The wizard helped more than 90% of users build something functional without requiring them to understand the full power of Airtable upfront. It met them where they were.
The approach required deep customer work. Isford's team spent months speaking to users, watching them get started, and looking for behavioral patterns. They identified clusters: someone familiar with databases who prefers building from scratch has different needs than someone exploring a project management tool recommended by a colleague. But when they launched the wizard, it was one-size-fits-all. Personalization came later. The insight was that one generic, well-designed onboarding could solve the majority of use cases if it focused on what users actually needed, not what the business wanted them to learn.
We really worked hard to prioritize what the user actually needed and to consider what was necessary education versus what could be ongoing education.
— Lauryn Isford
Airtable has advanced features—automations, integrations, relational databases. But dumping those on day-one users is counterproductive. Isford's team treated onboarding as a portfolio, not a checklist. Guided onboarding got users to basic scaffolding. The Mole handled ongoing education, popping up contextually to help users level up from beginner to intermediate to advanced. Personalization layered in use-case relevance. The result was a system that reduced effort, increased support, and let users build something meaningful fast.
What Customers Need Versus What the Business Wants
Isford is careful to note that immersive wizards aren't a universal pattern. They work for complicated products where cognitive load is high and figuring out where to start is genuinely hard. For simpler tools, the juice might not be worth the squeeze. But the underlying principle transfers: you have to delineate what the user needs from what your business wants.
Growth teams at scale companies—especially those with freemium models, self-serve products, or sandbox demos—often conflate the two. They want users to see advanced features, explore monetization pathways, and hit conversion milestones. But users just want to solve a problem. If you can help them do that, retention follows. If you can't, no amount of tooltip tours or email nurture will save you.
Isford's work at Airtable, and before that at Facebook and Blue Bottle's e-commerce business, reflects a consistent worldview: results-oriented teams that move with urgency and do right by customers will outperform teams optimizing for performance review metrics. That doesn't mean abandoning rigor. It means investing rigor where it matters—understanding customer problems, designing for real needs, and shipping with conviction.
A growth team or a growth org exists in service of improving the business and delivering results for the business. And whether or not you measure those precisely in an A/B test, you still shipped them or you didn't.
— Lauryn Isford
She also believes onboarding is one of the most undervalued growth levers, especially for products with self-serve elements. It's a choke point. Everything downstream—conversion, deal closures, expansion within organizations—flows from it. Get onboarding right, and lots of good things follow. Get it wrong, and no amount of paid acquisition or feature development will compensate.
Her advice for teams tackling onboarding: celebrate any clear, statistically significant improvement in activation rate. But don't mistake precision for impact. Build for the user, not the dashboard. And remember that lower activation rates, if they correlate with retention, are often a sign you're doing something harder and more valuable than chasing vanity metrics.
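The "statistically significant" bar she mentions is typically checked with a standard two-proportion z-test. A sketch with illustrative numbers, using her own 6% versus 7% example; note that with a large enough sample even that one-point bump clears significance, which is precisely why significance alone doesn't tell you whether the lift matters for the business:

```python
from math import sqrt
from statistics import NormalDist

def activation_lift_z(activated_a, n_a, activated_b, n_b):
    """Two-proportion z-test comparing activation in control vs variant."""
    p_a, p_b = activated_a / n_a, activated_b / n_b
    p_pool = (activated_a + activated_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return p_b - p_a, z, p_value

# Illustrative: 6% -> 7% activation on 20,000 users per arm.
lift, z, p = activation_lift_z(1200, 20_000, 1400, 20_000)
print(f"lift {lift:.1%}, z = {z:.2f}, p = {p:.5f}")
```

At this sample size the one-point lift is highly significant, yet by Isford's argument it may still be the wrong thing to celebrate. The test answers "is the effect real?", not "was this the right problem to solve?".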