The Counterintuitive Truth About Failed Experiments
Most growth teams celebrate their wins and quietly bury their losses. Phil Carter does the opposite. After seven years leading product and growth at companies like Quizlet, Faire, and Ibotta, he's developed a discipline that feels almost masochistic: he hunts for the A/B tests that failed spectacularly, then uses them as launchpads for his biggest wins.
Oftentimes the biggest wins come right on the heels of a failed A/B test. That's not what I was expecting to see, but based on what I am seeing now, I've learned something new about the customer. So now let's go run a second A/B test on the basis of that learning, and then suddenly you sort of unlock a new opportunity.
— Phil Carter
This isn't just philosophical. When Carter interviews growth PMs today, he explicitly asks candidates to walk through an experiment that bombed in a way they didn't expect—one that led to a huge subsequent win. The question filters out resume padders and surfaces the intellectually curious, the ones who treat failure as signal rather than noise. It's a lens that defines his entire approach: growth isn't about optimizing button colors or chasing vanity metrics. It's about building a feedback loop between what you thought would work and what actually happened, then weaponizing that delta.
Why Quizlet's Growth Team Looked Nothing Like MasterClass's
Carter's time at Quizlet offers a masterclass in first-principles thinking. When he joined, the education platform was already one of the most trafficked sites in the U.S., driven almost entirely by organic acquisition—95% through SEO and word-of-mouth between students and teachers. That meant the growth team he built looked radically different from what you'd hire at a company like MasterClass, where higher price points and paid advertising dominate.
The growth leader and growth team that you would go build at a company like Quizlet that's 95% plus organic acquisition through SEO and word of mouth would look extremely different than the growth team you might go hire if you were at a company like MasterClass with a higher price point and where a lot of the acquisition is coming from paid advertising.
— Phil Carter
At Quizlet, students searched for help with homework or exam prep, stumbling onto long-tail search queries that Quizlet owned because of its massive library of user-generated flashcards. The growth motion wasn't about optimizing ad spend or conversion funnels—it was about understanding search intent, content depth, and viral loops within classrooms. Carter's insight: you can't copy-paste a growth playbook. The fundamental dynamics of how your business acquires users dictate who you hire and what they focus on.
This is why he pushes back on the idea of hiring a growth team pre-product-market fit. Before you know if you're building the right product, or which channel will scale cost-efficiently, you don't yet know what kind of growth leader you need. The exception? Engineers or analysts who naturally exhibit intellectual curiosity about spikes in certain channels or customer segments. Jason van der Merwe at Strava started as an early engineer, then became director of growth engineering because he was obsessed with figuring out why certain experiments moved the needle.
The Sugar High That Kills Your Channels
Carter has a vivid metaphor for one of the most common growth mistakes: the sugar high. Teams see a metric plateau, panic, and crank up the volume—more push notifications, more emails, more retargeting ads. In the short term, it works. Engagement ticks up. Conversions rise. Then, weeks or months later, the channel dies.
Anytime you increase the number of notifications or emails you send, in the short term, it's like this sugar high. It's going to lead to a short-term pop in your metrics. But if you do that too many times, you kill the channel.
— Phil Carter
The issue is mistaking outputs for inputs. A board cares about ARR, subscribers, ARPU. But those are outcomes. The inputs—signup rate, activation rate, cost per trial, trial conversion rate—are where growth teams actually operate. Carter's job as a growth leader was to connect the dots: translate company strategy into output targets, then reverse-engineer which input metrics have the most leverage to get there. That means understanding baseline performance over 12 months, benchmarking against comparable companies, and diagnosing which levers are underperforming.
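To make the input-versus-output distinction concrete, here is a minimal sketch of how input metrics compound into the outputs a board sees. All numbers, rates, and the funnel shape are illustrative assumptions, not Carter's or any company's figures:

```python
# Hypothetical subscription funnel: input metrics multiply into output metrics.
# Every number below is made up for illustration.

monthly_visitors = 200_000      # top of funnel
signup_rate = 0.08              # input lever
activation_rate = 0.55          # input lever
trial_start_rate = 0.30         # input lever
trial_conversion_rate = 0.25    # input lever

def new_subscribers(visitors, signup, activation, trial_start, trial_conv):
    """The output metric (new subscribers) is just the product of the inputs."""
    return visitors * signup * activation * trial_start * trial_conv

base = new_subscribers(monthly_visitors, signup_rate, activation_rate,
                       trial_start_rate, trial_conversion_rate)

# A 10% relative lift on ANY single input lifts the output by the same 10%,
# which is why growth teams rank levers by how movable each input is.
lifted = new_subscribers(monthly_visitors, signup_rate * 1.10, activation_rate,
                         trial_start_rate, trial_conversion_rate)

print(f"baseline new subs/month: {base:,.0f}")
print(f"after +10% signup rate:  {lifted:,.0f}")
```

The point of the toy model is the reverse-engineering Carter describes: given an output target, you work backward through the multiplication to find which input has both room to move and leverage when it does.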
The art is recognizing that every product has unique user psychology. Benchmarks are a starting point, not gospel. A consumer subscription app might lose more than 50% of annual subscribers in the first year and over 50% of monthly subscribers in the first three months—but that doesn't mean your retention curve should mirror the average. Carter's focus is on understanding why your metrics differ, not just accepting industry norms.
Big Swings Versus Small Optimizations: The S-Curve Answer
Should a growth team focus on backlink optimization and paywall tweaks, or on one or two audacious bets per year? Carter's answer is both—but the mix depends entirely on where you sit on the S-curve.
Early-stage companies—seed or Series A—should ignore small optimizations. Three reasons: the impact won't move the needle enough, you should still have low-hanging fruit to harvest, and you likely don't have enough users to measure statistical significance on marginal tests. Big swings are the only rational bet. But once you're in hypergrowth—the steep part of the S-curve—small optimizations can generate millions in incremental revenue. A decimal point improvement in checkout conversion at Amazon is worth the effort because volume amplifies every marginal gain.
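The statistical-significance point can be made with back-of-envelope math. A rough sketch using the standard normal-approximation sample-size formula for comparing two conversion rates (baseline rate, lift sizes, and thresholds below are all illustrative assumptions):

```python
# Approximate users needed per A/B test arm to detect a relative lift in a
# conversion rate, via the standard two-proportion normal approximation.
import math

def visitors_per_arm(p_base, rel_lift, alpha_z=1.96, power_z=0.84):
    """Users per variant for ~80% power at 5% two-sided significance."""
    p_new = p_base * (1 + rel_lift)
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return math.ceil((alpha_z + power_z) ** 2 * variance / (p_new - p_base) ** 2)

# A big swing (+30% on a 4% baseline) vs. a marginal tweak (+3%):
big_swing = visitors_per_arm(0.04, 0.30)   # on the order of thousands per arm
marginal  = visitors_per_arm(0.04, 0.03)   # on the order of hundreds of thousands

print(big_swing, marginal)
```

Under these assumed numbers the marginal test needs roughly two orders of magnitude more traffic than the big swing, which is why a seed-stage product with a few thousand weekly users simply cannot resolve small optimizations.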
If you're in the early part of that S-curve, which by definition means any seed or Series A startup, you should not be focusing on small optimizations. It's a waste of time. You should still have lots of low-hanging fruit. If you're still that early, you should be taking bigger swings.
— Phil Carter
The trap is hiring specialists too early. Carter admits he made this mistake at Ibotta and Quizlet—hiring PMs with deep design chops or quantitative backgrounds, thinking expertise in a narrow domain would unlock growth. He learned the hard way that growth PMs need to be strong generalists who can flex to whatever the business demands in any given quarter. The traits that predict success aren't specialized skills but intellectual curiosity, bias toward speed, and a willingness to take smart risks.
How to Hire Growth Talent Without Introducing Bias
Carter has strong opinions on homework assignments for growth candidates. The goal is to test first-principles thinking and quantitative rigor without burning hours or introducing bias. His approach: pick a well-known company like Uber or Airbnb, pose a hypothetical scenario, and keep the scope tight—one to two hours max before diminishing returns kick in. Assignments about your own startup penalize candidates who don't already know your product. Open-ended prompts reward people with more free time, often disadvantaging candidates with families or full-time jobs.
Beyond the assignment, Carter asks two diagnostic questions. First: name one or two growth teams you admire at other companies. The question reveals whether someone is passionate about the discipline—reading Lenny's newsletter, listening to growth podcasts, studying other teams. The best growth people are obsessed with the craft. Second: describe a non-obvious experiment that performed well in an unexpected way, and then walk through an A/B test that failed but led to a major win. The first question tests creativity and learning agility. The second surfaces whether a candidate treats failure as a teaching moment or a dead end.
The best people in this field tend to be very passionate about it. They spend a lot of their time reading Lenny's newsletter or listening to podcasts or doing their own research to really hone their craft. If they can't name at least one or two other companies that they really look up to, that's a telltale sign.
— Phil Carter
CAC Inflation and the Brutal Economics of Consumer Subscriptions
Carter doesn't sugarcoat the economics. Customer acquisition costs go up over time almost by definition. The average consumer subscription app hemorrhages more than 50% of annual subscribers in year one and over half of monthly subscribers in the first three months. Those aren't aberrations—they're the baseline reality.
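A quick sketch of why those retention numbers are so brutal for unit economics. Losing half of monthly subscribers within three months implies roughly 20% monthly churn (0.8³ ≈ 0.51); the price and CAC below are assumptions for illustration, not figures from Carter:

```python
# Back-of-envelope LTV under the retention baseline Carter cites.
# The price and CAC are illustrative assumptions.

arpu = 9.99            # assumed monthly price
monthly_churn = 0.20   # implied by ~50% retention after three months

# With constant (geometric) churn, expected lifetime in months is 1 / churn.
expected_lifetime_months = 1 / monthly_churn
ltv = arpu * expected_lifetime_months

cac = 35.0             # assumed blended acquisition cost
ratio = ltv / cac

print(f"LTV ≈ ${ltv:.2f}, LTV/CAC ≈ {ratio:.2f}")
# Rising CAC squeezes this ratio even when retention holds steady —
# which is why CAC inflation alone can break a subscription business.
```

Under these assumptions the LTV/CAC ratio sits well below the 3:1 rule of thumb often cited for healthy subscription businesses, before CAC inflation makes it worse.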
This is why he's allergic to generic growth advice. The through line in his career—venture capital, then seven years in product and growth, now advising—has been working with consumer businesses that make people's lives better, especially subscription models. He's seen the same patterns repeat: teams chase growth without understanding whether the product itself has become the primary growth asset, which is the defining shift of the last 15 years since Facebook built the first modern growth team.
What's fundamentally changed over the last 15 years or so since Facebook created what I would call the first modern growth team is that the product itself has become the most valuable asset powering the growth of many tech companies.
— Phil Carter
For consumer products without sales teams, the product has to do the heavy lifting—getting into users' hands faster, communicating value, driving conversion and retention. That's why Carter believes growth leaders need to sit between the CEO's output metrics and the team's input metrics, translating strategy into lever-pulls that actually compound. The science is benchmarking and baseline analysis. The art is understanding the user psychology that makes your product different, then building a roadmap around the experiments—including the failed ones—that teach you something new.