
The Pricing Playbook: Harmon, Sharma, Lee, and Holm on Growth Through Monetization

The best growth leaders treat pricing like any other product feature—something that shapes customer behavior, drives outcomes, and evolves constantly. Here's how four top growth executives think about monetization.

Apr 11, 2026 | 6 min read | By Growth.Talent

Higher Minimums, Faster Love: Why Miro Made It Harder to Buy

Most SaaS companies obsess over lowering barriers to purchase. Akshay Sharma took the opposite path. When Miro was finding product-market fit, the team made a counterintuitive choice: they set a five-user minimum on paid plans. Not to maximize revenue, but to maximize love.

The logic was simple. Miro's value emerged when multiple people collaborated on a board together. A solo user might pay, but they wouldn't fall in love. The five-user floor ensured that by the time someone converted, they were getting real value. When Miro introduced the Business plan with security features like SSO, Sharma raised the bar again—20 users minimum. The reasoning was the same: if you were smaller than that, you could pay the higher price, but you probably weren't extracting enough value to justify it.

These were choices that were not short-term optimal, but they were the right ones for the company.

— Akshay Sharma

Only after Miro had proven product love and scaled revenue did the team reverse course, lowering minimums to one user through experimentation. The principle held throughout: pricing isn't about extraction. It's about aligning customer success with company goals. Sharma's framing is blunt: pricing is another product feature. It changes customer behavior. You should use it intentionally.

The OpenAI Paradox: Getting Better, Charging Less

Alison Harmon runs growth at OpenAI, where the pricing playbook flips conventional wisdom on its head. As the models get smarter and more capable, OpenAI has repeatedly lowered prices. The goal isn't margin optimization—it's market ignition.

The AI category is nascent. OpenAI's success metric right now is adoption at scale: getting as many companies and users as possible to try the platform, integrate it into workflows, and discover use cases. Lower prices accelerate that. As the company gets more efficient at serving inference, cost savings get passed to customers, which in turn unlocks more usage within accounts and across new customers.

You'd think as we increase value, we should increase price. But really, while we're trying to ignite the market, the goal is just to get as many companies exposed to AI as possible.

— Alison Harmon

Harmon is clear-eyed about context. This approach works for OpenAI now, but it's not a universal rule. The key is matching pricing strategy to company stage and market maturity. In a land grab, friction is the enemy. In a mature category with established value, you can afford to optimize differently.

Packaging as Customer Segmentation, Not Feature Bucketing

Carsten Holm strips away the abstraction around packaging. Good-better-best isn't about organizing features. It's about segmenting customers by willingness to pay and value realization. At Splunk, that insight reshaped how the company sold its data analytics platform.

Splunk's legacy model required customers to buy the platform, then layer on add-ons like Enterprise Security or IT Service Intelligence. Two line items, added friction, slower deals. Over time, the team observed that customers adopting specific use cases ended up expanding platform consumption anyway. The packaging was creating artificial separation between platform and application.

When you think about packaging, what you really want to do is segment your customer base. Different customer segments have different willingness to pay.

— Carsten Holm

Holm's broader point applies everywhere: the value metric is the hardest lever to change after launch. Numbers—price points, user limits—can flex. But the underlying metric that captures customer value? That's structural. Get it wrong early, and you're stuck with it. The customer has to agree that the metric reflects how they derive value, or the exchange falls apart.

Experimentation Theater and the South Africa Test

Janie Lee leads product at Loom, where she's developed a clear-eyed view of when to experiment on pricing and when not to. The first question isn't "Can we test this?" but "What are we trying to learn?" Are you validating product-market fit for a new package, or de-risking a change to your most popular tier? The risk profile and data requirements are completely different.

Lee's framework is pragmatic. Most companies don't have enough volume far enough down the funnel to run statistically meaningful pricing experiments quickly. If you're enterprise or sales-led, forget it. Even in high-volume self-serve, you'll wait months for reliable churn data. And if you're testing changes to an existing package that most customers use, any move directly impacts your bottom line.
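Lee's point about volume can be made concrete with a back-of-envelope power calculation. The sketch below uses the standard two-proportion sample-size formula (95% confidence, 80% power); the baseline rate and lift are illustrative assumptions, not figures from Loom.

```python
from math import ceil

def sample_size_per_arm(p_base, rel_lift, z_alpha=1.96, z_power=0.84):
    """Approximate users needed per arm to detect a relative lift in
    conversion rate at ~95% confidence and ~80% power (two-proportion
    normal approximation)."""
    p_test = p_base * (1 + rel_lift)
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return ceil((z_alpha + z_power) ** 2 * variance / (p_base - p_test) ** 2)

# Illustrative: a 3% baseline conversion rate and a hoped-for 10% relative lift
n = sample_size_per_arm(0.03, 0.10)
print(n)
```

With those assumptions the answer lands in the tens of thousands of users per arm, far down the funnel—which is exactly why sales-led and low-volume businesses can't experiment their way to pricing answers.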

Don't test something you wouldn't ship. And the corollary is also true: don't test something you wouldn't roll back.

— Janie Lee

When Loom does experiment, they keep it simple, segment tightly, and set guardrail metrics. Revenue or conversion lifts can mask long-term damage to engagement or user growth. Lee also stresses customer-first execution: support and sales teams need clear comms plans, and if someone discovers they're in a test, the company offers the most generous version of the policy.

Holm adds a tactical wrinkle from his enterprise days: test in low-risk geographies first. He ran ERP pricing experiments in South Africa—English-speaking, isolated market, minimal brand risk if things went sideways. Find your South Africa.

Failure Is the Default, So Build for It

Sharma's team at Miro ran an experiment that seemed like a slam dunk. They identified the top features driving conversion from Starter to Business plans and let Starter users test them for free. The hypothesis: users would see the value and upgrade faster. Conversion rates collapsed. Users thought the trial was all Business had to offer and decided it wasn't worth the price jump.

The inverse also happens. Miro put GIFs behind a paywall on free plans. The team expected modest lift. Conversion jumped 15%. Both outcomes teach the same lesson: expect to be wrong. Experiments fail more often than they succeed, and the unexpected happens constantly.

The de-risking is, as a leader, assuming the mindset that failures will happen. The insurance that lets you keep running experiments and learning is more important than the success of any given experiment.

— Akshay Sharma

The real discipline isn't in running perfect tests. It's in building a culture where failed experiments don't kill momentum. That means tight scoping, thoughtful rollback plans, and cross-functional alignment with sales, support, and customer success before launch. Pricing is a one-way door once it's public, so the insurance—limited exposure, clear comms, willingness to revert—matters more than any single result.

No Silver Bullet, Just Clear Outcomes

Across all four leaders, one theme recurs: there is no universal pricing formula. The right model depends entirely on what you're trying to achieve. User adoption? Minimize friction. Revenue maximization? Find the ceiling of willingness to pay. Product love? Set thresholds that ensure value realization before conversion.

Sharma's mental model is elegant: pricing is a cost-benefit exchange. On one side, the customer expects benefit from your product. On the other, they weigh expected cost. Pricing influences that cost in multiple ways—list price, discounts, value metric, packaging. As you change the benefit side through product features, go-to-market, or brand, you have to adjust the cost side to hit your goal.

Harmon, Lee, Holm, and Sharma all operate from the same foundation: be ruthlessly clear on your objective, pick the levers that move it, and evolve them as your company and market mature. Pricing isn't a one-time decision. It's a feature you ship, measure, and iterate. Treat it that way.
