Growth.Talent
Guest Profile · b2b · experimentation · team-building

Jordan Hwang on Building Trust, Testing Smart, and Stacking Growth Curves

Jordan Hwang has spent 15 years dismantling the myth that growth curves are smooth. The real picture looks more like programs stacking, dying, and launching again—all at once.

Apr 11, 2026 | 7 min read | By Growth.Talent

The Staircase Illusion: Why Growth Is Never Actually Linear

Most growth charts lie. Not because the numbers are wrong, but because the clean upward trajectory hides what's actually happening underneath. Jordan Hwang, VP of Marketing at OpenPhone, has a different mental model. He sees growth as a series of overlapping programs—some peaking, some flatlining, some just beginning to catch—that, when stacked, create the illusion of a smooth line going up and to the right.

"What that actually looks like is like it's multiple programs that are stacking on top of each other," Hwang explains. "If we've done this really well, then what that means is that as your initial programs start to saturate out or they start to flatline, a new program is coming up and taking on that charge. And so once you stack them all together, it's basically like this, and then it's this, and then it's this, and comes up to like a line that goes up into the right."

You have to continually invest in that to make it happen. There are things dying off and things growing, all at the same time.

— Jordan Hwang

This mindset has real consequences. At Gusto, where Hwang led customer acquisition, lifecycle marketing, and go-to-market during a phase of 50% year-over-year revenue growth, the expectation wasn't just to keep the engine running—it was to know when to kill what's working and replace it before the dip became visible. The staircase model forces a culture of relentless testing, not as a luxury, but as infrastructure.
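Hwang's staircase model can be sketched numerically: several saturating programs, launched at staggered intervals, sum to a total that looks like one smooth upward line. A minimal illustration, with every curve and parameter hypothetical rather than drawn from any real channel data:

```python
import math

def program(t, launch, scale=100.0, rate=0.15):
    """One growth program modeled as a logistic S-curve: it starts at
    `launch`, ramps, then flattens as it saturates near `scale`.
    All parameters are illustrative, not real channel numbers."""
    if t < launch:
        return 0.0
    return scale / (1.0 + math.exp(-rate * (t - launch - 30)))

# Stagger launches so each new program ramps as the previous one flatlines.
launches = [0, 40, 80, 120]

for t in range(0, 160, 20):
    total = sum(program(t, launch) for launch in launches)
    print(f"t={t:3d}  stacked output={total:6.1f}")
```

Each individual curve flatlines on its own, but because a fresh curve is always ramping, the stacked total keeps climbing, which is the "line that goes up and to the right" Hwang describes.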

Experiment to Learn, Not to Win

Hwang is blunt about what kills experimentation culture: people who run tests hoping to be right instead of hoping to know something new. "Something that gets—that I find generally to be like really underrated is like quality of experiment design," he says. "Largely because of two things. One of them is, and I think we've come across this a lot, like we've all come across this, is like people who experiment to win versus experimenting to learn."

The difference isn't semantic. Experimenting to win means choosing metrics that flatter the hypothesis, running tests until you get a result you can celebrate, and retrofitting logic to justify the spend. Experimenting to learn means designing for signal clarity, setting constraints that make failure informative, and being transparent about what you don't yet understand.

Hwang's default posture is to test so many small things so often that no single test requires a board deck. "Do people have to know that you're testing a new channel? In an ideal world, they kind of don't," he says. "When you're testing at such a small scale that the amount of spend that's required is fairly inconsequential relative to the overall pie… you shouldn't have to require some level of buy-in to do that."

When bigger bets are necessary—offline buys, new market tests, brand plays—the internal pitch changes. Hwang frames it as portfolio construction: this program is maturing, this one is dying, this one needs to ramp now so we don't have a gap in six months. The story isn't "trust me, this will work." It's "here's the gap we're filling, here's the risk we're mitigating, and here's how we'll know if we're wrong."

The Two Kinds of Trust You Actually Need

Hwang splits trust into two buckets. The first is credibility: do people believe you're shooting straight, that you're not spinning bad news or hiding inconvenient data? The second is competence: do they believe you know what you're doing, that your logical leaps are grounded in something real?

"Folks have to trust that they can believe in what you're saying. As in, you're not lying to them, you're being very transparent, whether good or bad, you're shooting it straight," he explains. "But the second piece of it is they also have to trust that you understand what you're doing as well. And I think sometimes that can be a little bit harder to develop."

Helping folks get in on that ground basis, really setting the stage for initial belief and buy-in, is a key piece. Then over time, as that trust continues to develop, the bar decreases, because you've built up a reputation that people can rely on.

— Jordan Hwang

This is where Hwang's approach to disagree-and-commit diverges from the usual interpretation. He doesn't see it as "I heard you, now fall in line." He sees it as a contract: you flagged a risk, I'm acknowledging it, and here's exactly how we're going to monitor it and what we'll do if it materializes. "Disagree and commit is effectively like, I hear you, it's a risk. You don't think it's worth the risk. I think it's still worth the risk. So, we're going to go ahead and do this. But we are going to be very eyes wide open about these risks and continuing to monitor them so that we can deal with them appropriately if they do come up in a meaningful way."

That kind of transparency compounds. Over time, the bar for approval drops because you've shown you can call your own mistakes before anyone else does.

Measuring What Matters When the Tags Don't Line Up

Hwang has little patience for attribution purists. Not because measurement doesn't matter, but because waiting for perfect tracking is a trap. He's worked with business partners who demand one-to-one correlation between spend and pipeline, and he's learned to vet for mindset early. "Are they folks who are sitting there saying, no, I have to see it show up exactly like this, exactly tagged as this and all those other things? Or are they able to take a leap of faith in some respects and like they'll trust like the data or they'll trust not the data, but like they'll trust like the inferences, they'll trust like the mixed media modeling?"

The question isn't whether attribution exists. It's whether your partners are willing to act on directional signals while you tighten measurement in parallel. "I understand that it's not a direct correlation one-to-one, but when we put money on this side, things go up on this side, there must be something there. Do they take that or not?"

We've all probably worked with business partners who want the most literal version of this, which can make it painstaking, versus ones who are able to buy into the logic, buy into that train of thought, and say: yeah, I see it. Let's figure out how we can better measure this and make it more predictable.

— Jordan Hwang

At OpenPhone, where integrations with Salesforce, Slack, and HubSpot are core to the product story, this philosophy shows up in how Hwang thinks about ecosystem positioning. He references his time at Gusto, where payroll sat third or fourth in a company's onboarding sequence—after LLC registration, legal counsel, and banking. Knowing your place in line tells you who can feed you customers and who you can feed to. The best partnerships, Hwang says, create wins across three boards: yours, your partner's, and your customer's.

Change Management Without the Mutiny

When Hwang joined OpenPhone, the company already had product-market fit. The challenge wasn't finding signal—it was refining messaging and evolving product marketing without breaking what was working. "It's always tricky," he admits. "Like whenever you inherit a team, it's a little bit like how hard do you push? Do you push this early? Do you push this late?"

He's seen two failure modes. The first: leaders who impose their preferred playbook wholesale, ignoring context and earning resentment. The second: leaders too cautious to make any meaningful change, who leave feeling ineffective and unfulfilled. "There's something to be said for even if we failed at something but we did our best and we've tried really hard and we worked really hard at it, we should still feel pretty good about the effort that we put in. But if you didn't want to rock the boat and you didn't put into that effort, and then at the same time you still failed, you're going to feel pretty bad about that entire experience."

Hwang's middle path: align on current reality, identify opportunity areas, and push on them as quickly as team readiness allows. That means listening to the people who built what you inherited, understanding the nuance in the existing system, and making changes that compound rather than reset. At OpenPhone, that's meant leaning into integrations as a product story, treating Slack and Salesforce adoption as distribution channels, and framing the phone system not as a replacement tool but as connective tissue in workflows people already depend on.

Growth isn't a curve. It's a portfolio of bets, constantly rebalanced, where the art is knowing when to double down and when to let something die before anyone notices it's gone.
