Growth.Talent
Guest Profile · ai · product-management · experimentation

Marily Nika on Why AI Is Not the Product (The Experience Is)

Before generative AI was cool, Marily Nika was building machine learning into video games. Now she's teaching growth teams that personalization matters more than the model—and that experimentation beats data volume every time.

Apr 11, 2026 | 7 min read | By Growth.Talent

The Experience Is the Product, Not the Algorithm

Most companies are getting AI backwards. They're rushing to become "AI companies," slapping models onto products like merit badges, hoping the technology itself will unlock growth. Marily Nika has watched this play out from inside Google, where she leads generative AI product development, and her diagnosis is blunt: "AI is not the product. AI tools, AI models, AI agents, these are not the product. The product is the experience."

This isn't semantic hairsplitting. It's the difference between building features and building value. Nika points to Spotify as a north-star example: when the app tells you that "people that listen to the songs you like are adding these two new songs to their playlist," it's not showing off its recommendation engine. It's shrinking the time it takes you to discover music you'll love. The AI is invisible. The convenience is everything.

The adjacent people and demographic angle that it can add, I think, is phenomenal. For me, it improves convenience. It reduces the amount of time I spend discovering something new.

— Marily Nika

Nika has been living in AI since most people thought it was just video game opponents. She grew up in Greece obsessed with computer games, did her PhD in computing science with a focus on machine learning, and spent years building models before ChatGPT made everyone an armchair AI strategist. That decade-plus head start gives her an unusual vantage point: she remembers when machine learning was just probability trees and pattern recognition, which makes her less dazzled by the current hype cycle and more focused on what actually moves metrics.

The trap she sees most often is teams thinking the model is the moat. It's not. Personalized onboarding, predictive targeting, context-aware customer support—these are the growth levers. The model is just the engine under the hood.

Three Metric Buckets (And Most Teams Only Track One)

When people ask Nika how to measure success in AI products, she doesn't point them to accuracy scores or loss functions. She maps out three distinct buckets, and most growth teams are only watching one of them.

Bucket one: AI proxy metrics. Accuracy, false accepts, false rejects—the stuff data scientists obsess over. Bucket two: classic product metrics. Retention, engagement, growth, the North Star. Bucket three, the one almost everyone forgets: system health. "With AI, we don't really know how the system is going to behave if you get millions of people in there trying to use it," Nika explains. "It requires immense computational power to be able to provide an experience that's AI enabled."

Sometimes you may have this amazing product where, if you change the model slightly, everything will crash.

— Marily Nika

This three-tier framework is what Nika teaches in the AI Product Academy she founded, because she realized product managers and growth leads were being handed AI tools without understanding how they break. A model can be 95% accurate in testing and still crater retention if the latency spikes or the user experience feels opaque and scary. Conversely, a slightly less accurate model that loads instantly and explains its suggestions can demolish a technically superior competitor.
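The three-bucket framework can be made concrete with a short sketch. The class names, metric choices, and thresholds below are illustrative assumptions, not Nika's; the point is that a launch review checks all three buckets, so a 95%-accurate model can still be blocked on system health alone.

```python
from dataclasses import dataclass

@dataclass
class AIMetricSnapshot:
    # Bucket 1: AI proxy metrics (what data scientists watch)
    accuracy: float
    false_accept_rate: float
    # Bucket 2: classic product metrics
    day7_retention: float
    # Bucket 3: system health (the bucket almost everyone forgets)
    p95_latency_ms: float
    error_rate: float

def launch_blockers(m: AIMetricSnapshot) -> list[str]:
    """Flag problems across all three buckets; a technically great
    model can still fail on product or system-health grounds."""
    issues = []
    if m.accuracy < 0.90:
        issues.append("model accuracy below bar")
    if m.day7_retention < 0.25:
        issues.append("day-7 retention below baseline")
    if m.p95_latency_ms > 1500:
        issues.append("p95 latency too high for an interactive experience")
    if m.error_rate > 0.01:
        issues.append("system error rate above SLO")
    return issues

# A 95%-accurate model that is slow still gets blocked:
snap = AIMetricSnapshot(accuracy=0.95, false_accept_rate=0.02,
                        day7_retention=0.30, p95_latency_ms=2400,
                        error_rate=0.002)
print(launch_blockers(snap))  # ['p95 latency too high for an interactive experience']
```

In a real product the thresholds would come from baselines and SLOs, but the shape of the check is the framework itself: no single bucket gets a veto-free pass.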

For growth specifically, Nika anchors everything to the North Star metric—the single number that best represents value delivery. Then she uses tools like Amplitude, Mixpanel, or Google Analytics to see where users drop off. The hypothesis comes next: "We are going to use AI to personalize onboarding for growth. We expect a 10% increase in day 7 retention." Then you A/B test it with platforms like Optimizely. The experimental culture is non-negotiable. AI's probabilistic nature means you can't predict outcomes—you can only test them.
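The hypothesize-then-test loop Nika describes ("we expect a 10% increase in day 7 retention") ultimately reduces to comparing retention rates between two arms. A minimal stdlib sketch of that comparison, using a two-proportion z-test, might look like this; the sample counts are invented, and in practice a platform like Optimizely or a stats library would handle the math:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in day-7 retention between
    control (a) and the AI-personalized onboarding variant (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal-approximation p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical experiment: 30.0% vs 34.5% day-7 retention
lift, z, p = two_proportion_ztest(conv_a=300, n_a=1000, conv_b=345, n_b=1000)
print(f"lift={lift:.3f}  z={z:.2f}  p={p:.4f}")
```

The discipline is in the ordering: the expected lift is written down before the test runs, so the result either validates the hypothesis or kills it, rather than being rationalized after the fact.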

Start With No Data (And a Narrow Hypothesis)

Early-stage startups often freeze when they think about AI, assuming they need massive datasets to train custom models. Nika's advice flips that script entirely. "You don't need data. You have to validate whether this is something that's going to add value. Just hardcode the data. Synthesize the data or use pre-trained models."

She tells the story of a vacation planning startup that wanted to generate personalized travel agendas for families and couples but had no user data yet. Her response: stop worrying about the data pipeline and start worrying about the hypothesis. Does auto-generated itinerary planning actually improve sign-up conversion or engagement? Test that first. Use GPT-4. Use Google AutoML. Use Hugging Face Hub. Use Snorkel AI for programmatic data labeling or Synthesis AI for synthetic data. The model doesn't need to be yours—it just needs to prove the concept.

Start with a very narrow use case. Start with a very narrow hypothesis. Solve this tiny problem that's critical, like improving sign-up conversion or whatever, and then see if this is for you, and then the rest will come.

— Marily Nika

This is where startups have a structural advantage over incumbents. They're not bogged down by legacy infrastructure or risk-averse committees. They can spin up a pilot in a week, hardcode the edge cases, and see if users care. Nika is emphatic: the barrier to entry isn't data volume, it's willingness to experiment. The pre-trained models are already good enough to test most hypotheses. What's missing is the courage to ship something imperfect and learn from it.
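Nika's "just hardcode the data" advice is simpler than it sounds. For the vacation-planning example, a pilot could serve canned itineraries behind a 50/50 split and measure conversion, with no model or pipeline at all. The traveler types, itineraries, and split logic below are invented for illustration:

```python
# Hardcoded itineraries stand in for a trained model so the
# hypothesis ("auto-generated itineraries improve sign-up
# conversion") can be tested before any data exists.
CANNED_ITINERARIES = {
    "family": ["Day 1: aquarium and beach", "Day 2: theme park", "Day 3: picnic and museum"],
    "couple": ["Day 1: wine tasting", "Day 2: coastal hike", "Day 3: spa and dinner"],
}

def generate_itinerary(traveler_type: str) -> list[str]:
    """Return a canned itinerary; a pre-trained model (e.g. an LLM)
    could replace this lookup later without changing the experiment."""
    return CANNED_ITINERARIES.get(traveler_type, CANNED_ITINERARIES["family"])

def assign_variant(user_id: int) -> str:
    """Deterministic 50/50 split so a user always sees the same arm."""
    return "itinerary" if user_id % 2 == 0 else "control"

user_id = 42
if assign_variant(user_id) == "itinerary":
    plan = generate_itinerary("couple")
    print(plan[0])  # Day 1: wine tasting
```

If the itinerary arm lifts sign-up conversion, the lookup table earns its replacement by a real model; if it doesn't, the team just saved months of pipeline work.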

She even points to resources most founders don't know exist, like Google Dataset Search, which indexes public datasets across the web. The data is out there. The question is whether you're asking the right question.

Why 2025 Is the Year of Experimentation (Not Optimization)

Nika posted on LinkedIn that 2025 would be "the year of experimentation," and she means it literally. Not optimization. Not scale. Experimentation. The distinction matters because AI is so good at micro-optimization—clicks, immediate conversions, next-session engagement—that it can seduce teams into chasing short-term lifts that don't ladder up to long-term growth.

She recently trained a company that had millions of rows of messy data. They uploaded it to an AI tool, which organized it, generated insights, created charts, and handed it all back. And then they froze. "They just didn't know how to engage with it. They didn't know what to do with it. And it became this scary, weird thing they didn't want to touch."

You kind of need to have the appetite, if you will, to be a bit more open to changing the way a company works, at the company level. And that's not comfortable for all domains, right?

— Marily Nika

The bottleneck wasn't the AI. It was organizational readiness. This is why Nika emphasizes "AI awareness" across teams—not just for product managers, but for growth, marketing, ops, everyone. If the company doesn't have an experimental culture, the best model in the world will gather dust. But if the team is trained to hypothesize, validate, iterate, then AI becomes a force multiplier.

She's also candid about the limits. AI won't tell you what happens to churn nine months after you tweak your paywall. It won't replace the judgment that comes from years of seeing how users behave across different products. "If you wouldn't worry about this, then AI would take over our jobs," she says. "That's why AI is a tool and you can get inspiration from it and get the help, but it will never replace us and our experience and what we've seen in other companies."

Personalization Beats Scale (And Other Heretical Truths)

Nika sees growth teams obsess over volume—more users, more channels, more content. But her bet is on depth: hyper-personalized experiences that make users feel seen. "When it's like, 'Hey, Marily, we think you would like this,' is by far incomparable to any other strategy," she says. And when the message explains why—based on your listening history, your browsing patterns, your cohort—it stops feeling like marketing and starts feeling like service.

This isn't just a nice-to-have. Nika argues it's the unlock for retention, which is the unlock for growth. "People need to have this experimental culture in their companies where you hypothesize a lot and validate." Personalized onboarding. Personalized targeting. Personalized experimentation. Every one of these reduces friction, accelerates time to value, and compounds over the user lifecycle.

She published a book right before DeepSeek and OpenAI Operator launched, which felt like cosmic irony. But she's at peace with it. "You gotta say, I'm drawing the line here. This adds value. And then I need to, if anything, add more material eventually." The fundamentals don't change. The tools do. And the teams that win are the ones that stay grounded in the former while experimenting relentlessly with the latter.

Nika's journey from video game AI to Google's GenAI product lead wasn't linear, but it gave her something rare: a decade of pattern recognition about what works when the technology is new and the playbook doesn't exist yet. Her advice isn't to chase the shiniest model. It's to ask what experience you want to deliver, then use AI to get there faster, cheaper, and more personally than anyone else.
