Growth.Talent
Episode Insight · ai · content · team-building

Justin Kistner on Why AI Is Your New Junior Team Member, Not Your Boss

CopyClub.ai founder Justin Kistner explains why most growth leaders prompt AI wrong—and how building a context library turns ChatGPT into your best junior hire.

Apr 11, 2026 · 4 min read · By Growth.Talent

Stop Talking to AI Like It's a Senior Colleague

Most growth leaders fail at AI because they prompt like they're delegating to a VP. They type "create a content calendar for me" and expect magic.

Justin Kistner has a better mental model: treat AI like a smart new hire fresh out of college. Someone who needs context, specificity, and clear direction—not vague requests that assume years of company knowledge.

If you're used to talking to senior colleagues who are highly competent, you're probably bad at prompting because you would say things like, create a content calendar for me. You have to act like this is somebody fresh outta college.

— Justin Kistner

That shift unlocks everything. When Kistner uploaded a product-market fit survey CSV into ChatGPT for the first time, he didn't just ask for trends. He told it to analyze like Sean Ellis would—filtering only responses from users who'd be "very disappointed" without the product. The output was instantly actionable.
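That Sean Ellis filter is simple enough to reproduce yourself. Here's a rough sketch in Python; the column names and sample rows are invented for illustration, not taken from Kistner's actual survey:

```python
import csv
import io

# Hypothetical survey export: one row per respondent, with the classic
# Sean Ellis question "How would you feel if you could no longer use
# the product?" stored in a 'disappointment' column.
SAMPLE_CSV = """respondent,disappointment,biggest_benefit
1,Very disappointed,Saves me hours on reporting
2,Somewhat disappointed,Nice dashboards
3,Very disappointed,Automated alerts
4,Not disappointed,
5,Very disappointed,Saves me hours on reporting
"""

def pmf_summary(csv_text: str):
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # Filter to the segment Sean Ellis cares about: users who would be
    # "very disappointed" without the product.
    very = [r for r in rows if r["disappointment"] == "Very disappointed"]
    score = len(very) / len(rows)  # 40%+ is the usual PMF benchmark
    benefits = [r["biggest_benefit"] for r in very if r["biggest_benefit"]]
    return score, benefits

score, benefits = pmf_summary(SAMPLE_CSV)
print(f"PMF score: {score:.0%}")
print("Benefits cited by that segment:", benefits)
```

The point of the filter is the second list: the benefits cited only by the "very disappointed" segment are the ones worth amplifying in positioning and copy.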

Build a Context Library or Keep Repeating Yourself

AI has amnesia. Every conversation starts from zero unless you feed it the right information upfront.

Kistner's solution is the context library: a set of Markdown files containing your company profile, ideal customer pain points, competitor landscape, and product descriptions. Before asking AI to generate ideas, draft landing pages, or critique your homepage, you load that context in.

AI is super smart, but it always has amnesia. So you have to load up into its RAM again. Who are we? What are we doing? Who's my customer?

— Justin Kistner

Start simple. Kistner recommends a single Google Doc with all your foundational company info. Once you feel the pain of copying and pasting it repeatedly, graduate to tools like Cursor or Zed—developer IDEs that let you @mention files directly into AI chats. Your context library becomes instantly reusable across hundreds of prompts.
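The mechanics are plain file concatenation. As a minimal sketch (the file names and company details below are placeholders), this is essentially what Cursor or Zed's @mentions automate for you:

```python
from pathlib import Path
import tempfile

# Hypothetical context library: small Markdown files describing the company.
FILES = {
    "company.md": "# Company\nAcme helps SMBs automate invoicing.",
    "icp.md": "# Ideal customer\nFinance leads at 10-50 person firms.",
    "competitors.md": "# Competitors\nBillBot, InvoiceNinja.",
}

def build_prompt(context_dir: Path, task: str) -> str:
    """Prepend every Markdown file in the library to the task, so each
    fresh AI conversation starts with the same company context."""
    sections = [p.read_text() for p in sorted(context_dir.glob("*.md"))]
    return "\n\n".join(sections + [f"## Task\n{task}"])

# Demo with a throwaway directory standing in for the real library.
with tempfile.TemporaryDirectory() as d:
    ctx = Path(d)
    for name, text in FILES.items():
        (ctx / name).write_text(text)
    prompt = build_prompt(ctx, "Draft three landing page headline tests.")

print(prompt)
```

Because the library lives in version-controlled files rather than in your clipboard, the same context is reusable across hundreds of prompts and stays in sync as the company changes.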

For growth teams, this means you can take consolidated product-market fit insights and turn them into job-to-be-done statements, landing page copy tests, or roadmap prioritization—all without re-explaining your business every single time.

Ask Like a PhD Student, Not a Googler

When Kistner subscribed to OpenAI's $200/month o1 pro tier, he didn't just throw random questions at it. He studied how early users were prompting—PhD-level, multi-layered questions—and reverse-engineered the structure.

The problem? Most people ask o1 pro to lift 50 pounds when it can handle 2,000. They use frontier reasoning models for basic tasks and wonder why they're not impressed.

Kistner's trick: ask AI how you should ask the question. For example, instead of "give me 10 activation test ideas," try: "I have an activation challenge in [context]. What additional insights could I give you to improve the quality of test ideas?" Then iterate.

He also avoids unspecified nouns—terms like "North Star metric" or "my project" that sound clear but mean different things to different people. Instead, anchor prompts in specific expertise: "If Andrew Chen were to look at this viral loop, how would he recommend I improve it?"
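Both patterns are just string templates. A minimal sketch (the wording is illustrative, not a fixed formula from the episode):

```python
# Two prompt patterns, as plain string templates.

def meta_prompt(challenge: str, context: str) -> str:
    """Pattern 1: ask the model how to ask, before asking for ideas."""
    return (
        f"I have {challenge} in the following context: {context}. "
        "What additional insights could I give you to improve the "
        "quality of test ideas?"
    )

def expert_prompt(expert: str, artifact: str) -> str:
    """Pattern 2: anchor the request in named expertise instead of
    unspecified nouns."""
    return (
        f"If {expert} were to look at this {artifact}, "
        "how would they recommend I improve it?"
    )

p1 = meta_prompt("an activation challenge",
                 "a B2B onboarding flow with 40% day-1 drop-off")
p2 = expert_prompt("Andrew Chen", "viral loop")
print(p1)
print(p2)
```

Templating the patterns keeps the iteration loop cheap: you answer the model's follow-up questions, fold them into the context argument, and ask again.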

Local LLMs and the Data Privacy Gamble

Not all AI tasks belong in the cloud. Kistner is now running local LLMs—like Meta's Llama 3.1 405B model—for sensitive workflows.

Why? Consumer ChatGPT accounts are opted in to future model training by default. Even if you disable that, your prompts may still be subject to human review or sit on servers vulnerable to breaches. For startups handling customer survey data, competitive intelligence, or pre-launch financials, that's a real risk.

Companies need to be mindful of what they're doing. There's publicly available information—who cares? Then you have stuff that might be private but not necessarily sensitive. And then there's stuff like your quarterly results before it hits the market.

— Justin Kistner

Smaller 7B-parameter models can run on a laptop and handle redaction tasks. Frontier-scale models require rack-mounted servers and a $10K–$20K investment—but for companies processing proprietary data at scale, that's cheaper than a single hire.

Kistner's interim workaround: manually redact company names and personally identifiable info before uploading to cloud LLMs. Ask all questions as "Company X." You know what it is; the system doesn't.
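That workaround is easy to script. A minimal redaction pass might look like this; the alias map and email regex are illustrative, and a real pipeline would need broader PII coverage (names, addresses, account IDs):

```python
import re

# Hypothetical alias map: real names you keep locally, placeholders you
# send to the cloud LLM.
ALIASES = {"Acme Corp": "Company X", "BillBot": "Company Y"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text: str) -> str:
    """Swap sensitive names for aliases and strip email addresses
    before the text leaves your machine."""
    for real, alias in ALIASES.items():
        text = text.replace(real, alias)
    return EMAIL_RE.sub("[EMAIL]", text)

raw = "Acme Corp churn notes: jane@acme.com says BillBot undercut us."
clean = redact(raw)
print(clean)
```

You know what "Company X" refers to; the system doesn't. Keeping the alias map local also lets you translate the model's answers back afterward.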

Startups Have the AI Advantage—If They Act on It

Big companies have budget and data. Startups have motivation.

Kistner has seen it firsthand at CopyClub.ai. When companies enroll their teams in AI training, employees who are simply told to "go learn AI" rarely follow through. Startups, meanwhile, treat AI proficiency like survival.

His advice: start with tasks you dread. Processing survey results. Transcribing and interrogating sales call transcripts. Turning recorded customer conversations into proposal drafts. These are B+ quality wins that free mental energy and compress timelines.

For Kistner, that meant pivoting from his previous startup (UpFocus) to launching CopySub—a subscription content service—and scaling to $18K MRR in months by blending human-in-the-loop editing with AI speed. The competitive advantage wasn't the AI itself. It was knowing how to use it without sacrificing quality.

The growth leaders who win in the next 12 months won't be the ones with the fanciest tools. They'll be the ones who learned to manage AI like a team—context libraries loaded, prompts specific, and outputs ruthlessly evaluated.

Source Episode

How Growth Professionals Should Use AI to Win

Breakout Growth Podcast · 61 min
