The Studio × Tier Framework: Where Your Studio Actually Sits on the AI Map
If you're a mid-sized studio with a real community, you're in the danger zone
TL;DR
The studios asking “how much AI can we use?” are answering the wrong question. That gives a productivity answer, not a strategy.
Two variables determine your real exposure: studio size and game tier. Different cells, different risk.
The production floor collapsed. The discoverability signal collapsed with it. Steam AI disclosures up 681% in 18 months.
The legal exposure is not priced. Being a downstream user does not sever the chain.
The studios that survive are not the ones adopting AI fastest. Studio size is your exposure. Game tier is what’s at stake.
This piece was developed in collaboration with Sebastian Cardoso, whose work on game operations and production practitioner realities I read every week. His companion piece answers the “how” question. This one answers the question before that.
It also builds on the Story ROI tier framework from The Story of Game series: Tier 1 games are pure systems (Tetris, League of Legends); Tier 2 treats story as optional texture; Tier 3 puts narrative at the centre of the experience. AI interacts with each tier very differently, and that distinction does a lot of work in what follows.
Last night, a little after eleven thirty, a Discord ping came in from a colleague.
“Another studio’s server. Lel. Entirely dead but everyone just picked on the AI thing.”
He’d screenshotted a thread from another studio’s channel. A creator had posted content for feedback. Within fifteen minutes, the channel, otherwise dormant for weeks, had come alive. Not to engage with the work. To accuse it of being AI.
“He sound like a normal guy but maybe the people in tiktok gonna say its ai.”
“I honestly think it’s AI. It sounds flat and emotionless, and the rhythm is consistent.”
“Yes it feels like a generated voice over.”
The studio hadn’t disclosed using AI. It might not have used AI. That wasn’t the point.
The point was that a community barely paying attention came roaring back to life specifically to flag the suspicion. The trust account had nothing in it. The first major upload became the audit.
I texted back: “tell him to put #MadewithHumans in the hashtag.”
Not a joke. A defensive posture some studios are adopting on principle: a public commitment, before any controversy, that human craft will be visible and disclosed. Also a tell. A studio that needs the hashtag is a studio that has noticed the audit is already running.
The question I keep getting now, on calls across four continents, is some version of “how much AI can we realistically use in our pipeline?”
The answer is roughly the same for everyone.
The harder question, the one that actually changes outcomes, is what this studio can do, given what it has built and what it stands to lose.
The Floor Dropped. The Signal Dropped With It
At certain studios, AI is generating 60–70% of environmental assets.
The BCG Video Gaming Report 2026 estimates roughly 50% of studios are now using AI.
About one in five 2025 releases disclosed generative AI usage. That figure covers only player-facing content; pipeline tools are mostly undisclosed.
The production floor collapsed. That part is settled.
The second-order consequence is what the productivity case misses. When the floor drops, volume explodes, and the signal-to-noise ratio on every storefront collapses with it.
Steam saw 20,004 new game releases in 2025, up 42% over 2023.
Titles disclosing generative AI usage surged 681% between early 2024 and mid-2025, from roughly 1,000 to 7,818.
Steam Next Fest 2026 made the downstream problem visible in real time, with forums flooded with players identifying “AI slop” at scale.
An ex-Valve developer described Steam’s discovery architecture as “miles ahead of every other media platform,” then added: “That’s like saying they’re the tallest hobbit.” The platform is structurally incapable of curating against AI-scale volume.
The studios that survive this environment are not the ones with the most efficient output. They’re the ones with genuine community capital: established audiences who seek them out regardless of what else is on the shelf.
I covered the audience signal side of this: the pencilslop reversal, the Fortnite AI slop revolt, the 100,000-person creativity study, all in The Revenge of Pencilslop. This piece is upstream of that: the strategic framework that determines which studios are actually exposed.
Automate, Augment, Protect
Before the framework, one quick anchor. I worked through this in detail in Where AI Belongs, so I won’t rehash. The shorthand: AI belongs in three different relationships to your studio’s work.
Automate the mechanical layer: the grunt work nobody loved, where the skill was real but the creative ceiling was never there.
Augment the judgment-intensive layer: concept iteration, narrative direction, system design, where it surfaces options for human decision-makers to evaluate.
Protect the irreplaceable layer: the specific human noticing something worth making, the authorial conviction behind a creative decision. That layer is not a process. It cannot be automated.
The capital allocation question is which layer of your studio sits in which category, and whether you’re investing accordingly.
The creative ceiling is protected.
The mechanical middle is not.
That’s where employment concentration has historically been heaviest, and the people who built careers there deserve better than what most studios have offered during this transition. I’ll come back to that.
Studio × Tier
The Studio × Tier framework runs on two intersecting variables:
How big your studio is
What kind of game you make
The cells of the matrix behave differently. Find yours!
1. Small studios (1–10 people): AI is a survival technology.
The historical barrier to entry was never creative vision; it was production capacity. A two-person team lacked the headcount to ship a polished, competitive experience on any reasonable timeline. They could prototype, but they couldn’t finish.
AI has materially changed that. The same two-person team can now ship work that, three years ago, would have required eight to twelve people. The output gap between micro-teams and mid-sized studios is closing fast.
Reputational risk at this scale is low: there’s no community trust account to drain, because the studio hasn’t built one yet. The upside is the difference between shipping the game and not shipping at all.
Where you sit on the map:
Small + pure systems game (puzzle, arcade, mobile): green light. Use AI aggressively. Audience expectation is low on craft signal, the loop is the product. Your competitive risk is not moving fast enough.
Small + optional story game (most indie titles, narrative-light RPGs): green light, with one diagnostic to run first. Ask whether story is your hook or your texture. If hook, AI accelerates. If texture, AI just makes you ship faster.
Small + prestige narrative: rare cell. If you’re here, you’re chasing a Disco Elysium or Pentiment level of authorial signal. The same rules as mid-sized prestige apply. You don’t have a community trust account yet, but you also can’t survive a controversy that costs you the one you’re trying to build.
2. Mid-sized studios (11–100 people): This is where the decision gets genuinely complex.
You have enough history to have cultivated a community but not enough margin to absorb a controversy. This is the most exposed scale on the map.
The danger zone is what I call the Tier 2 trap.
Studios spend on optional story texture that was never actually driving engagement. Then AI makes the economics undeniable, and they discover they were never differentiating with it in the first place. They thought story was their moat. AI revealed it never was.
The diagnostic question: was the narrative investment ever load-bearing? If story genuinely is the differentiator, AI accelerates execution. If it isn’t, AI just runs the diagnostic the studio was avoiding.
Where you sit on the map:
Mid-sized + pure systems game (mobile mid-core, competitive multiplayer, casual): green light on automation. Your audience cares about the loop, not the craft signal of how the assets were made. Disclose where required, otherwise focus.
Mid-sized + optional story game: the most populated cell on the map. Also the riskiest. Run the load-bearing diagnostic before you automate anything narrative. If the answer is “the loop works without story,” automate freely. Stop pretending narrative was your moat. If the answer is “players come for the story, not the loop,” AI is augmentation only. Every shipped beat needs human authorial signal.
Mid-sized + prestige narrative (handcrafted single-player, story-central RPGs, choice-driven games): reputation is the IP. Every undisclosed AI asset is a withdrawal from the trust account that holds your entire valuation. The question is not “can we use AI?” It is “what have we earned the right to use it for?”
3. Large studios and publishers (100+): AI as infrastructure, not tool.
These organisations are encountering AI as infrastructure rather than tool. Epic and Unity, the two engine companies that nearly every studio builds on, have embedded AI capabilities directly into the development tools themselves. Publishers with enough contractual leverage over their studios are beginning to mandate AI adoption through workflow requirements.
There is a real difference between AI as a tool studios choose to use and AI as infrastructure that platforms and publishers impose. The first is a strategic decision; the second is a supply chain reality.
Where you sit on the map:
Large + pure systems game (free-to-play live ops, casual giants): green light on automation, with one caveat. Your scale makes you a regulatory target. Whatever you automate becomes the industry’s disclosure baseline by default. Build governance now, not when audited.
Large + optional story game: medium-risk. The standardisation pressure on contracted studios is real, and your suppliers are the most exposed cell on the map. Treat your supply chain disclosure as your own.
Large + prestige narrative: the most severe exposure on the map. A single undisclosed AI asset in a $200M flagship title, discovered by a community primed to look, is the kind of controversy that reshapes valuations overnight. Disclosure is not damage control. It is product strategy.
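The nine cells above behave like a plain lookup table: size band and game tier in, risk posture and guidance out. A minimal sketch in Python, with the risk labels and guidance strings paraphrased from this section (the function and variable names are mine, purely illustrative, not an industry standard):

```python
# Illustrative sketch of the Studio x Tier pressure map as a lookup table.
# Cell labels and guidance are paraphrased from the article; this is a
# reading aid, not an official tool or scoring model.

RISK_MAP = {
    # (size_band, game_tier): (risk, guidance)
    ("small", "systems"):  ("low",    "Green light: use AI aggressively; the risk is not moving."),
    ("small", "optional"): ("low",    "Green light, after the hook-or-texture diagnostic."),
    ("small", "prestige"): ("medium", "Rare cell: prestige rules apply before the community exists."),
    ("mid",   "systems"):  ("low",    "Green light on automation; disclose where required."),
    ("mid",   "optional"): ("high",   "Most populated, riskiest cell: run the load-bearing diagnostic first."),
    ("mid",   "prestige"): ("high",   "Reputation is the IP: augmentation only, full disclosure."),
    ("large", "systems"):  ("medium", "Automate, but build governance: your scale sets the baseline."),
    ("large", "optional"): ("medium", "Treat supply chain disclosure as your own."),
    ("large", "prestige"): ("severe", "One undisclosed asset can reshape the valuation overnight."),
}

def size_band(headcount: int) -> str:
    """Map headcount to the article's three size bands."""
    if headcount <= 10:
        return "small"
    if headcount <= 100:
        return "mid"
    return "large"

def find_your_cell(headcount: int, tier: str) -> tuple:
    """Return (risk, guidance) for a studio.

    tier is one of: 'systems', 'optional', 'prestige'.
    """
    return RISK_MAP[(size_band(headcount), tier)]
```

Running `find_your_cell(45, "optional")` lands you in the mid-sized optional-story cell, the one the article calls the most populated and riskiest place on the map.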
The Trap Inside the Map
A reader inside a mid-sized prestige studio could finish this section thinking, “we just need to disclose properly and we’re fine.” That misreads the framework.
Disclosure is not the protection. The trust account that exists before the disclosure is the protection.
For a studio with a credible community track record, disclosure is a badge. Here is what we did, here is why, here is what stayed human. The community treats it as additional information about a studio they already trust.
For a studio without that track record, the same disclosure becomes a liability. They used AI on this and didn’t tell us until they had to. The community treats it as confirmation of suspicion. Same disclosure, opposite outcome, because the trust account read it differently.
The studios safest from AI controversy are not the ones with the cleanest disclosure language. They are the ones who built the community relationship that makes disclosure land as transparency rather than confession. That work happens years before the controversy arrives.
This is a pressure map, not a rulebook. It tells you where the risk is concentrated, not what you are permitted to do.
The Legal Time Bomb That Isn’t Priced
Copyright infringement cases against AI companies more than doubled in 2025, growing from approximately 30 to over 60 active lawsuits.
The U.S. Copyright Office’s May 2025 Part 3 AI report concluded that using copyrighted materials for AI model development may constitute infringement on its face, and explicitly warned that “transformative” arguments are not inherently valid.
The New York Times lawsuit against OpenAI carries statutory damage exposure ranging from $7.5 billion to $1.5 trillion if willful infringement is found.
Many AI tools currently in production pipelines were trained on datasets that include improperly sourced visual art, music, and code. The legal chain from tool provider to studio is not severed by the fact that the studio is a downstream user.
The practical advice is unambiguous: favour AI vendors offering clear training data provenance, opt-in and opt-out mechanisms, and indemnification clauses. Capture human authorship through edit thresholds and documented artist contributions. These are balance sheet positions most studios have not yet recognised.
The UK published its Copyright and Artificial Intelligence report in April 2026. The pressure is building across multiple jurisdictions simultaneously. Studios treating this as someone else’s legal problem are reading the timeline wrong.
45,000 Jobs and the Question Nobody Is Answering
The layoff wave that began in 2022 and peaked in January 2024 has cost an estimated 45,000 jobs through July 2025. The peak quarter, Q1 2024, saw 8,619 layoffs in three months.
The rate has since declined. But the acute phase passing isn’t the same as the structural phase ending. When AI absorbs a function, that headcount doesn’t come back.
The Game Developers Conference 2026 State of the Industry survey found that 52% of game developers now view generative AI as having a negative impact on the industry, up from 18% two years earlier.
The most critical voices: visual and technical artists, designers, narrative teams, programmers. The disciplines whose careers were built on the skills now being automated.
These were people who made career bets in good faith. They went to school for animation. They moved cities for QA jobs. They trusted that the industry would still need them in ten years.
The automation of those roles at this speed, without reskilling infrastructure, without severance frameworks designed for structural displacement, without any industry-wide retraining commitment, is a failure of institutional responsibility.
Xbox made the failure vivid. A senior leader, in a now-deleted LinkedIn post, encouraged laid-off workers to use ChatGPT and Microsoft Copilot for emotional resilience while the company was announcing its $80 billion AI infrastructure commitment.
The tools that displaced you will now counsel you about it.
That is not a communication failure. It is a governance failure.
I have also sat in rooms where AI adoption decisions were made. I know what the productivity calculus looks like when you’re managing costs in a difficult market. But the absence of a reskilling framework is a choice, not a circumstance.
The studios that answer this question:
What do we owe the people whose careers were built on skills we have chosen to automate?
...will build internal culture and external trust that AI-first studios will not have access to. The studios that don’t will have the answer written for them, by regulation, eventually.
The Framework Holds Outside Gaming
Earlier this year I published three pieces on AI in game studios. Pencilslop on audience taste. You Can Pick Up the Cats on disclosure and trust. Where AI Belongs on practitioner placement. Three diagnostics for game studios.
Watching the film industry over the last six months, I have realised something. Hollywood is going through the live version of all three at once.
LA County has lost over 40,000 entertainment jobs since 2022, with production activity at its lowest level since 1995. Gaming has lost 45,000 jobs in roughly the same window. Two creative industries, near-identical scale of disruption. That is one structural shock, not two.
Innovative Dreams is the cleanest practitioner case I have seen of an AI-first creative production company in any industry. Founded by filmmaker Jon Erwin, backed by Amazon Web Services and Luma, the company shot a Ben Kingsley series in a week that would conventionally have taken six.
Erwin’s observation: “The best AI artists are actually people that have been retrained from the industry. Editors, cinematographers, first ADs, directors, costume designers.” Decades of taste, redirected through new tools.
That is Where AI Belongs playing out in film.
The audience signal is the same. The Brutalist spent its awards-season run defending its use of Respeecher AI to refine Adrien Brody’s Hungarian pronunciation.
The disclosure landed late, the conversation ate weeks of press cycles that should have been about the film.
Coca-Cola’s AI Christmas ad was rejected by audiences in November 2024 for the same reason: visibly AI-made, and the brand had not earned permission to use AI on a product that traded on warmth. Pencilslop’s argument exported to film with one substitution.
The disclosure architecture is being built in public. SAG-AFTRA’s 2025 Interactive Media Agreement was ratified after a year-long video game actors’ strike, mandating consent and disclosure for AI digital replicas. That is Cats exported to film. Disclosure is not an ethics question. It is a balance sheet question.
The Studio × Tier framework applies in both industries with one substitution. In gaming, the variables are studio size and game tier. In film, the variables are budget tier and craft signal. A $5M indie horror film sits in a different cell than a $200M studio tentpole. A reality competition format sits in a different cell than a prestige limited series. The cells differ. The logic doesn’t.
This is no longer a gaming story.
The Audience Has Developed Taste
The discoverability flood does not damage challengers and incumbents equally. The AI volume surge amplifies the power of established IP. The biggest networks, the deepest community hooks, the most accumulated trust: they dominate harder. The practical beneficiaries of AI-driven volume flooding are not small studios using AI to compete. They are large studios whose established IP makes them findable regardless of catalogue size.
Roblox paid out $670 million to creators in 2025. Fortnite and Roblox creator ecosystems will collectively distribute over $1.5 billion this year. The platforms with existing community gravity are capturing the value that AI democratisation was supposed to distribute.
Beyond the discoverability asymmetry, the audience is developing taste. A 2025 Scientific Reports study confirmed that the top 10% of human creators still significantly outperform the strongest AI models in storytelling and narrative design, and that the gap at the top is widening. The cultural version of that finding showed up in unexpected places.
Ben Affleck put it precisely at the CNBC Delivering Alpha summit in late 2024: AI can produce excellent imitative verse that sounds Elizabethan. It cannot write Shakespeare.
Human craft is becoming a differentiator at the precise moment the industry is abandoning it. The studios that read this correctly will position specific authorial choices, work that requires a consciousness with actual lived experience, as a premium signal.
The Org Chart Nobody Is Drawing
The pipeline that historically trained people toward senior creative roles is exactly where automation is hitting first.
Junior art positions.
Entry-level QA.
Associate narrative writer roles.
These weren’t just jobs; they were the developmental path that built the judgment the senior roles require.
The scarce and valuable roles will remain judgment roles:
The Art Director whose taste defines what AI generates
The Narrative Director who knows which AI output has soul and which is the median version of soul
The Game Designer who feels that the core loop isn’t working before it can be articulated.
These are senior roles: they require extensive experience to develop, and that experience used to be built inside the very pipelines being automated.
The danger is not that studios will have no human creative capacity in five years.
It is that the pipeline for developing that capacity will have been destroyed, and senior talent will age out without a generation behind them.
New org chart roles like “AI Lead” and “AI Integration Producer” manage tool adoption and quality standards; they are not replacements for the career development infrastructure that built the humans making the judgments those tools require.
BCG projects global gaming revenue reaching $350 billion by 2030. The mechanics of how that revenue gets made are changing faster than the org charts supposed to produce it.
The Reframe
“How much AI can we use?” gets you a productivity answer. Studio × Tier gets you a strategy.
Studio size is your exposure. Game tier is what’s at stake. The map is not a moral constraint. It is a strategic one.
Before you close this tab, find your cell.
If you are a small studio shipping a pure systems game: move fast. Your competitors are. The risk is not moving.
If you are a small studio shipping optional story or prestige work: run the load-bearing diagnostic this week. Not next month.
If you are a mid-sized studio in the optional-story cell, the most populated and riskiest place on the map: ask whether your narrative spend was buying engagement or buying a pitch deck line. The economics are about to make you answer.
If you are a mid-sized prestige studio: your reputation is the product. Disclosure is not enough. Audit the trust account before AI tests it for you.
If you are large and shipping prestige work: every undisclosed asset is an exposure your CFO has not modelled. Build the governance now, before a community primed to look finds the seam.
The studio in that Discord channel last night sits in one of the most exposed cells on the map. Mid-sized. Optional story. Community account thin enough that the audit ran the moment new content posted.
I don’t know yet whether they used AI. Neither does the community that accused them. That is the point. The framework is not about what studios actually did. It is about whether the trust account holds when the audit comes.
Some studios are running the audit on themselves before the community does. The hashtag move is one example. Disclosure as posture, before exposure makes it damage control.
That is the move the framework is pointing at. Not the disclosure itself. The decision to audit yourself before the community shows up to do it.
Know which cell you are in. Know what you have actually been building. Know what the trust account holds, and what it costs to liquidate.
Then act accordingly.
Abbas Saleem Khan is Principal Consultant at Llama & Griffin, working with game studios across six continents. He writes The Pattern Recognition: gaming industry intelligence 12 to 24 months before it becomes consensus.