The AI Cat Test: What Game Studios Get Wrong About Community Trust
Three AI controversies. Three different handling decisions. One pattern that every studio building right now needs to understand.
TL;DR
Crimson Desert launched March 19 and had AI assets spotted within 24 hours. The same day, the top Reddit post in the community was: “I don’t care what anyone says. You can pick up the cats, so therefore game is good.”
Three studios hit AI controversies in a six-month window. All three survived. They are also all prestige narrative games with deep community investment built before the controversy arrived. That is not a coincidence.
What communities punish is not the use of AI tools. It is the failure of honesty. Players are now literate enough to find AI assets in a shipped product within hours; they are still willing to extend goodwill to a studio that talks to them like adults.
The studios most exposed are not at the top or the bottom. The middle tier, story-motivated players with thin trust reserves, is where the apology template fails and the pencilslop movement does its worst damage.
I had the Spring Sale data open when both stories broke at once.
I was watching the numbers when the Crimson Desert AI controversy hit Reddit. Pearl Abyss had shipped undisclosed AI-generated paintings: horses with extra legs, castle architecture that made no structural sense, faces that lost coherence at the edges. Classic tells. No disclosure on the Steam page. Direct policy violation.
The second thread I had open was about picking up cats.
Protagonist Kliff can scoop a cat up and carry it on his shoulder through the entire game. Through shops. Through arm-wrestling matches. When forced to put it down, the cat follows him around mewling until he picks it up again. Kotaku ran both stories simultaneously. The AI controversy. “Everyone’s Falling In Love With Picking Up Cats In Crimson Desert.”
The top post on r/CrimsonDesert on the exact day the scandal broke: “I don’t care what anyone says. You can pick up the cats, so therefore game is good.”
I have watched community responses to controversies across a lot of studios and a lot of markets. That thread told me something specific: the game had built enough genuine affection that the community could hold the controversy and the delight simultaneously, and the delight had weight. That is not luck. That is the structural output of someone caring enough to think “yes, the cat follows him into the arm-wrestling match, that is the correct decision.”
Pearl Abyss apologised, audited, and patched within ten days. The horse with extra legs is gone. The cats remain.
The Detection Window Closed
Here is what changed in the past twelve months that makes the current moment different from every prior AI controversy:
Players are not detecting AI assets from training. They are detecting them from exposure. Because audiences now encounter AI-generated imagery constantly, in social feeds, in promotional materials, in games they have already played, they have developed pattern recognition that belongs to practitioners and non-practitioners alike.
The hands with too many joints. The backgrounds that lose coherence at the edges. The lighting that illuminates nothing in particular.
Quantic Foundry’s research put a number on the sentiment: 85% of gamers surveyed expressed negative views toward AI use in games, 63% at peak negativity. Researchers described the finding as “rare in many years of survey research.”
What they were measuring was not technophobia. It was a specific and legible objection: players felt deceived when AI assets appeared in shipped products without disclosure, and they understood that deception as a relational breach, not a technical error.
Steam Next Fest in February 2026 was where this literacy became visible at scale. Developers and players alike called out AI slop in Next Fest submissions, not as aesthetic preference but as an identifiable category. There is now a browser game called “Your AI Slop Bores Me” built entirely around detection practice. The community is training itself on purpose.
The operational consequence is simple: you cannot accidentally ship AI assets and expect not to be found. The detection window is already under 24 hours for a title with any audience at all. The question for every studio in production right now is not whether the community will notice; it is what position you want to be in when they do.
Three Controversies, Three Handling Decisions
1. Crimson Desert: visual AI assets shipped undisclosed. March 2026
The apology named the assets “experimental placeholders from early development that were inadvertently included.” Patch 1.01.00 landed within ten days. GameSpot ran side-by-side comparisons. The new painting is coherent, detailed, and clearly made by someone who understood what a horse looks like.
Pearl Abyss also, at some point in an undocumented patch, renamed the in-game “fat cats” to “loafy cats.” Nobody knows why. The community loves this. Some decisions are inexplicable and correct.
2. Arc Raiders (Embark Studios): AI voice synthesis used for NPC dialogue. October 2025
Arc Raiders launched October 30. Eurogamer’s review flagged that some NPC dialogue sounded “slightly off.” Players ran with that, and by November had confirmed the lines were AI-generated text-to-speech. Three AI scandals broke in a single gaming news cycle that week.
Embark CEO Patrick Söderlund’s initial response missed the critical distinction: the community accepted ML-based animation tools as legitimate production tooling, but rejected AI voice synthesis as performer replacement. Treating them as the same objection in a single statement did not help.
What changed things was what Söderlund said in March 2026:
“We re-recorded some of the lines post-launch using real voices. There is a quality difference. A professional actor will always outperform AI; that’s simply a fact.”
That is a commitment. It is falsifiable. The top comment on the r/ARC_Raiders thread: “Love these devs.” Arc Raiders received a BAFTA nomination in December 2025. The controversy did not kill the game. The honest handling arguably strengthened it.
3. Clair Obscur: Expedition 33 (Sandfall): placeholder textures shipped; prior denial to an awards body. December 2025
This one is the most structurally interesting, and the community response split the discourse cleanly in two.
The Indie Game Awards revoked Clair Obscur’s GOTY and Debut Game wins after Sandfall confirmed placeholder textures had shipped; the deciding factor was that a Sandfall representative had previously told the awards committee no generative AI was used. The awards went to Blue Prince and Sorry We’re Closed, with Raw Fury explicitly framing Blue Prince as “crafted entirely with human creativity.”
The community reaction was not uniform. A petition to reinstate Clair Obscur’s award gathered significant signatures. Director Guillaume Broche went on YouTube with creator Sushi within days: the studio had experimented with AI in 2022 during early development; no AI was used in artistic direction, character design, or voice acting. “Everything will be made by humans from us.”
The Game Awards 2025 gave Clair Obscur multiple wins, including GOTY. Sandfall released a free content update the same night as a thank-you. Clair Obscur lost two indie awards and kept everything else.
The Three Studios That Survived Are Not A Random Sample
All three are prestige narrative games with deep community investment built before the controversy arrived. Quantic Foundry’s data maps this precisely:
Pure systems games (Tetris, League, mobile idle) have an audience that skews positive toward AI and never came for human craft in the first place.
Prestige narrative games have the community reserves to absorb a controversy handled honestly.
The middle tier (games that rely on story-motivated players but have built neither the community capital of a Crimson Desert nor the creative clarity to say “here is what a human made and here is where AI helped”) has the smallest reserves and the most to lose.
This is also where the pencilslop movement lands hardest. If you have not read The Revenge of Pencilslop, that piece covers why audiences are actively training themselves to detect AI output and reclaiming human craft as a preference, and why the mid-tier studio is the most exposed to that shift.
What The Apology Template Actually Does
There is a statement that has appeared from Pearl Abyss, Sandfall, 11-bit Studios, Ubisoft, and Activision in nearly identical form across the past year:
“These assets were experimental placeholders from early development that were inadvertently included in the final build. We take this seriously and are committed to reviewing our pipeline.”
The statement is not wrong. It is also not a response. It fills the space without creating a relationship. It is optimised for crisis management, not for the community that is reading it.
The studios that came through their controversies did something recognisably different: they made a specific commitment, followed through on it publicly, and treated the correction as a conversation rather than a conclusion.
Söderlund’s “a professional actor will always outperform AI; that’s simply a fact” is a position, not a managed statement. Broche going on YouTube to name exactly where in the development timeline the AI had appeared costs something. That cost is what communities read as sincerity.
I have watched this pattern play out in consulting across markets and controversies for a long time. The studios that survive scrutiny are not the ones that avoided the mistake. They are the ones that built enough genuine community capital before the mistake happened. And when the mistake happened, they responded to the relationship rather than to the news cycle.
There is an operational version of this worth building deliberately:
Disclose before launch, specifically. Not “we use modern digital tools” as a general hedge. A specific description of where in the pipeline AI was used and what it was used for. Steam’s AI disclosure field exists.
Jump Space did this right: it disclosed that the voice lines were AI placeholders and committed to human voice acting as development continued. Nobody wrote a controversy piece about Jump Space.
Treat placeholder protocol as a pipeline requirement. Traditional placeholder art was deliberately ugly: low-resolution stand-ins nobody could mistake for finished work. AI placeholder art looks polished enough to pass QA, which is exactly the problem. Internal tagging and a final review gate are what prevent the horse-with-extra-legs from shipping.
When something goes wrong, respond to the community rather than to the coverage. One is oriented toward the studio’s reputation. The other is oriented toward the relationship. Communities have been reading the difference for a long time.
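None of this requires exotic tooling. Here is a minimal sketch of the review-gate idea as a pre-release check, assuming each asset carries a metadata record in a JSON manifest with an `ai_placeholder` flag; the manifest format and field name are illustrative, not a standard any engine ships with:

```python
import json
import sys
from pathlib import Path

def find_unresolved_placeholders(manifest_path):
    """Return the IDs of assets still flagged as AI placeholders.

    Assumes a hypothetical JSON manifest of the form:
      [{"id": "tex_horse_painting", "ai_placeholder": true}, ...]
    """
    assets = json.loads(Path(manifest_path).read_text())
    return [asset["id"] for asset in assets if asset.get("ai_placeholder")]

if __name__ == "__main__" and len(sys.argv) > 1:
    flagged = find_unresolved_placeholders(sys.argv[1])
    if flagged:
        # Fail the build loudly so a placeholder cannot slip into release.
        print("Release blocked. Unresolved AI placeholders:")
        for asset_id in flagged:
            print(f"  {asset_id}")
        sys.exit(1)
    print("No AI placeholders remaining.")
```

Wired into CI as a gate on the release branch, a script like this turns “nobody checked” into a failed build instead of a Reddit thread.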
The Reframe
The cats in Crimson Desert are not a PR tactic. They are evidence that someone was paying attention.
The horse-with-extra-legs was evidence that somewhere in the pipeline, nobody checked. The cat-carrying mechanic was evidence that somewhere else in the same pipeline, someone cared enough to think “yes, the cat follows him into the arm-wrestling match, that is the correct decision.” Both pieces of evidence were visible simultaneously.
The community held both at once.
That coexistence is the thing worth understanding. Players are not waiting for a reason to abandon a game they love. They are looking for evidence that the people who built it were present.
AI assets that ship without disclosure damage that relationship not because players object to the tools in the abstract, but because the slip suggests nobody was looking.
The cover-up lands worse than the mistake because the cover-up confirms the suspicion.
The studios that will build communities worth having in 2026 and beyond are the ones that treat disclosure as part of the product rather than as a legal obligation to minimise.
Not because regulators are coming, though Valve’s AI disclosure policy exists and is being enforced unevenly. Because the players who build the community that gives a game its commercial longevity are owed the honesty.
Pearl Abyss patched the horse. They did not rename the cats back.
Some things you protect because they are worth protecting.
Abbas Saleem Khan is a Principal Consultant at Llama & Griffin, advising game studios, streaming platforms, and investment funds across six continents. He writes The Pattern Recognition: gaming industry intelligence 12 to 24 months before it becomes consensus.