The text of the article, for those who don't want to sign in:
The first game conference I ever attended was at MIT in the late 90s. It’s where, for the first time, I met people who actually worked in the game industry. Some were my heroes. Some I’d never heard of. I was just a student, with dreams of someday doing what they did, and I remember the conversations vividly.
This was the early days of 3D gaming, after the CD storage boom had made cutscenes a big part of video games. There was a sense that the industry was experimenting, trying to “crack the code” of video game storytelling, and a lot of the talks, panels, and just general chatter were about this in one sense or another. What was the “right” way to tell a game story, so that it wasn’t “just a movie”? All these people seemed to hate cutscenes, or even just general cinematic presentation, as well as the games famous for them.
I remember talking to one of these developers about Final Fantasy VII and how it compared to Xenogears, which Squaresoft had just published. These games felt almost identical to me in terms of how interactive they were. If anything FF7 felt more interactive, since Xenogears was infamous for just becoming a barrage of cutscenes in its latter half. But this guy adamantly felt Xenogears was more “interactive” than FF7.
“Why?” I asked, bewildered.
“Because you can move the camera,” he replied. “That’s a kind of interactivity, isn’t it?”
It boggled my mind that someone could think that a game where you can’t date anyone, can’t perform CPR, can’t snowboard, can’t order a drink, and can’t do a host of other eccentric little things FF7 let you do was somehow “more interactive” just because you can swing the camera left and right while walking around, but this speaks a lot to the mindset of Western–or maybe particularly North American–game developers at the time. While there were plenty of deep, richly interactive games being made, where you did have tons of such choices—from Baldur’s Gate to Fallout to Ultima to System Shock to many others—there was also this obsession with “eliminating cutscenes”, to the point that any new technique that eschewed traditional cinematic language was seen as inherently a step in the right direction, towards games “being free of the shackles of cinema”, regardless of what that meant materially in terms of choices available to the player.

[Image: A scientist presents the enviro suit in the original Half-Life.]
For an industry with these obsessions, the release of Half-Life was an instant revelation, like it was the Bell X-1 and Gabe Newell was gaming’s Chuck Yeager, the duo that broke the sound barrier. Valve had “cracked the code”, had finally shown that a game could tell a story without a single cutscene, without ever “taking control away from the player”. This is when Half-Life’s legendary status was solidified, when its list of design choices commonly cited as groundbreaking—the “cutscene-less” narrative design, the coherent sense of spatial exploration, the use of “realistic” locations, the lack of inventory management to slow you down, the crisp strategy offered by its nail-biting close-quarters combat—was first articulated. It was a towering achievement.
It also fucked everything up.
Half-Life and its also-groundbreaking sequel are both excellent, and the praise constantly heaped on them for their design virtues is more or less deserved. However, that doesn’t mean the impact of those virtues—or, more specifically, the design cult surrounding them—has been a net positive for games as a storytelling medium. On the contrary, a lot of the techniques pioneered and popularized by Half-Life basically stalled video game storytelling innovation for about a decade, and while this isn’t Valve’s fault, the Anglophone gaming establishment’s refusal to take Half-Life off its pedestal has been a contributing factor to this stagnation; a more critical look at its narrative design is therefore long overdue.

The Cosmetic Agency of Half-Life
The biggest design trope worth picking apart here, that by far has cast the longest shadow on the industry, is this “cutscene-less” approach to storytelling. Half-Life, and later Half-Life 2, were famous for taking story events normally relegated to non-interactive cutscenes and dropping them right into gameplay. Instead of that helicopter crashing in a cutscene, it would crash right in front of you during gameplay. Instead of the game halting and having a character speak to the camera in a cutscene, the character would face the player and speak when they got close. This, combined with Half-Life’s commitment to having zero loading screens, ensured that players felt part of a single, unbroken story from beginning to end, giving both games a sense of immersion and immediacy few others had, or so the thinking goes.
The issue here, of course, is that whether something feels immersive or immediate is utterly separate from whether it offers the player genuine agency, and Half-Life notoriously offered the former at the expense of the latter. Put bluntly: Half-Life’s approach to “eliminating” cutscenes was to make the entire game one big cutscene—a single, unbroken string of scripted events players have no power to affect beyond simply triggering them, essentially a movie where you press ‘play’ by walking forward, where the only “choice” you have in this movie is deciding where to point a camera named Gordon Freeman. This only “solves” the cutscene problem if your understanding of agency is entirely cosmetic, prioritizing the feeling of being more involved in a story via smoke and mirrors without actually giving players any choices at all.

[Image: Soldiers rappel out of an Osprey while the player fires a rocket at its right engine. An iconic Half-Life moment recreated in Black Mesa.]
One reason people don't remember Half-Life as being so airless and choreographed is that Valve's games have always had excellent mechanics and encounter design that make players feel good when their strings are pulled. The Special Forces soldiers who show up at Black Mesa blitz players with flank attacks, grenades, and chatter that makes it feel like you're in the firefight of your life, and the genuine combat improvisation possible with the Gravity Gun still hasn’t been matched in terms of its liquid elegance and endless variation. But it is precisely this exceptional craft that obfuscates Half-Life’s limitations. It’s easy to remain so high on this dopamine rush that you never quite notice the game is basically a theme park ride, where every carefully choreographed set-piece that happens around you has to happen—exactly the same way, every time—for the next plot point to unfold.
Half-Life wasn’t the first game to use invisible lines to trigger narrative events, but its single-minded, all-encompassing insistence on using this and only this method gives its world a curious weightlessness. Dropping players into a reality where nothing lives, nothing moves, nothing happens without their intervention, without walking over an invisible line—which is the ONLY point when the world springs to life and demonstrates any behavior at all—is Half-Life’s singular strategy for storytelling from the moment the game begins to the moment it ends. If a tree falls in City 17 and no one is around to hear it, does it make a sound? Fuck no… because no tree falls in Half-Life unless Gordon Freeman walks near it. Literally NOTHING happens unless Gordon Freeman walks near it. The entire world of Half-Life is frozen until the player brings it to life with their presence, like some massively elaborate surprise birthday party. To move through its world is to imagine at all times that the enemies, the NPCs, the monsters, whatever is waiting around the next corner, are whispering “Okay! Quiet everyone! Gordon’s coming! Get ready!” It’s a world of perpetual, transparent, utterly deterministic stagecraft.
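To make that pattern concrete, here is a minimal sketch of the trigger-volume idea, in illustrative Python rather than anything resembling Valve’s actual engine code; every name in it is hypothetical. The point is the shape of the logic: a dormant world of one-shot triggers that does nothing at all until the player’s position crosses an invisible line.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TriggerVolume:
    """An invisible one-shot region: crossing it fires a scripted event."""
    x_min: float
    x_max: float
    on_enter: Callable[[], None]  # the pre-authored sequence to run
    fired: bool = False           # one-shot: it never happens twice

    def update(self, player_x: float) -> None:
        # The world only "acts" in response to the player's position.
        if not self.fired and self.x_min <= player_x <= self.x_max:
            self.fired = True
            self.on_enter()

def helicopter_crash() -> None:
    print("Scripted sequence plays: the helicopter crashes, the same way, every time.")

# A dormant 'world': nothing in this list does anything on its own.
world = [TriggerVolume(x_min=100.0, x_max=110.0, on_enter=helicopter_crash)]

for player_x in (50.0, 105.0, 200.0):  # the player walking forward
    for trigger in world:
        trigger.update(player_x)
```

Scale that loop up to thousands of triggers gating dialogue, ambushes, and crashing helicopters, and you have the surprise birthday party: every event pre-authored, every trigger one-shot, everything waiting on Gordon.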
Of course all games involve some level of artifice, some suspension of disbelief. No game can be a perfect simulation where everything is possible, and we wouldn’t want them to be. Focusing on one aspect of experience and excluding others is not only what makes a game a game, it’s what makes art art, in any medium. It’s why as players we embrace those literal exclamation marks that appear above surprised soldiers’ heads in Metal Gear, or don’t mind recovering from wounds that should be fatal in The Last of Us, or are relieved when we find complex aircraft simple to fly in GTA. So why was Half-Life’s artifice a stumbling block for games, an albatross around its neck, as more and more games embraced its approach over the course of the 2000s?

Dawn of the FPS Age
It’s hard to overstate what a wild time the 90s were for narrative experimentation in video games. Storytelling conventions that had slowly coalesced and stabilized over the course of the 70s and 80s were almost overnight tossed aside in favor of square-one experiments involving Full-Motion Video and 3D graphics, with results more often cringe-inducing than ground-breaking. For every Doom or Mario 64 or Deus Ex there were seemingly countless more Night Traps or Phantasmagorias, under-designed games drunk on the possibilities of cinematic language and over-sold to an increasingly skeptical public as the “future” of game storytelling.
By the time this fever broke late in the decade, developers were understandably eager to move on, feeling hamstrung by marketing teams obsessed with chasing the coattails of Hollywood, and there was, at least in the North American industry, a distinct tendency to lump in any game that used cinematic language–even if done well–as part of this supposedly degenerative trend. Final Fantasy VII and Metal Gear Solid might be better produced than Night Trap, but they represented the same fundamentally wrong-headed approach to the medium, a medium screaming out to be released from its cinematic cage and—like a millennial prophecy—finally come into its own at the dawn of the 21st century.

[Image: A pixelated video of a dark Jedi on the ground at the mercy of a green lightsaber blade held off camera. An FMV cutscene in Dark Forces 2.]
This was the industry atmosphere into which the immersion-focused experiments of the late 90s emerged, with Half-Life in particular seen by the North American gaming establishment as a hard push of the game-storytelling pendulum away from cinematic language and back towards first-person immersion. The fact that it didn’t bring much agency back with it tended to be downplayed, ignored, or simply not noticed, which had a lot to do with the fact that Half-Life was an FPS, and therefore tended to be compared to other 90s FPSs, games which typically weren’t even trying to do the sorts of things it did, and not to games which were.
Half-Life’s believable world populated by realistic environments felt revelatory compared to the arcade-like, thinly-justified Mars colonies of Doom, but not compared to the meticulous, thoroughly thought-through spaces of System Shock, which came out three years earlier, or Thief, which came out the same year. Being able to look back where you came from to get that sense of a larger interconnected world felt amazing compared to Quake’s blocky, oatmeal-brown mazes, but not compared to the deeply satisfying traversal experience of Super Metroid, which came out four years earlier, or Symphony of the Night, which came out one year earlier. Looked at from one angle Half-Life felt like an FPS with a story, but looked at from another it felt like an Immersive Sim without any interaction, or a Metroidvania without any exploration. Such distinctions quickly became irrelevant, however, as games rapidly entered the era where the FPS was king.
When Microsoft muscled its way into the console war with the Xbox a few years later, it caused a massive shift away from cinematic experiments and towards First-Person Shooters. A genre that had been almost exclusively the domain of the English-speaking world’s desk-bound computers in the 90s migrated to the couch in the 2000s and went mainstream, not just for nerds anymore. LAN parties were for dorks. But landing headshots on your bros in a four-way split screen deathmatch, the whole room cheering and jeering the way they would at Monday Night Football, was cool, respectable even. Halo, Call of Duty, and a host of others pulled Anglophone gaming’s center of gravity toward this kind of experience in a big way, which is when a lot of the experimentation of the early 3D era started to die off, its demise hastened by the rapid ascension of the FPS to a blockbuster genre whose big-budget formula and big-business success everyone now wanted to chase.
When we think of the type of narrative design that became synonymous with big-budget FPSs like Call of Duty—a series whose spectacle set-pieces overwhelmed interactivity more and more with each game, to the point that some scripted set-pieces can be finished with the controller put down—that’s Half-Life’s storytelling legacy at work, the downside of its lessons and techniques being applied with ever-mounting capitalist excess. While there’s nothing wrong with enjoying this, or acknowledging it takes its own kind of craft to pull off well, for a good long time it tended to suck the air out of the room for other approaches, ones that require more care, craft, and complexity in balancing interactivity with dramatic experience.

[Image: Elizabeth in Bioshock Infinite stands over the perspective character, brandishing a heavy tome in a library. Every trace of System Shock, and most of Bioshock, was bleached out of Bioshock Infinite in favor of cinematic presentation.]
Take how the Immersive Sim—the FPS-adjacent genre that offered the clearest alternative to the surprise birthday party approach—basically disappeared for about a decade; how after System Shock 2 in 1999 and Deus Ex in 2000 there was a rapid decline in such games, despite those titles occupying a mythic status similar to Half-Life’s; how the game designers most vocally interested in following their trajectory were either driven out of business or forced, as in the case of Bioshock (2007) or Far Cry 2 (2008), to limit their experimentation to what was possible through the lens of the blockbuster FPS. It’s telling that the games which managed to carry the torch for Immersive Sim-like design at this time, during the era of peak-FPS saturation, were stealth-action series like Splinter Cell, Metal Gear, or Hitman—all of them third-person, and therefore not seen as being in design dialogue with these other games, but clearly, in retrospect, embodying nearly all their design virtues. Again, just like with Xenogears: where you put the camera seems to fundamentally alter how we perceive what a game is doing with agency. Such is the near-sightedness that comes with valuing cosmetic agency over everything else.
It wasn’t until Arkane’s Dishonored in 2012 that the Immersive Sim really made a serious comeback in the AAA space, and even that couldn’t get by without having a gun as a sub-weapon. For a good long time, and even now, it’s as if major commercial developers were simply not allowed to make a first-person game unless there was a gun sticking out of the lower right corner. This is arguably the only way Fallout—originally a 2D isometric RPG from the 90s—was able to reinvent itself as a blockbuster game with 2008’s Fallout 3, its clever marriage of RPG design and FPS presentation managing to resonate with the juggernaut audiences weaned on Halo.
Far Cry 2, Bioshock, and to a greater degree Fallout 3 and Dishonored are all successful examples of working within FPS-dominated industry norms to push narrative design away from the surprise birthday party approach and towards more richly interactive narrative worlds, but they are certainly the exception, not the rule, and even they experienced diminishing returns in sequels, most notably Bioshock, which, over the long development of Bioshock Infinite, slowly chipped away at its Immersive Sim principles until—you guessed it—the whole thing became yet another surprise birthday party. And this again is not to accuse it of being a bad game, only to illustrate that the same capitalist forces acting on Call of Duty acted on Irrational Games in their attempts to deliver a once-in-a-lifetime narrative experience at the AAA budget level. Thanks to Half-Life, the surprise birthday party—with its determinism, with its linearity, with its assurance that each player will experience the exact same things—is invariably seen as the most risk-averse way to achieve that. It will always be the most business-friendly kind of supposedly “immersive” narrative design, the utterly deterministic “movie” that isn’t a movie because apparently being able to walk around in it makes it not a movie, even though you have the same number of choices in it that you have in a movie: zero.

Half-Life’s Warring Legacies
Today, with the vast proliferation of indie games (many of them made possible by Valve’s distribution platform) and the catastrophic budget escalation of AAA games, we are living in the stark aftermath of this peak-FPS era. Few franchises can compete with Call of Duty’s Hollywood-budget surprise birthday parties, and the mantle of the Immersive Sim, the Metroidvania, and all kinds of other experiments in reconciling storytelling with rich agency has been taken up mostly by the less risk-averse indie space.
There does seem to be more of a sense now, as opposed to 20 years ago, that a diverse set of audiences want diverse types of content, that a first-person surprise birthday party where you go pew pew isn’t any sort of “code” that’s been cracked but merely one type of experience among many, alongside dating, cooking, dancing, and almost anything else you can imagine. Even with the awful, stratified state the industry is in right now, even with everything else that’s wrong, this available diversity of experience is undeniably a good thing, and basically unimaginable from the perspective of 2004, when the last cardinal Half-Life game was released.
This all speaks to what feels like, from the perspective of today, two warring legacies. There is the perceived legacy of Half-Life and the actual legacy of Half-Life: the legacy of Half-Life as a code-cracking, ground-breaking experiment in tightly controlled immersive choreography, and the legacy of Half-Life as the game that made scripted set-pieces a viable narrative tool among many.

[Image: Joel in The Last of Us cupping his hands on the side of Ellie's head as fire burns around them.]
It’s telling that, for all its purist bluster, all its rhetoric of progress toward the “right” (i.e. non-cinematic) way of architecting stories in games, the high-minded ambitions of North American industry elites, with the premium they put on immersion over agency–their millennial prophecy–have really been shut down by audiences over the past 10 years. The Xbox/FPS era swung the pendulum so hard towards set-piece driven roller coaster rides that audiences were happy to have cutscenes back, provided developers knew how to use them well. There is no greater example of this than Naughty Dog, who basically took Half-Life’s whole approach to highly choreographed set-pieces and just… added cutscenes. And guess what? Uncharted and The Last of Us were massive, runaway successes, thanks to their expert fusion of these narrative approaches. Blessedly free of any immersive ideology, they were simply content to use the right tool for the job, whether it be a cutscene, a set-piece, a mechanic, or what have you.
It is almost as if audiences are highly literate in various storytelling forms, and are able to switch between them effortlessly, if skillfully presented by a team of storytelling professionals who instinctively understand that a pithy, well-written, well-acted cutscene might be the right choice for one part of a story, while a gameplay set-piece might be for another. It’s almost as if this—the sum total of these elements coming together in the player's mind—is what immersion actually is, and not just being able to look away or not when a helicopter crashes nearby.
It’s resisting this kind of faith in audiences that pushes us towards fundamentally narrow understandings of what agency and immersion are, towards some singular vision of progress in game narrative, as if it were a straight line leading from a collage-like past into a VR-like future and not simply an ever-expanding palette for artists wishing to use the right combination of tools to deliver a particular narrative experience to a particular audience. While Half-Life 1 and 2 remain great games, and superbly well-crafted examples of a particular kind of approach to game storytelling, they are not the only kind of approach, and we are still living in some of the wreckage of big game companies and marketing teams latching onto that approach as some absurd kind of endgame. Half-Life didn’t crack any sort of code. It just gave us another set of tools, tools that can be used skillfully or unskillfully, just like cutscenes.
It would have been great if we had been able to learn that without taking this long, hellish detour through ever-more expensive, more-bloated, more exhausting surprise birthday parties. It would have been great if we had been able to learn it without killing the Immersive Sim (a process you can watch unfold in slow motion by playing each Bioshock game back-to-back). And it would be great if we could all recognize that this is also part of Half-Life’s legacy, the dark side of the North American gaming establishment’s millennial prophecy, with Half-Life as its messiah, come to deliver us all from the tyranny of cutscenes, only to replace it with another tyranny we are still trying to recover from.