this sums graphics up quite nicely:
source: gamesradar.com
Graphics R.I.P.?
[27/01/04 11:07]
Graphics today are better than ever. So, asks PC Gamer, why isn't our excitement about them at an all-time high?
The European Games Developer Conference is many things. Often intriguing. Often illuminating. Often - though perhaps not often enough - outspoken. And often about better ways to get pixel shader effects even shadier. But it's rarely as out-and-out controversial as Jason Rubin's keynote address this year: "Great Game Graphics... Who Cares?"
If you rely purely on your PC for your gaming pleasure, you're unlikely to know much about Jason Rubin, or the company he heads, Naughty Dog. They're responsible for the entire series of Crash Bandicoot games on the PlayStation and, more recently, Jak and Daxter on the PS2. They've sold 25 million copies, which explains the lovely tan Mr Rubin possesses. Their secret - and I'm paraphrasing him directly - is that they made sure their games looked better, by far, than the nearest competition, while being just entertaining enough: great graphics plus a reasonable game. This led to the aforementioned mega sales. Lately, though, the formula has faltered: while the original Jak and Daxter was hardly a failure, it didn't break the three-million boundary, despite being one of the most technically impressive games of its type. From this, Rubin has elaborated an argument that graphics alone won't be enough to sell a game any more. He posits two other things that can sell a game - association or novelty. That is, licences, or new stuff. The day of Great Game Graphics has passed.
You might shrug, thinking that he's extrapolating his own experience into a general rule. Obviously graphics are going to continue getting better and better. Nevertheless, there's something worryingly convincing about Rubin's argument. He doesn't claim that graphics aren't going to improve - the question is whether anyone will care. While this may not have such a serious impact on consoles, it's a different story for the PC. Consoles have five-year cycles: limited advances within their own lifespan, but an enormous leap in graphical fidelity every time ConsoleToyNext is released. The PC, however, has rolling improvements, month on month, and this has a dampening effect on our expectations. Almost day by day, 3D card manufacturers release new cards promising graphics that will liquidise your eyes and make them drool from their sockets. But - be honest now - when has this ever been true?
To take a recent experience: during the final days of Steam I found myself playing the original Half-Life. And, frankly, it looked perfectly acceptable. While it clearly lacks the fine polish of modern first-person shooters, the world it presented me with was entirely comparable with anything around. And, being a great game in the first place, it was more enjoyable than - say - Unreal II. However, if you went back to 1998, when Valve's masterpiece was released, and attempted to play a game five years older than that, it would be a very different experience. To go back and play System Shock, Doom or Wolfenstein requires a wholesale rearrangement of your thought processes to accept the difference in graphics quality.
This is key to Rubin's argument. The amount of graphical power used in a game often bears little relation to how much the gamer will enjoy it. Consider the leap from Space Invaders to Super Mario. The technological increase is relatively small: a few more colours and greater definition. In terms of the difference in experience, however, it's a huge leap. Instead of blank adversaries, you're now playing with characters you can have an emotional response to. Conversely, consider the jump in power between the PlayStation and the PlayStation 2. According to the raw numbers, the latter is several hundred times more powerful. Is it offering an experience that's several hundred times better? No, because the fundamentals haven't changed since the leap from the 16-bit consoles to the PlayStation: from flat characters to a 3D world and a sense of place. The PS2 is still a leap, but a relatively restrained one - and thus, while attractive, it's not irresistibly exciting.
On the PC side, consider the performance advance between a 3D card of a few years ago and one of today. Ignore everything else the card does, and the increases in the supporting PC architecture, and look purely at how many megapixels it throws around every second: the TNT2 managed 230, while the ATI Radeon 9800 Pro shifts 3,040. That's 13 times the raw power in that one measure alone. Are you getting 13 times more pleasure from your 3D card? Almost certainly not. In fact, the real leap in the PC was from the days before 3D cards to the days after. The difference since has been, in the general scale of things, minor. Attractive, yes, but still minor. While the next wave of 3D cards will offer beautiful shiny hair, that's hardly the same order of excitement as being dropped into an alien world for the very first time.
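For the curious, that multiplier is just simple arithmetic on the two fill-rate figures quoted above; a minimal Python sketch, assuming nothing beyond the article's own numbers:

    # Raw fill-rate comparison, using the figures quoted in the article.
    tnt2_fill_rate = 230              # Riva TNT2, in megapixels per second
    radeon_9800_pro_fill_rate = 3040  # ATI Radeon 9800 Pro, megapixels per second

    ratio = radeon_9800_pro_fill_rate / tnt2_fill_rate
    print(f"Raw fill-rate increase: {ratio:.1f}x")  # prints ~13.2x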
Look at the state of Great Graphics in the movies and you get a sense of both videogaming's own future and a fine counter-argument to those who think better graphics alone can keep driving games. While Pixar's Toy Story managed to attract some of its audience purely through its groundbreaking technical merits, with people leaving amazed at what those new-fangled computers could do, the even-more-attractive Monsters, Inc. aroused much less comment of that kind. With Finding Nemo, it has almost disappeared. People are going to these films for the story now, not for the technology that allows the story to be told.
It's also noticeable that as we approach perfection, it becomes more elusive. Take The Matrix Reloaded: people were disappointed that they could "tell some bits were computer generated". You wonder whether, back in 1933, audiences stomped out of King Kong moaning "That was no giant ape! That was a big old puppet!" They didn't, of course. The difference is that back then such special effects were unprecedented. These days they're old news, and the slightest little failing breaks the illusion. There's a ceiling, and movie graphics have hit it.
With games, the graphics ceiling differs in every genre. In some, it's clear we've already reached it. With accurate motion-captured movement, realistic settings and player features already the norm for sports sims, how will FIFA 2015 be any different from what we're playing now? Mud sticking to players' legs? Pupils dilating when they look into the sun? Reflective referee whistles? First-person shooters are also dangerously near the effective maximum - after Doom III and Half-Life 2, how much actual use will further graphical excess be in advancing the genre? The humble real-time strategy game, on the other hand, is about to take a genuine leap forward with Rome: Total War. Freeform, expansive games like Grand Theft Auto also have room to grow. And massively multiplayer games have a distinctly long way to go. Developers working in these areas can perhaps still make their living by releasing games with better graphics alone.
However, there's another problem. Not only are good game graphics getting less impressive, they're getting easier to implement. Jason Rubin talks about the early days of Crash Bandicoot on the PlayStation, when most developers could get about 1,000 polygons on screen, or nearer 3,000 if they really worked at it. 1,000 polys wasn't enough for a convincing third-person action game, so a developer had to be good at graphics simply to make the game at all. Fast forward a few years, and assume the same rule of thirds holds: the lazy developer gets roughly a third of what the diligent one manages. Even if you don't work at it, you can now get 50,000 polygons on screen; if you do work at it, you'll get 150,000. 50,000 is enough, by far, for any game. While there's a visible difference, it's only surface polish. Fast forward again, and it's 500,000 polys if you're lazy, 1.5 million if you're not - and it's unlikely that any but the most eagle-eyed would spot the difference. And this isn't even taking into consideration the phenomenon of off-the-peg 3D engines, which allow games as remedial as Devastation to look almost as attractive as Unreal II or Invisible War.
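To make those diminishing returns concrete, here's a minimal Python sketch of Rubin's progression, using only the polygon counts quoted above (the era labels are my own shorthand):

    # Rubin's polygon-budget progression: the diligent developer always gets
    # roughly three times the lazy one's polygon count, but past a certain
    # budget the visible payoff of that extra effort shrinks towards nothing.
    budgets = [
        ("early PlayStation", 1_000, 3_000),
        ("a few years on", 50_000, 150_000),
        ("the next step", 500_000, 1_500_000),
    ]
    for era, lazy, diligent in budgets:
        print(f"{era}: {lazy:,} vs {diligent:,} polygons "
              f"(effort still buys {diligent // lazy}x)")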
Selling a game through graphics alone is still possible. If you could, for example, make the leap from current standards to photorealism with a single graphics card and an associated game, you could guarantee a large roll of cash heading your way. But that's not the way it's going to happen. We're going to arrive there in tiny steps. To steal the horrible phrase of lazy developers and journalists everywhere, we're going to approach visual perfection through evolution rather than revolution. And while revolutions make the front page across the world, evolution is so slow as to be invisible. And that doesn't excite anyone.
This brings me back to my earlier point: the problem is going to be worse for the PC than for consoles. With those leaps forward every five years, there are archipelagos of graphical excitement ahead in console land. With the PC, and its steady progress, we're not even going to get that. In fact, it's entirely possible that the coming months offer the last genuine leap in graphical performance we'll see for some time. For the last three years, the Quake III engine has been standard for the PC action world, and now Doom III and Half-Life 2 look to offer the first radical advance on that. It'll be a leap of sorts. And it will be impressive, in its own way. But the days when graphics ruled videogames are rapidly drawing to a close.