Breast physics and hair

Posted in Industry Rants on February 16th, 2015 by MrCranky

I confess, I just wanted to use that in a post title. But I’ve been using 3DMark to get a sense of which of the three main machines I use is the best performer. The answer, depressingly, is that all three are below the standard of a ‘gaming laptop’, and less than a third of the performance of a ‘high-end gaming machine’. Not that I chase the bleeding edge of performance; I’m far too cheap for that. But my usual tactic of staying 3-4 years behind that edge does mean that I occasionally have to see how far things have come along since I last splashed out on new kit.

How does that relate to breast physics, you ask? Well, while watching the Sky Diver test, one of the most prominent views you’re given of the sky-diver in question seems specifically designed to show off the rippling of their breasts in the wind. Or perhaps it’s the ASUS logo that’s plastered all over the suit (although curiously, not in the shot they use in their benchmark listing).

A distinctly ASUS-less promo image. With static breasts.

Not that I have anything against more accurate depictions of the human form in motion of course. I think the reason that it jumped out at me though was because it didn’t look natural. I can almost imagine the animator’s reaction to their initial feedback. “You want them to do what? Are you sure? Would they even move like that…? I don’t know, I’ve never worn a wing-suit. How about you go find me some video footage of an actual female sky-diver and I’ll work from that instead of your imagination?”

The reason this popped out at me as more than just an off-hand amusement at the benchmark graphics was my flabbergasted reaction to certain tweets this week accusing game developers of sexism, for the crime of not devoting as much effort to hair rendering as to shiny and reflective surfaces. This grinds my gears on several levels. The last time I shipped a console game (Brave), we spent a quarter of the entire frame time calculating and rendering Merida’s hair, and exactly zero time on shiny or reflective surfaces. So to pretend that we’ve just never concentrated on hair is disingenuous.

Secondly, the reason why there’s more shiny stuff in games than fabulous hair is not because, you know, screw women, but because rendering hair is hard. Not just developing it, making sure it moves properly and looks good, but actually getting it on screen is costly. Like fluid dynamics and other similar technical challenges, you’re having to simulate many, many small things at once, and then deform geometry and alter texturing every frame as a result; something 3D hardware would really prefer you didn’t do. Fundamentally, that’s costly, and the cost doesn’t go away just because you spend more development time on it. Whereas good lighting and reflections come almost for free, from the way that hardware 3D rendering works: spend some development time on getting the lighting calculations right, and then they can be done for every fragment you see on screen, at only slightly more cost than just rendering the thing in plain lit colours. And once it’s done, it works for everything: not just the subset of characters who happen to have long hair, but everything in the environment and all the characters, even the short-haired ones. So from a development point of view it’s a no-brainer as to which gets you the most pretty for the least cost. Trying to make it an issue of sexism only serves to show how little you understand about the challenges of making games.
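To make that asymmetry concrete, here’s a toy cost sketch (a hypothetical illustration, not any real engine’s numbers): simulated hair means integrating every segment of every strand each frame and rebuilding geometry from the results, while a lighting model is one extra calculation folded into per-fragment work the hardware is doing anyway.

```python
# Toy cost model for per-strand hair simulation. All figures are
# illustrative assumptions, not taken from any shipped engine.

def simulate_hair(strands, segments):
    """One frame of naive per-segment hair update.

    Returns the number of segment updates performed, to make the
    per-frame cost explicit: it scales with strands * segments, and
    the resulting vertices must be re-uploaded to the GPU each frame.
    """
    updates = 0
    for _ in range(strands):
        for _ in range(segments):
            # spring forces, integration and collision tests go here
            updates += 1
    return updates

# 10,000 strands of 16 segments is 160,000 updates every single frame,
# before the cost of regenerating and re-submitting the geometry.
per_frame_updates = simulate_hair(10_000, 16)
```

A lighting calculation, by contrast, runs per fragment on hardware that is already rasterising those fragments, which is why it scales so much more gracefully.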

No-one is avoiding making the hair look good because they’re sexist; if it were affordable, they’d be doing it all the time. Because when your characters have long hair that looks good (regardless of their gender), reviewers gush over it; it’s immediately noticeable. When your environments are a little bit more shiny than before, no-one bats an eyelid. At best it’s acknowledged as part of a wider judgement that your game looks good. Why wouldn’t we want to go for the hair? Because even though it’s nicer to have in, it still costs too damn much to get right, both in development time and in runtime resources.

VR movement

Posted in Design Ideas on January 18th, 2015 by MrCranky

So, despite my best efforts, my spare time available for experimenting with the Rift SDK has been fairly limited. I’m more convinced than ever though that there is lots of amazing potential there. There’s a lot of cynicism, and rightly so. Many of the same problems that were there in previous iterations of VR are still present. There are a lot of good posts out there covering the most apparent (the motion-sickness / nausea generated by lag, the resolution). I’m not so concerned about those. We used to have to hit 60Hz refresh rates on the dot, and with a lot less rendering power than we have now. Hitting 90Hz is achievable with discipline. The screen resolution will, I’m sure, be addressed by future iterations of the devices. These are known-quantity problems.

The unknowns to be addressed come from the parts of the tech that are new. Control, user interaction, is going to be the key. My concern here is that the old systems we used are just not ideal in a VR environment. Traditionally, we’ve been controlling avatars in a virtual world. We’ve had mostly free movement around that world, but there’s always been a clear disconnection – you’re controlling something other than yourself, and the screen shows you the view from their position. We’ve refined the control mechanisms so that it feels natural, and trained ourselves to the point where it feels a lot less like we’re rotating an avatar, and more like it’s us. “Look right” becomes a quick flick of the mouse, even though our head doesn’t actually move. The avatar becomes an extension of ourselves. That ability to make the control mechanism effectively disappear is key. In the same way, it’s easier to drive when gear changing is instinctive and done without thinking; it allows you to focus on the higher level functions.

If you’ve ever watched someone new to games playing a first person or third person game, you’ll know the effect: someone has to look down at the controller to remind themselves which joystick to use; you say “look right”, and they have to stop moving their character before they change camera angle. So many of our games are designed to take advantage of the affordances already learned by gamers. It doesn’t matter that they haven’t played your game before, if they’ve played another game in a similar style. More crucially, when designers get it wrong, that failure permeates the whole game. When someone complains that moving your character around feels like driving a tank, that niggle interferes with everything they do in your game. For all of the great things about GTA 4, I struggled with Niko’s movement. I’d miss a door by just a fraction, and then he had a minimum turning circle that meant that I’d end up bashing into the other side of the door frame instead. You get used to it and learn to compensate, sure, but it’s a problem that needs to be overcome.

Bringing it back to VR, the change in viewpoint brings the control issues into sharp relief. The immediate and all-encompassing nature of the viewpoint makes it *you* that’s in the game. You’re not controlling what you see on that screen ‘over there’, you are controlling you. So when the controls feel unintuitive, it’s *you* that feels sluggish and unresponsive. So it’s important to get it right, and I’ve seen a variety of problems with the Rift demos so far.

Assuming that you’re using a joystick or keyboard controls, you effectively have a 2-axis input controlling movement. That much we always had with first person games. But in the past there was a fundamental restriction in place. ‘Forward’ was always ‘the direction you’re facing’. There was no option to look to the right while still running forward, unless you were playing a mech or tank game which often mapped view direction to an extra input axis. But the natural mouse/keyboard or joypad controls we’re used to insisted that you always look rigidly forward, the same direction your gun was facing. That extra axis (where the body of your avatar/vehicle was pointing in a different direction to the view direction) was discouraged, because people struggled to manage their awareness of the two directions (look and move). Skilled players learned to compensate naturally for this: while running forward, to look right you’d turn and simultaneously start strafing left. But you knew exactly where you were looking and moving at all times, because the restrictions were clear.
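That strafe-compensation trick is just vector rotation. A small sketch (with conventions of my own choosing: yaw 0 faces north, positive yaw turns right) shows that turning 90 degrees right while holding full strafe-left keeps you moving in the same world direction:

```python
import math

def world_velocity(yaw_deg, move_forward, move_right):
    """Rotate 2-axis stick input into world space (x = east, y = north).

    Yaw 0 faces north; positive yaw turns clockwise (to the right).
    """
    yaw = math.radians(yaw_deg)
    east = move_forward * math.sin(yaw) + move_right * math.cos(yaw)
    north = move_forward * math.cos(yaw) - move_right * math.sin(yaw)
    return east, north

# Facing north and pushing forward moves you due north. After turning
# 90 degrees right, pushing pure strafe-left (-1 on the right axis)
# still moves you due north - you keep travelling the same way while
# now looking east, which is exactly the compensation skilled players
# perform without thinking.
```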

In VR, that restriction no longer makes sense. Instead we have different restrictions. Even if standing, the cable to the headset restricts your turning. If sitting, then there is an obvious ‘forward’, which is the direction your torso is pointing. You move your head to the left and right, but forward doesn’t change. However, the real problem is that the Rift headsets at least aren’t really anchored to that ‘forward’ direction. The headset knows what direction it’s facing, but it doesn’t know at what point the headset was facing ‘forward’ as far as the user is concerned.

Instead, all of the demos I’ve seen try to replicate the same restriction as traditional FPS controls have. ‘Forward’ is ‘where you are looking’. Walk forward on your directional axis, and look to the right, and you’ll move to the right. Which seems sensible, until you consider the need for complete freedom around the world. You have a limited head movement circle, so how do you turn completely around so that forward is south instead of north? You’re not going to twist your head 180 degrees round and press forward. So the demos map another axis on top of your head movement. So if you start facing forward and north in the world, turning your head 90 degrees right means that ‘forward’ motion moves you east. Use the rotation control to turn your character 90 degrees right, and you’re moving south. Return your head to centre though, and you’re moving east again. So even though ‘forward’ always moves in a predictable direction, you’re still having to manage awareness of that extra rotation. Your head orientation is being added on top of a base avatar orientation. That input-controlled axis is constantly fighting against the headset rotational axis. You can be turning your avatar right and rotating your head left to keep the view pointed in the same direction.
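The compounding of the two axes can be sketched in a couple of lines (a simplified model using yaw in degrees, matching the north/east/south example above):

```python
def move_direction(avatar_yaw, head_yaw):
    """'Forward' in these demos is the sum of avatar yaw and headset yaw.

    Yaws are in degrees: 0 = north, 90 = east, 180 = south.
    """
    return (avatar_yaw + head_yaw) % 360

# Start facing north (avatar 0). Turn your head 90 degrees right and
# 'forward' now moves you east. Use the rotation control to turn the
# avatar 90 degrees right too, and you move south. Re-centre your head
# and you are back to moving east - the stick-controlled axis and the
# headset axis are constantly fighting each other.
```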

Having experimented, I think trying to cling onto that old input style is a mistake. It feels horrible when you’re craning your head around to the right because you’ve been using the head orientation as the primary means to choose your direction, only to find that you actually need to turn more than 90 degrees in either direction, at which point you need to fall back on the directional turn controls. Worse, when you’re running forward at speed, you have to keep your viewpoint locked directly ahead, because if you try and glance left or right you’ll start running in that direction. You’ve lost one of the big plus points of VR, freedom of motion in your viewpoint.

Keeping complete freedom of movement in your viewpoint is, I think, key to making the user comfortable. We used to be able to make the controls avatar-centric, but now we need to be aware of the range of motion the user has, and allow them a natural way of expressing a complete range of motion without discomfort.

Instead of assuming that forward is where you’re looking, you need to build in some awareness of the user’s torso. The only natural way I can think of to do that is to add a quick and simple calibration point. Ask the user to look directly forward, and then press a button. From then on, that direction is the reference centre, and should align with the direction of motion of the avatar. Forward means moving in that direction, regardless of where the headset is pointing. Same for strafing right and left. Ideally, like the mech and tank games that have to do this naturally, you’d have an in-view indication of where forward is relative to your viewpoint. That might be your avatar visible from your viewpoint (e.g. a gun or arms), or a HUD indication.
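As a sketch of that calibration idea (the names and conventions here are mine, not from any SDK): capture a reference yaw once when the user confirms they are looking straight ahead, drive all motion from that reference, and expose the head’s offset from it for a HUD ‘which way is forward’ indicator:

```python
import math

def calibrated_move(headset_yaw, forward_ref_yaw, stick_forward, stick_right):
    """Movement relative to a calibrated torso 'forward'.

    forward_ref_yaw is captured once, when the user looks straight
    ahead and presses the calibration button. The headset yaw drives
    only the view; motion always follows the calibrated reference, no
    matter where the user is currently looking.
    """
    yaw = math.radians(forward_ref_yaw)
    east = stick_forward * math.sin(yaw) + stick_right * math.cos(yaw)
    north = stick_forward * math.cos(yaw) - stick_right * math.sin(yaw)
    # Offset of the view from the torso, for an on-screen indicator of
    # where 'forward' is relative to the current viewpoint.
    view_offset = (headset_yaw - forward_ref_yaw) % 360
    return (east, north), view_offset

# Glancing 135 degrees to the side no longer changes where 'forward'
# takes you; only the view_offset (fed to the HUD) changes.
```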

Lack of information

Posted in Industry Rants on January 1st, 2015 by MrCranky

Starting the new year afresh and reinvigorated, I am looking forward to 2015 and the changes it will bring. In an effort to get out from the hole I’ve dug for myself quietly working away on client work, I thought I’d share below the response I just wrote to the question: how can the issues that hinder the growth of the creative industries be overcome, and how can we capitalise on opportunities?

To me the biggest thing that the public sector could do to aid the creative industries, especially games, is to provide the broader view that we in the private sector are sorely lacking. The dearth of information on what is actually happening in the games industry is shameful. Sharing of information will help us all to grow, to avoid making the same mistakes, to spot opportunities as they arise and not well after they’ve been exploited by others. But we can barely even claim to know how many studios and developers are in the industry, let alone the more useful information like what they are working on or in what areas they are seeing growth/recession. We have trade bodies who poll their own members, but that represents only a fraction of the industry currently working. It’s frankly embarrassing that so few resources are put into tracking what the games industry is doing, and it seems to me that the government itself would benefit from being able to point to the growth of the Scottish games industry. It’s a manageably small sector to collect information on, smaller than the UK-wide industry, and I’d guess more interconnected as well.

We in the Scottish games industry want to be able to shout about our successes, but we can’t, because we don’t have the context to say how much better we are doing than last year. Individual successes are great, but they are fleeting; what matters is the overall trend in the industry. I feel that it’s a positive trend, but I have absolutely no data to back that up, and asking around, it seems that no-one else does either, not even the government bodies who are supposed to be there to support the industry. But how can we be supported if they don’t even know who we are and what we’re doing? Don’t we run the risk of allocating resources based on a woefully out of date picture of what is happening? What use is it to the industry if support is provided for console games that form a dwindling share of development; or for social games when our market has moved on to mobile platforms?

I think that the very first step that must be taken is to put resources in to dramatically improve the information we have on the games industry as it is now; and to commit to keeping that information current as quickly as the industry itself moves. Without that information to inform us, I feel that the answers to all of the other questions the committee are asking run the risk of being out of date and useless before any actual answers can be agreed upon. Armed with that information, the public sector can know who to engage with, and the private sector can know how their industry is changing and seek out new opportunities rather than be left behind.

Adventures in Android In-App Billing

Posted in Tales from the grind-stone on August 28th, 2014 by MrCranky

Standards Proliferation

Enough said.

Ubuntu 14.04 upgrade

Posted in Technical Guidance on August 21st, 2014 by MrCranky

After the kerfuffle with Heartbleed earlier in the year, and finding out our server installation was way out of date, I resolved to keep it more current. That meant upgrading from 12.04 to 14.04. Sadly it was not as painless as previous upgrades, and it left me with three notable problems. I’m posting my notes here in case they’re helpful to anyone else upgrading. Basically our server box acts as a DHCP server, file server (using Samba) and gateway for the internal network, as well as hosting a couple of websites which we use both internally and externally. After upgrading, I noted:

  1. Two of the hosted websites were no longer working: they were giving 404 not found messages.
  2. A persistent message being posted at startup and various points during shell sessions: “no talloc stackframe at ../source3/param/loadparm.c:4864, leaking memory”
  3. DHCP server stopped responding properly the day after the upgrade

I’d had to merge a few configuration files, of which Samba’s and dhcpd’s were two, so my initial thought was that I’d botched the merge. However, there wasn’t anything obvious in the merge results that would explain why. Anyway, issue by issue:

No talloc stackframe

This one was the most easily resolved. This post points the finger at libpam-smbpass, a Samba module which seems to have an outstanding bug. Fine, it’s not functionality we rely on, so uninstalling the module makes the problem go away:

sudo apt-get remove libpam-smbpass

DHCP server stopped responding properly

This one didn’t bite me until mid-way through diagnosing the Apache issues: my DHCP lease ran out and suddenly the laptop I was using to remote in to the server was without network. Super annoying. Setting the IP address / DNS of the laptop manually got me network connectivity to search for solutions; otherwise I would have been guessing blindly from the server terminal (which doesn’t have web browsing).

While there wasn’t anything hinting at a DHCP problem in the logs, I noted at server reboot time a line along the lines of the following:

Apparmor parser error for /etc/apparmor.d/usr.sbin.dhcpd at line 69 Could not open /etc/apparmor.d/dhcpd.d

I couldn’t find that line anywhere in my logs, but I probably wasn’t looking in the right place. The configuration file that is complaining is basically trying to #include the dhcpd.d subfolder, and failing. Still, it suggested some sort of permissions or configuration problem with AppArmor and dhcpd. Oddly though, DHCP had been working after the upgrade the afternoon before, and I could see successful DHCP negotiation going on from this morning, but it all ceased an hour or two before my DHCP lease expired. All of my searches were throwing up results for the package isc-dhcp-server though, whereas I was pretty sure the package was dhcp3-server. On checking, isc-dhcp-server was not installed. Installing it:

sudo apt-get install isc-dhcp-server

Lo and behold, DHCP was functional again, using our already existing configuration. So, I’m guessing, the packages on our legacy machine (upgraded using do-release-upgrade from 10.10) aren’t properly handled by the release upgrade procedure, and were left with folder permissions set incorrectly; which was fixed by installing the correct DHCP server package.

Apache website issues

Ubuntu 14.04 brings with it a fairly major upgrade from Apache 2.2 to 2.4. While the web server was still functional, and I could access pages resting directly under the DocumentRoot, our two sites set up using Alias directives were no longer accessible. Both returned 404 errors. Using a symbolic link in the filesystem under the DocumentRoot would allow them to be accessed, but that wouldn’t allow us to enable/disable the site at will. While there are changes to the permissions system in 2.4, we don’t use those with our sites. So all very odd.

Our setup was very simple: each site had a configuration file that only contained a single Alias line, remapping the appropriate site folder to the folder on the local disk. Further experimentation showed that we could shift the same Alias line into the default site configuration, and have it work. It gave a 403 Forbidden error, but not a 404 any more. Adding an appropriate Directory element with a “Require all granted” directive inside fixes the 403. So presumably the default permissions for an aliased directory have changed to deny by default instead of grant.
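For illustration, a minimal 2.4-era site file along those lines might look like this (paths and names are hypothetical, not our actual setup):

```apache
# /etc/apache2/sites-available/example.conf - illustrative paths only
Alias /example /srv/www/example

# Apache 2.4 denies access to the aliased directory by default;
# it has to be granted explicitly.
<Directory /srv/www/example>
    Require all granted
</Directory>
```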

So from that I can only conclude that Apache 2.2 was more forgiving of having Alias directives standing alone in their own site .conf files, for whatever reason. I’m probably missing some nuance of the setup as to why it worked before. Rather than spend too much time figuring it out, I’m going to just go with having the sites as a part of the main site instead of as sites on their own.


Posted in Tales from the grind-stone on August 14th, 2014 by MrCranky

One of the most interesting things about the field in which I work is the sheer range of topics I get to work on. Not just on different platforms or in different languages, but the actual subject matter of the projects. The project I’ve just completed, again working with Eutechnyx, managed to exercise parts of my skillset that I haven’t had to use for a while. Sometimes, working in games, you find yourself working on ostensibly the same problems. Different skin, different IPs, different engine, but really it’s the same fundamental concepts and functions that you’re re-implementing in a new project.

So when you get challenging problems, it’s really quite refreshing, because you have to go back to first principles of analysis and visualisation techniques to solve them. Where you’re presented with an overwhelming amount of information, and you have to design and implement something which wrangles that raw data into something coherent. Where you have to figure out a way of presenting that information in a way that is visually compelling and conveys that information in a useful, concise form. They are fundamentally hard problems, sometimes solvable, sometimes not, and finding the right solutions, if they exist at all, takes a level of concentration somewhat higher than usually required to bring our games to life.

This is the state of mind I’ve been in for the last few months. For confidentiality reasons I can’t say anything very much about the project itself (it’s not the lovely visualisation linked above of course), but I wanted to talk about the satisfying nature of developing visualisations in general.

Often when you’re developing code, your debugging tools are limited to logging and step-by-step debugging. But for complex data sets, especially those dealing with spatial data, it’s far more useful to display that data visually. The same is true for end users – a sequence of numbers means very little; a dumped spreadsheet of data, while accurate, doesn’t let you see shapes or patterns. Turn those numbers into 2D graphs, and you can discern patterns, noise and trends. But that still may not be enough. You can make a line graph of each component of a 3D position that changes over time, but in 2D those graphs make little sense. Allow it to be viewed in true 3D space however, and suddenly you can see the shapes. But a line covering every point that 3D position visited tells you nothing about how quickly it moved between those points. So you introduce animation over time, or colour, and suddenly the data makes sense. When writing processing code, a visual representation of the outputs lets you pick out flaws that would otherwise be hidden. Spikes where there should be smoothness, patterns where there should be only random noise, correlations that hint at relationships you didn’t realise existed.
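As a small sketch of that last step (pure illustration, with no particular plotting library assumed): derive per-segment speeds from a sampled 3D path, then map each speed onto a colour gradient so the rendered trajectory shows not just where the point went but how fast it moved:

```python
import math

def speeds(points, dt):
    """Per-segment speed along a 3D trajectory sampled every dt seconds."""
    out = []
    for p0, p1 in zip(points, points[1:]):
        out.append(math.dist(p0, p1) / dt)
    return out

def speed_to_colour(v, v_max):
    """Map a speed onto a blue-to-red gradient as (r, g, b) in 0..1."""
    t = min(v / v_max, 1.0) if v_max > 0 else 0.0
    return (t, 0.0, 1.0 - t)

# Colour each line segment of the path by its speed: slow segments
# render blue, fast ones red, and spikes or stutters that a bare 3D
# line would hide become immediately visible.
path = [(0, 0, 0), (1, 0, 0), (1, 2, 0)]
colours = [speed_to_colour(v, 2.0) for v in speeds(path, 1.0)]
```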

Of course it isn’t as simple as layering in more and more information. With too much visual clutter, it becomes impossible to discern any useful patterns from the data. So knowledge of what data is important becomes vital; ways of filtering information to show only what is relevant allow you to show information where it is needed, and hide it when it is not.

This, again, is one of the reasons why I’m so enthused about VR development. Being able to visualise spatial data is very dependent on good camera work – you have to be able to look around the visualisation. If the camera is out of your control, then you’re utterly reliant on the generated camera. If that is poor, then you might as well have a 2D visualisation of the data, because you need to have some useful spatial context to be able to process what you’re actually seeing. It’s for that reason that many optical illusions that rely on a particular perspective are defeated by moving your viewpoint; if you viewed the same illusion from a static perspective, you wouldn’t be able to tell it was an illusion at all.

Floating cube

So the introduction of VR allows us to get great flexibility of viewpoint, while not requiring the user to learn a cryptic set of control inputs to gain full 3D control over their viewpoint. That opens up great possibilities for exploring even more complex datasets and 3D structures. We’re living in an age where technology has advanced massively and the integration of computing into our everyday lives has resulted in masses of new information becoming available, in overwhelming amounts. Being able to visualise and process that information is the first step to being able to make use of it, and we’ll need new tools and tricks to do that.

Scottish Games Network

Posted in Random Stuff on December 4th, 2013 by MrCranky

What I didn’t take the time to blog about last month was my attendance at the Scottish Games Network launch event held in Edinburgh. I’ve been broadly supportive of the idea of making SGN official since Brian announced it in October. I’m sure it must be a little disconcerting for him to think that simply declaring that SGN is now the official games industry trade body for Scotland is enough to make it happen, but it’s not really as simple as that. All a trade body really needs to be taken seriously is the support of the companies it purports to represent. Officially or not, I think SGN has been doing a pretty good job of representing our interests, without being asked, or paid. The proof is in the pudding as they say, and so we will judge it on the work it does.

What I’ve said, both here on the company blog and in person when I’m out and about amongst the rest of the industry, is that communication is key. As an industry we’re not generally competing with each other. We gain a lot by collaborating, sharing knowledge, ideas and inspiration. Many if not most of the client relationships we have were started by going out, seeing what the rest of the industry was doing, and letting them know who we are and what we do. Without a focus point, to do that we’d all have to be contacting each other, and that is time-consuming and not very practical. The simple fact is: locality is important. I know what many of the game developers in Edinburgh are up to, because I meet them. Either at @GameDevEd, or at other industry events around town. I know what some of the developers in Dundee are up to, but generally only because I’m in touch with individuals at various studios up there and we chat regularly. I’d love to know more about what’s going on up there, as it’s very easy for me to lose touch, especially when we or they get busy.

So for me there’s a definite niche to be filled, that of a locus for information, someone or something capable of routing information around. That’s especially true for those outside the industry. I’m sure there are many, many Scottish organisations that are interested in interactive digital entertainment, with ideas and projects just waiting to be made. I don’t know who they are. I’d love to talk to them though. They don’t know who we are. Few outside the industry do. It’s just as infeasible for them to cold-contact every games developer in the country as it is for me to cold-contact random organisations to offer our services. But if there were a central point of information, obvious and high profile, those two organisations can be connected together. They can go to that central provider and say “we’ve got a budget and an idea, but we don’t know who can help us,” and be told “Well, Black Company makes games of about that size, or Proper, or Storm Cloud. Here are their details, I’ll introduce you.”

More importantly I feel that the government bodies here in Scotland, the Parliament, Creative Scotland, the many media departments, could all be engaging with the creative digital interactive media talent here in Scotland much more, if they had a reliable conduit into the industry. Scotland is a country exploding at the seams with culture and history, and I feel it’s crying out to be exploited in interactive media. I’ve long chafed at the need to globalise and homogenise our games to appeal to the world-wide audience. We should be embracing our heritage and making games that tap into our local culture. Such as Beeswing, a lovely little project, set in rural Scotland.


It’s fantastic that the Kickstarter for this was successfully funded (with a few of my pounds as well). Instinctively though I looked at it and thought – this is the sort of stuff that the Scottish government should be actively encouraging. I believe they would too, if they had a practical way of engaging with the Scottish games development community to start these discussions. So again, a central focal point can enable those two sides to get together and make amazing things happen.

Visibility is one of the main reasons we are members of TIGA, and is why we’ll be happy to become paying members of the Scottish Games Network as well. Not because one is better than the other, but because they serve different localities. The TIGA folks are lovely, and very efficient. They give us a presence in Westminster that I feel is important. They cover the UK industry and beyond, and that is also an area in which we are very interested. We don’t cut our business dealings off at Hadrian’s wall. But like it or not, Black Company isn’t well placed to attend events in London, and so a sister organisation that can provide even more coverage in Scotland seems like a very good idea to me.

Winter warming

Posted in Tales from the grind-stone on November 20th, 2013 by MrCranky

You can tell when the November cold snap comes around to Edinburgh. Here in the office, it means that the winter charcoal hand-warmer comes out…

Winter hand-warmer

Virtual Reality

Posted in Tales from the grind-stone on November 13th, 2013 by MrCranky

So, it seems that in a staggering display of competence, the three lengthy rants on crunch I wrote and queued for posting didn’t in fact get posted, and just sat around in WordPress till now. So much for my good intentions on blog posting. They’re up now though, so please do scroll back and take a look. Those who know me know I’ve a bee in my bonnet about both crunch and the underlying viability of games development, so those thoughts have been bubbling up for a while.

Since NASCAR: Redline shipped I’ve been busy with various different things that I’ve been neglecting, not least of which is looking ahead to see where to go next. While I’m still positive about smartphone development, I’m conscious now that the market is rather saturated, and I feel like we have missed the window where a small team can do great things and get noticed for it. That’s entirely my fault, focusing too much on work-for-hire development and neglecting our own projects, but I stand by the choices, good or bad.

Recently though I’ve been investigating the resurgence of Virtual Reality tech, specifically the Oculus Rift project and castAR. Both are using modern technology to revisit the old holy grail of immersive digital realities, but unlike previous attempts, these projects really feel like they are breaking through and making this a real possibility. The enthusiasm around both projects is real and infectious, and having gotten hold of a Rift prototype in October, I confess that I joined in. Potential projects, interesting research opportunities, titles that you just couldn’t do before: they’re all bubbling up and out of me, and I’m enthused about getting my hands dirty in a way I haven’t been for games development in a long time. What’s clear to me is that many or most of the existing techniques we use, both for user input and for user interfaces, just don’t work in a VR setting. We’ll need to throw out a bunch of our old preconceptions about how to build games and learn them anew, as well as conquer a few more problems which are unique to VR. I have plenty of ideas on that front, but will need to try them out to see if I really understand the problems, let alone the solutions.

My Rift prototype turned up earlier this week, and I find myself massively frustrated that I’m busy with other more pressing projects and can’t get started. I’ll hopefully rectify that soon enough, and you should probably expect a bunch more posts around my experimentation with the kit. I’ll have to wait until next year for a castAR kit, but that will open up even more possibilities in a different way to the Rift, and I think the final solutions will draw on the lessons learned from both systems.

A typical crunch story

Posted in Industry Rants on October 31st, 2013 by MrCranky

Following on from my previous two posts about why crunch happens, the last of my crunch posts (for a while at least) focuses on the developer, and why crunch happens even when projects are started with the best of intentions.

For most developers, the underlying business reality is that the deadlines are fixed, the budget has little room to grow, and the scope is broadly fixed when the title is green-lit. The only axis with any real wiggle room is quality, but dropping your title’s quality will hurt sales, and even if it doesn’t cost you this time, your next contract will suffer because you let the quality bar slip. It’s that inability to shift any of the parameters which makes crunch so common in our industry. Starting off with an unrealistic schedule causes crunch. Failing to respond to external or internal factors that have increased the project cost, whether by shifting the deadline, cutting scope or increasing the budget, causes crunch. If a team is closing in on a deadline they can’t make, and can’t shift the deadline or cut scope, then of course they’re going to try crunch, ineffectual as it is. They’re stuck. Why? Because the entire thing was unrealistic in the first place.

Most big games seem to involve crunch in some way (whether they turn out good or bad). But we all know, management included, that crunch is something to be avoided. At some point, the management and/or the team, voluntarily or not, decide that crunch is the least bad of all their available options. Given how bad crunch can be, and how many bad experiences we’ve all had, I don’t believe that smart, capable people would make that decision for no good reason. So I want to explore that reasoning, and perhaps bring it out into the open.

I’d like to posit an example I’ve seen a few times; obviously it’s not the only case. The developer is mid-way through their project, two weeks from a big milestone, and the work that needs to go in doesn’t fit into two weeks. The publisher won’t budge on dates or features, and there are no more people to put on it. But maybe it’s only three weeks’ worth of work. So the team does 60-hour weeks, and they still don’t quite get it all done. But they were close enough that the publisher accepts it, and the work still left to do (let’s say a couple of days) gets rolled over, because you’ve claimed to deliver it, right? You can’t get it cut later; the work still needs done. But hey, only two weeks of crunch is productive, right? And it felt productive: you got two and three-quarter weeks of work done in the space of two. And the crunch is ‘done’. Only now you’ve just cut two days out of your budget for the next milestone. And even if you hadn’t, the next milestone was actually a week over budget as well.

Chain a few of those milestones together, and not only have you been alternating between fortnights of crunch and 40 hour weeks, but your actual feature set / quality is lagging behind the milestone list, and the publisher and their QA team know it. For milestone one the decision seemed obvious – it was only an extra week of work, and you pretty much nailed that. For milestone two, well, you knew there had to be a bit of knock-on when you slipped the first milestone a little. Third and fourth? Now the publisher is on your back, and things are getting awkward. Now it’s not “we need to somehow get an extra week’s work done to make this the game we want it to be,” it’s “we need to get an extra fortnight’s work done just to avoid the publisher canning us for breach of contract.” They’re running just to stay upright.
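For illustration, the milestone arithmetic above can be sketched as a little simulation. Everything here is a hypothetical model, not figures from any real project: I’m assuming each milestone is budgeted at two weeks but really needs three, and that a fortnight of 60-hour weeks yields roughly 2.75 weeks of normal output, as in the example.

```python
# Hypothetical sketch of the milestone-debt spiral described above.
# Assumed numbers: each milestone is budgeted at 2 weeks of effort but
# actually needs 3, and two crunched 60-hour weeks produce about 2.75
# weeks of normal output (the "2 and 3/4 weeks done in two" figure).

def simulate(milestones, budget=2.0, actual=3.0, crunch_yield=1.375):
    debt = 0.0  # weeks of work rolled over from earlier milestones
    history = []
    for m in range(1, milestones + 1):
        needed = actual + debt           # this milestone's real workload
        done = budget * crunch_yield     # output from a crunched fortnight
        debt = max(0.0, needed - done)   # unfinished work rolls forward
        history.append((m, round(needed, 2), round(debt, 2)))
    return history

for m, needed, debt in simulate(4):
    print(f"milestone {m}: needed {needed} wks, rolled over {debt} wks")
```

Even with the team crunching every single fortnight, the rolled-over debt grows by a quarter-week per milestone, which is exactly the slow drift behind the publisher’s back that the story above describes.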

At that point, the management are sitting there with a pretty rubbish choice. If they do crunch, well then perhaps those work-time studies were right, and the team will actually get less than 40 hours done in a 60 hour week. But if they don’t crunch, then they know they’re going to fail. The milestone won’t be hit, the bills won’t be paid, and it all goes south really fast. The only hope they have is that the studies were wrong, that their team is at the top end of that bell curve, and that they can still be more productive than normal even though they’re pushing harder. But the fact is, they don’t really know. There’s no control group to compare themselves against, there’s no equivalent game being made without crunch. So they crunch and hope, while they try to dig their way out by other means (pleading with the publisher for more leeway, slashing the quality bar below where they’re happy with it, stealing resources from other projects / teams).

Thing is, the alternative (no crunch, in the hope that by not crunching you actually get more done) already assumes that you’re so far down the road of crunch that even with more than 100% effort you’re doing less than 100% actual work. And most teams aren’t prepared to admit that. Not the managers: the teams themselves. They know the shit is hitting the fan, and they want to bail the team out; they don’t want to be the ones saying “actually guys, I was zoned out for a whole bunch of last week and maybe did 30 hours of actual work in my 60.” They see their bosses sitting in the meeting rooms with the publisher with serious looks on their faces, and a lot of them (usually the younger ones who haven’t been through the wringer quite as often) feel guilty that they couldn’t be more effective, that they’re struggling after a few long, hard weeks.

Worse, if the managers did say “no crunch, and we’ll do better work,” they’d have to admit to the publisher that they’ve been barely able to hit the milestones they agreed to, making them look like a poor developer. Even if the publisher wasn’t aware of the crunch before, you’ve now got to explain why crunching isn’t even an option. If there have been shifting milestones or external factors, that can be argued around a bit, but fundamentally the developer is having to admit to the publisher that they’re not good enough at development to deliver on what they’ve promised, for whatever reason. That’s a bitter pill, and not one that most developers want to swallow.

Again I think it stems from the harsh financial conditions and unfounded optimism: the budget is fixed low due to market expectations, but the feature set / quality bar doesn’t shift; the developers agree to the optimistic assessment because it’s “sign this gig or go hungry.” Then everybody loses. The team gets burnt out, the developer loses money and their team, the publisher gets a shit game if they get a game at all, and the customer gets delays on their game and a poorer experience. It just isn’t as simple as those who’ve been burnt by crunch saying “it simply never works.” Long term we know that’s true. Even short term it’s not great. That doesn’t mean it won’t happen, or that sometimes it doesn’t need to happen.

But just because I understand the reasoning doesn’t mean I agree with it. Management shouldn’t be burying their heads in the sand. They should be honest about their team’s situation and performance, and they need to know that the very real costs of crunch on the staff aren’t something they can just ignore. If workers aren’t shouting against crunch, management are all too likely to forget that it’s not just productivity on the game that matters, but the well-being of their staff and team, up to that deadline and beyond it. We absolutely should not be accepting the word of management teams that conflate crunch with ‘passion’ and suggest that crunch is a natural, positive part of game development. It’s not. Mandated crunch indicates a severe, uncorrected failure somewhere along the line. Maybe it was the planning, maybe the publisher, maybe the team, maybe a combination of all three. But it’s always a failure.

Black Company Studios Limited, 14 Belford Road, Edinburgh, EH4 3BL
Registered in Scotland (SC283017) VAT Reg. No.: 886 4592 64
Last modified: August 14 2014.