Archive for the 'Industry Rants' Category

Incentivising crunch

Posted in Industry Rants on May 25th, 2015 by MrCranky

It’s often suggested by front-line staff in development studios that their management is aiming for crunch deliberately. As in, they know how bad it is for their team, but because they think it saves them money, they encourage it anyway. Personally, I don’t believe it’s always that cynical. Sometimes, sure, but not most times.

I genuinely believe crunch is endemic because our development process leads to a disconnect between present and future phases of the project, a failure of accountability. In the early phase, when there’s planning, people are incentivised to cram as much as possible into the plan, in as short a time as possible, for as little money as possible, while still keeping it realistic. But the incentives are all based around promising more, aiming higher, and the judgement of what is realistic is deferred till later. Most developers will have worked in a place where the project was only landed because the publisher pushed back on what was feasible in the time, and the developer over-promised.

No-one in that process has ever been rewarded for under-promising, or aiming low. No-one in the publisher would get patted on the back if they revised the scope document down, or the expected price up. No-one in the developer’s management would get rewarded if they said they could deliver less given the same resources. You are punished right away if you fail to agree the best plan you can, but you are not immediately punished if you over-promised, and you might still be able to avoid that punishment in the future. The failure to deliver is only a possibility, in the future. Plus if you fail to deliver, it can be someone else’s fault, or you can push harder, or any one of a bunch of different things. But in the early phase, the solution to the problem you have right now seems to be to over-reach, even if it creates more problems for you in the future.

In the latter phase, when it’s clear you have over-promised, it’s too late. The deadlines are set, the budget is limited, the resources are finite. The only solution to the problem you have right now is to desperately try to eke out more productivity, any way you can. Short-term thinking is rewarded, because failure in the short-term is punished terribly. Crunch is a workable solution to hit the first milestone you are in danger of slipping, and the cost of crunching only comes due after that milestone. So the solution to the problem you have right now seems to be crunch, even if it creates more problems for you in the future. Again, the punishment for failing comes right away, but the punishment for making the decision to crunch is in the future, where it may be someone else’s problem, or it may be avoided, or it might be staved off by some other means.

Accountability is the problem. There will always be finite resources, money, time and staff, and pressure to do more with less. But we wouldn’t be seeing these problems if we incentivised more conservative, realistic planning. If you make decisions that lead to failure later on, your punishment should be twice as severe as the punishment for failing early on. And you shouldn’t be able to offload the responsibility for that failure. If the team fail to deliver on your over-optimistic plans, you should be the one carrying the punishment for it. That can and should echo down the line – each person should have to face the consequences of failing to deliver, right down to the team level. But they should also be the ones estimating how much they can do. That does hit a snag at the leaf nodes, where you have two conflicting goals: you’ve just incentivised your staff to promise very little (so they know they can fulfil their promises), but you also have to find a way to incentivise them to promise the highest amount that’s realistic. And that’s quite hard.

The problem I fear is that if you do this right, and you reverse the normal incentives so that projects are far more conservative and likely to go to plan, then the games produced are smaller, less radical, less interesting, and ultimately less profitable. And very possibly not profitable enough to maintain the companies involved. Still, I maintain that it’s better to be honest with yourself about the viability of your business than it is to keep it afloat only by exploiting your staff and by failing to deliver to your clients and customers. After all, how much money do you think we waste by aiming too high and having to scrabble to recover, burning productivity well below where it should be because of crunch?

Profitability in a market where successes are rare

Posted in Industry Rants on April 30th, 2015 by MrCranky

This week I wanted to share a really great article over on Lost Garden that echoes a lot of things I’ve been saying for a long time. Not just for indies (although it’s especially relevant for them), but for larger companies too. Some standout quotes that I feel are most apt:

Game development is inherently unstable. Technology, markets, profit margins and teams shift regularly. Any of these can quickly destroy a previously comfortable business.


In the 90s, Sierra expected 1 out of 4 games to be a success and pay for the other products that failed to turn a profit. Recently, Mike Capps, the previous president of Epic, claimed that he couldn’t promise more than a 10% chance a game would be a success. If you made 10 games, on average, you’d expect only 1 would be considered a success.


Your budget is likely Target Revenue * Success Rate. So if there’s a 10% chance of reaching $500,000, you should spend $50,000 on each project.


Over time success has been dropping. 25% is almost never seen in modern game markets. […] Given a set of equally competent games, only a fraction will become profitable.

What happens if that profitable game makes $600,000? It earned 6X its costs! You made a profit of $500,000, enough to make 5 more games. However, you are still on the long road to bankruptcy, despite an apparent success. There’s only a roughly 40% chance those 5 swings at bat will result in a success. Long term, you’ll find yourself out of money or in debt.


It is a disservice to other developers to claim that a breakeven project is a financial success. Break even means almost nothing. You are still on the knife’s edge of baseline survival and should operate financially exactly as if you had achieved nothing.
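The arithmetic in those quotes is easy to check. A quick sketch, using only the figures from the quoted article (not new data):

```python
# Budget rule quoted above: budget = Target Revenue * Success Rate.
success_rate = 0.10
target_revenue = 500_000
budget = target_revenue * success_rate
print(budget)  # 50000.0 per project

# "Roughly 40% chance those 5 swings at bat will result in a success":
# the chance that at least one of 5 independent tries succeeds.
p_at_least_one = 1 - (1 - success_rate) ** 5
print(round(p_at_least_one, 2))  # 0.41
```

So even a 6X hit that funds five more projects leaves you odds-against finding the next hit, which is exactly the "long road to bankruptcy" point.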

You cannot bank on individual successes being repeated reliably. Games, even those developed by the best of developers, are not reliably successful. Maybe they miss the moment where the audience is really looking for them, maybe they get the quality bar wrong, maybe there are technical constraints that rob the title of what it needs to really work. It doesn’t matter, because unless games development suddenly becomes much more predictable than it is, a business making games has to assume that some if not most of their games will fail. If that business wants to be one which survives, it needs to be profitable across all its games, whether they succeed or fail.

A team where the developers are taking low wages, putting everything they have into their first game, makes for a great story for the press. Make or break. But it’s terrible business. A team that’s taking their funding and figuring out how many months that will let them operate for, and then planning a game to fit that period, is a team that’s most likely going to fail. Even if they survive the first game and limp on to make a second, even if through talent and luck and timing they magic up a massive success that not only makes all its costs back but also nets enough to fund their next game several times over, chances are the next game will not match the first’s success, nor the one after that.

This is true even for larger teams. The publishers are the ones who have to look at viability longer term, so often developers can ignore that and just live title-to-title, but the same pitfalls are there. If you’re a developer whose best title only earned back twice what it cost, then you shouldn’t be surprised if the publisher drops your team after the next title that only earns back a quarter of what it cost. They just can’t afford to take the risk that your next title will be a flop rather than a mediocre hit, because you’ll have become a losing bet. You have to deliver the massive breakout hits if you want to make them confident that over time you are a reliable generator of income, and not a drain on their coffers overall.

When we’re looking at whether making games is viable, we need to be looking at long-term profitability of the team, not just per-title. Anything you do to hide the true cost of your development is really just selling yourself short, setting yourself up for a later failure. Don’t lie to yourself about the cost of your time, or the extra hours you’ve put in. Don’t hide the costs of one game in something else. A game that is profitable, as long as you don’t count the months you spent eating just ramen, is not a profitable game. Don’t believe your own hype. It might be hard to accept, but there are no guarantees that the business you love is a profitable one.


Posted in Coding, Industry Rants on April 15th, 2015 by MrCranky

A discussion I was reading elsewhere linked me to this old gem on “falsehoods programmers believe about names.” I laughed, but it was one of sympathy rather than surprise. I’ve tackled localisation on a bunch of projects in my time, and the thing that takes the time is not content wrangling, or getting the right Unicode fonts in. It’s dealing with the assumptions that the code team have already made and implemented in the early days of development.

It’s a print-to-string line that formats currency for display as $1.23, regardless of the user’s locale. Or a user signup form that has one box for First Name and one box for Surname, and expects exactly one word in each. Simple things, taken from the programmer’s own experience as ‘obviously’ the way it is, thrown in because they have to get a working implementation done quickly, and no-one has asked them to take localisation into account. That can all get sorted later, right? No. Not when you build in assumptions at the very base level that simply aren’t true.
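As a sketch of what “taking localisation into account” means for that currency example: real code should lean on the platform’s i18n facilities (ICU, or Python’s locale module) rather than anything hand-rolled, but the underlying point is that the symbol, its position, and the separators are all locale data, not constants. The convention tables below are illustrative, not exhaustive:

```python
# Illustrative only: shows why "$%d.%02d" can't be hard-coded. Real code
# should use proper i18n libraries; these convention dicts are a sketch.

def format_currency(cents, conv):
    """Format an integer amount of minor units using locale conventions
    passed in as data, instead of baking in one country's habits."""
    units, minor = divmod(abs(cents), 100)
    digits = str(units)
    groups = []
    while len(digits) > 3:          # apply the locale's grouping separator
        groups.insert(0, digits[-3:])
        digits = digits[:-3]
    groups.insert(0, digits)
    number = conv["group"].join(groups) + conv["decimal"] + f"{minor:02d}"
    sign = "-" if cents < 0 else ""
    if conv["symbol_first"]:
        return f"{sign}{conv['symbol']}{number}"
    return f"{sign}{number}\u00a0{conv['symbol']}"   # trailing symbol, NBSP

US = {"symbol": "$", "symbol_first": True, "decimal": ".", "group": ","}
DE = {"symbol": "€", "symbol_first": False, "decimal": ",", "group": "."}

print(format_currency(123456, US))  # $1,234.56
print(format_currency(123456, DE))  # 1.234,56 € – symbol and separators both differ
```

Even this toy version ignores currencies without two minor digits, right-to-left scripts, and negative-amount conventions, which is rather the point: the assumptions run deeper than they first appear.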

For example, Steam. I was probably one of the early adopters of it. I didn’t want to be, annoying system tray icon that it was, but I wasn’t going to wait for Half-Life 2. To sign up, I had to use my email address as a username. Sure. Whatever. It’s now over 12 years later. That email address, tied to an ISP I moved away from, is long since gone. Can I change my username? No. Because they insisted on a particular form of unique ID at the time, and they insisted that usernames can never change. New users don’t have the same restriction, they can pick whatever username they want, but mine is frozen in time. Even though there is an actual email field on my account which is wholly separate from the username, I still have to enter a 30 character email address that has no relation to reality every time I want to log in. While I can just treat it as not an email address but just an oddly formatted unique identifier string, it jars with me, every single time.

Arguably this is just the meat of development – changing requirements over time invalidating initial assumptions. But for me it’s a plea to other developers: slow down and take some time thinking about your initial implementation. If it’s not on a specification handed to you and you’re winging it based on how you think it should be, maybe think to yourself what the ramifications of how you choose to implement it will be. What won’t you be able to do if you implement it this way? What are the awkward cases, the potential ranges of input? Will it be possible to fix it later once the system is live and populated with data, or are you building in something that’s fundamental to the system?

Breast physics and hair

Posted in Industry Rants on February 16th, 2015 by MrCranky

I confess, I just wanted to use that in a post title. But I’ve been using 3DMark to get a sense of which of the three main machines I use is the best performer. The answer, depressingly, is that all three are below the standard of a ‘gaming laptop’, and less than a third of the performance of a ‘high-end gaming machine.’ Not that I chase the bleeding edge of performance, I’m far too cheap for that. But my usual tactic of staying 3-4 years behind that edge does mean that I occasionally have to see how far things have come along since I last splashed out on new kit.

How does that relate to breast physics, you ask? Well, while watching the Sky Diver test, I noticed that one of the most prominent views you’re given of the sky-diver in question seems specifically designed to show off the rippling of their breasts in the wind. Or perhaps it’s the ASUS logo that’s plastered all over the suit (although curiously, not in the shot they use in their benchmark listing).

Not that I have anything against more accurate depictions of the human form in motion of course. I think the reason that it jumped out at me though was because it didn’t look natural. I can almost imagine the animator’s reaction to their initial feedback. “You want them to do what? Are you sure? Would they even move like that…? I don’t know, I’ve never worn a wing-suit. How about you go find me some video footage of an actual female sky-diver and I’ll work from that instead of your imagination?”

The reason this popped out at me as more than just an off-hand amusement at the benchmark graphics was my flabbergastedness at certain tweets this week, accusing game developers of sexism, for the crime of not devoting as much effort towards hair rendering as to shiny and reflective surfaces. This grinds my gears on several levels. The last time I shipped a console game (Brave), we spent a quarter of the entire frame calculating and rendering Brave’s hair, and exactly zero time on shiny or reflective surfaces. So to pretend that we’ve just never concentrated on hair is disingenuous.

Secondly, the reason why there’s more shiny stuff in games than fabulous hair is not because, you know, screw women, but because rendering hair is hard. Not just developing it, making sure it moves properly and looks good, but actually getting it on screen is costly. Like fluid dynamics and other similar technical challenges, you’re having to simulate many, many small things at once, and then deform geometry and alter texturing every frame as a result; something 3D hardware would really prefer you didn’t do. Fundamentally, that’s costly, and the cost doesn’t go away just because you spend more development time on it. Whereas good lighting and reflections come almost for free, from the way that hardware 3D rendering works; spend some development time on getting the lighting calculations right, and then they can be done for every fragment you see on screen, at only slightly more cost than just rendering the thing in plain lit colours. And once it’s done it works for everything: not just the subset of characters who happen to have long hair, but the whole environment and all the characters, even the short-haired ones. So from a development point of view it’s a no-brainer as to which gets you the most pretty for the least cost. Trying to make it an issue of sexism only serves to show how little you understand about the challenges of making games.

No-one is avoiding making the hair look good because they’re sexist; if it was affordable then they’d be doing it all the time. Because when your characters have long hair that looks good (regardless of their gender), reviewers gush over it; it’s immediately noticeable. When your environments are a little bit more shiny than before, no-one bats an eyelid. At best it’s acknowledged as part of a wider judgement that your game looks good. Why wouldn’t we want to go for the hair? Because even though it’s nicer to have in, it still costs too damn much to get right, both in development time and in runtime resources.

Lack of information

Posted in Industry Rants on January 1st, 2015 by MrCranky

Starting the new year afresh and reinvigorated, I am looking forward to 2015 and the changes it will bring. In an effort to get out from the hole I’ve made for myself quietly working away on client work, I thought I’d share below the response I just wrote up to the question: how can the issues that hinder the growth of creative industries be overcome, and how can we capitalise on opportunities?

To me the biggest thing that the public sector could do to aid the creative industries, especially games, is to provide the broader view that we in the private sector are sorely lacking. The dearth of information on what is actually happening in the games industry is shameful. Sharing of information will help us all to grow, to avoid making the same mistakes, to spot opportunities as they arise and not well after they’ve been exploited by others. But we can barely even claim to know how many studios and developers are in the industry, let alone the more useful information like what they are working on or in what areas they are seeing growth/recession. We have trade bodies who poll their own members, but that represents only a fraction of the industry currently working. It’s frankly embarrassing that so few resources are put into tracking what the games industry is doing, and it seems to me that the government itself would benefit from being able to point to the growth of the Scottish games industry. It’s a manageably small sector to collect information on, smaller than the UK’s, and I’d guess more interconnected as well.

We in the Scottish games industry want to be able to shout about our successes, but we can’t, because we don’t have the context to say how much better we are doing than last year. Individual successes are great, but they are fleeting; what matters is the overall trend in the industry. I feel that it’s a positive trend, but I have absolutely no data to back that up, and asking around, it seems that no-one else does either, not even the government bodies who are supposed to be there to support the industry. But how can we be supported if they don’t even know who we are and what we’re doing? Don’t we run the risk of allocating resources based on a woefully out of date picture of what is happening? What use is it to the industry if support is provided for console games that form a dwindling share of development, or for social games when our market has moved on to mobile platforms?

I think that the very first step that must be taken is to put resources in to dramatically improve the information we have on the games industry as it is now; and to commit to keeping that information current as quickly as the industry itself moves. Without that information to inform us, I feel that the answers to all of the other questions the committee are asking run the risk of being out of date and useless before any actual answers can be agreed upon. Armed with that information, the public sector can know who to engage with, and the private sector can know how their industry is changing and seek out new opportunities rather than be left behind.

A typical crunch story

Posted in Industry Rants on October 31st, 2013 by MrCranky

Following on from my previous two posts about why crunch happens, the last of my crunch posts (for a while at least) focuses on the developer, and why crunch happens even when projects are started with the best of intentions.

For most developers, the underlying business reality is that the deadlines are fixed, the budget has little room to grow, and the scope is broadly fixed when the title is green-lit. The only axis with any real wiggle room is quality, but dropping your title’s quality will hurt sales, and even if it doesn’t cost you this time, your next contract will suffer because you let the quality bar slip. But it’s that inability to shift any of the parameters which is the reason why crunch is so common in our industry. Starting off with an unrealistic schedule is what causes crunch. Failing to respond to external or internal factors that have increased the project cost, either by shifting the deadline, cutting scope or increasing the budget, causes crunch. If a team is closing in on a deadline they can’t make, and the developers can’t shift the deadline or cut scope, then of course they’re going to try crunch, ineffectual as it is. They’re stuck. Why? Because the entire thing was unrealistic in the first place.

Most big games seem to involve crunch in some way (whether they turn out good or bad). But we all know, management included, that crunch is something to be avoided. At some point, the management and/or the team, voluntarily or not, decide that crunch is the least bad of all their available options. Given how bad crunch can be, and how many bad experiences we’ve all had, I don’t believe that smart, capable people would make that decision for no good reason. So I want to explore that reasoning, and perhaps bring it out into the open.

I’d like to posit an example that I’ve seen a few times; obviously it’s not the only case. The developer is mid-way through their project. Two weeks from a big milestone, the work that needs to go in doesn’t fit into two weeks. The publisher won’t budge on dates or features, and there are no more people to put on it. But maybe it’s only three weeks’ worth of work. So the team does 60-hour weeks, but they still don’t quite get it all done. But they were close enough that the publisher accepts it, and the work still left to do (let’s say a couple of days) gets rolled over, because you’ve claimed to deliver it, right? You can’t get it cut later; the work still needs done. But hey, only two weeks of crunch is productive, right? And it felt productive – you got 2 and 3/4 weeks done in the space of two. And the crunch is ‘done’. Only now you’ve just cut two days out of your budget for the next milestone. And even if you hadn’t, the next milestone was actually a week over budget as well.

Chain a few of those milestones together, and not only have you been alternating between fortnights of crunch and 40 hour weeks, but your actual feature set / quality is lagging behind the milestone list, and the publisher and their QA team know it. For milestone one the decision seemed obvious – it was only an extra week of work, and you pretty much nailed that. For milestone two, well, you knew there had to be a bit of knock-on when you slipped the first milestone a little. Third and fourth? Now the publisher is on your back, and things are getting awkward. Now it’s not “we need to somehow get an extra week’s work done to make this the game we want it to be,” it’s “we need to get an extra fortnight’s work done just to avoid the publisher canning us for breach of contract.” They’re running just to stay upright.
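The compounding in that story can be put into a toy model. All the numbers here are invented for illustration (120 hours of real work needed per milestone, 110 hours of effective output from the first crunch fortnight, 10% output decay per repeated crunch); they are not measured data, but the shape of the result is the point:

```python
# Toy model of rolling milestone debt under repeated crunch.
# Every figure is illustrative, not measured.

def run_milestones(n, needed_per_ms=120.0, crunch_output=110.0, decay=0.9):
    """Shortfall from each milestone rolls into the next instead of being
    cut, while sustained crunch erodes the team's effective output."""
    debt, efficiency = 0.0, 1.0
    debts = []
    for m in range(1, n + 1):
        needed = needed_per_ms + debt        # this milestone plus the rollover
        done = crunch_output * efficiency    # hours of real work produced
        debt = max(0.0, needed - done)
        efficiency *= decay                  # crunch fatigue compounds
        debts.append(debt)
        print(f"milestone {m}: {debt:.0f}h of work rolled forward")
    return debts

run_milestones(4)
# The rollover grows every milestone, even though the team never stops crunching.
```

Under these assumptions the first milestone slips by a couple of days, and by the fourth the team is more than a fortnight behind: the same trajectory as the story above, without anyone ever making an obviously bad single decision.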

At that point, the management are sitting there with a pretty rubbish choice. If they do crunch, well then perhaps those work-time studies were right, and the team will actually get less than 40 hours done in a 60 hour week. But if they don’t crunch, then they know they’re going to fail. The milestone won’t be hit, the bills won’t be paid, and it all goes south really fast. The only hope they have is that the studies were wrong, that their team is at the top end of that bell curve, and that they can still be more productive than normal even though they’re pushing harder. But the fact is, they don’t really know. There’s no control group to compare themselves against, there’s no equivalent game being made without crunch. So they crunch and hope, while they try to dig their way out by other means (pleading with the publisher for more leeway, slashing the quality bar below where they’re happy with it, stealing resources from other projects / teams).

Thing is, the alternative – no crunch, and hope that by not crunching you actually do more – already assumes that you’re so far down the road of crunch that even with >100% effort you’re doing <100% actual work. And most teams aren’t prepared to admit that. Not the managers: the teams themselves. They know that the shit is hitting the fan, and they want to bail the team out; they don’t want to be the ones saying “actually guys, I was zoned out for a whole bunch of last week and maybe did 30 hours of actual work in my 60.” They see their bosses sitting in the meeting rooms with the publisher with all serious looks on their faces, and a lot of them (usually the younger ones who haven’t been through the wringer quite as often) feel guilty that they couldn’t be more effective, that they’re struggling after a few long hard weeks.

Worse, if the managers did say “no crunch, and we’ll do better work,” they’d have to admit to the publishers that they’ve been barely able to hit the milestones they agreed on, making them look like a poor developer. Because even if the publishers weren’t aware of the crunch before, you’ve got to explain why crunching now isn’t even an option. If there have been shifting milestones or external factors, that can be argued around a bit, but fundamentally the developer is having to admit to the publisher that they’re not good enough at development to deliver on what they’ve promised, for whatever reason. That’s a bitter pill, and not one that most developers want to swallow.

Again I think it’s stemming from the harsh financial conditions and unfounded optimism: the budget is fixed low due to market expectations, but the feature set / quality bar doesn’t shift; the developers agree to the optimistic assessment because it’s sign this gig or go hungry. Then everybody loses. The team gets burnt out, the developer loses money and their team, the publisher gets a shit game if they get a game at all, and the customer gets delays on their game and a poorer experience. It just isn’t as simple as those who’ve been burnt by crunch saying “it simply never works.” Long term we know that’s true. Even short term it’s not great. That doesn’t mean it won’t happen, or that sometimes it doesn’t need to happen.

But just because I understand the reasoning doesn’t mean I agree with it. Management shouldn’t be burying their heads in the sand. They should be honest about their team’s situation and performance, and they need to know that the very real costs of crunch on the staff aren’t something they can just ignore. If workers aren’t shouting against crunch, management are all too likely to forget that it’s not just the productivity on the game that matters, but the well-being of their staff and team, up to that deadline and beyond it. We absolutely should not be accepting the word of management teams that are conflating crunch with ‘passion’, and suggesting that crunch is a natural, positive part of game development. It’s not. Mandated crunch indicates a severe, uncorrected failure from somewhere along the line. Maybe it was the planning, maybe it was the publisher, maybe it was the team, maybe a combination of all three. But it’s always a failure.

The real cost of making games

Posted in Industry Rants on October 24th, 2013 by MrCranky

Last time I talked about inaccurate estimating, and the dangerous road publishers and developers are heading down by lying to themselves and each other about the real cost involved in making their games. To me, the arguments about crunch and contingency are looking in the wrong place. They’re a symptom, not the root problem in themselves. Crunch happens because there aren’t any tenable options left to a developer that is mid-way through a title and has a fixed deadline to hit. To appreciate why it’s the only option left, you have to step back a bit.

Most developers are pitching for business from publishers. A few get their finance from a non-publisher entity, but the relationship is effectively the same. Publisher-owned studios are in much the same situation, it’s just that the pitch and negotiation stage isn’t between two distinct businesses, but between units in the same business; so the negotiation is less antagonistic, but the basic relationship is the same. One side provides the finance, and gets the revenue/profits from selling the game; the other provides the game for some cost. The financier is buying a title that it can sell on for a profit. The console market has moved to a place where to make a profit, you have to hit a certain level of quality and have a game of a certain level of scope. So there is a minimum viable product for the financier, and an effective market size that means it’s not cost-effective to make a title unless it costs little enough that it can make its costs back. Most titles’ cost is proportional to the number of man-months involved, so shifting a deadline out doesn’t really save any money, quite the opposite – the developer’s staff need paid more for that extra time. So generally, the deadline is fixed.

It’s with that price in mind that the only variable left gets decided: scope. How big a game will it be? How complicated? Will it break new ground, or go with a safe mechanic or style that the developer is confident of delivering for the budget? Here’s where the problem comes: how big does it need to be to make its money back? I think we’ve got to face the very real possibility that the effective cost of making the games the console market expects outstrips the likely revenue you’ll get from those titles. If it does, the difference has to come from somewhere.

From the developer’s point of view, it is hard to get a publisher to sign on to what you think is a reasonable price for making the game they’d like. Of course they want more for less; their margins have been squeezed to the bone as it is. But if you have a team of staff waiting to make a game, the cost of refusing to make a game because the publisher is only prepared to pay 80 or 90% of what you think it will actually take to make their game may be that you fold altogether. At least if you take the 80% deal you can argue the scope down later, or find some other way of making it work.

That’s where the trouble kicks in. If your company is bidding low to get financing, then making up the difference through crunch (which is effectively asking the employees to subsidise the project cost through ‘free’ labour), it’s screwed. But the alternatives aren’t much better for the company, although they’re clearly better for the staff:

  1. Don’t make the game at all. Company has no business, shuts. Financiers get no games, can’t make a profit.
  2. Make a smaller game. Market rejects it due to unrealistic expectations, financiers lose out, next title doesn’t get funded, company shuts.
  3. Bid low and try to make the game for less than it costs, through crunch. Company and financiers do okay on this title. Staff get burnt out, next title costs even more to deliver (through reduced efficiency/quality), repeat this choice scenario again but with worse numbers to start with.
  4. Bid low and manage to raise the price later. Company does okay, but financier loses out when revenue doesn’t match cost. Next title doesn’t get funded, company shuts.
  5. Bid realistically, financier knows the numbers don’t work. Company loses out, shuts. Financier either gets no games, or finds some company willing to choose scenario 3.

You can probably see why companies choose option 3, even when they know what the consequences are. Because it’s the least-bad option available to them. And they can persuade themselves that this time will be different, this time they’ll work smarter, and they’ll hit those lower costs without crunching, because they’re good at what they do. When that works out, everyone’s happy. When it doesn’t, there are lots of factors they can blame. NB: “Bid realistically” here means hiring great planners, and adopting a sensible, reactive planning approach like I described last time. A company can be bidding low without even realising it, but that doesn’t make their situation any better.

When the fundamentals are that it costs that particular developer more to make that particular game than they thought, that’s a business doomed to extinction. Crunch is a side issue, one symptom of many; the root cause is denial about how much it actually costs to make the games we are building. The only way out is to make different games, maybe in different markets, which actually cost less to make than they take in revenue. Maybe I’m wrong; maybe the console games business is eminently viable. But the reality of difficult financial conditions, and the developer’s strategy for dealing with them, is the core problem underlying crunch. Railing against crunch will do little to help us if we don’t address the underlying business conditions that cause the unrealistic expectations in the first place.

Crunch vs. Contingency

Posted in Industry Rants on October 17th, 2013 by MrCranky

So the PlayStation 4 and Xbox One are soon to be released, launching us into another console generation. This time around, it’s not just me that is cynical about the prospects for the ‘traditional’ games industry. The ecosystem of games has been changed irrevocably by the advent of smartphones, tablets, and a resurgence from PC gaming. It’s no longer a given that there is a niche for console gaming large enough to support the costs of developing those games. But I’ve certainly been wrong before, and I don’t want to call console gaming dead before its time.

Recently, in response to this article on crunch, I found myself coming at this tired old debate from another angle. Many in the industry, generally not management types, are frustrated by the management’s inability to put in sufficient contingency, resulting in an almost inevitable period of crunch, where the developers put in overtime far over and above their expected working hours, to try and get the title out for its fixed deadline. Typically, when the ‘more contingency’ argument is rolled out, it is countered with “game development is hard, and unpredictable,” and “you can’t schedule for ‘fun’.” The counter-counter argument to that is typically that other software industries deal with equally unpredictable factors, and they don’t have to crunch in quite as pathological a way as we do. The core of these arguments is really this niggling underlying sense that crunch is a natural consequence of not being quite good enough at making games, and that’s problematic.

Thing is, being bad at making games is a cause of crunch. But not because the people making the games are bad at what they do. Because part of making games is estimating how long it will take (and correspondingly how much it will cost) to make the game, given the team you have. Not an ideal team, not the team you’d like to have, the team you have actually got. Planning is hard. Some game-devs, usually the ones who’ve not had to make a plan for any sort of sizable project, think that all that is needed is ‘more contingency.’ This is waved around as if it was really simple to do, and as if the management / planners are deliberately not doing it, so that crunch is required, because crunch is cheap and contingency isn’t. But anyone that has to make a plan, and more importantly anyone that has to sell a plan to the game’s financiers, knows that simply whacking on a bigger and bigger percentage figure for contingency doesn’t work. It is admitting that you don’t know how things are going to go, and trying to pick a single large fudge factor that insulates you against bidding too high or too low. We almost never make the same game twice; previous games aren’t much help at predicting how long future games will take. You can break things down to estimable components, but the way those components interact, in ways which may or may not work, which may or may not be fun, is what turns a project from under-budget to over-budget.
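That last point, about components interacting, is why a flat fudge factor fails. It can be sketched with a toy Monte Carlo simulation (all the numbers and distributions below are hypothetical, picked only to show the shape of the problem, not taken from any real project data): when overruns are correlated across tasks, as design changes tend to be, no single contingency percentage reliably covers you without also pricing you out of the bid.

```python
import random

def coverage(contingency, num_tasks=20, estimate=10, runs=10000, seed=1):
    """Fraction of simulated projects finishing within plan * (1 + contingency).

    Each run draws a project-wide 'churn' factor (design changes hit every
    task at once) plus independent per-task noise. Both distributions are
    hypothetical, chosen only to illustrate correlated overruns.
    """
    rng = random.Random(seed)
    plan = num_tasks * estimate          # naive bottom-up plan, in days
    budget = plan * (1 + contingency)
    within = 0
    for _ in range(runs):
        churn = rng.lognormvariate(0, 0.3)   # shared overrun factor, median 1
        total = sum(estimate * churn * rng.lognormvariate(0, 0.25)
                    for _ in range(num_tasks))
        if total <= budget:
            within += 1
    return within / runs

for c in (0.1, 0.2, 0.3):
    print(f"{int(c * 100):>2}% contingency: plan met in {coverage(c):.0%} of runs")
```

Because every task shares the churn factor, the overruns don’t average out across tasks the way independent errors would; even a generous contingency still leaves a sizeable fraction of simulated projects over budget, while a contingency big enough to cover the tail would lose you the bid.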

That’s not to say we can’t get a lot closer than we do, with better planning. Game-devs in my experience are almost always hopelessly optimistic, even though project after project teaches them that requirements do change, designs do change, and that a sizeable software project invariably has nuances that couldn’t reasonably be predicted at the start. Fundamentally though, there are two changes that need to happen before we’ll stop seeing regular, mandated crunch.

Firstly, we need to accept that the scope, design and timetable for the development is flexible. Trying to nail down the plan up front is foolish and naive. Either the developer does stick to the plan, and the game is crippled because it didn’t respond to the practically inevitable changes that were needed to make it the game it should have been; or the developer diverges from the plan, and either the publisher has to pick up the cost (from the deadline slipping) or the developer does (either by paying for more development time, or by burning out their staff with crunch). As the development continues, the plan should become more and more clear, but it won’t be clear up front. A good developer, and the publisher/financier that is bankrolling the development, will be continually re-assessing the plan as to what is feasible, and what is desired. The publisher will always be pushing for more for less money, and the developer will be pushing for less, but it needs to be accepted that the ‘plan’ is a continually shifting thing, that is going to end up being a compromise, negotiated by both sides.

Secondly, both the financier/publisher and developer need to be honest about how much it actually costs to make the games that are being made. Hiding the real development cost of a title by burying it in crunch is effectively passing off some of the cost of development onto the staff, and that is fundamentally bad for all concerned. But more importantly, it’s leading both developer and publisher down the road to bankruptcy, from sticking their heads in the sand. More on that next time.

Management structure

Posted in Industry Rants on June 28th, 2013 by MrCranky

Written in response to musing about whether or not Valve’s ‘cabal’ structures were useful, or just a quirk of the company.

Management isn’t generally the problem; the problem is that after a certain point the structure starts to exist to serve the structure, not the needs the structure was originally supposed to serve. All organisational structures, be they flat, tiered, cabals, whatever, are there to facilitate the business needs. Generally a games developer needs to make better games, faster and cheaper. When you spend all day in interminable meetings because your hierarchy is a bad fit for what actually needs done, and more time is spent talking about what should be done than on doing it, you’re not serving the business. When you spend a bunch of time flitting between tasks because it’s not clear whether you or someone else should be doing something, and end up duplicating someone else’s work while other vital things fall through the cracks, you’re not serving the business.

All different sorts of management can be fine, great even, as long as everyone remembers that at the end of the day it’s supposed to make the work go better, not worse. It doesn’t matter whether it’s top down, bottom up, side to side or shaken not stirred, as long as it’s making it easier for real, productive, money-making development to happen. Remember those Time and Motion studies? I think that’s what we need sometimes – someone from outside to point out when our structures are getting in the way rather than helping. It’s very hard to see when you’re in the thick of it; you get a sense that something is wrong, that this madness can’t be the best way to do things, but not how to fix it.

Maturity in fiction and games

Posted in Industry Rants on November 11th, 2012 by MrCranky

I’ve been attempting to thrash out some opinions in my head recently, and I think they’ve reached the stage where writing them down would help. I’m thinking about the sorts of games the industry tends to make, and seeing in them parallels with fiction in books. Specifically I’m thinking about the sorts of stories we tell, and the kind of writing involved. Looking back on the most notable games of the last twenty years, it seems to me that many if not most games use a science-fiction or fantasy (SF&F) setting. The ones which don’t (I’m thinking Call of Duty and the other modern FPS games, Uncharted, etc.) all tend to rely on the same tropes which I’ll talk about later.

For some background, I’ve recently read the Game of Thrones series, which for those who haven’t read it is a gritty fantasy series set in an essentially medieval world. The characters are dark and flawed, and the line between the heroes and villains of the piece is most definitely blurred. ‘Good’ characters are not uniformly noble, and ‘evil’ characters are not unremittingly bad. The main characters are as vulnerable as everyone else in the world; they don’t have special skills, and they’re not extraordinarily lucky. They die just like everyone else, and just being a main character is no guarantee they’ll even survive till the end of the book. Interesting stuff happens all over the place, not just where the main characters are. Fortuitous events are as often bad for the protagonists as they are good. I won’t say “just like real life” because real life doesn’t have a whole lot of dragons in it, but it’s certainly a lot more plausible than a lot of SF&F fiction.

I’m almost tempted to use the term ‘grown-up fiction’ here, but I think that’s doing a disservice to SF&F fiction, which can be as grown-up and compelling as regular fiction. But the tropes that I see in non SF&F games are the same ones you come across in SF&F fiction and games. Here are a few:

  • The protagonist(s) turn out to have amazing powers that elevate them far above regular people, e.g. amazing strength, abilities with weapons or magic, or supernatural senses; or maybe they’re the one and only person who is the fulfilment of some ancient prophecy.
  • These powers are often previously undiscovered and the protagonists develop them through the course of the story, leading to the story’s climax where the full range of their abilities will be tested.
  • The antagonists have powers, or a similar advantage, that rival the protagonists’, but they will already be in full command of them at the start of the piece.
  • Alternatively the antagonists will be in control of the world situation (e.g. an evil government commanding an army of minions), and the protagonists are only safe because they are hidden, and achieve victory by using their superior abilities against ever-increasing numbers / strengths of minions.
  • Minions will be so staggeringly ineffective their only purpose is to be cannon fodder for the developing protagonist. The unstoppable army that has supposedly swept away all resistance seems to be entirely staffed by soldiers who can’t tie their own shoe-laces.
  • If the protagonists don’t have great abilities, then they are at least unnaturally lucky – other minor characters throw their lives away while the main characters are miraculously untouched, despite the antagonists being in a clearly superior position.
  • Alternatively they will be the rich and noble sons and daughters of the rulers of the land; uniquely placed to get involved in high adventure, without needing to ever sully themselves with something as hum-drum as a regular job, just to earn enough to put a roof over their heads.
  • The protagonists will always be in the right place at the right time for interesting stuff to happen. The village which has been ignored by the evil emperor for years is raided by the empire’s secret police only a day after the protagonists seek refuge there.

Sounding familiar? Star Wars, Lord of the Rings, Harry Potter, Eragon, The Belgariad; Call of Duty, Half-Life, Doom, heck – every FPS ever, Uncharted, GTA, Max Payne, Prototype, Ninja Gaiden, God of War. They’re not unique to SF&F settings, but SF&F does use them rather a lot.

This I think is where my feeling that these are immature stories comes from. They appeal to our sense of wanting to be special; we sympathise with a powerless character becoming powerful, and standing for all that’s good against a clearly evil villain. We don’t want them to have weaknesses because we’re putting ourselves in the protagonist’s place, and we don’t want to have weaknesses. But the notion of a super-powerful character who is only vulnerable because they don’t realise just how strong they are is the very definition of an adolescent fantasy of what a great character would be. He’s a ninja with super-strength, who can fly faster than the speed of sound, and can also stop time and can totally be invisible. Really? And he hasn’t conquered every enemy in the entire world yet why? Comic writers have realised this since the start, as the arms race of super-hero versus super-villain is a never ending one. A super hero who is invulnerable and superior to all his foes is a really boring character. They have to be vulnerable, both in their powers and in their characters, to be able to weave them into an interesting story. And no, not being able to be everywhere at once isn’t a vulnerability, at least not a proper one. This isn’t limited to super-hero fiction either: if you’ve played Call of Duty or Wolfenstein but haven’t seen Band of Brothers, watch at least a couple of episodes; the brutality of fighting in WWII is inescapable – one man wouldn’t be mowing down dozens and dozens of Axis soldiers, he’d be lucky to kill half a dozen before luck meant that he took a bullet himself.

Now flip it around the other way. There is a surfeit of fiction out there that doesn’t fall into these traps (classic literature such as Dickens, Austen, etc., not to mention crime novels, historical fiction novels, romance novels). But comparatively few games manage the same. There are plenty of abstract games (e.g. Tetris), simulation games (e.g. Gran Turismo, flight sims, or The Sims), but I think most people would struggle to name more than one or two high profile games with narratives that don’t fall prey to these same easy tropes. From memory I’d call out “Hotel Dusk: Room 215” as being a good story with compelling and believable characters without any of the tropes I’ve mentioned above. Similarly the LucasArts games did very well at telling a story without requiring the main characters to be super-special in any way. But these are the exceptions and hardly the norm.

Of course, there are reasons why the fiction in games is written the way it is. A Call of Duty game where you were dropped by a single bullet quite simply wouldn’t be fun. An RPG where you played a subsistence farmer, struggling to get by, wouldn’t keep any but the most masochistic player interested. However I still think it’s important to recognise that there is a richness of narrative fiction out there largely untapped because we are treading the safe road we’ve walked before. When accusations are levelled at the games industry that we only make games for kids, and that we’ll never make a game that will make people cry (a lie, I know), I look at the sorts of games that get made, and I can’t help but think that we’re not doing ourselves any favours.

That’s not to say that some games aren’t bucking the trend. “Dear Esther” sounded like a laudable attempt, although I’ve yet to play it. I’d love to see more crime fiction brought to life through games (L.A. Noire, for all its plodding repetitive game-play, was a great stab at this genre). I wish someone would tell a compelling ghost story in the form of a game. Heck, I’d even settle for romantic comedy. Just, you know, something that stretches our boundaries a bit, and not just another bullet-proof space marine or boy that finds he is actually an ultra-powerful magician.

Black Company Studios Limited, The Melting Pot, 5 Rose Street, Edinburgh, EH2 2PR
Registered in Scotland (SC283017) VAT Reg. No.: 886 4592 64
Last modified: February 06 2020.