Resurfacing

Posted in Games, Tales from the grind-stone on October 25th, 2012 by MrCranky

Oh my, it has been a while, hasn’t it?

In my defence, it’s been a crazy summer, and I have been juggling many different balls. Thankfully, all the work we’ve been doing has finally come to fruition, and is now out there in the world so we can talk about it. First off: the work I’ve been doing for the last year or so with Sumo Digital, on Nike+ Kinect Training.

This was mostly working on the localisation aspect: the game is translated into some 15 languages across 3 discs, so there was a lot of voice content to get in. I can’t take much credit for anything else, but I think the folks at Sumo did a great job on it – when I’ve had to actually stand up in front of the Kinect and do some real exercise, I’ve certainly felt the burn!

In-house, however, we’ve had another big project that we’ve put our heart and soul into. Last year, Bliss Kiss Productions approached us with a pitch to remake Daley Thompson’s Decathlon for mobile devices. Of course, we loved the original game – I think anyone who had a Spectrum or Commodore 64 will have played it at some point. Personally, I abused my old rubber-keyed Spectrum 48K terribly trying to get a decent score; thankfully I didn’t have a joystick at that point, otherwise I’m sure I’d have broken it just as many others did theirs. So the chance to bring it to mobile was something we couldn’t pass up.

While we did some solid work on it in autumn last year, other commitments meant that it wasn’t until this summer that we could tackle it in earnest. Which, combined with all our other ongoing commitments, made for a lot of work. Dan’s been in pretty much the whole summer working flat out on it, and seems pretty chuffed with his first proper published title.

It’s a remake from the ground up, obviously. Looking back at the original version it was clear that the design was still fun (we spent more time playing than taking notes when researching), but the rose-tinted glasses of nostalgia allowed us to forget just how dated the graphics looked. On the Spectrum version, Daley’s an all-white blocky sprite with only a few frames of animation! There were also a lot of design decisions that were clearly made due to technical limitations (such as the shot put taking place on a straight track, instead of in a circular pit as it does in real life). Some of those decisions we revisited, but where there was a design case for it, we erred on the side of the original.

What was pretty clear, even from the first round of focus testing, was that the original was brutally hard in its learning curve. Running events like the 100m and hurdles are straightforward enough, but three events stood out: the high jump, pole vault and discus throw are games of timing rather than frantic tapping. In the 80s it was fine to spring that sort of challenge on the player and expect them to learn it on their own, but modern players are nowhere near as understanding. With that in mind, we put in a practice mode that lets players master particular events without the added pressure of competing in the whole decathlon, and we added on-screen prompts and buttons to guide unfamiliar players through each event.

The controls themselves also needed revisiting wholesale. As a first principle we wanted to replicate the frantic button mashing / joystick waggling of the original; the user should have to break a sweat to get those high scores, especially in the 400m. At first glance the touch-screen controls seem obvious: alternate taps between the left and right sides of the screen to run. But finding a way to let the user throw and jump without a) accidentally jumping when they didn’t mean to, or b) having the on-screen feedback sit underneath the user’s fingers, was not a trivial task. Worse, once multi-touch was introduced, we had to handle input so that it was still physically hard to achieve the maximum speed. Later focus testing revealed that our use of an on-screen button for throwing / jumping wasn’t working; users were interpreting “HOLD” as a prompt, not a button, and simply holding their finger down wherever they last tapped. Based on that, we revised the controls to respond to exactly that action.
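
For the curious, here’s a minimal sketch of the kind of alternating-tap rule described above, assuming a simple touch-event callback; all the names and tuning numbers are invented for illustration, not the actual game code:

    // Hypothetical sketch: speed only builds when touch-downs alternate between
    // the left and right halves of the screen, so drumming two fingers on one
    // spot (or mashing with multi-touch) can't reach maximum speed.
    enum class Side { None, Left, Right };

    class RunInput
    {
    public:
        // Call for each new touch-down event.
        void OnTouchDown(float x, float screenWidth)
        {
            const Side side = (x < screenWidth * 0.5f) ? Side::Left : Side::Right;
            if (side != m_lastSide)          // only genuine alternation adds speed
                m_speed += kSpeedPerStride;
            m_lastSide = side;
        }

        // Call once per frame so the runner slows when the taps stop.
        void Update(float dt)
        {
            m_speed -= kDrag * dt;
            if (m_speed < 0.0f)
                m_speed = 0.0f;
        }

        float Speed() const { return m_speed; }

    private:
        static constexpr float kSpeedPerStride = 0.5f;  // invented tuning values
        static constexpr float kDrag = 1.5f;

        Side  m_lastSide = Side::None;
        float m_speed    = 0.0f;
    };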

On the visuals and audio, we wanted to aim somewhere between modern and nostalgic. For the art side, we brought in Paul Helman to work on the graphics, and we feel he was right on the mark in his style – not blocky or restricted in colours, but also not trying to be too realistic. At first we were worried about how Daley Thompson would react to the stylised look we gave him, but all the feedback was positive.

For audio, we worked with Gavin Harrison, who did a great job experimenting to find the sound we needed. Evoking the ‘old style’ in audio is somewhat harder: the audio chips of the 8-bit era had a very limited range, which just sounds silly nowadays. In the end, we went for a simple synth-sounding musical theme, and some very slightly distorted audio samples.

We finished our work at the end of September, and the game itself was released on iOS and Android on the 21st of September. The PR machine for the launch is in full swing, and we’re eagerly awaiting the public’s reception of it. When the dust has settled I’ll try to write up a post-mortem of everything we’ve done, what worked and what didn’t, but right now I’m enjoying some well-deserved time off!

Coding conventions

Posted in Coding on July 30th, 2012 by MrCranky

Another mini-rant on coding this week, originally composed as a response to someone who didn’t see why conforming to coding standards was such a big deal. In this case (roughly sorting header #include statements alphabetically) the defence was “it’s trivial to do that automatically, so why should you care whether a coder does it themselves?” That’s a pretty typical response, but the answer to it for me sums up exactly why following conventions is important, and it is nothing to do with the conventions themselves, and everything to do with how you work as a team.
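
To make the example concrete, this is roughly what the convention in question produces (the header names here are invented for illustration):

    // Project headers first, then system headers,
    // with each group sorted alphabetically.
    #include "AudioSystem.h"
    #include "CharacterSystem.h"
    #include "RenderSystem.h"

    #include <algorithm>
    #include <string>
    #include <vector>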

First off, I’d agree that this case in particular is not a major issue. None of them (indenting conventions, spacing conventions, capitalisation conventions) are, but the triviality of the convention isn’t why it gets people worked up when one coder decides to go ahead regardless. The problem is that you have a choice between:

  • Original coder does it as agreed first time

or

  • Original coder decides to ignore convention previously agreed on
  • Entire team endures negative effects of said change until either:
    • Another coder takes time out from whatever else they’re doing to fix it:
      • If they do it as part of a commit they’re already doing, it obscures the diffs for the ‘real’ changes they’re making.
      • If they do it as a separate commit, they’ve got to take the time to make sure that they’ve not accidentally broken something
    • Everyone chooses to leave it as it is, and over time the entire codebase degenerates into a collection of such issues
    • Somebody writes an automatic tool to fix the problem

Fixing it after the fact is not a good solution, because it’s far more expensive than just doing it right the first time. If there’s a policy, then everyone should stick to it. If they don’t agree with the policy, then they should take that up amongst the team, not just ignore it because they don’t agree and they expect someone (or something) else to fix it later. If it’s a stupid policy, then the team can agree to get rid of it. If it has merit for others then they should respect that even if they don’t personally agree with it: because they’re working as part of a team, not just as individuals, and that should entail a certain amount of respect to your team-mates.

Most of us will have known ‘renegade’ coders, who go off into their own zone and implement some big bit of functionality without consulting with the rest of the team. Sometimes that works well, and other times they come back, throw the code over the fence at the rest of the team and act surprised when they have problems integrating it. That is no way to work, and not only will it lead to friction amongst the team, it also generally means a bunch of wasted effort that could have been avoided with better communication up front. Not respecting coding conventions isn’t nearly as bad as that, but I feel like it’s the first step down the road towards it.

When you’re working in a team, you don’t have the luxury of implementing things in a bubble: you have to work with other people’s code, and they have to work with yours. Coming to a common agreement as to how to work with each other is the most basic part of that; otherwise you’ll find yourselves working at odds with each other. There can and should be compromises to get to that agreement, but ‘agreeing to disagree’ is generally not a viable option.

Conflicting ideas about the size of STL strings

Posted in Coding, Technical Guidance on July 18th, 2012 by MrCranky

This post is one of those “I couldn’t find it when I was Googling, so here’s a succinct description of the problem / solution so other people can avoid the same round-about research.”

Symptom:

You have one bit of code (perhaps a library or a DLL) which thinks that sizeof(std::string) is 28 bytes, and another bit of code which thinks that it is 32 bytes. In Release mode they both agree that the size is 28 bytes. In our case it was actually std::wstring, but both string objects are actually the same size and exhibit the same problem.

Diagnosis:

You have a configuration mismatch between the two projects: essentially, you’re trying to mix Debug code and Release code, which is just fundamentally not allowed. This much information is readily available on the Internet with some basic searching, but crucially most of those places don’t tell you the one piece of information you really need: exactly which setting is different? Which one of the dozens of settings that typically differ between Debug and Release is the STL code actually paying attention to?

The real answer lies in the details. It is not a Debug vs Release problem (well, it is, but only indirectly). If you’re like me, the first thing you checked was the presence (or absence) of the _DEBUG or NDEBUG pre-processor directives. After all, they’re the defines most often used to get differing behaviour between the debug and release builds. You’ll find, however, that those definitions have no bearing at all on the size of std::string.

Now is probably a good time to visit this Stack Overflow question which links to good information on the subject.

In fact, the root cause is the presence and value of the preprocessor definitions _SECURE_SCL and/or _HAS_ITERATOR_DEBUGGING. If these are defined and set to 1, then sizeof(std::string) will be 32. If they are defined and set to 0, sizeof(std::string) will be 28.

More troubling is that even if those definitions aren’t explicitly listed in the set of pre-processor definitions, I believe the compiler (the Visual Studio compiler at least) will define them for you, based on its own internal logic. _SECURE_SCL will always be 1 for both debug and release builds, but _HAS_ITERATOR_DEBUGGING will be 1 for debug builds, 0 for release builds (as it has a tangible performance impact). You can explicitly turn off _SECURE_SCL to get more performance if you want, but you should understand the drawbacks before you do so.

I will update this post if I find out more about the internal setup of those definitions, but simply knowing that they are the cause of the size difference is usually enough to get to a resolution. I would certainly recommend adding some logging to both code modules that spits out the value of these two defines so it’s clear to you what the values are on both sides.
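
Something along these lines would do; a minimal sketch, assuming the Visual Studio compiler (the function name is mine, not from any library):

    // Build this into each module and compare the output. _SECURE_SCL and
    // _HAS_ITERATOR_DEBUGGING are Visual Studio-specific macros.
    #include <cstdio>
    #include <string>

    void ReportStlConfig(const char* moduleName)
    {
        std::printf("%s: sizeof(std::string) = %u\n",
                    moduleName, static_cast<unsigned>(sizeof(std::string)));
    #if defined(_SECURE_SCL)
        std::printf("  _SECURE_SCL = %d\n", _SECURE_SCL);
    #else
        std::printf("  _SECURE_SCL not defined\n");
    #endif
    #if defined(_HAS_ITERATOR_DEBUGGING)
        std::printf("  _HAS_ITERATOR_DEBUGGING = %d\n", _HAS_ITERATOR_DEBUGGING);
    #else
        std::printf("  _HAS_ITERATOR_DEBUGGING not defined\n");
    #endif
    }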

Resolution:

For most, the immediate fix is to explicitly define iterator debugging as on or off in both projects so that they are consistent. To do that, simply add _HAS_ITERATOR_DEBUGGING=1 (or 0) to each project’s preprocessor definitions.
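
If you’d rather catch a mismatch at build time than debug it at runtime, a guard header along these lines could be included everywhere; EXPECTED_ITERATOR_DEBUGGING is a made-up project-wide macro of my own, not a Microsoft one:

    // Hypothetical guard: define EXPECTED_ITERATOR_DEBUGGING (to 0 or 1) in every
    // project's preprocessor settings, then include this header from each module.
    #if !defined(EXPECTED_ITERATOR_DEBUGGING)
        #error "Define EXPECTED_ITERATOR_DEBUGGING in the project settings"
    #elif defined(_HAS_ITERATOR_DEBUGGING) && \
          (_HAS_ITERATOR_DEBUGGING != EXPECTED_ITERATOR_DEBUGGING)
        #error "_HAS_ITERATOR_DEBUGGING differs from the agreed project-wide value"
    #endif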

You may want to avoid setting it explicitly (ideally you’d simply rely on the compiler defaults), in which case you’ll need to figure out why iterator debugging is enabled for one module but not the other. For that I’m afraid you need more information about how the compiler decides to set those defines, but presumably another one of your project settings is indirectly making the compiler decide that iterator debugging should be enabled or not, and it is that setting which is different between your two modules.

The importance of (good) teachers

Posted in Industry Rants on June 25th, 2012 by MrCranky

I usually recommend that students looking to get into the games industry as coders stick with traditional, academic courses like Software Engineering or Computer Science. Not because those courses teach the content most appropriate to games development, but because they leave the students with a well-rounded education. With that, they can learn the practical / vocational skills needed for games development (usually a higher level of programming expertise) on their own, and they keep the option of a career somewhere other than the games industry if they change their mind or find games jobs in short supply. If they specialise in a vocational course too early, they miss out on the more general education that would allow them to work anywhere other than games.

That’s not to say that I discount students from vocational games courses – far from it. But the quality of those courses varies dramatically, and so it’s even more important to assess the quality of the education they’re receiving. The first and probably biggest alarm bell is a course that employs lecturers without games industry experience. That, to me, is utter madness. They might have masters degrees or doctorates, they might be the most engaging lecturers in the world, but without industry experience they are wholly unqualified to be teaching a vocational course. That’s like teaching others how to swim when you’ve only ever had a bath. There are many other warning signs of course, but to me an institution that staffs a vocational course with teaching staff who have no experience in that vocation is only ever going to produce sub-par graduates.

So my advice to those institutions is this: hire industry experienced people. Poach them away from the industry with better working conditions and less stress, even if you can’t offer them more money. Entice them with the notion of enthusing a new generation of games developers. Find the next big studio that gets shut down (there’s no shortage of those), and see if anyone wants to take a break from the industry proper to teach. But whatever you do, don’t hire academics who’ve never shipped a game in their life.

And don’t hire people who couldn’t get into the games industry on their own, but who want to pretend they’ve made games, and so get into teaching. Hint: you’re not a professional games designer until someone has paid you real money to design a game that has shipped. That doesn’t include:

  • designing games for your friends
  • designing your own game but never actually making or releasing it
  • writing books about other people’s game designs and how they are good or bad

If you’re going to teach games design, personally I think it should be compulsory to detail which games you designed (or part designed), and how well they did. Your students should be able to go find your games and judge for themselves how good your design chops really are, before they start taking your opinions on design as ‘the way things are.’

In defence of middleware

Posted in Industry Rants on May 31st, 2012 by MrCranky

This mini-rant sprang from a discussion on The Chaos Engine about middleware, in answer to the question: “even if it’s the best engine available is it really worth being locked in to anything other than in-house, license-uninhibited tech?”

That depends on whether you’re interested in building games or shipping games. You’re trading many man-months of effort on a new / unknown engine against a non-trivial licensing cost. How many games do you have to ship on your internal engine before the difference in cost becomes positive? And what do you do with all those engine developers you’re carrying once the engine is done? Because they’re part of your burn rate now.

Making your own tech is simultaneously the risky option for the business, and the safe option for the developers. Why? Because as long as you can persuade someone to bankroll it, there’s a tonne of work to do, and it’s nice, tangible work with obvious goals and milestones. You know when you’re done. You know what you’re making. The customers are the other developers on your team, and they’re not nearly as fickle as the public. It’s a lot easier to find success in building your own engine than it is to find success making and shipping games.

That is super short-term thinking though. Because once you’ve succeeded in making the engine, you’ve still got to ship a successful game, and worse, you’ve probably got to ship several successful games before the engine development effort is paid back. Plus your engine will have a lifespan just like they all do: if you don’t profit enough from the games made on it in that lifespan, then it’s been a net loss.

It’s no wonder that individual developers don’t like middleware. It’s clunky, it rarely fits right with what you’re trying to make, and you’ve got little to no control over its development. But “it’s expensive” isn’t a great argument against it, because the alternative is expensive too. Middleware isn’t risk-free, but it’s certainly less risky than doing it yourself: it’s a known cost, and in most cases a known risk. Fundamentally, it frees your employers from having to take a gamble on the tech you build; and when the money they’d be gambling on your tech is money they could be gambling on your games instead, I don’t think that gamble looks very attractive.

I’d prefer to work for a smaller company that can be more agile, more robust, and capable of shipping more games, than for a company that’s carrying an engine development team, that has to build games based on tech that won’t be done until some future date, and that has less capital to work with because it’s invested a chunk of it in an engine that has yet to pay that money back.

What to expect from the games industry, and what it expects of you

Posted in Tales from the grind-stone on March 7th, 2012 by MrCranky

The folks from Edinburgh University Computing Society, who run the student TechMeetup, have asked me to give a brief talk on the games industry at one of their gatherings. As anyone who knows me will attest, I’m happy to waffle about the games industry at length, but I do have a few pet topics. Here are my discussion points, which I’ll expand on at the talk itself.

  • Hard but rewarding work – need talent and passion.
  • The feeling you get from seeing other people pick up the work you’ve made and get real entertainment from it is fantastic.
  • Making games is a business, but not a hugely lucrative business. If you want to get rich, look elsewhere.
  • Don’t expect a job for life, or gamble everything on one team.
  • Employers vary in quality. Good teams make good games. Business can still kill good teams.
  • Margins are much tighter; hiring people is a risk.
  • Show your talent: make a demo, work on mods. An academic CV is unlikely to be enough.
  • Passion should not equal crunch. Enjoying your work is not a licence for exploitation.

New hotness

Posted in Tales from the grind-stone on January 17th, 2012 by MrCranky

Is it bad to be compulsively checking the UPS tracking page for my new laptop? Or to be a little nervous because it’s currently in Kazakhstan, and all those Call of Duty games have made me wary of ex-Soviet republics? Is that over-protective? It’s not even here yet, and I’m clucking over it like a mother hen.

Whatever; as long as it gets here in one piece and is suitably shiny. We’re kicking off with our new client this week, and it was immediately apparent that my current 32-bit dual-core laptop (now five and a half years old) really wouldn’t cut the mustard. It was okay, just, for building for 360, because the console does all the heavy lifting. But it won’t run a PC build of anything substantial, and compilation takes an age. Not to mention the graphics flashing and sporadic unexplained hard freezes. So the new MacBook Pro kills two birds with one stone: it’s modern and chunky enough that it should build and run the client’s title, and it means Tim and I no longer have to pass the older MacBook Pro back and forth whenever there’s iOS work needing done.

To put it in some context, Tim’s machine needed a new graphics card as well to bring it up to spec. His new graphics card scored ~1600 on the benchmarks. The new Macbook Pro’s graphics score ~1300. Tim’s old graphics scored ~500, and the old MBP ~270. My current laptop (and bear in mind I got the Dell Precision M65 with the graphics ‘upgrade’) scores 71. Yes, 71. I had to go three pages down on the benchmark list before I could even find it.

Of course, even the new MBP isn’t up to the level of the monster Alienware M17X that MGS bought for me, but on the flip side, it also won’t weigh 7 kilos and sound like a jet turbine taking off. While I do still miss the glowy lights and brushed aluminium body of the M17X, the added benefit of crotch-based heat sterilisation from the MBP is surely enough to seal the deal.

Pinnie the Who and the Blustery Day

Posted in Random Stuff, Tales from the grind-stone on January 3rd, 2012 by MrCranky

Happy New Year! Tim and I have actually been in the office since Monday, eschewing the traditional extra Scottish bank holiday in favour of getting cracking on our big stack o’ work. Today though we’re here in defiance of all the sensible advice to avoid travel! Trees down, tiles smashing onto the ground, signs being torn off buildings and thrown around the roads like crisp packets in the wind. There are a few nice things about being in a basement office, and shelter from the wind is one of them.

It’s been a while since the last blog post though, so I’m overdue in posting this gem from back in December (and #HurricaneBawBag):

[Photo: the aerial on the building at the back of our office, bent and battered, trailing a polythene sheet in the awful wind – “How to get poor reception”]

That is our back-yard neighbour’s TV and ham-radio antenna, trailing a big sheet of polythene. Note the mangled and bent spokes, the result of the polythene catching the wind like a sail and whipping around for hours, very nearly pulling the poor man’s chimney stack over. Not that last month’s winds can hold a candle to today’s storm though. It seems Mother Nature is angry with us this winter.

In other news: we’ve picked up a new client for the new year, which promises to be very interesting – a variety of code support work on PC/360/PS3. Combined with our existing clients, that’ll mean our own projects will have to be put to one side for a little while.

After yet another acquaintance saw fit to share their mobile app idea with me last night, I realised that what we’re short on isn’t ideas; it’s time. What with all of our client work and flitting back and forth, we very rarely get a chance to go heads-down and concentrate all-out on our own apps. There’s nobody to blame for that but me really, but we are rather at the mercy of the paying work. Tim’s been doing a bang-up job in December of bringing our latest creation up to a releasable standard, but I fear it’s not going to reach the quality bar before we have to put it back on the shelf and concentrate on our clients’ needs.

In an ideal world, we’d be able to take our time, concentrate fully on bringing our ideas to fruition, and the money made from releasing them would pay for the next round of product-making. In practice it’s not as simple as that: client work is money in our pockets now, but app sales are money in our pockets later, maybe. Of course, that’s a vicious circle – without taking a punt on our own apps, we’ll never have the opportunity to win big and break out of the work-for-hire mould. But in the meantime we take the work that keeps a roof over our heads.

We’re coming up on the end of our 7th year in business now, which is no mean feat these days. I’ve just updated our entry in SDI’s Gaming Brochure list of Scottish developers, and it’s heartening to see all the small and large companies in there. Here’s to a bright and positive 2012, and to the opportunities it brings.

Accountants, Dragons and Helicopters (not in that order)

Posted in Games, Tales from the grind-stone on November 22nd, 2011 by MrCranky

Ooh: post 666! Spooky. :-)

I’ve the office to myself for a couple of weeks, as Tim has taken the opportunity to use up the load of holidays he’s saved up before the end of the year, and Dan is busy with both university and other projects. I’m somewhat surrounded by Amazon boxes, as my wife has been using the office as a delivery drop-off for a vast amount of Christmas presents for all and sundry; as a personal rule I don’t shop for Christmas until it turns to December, but she’s a bit more efficient and organised about it than I am. As compensation for that though, and because she’s just generally lovely, she’s also had them deliver a shiny new copy of The Elder Scrolls V: Skyrim for the 360. There was a certain amount of giggling with glee when it turned up, as I’ve been quite jealous of all the other devs who are enjoying it: I do like a good open-world adventure. Where I’m going to find the time to play it I’m not quite sure yet, but even rationed out over weekends I’m sure it will be fun. A first quick blast in the office had me running away from dragons, which is always a good start.

On a whim a few weekends back, while I was huddled up trying to fight off a nasty illness, I picked up a copy of DCS: Black Shark from Steam; I do like sim games, and the X52 in the cupboard rarely gets a chance to come out. It was tragically disappointing though. Not because the manual isn’t the manual for the game but for the actual helicopter – that’s half the fun. No, what put me off was the terrible way it was presented. In a nod to playability, they include ‘game’ toggles for the flight and avionics. The ‘game’ flight mode is much friendlier to new players, but takes away half the fun and control I enjoy. However, I learned my lesson with Lock On: Modern Air Combat: actually learning the radar and weapons controls for a real combat aircraft isn’t nearly as much fun! So I want ‘game’ avionics and ‘sim’ flight, and I set the options accordingly.

Here’s where it starts to go wrong. If you set either of those options, the game considers you to be in ‘game’ mode, and there’s an entirely distinct control configuration for game mode. It doesn’t tell you it’s in game mode, or give any indication as to which controls are current – you’re just supposed to know. It’s not even in the manual anywhere; I checked. Worse, the control configuration isn’t accessible from the in-game menu. So you start a mission, take off (because that part is easy), but find you can’t operate one of the controls (of which there are many). Can you look it up? No. To look it up, you have to exit the mission and go check the control configuration in the front end. I don’t even want to change it; I just need to see which button it’s mapped to.

So instead of actually enjoying the challenge of controlling a complex, agile helicopter, I find myself getting into the mission only to find that the weapons systems are unusable, and I get shot down because I’ve spent a good few minutes just trying to get a particular bit of them to work. And there aren’t any missions that let you concentrate on one thing at a time. You don’t get a ‘free flight’ mode; you don’t get a mission with nice simple targets that don’t fire back, right in front of you, so you can familiarise yourself with the weapons systems. It’s either ‘quick start’ (which throws you into a mission assuming that you have full control over everything), or ‘campaign’. At least the first mission in the campaign takes you through some easy flying, but there’s no practising of flight manoeuvres, just ‘fly there, then there, then home’. That’s not what you need to practise. You need to practise low-level flight, and going from full forward to stopped and hovering before popping up over the brow of a hill. You need to practise strafing and orbiting targets. None of which is encouraged in the missions provided.

Anyway, suffice it to say that the nod towards making it ‘friendly’ very much fails. It’s not that much friendlier for novices, and the ‘friendly’ parts are ignored by intermediate or pro pilots.

Lastly, and on a completely different note, we’ve got ourselves a new accountant, who comes recommended by a couple of other game devs around Scotland. This is a bit of a relief to me, since our filing deadline is the end of December. The previous accountants, whom I’ll not name (although they do deserve to be shamed), have been informed, though they can’t have expected to keep our business: they’ve been avoiding contact with me since spring, not least over their refusal to pay the fines they incurred through their incompetence.

In defence of object orientation

Posted in Coding on October 22nd, 2011 by MrCranky

So, rather randomly, I was discussing with @PetMac the merits of a particular engine being split up into multiple libraries. We’d suffered from the other extreme: one gigantic project that contained everything (and had several-minute link times as a result). I opined that the split was, by and large, a good thing, even if it was inevitable that a lot of time be spent splitting off chunks of functionality and pushing them up or down the hierarchy of libraries to avoid circular dependencies. The alternative, of course, is that libraries end up tightly coupled, and even though they are two separate units, they are effectively indivisible: not quite spaghetti code, but certainly a tightly snarled-up ball of functionality that would take many man-hours to pull apart. And as soon as libraries start to stick together like that, the rot sets in quickly; one reference turns to dozens, and even if it might have been possible to separate them again before, it isn’t feasible any longer. I think (and he can correct me if I’m misstating his opinion) that Pete agreed on that front.

Why is that relevant to object orientation? Well, because the means by which you most commonly ‘fix’ a circular dependency is to abstract one side so that it becomes unaware of the other. So your character system knows about your audio system because characters carry around audio references; but instead of your audio system being aware of characters (so that they can, say, position sounds), you rewrite your audio system in terms of ‘character-like things’. Or more cleanly, ‘things with position’. Because that’s all the audio system really needs to know about. In an object oriented system, you’d use an interface definition to say that characters can satisfy the definition of a ‘thing with position’; of course that’s not the only way to achieve the same goal, but it’s certainly a nice easy way to do it. What’s important is that the library has a nice clean interface, that expresses exactly what it needs, in a succinct way. Ideally, it is also written without explicit knowledge of any other libraries. Having a clean and clear interface is what helps you keep that lovely de-coupled code, and lets you re-use it elsewhere.
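
As a minimal sketch (invented names, not any particular engine’s code), that de-coupling might look like this:

    // Defined by the audio library: the little it needs to know about an
    // emitter's owner. The audio library never includes a character header.
    struct Vector3 { float x, y, z; };

    class IPositional
    {
    public:
        virtual ~IPositional() {}
        virtual Vector3 GetPosition() const = 0;
    };

    // Defined by the character library, which is free to depend on audio;
    // the dependency now runs one way only.
    class Character : public IPositional
    {
    public:
        Vector3 GetPosition() const override { return m_position; }
    private:
        Vector3 m_position = { 0.0f, 0.0f, 0.0f };
    };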

Personally, I’ve never had a problem with using interfaces or other object-oriented mechanisms. But recently Pete has been trying to persuade me that object orientation is the dark side, and that our code would be much better if we only thought about things in terms of data transforms. There’s been a lot of eminently sensible material written on the subject, including pieces by @noel_llopis over on his blog, and by @TonyAlbrecht in a talk for GCAP. I’ve read their arguments, and don’t really disagree with most of them. If I have an issue at all, it is that their concerns about OO (and C++ specifically) primarily relate to performance, and when I’m coding, performance is only one factor; an equally pressing factor is how easy the code is to write and maintain.

Here’s the thing though: object orientation can be really bad for performance, sure. And used badly, it can be really bad for design as well. C++ has a whole lot of cruft which means that expressing the code design you want, without locking yourself into bad assumptions, is hard. Not impossible, just hard. But there are a whole lot of code design needs I have which are very hard to satisfy without the basic features of C++ – interfaces and polymorphism straight off, and probably more. Really though, my problem lies with anyone who tells me that we should all go back to using C instead of C++, because it will avoid all of that bad stuff. Well, sure. I could go back to writing in assembly and never worry about variable aliasing either, but I’m not going to. I’ll use C-style interfaces when they help, and C++ when it helps, thank you very much. Whatever gets me the simplest, cleanest, most maintainable interface that still lets me do the work.

I have no doubt that using C-style library interfaces would avoid a lot of unnecessary object-orientation. @PetMac is trying to persuade me though that a C-style interface is just plain better, and not only that, but that the inputs and outputs should only be structures defined in the library interface. So an audio transform would be ProcessAudioEmitters, and if you want to process a bunch of positional audio emitters, one for each character, you have to marshal an array of audio emitter structures, and copy the position from your character into its audio emitter. Which doesn’t sound so terrible, if it leads to a cleaner interface. I’d probably be fine with that. At a simple level, for core systems like audio or rendering, where the inputs and outputs are clear and rarely change, I think that would probably work well. Best of all it makes the audio library completely independent – it knows nothing of the things that it’s working with, except the data the other systems choose to feed it.
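
In a hedged sketch (the struct layout and function bodies are my guesses at the style, not Pete’s actual proposal), it might look like this:

    #include <cstddef>

    // The audio library's entire input vocabulary: a plain-data struct it
    // defines itself, with no knowledge of characters or any other system.
    struct AudioEmitter
    {
        float        position[3];
        unsigned int soundId;
    };

    // The library's public surface is a transform over its own structs.
    void ProcessAudioEmitters(const AudioEmitter* emitters, std::size_t count)
    {
        (void)emitters; (void)count;  // mixing/positioning stubbed for the sketch
    }

    // Caller side: the character system marshals just the data audio needs.
    struct Character { float x, y, z; unsigned int footstepSound; };

    void UpdateCharacterAudio(const Character* chars, std::size_t count,
                              AudioEmitter* scratch)
    {
        for (std::size_t i = 0; i < count; ++i)
        {
            scratch[i].position[0] = chars[i].x;
            scratch[i].position[1] = chars[i].y;
            scratch[i].position[2] = chars[i].z;
            scratch[i].soundId     = chars[i].footstepSound;
        }
        ProcessAudioEmitters(scratch, count);
    }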

My problem comes when I consider how I would make that approach scale to all the other systems I need to build. The example I posed to Pete was an AI system. To use Pete’s preferred paradigm and think in data transforms, the AI system would be a DecideWhatToDo transform. Great. What are the inputs and outputs? Well, that depends on the AI. One type of AI might want just a list of character types and positions. Another might want to know about the environment as well. Smarter AI might want character positions, damage levels, movement histories and factional allegiances, as well as the ability to co-operate with other characters. The outputs of the AI are just as bad: they can affect everything from the desired target position to queued animations – in fact, pretty much anything a character can do might be an output of the AI.

I would describe Pete’s system as a ‘push’ system. Everything the system needs has to be fed to it explicitly, in terms it can understand. The problem with push systems though is that when the number of inputs goes up, the amount of code you have to maintain just for the marshalling of the push grows with it. You find yourself implementing the same code several times: you add the notion of damage to the character, then you have to add the ability to marshal the damage information into a structure the AI system would understand, then you have to add the notion of damage to every single AI interface that wants to know about damage. And in a system with dozens of different sorts of AI, that might be a lot of interfaces.
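
Sketched out (all names invented), the churn looks something like this:

    // Each AI type defines its own plain-data input, so adding the notion of
    // damage means touching every struct and every marshalling site.
    struct Character { float x, y, z, damage; };

    struct SniperAIInput { float position[3]; float damage; };  // damage: new field
    struct GuardAIInput  { float position[3]; float damage; };  // ...and again here

    void MarshalSniperInput(const Character& c, SniperAIInput& out)
    {
        out.position[0] = c.x; out.position[1] = c.y; out.position[2] = c.z;
        out.damage = c.damage;  // the same copy, repeated...
    }

    void MarshalGuardInput(const Character& c, GuardAIInput& out)
    {
        out.position[0] = c.x; out.position[1] = c.y; out.position[2] = c.z;
        out.damage = c.damage;  // ...once per AI type that cares about damage
    }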

To me that smells wrong. It means that you’re baking implementation details (like AI ‘X’ cares about damage) into the interface. Conversely, the ‘pull’ system stays relatively clean. You simply pass the list of characters to the AI, or the environment, and allow the AI system to ask the character interface for only the data it needs. Characters might provide a vast array of query-able state, and the AI can pick and choose what it asks for. Of course this comes with a down side. The AI system now has to have knowledge of the character system (or at least, provide abstractions which the character system can fulfil). It’s no longer truly independent. The performance impact of lugging over the entire character object, when perhaps you only want to access a few small parts of it, is very real. But in terms of the ability to write clean code, without a massive amount of interface book-keeping, it’s a big win. That said, I’m open to persuasion. If someone can describe to me how they would write a succinct AI library interface in a C-style, for a few dozen varied and sophisticated character AI, without giving the AI library knowledge of the character interfaces, I’d be happy to change my point of view.
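
For contrast, here’s a hedged sketch of the ‘pull’ style (again, the names are invented):

    struct Vector3 { float x, y, z; };

    // Abstraction defined by the AI library and fulfilled by the character
    // system. New queryable state is added here once, not to every AI's input.
    class ICharacterQuery
    {
    public:
        virtual ~ICharacterQuery() {}
        virtual Vector3 Position() const = 0;
        virtual float   Health()   const = 0;
        virtual int     Faction()  const = 0;
    };

    // One AI of many: it pulls only the state it cares about.
    class SniperAI
    {
    public:
        void DecideWhatToDo(const ICharacterQuery& target)
        {
            // Another AI might also pull Faction() or movement history,
            // with no new marshalling code on either side.
            if (target.Health() < 0.25f)
                QueueShotAt(target.Position());
        }

    private:
        void QueueShotAt(const Vector3&) { /* stubbed for the sketch */ }
    };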

There will be those who say that if your structures are that complex, you’ve already done something wrong. That’s very idealistic thinking. The simple fact is that we are often writing fantastically complex simulations. Sometimes the ‘pure’ systems that you’d need to build to support the level of complexity the design calls for are just far more effort than the benefits they would give. When it comes down to it, we need to write code effectively more than anything else. We need to be able to code quickly, cleanly, and flexibly; especially when the game design is changing quickly as well. It’s of no benefit at all to spend months building a fantastically clean engine to support one game design, only to find that in the time it took you to build it, design changes have rendered it obsolete.

To sum up, because I’ve gone on for a long time: the one thing I like less than being accused of ‘drinking the OO kool-aid’ is the notion that there’s only one right way to do things. As a coder, you should be constantly and critically evaluating all your systems and interfaces. Sometimes a data-oriented approach is better: consider the purity of the interface and the vastly improved ability to parallelise and minimise your memory accesses. Other times the structures and inter-dependencies are simply too complex, and object orientation is the most effective tool for keeping your code clean and versatile. I won’t claim to always get it right (as Pete and Tim have both at various times pointed out, I tend to over-structure my code), but I hope I always aim for clean code as best I can.

