Author Archive

Accountants, Dragons and Helicopters (not in that order)

Posted in Games, Tales from the grind-stone on November 22nd, 2011 by MrCranky

Ooh: post 666! Spooky. 🙂

I’ve the office to myself for a couple of weeks, as Tim has taken the opportunity to use up the load of holidays he’s saved up before the end of the year, and Dan is busy with both university and other projects. I’m somewhat surrounded by Amazon boxes, as my wife has been using the office as a delivery drop-off for a vast amount of Christmas presents for all and sundry; as a personal rule I don’t shop for Christmas until it turns to December, but she’s a bit more efficient and organised about it than I am. As compensation for that though, and because she’s just generally lovely, she’s also had them deliver a shiny new copy of The Elder Scrolls V: Skyrim for the 360. There was a certain amount of giggling with glee when it turned up, as I’ve been quite jealous of all the other devs who are enjoying it: I do like a good open-world adventure. Where I’m going to find the time to play it I’m not quite sure yet, but even rationed out over weekends I’m sure it will be fun. A first quick blast in the office had me running away from dragons, which is always a good start.

On a whim a few weekends back while I was huddled up trying to beat off a nasty illness, I picked up a copy of DCS: Black Shark from Steam; I do like sim games, and the X52 in the cupboard doesn’t get a chance to come out. It was tragically disappointing though. Not because the manual isn’t the manual for the game but the manual for the actual helicopter – that’s half the fun. No, what put me off was the terrible way it was presented. In a nod to playability, they include ‘game’ toggles for the flight and avionics. The ‘game’ flight mode is much friendlier to new players, but takes away half the fun and control I enjoy. However I learned my lesson with Lock On: Modern Air Combat: actually learning the radar and weapons controls for a real combat aircraft isn’t nearly as much fun! So I want ‘game’ avionics and ‘sim’ flight, and set the options accordingly.

Here’s where it starts to go wrong. If you set either of those options, the game considers you in ‘game’ mode. And there’s an entirely distinct control configuration for game mode. It doesn’t tell you it’s in game mode, or give any indication as to which controls are ‘current’. You are just supposed to know. It’s not even in the manual anywhere, I checked. Worse, the control configuration isn’t accessible from the in-game menu. So you start a mission, take off (because that part is easy), but find you can’t operate one of the controls (of which there are many). Can you look it up? No. Because to look it up, you have to exit the mission, and go check the control configuration in the front end. I don’t even want to change it, I just need to see which button it’s mapped to.

So instead of actually enjoying the challenge of controlling a complex, agile helicopter, I find myself getting into the mission, only to find that the weapons systems are unusable, and I get shot down because I am spending a good few minutes just trying to get a particular bit of it to work. And there aren’t any missions in there that let you just concentrate on one thing at a time. You don’t get a ‘free flight’ mode, you don’t get a mission with nice simple targets that don’t fire back, right in front of you, so you can familiarise yourself with the weapons systems. It’s either ‘quick start’ (which throws you into a mission assuming that you have full control over everything), or ‘campaign’. At least the first mission in the campaign takes you through some easy flying, but there’s no practising of flight manoeuvres, just ‘fly there, then there, then home’. That’s not what you need to practise. You need to practise low-level flight, and going from full forward to stopped and hovering before popping up over the brow of a hill. You need to practise strafing and orbiting targets. None of which is encouraged in the missions provided.

Anyway, suffice to say that the nod towards making it ‘friendly’ very much fails. It’s not that much friendlier for novices, and those parts are ignored by intermediate or pro pilots.

Lastly, and on a completely different note, we’ve got ourselves a new accountant, who comes recommended by a couple of other game-devs around Scotland. This is a bit of a relief to me, since our filing deadline is the end of December. The previous accountants, who I’ll not name (although they do deserve to be shamed), have been informed, although they can’t have expected to keep our business, not least because they’ve been avoiding contact with me since spring, and because of their refusal to pay the fines they incurred through their incompetence.

In defence of object orientation

Posted in Coding on October 22nd, 2011 by MrCranky

So, rather randomly, I was discussing with @PetMac about the merits of a particular engine being split up into multiple libraries. We’d suffered from the other extreme: one gigantic project that contained everything (and had several-minute link times as a result). I opined that it was, by and large, a good thing, even if it was inevitable that a lot of time be spent splitting off chunks of functionality and pushing them up or down the hierarchy of libraries to avoid circular dependencies. The alternative being of course that libraries end up tightly coupled, and even though they are two separate units, they are effectively indivisible. That is, not quite spaghetti code, but certainly a tightly snarled up ball of functionality that it would take many man-hours to pull apart. And as soon as libraries start to stick together like that, the rot sets in quickly; one reference turns to dozens, and even if it might have been possible to separate them again before, it isn’t feasible any longer. I think (and he can correct me if I’m misstating his opinion) that Pete agreed on that front.

Why is that relevant to object orientation? Well, because the means by which you most commonly ‘fix’ a circular dependency is to abstract one side so that it becomes unaware of the other. So your character system knows about your audio system because characters carry around audio references; but instead of your audio system being aware of characters (so that they can, say, position sounds), you rewrite your audio system in terms of ‘character-like things’. Or more cleanly, ‘things with position’. Because that’s all the audio system really needs to know about. In an object oriented system, you’d use an interface definition to say that characters can satisfy the definition of a ‘thing with position’; of course that’s not the only way to achieve the same goal, but it’s certainly a nice easy way to do it. What’s important is that the library has a nice clean interface, that expresses exactly what it needs, in a succinct way. Ideally, it is also written without explicit knowledge of any other libraries. Having a clean and clear interface is what helps you keep that lovely de-coupled code, and lets you re-use it elsewhere.

Personally, I’ve never had a problem with using interfaces or other object-oriented mechanisms. But recently Pete has been trying to persuade me that object orientation is the dark side, and that our code would be much better if we only thought about things in terms of data transforms. There’s been a lot of eminently sensible stuff written on it, including stuff by @noel_llopis over on his blog, and by @TonyAlbrecht in a talk for GCAP. I’ve read their pieces, and don’t really disagree with most of it. If I have an issue at all, it is that their concerns about OO (and C++ specifically) primarily relate to performance, and when I’m coding, performance is only one factor; an equally pressing factor is how easy the code is to write and maintain.

Here’s the thing though; object-orientation can be really bad for performance, sure. And used badly, it can be really bad for design as well. C++ has a whole lot of cruft that means that expressing the code design you want, without locking yourself into bad assumptions, is hard. Not impossible, just hard. But there are a whole lot of code design needs I have which are very hard to satisfy without the basic features of C++. Interfaces and polymorphism straight off, and probably more. Really though, my problem lies with anyone who tells me that we should all go back to using C instead of C++, because it will avoid all of that bad stuff. Well, sure. I could go back to writing in assembly and never worry about variable aliasing as well, but I’m not going to. I’ll use C-style interfaces when they help, and C++ when they help, thank you very much. Whatever gets me the simplest, cleanest, most maintainable interface, that still lets me do the work.

I have no doubt that using C-style library interfaces would avoid a lot of unnecessary object-orientation. @PetMac is trying to persuade me though that a C-style interface is just plain better, and not only that, but that the inputs and outputs should only be structures defined in the library interface. So an audio transform would be ProcessAudioEmitters, and if you want to process a bunch of positional audio emitters, one for each character, you have to marshal an array of audio emitter structures, and copy the position from your character into its audio emitter. Which doesn’t sound so terrible, if it leads to a cleaner interface. I’d probably be fine with that. At a simple level, for core systems like audio or rendering, where the inputs and outputs are clear and rarely change, I think that would probably work well. Best of all it makes the audio library completely independent – it knows nothing of the things that it’s working with, except the data the other systems choose to feed it.

My problem comes when I consider how I would make that approach scale to all the other systems I need to build. The example I posed to Pete was one of an AI system. To use Pete’s preferred paradigm, and think of data transforms, the AI system would be a DecideWhatToDo transform. Great. What are the inputs and outputs? Well, that depends on the AI. One type of AI might want just a list of character types and positions. Another might want to know about the environment as well. Smarter AI might want character positions, damage levels, movement histories, factional allegiances; as well as the ability to co-operate with other characters. The outputs of the AI are just as bad – they can affect everything from the desired target position, to queuing animations, in fact pretty much anything a character can do might be an output of the AI.

I would describe Pete’s system as a ‘push’ system. Everything the system needs has to be fed to it explicitly, in terms it can understand. The problem with push systems though is that when the number of inputs goes up, the amount of code you have to maintain just for the marshalling of the push grows with it. You find yourself implementing the same code several times: you add the notion of damage to the character, then you have to add the ability to marshal the damage information into a structure the AI system would understand, then you have to add the notion of damage to every single AI interface that wants to know about damage. And in a system with dozens of different sorts of AI, that might be a lot of interfaces.

To me that smells wrong. It means that you’re baking implementation details (like AI ‘X’ cares about damage) into the interface. Conversely, the ‘pull’ system stays relatively clean. You simply pass the list of characters to the AI, or the environment, and allow the AI system to ask the character interface for only the data it needs. Characters might provide a vast array of query-able state, and the AI can pick and choose what it asks for. Of course this comes with a down side. The AI system now has to have knowledge of the character system (or at least, provide abstractions which the character system can fulfil). It’s no longer truly independent. The performance impact of lugging over the entire character object, when perhaps you only want to access a few small parts of it, is very real. But in terms of the ability to write clean code, without a massive amount of interface book-keeping, it’s a big win. That said, I’m open to persuasion. If someone can describe to me how they would write a succinct AI library interface in a C-style, for a few dozen varied and sophisticated character AI, without giving the AI library knowledge of the character interfaces, I’d be happy to change my point of view.

There will be those who say that if your structures are that complex, you’ve already done something wrong. That’s very idealistic thinking. The simple fact is that we are often writing fantastically complex simulations. Sometimes the ‘pure’ systems that you’d need to build to support the level of complexity the design calls for are just far more effort than the benefits they would give. When it comes down to it, we need to write code effectively more than anything else. We need to be able to code quickly, cleanly, and flexibly; especially when the game design is changing quickly as well. It’s of no benefit at all to spend months building a fantastically clean engine to support one game design, only to find that in the time it took you to build it, design changes have rendered it obsolete.

To sum up, because I’ve gone on for a long time: the one thing I like less than being accused of ‘drinking the OO kool-aid’, is the notion that there’s only one right way to do things. As a coder, you should be constantly and critically evaluating all your systems and interfaces. Sometimes a data oriented approach is better: consider the purity of the interface and the vastly improved ability to parallelise and minimise your memory accesses. Other times the structures and inter-dependencies are simply too complex, and object orientation is the most effective tool at keeping your code clean and versatile. I won’t claim to always get it right (as Pete and Tim have both at various times pointed out, I tend to over-structure my code), but I’d hope I always aim for clean code as best I can.

Busy August

Posted in iPhone Apps, Links from the In-tar-web, Tales from the grind-stone on September 4th, 2011 by MrCranky

Lots of little things this month, keeping us all busy. I was ill for much of it, a fortnight of a racking cough that was driving everyone in the office crazy I’m sure, which put the kibosh on any plans I had to enjoy the Edinburgh Festival. It also made it rather hard to concentrate quite as much as I would have liked on our new project, a re-make of a famous Spectrum / C64 classic for smartphone and tablets. Instead, that’s largely been left in the capable hands of Tim and Dan, with me only providing interference in the form of design notes. We can’t talk too much more about it just yet, but it’ll be announced soon enough, probably when we get some good looking preliminary builds made up that will give people something to talk about while we get the game ready for release.

The iPhone app we made for PASG has finally launched – Hold’em Manager for iOS. That was our focus for much of late last year and the first half of this year, so it’s nice to see it out in the wild. It’s a partner application for users of the Hold’em Manager suite of apps, which are a great tool for any serious on-line poker player. Mind you, I do have to persuade our accountant that the money paid to on-line poker sites during testing is in fact a valid business expense. Not sure exactly what category that comes under in our year-end accounts.

I took some time out in late July to tackle something I’d been meaning to do for a while: get us some official Company t-shirts. Here’s me modelling the black version:
[Image: 20110904-031914.jpg]

Very ‘man from C&A’, I know. I’d never make a model.

Our month-long experiment with allowing people to comment on the blog without registering first is now done with. As I’d suspected, it didn’t really help much with the spam: instead of a few dozen spambots registering on the site and needing deleted, we got a few dozen spambots registering on the site and needing deleted plus a few hundred spam comments which Akismet blocked before they ever saw the light of day. We don’t see a lot of discussion here on the blog, so the increased maintenance effort on my part wasn’t really worth it. Back to registration first for the foreseeable future.

Too much time this month was wasted trying to rebuild Dan’s PC, which had taken to freezing on boot and blue-screening. After swapping out every single component (graphics, PSU, motherboard/CPU, HDD, heck even the keyboard and power cable), we eventually figured out it was the DVD drive. It operated perfectly as a DVD drive, but would cause failures just by being plugged in. As a result we’ve got pretty much all of the bits of a new machine, so now Dan has his own, entirely rebuilt machine with Windows 7 (instead of a hand-me-down server machine running XP). I also get my XP server back, which I’d been missing as it’s nice to have a box I can run CruiseControl and background tasks on. It’s doing a sterling job with our tools work for Sumo, which is occupying most of my time right now.

Team Bondi went into administration. Not entirely unexpected, but still not nice when the livelihood of people is on the line. Hopefully it will serve as a warning to other studios as to what happens when you mismanage a project so badly with regards to working hours. However more likely it will all be pinned on Brendan McNamara, and the crunch part will be played down. The people I really feel sorry for are those at KMM (the only other sizeable employer of digital art staff in the area), who escaped Team Bondi and its management, only to find that their nemeses have now followed them to their new job.

Anyway, that’s pretty much it for now, back to tidying up all the boxes of PC components strewn around the office.

Crunch is avoidable

Posted in Industry Rants, Links from the In-tar-web on July 28th, 2011 by MrCranky

I’m putting off my blogging responsibility this week onto someone else: a great opinion piece from Charles Randall of Ubisoft, rebutting entirely the piece by that moron Michael Pachter which I won’t even dignify by linking to it. Here’s Charles’ piece. Stand-out quote for me:

Crunch is avoidable. But it requires a level of maturity and acceptance that the game industry sorely lacks. People argue that there’s always a period of crunch necessary at the end of a project. But that’s not true, either. If you are disciplined enough to accept deadlines and understand that there’s a point where you have to stop adding features, schedules can be planned with some lead time for debugging.

Anyone who tells you crunch is unavoidable is a fool. It might be that the games being made just now are unprofitable without crunch, but that’s not a reason to crunch; that’s a reason to change the way we make games.

On a similar note, you will find a couple of opinion pieces from me over on I <3 Crunch, a new blog set up specifically to raise awareness about articles on crunch, studios who are crunching their staff (and those which aren’t). I hope that by talking about this more we can put to rest this ridiculous notion that crunch is somehow acceptable or something we just have to live with. It’s the industry’s dirty secret, and the more we bring it out into the open, the better we will all be.


Opinion: How the IGDA could help tackle crunch

Posted in Industry Rants on July 18th, 2011 by MrCranky

Erin Hoffman’s comment on my previous IGDA post got me to thinking. If the IGDA are looking for a tangible way they can help things, what can they really do? So here’s my suggestion:

My issue with the way the IGDA work with regards to these reports of crunch is pretty much the same every time. They don’t seem to do anything unless someone makes a formal complaint to them, and even then they seem to put the onus on the individuals at the studio to be acting on it themselves. To me, it should be the other way around. There should be a ‘report a company’ button on their website which is 100% anonymous, and really simple to find/use. Once pressed, the IGDA (or whoever) would come along to the company and ask the company if it’s true. Either:

  1. the company says it is, and they’re not ashamed
  2. the company says it is, and they’re sorry, and here’s how they’re going to address it
  3. the company says it isn’t.

In 3) the IGDA can then ask if it can speak to employees at random for their opinion. The company can only really refuse if they’ve got something to hide. The company won’t be allowed to know who said what, and they’ll have to ask enough people so that the employees can’t be threatened or accused of ‘ratting the company out’. The employees will either:

  1. confirm that there’s no crunch, and the original report was bogus
  2. confirm that there is crunch (and ideally give details), showing that the company is both deliberately crunching, and deliberately lying about it.

In most of those outcomes, they can publicly state the results of their investigations. It doesn’t have to be a big fanfare or singling particular developers out (at least to begin with), just quietly announcing what they discovered when they asked the question.

  • If a company is never reported on, you can take that as a good sign.
  • If a company isn’t crunching its staff, it can be held up as a good example.
  • If a company is crunching its staff and isn’t ashamed, the IGDA can publicise that fact (and discourage potential applicants).
  • If a company is crunching its staff but wishes it weren’t, that can be publicised, and the situation monitored; if they have a plan to fix it, the IGDA could go back in a year or two and see if they’ve made progress, and if so hold them up as an example to others as to how to get out of crunch mode.
  • If a company is crunching its staff but pretending they aren’t, that can be publicised as well, including the fact that their staff say something different, all of which will discourage potential applicants.

Even those at the IGDA who are convinced that the “40 hour week” is some crazed ideal that not everyone agrees with can’t really argue against that, because you can do it neutrally, without stating categorically that crunch is bad. Even if you think crunch can be a good thing, it can be highlighted in the findings. What matters is that the situation be made clear to one and all.

It only relies on the simple fact that any organisation can ask a question of another publicly. The respondent is then put on the spot: they have to either ignore the question, lie, admit it, or deny it. Failure to answer the question is damning enough in itself. An organisation which doesn’t crunch has nothing to fear, and an organisation which crunches and doesn’t care (like Team Bondi) won’t mind the question being asked. The only organisations which would be disadvantaged are the ones who are crunching and trying to hide it. In which case simply asking the question is enough to bring it out into the light.

Our real problem is that the press and the IGDA and others aren’t talking about it enough. Not in general terms (‘crunch is bad’), but in specifics (‘the kind of crunch being talked about at Bondi is bad’). If no-one asks the awkward questions until after it’s been so f*(&ed up for years, then it’s only going to continue.

Pest Control

Posted in Tales from the grind-stone on July 10th, 2011 by MrCranky

Ah, summer is in Edinburgh at last. Thunder and lightning storms, and flooding so bad the water breaks out of the sewers and comes up through the road. I love this city. I don’t think the squirrels in the garden were quite as happy though.

Baby mouse in a soup tin

Curse you and my steel (tin) prison...

At least the squirrels have the decency to stay on the outside of the office though. This little gent (or lady, I didn’t get close enough to check), was the second littlest of a family of mice that have been tormenting us for weeks now. Leaving little presents on our desks. Something in the last couple of weeks must have driven them out looking for nesting material though, because they were all inexorably drawn to the box of packing peanuts that lay out in our office. Bold as brass, we found them rustling around in the box, and popping out the top with a polystyrene peanut in their mouth, trying to get away. Thankfully, their attraction to the box made it much easier for us to arrange things in such a way that we could more easily trap them when they did show themselves. At the current count, I’ve caught four of them, and Tim caught one [Hah – I win!]. We’re presuming the one Tim caught was the daddy, as he was much larger.

All of them were released into the wild (or as wild as it gets 100 yards in either direction along Belford Road), as we’re both softies at heart and couldn’t quite bring ourselves to kill them. Tim’s catch was released on the Dean Bridge itself, much to the amusement of passers by – hopefully it won’t have decided to end it all and take the leap off the edge. They probably have a homing instinct of some sort, but we figure as long as they find a similarly attractive home somewhere along the way back we’ll be rid of them for now.

Migrating drives

Posted in Technical Guidance on June 17th, 2011 by MrCranky

So one of the most annoying things about the internet for computer fixing is that a) a lot of the people asking questions aren’t technical, so the problem reports are spotty at best, and b) a lot of the people providing answers think they know more than they do, so the answers often either conflict, or are just plain misleading. Worse, they’re usually just a list of commands, without any context as to why you’re doing these things, so it’s hard to know if they’re even appropriate. Often-times what might appear to be the same situation is in fact caused by a completely different underlying problem, and following instructions blindly will just make things worse.

So here is a guide, intended for those readers who want to try to understand exactly what is going on with their computer, and why it’s gone wrong. You’ll have to be prepared to stomach a bit of technical jargon, but I’ll try to be clear. I can’t claim full knowledge on this, but I’ve been working with PCs for over 15 years, and I’m pretty confident I understand what is going on.

My problem arose when I was shifting my existing stuff to a new hard drive, as the only one was reaching the end of its life. I’d like to write up my situation and how I fixed it, in the hope it will be more useful for others than the internet search results I came across while figuring out what I needed to do.

The Situation

Some time ago, I upgraded from Windows XP to Windows 7, and at the same time bought a small Intel SSD to put it on. I’d heard bad things about the upgrade procedure, and felt it was time for a clean install anyway, so I installed W7 on the SSD from clean, no upgrade. The fact that it was an SSD isn’t relevant here, this would happen with regular hard disks too. But what that meant was that I had two operating systems available on the computer. To its credit, the W7 installer was fine with this, and once installed, I had the option of booting either operating system. I got my W7 installation set up the way I liked it, and eventually deleted the XP install. Again, all was well.

I have several disks on my machine, but only two are relevant here: 1) the SSD with a single partition on it (C:), and 2) an HD with two partitions (D and F). Crucially, the D partition was where the XP installation used to reside, and the C partition is where the W7 installation lives. I wanted to migrate the D and F partitions to a larger new disk, and simply remove the old drive. This I did, with the help of Norton Ghost and its drive copying functionality. So at this point I had partitions C (SSD), D & F (HD1) and K & M (HD2). I would then reassign drive letters such that the new drive would have partitions called D & F, and the old drive would have no letters at all (and could be quietly removed from the system).

The Problem

As soon as I removed the old HD (HD1) with the D and F partitions on it, the machine would no longer boot, prompting me to insert a system disk. Explicitly choosing the SSD from the machine’s boot menu or reordering the boot order made no difference. Re-connecting the old disk, everything was fine again.

Diagnosis

To boot from a hard disk, a computer needs an ‘active’ partition (active or not being a property set in the partition, usually when it’s created). Normally there is only one active partition on a machine, but if there is more than one, the order in which the computer looks at the disk becomes important (hence the setting in the BIOS to change the boot order). On an active partition, the computer expects to find a Master Boot Record (MBR), which will tell it where it should look for a program which can start an operating system. In a typical, simple setup, the MBR lives on partition 0, the C drive, along with the operating system. But there’s no reason it has to. You can have an MBR on one partition pointing to another partition altogether. And that is what happened here.

Originally, the MBR lived on the partition now called D (it was C back then), with the XP operating system. When it came time to install Windows 7, the W7 installer put itself on C, but it didn’t make a new MBR (because there was already one available). Instead, it simply modified the existing MBR so that it could boot either W7 or XP. Whenever the machine booted, it would look at the SSD, find no active partition, and move on to HD1, where it would find an active partition and MBR, which then pointed it back towards partition C and the W7 install. Everything happy.

When I removed HD1, I was left with the SSD and HD2, neither of which had an active partition or an MBR. So the system did not know where it could boot from, and complained.

Solution

I needed to make the W7 partition active and bootable again, so that the system would operate even if the old disk was disconnected.

To do that, I dug out my W7 installation CD, and put that in the drive (making sure that the BIOS would boot from the CD before trying the HDDs). After starting and selecting a language, you are presented with the installer, but under that is a repair mode. Selecting that, I could tinker with the existing setup. It tried to find a viable OS to repair, but said there were none available (even though I knew the W7 install was still there). I’ve deduced this is because an OS not installed on an active partition doesn’t count. However it still lets you click Next, and gives you various options to work with. Startup Repair (the user-friendly option) didn’t work, basically because the repair system didn’t realise the W7 install was there, so there was nothing for it to repair. Again, advice on the internet seems to be just ‘run Startup Repair a few times and it will fix it.’ That’s bad advice; you’re much better off trying to understand what’s currently wrong, because that will guide you as to how to fix it.

From the command prompt, you can run various tools to interrogate the current setup. With other MBR-related problems, the advice is usually to just run ‘bootrec /fixmbr’ and ‘bootrec /fixboot’. For me, /fixmbr did nothing (presumably because there was no MBR to fix), and /fixboot gave the error ‘Element not found’. I think the latter was because bootrec relies on knowing which partition to put the new MBR on; because the W7 install hadn’t been detected, bootrec had a choice of several partitions, and didn’t know which one to use. It may be that it would work just fine if the W7 install had been detected (i.e. if the partition was marked active).

However, from the command prompt you get access to the diskpart and bootsect tools, which are more helpful, even if they do require more technical savvy. I had two immediate problems: 1) the C partition wasn’t active, and 2) there was no MBR for the C partition even once it was made active. Both problems needed to be fixed before I could progress.

Bear in mind, when running from the installation/repair CD, the drive letters your drives are assigned may not correspond to their normal assignments. So I’d advise the following steps:

  • At the repair command prompt, run ‘diskpart’
  • Type ‘list volume’ to get a list of volumes. One of these will be the CD/DVD drive, note which one (for me it was H); another will be the partition you want to boot from (for me it was D), note that one as well.
  • Type ‘list disk’ to get a list of disks. One of them will be the disk you want to boot from (you’ll have to recognise it based on size / brand).
  • Type ‘select disk X’ (replace X with the correct disk number).
  • Type ‘list partition’ to get a list of partitions. Again, identify the one you want to boot from.
  • Type ‘select partition X’ (replace X with the correct partition number).
  • Type ‘active’ to make the right partition active.
  • Type ‘exit’.
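If you’d rather not type these interactively (or need to repeat them), diskpart can also run the same steps from a script file via its /s switch. A sketch, using example disk/partition numbers – substitute the ones you identified with ‘list disk’ and ‘list partition’:

```text
REM makeactive.txt – mark the boot partition active
REM (disk 1 / partition 1 are placeholders; use your own numbers)
select disk 1
select partition 1
active
exit
```

Run it from the repair command prompt with ‘diskpart /s makeactive.txt’.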

Now your boot partition should be active, but it doesn’t yet have an MBR on it. To get that, you need the bootsect tool. That tool is on the installation DVD, but in a subfolder.

  • Type ‘H:’ (or whatever your DVD drive was called).
  • Type ‘cd boot’ to move to the subfolder containing the bootsect tool.
  • Type ‘bootsect /nt60 D: /mbr’. This will write a new boot sector / MBR to the partition called D. The /nt60 is for Vista or later operating systems.
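Put together, the whole step is a short session like this (H: being the DVD drive and D: the partition just made active – substitute your own letters):

```text
X:\Sources> H:
H:\> cd boot
H:\boot> bootsect /nt60 D: /mbr
```

The /mbr switch is the important part here: without it, bootsect only rewrites the partition’s boot sector, whereas with it, it also writes a fresh MBR to the disk containing D: – which is exactly what was missing.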

This should result in the computer now finally being able to boot from the local disk rather than the CD.

New problem

If you try to boot from the disk you just made active/bootable, the message ‘BOOTMGR is missing’ is displayed.

New diagnosis

Now we have progressed a stage. Instead of the BIOS telling us that it didn’t even know which drive to boot from, it is now telling us that the drive we told it to boot from isn’t as bootable as we claimed. Booting a drive is really just running a particular program – the information in the MBR isn’t just ‘which partition do I boot from’, it’s also ‘what program do I run from that drive’. For Windows, that program is BOOTMGR, which it expects to find in the root of the bootable partition.

So when I installed W7, not only did it not make a new active partition or MBR, it also didn’t put the bootable software in the new partition. It just modified the configuration for the old (XP) BOOTMGR that lived on D, and told it about the new W7 installation on C.
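You can see this split arrangement with the bcdedit tool from a command prompt. The output below is a trimmed, hypothetical sketch of roughly what my setup would have shown (identifiers and drive letters will vary):

```text
C:\> bcdedit /enum

Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=D:    <- the boot files still live on the old XP disk

Windows Boot Loader
-------------------
identifier              {current}
device                  partition=C:    <- but the W7 install itself is on C:
```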

New solution

We need to get a copy of the boot software onto the bootable partition. Thankfully, this is the job of the operating system, and if we boot into the repair disk one last time, we can get it to help us.

Boot from the installation CD, and go to the repair menu. Now that we’ve made the W7 partition bootable and active, it should be correctly found by the repair option, and show up in the list of operating systems. For me it was marked as ‘recovered’. There was also another ‘recovery partition’ recovered as well (I believe this is used for other sorts of system recovery, although it was useless in this situation), which I ignored.

On selecting Next, we get the same list of repair options as before. This time, we can select ‘Startup Repair’ and let it do its thing. If you click on the option to view more details about what Startup Repair has done, it should list all the things it checked. For me, the file system and various other checks were fine (reported as error code 0x0), but it correctly detected that the boot software was damaged/missing and needed to be replaced. Allow it to proceed and restart, and hey presto: after the reboot, the W7 partition is correctly booted from, and normal operation is resumed.

Parking in Edinburgh

Posted in iPhone Apps on June 8th, 2011 by MrCranky

As of this morning, our second entirely internal app, Edinburgh – Parking, is up for sale on the iOS App Store! It’s a pretty niche app this time, combining the geo-location abilities of the iPhone with the rather complicated parking zones in Edinburgh to provide a useful and easy-to-use reference for anyone wanting to know where they can park, and how much it’s going to cost. So if you’re local to Edinburgh, please do check it out!

While the market for such an app is obviously limited to those people with iPhones either in Edinburgh, or planning to visit and drive, it’s a simple enough design that we’re thinking of making equivalents for other cities with similar parking systems. But we shall see how people like this one before we tackle that.

Next for us? Well, we’ve got a very promising game design in a pretty functional state right now, so hopefully we’ll be able to polish that up and release our first game title! I’ll perhaps share some sneak preview screenshots next time.

iPad @ home

Posted in Tales from the grind-stone on May 27th, 2011 by MrCranky

I must confess, the iPad we bought for device testing has migrated home to the flat, and now only makes its way back to the office for specific needs. Not for purely selfish reasons, I hasten to add, although it is partly that. Rather it’s because when we first got it, I was unsure as to exactly how it would fit into the average user’s life. The iPhone was easy: within an hour or two of using it I could see its niche – a pocket-sized, versatile device with good connectivity and an intuitive interface. The iPad, not so much. Too large to carry around without making a conscious effort; lacking the keyboard for serious work; and unable to run most of the existing software users are accustomed to using on a laptop.

The real trouble is that we here at Black Company make terrible cold testers. We’re technical, so we tend to focus on the implementation details rather than the broader feel of the interface. We’re advanced users, used to knowing everything about the software we use; being forced to learn a whole new interface makes us grumpy, but not nearly as grumpy as not having all of our usual tools to hand. So, as I usually do with such things, I handed it straight to my wife without saying a word, and simply watched how she used it. The question was, really, would it find a use naturally, or would we be using it for the sake of it? And what would that use be?

Put simply, it did, and the use is: content viewing. I had thought that my computer time was read-write, but in reality, outside of work, the majority of my time is spent consuming content rather than creating it. Facebook, Twitter, blogs and RSS feeds obviously, but more and more with on-demand video services like iPlayer. The iPad keyboard is, frankly, not pleasant to use (I’m writing this blog post on it as a proper test), but for the majority of content viewing we do, that’s not an issue. In fact, in the few months we’ve been using it, the biggest annoyance has been that much of the on-demand TV we want to watch is on Channel 4, and their web solution is Flash based (i.e. not available on iPad).

And it was what we had to do when we did want to watch those things that drove it home to me. The iPad lies around the living room happily; it’s discreet and portable. Getting the laptop out, plugged in and booted takes a good five minutes, not least because it lives in a bag in the other room. So it’s a new way for us to experience the content out there, one we just wouldn’t have used before, and I don’t think I would have appreciated that without properly field testing it (or at least, allowing Vicki to do that).

That’s not to say that there aren’t other lessons to learn too. The bad apps we’ve found are the ones which simply take an iPhone user interface and make it bigger. But the key thing to appreciate about the iPad is that there’s likely to be only one in the household. Whereas the iPhone is a naturally single-user device (not least because it’s something you keep on you as you move around), the iPad is passed around amongst the household. So apps like Facebook and Twitter have to account for the fact that you’ll want to easily pop back to the top level and switch users, as well as offer some loose protection against accessing other people’s accounts. You trust the people you share the iPad with, but not that much. And of course, it’s far less likely to be moving around out in the world, so apps that focus on geo-location data are far less useful. On iPad, the value is in its versatility to display content in a relaxed environment (not necessarily at a desk). The larger display is key to that versatility.

The trick will be to take the things we understand about how the iPad gets used, and use it to inform our app designs.

Portal 2 / Scope

Posted in Games, Industry Rants on May 17th, 2011 by MrCranky

I thought I’d add my voice to the rest of the gaming community in praising Portal 2, which I finished last week. A great story that made me laugh out loud at least a dozen times – rare in any medium, let alone a game. It’s not without its flaws, but they are all minor and don’t detract noticeably from the overall experience. It most definitely passed my usual acid test for quality: I wanted to play it even when I didn’t have any free time, to the point where I was skipping sleep to play some more.

I loved the original, even though I wouldn’t have bought it were it not tacked onto Half-Life 2: Episode 2. It always struck me as a wonderfully weighted title – just the right length, elegant in its simplicity, and with a level of polish that larger titles just don’t achieve. More than anything though, it was a title that left me wanting more, not because it was too short, but because it was so good. Much like a wonderful novel or film where I get immersed in the universe and characters, the end comes with both a warm glow of satisfaction at the conclusion, and an aching for more. More of the characters, more from the rich universe. It’s a rare creation that brings that level of quality to the observer, and both Portal incarnations have that quality in spades.

I’ve been ranting somewhat about the poor judgement of top-end games development recently. Quality of Life and financial issues are just one facet of a deeper problem: that we’ve been trapped into an arms race of scope. To justify a ‘full-price’ cost, developers feel they have to match or out-do each other. Worlds grow larger and larger, not even bound by memory constraints, since every large game streams their environments off disc. Stories grow more and more epic, and require game-play lengths to match. More characters are wedged in, even though there’s not enough time to get to know them in any great detail. Their voices are provided by more and more famous actors. Cut-scenes get flashier and longer.

The problem is that the underlying mentality to it all is ‘go big, or go home.’ Budgets spiral upwards, or if they don’t, then quality spirals downwards. Both hurt a title’s chances of success. But more quality doesn’t justify a higher price tag to match the increased costs. Players have shown in a wide variety of ways that they’re not prepared to pay any more for games than the already high cost. Second-hand sales and rental mean that the RRP quickly gets turned into the ‘real’ price – far lower. Popular titles drop in price more slowly than unpopular ones, so market forces still apply. But as an industry we still delude ourselves that we ‘deserve’ the RRP times the number of units sold.

That’s not the real madness though. The real madness is that despite all our profitability numbers showing the decline, developers and publishers keep on down the same path. They know how much more it costs to increase the scope of the games we make, but they do it anyway. Why? Because they know if they don’t invest enough in titles they flop, because they are competing with other titles on quality. But they don’t know how to turn investment money into quality. Quality is hard. It’s intangible, and you don’t always know it until you see it. So they put the money on things they can understand. More levels, more characters, bigger worlds. They set themselves a benchmark of their competitors, plus some. Because if X was a success, and we have more of everything than X, then we’re as good as X, right?

So when a title like Portal comes along, I regain a bit of hope for our industry. By showing that you can make a massively successful title not by making it bigger or more complicated, but by making it good, it hands the decision makers a bit of ammunition. They can point to Portal and say “it doesn’t need to be big, as long as it’s fun”, or “let’s find a mechanic that works well, and just stick with that”. And maybe we can halt this crazy race to massacre our industry’s profit margins.


Email: info@blackcompanystudios.co.uk
Black Company Studios Limited, The Melting Pot, 5 Rose Street, Edinburgh, EH2 2PR
Registered in Scotland (SC283017) VAT Reg. No.: 886 4592 64
Last modified: February 06 2020.