Idea Submissions

Posted in Design Ideas, Random Stuff on August 4th, 2015 by MrCranky

tl;dr: We don’t take them. Thanks, but no. Best of luck making your game.

Why is a little more complicated. Even when I was still actively looking at our own development projects, our problem was never really a lack of ideas, it was a lack of time for the ideas we had. It’s naive to think that just the idea is enough, that the idea just starts good, or that it is already so good that it ‘just needs made.’ Ideas are cheap. We have them all the time. Working in games, playing games, watching what others are doing, ideas come all the time. Some better than others, some already made by other teams. But realising that idea takes time, and by extension, money. A lot of it. Even the best ideas are worth very little until they are implemented. Not even fully implemented, just getting the idea to a state where it can be pitched to a financier for funding takes a non-trivial amount of effort. It’s important to realise that whenever you have an idea you are trying to progress, what you are doing is investing in that idea. How much you’re investing depends on how valuable your time is (both to you and others), but you should always be aware that it is costing you to make this idea a reality, and that if you want it to succeed, it needs to realistically be able to pay you back more than you put into it.

Amateur developers often misjudge that equation. Their time spent on the idea is cheap, because it is enjoyable time, time they might want to spend anyway. Their assessment of the merits of their idea is often inflated because they are passionate about it. I think it helps to consider: “what if I had fifty ideas?” Then, time spent on one idea is time you can’t spend on one of the others. It forces you to think critically about the value of your time and on which idea really merits the investment you put into it.

That awareness that you are investing should also temper your desire to get other people involved. Because you are, effectively, asking them to invest into your idea. It is no different from going to friends or family and asking for a loan to start a business. There are some fundamental questions to ask, even before you get to the idea itself. “Yes, we want to help you. But what do we get out of it? What are you putting into this? What are we putting into this? Who gets what when it does well? What if we disagree on how this business should be run?” For these reasons it’s often easier to go it alone, at least to start with, or to only throw in your lot with people you know and trust. Because you don’t have to persuade those others of the merits of your idea. If you put the effort in, if you progress the idea to something that by itself can demonstrate that your idea has legs, then you start the relationship with your collaborators on a much better footing. “This is what I’ve done, this is what I think it’s worth, I could use a hand getting it finished, and if you agree with me on its merits we can come to some agreement on how to work together.”

This may be the jaded viewpoint of a professional developer, someone for whom new ideas and projects have to be traded off against lost income from other work, but I think all developers, amateur or not, should be thinking about their work not just in terms of potential but of cost. Because while this idea you have might be good, the next one might be great, and if you sink all of the investment you had to spare in the first idea and it doesn’t pay off, we’ll never get to see the great idea come to light. The ideas can only be realised when the development is sustainable, and to be sustainable requires the ideas to, on average, pay for themselves. Some amount of up-front investment is fine, but at some point one of those ideas needs to pay enough back to cover all of the ideas which didn’t.

Incentivising crunch

Posted in Industry Rants on May 25th, 2015 by MrCranky

It’s often suggested by front-line staff in development studios that their management is aiming for crunch deliberately. As in, they know how bad it is for their team, but because they think it saves them money, they encourage it anyway. Personally, I don’t believe it’s always that cynical. Sometimes, sure, but not most times.

I genuinely believe crunch is endemic because our development process leads to a disconnect between present and future phases of the project, a failure of accountability. In the early phase, when there’s planning, people are incentivised to cram as much as possible into the plan, in as short a time as possible, for as little money as possible, while still keeping it realistic. But the incentives are all based around promising more, aiming higher, and the judgement of what is realistic is deferred till later. Most developers will have worked in a place where the project was only landed because the publisher pushed back on what was feasible in the time, and the developer over-promised.

No-one in that process has ever been rewarded for under-promising, or aiming low. No-one in the publisher would get patted on the back if they revised the scope document down, or the expected price up. No-one in the developer’s management would get rewarded if they said they could deliver less given the same resources. You are punished right away if you fail to agree the best plan you can, but you are not immediately punished if you over-promised, and you might still be able to avoid that punishment in the future. The failure to deliver is only a possibility, in the future. Plus if you fail to deliver, it can be someone else’s fault, or you can push harder, or any one of a bunch of different things. But in the early phase, the solution to the problem you have right now seems to be to over-reach, even if it creates more problems for you in the future.

In the latter phase, when it’s clear you have over-promised, it’s too late. The deadlines are set, the budget is limited, the resources are finite. The only solution to the problem you have right now is to desperately try to eke out more productivity, any way you can. Short-term thinking is rewarded, because failure in the short-term is punished terribly. Crunch is a workable solution to hit the first milestone you are in danger of slipping, and the cost of crunching only comes due after that milestone. So the solution to the problem you have right now seems to be crunch, even if it creates more problems for you in the future. Again, the punishment for failing comes right away, but the punishment for making the decision to crunch is in the future, where it may be someone else’s problem, or it may be avoided, or it might be staved off by some other means.

Accountability is the problem. There will always be finite resources, money, time, staff and pressure to do more with less. But we wouldn’t be seeing these problems if we incentivised more conservative, realistic planning. If you make decisions that lead to failure later on, your punishment should be twice as severe as the punishment for failing early on. And you shouldn’t be able to offload the responsibility for that failure. If the team fail to deliver on your over-optimistic plans, you should be the one carrying the punishment for it. That can and should echo down the line – each person should have to face the consequences of failing to deliver, right down to the team level. But they should also be the ones estimating how much they can do. That does hit a snag at the tail end, in that at the leaf nodes, you have two conflicting goals. You’ve just incentivised your staff to promise very little (so they know they can fulfil their promises), but you also have to find a way to incentivise them to promise the highest amount that’s realistic. And that’s quite hard.

The problem I fear is that if you do this right, and you reverse the normal incentives so that projects are far more conservative and likely to go to plan, then the games produced are smaller, less radical, less interesting, and ultimately less profitable. And very possibly not profitable enough to maintain the companies involved. Still, I maintain that it’s better to be honest with yourself about the viability of your business, than it is to keep it afloat only by exploiting your staff resources and by failing to deliver to the clients and customers. After all, how much money do you think we waste aiming too high and having to scrabble to recover, burning productivity well below where it should be because of crunch?

Profitability in a market where successes are rare

Posted in Industry Rants on April 30th, 2015 by MrCranky

This week I wanted to share a really great article over on Lost Garden that echoes a lot of things I’ve been saying for a long time. Not just for indies (although it’s especially relevant for them), but for larger companies too. Some standout quotes that I feel are most apt:

Game development is inherently unstable. Technology, markets, profit margins and teams shift regularly. Any of these can quickly destroy a previously comfortable business.

[…]

In the 90s, Sierra expected 1 out of 4 games to be a success and pay for the other products that failed to turn a profit. Recently, Mike Capps, the previous president of Epic, claimed that he couldn’t promise more than a 10% chance a game would be a success. If you made 10 games, on average, you’d expect only 1 would be considered a success.

[…]

Your budget is likely Target Revenue * Success Rate. So if there’s a 10% chance of reaching $500,000, you should spend $50,000 on each project.

[…]

Over time success has been dropping. 25% is almost never seen in modern game markets. […] Given a set of equally competent games, only a fraction will become profitable.

What happens if that profitable game makes $600,000? It earned 6X its costs! You made a profit of $500,000, enough to make 5 more games. However, you are still on the long road to bankruptcy, despite an apparent success. There’s only a roughly 40% chance those 5 swings at bat will result in a success. Long term, you’ll find yourself out of money or in debt.

[…]

It is a disservice to other developers to claim that a breakeven project is a financial success. Break even means almost nothing. You are still on the knife’s edge of baseline survival and should operate financially exactly as if you had achieved nothing.
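The arithmetic behind those quotes is worth spelling out. A quick sketch using the article’s example numbers (a 10% success rate and a $500,000 revenue target); the function names are mine, not the article’s:

```python
# Budget rule from the quote: spend at most target revenue * success rate.
def max_budget(target_revenue, success_rate):
    return target_revenue * success_rate

# Chance of at least one hit across n independent projects,
# i.e. one minus the chance that every single one of them fails.
def chance_of_a_hit(success_rate, n_projects):
    return 1 - (1 - success_rate) ** n_projects

# At a 10% success rate, a $500,000 target justifies roughly $50,000 per
# project, and five funded "swings at bat" give only about a 41% chance
# of producing a single success.
```

Note how brutal the compounding is: even five fully funded attempts leave you more likely than not with no hit at all.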

You cannot bank on individual successes being repeated reliably. Games, even those developed by the best of developers, are not reliably successful. Maybe they miss the moment where the audience is really looking for them, maybe they get the quality bar wrong, maybe there are technical constraints that rob the title of what it needs to really work. It doesn’t matter, because unless games development suddenly becomes much more predictable than it is, a business making games has to assume that some if not most of their games will fail. If that business wants to be one which survives, it needs to be profitable across all its games, whether they succeed or fail.

A team where the developers are taking low wages, putting everything they have into their first game, makes for a great story for the press. Make or break. But it’s terrible business. A team that’s taking their funding and figuring out how many months that will let them operate for, and then planning a game to fit that period, is a team that’s most likely going to fail.  Even if they survive the first game and limp on to make a second, even if through talent and luck and timing they magic up a massive success that not only makes all its costs back but also nets enough to fund their next game several times over, chances are the next game will not match the first’s success, nor the one after that.

This is true even for larger teams. The publishers are the ones who have to look at viability longer term, so often developers can ignore that and just live title-to-title, but the same pitfalls are there. If you’re a developer whose best title only earned back twice what it cost, then you shouldn’t be surprised if the publisher drops your team after the next title that only earns back a quarter of what it cost. They just can’t afford to take the risk that your next title will be a flop rather than a mediocre hit, because you’ll have become a losing bet. You have to deliver the massive breakout hits if you want to make them confident that over time you are a reliable generator of income, and not a drain on their coffers overall.

When we’re looking at whether making games is viable, we need to be looking at long-term profitability of the team, not just per-title. Anything you do to hide the true cost of your development is really just selling yourself short, setting yourself up for a later failure. Don’t lie to yourself about the cost of your time, or the extra hours you’ve put in. Don’t hide the costs of one game in something else. A game that is profitable, as long as you don’t count the months you spent eating just ramen, is not a profitable game. Don’t believe your own hype. It might be hard to accept, but there are no guarantees that the business you love is a profitable one.

Localisation

Posted in Coding, Industry Rants on April 15th, 2015 by MrCranky

A discussion I was reading elsewhere linked me to this old gem on “falsehoods programmers believe about names.” I laughed, but it was a laugh of sympathy rather than surprise. I’ve tackled localisation on a bunch of projects in my time, and the thing that takes the time is not content wrangling, or getting the right Unicode fonts in. It’s dealing with the assumptions that the code team have already made and implemented in the early days of development.

It’s a print-to-string line that formats currency for display as $1.23, regardless of the user’s locale. Or a user signup form that has one box for First Name and one box for Surname, and expects exactly one word in each. Simple things, taken from the programmer’s own experience as ‘obviously’ the way it is, thrown in because they have to get a working implementation done quickly, and no-one has asked them to take localisation into account. That can all get sorted later, right? No. Not when you build in assumptions at the very base level that simply aren’t true.
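The currency example is only fixable if formatting is treated as data from the start, not a hardcoded string. A minimal sketch of the idea, with a hypothetical two-entry format table; real code should lean on a proper localisation library (ICU, CLDR data) rather than maintaining its own:

```python
# Currency display varies per locale: symbol, placement, separators.
# CURRENCY_FORMATS is a hypothetical illustration, not a real standard.
CURRENCY_FORMATS = {
    "en_US": {"symbol": "$", "before": True,  "decimal": ".", "group": ","},
    "de_DE": {"symbol": "€", "before": False, "decimal": ",", "group": "."},
}

def format_currency(cents, loc):
    spec = CURRENCY_FORMATS[loc]
    units, rem = divmod(cents, 100)
    # Group thousands with the locale's separator, not a hardcoded comma.
    grouped = f"{units:,}".replace(",", spec["group"])
    number = f"{grouped}{spec['decimal']}{rem:02d}"
    if spec["before"]:
        return f"{spec['symbol']}{number}"
    return f"{number} {spec['symbol']}"

# format_currency(123, "en_US")     -> "$1.23"
# format_currency(1234567, "de_DE") -> "12.345,67 €"
```

The same principle applies to the name fields: the set of valid shapes a name can take is data about the world, not something to infer from your own experience.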

For example, Steam. I was probably one of the early adopters of it. I didn’t want to be, annoying system tray icon that it was, but I wasn’t going to wait for Half-Life 2. To sign up, I had to use my email address as a username. Sure. Whatever. It’s now over 12 years later. That email address, tied to an ISP I moved away from, is long since gone. Can I change my username? No. Because they insisted on a particular form of unique ID at the time, and they insisted that usernames can never change. New users don’t have the same restriction, they can pick whatever username they want, but mine is frozen in time. Even though there is an actual email field on my account which is wholly separate from the username, I still have to enter a 30 character email address that has no relation to reality every time I want to log in. While I can just treat it as not an email address but just an oddly formatted unique identifier string, it jars with me, every single time.

Arguably this is just the meat of development – changing requirements over time invalidating initial assumptions. But for me it’s a plea to other developers – to slow down and take some time thinking about your initial implementation. If it’s not on a specification handed to you and you’re winging it based on how you think it should be, think about what the ramifications of how you choose to implement it will be. What won’t you be able to do if you implement it this way? What are the awkward cases, the potential ranges of input? Will it be possible to fix it later once the system is live and populated with data, or are you building in something that’s fundamental to the system?

Change of focus

Posted in Tales from the grind-stone on March 9th, 2015 by MrCranky

The funny thing about working as a consultant and selling your services is that you have very little predictability in your business. No matter how useful your skills are, no matter how in-demand your services are, you don’t always get to choose when the work starts and ends. Plans change, clients’ needs wax and wane, and a previously concrete plan for what you’ll be doing over the next few months can suddenly turn into idle time, with no other clients waiting and ready to take advantage.

For contractors, that’s a mixed blessing. Finally you get some breathing room, a chance to catch up on all the little loose ends that you’ve pushed to one side while you’ve been busy. For me, a chance to actually play some games instead of helping to make them. It doesn’t take too long, however, before you start to get restless, when you’re not actively engaged on something and your brain has some time to wander. During this particular down-time, I took the opportunity to try and reset my brain a bit, do some DIY, catch up on my reading, and other non-computer related stuff. I know that I could have been working on the VR stuff I’ve been putting off, but I felt like I lacked the focus needed to really get stuck into it; I’d been too long working on other people’s projects. Even when the DIY was done and I was properly back in the office, something was still nagging at me, and I spent more time pottering in the office organising than anything solidly useful.

The other thing staying busy with client work does is make you lazy about introspection. If I’m honest, the steady supply of business had allowed me to fall into something of a safe, comfortable place. It was well past time to take a long hard look at the business plan and re-assess. Tim left the team over 2 years ago now, and Dan has been full-time on his Ph.D. work for almost as long. Black Company Studios is, and has been for a long time now, just me, consulting with our various clients and developing software for them. I think it’s time to stop lingering on the trappings of a larger team, and accept that reality.

There’s no shame in it, I feel. What we have always been good at is providing our expertise in software development to our clients. Actually making our own games and apps, not so much. For too long when talking about our work I’ve mentioned those non work-for-hire projects with a little bit of embarrassment, that they were things we notionally did, but they never received enough attention to make them projects we could hold up and be proud of, they were always just a footnote. That’s mostly because I felt that the work-for-hire we provided was always where we could add the most value. So changes are afoot to help us, me, refocus on that strength. The Belford Road office, large enough for 5, but really only holding me and a whole bunch of boxes and old machines, is being left behind, as of the end of April. There are hot-desk environments that would suit a lone developer much better, and often enough I’ll be working on-site with the client anyway. The machines and excess equipment have been sold off for token amounts and will actually see some use instead of lying cold in the corner of the office.

And finally, I’ll be trying to increase the breadth of clients I look to work with, not just games studios but all sectors. That’s already partly happening – 90% of the work over the last 12 months has been more 3D visualisation than games. Having to be a generalist for so many years has given me a grounding in many aspects of software development; .NET tools, DevOps and build pipelines, building web services and tools, user interface work both on the web and natively, high-performance/low-latency coding. It’s time I put those skills to work. Because after a lot of hard thought, I’ve come to the conclusion that what I enjoy about my work isn’t limited to making games, it’s knowing that I can do good, useful work, for whatever clients I deal with.

Breast physics and hair

Posted in Industry Rants on February 16th, 2015 by MrCranky

I confess, I just wanted to use that in a post title. But I’ve been using 3DMark to get a sense of which of the three main machines I use is the best performer. The answer, depressingly, is that all three are below the standard of a ‘gaming laptop’, and less than a third of the performance of a ‘high-end gaming machine.’ Not that I chase the bleeding edge of performance, I’m far too cheap for that. But my usual tactic of staying 3-4 years behind that edge does mean that I occasionally have to see how far things have come along since I last splashed out on new kit.

How does that relate to breast physics, you ask? Well, while watching the Sky Diver test, one of the most prominent views you’re given of the sky-diver in question seems specifically designed to show off the rippling of their breasts in the wind. Or perhaps it’s the ASUS logo that’s plastered all over the suit (although curiously, not in the shot they use in their benchmark listing).

A distinctly ASUS-less promo image. With static breasts.

Not that I have anything against more accurate depictions of the human form in motion of course. I think the reason that it jumped out at me though was because it didn’t look natural. I can almost imagine the animator’s reaction to their initial feedback. “You want them to do what? Are you sure? Would they even move like that…? I don’t know, I’ve never worn a wing-suit. How about you go find me some video footage of an actual female sky-diver and I’ll work from that instead of your imagination?”

The reason this popped out at me as more than just an off-hand amusement at the benchmark graphics was my flabbergastedness at certain tweets this week, accusing game developers of sexism, for the crime of not devoting as much effort towards hair rendering as to shiny and reflective surfaces. This grinds my gears on several levels. The last time I shipped a console game (Brave), we spent a quarter of the entire frame calculating and rendering Brave’s hair, and exactly zero time on shiny or reflective surfaces. So to pretend that we’ve just never concentrated on hair is disingenuous.

Secondly, the reason why there’s more shiny stuff in games than fabulous hair is not because, you know, screw women, but because rendering hair is hard. Not just developing it, making sure it moves properly and looks good, but actually getting it on screen is costly. Like fluid dynamics and other similar technical challenges, you’re having to simulate many, many small things at once, and then deform geometry and alter texturing every frame as a result; something 3D hardware would really prefer you didn’t do. Fundamentally, that’s costly, and the cost doesn’t go away just because you spend more development time on it. Whereas good lighting and reflections come almost for free, from the way that hardware 3D rendering works; spend some development time on getting the lighting calculations right, and then they can be done for every fragment you see on screen, at only slightly more cost than just rendering the thing in plain lit colours. And once it’s done it works for everything, not just the subset of characters who happen to have long hair, but for everything in the environment and all the characters, even the short-haired ones. So from a development point of view it’s a no-brainer as to which gets you the most pretty for the least cost. Trying to make it an issue of sexism only serves to show how little you understand about the challenges of making games.

No-one is avoiding making the hair look good because they’re sexist; if it was affordable then they’d be doing it all the time. Because when your characters have long hair that looks good (regardless of their gender), reviewers gush over it, it’s immediately noticeable. When your environments are a little bit more shiny than before, no-one bats an eyelid. At best it’s acknowledged as part of a wider judgement that your game looks good. Why wouldn’t we want to go for the hair? Because even though it’s nicer to have in, it still costs too damn much to get right, both in development time and in runtime resources.

VR movement

Posted in Design Ideas on January 18th, 2015 by MrCranky

So, despite my best efforts, my spare time available for experimenting with the Rift SDK has been fairly limited. I’m more convinced than ever though that there is lots of amazing potential there. There’s a lot of cynicism, and rightly so. Many of the same problems that were there in previous iterations of VR are still present. There are a lot of good posts out there covering the most apparent (the motion-sickness / nausea generated by lag, the resolution). I’m not so concerned about those. We used to have to hit 60Hz refresh rates on the dot, and with a lot less rendering power than we have now. Hitting 90Hz is achievable with discipline. The screen resolution, I’m sure, will be addressed by future iterations of the devices. These are known-quantity problems.
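To put numbers on why that refresh-rate discipline matters, the frame-budget arithmetic is trivial but sobering:

```python
# Time available to simulate and render one frame, in milliseconds.
def frame_budget_ms(refresh_hz):
    return 1000.0 / refresh_hz

# 60 Hz leaves roughly 16.7 ms per frame; 90 Hz tightens that to roughly
# 11.1 ms, and a stereo VR renderer has to draw the scene once per eye
# inside that window, every single frame, with no dropped frames allowed.
```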

The unknowns to be addressed come from the parts of the tech that are new. Control, user interaction, is going to be the key. My concern here is that the old systems we used are just not ideal in a VR environment. Traditionally, we’ve been controlling avatars in a virtual world. We’ve had mostly free movement around that world, but there’s always been a clear disconnection – you’re controlling something other than yourself, and the screen shows you the view from their position. We’ve refined the control mechanisms so that feels natural, and trained ourselves to the point where it feels a lot less like we’re rotating an avatar, and more like it’s us. “Look right” becomes a quick flick of the mouse, even though our head doesn’t actually move. The avatar becomes an extension of ourselves. That ability to make the control mechanism effectively disappear is key, in the same way that it’s easier to drive when gear changing is instinctive and done without thinking: it frees you to focus on the higher-level functions.

If you’ve ever watched someone new to games playing a first person or third person game, you’ll know the effect. When someone has to look down at the controller to remind themselves of which joystick to use. You say “look right”, and they have to stop moving their character before they change camera angle. So many of our games are designed to take advantage of the affordances already learned by gamers. It doesn’t matter that they haven’t played your game before, if they’ve played another game in a similar style. More crucially, when designers get it wrong, that failure permeates the whole game. When someone complains that moving your character around feels like driving a tank, that niggle interferes with everything they do in your game. For all of the great things about GTA 4, I struggled with Nico’s movement. I’d miss a door by just a fraction, and then he had a minimum turning circle that meant that I’d end up bashing into the other side of the door frame instead. You get used to it and learn to compensate, sure, but it’s a problem that needs to be overcome.

Bringing it back to VR, the change in viewpoint brings the control issues into sharp relief. The immediate and all-encompassing nature of the viewpoint makes it *you* that’s in the game. You’re not controlling what you see on that screen ‘over there’, you are controlling you. So when the controls feel unintuitive, it’s *you* that feels sluggish and unresponsive. So it’s important to get it right, and I’ve seen a variety of problems with the Rift demos so far.

Assuming that you’re using a joystick or keyboard controls, you effectively have a 2-axis input controlling movement. We always had that with first-person games. But in the past there was a fundamental restriction in place. ‘Forward’ was always ‘the direction you’re facing’. There was no option to look to the right while still running forward, unless you were playing a mech or tank game which often mapped view direction to an extra input axis. But the natural mouse/keyboard or joypad controls we’re used to insisted that you always look rigidly forward, the same direction your gun was facing. That extra axis (where the body of your avatar/vehicle was pointing in a different direction to the view direction) was discouraged, because people struggled to manage their awareness of the two directions (look and move). Skilled players learned to compensate naturally for this. While running forward, to look right you’d turn and simultaneously start strafing left. But you knew exactly where you were looking and moving at all times, because the restrictions were clear.

In VR, that restriction no longer makes sense. Instead we have different restrictions. Even if standing, the cable to the headset restricts your turning. If sitting, then there is an obvious ‘forward’, which is the direction your torso is pointing. You move your head to the left and right, but forward doesn’t change. However, the real problem is that the Rift headsets at least aren’t really anchored to that ‘forward’ direction. The headset knows what direction it’s facing, but it doesn’t know at what point the headset was facing ‘forward’ as far as the user is concerned.

Instead, all of the demos I’ve seen try to replicate the same restriction as traditional FPS controls have. ‘Forward’ is ‘where you are looking’. Walk forward on your directional axis, and look to the right, and you’ll move to the right. Which seems sensible, until you consider the need for complete freedom around the world. You have a limited head movement circle, so how do you turn completely around so that forward is south instead of north? You’re not going to twist your head 180 degrees round and press forward. So the demos map another axis on top of your head movement. So if you start facing forward and north in the world, turning your head 90 degrees right means that ‘forward’ motion moves you east. Use the rotation control to turn your character 90 degrees right, and you’re moving south. Return your head to centre though, and you’re moving east again. So even though ‘forward’ always moves in a predictable direction, you’re still having to manage awareness of that extra rotation. Your head orientation is being added on top of a base avatar orientation. That input-controlled axis is constantly fighting against the headset rotational axis. You can be turning your avatar right and rotating your head left to keep the view pointed in the same direction.

Having experimented, I think trying to cling onto that old input style is a mistake. It feels horrible when you’re craning your head around to the right because you’ve been using the head orientation as the primary means to choose your direction, only to find that you actually need to turn more than 90 degrees in either direction, at which point you need to fall back on the directional turn controls. Worse, when you’re running forward at speed, you have to keep your viewpoint locked directly ahead, because if you try and glance left or right you’ll start running in that direction. You’ve lost one of the big plus points of VR, freedom of motion in your viewpoint.

Keeping it so that you always have complete freedom of moving your viewpoint is, I think, key to making the user comfortable. We used to be able to make the controls avatar-centric, but now we need to be aware of the range of motion the user has, and allow them a natural way of expressing a complete range of motion without discomfort.

Instead of assuming that forward is where you’re looking, you need to build in some awareness of the user’s torso. The only natural way I can think of to do that is to add a quick and simple calibration point. Ask the user to look directly forward, and then press a button. From then on, that direction is the reference centre, and should align with the direction of motion of the avatar. Forward means moving in that direction, regardless of where the headset is pointing. Same for strafing right and left. Ideally, like the mech and tank games that have to do this naturally, you’d have an in-view indication of where forward is relative to your viewpoint. That might be your avatar visible from your viewpoint (e.g. a gun or arms), or a HUD indication.
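That calibration scheme can be sketched in a few lines. This is a minimal, yaw-only illustration with hypothetical names, not code from any VR SDK; it assumes yaw is in radians, measured clockwise from world north (+y), and that stick y means ‘forward’:

```python
import math

# Decoupled VR locomotion: movement is anchored to a calibrated 'torso
# forward' yaw, while the view simply follows the raw headset yaw.
class VRLocomotion:
    def __init__(self):
        self.forward_yaw = 0.0  # reference direction, set at calibration

    def calibrate(self, headset_yaw):
        """User looks straight ahead and presses a button; that headset
        direction becomes the movement reference from then on."""
        self.forward_yaw = headset_yaw

    def move_vector(self, stick_x, stick_y):
        """Map stick input (x = strafe, y = forward) into world space via
        the calibrated yaw, ignoring where the headset currently points."""
        c, s = math.cos(self.forward_yaw), math.sin(self.forward_yaw)
        return (stick_x * c + stick_y * s,   # world x (east)
                stick_y * c - stick_x * s)   # world y (north)

    def view_yaw(self, headset_yaw):
        """Free look: the view tracks the headset, independent of movement."""
        return headset_yaw
```

With this, pushing the stick forward always moves along the calibrated direction, so the player can glance around freely while running in a straight line; an on-screen cue (visible arms, a gun, or a HUD marker) then shows where ‘forward’ is relative to the current view.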

Lack of information

Posted in Industry Rants on January 1st, 2015 by MrCranky

Starting the new year afresh and reinvigorated, I am looking forward to 2015 and the changes it will bring. In an effort to get out from the hole I’ve made for myself to quietly work away on client work, I thought I’d share below the response I just wrote up to the question: How can the issues that hinder the growth of the creative industries be overcome, and how can we capitalise on opportunities?

To me the biggest thing that the public sector could do to aid the creative industries, especially games, is to provide the broader view that we in the private sector are sorely lacking. The dearth of information on what is actually happening in the games industry is shameful. Sharing of information will help us all to grow, to avoid making the same mistakes, to spot opportunities as they arise and not well after they’ve been exploited by others. But we can barely even claim to know how many studios and developers are in the industry, let alone the more useful information, like what they are working on or in what areas they are seeing growth or recession. We have trade bodies who poll their own members, but that represents only a fraction of the industry currently working. It’s frankly embarrassing that so few resources are put into tracking what the games industry is doing, and it seems to me that the government itself would benefit from being able to point to the growth of the Scottish games industry. It’s a manageably small sector to collect information on, smaller than the UK’s as a whole, and I’d guess more interconnected as well.

We in the Scottish games industry want to be able to shout about our successes, but we can’t, because we don’t have the context to say how much better we are doing than last year. Individual successes are great, but they are fleeting; what matters is the overall trend in the industry. I feel that it’s a positive trend, but I have absolutely no data to back that up, and asking around, it seems that no-one else does either, not even the government bodies who are supposed to be there to support the industry. But how can we be supported if they don’t even know who we are and what we’re doing? Don’t we run the risk of allocating resources based on a woefully out-of-date picture of what is happening? What use is it to the industry if support is provided for console games that form a dwindling share of development, or for social games when our market has moved on to mobile platforms?

I think that the very first step that must be taken is to put resources in to dramatically improve the information we have on the games industry as it is now; and to commit to keeping that information current as quickly as the industry itself moves. Without that information to inform us, I feel that the answers to all of the other questions the committee are asking run the risk of being out of date and useless before any actual answers can be agreed upon. Armed with that information, the public sector can know who to engage with, and the private sector can know how their industry is changing and seek out new opportunities rather than be left behind.

Adventures in Android In-App Billing

Posted in Tales from the grind-stone on August 28th, 2014 by MrCranky

Standards Proliferation

Enough said.

Ubuntu 14.04 upgrade

Posted in Technical Guidance on August 21st, 2014 by MrCranky

After the kerfuffle with Heartbleed earlier in the year, and finding out our server installation was way out of date, I resolved to keep it more current. That meant upgrading from 12.04 to 14.04. Sadly it was not as painless as previous upgrades, and it left me with three notable problems. I’m posting my notes here in case they’re helpful to anyone else upgrading. Basically our server box acts as a DHCP server, file server (using Samba) and gateway for the internal network, as well as hosting a couple of websites which we use both internally and externally. After upgrading, I noted:

  1. Two of the hosted websites were no longer working: they were giving 404 not found messages.
  2. A persistent message being posted at startup and various points during shell sessions: “no talloc stackframe at ../source3/param/loadparm.c:4864, leaking memory”
  3. DHCP server stopped responding properly the day after the upgrade

I’d had to merge a few configuration files during the upgrade, among them Samba’s and dhcpd’s, so my initial thought was that I’d botched the merge. However, there wasn’t anything obvious in the merge results that would explain the problems. Anyway, issue by issue:

No talloc stackframe

This one was the most easily resolved. This post points the finger at libpam-smbpass, a Samba module which seems to have an outstanding bug. Fine, it’s not functionality we rely on, so uninstalling the module makes the problem go away:

sudo apt-get remove libpam-smbpass

DHCP server stopped responding properly

This one didn’t bite me until mid-way through diagnosing the Apache issues: my DHCP lease ran out, and suddenly the laptop I was using to remote into the server was without network. Super annoying. Setting the IP address / DNS of the laptop manually got me network connectivity to search for solutions; otherwise I would have been guessing blindly from the server terminal (which doesn’t have web browsing).

While there wasn’t anything hinting at a DHCP problem in the logs, I noted at server reboot time a line along the lines of the following:

Apparmor parser error for /etc/apparmor.d/usr.sbin.dhcpd at line 69 Could not open /etc/apparmor.d/dhcpd.d

I couldn’t find that line anywhere in my logs, but I probably wasn’t looking in the right place. The configuration file in question is basically trying to #include the dhcpd.d subfolder, and failing. Still, it suggested some sort of permissions or configuration problem with AppArmor and dhcpd. Oddly though, DHCP had been working after the upgrade the afternoon before, and I could see successful DHCP negotiation going on from this morning, but it all ceased an hour or two before my DHCP lease expired. All of my searches were throwing up results for the package isc-dhcp-server, though, whereas I was pretty sure the package was dhcp3-server. On checking, isc-dhcp-server was not installed. Installing it:

sudo apt-get install isc-dhcp-server

Lo and behold, DHCP was functional again, using our already existing configuration. So, I’m guessing, the packages on our legacy machine (upgraded using do-release-upgrade from 10.10) aren’t properly handled by the release upgrade procedure, and were left with folder permissions set incorrectly; which was fixed by installing the correct DHCP server package.

Apache website issues

Ubuntu 14.04 brings with it a fairly major upgrade from Apache 2.2 to 2.4. While the web server was still functional, and I could access pages sitting directly under the DocumentRoot, our two sites set up using Alias directives were no longer accessible. Both returned 404 errors. Using a symbolic link in the filesystem under the DocumentRoot would allow them to be accessed, but that wouldn’t allow us to enable/disable the sites at will. While there are changes to the permissions system in 2.4, we don’t use those with our sites. So all very odd.

Our setup was very simple: each site had a configuration file that contained only a single Alias line, remapping the appropriate site folder to a folder on the local disk. Further experimentation showed that we could shift the same Alias line into the default site configuration and get further: it gave a 403 Forbidden error, but no longer a 404. Adding an appropriate Directory element with a “Require all granted” directive inside fixed the 403. So presumably the default permissions for an aliased directory have changed to deny by default instead of grant.
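For reference, the working configuration ended up looking something like the fragment below (the paths are hypothetical stand-ins, not our actual site folders). The Require directive is Apache 2.4’s access-control syntax, replacing the 2.2-era Order/Allow directives:

```apache
# Map the URL path to a folder on the local disk, as before.
Alias /mysite /srv/www/mysite

# New in 2.4: explicitly grant access to the aliased directory,
# which otherwise appears to be denied by default.
<Directory /srv/www/mysite>
    Require all granted
</Directory>
```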

So from that I can only conclude that Apache 2.2 was more forgiving of having Alias directives standing alone in their own site .conf files, for whatever reason. I’m probably missing some nuance of the setup as to why it worked before. Rather than spend too much time figuring it out, I’m going to just go with having the sites as a part of the main site instead of as sites on their own.


Email: info@blackcompanystudios.co.uk
Black Company Studios Limited, The Melting Pot, 5 Rose Street, Edinburgh, EH2 2PR
Registered in Scotland (SC283017) VAT Reg. No.: 886 4592 64
Last modified: August 14 2014.