Archive for the 'Coding' Category

Localisation

Posted in Coding, Industry Rants on April 15th, 2015 by MrCranky

A discussion I was reading elsewhere linked me to this old gem on “falsehoods programmers believe about names.” I laughed, but it was one of sympathy rather than surprise. I’ve tackled localisation on a bunch of projects in my time, and the thing that takes the time is not content wrangling, or getting the right Unicode fonts in. It’s dealing with the assumptions that the code team have already made and implemented in the early days of development.

It’s a print-to-string line that formats currency for display as $1.23, regardless of the user’s locale. Or a user signup form that has one box for First Name and one box for Surname, and expects exactly one word in each. Simple things, taken from the programmer’s own experience as ‘obviously’ the way it is, thrown in because they have to get a working implementation done quickly, and no-one has asked them to take localisation into account. That can all get sorted later, right? No. Not when you build in assumptions at the very base level that simply aren’t true.
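
To make that first example concrete, here is a minimal sketch of the difference, assuming a C++11 compiler and that the named locales are installed on the machine; the function names are mine, invented for illustration, and the exact output (and the locale name strings) will vary by platform.

    #include <iomanip>
    #include <iostream>
    #include <locale>
    #include <sstream>
    #include <string>

    // The 'obvious' version: bakes in the symbol, the decimal separator and the
    // grouping, and so is wrong for most of the world.
    std::string FormatPriceNaive(double amount)
    {
        std::ostringstream out;
        out << "$" << std::fixed << std::setprecision(2) << amount;
        return out.str();
    }

    // A locale-aware version: symbol, separators and grouping all come from the
    // locale rather than from the code. std::put_money takes the amount in the
    // currency's smallest unit (cents, pence, and so on).
    std::string FormatPriceLocalised(long double amountInMinorUnits, const char* localeName)
    {
        std::ostringstream out;
        out.imbue(std::locale(localeName)); // throws if the locale isn't installed
        out << std::showbase << std::put_money(amountInMinorUnits);
        return out.str();
    }

    int main()
    {
        std::cout << FormatPriceNaive(1.23) << "\n";                   // always "$1.23"
        std::cout << FormatPriceLocalised(123, "en_US.UTF-8") << "\n"; // "$1.23"
        std::cout << FormatPriceLocalised(123, "de_DE.UTF-8") << "\n"; // something like "1,23 €"
        return 0;
    }

The point isn’t the extra lines; it’s that the second version forces you to decide up front where the locale comes from, which is exactly the decision that gets skipped when the naive version goes in.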

For example, Steam. I was probably one of the early adopters of it. I didn’t want to be, annoying system tray icon that it was, but I wasn’t going to wait for Half-Life 2. To sign up, I had to use my email address as a username. Sure. Whatever. It’s now over 12 years later. That email address, tied to an ISP I moved away from, is long since gone. Can I change my username? No. Because they insisted on a particular form of unique ID at the time, and they insisted that usernames can never change. New users don’t have the same restriction, they can pick whatever username they want, but mine is frozen in time. Even though there is an actual email field on my account which is wholly separate from the username, I still have to enter a 30 character email address that has no relation to reality every time I want to log in. While I can just treat it as not an email address but just an oddly formatted unique identifier string, it jars with me, every single time.

Arguably this is just the meat of development – changing requirements over time invalidating initial assumptions. But for me it’s a plea to other developers: slow down and take some time thinking about your initial implementation. If it’s not on a specification handed to you and you’re winging it based on how you think it should be, think about the ramifications of the implementation you choose. What won’t you be able to do if you implement it this way? What are the awkward cases, the potential ranges of input? Will it be possible to fix it later once the system is live and populated with data, or are you building in something that’s fundamental to the system?

Coding conventions

Posted in Coding on July 30th, 2012 by MrCranky

Another mini-rant on coding this week, originally composed as a response to someone who didn’t see why conforming to coding standards was such a big deal. In this case (roughly sorting header #include statements alphabetically) the defence was “it’s trivial to do that automatically, so why should you care whether a coder does it themselves?” That’s a pretty typical response, but the answer to it for me sums up exactly why following conventions is important, and it is nothing to do with the conventions themselves, and everything to do with how you work as a team.
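
For the record, the convention in question is as trivial as it sounds: keep each block of #include statements in alphabetical order, so new headers have an obvious home and diffs stay small. Something like this (the header names are invented for illustration):

    // Project headers first, then system headers, each block alphabetised.
    #include "AudioSystem.h"
    #include "CharacterController.h"
    #include "RenderQueue.h"

    #include <string>
    #include <vector>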

First off I’d agree that this case in particular is not a major issue. None of them (indenting conventions, space conventions, capitalisation conventions) are, but that isn’t why it gets people worked up when one coder decides to go ahead regardless. The problem is that you have a choice between:

  • Original coder does it as agreed first time

or

  • Original coder decides to ignore convention previously agreed on
  • Entire team endures the negative effects of said change until one of the following happens:
    • Another coder takes time out from whatever else they’re doing to fix it:
      • If they do it as part of a commit they’re already doing, it obscures the diffs for the ‘real’ changes they’re making.
      • If they do it as a separate commit, they’ve got to take the time to make sure that they’ve not accidentally broken something.
    • Somebody writes an automatic tool to fix the problem.
    • Everyone just chooses to leave it as it is, and over time the entire codebase degenerates into a collection of such issues.

Fixing it after the fact is not a good solution, because it’s far more expensive than just doing it right the first time. If there’s a policy, then everyone should stick to it. If they don’t agree with the policy, then they should take that up amongst the team, not just ignore it because they don’t agree and they expect someone (or something) else to fix it later. If it’s a stupid policy, then the team can agree to get rid of it. If it has merit for others then they should respect that even if they don’t personally agree with it: because they’re working as part of a team, not just as individuals, and that should entail a certain amount of respect to your team-mates.

Most of us will have known ‘renegade’ coders, who go off into their own zone and implement some big bit of functionality without consulting with the rest of the team. Sometimes that works well, and other times they come back, throw the code over the fence at the rest of the team and act surprised when they have problems integrating it. That is no way to work, and not only will it lead to friction amongst the team, it also generally means a bunch of wasted effort that could have been avoided with better communication up front. Not respecting coding conventions isn’t nearly as bad as that, but I feel like it’s the first step down the road towards it.

When you’re working in a team, you don’t have the luxury of implementing things in a bubble: you have to work with other people’s code, and they have to work with yours. Coming to a common agreement as to how to work with each other is the most basic part of that, otherwise you’ll find yourself working at odds with each other. There can and should be compromises to get to that agreement, but ‘agreeing to disagree’ is generally not a viable option.

Conflicting ideas about the size of STL strings

Posted in Coding, Technical Guidance on July 18th, 2012 by MrCranky

This post is one of those “I couldn’t find it when I was Googling, so here’s a succinct description of the problem / solution so other people can avoid the same round-about research.”

Symptom:

You have one bit of code (perhaps a library or a DLL) which thinks that sizeof(std::string) is 28 bytes, and another bit of code which thinks that it is 32 bytes. In Release mode they both agree that the size is 28 bytes. In our case it was actually std::wstring, but both string objects are the same size and exhibit the same problem.

Diagnosis:

You have a mismatch in your configuration between the two projects, essentially you’re trying to mix Debug code and Release code, which is just fundamentally not allowed. This much information is readily available on the Internet with some basic searching, but crucially most of those places don’t tell you the one piece of information you really need: exactly what setting is different? Which one of the dozens of settings that typically differ between Debug and Release is the STL code actually paying attention to?

The real answer lies in the details. It is not a Debug vs Release problem (well it is, but only indirectly). If you’re like me, the first thing you checked was the presence (or absence) of the _DEBUG or NDEBUG pre-processor directives. After all, they’re the defines most often used to get differing behaviour between the debug and release builds. You’ll find however that those definitions have no bearing at all on the size of std::string.

Now is probably a good time to visit this Stack Overflow question which links to good information on the subject.

In fact, the root cause is the presence and value of the preprocessor definitions _SECURE_SCL and/or _HAS_ITERATOR_DEBUGGING. If these are defined and set to 1, then sizeof(std::string) will be 32. If they are defined and set to 0, sizeof(std::string) will be 28.

More troubling is that even if those definitions aren’t explicitly listed in the set of pre-processor definitions, I believe the compiler (the Visual Studio compiler at least) will define them for you, based on its own internal logic. _SECURE_SCL will always be 1 for both debug and release builds, but _HAS_ITERATOR_DEBUGGING will be 1 for debug builds, 0 for release builds (as it has a tangible performance impact). You can explicitly turn off _SECURE_SCL to get more performance if you want, but you should understand the drawbacks before you do so.

I will update this post if I find out more about the internal setup of those definitions, but simply knowing that they are the cause of the size difference is usually enough to get to a resolution. I would certainly recommend adding some logging to both code modules that spits out the value of these two defines so it’s clear to you what the values are on both sides.
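
As a rough sketch, the logging I have in mind is nothing more than the following, dropped into each module (the function name is mine; note that the macros may not be explicitly defined at all, so guard for that):

    #include <cstdio>
    #include <string>

    void LogStlConfig(const char* moduleName)
    {
    #if defined(_HAS_ITERATOR_DEBUGGING)
        std::printf("%s: _HAS_ITERATOR_DEBUGGING = %d\n", moduleName, _HAS_ITERATOR_DEBUGGING);
    #else
        std::printf("%s: _HAS_ITERATOR_DEBUGGING not defined\n", moduleName);
    #endif
    #if defined(_SECURE_SCL)
        std::printf("%s: _SECURE_SCL = %d\n", moduleName, _SECURE_SCL);
    #else
        std::printf("%s: _SECURE_SCL not defined\n", moduleName);
    #endif
        std::printf("%s: sizeof(std::string) = %u, sizeof(std::wstring) = %u\n",
                    moduleName,
                    static_cast<unsigned>(sizeof(std::string)),
                    static_cast<unsigned>(sizeof(std::wstring)));
    }

If the two modules print different values for those macros, you have found your mismatch.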

Resolution:

For most, an immediate solution is to simply manually define iterator debugging to be on or off in both projects so that they are consistent. To do that, simply add _HAS_ITERATOR_DEBUGGING=1 (or 0) to your project’s preprocessor definitions.

You may want to avoid setting it explicitly (ideally you’d simply rely on the compiler defaults), in which case you’ll need to figure out why iterator debugging is enabled for one module but not the other. For that I’m afraid you need more information about how the compiler decides to set those defines, but presumably another one of your project settings is indirectly making the compiler decide that iterator debugging should be enabled or not, and it is that setting which is different between your two modules.

In defence of object orientation

Posted in Coding on October 22nd, 2011 by MrCranky

So, rather randomly, I was discussing with @PetMac the merits of a particular engine being split up into multiple libraries. We’d suffered from the other extreme: one gigantic project that contained everything (and had several-minute link times as a result). I opined that it was, by and large, a good thing, even if it was inevitable that a lot of time be spent splitting off chunks of functionality and pushing them up or down the hierarchy of libraries to avoid circular dependencies. The alternative being of course that libraries end up tightly coupled, and even though they are two separate units, they are effectively indivisible. That is, not quite spaghetti code, but certainly a tightly snarled up ball of functionality that it would take many man-hours to pull apart. And as soon as libraries start to stick together like that, the rot sets in quickly; one reference turns to dozens, and even if it might have been possible to separate them again before, it isn’t feasible any longer. I think (and he can correct me if I’m misstating his opinion) that Pete agreed on that front.

Why is that relevant to object orientation? Well, because the means by which you most commonly ‘fix’ a circular dependency is to abstract one side so that it becomes unaware of the other. So your character system knows about your audio system because characters carry around audio references; but instead of your audio system being aware of characters (so that they can, say, position sounds), you rewrite your audio system in terms of ‘character-like things’. Or more cleanly, ‘things with position’. Because that’s all the audio system really needs to know about. In an object oriented system, you’d use an interface definition to say that characters can satisfy the definition of a ‘thing with position’; of course that’s not the only way to achieve the same goal, but it’s certainly a nice easy way to do it. What’s important is that the library has a nice clean interface, that expresses exactly what it needs, in a succinct way. Ideally, it is also written without explicit knowledge of any other libraries. Having a clean and clear interface is what helps you keep that lovely de-coupled code, and lets you re-use it elsewhere.
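
As a minimal sketch of what that looks like in practice (all the names here are invented for illustration, not taken from any particular engine):

    struct Vector3 { float x, y, z; };

    // Part of the audio library's public interface: the only thing it knows about.
    class IPositional
    {
    public:
        virtual ~IPositional() {}
        virtual Vector3 GetPosition() const = 0;
    };

    class AudioSystem
    {
    public:
        // Positions a sound on a 'thing with position', whatever that happens to be.
        void AttachSound(int soundId, const IPositional& source)
        {
            // A real implementation would track 'source' and update the voice
            // from source.GetPosition() each frame.
            (void)soundId; (void)source;
        }
    };

    // Lives in the character library, which depends on the audio library,
    // not the other way around.
    class Character : public IPositional
    {
    public:
        Character() { m_position.x = m_position.y = m_position.z = 0.0f; }
        virtual Vector3 GetPosition() const { return m_position; }
    private:
        Vector3 m_position;
    };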

Personally, I’ve never had a problem with using interfaces or other object-oriented mechanisms. But recently Pete has been trying to persuade me that object orientation is the dark side, and that our code would be much better if we only thought about things in terms of data transforms. There’s been a lot of eminently sensible stuff written on it, including stuff by @noel_llopis over on his blog, and by @TonyAlbrecht in a talk for GCAP. I’ve read their pieces, and don’t really disagree with most of it. If I have an issue at all, it is that their concerns about OO (and C++ specifically) primarily relate to performance, and when I’m coding, performance is only one factor; an equally pressing factor is how easy the code is to write and maintain.

Here’s the thing though; object-orientation can be really bad for performance, sure. And used badly, it can be really bad for design as well. C++ has a whole lot of cruft that means that expressing the code design you want, without locking yourself into bad assumptions, is hard. Not impossible, just hard. But there are a whole lot of code design needs I have which are very hard to satisfy without the basic features of C++. Interfaces and polymorphism straight off, and probably more. Really though, my problem lies with anyone that tells me that we should all go back to using C instead of C++, because it will avoid all of that bad stuff. Well, sure. I could go back to writing in assembly and never worry about variable aliasing as well, but I’m not going to. I’ll use C-style interfaces when they help, and C++ when they help, thank you very much. Whatever gets me the simplest, cleanest, most maintainable interface, that still lets me do the work.

I have no doubt that using C-style library interfaces would avoid a lot of unnecessary object-orientation. @PetMac is trying to persuade me though that a C-style interface is just plain better, and not only that, but that the inputs and outputs should only be structures defined in the library interface. So an audio transform would be ProcessAudioEmitters, and if you want to process a bunch of positional audio emitters, one for each character, you have to marshal an array of audio emitter structures, and copy the position from your character into its audio emitter. Which doesn’t sound so terrible, if it leads to a cleaner interface. I’d probably be fine with that. At a simple level, for core systems like audio or rendering, where the inputs and outputs are clear and rarely change, I think that would probably work well. Best of all it makes the audio library completely independent – it knows nothing of the things that it’s working with, except the data the other systems choose to feed it.
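
Sketched out, the ‘push’ style looks something like the following. Again the structure and function names are invented for illustration; the shape is what matters: the library defines the only data it will accept, and the game marshals into that shape.

    #include <cstddef>
    #include <vector>

    // Defined by the audio library: the only input it understands.
    struct AudioEmitterInput
    {
        float    position[3];
        unsigned soundId;
        float    volume;
    };

    // The library's data transform; the body lives inside the library.
    void ProcessAudioEmitters(const AudioEmitterInput* emitters, std::size_t count);

    // A stand-in for whatever the game's own character representation is.
    struct GameCharacter
    {
        float    x, y, z;
        unsigned footstepSoundId;
    };

    // Game code marshals its characters into the library's terms every frame.
    void UpdateCharacterAudio(const std::vector<GameCharacter>& characters)
    {
        std::vector<AudioEmitterInput> emitters(characters.size());
        for (std::size_t i = 0; i < characters.size(); ++i)
        {
            emitters[i].position[0] = characters[i].x;
            emitters[i].position[1] = characters[i].y;
            emitters[i].position[2] = characters[i].z;
            emitters[i].soundId     = characters[i].footstepSoundId;
            emitters[i].volume      = 1.0f;
        }
        ProcessAudioEmitters(emitters.empty() ? 0 : &emitters[0], emitters.size());
    }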

My problem comes when I consider how I would make that approach scale to all the other systems I need to build. The example I posed to Pete was one of an AI system. To use Pete’s preferred paradigm, and think of data transforms, the AI system would be a DecideWhatToDo transform. Great. What are the inputs and outputs? Well, that depends on the AI. One type of AI might want just a list of character types and positions. Another might want to know about the environment as well. Smarter AI might want character positions, damage levels, movement histories, factional allegiances; as well as the ability to co-operate with other characters. The outputs of the AI are just as bad: they can affect everything from the desired target position to queued animations; in fact pretty much anything a character can do might be an output of the AI.

I would describe Pete’s system as a ‘push’ system. Everything the system needs has to be fed to it explicitly, in terms it can understand. The problem with push systems though is that when the number of inputs goes up, the amount of code you have to maintain just for the marshalling of the push grows with it. You find yourself implementing the same code several times: you add the notion of damage to the character, then you have to add the ability to marshal the damage information into a structure the AI system would understand, then you have to add the notion of damage to every single AI interface that wants to know about damage. And in a system with dozens of different sorts of AI, that might be a lot of interfaces.

To me that smells wrong. It means that you’re baking implementation details (like AI ‘X’ cares about damage) into the interface. Conversely, the ‘pull’ system stays relatively clean. You simply pass the list of characters to the AI, or the environment, and allow the AI system to ask the character interface for only the data it needs. Characters might provide a vast array of query-able state, and the AI can pick and choose what it asks for. Of course this comes with a down side. The AI system now has to have knowledge of the character system (or at least, provide abstractions which the character system can fulfil). It’s no longer truly independent. The performance impact of lugging over the entire character object, when perhaps you only want to access a few small parts of it, is very real. But in terms of the ability to write clean code, without a massive amount of interface book-keeping, it’s a big win. That said, I’m open to persuasion. If someone can describe to me how they would write a succinct AI library interface in a C-style, for a few dozen varied and sophisticated character AI, without giving the AI library knowledge of the character interfaces, I’d be happy to change my point of view.
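
For contrast, here is the ‘pull’ style I’m describing, with the same caveat that every name is invented for illustration: the AI library expresses what it might want to ask, and each brain pulls only the state it cares about.

    #include <cstddef>
    #include <vector>

    struct Vector3 { float x, y, z; };

    // The abstraction the AI library asks its questions through.
    class ICharacterQuery
    {
    public:
        virtual ~ICharacterQuery() {}
        virtual Vector3 GetPosition() const = 0;
        virtual float   GetDamageFraction() const = 0;
        virtual int     GetFaction() const = 0;
        // ...plus whatever other state characters choose to expose.
    };

    // 'DecideWhatToDo', pull-style: each brain picks which queries it makes.
    class IBrain
    {
    public:
        virtual ~IBrain() {}
        virtual void Think(const std::vector<const ICharacterQuery*>& everyone) = 0;
    };

    // A simple brain only pulls damage and position; a smarter one can also pull
    // faction, movement history and so on, without the interface between the two
    // systems having to change.
    class FleeWhenHurtBrain : public IBrain
    {
    public:
        virtual void Think(const std::vector<const ICharacterQuery*>& everyone)
        {
            for (std::size_t i = 0; i < everyone.size(); ++i)
            {
                if (everyone[i]->GetDamageFraction() > 0.5f)
                {
                    Vector3 threat = everyone[i]->GetPosition();
                    // ...steer away from 'threat'...
                    (void)threat;
                }
            }
        }
    };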

There will be those who say that if your structures are that complex, you’ve already done something wrong. That’s very idealistic thinking. The simple fact is that we are often writing fantastically complex simulations. Sometimes the ‘pure’ systems that you’d need to build to support the level of complexity the design calls for are just far more effort than the benefits they would give. When it comes down to it, we need to write code effectively more than anything else. We need to be able to code quickly, cleanly, and flexibly; especially when the game design is changing quickly as well. It’s of no benefit at all to spend months building a fantastically clean engine to support one game design, only to find that in the time it took you to build it, design changes have rendered it obsolete.

To sum up, because I’ve gone on for a long time: the one thing I like less than being accused of ‘drinking the OO kool-aid’ is the notion that there’s only one right way to do things. As a coder, you should be constantly and critically evaluating all your systems and interfaces. Sometimes a data-oriented approach is better: consider the purity of the interface and the vastly improved ability to parallelise and minimise your memory accesses. Other times the structures and inter-dependencies are simply too complex, and object orientation is the most effective tool at keeping your code clean and versatile. I won’t claim to always get it right (as Pete and Tim have both at various times pointed out, I tend to over-structure my code), but I’d hope I always aim for clean code as best I can.

Indie Development vs Modding

Posted in Coding on June 28th, 2011 by OrangeDuck

There are two main areas where amateur game development happens. The first is the Indie scene, which encompasses most forms of game development done by a single developer. This can mean cheap action games on Steam, cult hits like Minecraft and Braid, Flash developers working on Newgrounds, or mobile and smartphone developers. These games are often simple, 2D, and tend to be creative or puzzle-type games with accessible graphics.

The second area is the modding scene. This consists of unofficial add-ons, changes and modifications to games, called mods. The modding scene has existed since PC gaming began. It has had wavering popularity, but game developers such as Epic with the Unreal series still herald their games’ “moddability” as a selling point.

The interesting thing about these two scenes is their complete isolation from each other. It sometimes even goes as far as hostility. Modders can see indie developers as stuck-up and pretentious – working on mediocre puzzle platformers with pixel graphics. Indie developers can see modders as simple fan boys, making Call of Duty machinima videos set to “let the bodies hit the floor” and yet more tedious realism mods.

Of course, in reality, neither of these is exactly true. My background is the modding scene, so perhaps I have a natural bias toward it; but I’ve been lingering in the indie gaming scene, and increasingly I see the void between the two cultures as doing real damage.

 

Artists and Programmers

One of the major differences is that the indie scene is largely made up of programmers – often contracting out artwork for their games. The modding scene, on the other hand, has an abundance of artists, across all skill levels, willing to get their hands dirty.

It isn’t really the obvious benefit that could be gained from a more balanced skill set that bothers me most. What I find most annoying is the disjointed and boring artistic direction in both scenes. In the modding scene I don’t want to see another Star Wars mod, another Lord of the Rings mod, another Graphical Enhancement mod – and this is coming from someone who loves all these things like no one else.

In a similar way, in the indie scene I’m so bored of pixel graphics, cartoon graphics, crappy-looking 3D games and terrible assets.

In both scenes we have very uninspired and boring artistic direction, with poor technical features due to the tiny number of graphical programmers working with, or being, artists.

More collaboration, and more sharing of ideas and passions, would be amazing!

 

Individuals and Teams

For the modding scene the de-facto standard is to bring a team together to put out your mod. This is seen as essential for anything large at all, and timescales are assumed to be as short as possible. Indie development is the opposite.

I’m not going to discuss whether team development or individual development is better or worse. That is one for another time. They clearly have their strengths and weaknesses. Team development is all but useless without someone in charge who is organised and knows what they are doing, while individual development is powerful but suffers from burnout.

I think both sides just need to see (or experience) the benefits of the other. Indie developers seem reluctant to share their vision with individuals they don’t trust, missing out on the benefits of a shared workload and of new and interesting contributions. Modders set their sights too high, assuming the team will carry the weight, only to be let down by the unreliability of others.

More importantly though, I think there needs to be more communication from those with successful and released modding projects and indie games – giving insights into what is needed to finish a product.

 

Money

For many indie developers, making games is how they make their living, and the idea that they are developing for money is a no-brainer. Modding, on the other hand, is almost always free and has a feel about it akin to the open source community.

The result of this is that modding can be more fun – you don’t feel the pressure, and legal complications are greatly reduced. The problem, of course, is that there is a huge uproar when someone wants to charge for their mod – they are often benefiting from a great deal of previous development by other teams, in the form of tools, tutorials and tips. There are strong feelings of betrayal and greed.

The ideal situation would be that modding remains fun, with reduced legal issues, but that developers are more motivated by the potential of making money. The modding community needs to have a serious think about this if it wants to progress, and there are lessons to be learnt from indie developers. I don’t want to make games if it isn’t fun, but I, like everyone else, am sick of failed projects.

 

Unity

So this is my modest proposal: A community for indie developers and modders to get together and find ways of working on projects that everyone is excited about.

Software Engineering Methodology versus The Real World

Posted in Coding, Random Stuff on November 12th, 2010 by MrCranky

It’s often the case in the industry that people will do research on a particular software engineering methodology, or a team will publish a post-mortem in which they talk about a particular style of working and how successful it was for them. And the discussions following those posts will usually descend into an argument, with different people chiming in on how they tried that methodology and it was rubbish, or how their own methodology (or ideal methodology) is better.

This sort of debate annoys me, because it’s always couched in absolutes. In software engineering there are no absolutes. So I felt I had to respond when someone declared, without much in the way of context:

Asserts() should always be on during development.

No they shouldn’t. At least, not unless your team ethos supports it.

Just like all of these statements about how things should or shouldn’t be done, there is a whole bunch of context needed before you can say whether or not a strategy is successful or not. You can’t just look at those stats and say “TDD is the way to go”, or “asserts should be everywhere and always on”.

Every last one of these tenets of development requires a particular way of working before it is viable and/or usable. Asserts are great, as long as the team ethos is to never (or nearly never) allow them into a live build. What do you think you get if you just turn asserts on everywhere, when the build is riddled with conditions which aren’t show-stoppers, but which result in asserts? You get designers and artists that can’t use the build any more, and everyone gets pissed off.
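
For what it’s worth, the mechanical side of this is trivial; a sketch along the lines below (the macro and the build flag are my own invented names, not any particular engine’s). The hard part is the team ethos around when the flag is on and what is allowed to assert, not the macro itself.

    #include <cstdio>
    #include <cstdlib>

    // Coder builds define ENABLE_ASSERTS and get hard-stopping asserts; designer
    // and artist builds leave it undefined, so a non-fatal condition doesn't
    // lock them out of the build.
    #if defined(ENABLE_ASSERTS)
        #define GAME_ASSERT(cond, msg)                                   \
            do {                                                         \
                if (!(cond)) {                                           \
                    std::fprintf(stderr, "ASSERT failed: %s (%s:%d)\n",  \
                                 (msg), __FILE__, __LINE__);             \
                    std::abort();                                        \
                }                                                        \
            } while (0)
    #else
        // Compiled out entirely; the condition is not evaluated.
        #define GAME_ASSERT(cond, msg) do { (void)sizeof(cond); } while (0)
    #endif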

Similarly, what do you get if you have a team which is notionally doing TDD, but in fact many of the developers aren’t structuring their code to support complete tests, or have incomplete test coverage where it counts? You get slower development, without a great decrease in the number of defects, and now you’ve got less time to fix them when it comes time to ship.

People should stop looking for one-stop, quick-fix solutions to the problems which all development teams have. Every last one of these solutions will a) make things worse if applied in a half-arsed way, and b) only pay off if it stems from an underlying mentality of “what can we do to improve maintainability, increase coder efficiency, and smooth out problems in our day-to-day development?”

Sure, read the stats, read other people’s techniques, but for the love of Mike, don’t try to just stamp a particular technique on your development team and expect it to improve your lot. Instead, take a long hard look at your day-to-day process, identify the root of the problems that your team actually has, and take small incremental steps to fix those problems. Repeat ad-infinitum (or until you ship).

More than anything though, make sure any steps you do take work with the team you currently have, not with some ideal team that you’ve read about. If you find yourself applying a solution which will only work if everyone has a certain mentality or set of skills, then you’d better make damned sure that your team has those things before you try to apply the fix.

MSDN

Posted in Coding on August 3rd, 2010 by MrCranky

Finally gotten around to putting VS2010 on this machine, and this time around I’m breaking with tradition and simply not putting MSDN on there at all. It used to be a no-brainer: put the reference libraries on, as you’re going to be looking stuff up all the time. But these days it’s always more an exercise in frustration than a useful tool.

Many topics are just “not there”. Huge swathes of really basic stuff are just missing (basic date formatting string specifiers – that’s pretty low level!), so that when you navigate to them it tells you the page is missing. Go to the online MSDN reference and you’ll find the page, just not in the locally installed copy. I thought to begin with it was just because I’d installed it badly, but even from a clean install it was still not there. I’ve since concluded that the Express versions must ship with only a subset of the full documentation, to keep the downloads small. It’s certainly not a functionality split – as if they were only putting in help topics for things in the Express editions – because the Express editions are really quite close to fully functional. No, this is stuff that’s core to .NET and the language.

So, since I’m having to fall back on searching the internet anyway, I figure I might as well have my hard disk space back. The online resources available now are fantastic anyway, and it’s rare that I’m not connected when developing. Most often it’s the online MSDN references that show up first in the search, so in the end it’s much of a muchness – except I don’t have to use the horrible HTML help interface which has been getting steadily worse with every revision of Visual Studio.

I guess this is just another nail in the coffin of the disconnected computer: so many things now expect/assume/require you to be connected to the Internet. Which wouldn’t be so bad, but even with 3G connectivity, a network connection while on the move still isn’t something that can be taken for granted. But I’ll stop grouching about it like an old man, and go with the flow…

True, dat

Posted in Coding on March 22nd, 2010 by MrCranky

Picked this up when reading through some old posts on the sweng_gamedev list; it had to be shared.

On 11 December 2009, Fabien Giesen wrote:
Abstraction provides leverage. This is well understood in one direction and not so well in the other direction.
The power of abstraction is that I can do with one line of code what might take me 100 lines otherwise. The problem is that I’m now writing code one 100-line-equivalent at a time :). Any conceptual flaws or minor misunderstandings present at the level I’m working on are amplified by a factor of 100 by the time the machine gets to see the code. This is a crucial thing to understand when working in a team, where the user and the designer of a module aren’t necessarily the same person.
This is so true. While I’m all for abstraction and making your code clean and high level, you really, really have to be aware of what that means.

CruiseControl.NET / Custom Plug-ins

Posted in Coding on December 23rd, 2009 by MrCranky

I must admit, I’m a bit of a CruiseControl.NET fan. It drives most of our automation systems, and provides us a backbone from which we can hang many different systems. I use it both here at the Company, and when I’m doing tools consulting with other teams. That said, it’s not without its flaws and limitations; not least of which is the lack of documentation on custom plug-ins.

There are a wide variety of pre-built plug-ins already available, for source control systems like Perforce and Subversion, and build systems like NAnt and MSBuild. These usually work pretty well straight out of the box, if you’re building a vanilla continuous integration server. That is, every change made to source control results in a build of some sort. I’m not going to dwell on why that’s a good idea (it is), I’m only going to say that there are times when you need something different. There are inelegant ways around this, but in truth CC.NET has all of the customisability to let you do this properly within the system – by defining your own plug-ins.

Reading the documentation on CC.NET, you get only a single page of documentation on custom plug-ins, and that page is pretty spotty at best. It shows you a “hello world” plug-in, with no clues as to what information you get into the plug-in, when your plug-in’s methods will be called, or how you’re supposed to pass either status information or logging back to CC.NET itself. In short, it’s pretty useless, other than to highlight that the facility is there, and give you enough of a pointer to get you into the code and poking around.

Sure enough, with auto-complete on, you find that there is a lot of information passed to you, and by debugging and breakpointing inside your plug-in, you can get a feel for when your code is called. Making CC.NET load and unload your plug-in is refreshingly easy and fairly robust, and once you figure out how to pass parameters to your plug-in by specifying them in the config file, you can see the possibilities open up. More important than the sample build plug-in, though, is that through a bit of digging you find you can also define your own custom source control plug-ins, and with the combination of those two things, you can do pretty much anything you want.

In one particular situation, I’ve been working with a Perforce (p4) source control system, and a rather black-box build system for the game itself. Rather than wanting a build made of every single commit to the p4 repository, I needed it to make only particular builds – those marked as verified by the developer’s internal test team. This is a pretty common situation when you’ve got an automated build system – you have a raw source control system that lies underneath, which operates at the level of atomic commits. But above that is a logical structure, which only people understand – that operates at the level of ‘nightly builds’ and milestones. You have custom logic which you can apply to the system, using some basic rules. So in essence you have a virtual source control system, built on top of the raw version. By writing a custom source control plug-in for CC.NET, you can expose an interface to your automated build server, so that it recognises when something new is available from that virtual source control, and only builds exactly what you want.

CC.NET offers great flexibility, and the people who develop it know all about that flexibility. But information on what you can do, and how, is rarer than hens’ teeth. So over the next few weeks, I’m going to write up and publish here some examples of real-world plug-ins that I’ve written. That should hopefully give readers enough context to go off and write their own plug-ins, to suit their own needs.

Disclaimer: all of the points made here refer to CruiseControl.NET 1.4.x, not the later versions of the system. There are some big and eagerly awaited advancements in the newly released 1.5 version, that many people like myself will have to avoid for now, until it’s bedded in.

JamPlus

Posted in Coding, Tales from the grind-stone on February 4th, 2009 by MrCranky

Amongst various things I had to sort out today, I was asked to write out a blurb for a potential client about improving build processes and automating/scripting things in the development pipe-line. It’s a subject I get quite passionate about, because unlike so many things in games development, it’s a nice task to do. There are clear, quantifiable goals (“make creating a build a one click process”, “speed up turn-around times for artists by 50%”), and usually plenty of options about how to get there. It is also a nice, self-contained task that you can just wade into and make progress on, unlike for example gameplay coding, where you can often get blocked on feedback from the creative team, having to rework things and so on.

I think that’s possibly why I like to spend time on improving our pipe-line at the weekends or in my off-time; even though I could spend more time on the big pile of client work that needs done, I find myself tackling little bits of our own pipe-line because I know it’s a task I can get done without any other input.

On that note, with some more collaboration with the developers on some little niggles, we finally switched our creaky old makefile based system over to using JamPlus properly. Both build processes still run side-by-side, but the JamPlus version has a fraction of the number of lines in the makefile, runs much faster doing dependency checking etc., and in general is much cleaner and will be more maintainable going forward. I’ll have to walk the guys through what’s there so they can maintain it too, but after that I should be able to scrap the makefiles altogether.

Next step is the art/audio asset to platform binary conversion process, and this is why I really wanted to switch over to JamPlus. Our previous art pipeline would always rebuild platform assets, even if the source assets hadn’t changed. That was fine early on, when all of our tools ran lightning fast and we had few source assets, but very quickly it grinds when you introduce slow tools (such as our font encoding tool that does smart packing of glyphs and colour conversion), or many assets. Also the build scripts which make those assets are all Lua based, and so we have different technology for building code than for building art and audio. I’m pretty hopeful that we can make JamPlus fulfill both functions, and in the process get fast dependency checking for our art assets so that only the assets that have changed get rebuilt. But for that I’ll need a free day, and those are few and far between right now.

