Adventures in Android In-App Billing
Posted in Tales from the grind-stone on August 28th, 2014 by MrCranky
Enough said.
After the kerfuffle with Heartbleed earlier in the year, and finding out our server installation was way out of date, I resolved to keep it more current. That meant upgrading from 12.04 to 14.04. It was not as painless as previous upgrades, sadly, and left me with three notable problems. I'm posting my notes here in case they're helpful to anyone else upgrading. Basically our server box acts as a DHCP server, file server (using Samba) and gateway for the internal network, as well as hosting a couple of websites which we use both internally and externally. After upgrading, I noted:

1. Samba errors in the logs (a buggy PAM module, as it turned out)
2. The DHCP server ceasing to hand out leases
3. Our two Apache alias sites returning 404 errors
I'd had to merge a few configuration files during the upgrade, of which the Samba and dhcpd ones were two, so my initial thought was that I'd botched the merge. However, there wasn't anything obvious in the merge results that would explain the problems. Anyway, issue by issue:
The Samba errors were the most easily resolved. This post points the finger at libpam-smbpass, a Samba module which seems to have an outstanding bug. Fine; it's not functionality we rely on, so uninstalling the module makes the problem go away:
sudo apt-get remove libpam-smbpass
The DHCP failure didn't bite me until mid-way through diagnosing the Apache issues: my DHCP lease ran out, and suddenly the laptop I was using to remote into the server was without network. Super annoying. Setting the laptop's IP address / DNS manually got me enough connectivity to search for solutions; otherwise I would have been guessing blindly from the server terminal (which doesn't have web browsing).
While there wasn't anything hinting at a DHCP problem in the logs, I did note a line at server reboot time along the lines of the following:
Apparmor parser error for /etc/apparmor.d/usr.sbin.dhcpd at line 69 Could not open /etc/apparmor.d/dhcpd.d
I couldn't find that line anywhere in my logs, but I probably wasn't looking in the right place. The configuration file it complains about is basically trying to #include the dhcpd.d subfolder, and failing. Still, it suggested some sort of permissions or configuration problem with AppArmor and dhcpd. Oddly though, DHCP had been working after the upgrade the afternoon before, and I could see successful DHCP negotiations going on from this morning, but it all ceased an hour or two before my DHCP lease expired. All of my searches were throwing up results for the package isc-dhcp-server, though, whereas I was pretty sure the package we had was dhcp3-server. On checking, isc-dhcp-server was indeed not installed.
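A package listing is enough to check that sort of thing; something along these lines will show which DHCP server packages are actually present:

dpkg -l | grep dhcp

Installing the proper package: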
sudo apt-get install isc-dhcp-server
Lo and behold, DHCP was functional again, using our already-existing configuration. So I'm guessing that the packages on our legacy machine (upgraded using do-release-upgrade from 10.10) weren't properly handled by the release upgrade procedure, and were left with folder permissions set incorrectly, which installing the correct DHCP server package fixed.
Finally, the Apache problem. Ubuntu 14.04 brings with it a fairly major upgrade from Apache 2.2 to 2.4. While the web server was still functional, and I could access pages sitting directly under the DocumentRoot, our two sites set up using Alias directives were no longer accessible; both returned 404 errors. Using a symbolic link in the filesystem under the DocumentRoot would allow them to be accessed, but that wouldn't allow us to enable/disable the sites at will. While there are changes to the permissions system in 2.4, we don't use those with our sites. So, all very odd.
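(For reference, the enable/disable mechanism I mean is Ubuntu's usual a2ensite / a2dissite helper pair, which just toggle symlinks in sites-enabled; the site name below is a placeholder, not our real one.)

sudo a2ensite oursite
sudo a2dissite oursite
sudo service apache2 reload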
Our setup was very simple: each site had a configuration file containing only a single Alias line, remapping the appropriate site URL to a folder on the local disk. Further experimentation showed that we could move the same Alias line into the default site configuration and have it work. It then gave a 403 Forbidden error rather than a 404, and adding an appropriate Directory element with a “Require all granted” directive inside fixed the 403. So presumably the default permissions for an aliased directory have changed to deny by default instead of grant.
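The working configuration ended up looking something like this (the site name and path here are placeholders rather than our real ones):

Alias /oursite /srv/www/oursite
<Directory /srv/www/oursite>
    Require all granted
</Directory>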
From that I can only conclude that Apache 2.2 was more forgiving of Alias directives standing alone in their own site .conf files, for whatever reason. I'm probably missing some nuance of the setup as to why it worked before. Rather than spend too much time figuring it out, I'm just going to go with having the sites as part of the main site, instead of as sites of their own.
One of the most interesting things about the field in which I work is the sheer range of topics I get to work on. Not just different platforms or different languages, but the actual subject matter of the projects. The project I've just completed, again working with Eutechnyx, managed to exercise parts of my skillset that I haven't had to use for a while. Sometimes, working in games, you find yourself solving ostensibly the same problems over and over. Different skin, different IPs, different engine, but really it's the same fundamental concepts and functions that you're re-implementing in a new project.
So when you get challenging problems, it's really quite refreshing, because you have to go back to first principles of analysis and visualisation techniques to solve them. Where you're presented with an overwhelming amount of information, and have to design and implement something which wrangles that raw data into something coherent. Where you have to figure out a way of presenting that information so that it is visually compelling and conveys it in a useful, concise form. They are fundamentally hard problems, sometimes solvable, sometimes not, and finding the right solutions, if they exist at all, takes a level of concentration somewhat higher than that usually required to bring our games to life.
This is the state of mind I've been in for the last few months. For confidentiality reasons I can't say very much about the project itself (it's not the lovely visualisation linked above, of course), but I wanted to talk about the satisfying nature of developing visualisations in general.
Often when you're developing code, your debugging tools are limited to logging and step-by-step debugging. But for complex data sets, especially those dealing with spatial data, it's far more useful to display that data visually. The same is true for end users: a sequence of numbers means very little, and a dumped spreadsheet of data, while accurate, doesn't let you see shapes or patterns. Turn those numbers into 2D graphs, and you can discern patterns, noise and trends. But that still may not be enough. You can make a line graph of each component of a 3D position that changes over time, but in 2D those graphs make little sense. Allow it to be viewed in true 3D space, however, and suddenly you can see the shapes. But a line covering every point that 3D position visited tells you nothing about how quickly it moved between those points. So you introduce animation over time, or colour, and suddenly the data makes sense.

When writing processing code, a visual representation of the outputs lets you pick out flaws that would otherwise be hidden: spikes where there should be smoothness, patterns where there should be only random noise, correlations that hint at relationships you didn't realise existed.
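To make that concrete with a toy example (just an illustration in Python/matplotlib, nothing to do with the project's actual code or data): take a recorded 3D path and colour each point by how fast it was moving there, so that shape and motion become visible in a single view.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the '3d' projection on older matplotlib)

# Stand-in data: a helix traversed at increasing speed, a proxy for real recorded positions
u = np.linspace(0.0, 1.0, 400)
t = 4.0 * np.pi * u ** 2  # accelerating parameterisation, so the speed varies along the path
pos = np.stack([np.cos(t), np.sin(t), 0.1 * t], axis=1)

# Approximate the speed at each sample from successive positions
speed = np.linalg.norm(np.diff(pos, axis=0), axis=1)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
# Colour each point by how fast it was moving there: shape and motion in one view
sc = ax.scatter(pos[1:, 0], pos[1:, 1], pos[1:, 2], c=speed, cmap="viridis", s=4)
fig.colorbar(sc, ax=ax, label="speed (distance per sample)")
plt.show()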
Of course it isn't as simple as layering in more and more information. With too much visual clutter, it becomes impossible to discern any useful patterns in the data. So knowing which data is important becomes vital; filtering to show only what is relevant lets you present information where it is needed, and hide it when it is not.
This, again, is one of the reasons why I'm so enthused about VR development. Visualising spatial data depends very heavily on good camera work: you have to be able to look around the visualisation. If the camera is out of your control, then you're utterly reliant on however that camera was authored, and if it is poor, you might as well have a 2D visualisation of the data, because you need some useful spatial context to process what you're actually seeing. It's for that reason that many optical illusions relying on a particular perspective are defeated by moving your viewpoint; if you could only view the illusion from a static perspective, you wouldn't be able to tell it was an illusion at all.
So the introduction of VR gives us great flexibility of viewpoint, without requiring the user to learn a cryptic set of control inputs to gain full 3D control over their view. That opens up great possibilities for exploring even more complex datasets and 3D structures. We're living in an age where technology has advanced massively, and the integration of computing into our everyday lives has produced masses of new information, in overwhelming amounts. Being able to visualise and process that information is the first step to making use of it, and we'll need new tools and tricks to do so.