
Editor’s Letter : A little help please

Hail all Geeks

Hi guys,
It’s with great pleasure that I bring you issue 2 of GeekDeck. I was concerned at how this whole thing was going to work out, and whether I’d be able to fill the magazine with an average of 10 articles each month, but so far it’s going just fine. We still have much to talk about, and there are many ideas for features and interviews boiling away in the backs of our minds, which I hope will fill you all with wonderment for a long time to come.

I’d like to say a huge thank you to all of the contributors. It’s not easy having me on your back for an entire month, and I have to say I did give a fairly tight deadline of about 2 weeks for the first issue’s articles, hence a lot of issue 1 was written by me. I was prudent in storing up a good few articles, just in case I needed them. As I sit here at the train station, three weeks away from the release of issue 2, I already have 2 articles, a sign-off and this letter in the bag. So I should be ok for issue 2 too.

There are some really great articles coming up. However, what I’d like to do is ask for a little help from anyone reading this who feels they may have the right skills, or knows someone who does. The GeekDeck team is in need of a few things right now. The first is an audio cut and shut shop. We’re considering doing a podcast, which would be the main contributors talking about each other’s articles and giving their views on the previous issue. Several of the contributors are really keen to do this, myself included, but I just don’t have enough time to sit there and cut up the audio, insert funky music and so on.

The second thing we’re after is possibly someone to help lay out the PDF version. Currently this is done using Scribus. “Woah!” I hear you cry. “That sounds like hard work.” Well, actually… no. Granted, it does mean I am limited in some respects, but laying it out in Scribus is actually working out really well. This way I can get an idea of how things are going to look quickly and without any expensive software. However, if there is anyone out there who is willing to give us a hand in this respect, we’d welcome them with open arms.

Finally, we want to spruce up the GeekDeck website a little. This involves a measly $15 for the WordPress upgrade, but along with that, we need someone who can design us a buttkicking website to work with WordPress. Anyone who feels they are up to the job, send us a portfolio.

So I hope you enjoy Issue 2. It’s been fun writing for it, and we always want to hear your comments and feedback, good and bad; it helps us to do better for you.

Thanks

cbx33

PS: you can now follow us on Twitter: http://twitter.com/geekdeck

Walkabout: Ken Vandine

Og

If you’re a GNOME user and happen to attend open source events, chances are that you’ve either met or seen Ken Vandine, and how could you miss him? Founder of the Foresight Linux distribution and long time contributor to open source projects in general, he is the most upbeat and easy going person you’ll ever meet! Ken and I met when I moved to North Carolina and was looking for a job. Fast forward 3 years and he is still the same affable person who spends way too many hours working on his projects and getting close to no sleep. GeekDeck managed to get a hold of him via IRC long enough to ask him a few questions and kick start this new column, Walkabout! So take a walk with Ken and me if you will and get to know him a bit more.

GeekDeck: Tell us a bit about yourself and what influenced you to get started with Open Source?

Ken VanDine: Community… I am a pretty social guy, and I really loved the inviting nature of community participation. I think that is why the GNOME Love initiative is so important to me, I want to show others how they can get involved and really make a difference. Open Source isn’t just about the software, it is about people too.

GeekDeck: You are responsible for spearheading and maintaining the Foresight Linux project. Why did you decide to create your own distribution and what was your immediate objective?

Ken VanDine: I have been a GNOME user for many years now and probably really got more involved in GNOME advocacy just a couple years ago shortly before I created Foresight. I realized there was some very interesting work being done on desktop tools, but they were hard to integrate into existing distros. There just wasn’t an easy way for users to see what was on the horizon and how great Linux on the desktop was becoming. So I created Foresight in 2005 to showcase some of these technologies that I thought were going to make a big impact.

GeekDeck: How did you turn your “hobby” distribution into something that could attract other fellow enthusiasts? In other words, what did you do to get people to join the project, and more importantly, what made them stick around?

Ken VanDine: I think it was mostly our friendly, inviting IRC channel. Also the rolling release concept really attracted lots of folks; they like the “new” stuff. I also credit conary a fair bit; the ease of packaging is really unsurpassed, which opened it up to more people that would otherwise not be able to get into packaging.

How can you not like this guy???

GeekDeck: Do you still think that Foresight is achieving its original objective from when you first started the project?

Ken VanDine: Yes and no, actually; some of that really new technology has made it into the more mainstream distros, and the pool of really innovative stuff that isn’t available yet is getting shallower. This is a great thing, it means we are getting better 🙂 Foresight has turned more into getting the software to the users faster, with its rolling release model.

GeekDeck: I had the pleasure of working with you at rPath and the one question I’ve always had is: how many hours do you sleep and do you have caffeine running through your veins? Hehehehe now, seriously, how many hours do you devote to your projects on a daily basis?

Ken VanDine: I generally get very little sleep, not sure how I do it. I usually get to bed around 1am and get up at 7am; when I was with rPath and had to drive to work I was up by 6am… so now I get an extra hour of sleep, yay!

GeekDeck: Early this year you left rPath and started working for Canonical. What can you tell us about the work that you are doing now and what can Ubuntu users expect from the work you’re doing?

Ken VanDine: I am working as an Ubuntu Desktop Integration Engineer, helping the Ayatana (AKA Desktop Experience) and Online services teams get their cool new stuff into the distro. There is some cool stuff being worked on for sure; we just announced the first beta of the Ubuntu One service, and Jaunty included the new notification system as well as the messaging indicator. What comes next should be clearer after UDS later this month.

GeekDeck: What does your involvement with the Ubuntu distribution mean for Foresight users? Do you see yourself passing the Foresight baton to someone else and focusing more on Ubuntu work, or can you see a way of juggling both worlds?

Ken VanDine: I plan to continue working on Foresight as long as I can. So far my new job really has only changed one thing, I spend less daytime hours working on Foresight. I still love Foresight, and I really feel like it is a way for me to express myself a bit. It is molded after what I think the desktop should be.

GeekDeck: What is your idea of the perfect desktop for a home user? What applications, features and specific technologies would you use to create this utopian desktop?

Ken VanDine: I think the real key to the perfect desktop is one that seamlessly integrates your life. Easy to use applications are important, like banshee, tomboy, f-spot, pidgin, etc. But you also need to be able to use those applications effectively in a modern computing era, with many users having more than one computer and an ever growing desire for social technologies. I want to import a photo from my camera on my desktop with f-spot, send it to my mother without opening some web page or another application… let her comment that the picture is great except for the runny nose, see her response later that evening on my laptop, clean up the runny nose and publish it again. For me that is the ideal situation: your data is everywhere you are, and the applications know how to access it. This is why I am so excited about the Ubuntu One service; the infrastructure to do just that is there now… someone needs to just do it. So in short… cloud integration in the desktop; using web services in a web browser is soooo 2005 🙂

GeekDeck: You are a busy father of 3 (last time I checked) and seem to always be present in their school activities. What advice do you have for those of us who are also full-time parents and have to juggle different open source projects?

Ken VanDine: Dedicate your time to the kids first, they are precious and your actions have a huge impact on them. When they go to bed you have plenty of time to hack. My kids go to bed by 8pm (generally), which gives me a solid 3+ hours to dedicate to my hobby (I have an amazing wife!). It does get harder to dedicate time to my projects, now that I have a baby again. His schedule isn’t very predictable yet, he seems to always have an ear infection and trouble sleeping. But again that takes priority; if he has a good night I get plenty of time to work on what I want to, if not I end up holding him, which is great too.


Og Maciel is a QA Engineer for rPath and a long time contributor to the translation efforts of several upstream projects. When not spearheading new projects or communities, he likes to fish, watch ice hockey and spend time with his 2 lovely daughters in Chapel Hill, North Carolina.

Photo supplied by Kevin Harriss

Review : Cherry Picks of the Month: Sliced Bread

Og

This month’s edition of Cherry Picks of the Month has a review of a very special product that will definitely turn many heads. It is one of those products that you proudly claim to be “the best thing since the invention of sliced bread.” At least I think so! 🙂 But first things first:

DISCLAIMER: I am a QA Engineer at rPath, the company behind the technology presented here. The opinions in this post are solely my own, and it is not, in any shape or form, sponsored by my employer.

Now that I have gotten that out of the way, let us dive head first into software appliances and why you should pay attention to them.

Wikipedia tells us that “A software appliance is a software application combined with just enough operating system (JeOS) for it to run optimally on industry standard hardware (typically a server) or in a virtual machine.” In other words, it is a lean and mean (usually?) GNU/Linux machine stripped down to its bare minimum configuration, with a single application (or combination of applications) that does a very specific job. A good example of this would be a network firewall, which is probably the most common type of software and hardware appliance out there.

So what is the big deal, right? Anyone with enough knowledge and time can install an operating system, gut it of all the pieces you don’t need and get it small enough to fit your needs, right? You could then take the application being developed by the other IT guys and manually install it to arrive at a software appliance, right? And if you find out that there is a dependency required to run that application, you can always download the source and compile it to make it work. What? The version you installed is not the same one used by the IT guys? Ok, you can search for the proper version and compile it again, not a problem. And what if a major security update for one of the underlying components becomes available the day you finish building your appliance? Guess you’ll have to download it and install it.

Do you see any problems with building your systems this way? What if you needed literally 1000 of them tomorrow? Do the words “time constraint”, “dependency hell”, and “Seppuku” mean anything to you? If you see yourself nodding affirmatively, then let me introduce you to rBuilder Online, the easiest and most efficient way to build a software appliance, and keep your sanity intact.

In a nutshell, rBuilder Online is a 100% free online “software appliance manufactory” maintained by rPath. A true window into the technology developed by the rPathians, rBuilder Online allows you to create, develop, maintain and (most importantly) deploy software appliances from the comfort of your chair. A very basic appliance can be built and deployed in literally minutes, requiring only the use of your mouse and some simple decisions, such as what platform you want to use (several are currently available, all built with the highly advanced conary package management system), what type of images you want to generate (i.e. ISOs, VMware, Xen, Citrix, EC2, etc.) and what packages to include besides the bare minimum. The Flash animation below shows how I created a product called GeekDeck based on the rPath Linux 2 platform, chose to generate an EC2 image and added the Apache web server as one of its components (if you cannot see it, please visit this link).

Creating a new appliance

The combination of these choices is all put together into what is called a group, a very detailed compilation of all the packages that make up our product, with dependency tracking down to the file level! Let me say that again: every file of every single package that makes up this product, from the very basic component to its kernel and the packages we added atop, is tracked and mapped in the same style a version control system uses. What that means is that for every customer you ship your product to, you will know exactly what files and what versions of those files (and their dependencies, and the versions of the dependencies of the dependencies, and so on and on) are installed on their system. And if tomorrow you rebuild your group and add new content to it, your customers will be able to update their systems and they will get only what you specified in your group! Say goodbye to dependency hell!
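To make the group idea a little more concrete, here is a minimal, purely hypothetical Python sketch (the names and data structures are invented for illustration and are not conary or rBuilder code) of what file-level tracking buys you: because each group version is a complete manifest of packages and their files, diffing two group versions tells you exactly what any customer system will receive when it updates.

```python
# Hypothetical illustration of the "group" idea described above: a versioned
# manifest that records every package, and every file in every package, with
# its version. This is NOT conary/rBuilder code; all names are invented.

group_v1 = {
    "kernel": {"version": "2.6.29-1", "files": {"/boot/vmlinuz": "2.6.29-1"}},
    "apache": {"version": "2.2.11-3", "files": {"/usr/sbin/httpd": "2.2.11-3",
                                                "/etc/httpd/httpd.conf": "2.2.11-3"}},
}

group_v2 = {
    "kernel": {"version": "2.6.29-1", "files": {"/boot/vmlinuz": "2.6.29-1"}},
    "apache": {"version": "2.2.11-4", "files": {"/usr/sbin/httpd": "2.2.11-4",
                                                "/etc/httpd/httpd.conf": "2.2.11-3"}},
}

def group_diff(old, new):
    """Return exactly which files change between two group versions."""
    changes = {}
    for pkg, info in new.items():
        old_files = old.get(pkg, {}).get("files", {})
        for path, version in info["files"].items():
            if old_files.get(path) != version:
                changes[path] = (old_files.get(path), version)
    return changes

# A system updating from group v1 to group v2 receives only this change:
print(group_diff(group_v1, group_v2))
# {'/usr/sbin/httpd': ('2.2.11-3', '2.2.11-4')}
```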

It is well worth mentioning that every single appliance gets a web-based appliance management interface that allows you not only to manage services and configure settings of your system but also to keep it up to date with newer versions of your product. Also, rBuilder allows you to manage the content of your product by promoting it to different labels, giving you control to move it through the different phases of a release cycle, say moving a well-defined group from a Development label through QA and eventually to Production.
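As a rough mental model of that label promotion (again a hypothetical sketch with invented names, not rBuilder’s actual interface), promotion simply re-points a label at a group version that has already been built, so nothing gets rebuilt on its way from Development to Production:

```python
# Hypothetical sketch of label promotion; not rBuilder's real API.
labels = {
    "geekdeck:devel": "group-geekdeck=1.3",  # latest development build
    "geekdeck:qa":    "group-geekdeck=1.2",  # currently under test
    "geekdeck:prod":  "group-geekdeck=1.1",  # what customers run today
}

def promote(labels, source, target):
    """Point `target` at whatever group version `source` currently holds."""
    labels[target] = labels[source]

promote(labels, "geekdeck:qa", "geekdeck:prod")   # ship 1.2 to customers
promote(labels, "geekdeck:devel", "geekdeck:qa")  # start testing 1.3
print(labels["geekdeck:prod"])                    # group-geekdeck=1.2
```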

By the way, the EC2 image I built for this demo can be launched from rBuilder Online. Make sure to have valid Amazon EC2 credentials ready.

I could go on and on about some other cool features that are built in and available free of charge on rBuilder Online, but I’ll stop here and let you do some research of your own. Better yet, you could opt out of using the community version of rBuilder and instead try the brand spanking new rBuilder Appliance. It is like having your own software appliance at your fingertips! 🙂


Og Maciel is a QA Engineer for rPath and a long time contributor to the translation efforts of several upstream projects. When not spearheading new projects or communities, he likes to fish, watch ice hockey and spend time with his 2 lovely daughters in Chapel Hill, North Carolina.

Gaming : Hello, my name is: _’Gamer’_

Mark

I was in a computer store the other day browsing high end graphics cards and the latest games, killing time waiting for a train, when the salesman came up to me and asked in a rather excited manner, “Does one like playing games? Is Sir a hardcore gamer?” Worrying that if I didn’t stop him talking soon, the next few sentences out of his mouth would sound something like “Are there certain games sir likes to play? Does sir like to rub oneself up against the cases, touch them when no one’s looking, OH Suit You Sir!”, I blurted out, “Err, yes, I’m a fairly hardcore gamer”. The salesman then proceeded to squeak something else at me, but I didn’t hear him as what I’d said got me thinking. How do you define ‘hardcore’? How do you define ‘casual’? Is there a middle ground? More importantly, where do I fit in? Am I ‘hardcore’? Even more importantly than that, has the salesman gone yet? I could only answer the last one and it was definitely ‘no’!

On to the train and I’m still thinking. I’ll freely admit that I’m a gamer, but that’s about all. I don’t consider myself ‘hardcore’ in the truest sense, but by the same token I don’t consider myself ‘casual’; however, seemingly, middle ground isn’t acceptable. You are either one or the other. I found myself stuck with the same unanswerable question that I face whilst stood in the frozen dessert section of the supermarket: “Mint choc chip or cookie dough ice cream? Why can’t we have the best of both worlds? Damn you Ben and Jerry”.

So what is ‘hardcore’ and what’s not? Hardcore gaming is something that’s developed over the years and is referred to by Wikipedia as ‘gamers whose leisure time is largely dedicated to playing or reading about video games. This type of gamer prefers to take significant time and practice on games’. The media has polarised this image further by creating one of a socially awkward teenager preferring to spend time with virtual friends rather than real ones, an image often used in Hollywood to signify an outsider. Casual is another label that’s been popularised by the media, mainly after the launch of Nintendo’s Wii. Wikipedia refers to a casual gamer as someone ‘whose time or interest in playing games is limited’, often preferring the ‘pick up and play experience that any age group or skill level could enjoy’. By these two definitions I like both aspects. I love to get immersed in the long running story of a role playing game such as Fable 2 or Oblivion, but at the same time I love the idea of picking up a Wii remote and beating the hell out of someone at a game of boxing (virtually of course, but having seen the damage a Wii remote thrown at speed can do, it wouldn’t be hard to do it for real).

I thought to start with, I’d analyse my console history to see if I could build a picture of my gaming habits. It dawned on me that by doing it this way I bordered far more towards the ‘hardcore’ definition. I produce a sharp intake of breath. “Oh dear,” I say, forgetting I’m still on the train, panicking those around me. I’ve had at least one console from every generation and at least one from each category, in most cases trading the previous gaming system for the new one at, or slightly after, launch. Looking at more recent times makes the situation look even more ‘hardcore’. Previously I’d generally only have one console at a time, but in the past year I’ve managed to work my way to owning every one of the current generation consoles simultaneously. Bringing it right up to date, I recently sold my Wii (the novelty of the console may have worn off but the innuendos still make me giggle like a school girl), which to many, including the media, is considered a ‘casual’ gamer’s console, thereby stating a marked preference for the more involving, more in depth (more ‘hardcore’) games of the Xbox 360 and the PlayStation 3. Hardcore: 1, Casual: 0

Then, as the train hurtled through the English countryside, my brain hurtled towards another thought. This time I thought I’d look at my actual game playing habits. This instantly started to swing the argument the other way. I thought ‘with all these consoles I must play games all the time, heck, I only played Bioshock recently, when was it….erm… well I watched that film last night, oh, and I was out at the snooker hall with Dan on Wednesday…err… Tuesday was my turn to cook and the subsequent kitchen decontamination took all evening’. I continued to work back and it turned out I last played a game 7 days previously. Then it occurred to me, having thought about Bioshock, the last save file for that particular game was Feb 2008. I hadn’t played it in nearly 14 months! I tend to wander through games at my leisure, often playing frequently for a week or so after a release and then it slowly trails off. Not exactly ‘hardcore’ by any means. The score was even, Hardcore: 1, Casual: 1. Then I remembered why I hadn’t played Bioshock for so long. I have about 29 other games and that’s just on the Xbox. Hardcore: 2, Casual: 1. Bugger.

Ok, I thought, ‘let us look at another aspect’ (by this point I was worrying about my use of ‘us’ when I was the only one there. The guard on the train clearly saw my worried expression as he said “Don’t worry, all I want is to check your ticket”… ‘What? Oh. Yes, sorry’). This other aspect, once I’d got my train of thought back, was my use of the surrounding gaming community. Referring back to the Wikipedia definition, I don’t really read magazines about gaming, only dipping into the odd magazine in the newsagents if I’m intrigued by a cover story, usually just killing time waiting for a train. I’ve already spent my pennies on an Autosport magazine subscription. As far as the extra features available on Microsoft’s Xbox Live and Sony’s PlayStation Network go, well… those of you that have read my article ‘Home’ Isn’t Where the Heart Is will know that I really don’t get on well with Sony’s ‘Home’ network, a place where you can go to chat and play arcade games with fellow gamers, check out the latest releases and shop for virtual items. I’d rather suck my own socks, thank you very much! The Xbox Live features fare better in my opinion, as it’s a little more intuitive and lets the user choose what they’re exposed to and what to ignore. However, there’s a downside to it all: this Xbox content is premium content and therefore costs money. I really can’t be bothered to hand over hard earned cash in order to be able to play games online against random people. People who seem no more than 8 years old and are endlessly amused by ‘your mum’ jokes. People who shout endless torrents of abuse at you when you kill them, ruining their perfect 1 million kills, 0 deaths ratio whilst you’re trying to rectify your 0 kills, 1 million deaths ratio. There’s also the endless radio chatter. “Oh man, did you see that kill?” someone screams over the radio. “YES, I was the one doing the dying; you don’t need to walk me through it again.” I will freely admit that in the grand scale of worldwide game playing, I suck, big time. I’d probably get better with practice, but I simply don’t find it fun playing against overenthusiastic people who are so dedicated to the game that they lose all notion of the fact that ‘it’s just a game’. Is that the statement of a ‘hardcore’ gamer? Er… no. Hardcore: 2, Casual: 2

It’s at this point that I stopped thinking (not an unusual occurrence if my girlfriend is to be believed). Why do I have to justify what my habits are? Surely I am a gamer first and foremost; whatever my habits are beyond that is irrelevant. I may simply drop in, casually, on a game for 15 mins with a group of mates, or I may sit down for hours each day, on my own, to play through the latest blockbuster game in a ‘hardcore’ manner. It’s all a modern phenomenon anyway. Before the Wii, and to some extent the Nintendo DS, the tag was, more often than not, ‘gamer’; the only variation was how often you played. Yes, the extreme of that gamer ‘label’ could be referred to as hardcore, but it wasn’t a label in itself. It’s only since the emergence of ‘casual’ as a label on its own that ‘hardcore’ has had to become the polar opposite. Bruce Lee famously once said, “Empty your mind. Be formless, shapeless, like water. You put water in a cup, it becomes the cup. You put water in a bottle, it becomes the bottle. Water can flow, creep, drip or it can crash. Be water my friend… be water.” I couldn’t agree more. I’m not a hardcore gamer. I’m not a casual gamer. I’m going to be water instead!

Feature : The trouble with pirates

We’re a small outfit here at GeekDeck. We don’t get asked to visit games studios or talk to famous people. We just write about what matters to us and that’s why we’re able to take on such a large, controversial topic as piracy. Thinking back only a few years ago, the term piracy was reserved for Captain Blue Beard and his motley crew who would go around plundering sea vessels, terrifying people with their swords and daggers and stealing treasure, lots and lots of treasure.

In the modern day version of pirates we have to make a few subtle changes which, to be perfectly honest, hardly alter the image at all. Captain Blue Beard is replaced by an obnoxious teenager who can hardly string a sentence together. The ocean is replaced by the vast landscape that is the Internet. Swords and daggers could be replaced by small Perl scripts, and the treasure? Well, that’s the least subtle change of all. You see, the problem is, the treasure isn’t really even tangible treasure anymore. It’s more like a secret code, or information. Knowledge is power, who said that? In this age of information and digital media, the physical treasures have been largely replaced by an abstract collection of ones and zeros.

In my honest opinion, piracy is a necessary evil.

The big companies, in an effort to prevent piracy and increase their media presence, hold back or lock down their content. This makes it much harder to hear, see, or play the end product. Take the music industry for example; most bands will post a couple of new songs up on Myspace, or maybe just 20 second samples on their website, but that’s all. I personally will not part with £10 of my hard earned money until I know there is £10 worth of material that I like. This requires listening to the whole album, and unfortunately, piracy is the easiest way to do that.

I will admit that I was once an illegal downloader. Since giving up, my spending on CDs has dropped considerably. There are 2 albums out currently for which, despite searching the Internet, I cannot find enough audio to make the decision to buy the CD. So I’m simply not buying. In the past I would have simply downloaded them. If there were more ways to listen, I’d buy more. Take The Fratellis for example. They posted their debut album in its entirety to stream on their MySpace page. I listened, I liked, I bought. Simple!

I’m fully aware that it’s the actions of illegal downloaders that cause the big companies to shy away, but if they don’t ‘give’, people will inevitably ‘take’. It’s a catch-22 situation, and I therefore believe piracy is a necessary evil.

Mark

The problem is – and it’s fairly obvious if you think about it – a collection of binary data can easily be duplicated without harming the original or removing it from its current place of residence. A £10,000 piece of software can sometimes be replicated in a few minutes if you have the knowledge, skill and inclination. Here’s where it gets a little sticky. The physical, or as near as you can get to it with digital media, tangible substance is gone. Why is this string of ones and zeros worth £10,000? Realistically it’s not worth anything other than the price of the media on which it is being stored, but here’s where publishers will stick in their oars.

Digital media takes time to produce. Whether it’s the latest action film, a copy of Photoshop or Tom Paxton’s latest album in mp3, each of these has required time and effort to create. It’s the same in essence as buying a straw hat. You could go and pick up a bale of straw for next to nothing, but do you have the skill and tools to turn it into a straw hat? Of course, if your definition of a straw hat is anything that sits on your head that’s made of straw, then strapping the bale of straw to your head would probably qualify as the aforementioned, and obviously highly popular, straw hat.

Do you have the skills to reproduce a movie in its entirety, or code a Photoshop clone, or sing like Tom Paxton? The chances are, no, and if you can do all three, then please write in to GeekDeck; we’re short of a singer/songwriter/coder/film director. I used to know someone who would occasionally copy a song or two. He was a musician and was of the mind that, since he was able to play and record a cover of the song, he could essentially either make his own version or copy the original. Yes, you guessed it, he went with the easy route.

“I have learned to respect all and every type of work. Whether we’re talking about someone who knows how to bake cakes or someone who can turn rocks into beautiful jewellery, this person took the time to create something unique. Some of them can afford to give it away for a lower cost or even free; some people value their work more than others and feel that they should be compensated for it. If someone has decided that their work deserves financial compensation, it is up to me to either pay or not. But pirating it, even if “just to take a peek”, violates the copyright owner’s decision not to give it away for free. When you take something that doesn’t belong to you without the owner’s consent, that is called stealing. So is piracy!”

Og

Seriously though, the price has been set by the publisher because it represents what they believe is the value of the information behind the binary data. Take Photoshop, for example: if it could only draw circles and fill them green and blue, you’d probably pay the equivalent of a fettered toenail for it. Now take into account the unimaginable vastness of digital canvas that Photoshop allows you to create, and you can see why it carries the price tag it does. The question is, are all prices realistic?

“Why is this string of ones and zeros worth £10,000? Realistically it’s not worth anything other than the price of the media on which it is being stored, but here’s where publishers will stick in their oars„

The answer to that question is really irrelevant; rather, we should be asking: are we prepared to pay that price for that “information”? If the answer is yes, then the world continues in its merry cycle. We give our money away, the publisher gets richer and hopefully maintains the software, publishes more songs or films, and produces upgrades. However, if that answer is no, something very interesting happens. Unlike the real world, where if you don’t have enough money to buy something you’re left with three choices – don’t buy it, steal it (usually associated with a lengthy time in a little box room if you get caught) or borrow money from someone else who can afford to buy it – with digital media, a secret hidden option appears. Attractive offer number 4. Pirate it.

Piracy is such an accepted method of retrieving information that we don’t even call it stealing, yet in essence that is what it is. The problem with the definition of stealing is that we’re not actually physically taking something away from anyone. We are not travelling to the publisher’s home and prising their sometimes hard earned cash from their fingers. We’re copying the information from another source. Some would argue that if the money hasn’t been given to the publisher, then we’re not stealing it when we copy the information.

“I have worked in the software industry for a while now and there are two things that I am rather confident about:

Software piracy has to harm publishers. The estimated 35% of unlicensed commercial software installed on computers worldwide has to result in some loss for companies that could otherwise invest in further improvement of their products.

Software piracy has to help publishers. It provides them with new users that would have never bought the software because they simply cannot afford it but who are nevertheless being educated to the platform. This is free advertising and leads to more sales later.

I would like to see software piracy being tackled from two new angles that very much differ from the current status quo.

First, I would like to see software publishers apply prices that are tailored to the income levels of each country. I would like to see software that is easier to install and doesn’t hamper the rightful user (who here still likes DRM?). There is a lot of room for innovation in how and why software is sold.

Finally, I would like to see governments pushing for prevention through education of users and promotion of free software alternatives rather than blind enforcement of the law.”

Damien

So why is it so accepted, and what consequence is there for the publishers and artists? In talking to people and receiving their comments, it’s quite clear that one of the main reasons it’s so accepted is because it’s so damn easy to do. “If publishers really wanted to stop us, they would” is often the reply that I hear. Whilst in theory there is some truth to this, as we have already discussed there is a whole convenience/security trade off which applies itself very nicely here. If you make your product X amount more difficult to use, because of DRM, or requiring a special playback medium, then the amount purchased can often drop accordingly. Whilst this is not always the case (cue the Apple store), it can often be a limiting factor that leaves some publishers scrambling for sales which were once high above their competitors.

Whilst publishers can secure their media with technologies such as DRM, there are always those innovative people who find ways round it. You enforce it so that playing a piece of music requires it to be decoded by a special DRM enabled player; job done? Wrong. At the end of the day, the data is still just signals, and you can guarantee that some bright spark out there has an almost lossless way of getting that data back into a digital form once again for immediate pirate release. Plugging the line out into the line in of a laptop is a cheap layman’s way of getting a DRM stream back into the hands of the pirates. These are the real pirates, the ones who actually carry an equivalent sword in the digital age.

As well as it being socially accepted that people download films/music/software free from the Internet, we also hit another factor in our quest for an answer to the “why” question. It’s so damn easy. Ignoring the fact that many files downloaded from peer to peer and torrent sites are infested with viruses that make Pig-Flu look like an amoeba, downloading files from the Internet is so easy your gran can do it, and in some cases, she does. Sometimes not directly; a grandkid could easily download “Granny’s hottest hits from 1945” from a torrent and slap them onto CD, but it doesn’t change the fact that, however good the intention, this is still piracy.

Cue the RIAA and it’s here that we bring in one of the most seemingly ruthless and hard hitting enforcers of the copyright laws. The RIAA are not shy about who they go after. Numerous articles have seen them suing 10 year olds, disabled people, and elderly users alike. It seems that the RIAA are trying to make a point about piracy by attacking the weak and feeble, yet the feelings that this raises in the various communities seem to suggest that people want to pirate media more, just to get back at the evil RIAA.

The thing is, technology and knowledge, as they age, begin to reach a point where the information becomes common knowledge. Even creative information begins to be commonplace among us. Crazy to think that when I was in high school, good information was so hard to come by you couldn’t even check out the encyclopedias, because if they were lost it would take too much effort and pain to get them back. Now we can acquire accurate information, piles of it even, and it comes in an easy to use format. So in terms of general piracy, this is the farriers complaining that they are losing business because a new technology has come along to destroy their lives. Or Kodak suing the digital camera makers because it hurts to see people copying pictures or sending them in email because they are losing business.

In terms of copyright I don’t think we should break the law. But you can’t create a cover-all license and then start popping customers left and right because companies just can’t get with the times. Blu-ray? “Are you telling me I still have to buy something physical?”
Geez, catch up!

Jason

Whilst its motives are definitely questionable, and there appear, on the surface at least, to be far more likely candidates for attack than crippled old ladies, the RIAA has achieved one goal very successfully, and that is to bring some fear to the word piracy. Recently in Sweden the laws were changed to give enforcers the right to make ISPs divulge information about illegal file sharing users. Figures suggest that on the day the law was introduced, file sharing across Sweden dropped 40%. That’s a lot of scared bits and bytes.

But is it really stealing? Is it really wrong? Large publishers are no more hurt by a single act of piracy than a rhino is by sleeping on a bed of peas. It really doesn’t care. The difficulty comes when the numbers are not one, but one million. Suddenly the whole concept is a lot more damaging. We’ve seen CD sales dropping at alarming rates, but is that due to piracy, or the increasing number of mp3 sales on places like the Apple store? A recent article by the BBC puts the blame squarely on the shoulders of the pirates, stating that the increase in sales of mp3s in no way makes up for the decrease in sales of CDs. Now, either people are just not buying music anymore, which to me seems incredibly unlikely, or the pirates have gained some serious territory.

Another side of the whole piracy argument is the try-before-you-buy mentality. It’s not often that you get to fully try out a piece of software or see a really genuine film trailer before you buy it. Demos, trailers and next week’s number 1 in the charts are there for one reason and one reason only: to sell copies of a product. They are engineered to your liking. They are tailored to look exactly as you want, to do everything that you’d ever need, to give you an experience so amazing that you’d be quite happy to just empty your bladder right there. Well, maybe that’s a tad too far, but the fact of the matter is that in a lot of cases, the “demo version” of a product is just not an accurate representation of the finished article. Many people argue that they would be much more likely to purchase a product if they had either seen the real thing, or tried the real thing, without any limitations. In the piracy world, for films and music, users will often experience the entire product in a less polished way, often citing poor quality video etc., before they decide to buy it. Good for the consumer, bad for the publisher.

However, if we view this on the flip side, it could actually turn out that this is the lesser of two evils. Imagine these two mentalities. Bob downloads a copy of Star Trek, directed by JJ Abrams, and absolutely hates it; he’s lost nothing, and the publisher has lost a potential sale. Bob sees a new film out by JJ Abrams, downloads it, loves it and consequently buys it and the 3 sequels on DVD. Now imagine Alice. Alice buys the new Star Trek movie when it’s released on DVD, as she was unable to get to the cinema to watch it, but she thought the trailer looked very enticing. She pays full price for it at a whopping £19.99. She watches it and, like Bob, hates it. Alice vows never to buy another JJ Abrams film again. She feels the trailer misrepresented the final product and feels she was stung by it. Not wanting to waste her money again, she sticks to her guns and misses out on buying 4 films that she actually could have enjoyed very much.

I believe the problems we are having recently, if they even are problems, are entirely created by greed and companies wanting too much control.
Like most things in life, this issue is very grey, not black or white. There are people doing bad things, making their living by selling other people’s material and not contributing anything back to the original creators or the greater artistic community. But most of what is getting attention today – The Pirate Bay, Napster-like sharing – is really, really blown out of proportion.

Reasonable people will do reasonable things, and I believe most people are reasonable. I download tons of copyrighted material without authorization, but it’s not like I don’t contribute anything back. I purchase lots of music, concert tickets, etc., and in most cases, if I am downloading it I would not have bought it anyway. Artificially restricting my exposure to music and art isn’t going to solve anything.

If I don’t make the effort to go all the way to the store and hand over money for a digital good, the publisher gets no money. If I don’t consume this good at all, the same thing happens. Can anyone really say that this tiny decision on my part makes a difference? No one notices one missed sale. Get over it. We have much bigger problems facing the world right now. Put some effort towards those.

Laszlo

In essence this mentality seems quite fair, and looking at the example above, the film publisher actually lost out more with the person who bought the DVD legitimately in the first place than with the person who pirated the first one and then bought it and the subsequent others. Despite what people think, piracy does harm publishers, be they music, film or software; people who say otherwise are at risk of being naive. On the other hand, it seems apparent that some piracy, or copying of media, can actually have a benefit to both the end user and the publisher.

An interesting twist to this story is YouTube. Many people use content from YouTube to see if something is good enough to buy. You can find many, many clips of films and TV shows on YouTube that haven’t necessarily been in trailers. After missing the first half of the third season of LOST, we were able to get the gist of the bits we’d missed via YouTube. Is this piracy? It’s an interesting argument. On the one hand we haven’t broken any laws in obtaining the media; the blame actually falls squarely on YouTube’s shoulders for hosting the copyrighted content. But is it wrong to watch it? And what about accessing content in a country other than yours without paying the appropriate import/export taxes? Is this question just silly?

One aspect where many people seem to land in the hypocritical bucket is that of distributing pirated media. Some people are perfectly happy with downloading content from the Internet, but are totally against selling it or distributing it at practically zero cost to others. Deep down, the media has still been stolen in both cases, but it does certainly seem that the people distributing it are much further into the black side than those who just watch it. It was heard several times while researching this article: “Watching it is OK, but selling it on?? No way dude.” It’s big business, and funnily enough, the people who claim that reselling pirated material is wrong are sometimes the same people that buy DVDs off eBay in the full knowledge that they will probably be pirated versions. Does this bother them? Seemingly no. What seems worse is the multiple cases of people buying a hooky copy of X, believing it to be legitimate, contacting the appropriate authorities and hearing nothing back, despite clearly stating that the source appeared to be one of mass distribution. This sends a clear message to the consumer that piracy isn’t that big of a deal to them, the very authorities that have been put in place to stop it.

“the Open Source community provides an almost unbounded plethora of free alternatives to almost every application you could think of„

Moving on to our final topic, and some of you may be feeling that something has been missed out. Open Source anyone? The topic has been left till last for good reason, and the reason is that some open source advocates may not like what is going to be proposed, and we didn’t want people skipping out before they’d heard the ending, did we? It’s a common argument that in many cases piracy is unnecessary because there are plenty of other free alternatives to proprietary or commercial digital information. On the software side, the Open Source community provides an almost unbounded plethora of free alternatives to almost every application you could think of. Most are perfect replacements for their commercial counterparts, some are less polished, and a few even further down the line are pretty poor.

On the music side, we have sites like Jamendo, where people offer their musical talents for free, not just to listen to, but also to be used in certain works of your own, provided that certain restrictions are observed. The film and video region is a little thin on free alternatives, and that’s largely because films take a lot more money to produce. People who want blockbuster films for free are going to be out of luck, as these often take millions of pounds/dollars of investment. Though people make voluntary donations to open source projects, the only one that springs to mind which is anywhere near the same ballpark is the $10,000,000 that Mark Shuttleworth put together to start the Ubuntu project.

The problem with Open Source alternatives and free media is not so much that they’re in the minority, or not as polished as commercial offerings, or even that people are unaware of them, although that is sometimes the problem. The problem is at a very fundamental level: people don’t want GIMP, they want Photoshop; they don’t want Charles Fenollosa, they want Mike Oldfield, though there is absolutely nothing wrong with the GIMP or Charles Fenollosa. Often, the reason people want this digital media so badly and are willing to resort to piracy to get it is that they want the real thing. If you’re looking for a song by Tom Paxton – why the heck do I keep thinking of him, I’m definitely not a fan – you’re not going to find it at Jamendo. Though you may find something similar, it’s never going to match the real thing. On the software side, people are used to using Microsoft Office; they don’t know OpenOffice, and are reluctant to try it if they can get hold of a copy of Microsoft Office by other means. Though the free movement as a whole is very noble and is something that should be encouraged and supported, it’s never going to be a complete replacement for the proprietary market because it’s not always what people want. Many end users don’t want, or are not looking for, functionality; they are looking for a specific product that they have seen and know will do a job, whether that be pleasing their auditory cortex or editing their family snaps at DisneyLand.

It’s been a lengthy trip this time, and if you’ve made it this far and read all the contributors’ views then you are to be truly commended. Piracy is such a large issue, and the intent of this article was never to be an exploration of right and wrong, nor a definitive guide to life, the universe and piracy. It was intended to perhaps raise thoughts and ideas that you as a reader and a user may not have thought about before, and encourage you to make the right choice, to be a noble member of this digital society. Truth be told, we’re probably not going to see the decrease in piracy that certain groups are lobbying for. Like it or not, piracy is an evil that many call necessary. Maybe the anti-piracy groups are not there to squash every instance of illegal downloading and distribution; maybe they’re really there both to keep it under control and to do pirate marketing. Who would have thought it: the very organisations set up to prevent and control piracy may actually be doing a better job of promoting it. Who can say? It’s quite possible, though, that for all the bad press the pirates get, they may actually be doing the industry a favour, and that’s probably one of the scariest things of all.

Gaming : Multiformat Releases == BAD?

pete

It’s hard sometimes to pick out quality among the huge amount of average mediocrity that paves our games market. Finding something truly stunning and groundbreaking is not only difficult but sometimes almost impossible. It’s true to say that a great title comes along around 3-4 times per platform, per year. For your average casual gamer, that’s about right in the spending department. Unfortunately these rare classics don’t always fall into the genres that we either like or adore. However, why are these gems so rare?

One of the reasons for this, I believe, is the notion of multiformatting, and in the 7th generation of games consoles it seems to be widening a chasm which is going to be difficult to fill. We currently have three 7th generation consoles – the Wii, the Xbox 360 and the PS3 – or 4 if you count the PC. In fact, sometimes I sit here and wonder what is really so similar about them, apart from the fact that they all play games. Taking a quick look at these machines, and I’m not going to go into any real details here, we have the following contenders.

Weighing in at the top end, in terms of raw power, is the 8 core monster that is the PS3; its sheer architecture requires a completely different way of coding games, which according to some coders is frustratingly difficult. Next in line is the Xbox 360, which, although a console in name, is really just a non-upgradeable, super powerful gaming PC at a very reasonable price. Last in line is the tiny white, fruit-esque (though we can’t think why) Wii. At its core, there’s precious little extra in terms of processing power over its predecessor. Many people have called it a GameCube with Bluetooth because, essentially, technically, it’s not far from the truth. Where’s the PC, I hear you ask? Well, quite frankly, it could be in all three of them.

I’m not going to venture into the realms of handheld consoles such as the DS and the PSP, but you can already see the sheer difference in the machines that are available in today’s gaming society. So what’s wrong with that, I hear you ask? Variety is a good thing; it’s what keeps our society and industry moving. I totally agree with you here, dear reader. Variety is the spice of life and it’s what keeps one manufacturer from hogging the entire market. It forces companies to constantly reevaluate their current product and come up with something better and new.

Whilst for the overall market variety is a good thing, the problem comes when publishers want the largest slice of the pie. I am of course talking about the main subject of the article: multiformat games. Multiformat games allow publishers to target the broadest range of gamers imaginable, from the timid and often amateurish nature of the Wii owners to the downright dirty, all out war, Duke Nukem, rockem sockem, die hard PS3 owners. Although multiformat games have a great advantage for the publisher, i.e. more money, the benefit for the consumer is often less so.

I can hear some of you in the audience already with your hands half up, wondering whether to say something or not. Yes, I agree, multiformat can be a good thing for the consumer, as it allows them the opportunity to play the same game that their friend has on an entirely different platform, but it’s worth looking at the overriding argument of, wait for it, quality. And you thought I was going to mention the cost of producing all those different covers.

“a supercar body, which was the original design idea, is going to handle like a pushbike with a jet engine when placed inside a three wheeler„

Spare a thought for the coders of these ill-fated franchise games. Whilst coding for any console is no easy ride, making sure a game is physically implementable on several must be a nightmare. Having little insight into the actual process behind multiformat game development makes it difficult for me to come up with definitive citations; however, one thing seems abundantly clear: multiformat games are generally of lower overall quality than their platform exclusive counterparts. I recently ran across a thread on a forum where some Xbox 360, PC and PS3 gamers were battling out one of the age old troll wars: “My console has better graphics than yours.” Whilst I agree graphics aren’t everything, they do seem to be one of the more important aspects for gamers. I’m as much a sucker for slick graphics as the next CG fanboy, but I do feel deep down that there is some truth behind graphics being one of the more important of the console ideals.

Of course, this didn’t use to be the case. Before we had fancy controllers and console OSs, the argument was squarely about graphics and playability; now, all of a sudden, we have a new contender in the “Mine’s better than yours” campaign. It’s true that Nintendo has revolutionised the way people interact with their gaming consoles. Having not used an Xbox I cannot comment here, but the Sixaxis feature of the PS3 controllers does tend to feel a little tacked on and definitely doesn’t have the same level of responsiveness as a Wiimote.

Of course I’ve digressed quite wildly, as is the nature of my articles on numerous occasions; however, the user interface is yet another aspect that the poor developers have to think about when converting a game’s core ideals to several platforms. It’s like building the exact same car body around a supercar, a Centurion tank and a three wheeler. What fits one isn’t necessarily going to fit the other, and here’s where the consumer gets hit in the face. Coders will and do make cutbacks in functionality in order to make a game fit its intended platform. If you think about it, they have to, because a supercar body, which was the original design idea, is going to handle like a pushbike with a jet engine when placed on a three wheeler, and is going to have to be subjected to many rounds of panel beating to get it to fit on the Centurion tank, and even then it’s just going to look naff and half finished. Ringing any bells here?

How many times have I heard people say that a game feels more unfinished on one platform than it does on another, or that a game feels flatter on one platform than on another? It’s sad when a great game gets its guts reorganised to ensure that it’ll still remain usable on another platform. What’s even scarier is the amount of innovation that may be left out of a game purely because it isn’t implementable on another console.

So why do the graphics look better on one console than on another? Probably because initially development may have been for one platform in particular and then developers were forced to include more, leading to a less polished finish on the subsequent implementations. However, platform exclusivity can lead to some of the most awesome games ever. At the moment I’m thinking about Final Fantasy VII, Final Fantasy X, Killzone 2, Little Big Planet, World of Goo and Super Smash Bros. For me, Killzone 2 has some of the best, if not at the moment the best, graphics of any game I have personally ever played. I’m hoping I’m not going to receive a torrent of “X looks so much better than KZ2” or “Y beats the stuffing out of KZ2”, because quite frankly I don’t care. These are my opinions, and you know what, I’m entitled to them.

“we have to remember that we are all just pawns in the publishers ever more difficult and strategic game to make the stockholders more money„

There has been some speculation that Sony played a part in the success of Killzone 2, purely because of the disappointment at the first installment’s wow factor after the initial tooting it was given. To be honest I don’t really care, but it does go to show that when a game is designed exclusively for a particular platform, it can absolutely shine. Little Big Planet is another great example of this. The very nature of the game just wouldn’t be possible on the tiny processing power of the Wii; the physics would be far too complex. Contrast this with the beautiful Lost Winds on the Wii, and you can see how development around the Wiimote has really played to Nintendo’s advantage.

Several people on one forum were comparing a multiformat game’s graphics and saying how much superior it was on the Xbox 360 to the PC and PS3 versions. I must admit, I’ve experienced a difference in quality in F.E.A.R. 2. The PC version has far superior graphics to the PS3 version; in comparison, I’d say that Killzone 2 on the PS3 beats the graphics in F.E.A.R. 2 on both the PS3 and the PC, but I guess some level of subjectivity is needed here.

So why don’t developers and publishers alike put the effort in and make every game on every console a winner? Quite frankly, the large factor is the same as it’s always been, and it’s cost. Tada! Surprised? I have no figures to back this up, but it wouldn’t surprise me if the development cost of Killzone 2 on a single platform equaled or exceeded that of Tomb Raider: Underworld on all the platforms it was released on. At the end of the day we have to remember that we are all just pawns in the publishers’ ever more difficult and strategic game to make the stockholders more money. It’s all about maximising profit at the end of the day, and developing for one console may be much easier than for another. Optimisation is always key here, as it drives down the expense and raises the profits. Coders will try to reuse as much code as they can. So, and this is just an example here as there are probably built in routines for this, whilst a rendering engine built for the Wii may perform exceedingly subpar on the PS3, if it can be adapted quicker than writing a separate engine exclusively for one game, for one platform, which option do you think they’ll pick? It really is a no-brainer.

So who’s the real winner here?  Well unfortunately it appears to be the publishers again.  The wool has once again been pulled squarely over our eyes.  It’s a shame, but it really does seem like multiformat games tend to perform poorly on at least one console.  At the end of the day there really “ain’t a lot we can do bout it guv”.  The situation is here to stay.  Consoles will be different.  Publishers will want to do multiformat releases.  Personally I’m hoping that the dual release of Final Fantasy XIII on the XBox 360 and the PS3 doesn’t hurt it too much.  Hey I might get lucky.  Mightn’t I?

Culture : Come on Lan, let’s have a party

peteLAN parties.  Is there anything more exhilarating?  Probably.  However, few geeks can deny the certain “je ne sais quoi” they feel in the belly of a damn good LAN party.  It’s not just about the games; it’s about the pizza, the beer, the company and, most importantly, beating the pants off your mates.

I remember a good friend’s brother used to host LAN parties occasionally at his house, sometimes filling the place with an extra 14 people who would suck the life out of the poor building’s electrics and fill the air with excitement and anticipation.  I’d lug my fairly decent, optimised, slender tower round to his house, only to find the place filled with the biggest towers, hard drives and power supplies I’d ever seen.  One guy in particular had a case which stood from the floor up to his waist, and he was by no means the smallest guy at the party.

We’d generally split the rooms to begin with based on who wanted to be in each team.  This always led to rivalry and jeering, envy, horror and shout upon shout of, “But you can’t go with Jim, cos you’ll beat the pants off us.”  Ok, so that quote was lacking some authenticity; insert some random swear words into it and you’ve got a much more realistic idea.  The kitchen area would be filled with roughly seven people, whilst on the other side of the house the games room and two bedrooms would house the other mob.

“He mentioned that he had a date that night. When questioned about the venue of the date, one of his friends blurted out that he’d heard they were meeting online, in a game of Resident Evil 5„

Then the games would begin, slowly at first, as people tinkered with their machines, dropping in and out trying to obtain the best possible advantage.  Generally, at our LAN parties, short of physically cheating, such as looking at the opposing team’s screens, most other forms of advantage were permitted: removing textures from everything but other players to make them stand out, or changing your FoV so you could almost see round corners.  Occasionally people would have to swap teams.  For the nonchalant among us, that would mean uprooting yourself and moving to another room.  For the more hardcore, their entire PC would go with them; no one else was allowed to touch it.  By the end of the night this practice would become less and less common, as either the teams evened out or the players no longer cared which machines anyone used.

It was during these games that I acquired a trait for which I am now constantly moaned at by my current set of gamer friends: inverting the Y-axis.  I seem to recall that at one early session, the overall master of all gaming showed me a few tricks.  One such trick was to invert the mouse.  He said I’d find it much easier, that it was more intuitive.  To be honest I totally agree with him, and liken my inverted mouse to flying a plane.  Push forward to go down, pull back to go up.  Easy.  Not so for my current set of friends, who think I’m just plain weird.

So where did the LAN parties go?  It seems that people are having them less and less these days.  Sure, there are still the huge corporately organised events, where thousands of gamers get together for intense two or three day sessions, but what about the little games, the local LAN parties?  From what I can see, they all appear to have almost vanished.  By and large, it’s probably the Internet that has had the most impact on this.  In the days that I used to play, Internet speeds were pretty dire, and that was if you actually had the Internet.  Couple that with the fact that most people only had one phone line, and the parents got a little narky if Jimmy was spending 3 hours tying up the line, and you have a recipe for not a very wide area network.

The Internet revolutionised this.  The first game I played online was probably Red Alert.  The connection was diabolical, the speed sucked and trying to find someone decent to play with was like trying to wash your jeans in a tea cup.  As the speed of people’s connections increased, so did the capacity to play games reliably online.  Thinking back to the more recent times of me playing CS:Source online, the game play was much better, but there was still jerking of players and just general lag.  On a side note, I love the way some gamers use lag to justify their poor performance.  “Why did you drop out Matt?”  “Oh I had to, there was…eh…..too much lag.”  “Oh yeh?  That sucks.”  More recently I have been experimenting with KillZone 2 online and I have to say, I don’t think I have yet experienced any problems with the movement and reliability of the online gameplay.

With the Internet changing the LAN to a WAN, does playing multiplayer with people you know and love still have the same oomph?  In part yes, but overwhelmingly I feel a big fat no.  On the one hand it means you can plan tactics and talk to each other privately without anyone on the other team having any idea what you’re thinking.  This makes the “Let’s gang up on Martin” rounds all the more fun.  However, the whole spirit of it is largely lost on me.  The funny thing is, I’m a geek, I don’t generally like to exist in large groups of people, but if those large groups of people are also hell bent on shooting each other with MP4 machine guns in a virtual environment, then count me in.  Sometimes I just don’t want to play alone.  I want someone to be physically there, talking to me about how they’re doing.

“I’m a geek, I don’t generally like to exist in large groups of people, but if those large groups of people are also hell bent on shooting each other with MP4 machine guns in a virtual environment, then count me in„

On the flip side, the online era offers some distinct advantages and these mustn’t just be glossed over.  Sure, people are not there with you, but sometimes that’s not just an inconvenience, it’s a definable problem.  How do you meet up with someone you know in Australia to play a LAN game of Call of Duty when you live in the UK?  Intercontinental LAN parties tend to be rather expensive, not to mention getting your all-important PC along with you.  Do you really want to risk it getting beaten around in the belly of a 747?  I certainly don’t.  No, the Internet definitely has its advantages in this respect.  Not only can you play with people you know in distant countries, you can also play with people you don’t know and make new friends, often meeting tens if not hundreds of people a night, depending on whether you switch games often or not.

It doesn’t stop there of course.  One of the other main advantages of the online model is availability.  It’s inherently difficult not just to fit 14 people in one building, but to plan fitting 14 people in one building.  You have to consider dates, consult your diary, ring around, or, these days, text people.  “Are you free on the 24th?”  “No, sorry m8, got a new girlfriend and we gonna hang out for the day.”  “Damn.”  With the online model this doesn’t matter so much.  People can dip in and out whenever they please, and more importantly, sometimes more than 14 people can dip in and out during the course of the day.  You just can’t expect to have constant LAN parties, whereas with online play you can play whenever and wherever you like.

Sounds like the Internet is the bee’s knees, doesn’t it?  Well, it is and it isn’t.  Forgive me for being old fashioned, but I like the physical touch.  The air always seemed so charged at LAN parties, and if you came across a situation where your comrade John was standing with his face 3cm from the wall, you could always yell out “Oi John, where are you?” and wait for the reply, “I’m just taking a dump!”  Seriously though, online gaming is just a different method of achieving the same thing: playing with multiple REAL people.  Some people prefer the anonymity of online gaming, welcoming the ability to hide behind an avatar, a virtual character, through which they can achieve things and interact with people in a way they just can’t do in real life.  Some people crave the attention they get from being #1 on the leaderboard, and dealing with the flurry of clan invitations.  Some people enjoy hanging out with friends, talking about their lives, and kicking some serious bottom whilst they do it.  Me?  I guess on second thoughts, I love a bit of everything.  I enjoy the online play, and I enjoy the LAN party.  They kind of go hand in hand for me.

As I was on the train today, I overheard a conversation between a guy and his friends.  He mentioned that he had a date that night.  When questioned about the venue of the date, one of his friends blurted out that he’d heard they were meeting online, in a game of Resident Evil 5.  After all, the social nature of things is changing wildly.  Maybe I’m blind, maybe I just don’t understand things anymore, but it certainly seems to me that being apart is the new being together.

Programming : Code optimisation

peteI started programming when I was around 10 years old, after my dad showed me how to program on the Atari 800XE. I remember progressing on from that to QBASIC when we got our first PC. It was QBASIC that stayed with me for a good few years on from that until I believe I moved on to PHP. Whilst using QBASIC, three very important things happened. 1) I started college, 2) I met a guy called Alan O, and 3) I learnt about the elegance and beauty of code reduction and optimisation.

When starting any class in computing, it’s usually apparent after the first few lectures who the real geeks are and who the rest of the class are. I don’t know why, and I guess I never will, but there is always a group of people in computing classes who enjoy laughing at the guys who really do want to learn. Reminding them that they’re in the computing class too usually brings them down to earth for a while, but it’s this leveling effect I enjoyed, because it usually meant I was about to make some new friends who were as eager to code and learn about computing as I was.

“He looked at the screen and then the mouse, picked up the mouse and spoke rather bluntly, gesturing to the screen. ‘What the heck do I do with this?’„

In my A level Computing class at college I met a guy who changed my outlook on programming forever. I’d had no real external input from anyone else before then, apart from the small amount of Visual Basic that we were taught at the end of school. This was the first time I’d met someone else who was better than me at programming: Alan O. I almost overlooked this guy after our first lecture in the computing lab. We’d been sat down in front of Win 95 machines and instructed to run up MS Access. Alan sat there and looked rather bewildered. Curious as to his problem, I leaned over and asked him what was the matter. He looked at the screen and then the mouse, picked up the mouse and spoke rather bluntly, gesturing to the screen. “What the heck do I do with this?” I looked at him and then the mouse and then him again. “What’dya mean?” I asked.

As it turned out, Alan had never used Windows at all; he’d lived entirely in the land of DOS and had never even used a mouse. I knew this guy coded, but at this point I was seriously wondering if he “really” coded or not. A few days later, I took in a program to show him, the code for which is displayed below in its entirety. I had programmed a clone of the “Mystify Your Mind” screensaver from Windows 95. Essentially, four vertices or points, joined by lines, bounced around the screen. See Figure 1. Alan looked genuinely impressed and asked to see the code. Had I known what he would show me the next day, I would certainly never have bothered to show it to him at all. His creation was not just breathtaking, it was utterly astounding. He suddenly became my mentor overnight. Alan had programmed a full screen scrolling RPG framework in QBASIC. He had transparent menu overlays, double buffered graphics and, well, I can’t even begin to describe what else. I remember him talking about interfacing directly with the graphics card, through what I believe was Assembler. I’d never seen anything like it. Not in QBASIC. Forget Nibbles, forget Gorillas. Alan was da bomb.

Rewinding back to my own submission to the coding genius: Alan sparked off something in me that I’d never had before, a quest to optimise and reduce my code as much as possible. Naively, I had always been under the impression that more lines of code meant I was more elite. Come on, I was only young. Alan showed me that the real key to everything was reduction and optimisation. It was the first time that I’d seen code as an art form, as something elegant to perfect: the most optimised, reduced code possible.

I was pretty pleased with the code below. The first section sets up two arrays, g and h. I then set the screen to mode 11. g and h were actually x and y, and you can see the next set of lines set up the initial coordinates for the 4 points of the traveling polygon. After that comes something interesting. There are five lines that set up the status for the vertices of the lines. Note the inconsistencies in my early programming: I declare coordinate arrays with 10 points, I set up initial coordinates for only 4 points, and I define a status for 5 points. As a side note, QBASIC was a language where line numbers were optional. Note the ’10’ on its own that I used as a line identifier.

The lines of code after that do the drawing of the lines. You’ll notice at the end of each line draw there is a for-next loop on every other line, and a 0 on the in-between lines. Here I would draw the first line, wait for 100 cycles, then draw a black line over the top to blank it out ready for the next frame, then lather, rinse and repeat for the remaining lines. In retrospect I should have drawn all four lines, then waited 100 cycles and blanked them all in one go. This would probably have given a better frame rate, and looked a lot nicer too. The reason for the loop wait was that if I cleared immediately after I drew, you ended up either seeing a very flickery image, or not much at all.
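
Just for fun, here’s roughly what that improved draw section might have looked like. I should stress this is a reconstruction I’ve knocked together for this article rather than anything I actually wrote back then, and it assumes the same g and h arrays and the same bounce subroutine as the full listing at the end of this article.

10
REM draw all four lines of the frame first
LINE (g(1), h(1))-(g(2), h(2))
LINE (g(2), h(2))-(g(3), h(3))
LINE (g(3), h(3))-(g(4), h(4))
LINE (g(4), h(4))-(g(1), h(1))
REM wait once for the whole frame instead of once per line
FOR i = 1 TO 100: NEXT i
REM blank all four lines in one go, ready for the next frame
LINE (g(1), h(1))-(g(2), h(2)), 0
LINE (g(2), h(2))-(g(3), h(3)), 0
LINE (g(3), h(3))-(g(4), h(4)), 0
LINE (g(4), h(4))-(g(1), h(1)), 0
GOSUB 50000
GOTO 10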

The next statements were REM’d or commented out. I can’t quite recall what the PSET commands were for; looking at it now I’m surprised they didn’t throw an error, as g and h were arrays after all and PSET requires a single x,y value. The GOSUB 50000 was almost a function call, because at the time I was unaware of how to do functions at all in QBASIC. After that you see the GOTO 10. When the program had finished in the pseudo-function, or subroutine, it would return and go to line 10, essentially performing a loop that would keep redrawing each frame.

The “function” at line 50000 actually did do some optimisation. This part of the code deals with checking whether a point of the polygon has hit the edge of the screen, and if it has, “bounces” it back. Either I couldn’t be bothered to copy and paste the code, or I actually had a clue, but the FOR loop loops through each vertex. Notice here I loop through 5 points, even though only 4 are ever used in the draw sequence.

An interesting thing now is to talk about the vertex states. Think about this: my code was very simple, a point could only move diagonally at 45 degrees because I was either adding or subtracting one pixel on each axis. Taking this into account, a vertex could only be in one of four possible states. These were: moving top left to bottom right, moving bottom right to top left, moving top right to bottom left, or moving bottom left to top right. I thought I had been clever here. You can see four subroutines, defining what to do for each state, and above them some IF statements telling the code where to jump depending on which state the vertex is in. Each of these routines does four things. It checks one axis to see if it has exceeded a limit and, if it has, it changes the state. Then it checks the other axis. Finally it increments or decrements the coordinates of the point on both axes, depending on the state of the vertex in question. You may be wondering why the lower limit is set to 1, while the upper limit is the actual edge of the canvas. In QBASIC, you could draw past the upper limits, but not below zero. It would mean that the drawing was off the screen, but it was still legal, if I remember correctly.

In essence, that was the entire program. The sub routines would move the points around and then the draw statements would create the lines on the screen. After looking at it, and I’m positive that both he and others could have optimised it even more – double buffering for instance – Alan said to me, “Can I make a suggestion?” I said yes and what follows is a brief summary of our discussion.

He asked me, in a nutshell, what the subroutines were doing. Of course he _knew_ the answer; he was just trying to get me to think. So I replied that they incremented or decremented based on the direction of the vertex. “I can reduce that code for the subroutines to about four to six lines,” he said. I was shocked and I certainly didn’t believe him. He asked me what was happening at each edge. So I replied that essentially the speed of the vertex on a specific axis was getting flipped from positive to negative. I could see his mind turning, but mine certainly wasn’t.

“What’s an easy way of flipping positive to negative?” he asked. “Store a flag and based on that flag either increment or decrement,” I told him, smugly. Unfazed, he turned to me and simply said, “What if the flag could be the incrementor and the decrementor at the same time?” I just stood there like a dummy. I had no idea how. “What’s one times minus one?” he asked. “Duh, minus one,” I replied. On to my next mathematical challenge. “What’s minus one times minus one?” Again with the easy questions, I thought. “One,” I replied. Then it hit me. By starting with the number one and multiplying by minus one each time, I was oscillating between positive and negative. Each time my vertex hit an edge, I just needed to multiply the speed by minus one and the direction would be reversed.

“The elegance of Alan’s solution is something I don’t think I will ever forget. It led me to change my entire way of thinking.„

It was genius, but it was something I had never ever thought of. He also slimmed down the conditional statements. If we simplified the core idea down to flipping the polarity of the direction, we didn’t need to know which edge had been hit, just that an edge _had_ been hit. By doing this, we only had to do two things: 1) add two more small arrays, gspeed and hspeed, to hold the speed of each vertex on its associated axis, and set up initial conditions, and 2) replace the lines from “REM LOCATE 5, 1: PRINT n” to the last “RETURN 59000” with this.


IF g(n) < 1 OR g(n) > 640 THEN gspeed(n) = gspeed(n) * -1
IF h(n) < 1 OR h(n) > 480 THEN hspeed(n) = hspeed(n) * -1
g(n) = g(n) + gspeed(n)
h(n) = h(n) + hspeed(n)
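
For completeness, the two extra declarations and the initial conditions mentioned above weren’t reproduced in the snippet. They would have been something along these lines; again this is my reconstruction, with the starting speeds picked arbitrarily.

DIM gspeed(10)
DIM hspeed(10)
REM start each vertex moving one pixel right and one pixel down per frame
FOR n = 1 TO 5
gspeed(n) = 1
hspeed(n) = 1
NEXT n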

To this day I still find the way in which Alan reduced and optimised 25 lines of my code to just 4 in the loop, 2 more declarations and a set of initial conditions absolutely staggering. Not only was the code cleaner, it was more optimised. Doing comparisons takes CPU cycles; doing memory writes takes CPU cycles. The elegance of Alan’s solution is something I don’t think I will ever forget. It led me to change my entire way of thinking. Years later at university, whenever there was a computing assignment, we would always challenge each other to try to do it in as few lines as possible. Most people didn’t care, but there were five or six of us who always pushed to the limit.

Hand in hand with this memory is a very fond one from mentoring for GSoC. My student had shown me some code he’d written and had a few problems with it. After looking at it, I saw what Alan must have seen that day: a huge number of repeated sections of code. “Why not put the names of the labels in an array and step through it using the same routine, instead of copying and pasting all those lines?” I asked. A short silence was broken by a gasp of excitement as what I had suggested was finally understood. Hearing the excitement in my student’s voice was something I’ll never forget and, all in all, we must have reduced 60 to 70 lines down to about 10.
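
I never kept a copy of my student’s code, so purely as a hypothetical illustration, here is the same idea expressed in QBASIC: keep the repeated names in an array and drive one shared routine from a loop, instead of copying and pasting a block for every item.

REM hypothetical example only, not the student's actual code
DIM label$(4)
label$(1) = "Name"
label$(2) = "Address"
label$(3) = "Phone"
label$(4) = "Email"
FOR i = 1 TO 4
REM one shared routine handles every label instead of four copied blocks
PRINT label$(i); ": not set"
NEXT i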

Sadly, optimisation seems to be a dying art. People are so used to having abundant memory and CPU cycles that many coders just don’t seem to bother optimising anymore. The trouble is, if no one optimises, programs bloat and run slowly. Get several of these running on the same machine and you have a recipe for running through treacle. I hope this article has instilled some of the optimiser in you. It was this defining moment that made me see that coding isn’t just about getting a job done; it can be done in an incredibly artistic way, leading to something elegant and sometimes almost beautiful. Happy Coding.


DIM g(10)
DIM h(10)
SCREEN 11
g(1) = 100
h(1) = 100
h(2) = 100
g(2) = 200
h(3) = 200
g(3) = 100
h(4) = 200
g(4) = 200
stat(1) = 1
stat(2) = 4
stat(3) = 3
stat(4) = 2
stat(5) = 3

10
LINE (g(1), h(1))-(g(2), h(2)): FOR i = 1 TO 100: NEXT i
LINE (g(1), h(1))-(g(2), h(2)), 0
LINE (g(2), h(2))-(g(3), h(3)): FOR i = 1 TO 100: NEXT i
LINE (g(2), h(2))-(g(3), h(3)), 0
LINE (g(3), h(3))-(g(4), h(4)): FOR i = 1 TO 100: NEXT i
LINE (g(3), h(3))-(g(4), h(4)), 0
LINE (g(4), h(4))-(g(1), h(1)): FOR i = 1 TO 100: NEXT i
LINE (g(4), h(4))-(g(1), h(1)), 0
REM LOCATE 1, 1: PRINT g(1); h(1); stat(1)
REM LOCATE 2, 1: PRINT g(2); h(2); stat(2)
REM LOCATE 3, 1: PRINT g(3); h(3); stat(3)
REM LOCATE 4, 1: PRINT g(4); h(4); stat(4)

PSET (g, h)
PSET (g, h), 0
GOSUB 50000
GOTO 10

50000
FOR n = 1 TO 5

REM LOCATE 5, 1: PRINT n
50008 IF stat(n) = 0 THEN GOSUB 55000
50009 IF stat(n) = 1 THEN GOSUB 51000
50010 IF stat(n) = 2 THEN GOSUB 52000
50020 IF stat(n) = 3 THEN GOSUB 53000
50030 IF stat(n) = 4 THEN GOSUB 54000

51000 IF g(n) < 1 THEN stat(n) = 2
51010 IF h(n) < 1 THEN stat(n) = 4
51020 g(n) = g(n) - 1
51030 h(n) = h(n) - 1
RETURN 59000

52000 IF h(n) < 1 THEN stat(n) = 3
52010 IF g(n) > 640 THEN stat(n) = 1
52020 g(n) = g(n) + 1
52030 h(n) = h(n) - 1
RETURN 59000

53000 IF g(n) > 640 THEN stat(n) = 4
53010 IF h(n) > 480 THEN stat(n) = 2
53020 g(n) = g(n) + 1
53030 h(n) = h(n) + 1
RETURN 59000

54000 IF h(n) > 480 THEN stat(n) = 1
54010 IF g(n) < 1 THEN stat(n) = 3
54020 g(n) = g(n) - 1
54030 h(n) = h(n) + 1
RETURN 59000

55000 RETURN 59000

59000 NEXT n
RETURN

Retro : Shiver me timbers

Hmmmmmm. Sleek, silky smooth black body. Nice tight little footprint. Little grippy rubber keys. It’s Christmas 1986, you’re 14 years old and you’ve just unwrapped what in the future will be seen as a monumental shift in home entertainment. The Sinclair Spectrum 48k had arrived and you’re as excited as you were when you saw your first bra.

You slam home your Horace Goes Skiing uber game and hit PLAY. Buuuuuuuuuuuuuuuuuuuuuuuw Bip !!!! The header file descriptor pops up. Buuuuuuuuuuuuuuuuuuuuuuuw Biiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiiip ……….and loads. Buuuuuuuuuuuuuuuuuuuuuuuw Bip !!!! The data file descriptor pops up. Buuuuuuuuuuuuuuuuuuuuuuuw Biiiiiiiiiiiiiiiiieeeiiiiiiiiiiiiiiiiieeeiiiiiiiiiiiiiiiieeeeiiiiiiiiiiiiiiiiiieeiiiiiiiiiiiiiieeeeiiiiiiiiiiiiiiiii……………………..

You wait patiently for the ten minutes it takes for the game to load into the massive 48k of memory….and then roar down the slopes at a million miles an hour as giant spiders and trees flash by….Brilliant. But a 14 year old in 1986 can’t afford games. Not at £10 a throw. I could barely afford the £1.99 Mastertronic games off the market. Luckily my school friends could, and quite early on I cottoned on to the fact that games tapes could be copied. So I copied them. Usually to C90s from Boots the chemist. Much, much cheaper. And so began a long, un-illustrious, tempestuous love affair with piracy.

“The money was rolling in, and I’d even perfected the art of getting a factory-sealed-coiled-ninja-spring-loaded rental tape apart.„

I’ve pirated just about everything in my time, from Speccy games to DVDs and quite a lot in between, but I think things really started to get silly in the late 90s. I spend a lot of time watching movies, so naturally, before online DVD rentals, I spent a lot of time at my local video shop choosing films and chatting up the female members of staff, who would let me borrow pretty much whatever I wanted, as long as it was last thing before lock up and as long as I posted it through the door so it was there for opening. So, like a good little capitalist, I took as many as I could get away with and made copies for myself to watch at a later date. Before long I had a good few hundred tapes, which I was then copying again for mates, for a small fee of course. Before much longer, I had a pretty stable client base on the south coast, and it was growing fast because of my dedication to quality. All pirate films were recorded Macrovision-free, with no lumps, bumps or distortion of sound quality, and sleeves and labels were printed onto glossy card with a nice new ‘bubble-jet’ printer. See, I’ve still got the sales patter…you want some don’t you, be honest?

Then, something magical happened!!! A real stroke of luck. A local radio station had a phone-in competition to win a month’s free video rentals. WOW!! I entered the competition and, to my surprise, I won!!! The next Friday a name would be drawn from the previous week’s winners to win a whole YEAR of FREE VHS rentals. Friday came and so did the phone call. I WON!!!! I very nearly had a trouser accident I was so excited. What an opportunity!

So I got to know the staff at my new rental outlet too. Mark (the name has been changed to protect the guilty) was very interested in my little enterprise and we would have many a coffee evening, chatting at the counter about what films he wanted and how much discount he was getting, all safe in the knowledge that the shop security camera recorded only images and no sound. Later, I’d borrow the sleeves and copy them too, as well as the front and spine labels. It was all very professional. I was making a small fortune and everything looked rosy. The money was rolling in, and I’d even perfected the art of getting a factory-sealed-coiled-ninja-spring-loaded rental tape apart. I’d remove the tape, replace it with a cut down E180 el-cheapo copy and return it. That way, I would always have the original tape to copy; no more photocopies of photocopies. Quality tapes, and yours for only five quid a pop or three for a tenner.

“I made several million hurried trips up and down my stairs and into the garden where I deposited box upon box of ‘master’ VHS tapes over the six foot fence into my next door neighbour’s garden„

Then it all went horribly, drastically wrong. One day, some other enterprising little stick stole some cases from the store. The security tape was duly taken home by the manageress, who reviewed it to see who the culprit was. The good news is, it wasn’t me. The bad news is that I had been sat chatting to Mark at the counter over a coffee and a couple of jam doughnuts that very day. The really bad news was that the security tape actually DID have sound on it. The really, really bad news was that the police were on their way to my house, along with a FACT officer in a suit. Luckily, Mark managed to phone me not 20 minutes before they arrived to warn me.

It was a race against time. I made several million hurried trips up and down my stairs and into the garden, where I deposited box upon box of ‘master’ VHS tapes over the six foot fence into my next door neighbour’s garden, in addition to over 600 Playstation game copies. The police arrived with the fed guy, looking quite serious. I was grilled for over an hour about my operation, and every drawer in the house was violated as they searched. Somehow, they knew I had been tipped off so I could get rid of the evidence. They found only one pirate VHS tape, which was in a very well presented box (so good in fact that I myself had overlooked it). I was bricking it. Sweating like a nineteen stone marathon runner, wearing my favourite shell suit and my best poker face.

Looking back now, it was like being grilled by Agent Smith in his ridiculous sunglasses indoors and I was totally expecting to be carted off and tortured and even executed. Or even killed. Then tortured again for a bit until I was blubbering like a baby and confessing to everything from dipping my fingers in the sugar at the age of three to the most heinous and despicable act of dressing up in skintight leopard-skin all-in-one cat suit for an ABBA tribute concert and inadvertently baring my butt on a twenty foot video wall in front of hundreds of naïve shocked 70s revivalists. Eh-hem – anyways, moving on………

I was cautioned, and I was fined substantially (but still far, far, far, far – repeat to fade – less than I’d made), and I got away with it!!! I wasn’t a ‘big fish’, I was an insignificant boil on the face of humanity, worthy of nothing more than a quick lancing and a caution. Unfortunately, Mark lost his job. Worse still, I lost my membership and my free tapes for the year.

anon

Review : HandBrake

peteFor a long time, I’ve been looking for something that will take media from a DVD and put it in a suitable format for watching on [insert media device here]. It’s an age old problem: you want your media on a different device, but getting it there is often a problem, especially in the open source world. Sure, there are tens if not hundreds of converters, transcoders, encoders and compressors out there, but few have the finesse to actually complete the task. Most seem to use the same framework under the hood too, which often leads me to wonder how one can get it so right, and another can get it so wrong. Having played with ffmpeg and mencoder myself, I can say that whilst both are fantastic products, using them can be a bit of a black art. You have to really understand what you’re doing with video file formats and audio bit rates to make anything decent enough to play on another medium. I have spent many an hour encoding up bits of video and comparing them for quality, with the biggest problem of all being audio sync.

Enter HandBrake, a fairly new offering in the encoding market. Cross platform across the Windows/Linux/Mac trio, HandBrake offers not only a wide range of features but probably the easiest and most intuitive interface out there. I’ve tried several other rippers in the past, my previous favorite being dvd::rip, however I could never get a good solid video out of it, plus the interface was exceedingly complex. Now don’t get me wrong, I’m the kind of guy that loves to be able to change every conceivable option, but I also like having a simple interface that can be used when all I want to do is click-and-go. HandBrake seems to offer the above, and then some. The finer grain controls may not be quite as fine as something like dvd::rip’s, but they are still perfectly adequate for the task.

I took a reference video that I generally use for testing out new encoders and tried it out. HandBrake claims to be able to take almost any video file and convert it to a format of your choosing. However, I had difficulty finding videos that it would actually work with. Maybe it was because I had some videos with funky formats, I don’t know; what I do know is that I successfully managed to crash HandBrake several times when giving it one of these videos. The job I really wanted HandBrake for was taking some videos from an old DVD I had and converting them to play on my PSP. Thankfully, HandBrake did a really, really good job at this.

After inserting a DVD and choosing it from the file menu, HandBrake will first scan it for titles before adding them to a drop down box in the interface. The HandBrake screen is split into three sections: the top left rectangle gives a very simple source/destination box; below this is another rectangle of roughly the same size which gives more options for encoding; and on the far right is something I call a recipe list, where you can choose your output medium. HandBrake offers four main video formats, H.264, MPEG-4 (ffmpeg), MPEG-4 (XviD) and Theora, and these are customised by selecting one of the recipes. For example, choosing PSP as the output device will change the video type to MPEG-4 (ffmpeg) and will change the bitrate to 1024. Changing this to the PS3 makes a few modifications, the most noticeable being swapping MPEG-4 out for H.264 and upping the bitrate to 2500. The only thing which is a little disappointing is the inability to easily change the resolution of the resulting video; more on this later.

As with many rippers, you have the ability to decide what is most important to you, and HandBrake is no different here, giving you the option to choose from Quality, Bitrate or Target Size. Usually I find a bitrate that seems to offer good video quality and good file size, and this is what I did with HandBrake, finding the value to be around 500 for the PSP. Starting an encoding job is quite exciting, as you get to either start it immediately or put it into a queue, meaning you can queue up multiple titles from a DVD and leave a disc running overnight if you like.

For a 25 minute video, the file size on the PSP turned out to be around 100Mb. Using another video format, I have previously got this down to less than half that size at around the same, if not a higher, resolution. To be honest though, HandBrake did a fantastic job and the overall quality of the video was noticeably higher than my previous efforts. What impressed me further was when I put the same video in my media library and played it on my PS3: though the compression was noticeable, the video was really watchable, and that was something I never expected, especially with the low resolution that HandBrake pumps out for the PSP.

Overall, there’s not much left to say. For DVDs, HandBrake does a fantastic job of taking the physical media and turning it into a video for use on another media player. Though the file sizes are a little larger than what I used to strive for, I’ve come to the conclusion that I can’t watch 8 hours of media in a 30 minute journey, so why try to cram that much media in anyway? The number of options available in HandBrake is both impressive and extremely well laid out. My only gripe is the lack of control over the resolution; however, on closer inspection, a brave soul can change these details in the .config/presets file. In conclusion, if you need a simple DVD -> file converter, HandBrake is definitely a great choice.

Sign-off : I HATE MY PHONE

peteSo we come to the end of another issue of GeekDeck and with it, the sign off: the place where someone gets the last word. Surprise surprise, it’s me again. So what have I got to moan and whinge about this time? Mobile phones. Now don’t get me wrong, I have a mobile, or cell if you live in some parts of the world. I actually have two, if you count my work phone. However, what I can’t get over is just how bad the signal can be sometimes.

It’s not even as if I’m roaming in a car from base station to base station. I could understand it then and would fully accept missed and dropped calls, lack of signal coverage and quality issues. However, when I’m in my place of work, and my wife is at home, in a fairly built up area, I do not expect to have to call her 3 times to finish a 5 minute conversation. That, my friend, is utterly ridiculous. We’ve found with our house that upstairs is generally better than downstairs. Hence if you want to send a message in our house, you compose it downstairs and then run up to the top of the stairs, waving your arms around as if you were trying out for the international semaphore Olympics.

“However, when I’m in my place of work, and my wife is at home, in a fairly built up area, I do not expect to have to call her 3 times to finish a 5 minute conversation.„

It’s ridiculous that a technology as “mature” as mobile phones should have such issues when trying to perform the task for which it was designed and manufactured. Am I in a remote area? No. Am I driving around, really really fast, in my Ferrari? I wish. Am I phoning someone in another country, a thousand million miles away? No, I don’t have enough friends in my homeland, let alone in other countries. The fact that I can’t even finish a simple conversation without hearing the cheery little jingle that my phone seems to take such pleasure in playing when a call finishes isn’t just annoying, it’s purely outrageous.

Mobile phone technology was sold to the public under two pretenses, and I have yet to see 100% proof that either is correct: 1) that it is completely safe, and 2) that it is reliable. The first of these really, really gets my goat. Even I would have had health concerns whilst developing such a device, and would have made damn sure that adequate testing was performed so that ten years later we’re not still asking, “Wow this is great, but is it frying my brain?” or “Do I now have cancer?” The problem as I see it, and forgive me if I’m being naive, is that the technology was rushed to market, probably by competing manufacturers. Once again money seems to be the root of the problem, or actually not money but greed.

The same is true for wireless technology. Though it seems to be getting a little better now, the market was initially flooded with different specifications and terminologies, some of which worked well together, some of which didn’t. I understand the nature of business, but it’s a great shame when the emphasis on profitability exceeds that on customer service. After all, would you rather ship 1,000,000 units in the first quarter, only for people to realise how bad your product really is, or would you rather start off slower at 10,000 units because you took the time to get it right, and reap the benefit of being trumpeted as “the company that actually got it right first go”? Customer satisfaction is a big thing and though some people pay great attention to it, it is surprising how many companies out there don’t. The worrying thing for me is that these companies are still flourishing, meaning the market is still being saturated with bad products.

“The worrying thing for me is that these companies are still flourishing, meaning the market is still being saturated with bad products.„

I digressed slightly into wireless and other technologies, however I’m hoping that I’m not alone in my root thought here about the very nature of our industry. The fact is, poor products affect many people: the consumer, who is left stranded with something they have paid good money for that doesn’t work; the technicians, both working for the company and on behalf of the consumer, who have to deal with angry end users; and, overall, the company itself. Though I know this sign-off article won’t change the world, maybe, just maybe, there will be a few designers/coders/project managers who read it and think, you know what, you’ve got a point. Rushing things to market may be good in the short term, but the longer term picture may not be so rosy. A prime example of this would be the latest offering from Microsoft. Vista was touted to be the next big thing and although it certainly looked like it walked the walk, the fact that Microsoft have had to extend the availability of XP, allow people to downgrade from Vista to XP, and, if rumours are correct, bundle an XP VM with Windows 7, just proves the point that the big Vista push wasn’t all that worth it.

I’ve considered switching phone provider in an effort to achieve better call quality, but part of me thinks I shouldn’t have to bother doing this. I’ve also considered getting rid of my cell altogether and instead using a VoIP softphone to talk to my wife. Whilst this is a great idea in theory, it’s just not as portable at the moment, at least not with the current technologies. If I wanted to do that, I’d probably have to rely on yet another piece of wireless, mobile technology, and to be honest that’s something I just don’t have any trust in right now.