Archive for the ‘ Issues ’ Category

Review : HandBrake

pete

For a long time, I’ve been looking for something that will take media from a DVD and put it in a suitable format for watching on [insert media device here]. It’s an age-old problem: you want your media on a different device, but getting it there is often difficult, especially in the open source world. Sure, there are tens if not hundreds of converters, transcoders, encoders and compressors out there, but few have the actual finesse to complete the task. Most seem to use the same framework under the hood too, which often leads me to wonder how one can get it so right, and another so wrong. I’ve played with ffmpeg and mencoder myself, and whilst both are fantastic products, using them can be a bit of a black art. You have to really understand video file formats and audio bitrates to make anything decent enough to play on another device. I have spent many an hour encoding bits of video and comparing them for quality, with the biggest problem of all being audio sync.

Enter HandBrake, a fairly new offering to the encoding market. Cross-platform across the Windows/Linux/Mac trio, HandBrake offers not only a wide range of features, but probably the easiest and most intuitive interface out there. I’ve tried several other rippers in the past, my previous favourite being dvd::rip; however, I could never get a good solid video out of it, and the interface was exceedingly complex. Now don’t get me wrong, I’m the kind of guy who loves to be able to change every conceivable option, but I also like having a simple interface for when all I want to do is click and go. HandBrake seems to offer the above, and then some. The finer-grained controls may not be quite as fine as something like dvd::rip’s, but they are still perfectly adequate for the task.

I took a reference video that I generally use for testing out new encoders and tried it out. HandBrake claims to be able to take almost any video file and convert it to a format of your choosing. However, I had difficulty finding videos that it would actually work with. Maybe it was because I had some videos with funky formats, I don’t know; what I do know is that I successfully managed to crash HandBrake several times when giving it one of these videos. The job I really wanted HandBrake for was taking some videos from an old DVD I had and converting them to play on my PSP. Thankfully, HandBrake did a really, really good job at this.

Insert a DVD and choose it from the file menu, and HandBrake will first scan it for titles before adding them to a drop-down box in the interface. The HandBrake screen is split into three sections: the top-left rectangle gives a very simple source/destination box; below this is another rectangle of roughly the same size with more options for encoding; and on the far right is something I call a recipe list, where you can choose your output medium. HandBrake offers four main video formats, H.264, MPEG-4 (ffmpeg), MPEG-4 (XviD) and Theora, and these are customised by selecting one of the recipes. For example, choosing PSP as the output device will change the video type to MPEG-4 (ffmpeg) and the bitrate to 1024. Changing this to the PS3 makes a few modifications, the most noticeable being substituting H.264 for MPEG-4 and upping the bitrate to 2500. The only thing which is a little disappointing is the inability to easily change the resolution of the resulting video; more on this later.

As with many rippers, you have the ability to decide what is most important to you, and HandBrake is no different here, giving you the option to choose from Quality, Bitrate or Target Size. Usually I find a bitrate that seems to offer good video quality at a good file size, and that is what I did with HandBrake, finding the value to be around 500 for the PSP. Starting an encoding job is quite exciting, as you get to either start it immediately or put it into a queue, meaning you can queue up multiple titles from a DVD and leave a disc running overnight if you like.

For a 25-minute video, the file size on the PSP turned out to be around 100MB. Using another video format, I have previously got this number down to less than half that size at around the same, if not higher, resolution. To be honest though, HandBrake did a fantastic job and the overall quality of the video was noticeably higher than my previous efforts. What impressed me further was when I put the same video in my media library and played it on my PS3: though the compression was noticeable, the video was really watchable, and that was something I never expected, especially with the low resolution that HandBrake pumps out for the PSP.
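Those two numbers can be sanity-checked, since a file’s size is essentially the total bitrate multiplied by the duration. A minimal sketch, assuming the ~500 kbit/s video bitrate mentioned above plus a hypothetical 128 kbit/s audio track (the audio settings aren’t given here, so that value is a guess):

```python
# Rough output-size estimate from stream bitrates and duration.
# 500 kbit/s video comes from the settings above; the 128 kbit/s
# audio track is an assumption for illustration.
video_kbps = 500
audio_kbps = 128
duration_s = 25 * 60  # a 25-minute video

total_kbits = (video_kbps + audio_kbps) * duration_s
size_mb = total_kbits / 8 / 1024  # kbit -> kB -> MB
print(f"Estimated size: {size_mb:.0f} MB")
```

That works out at roughly 115MB, the same ballpark as the ~100MB file HandBrake produced; a lower audio bitrate would close the remaining gap.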

Overall, there’s not much left to say. For DVDs, HandBrake does a fantastic job of taking the physical media and turning it into a video for use on another media player. Though the file sizes are a little larger than what I used to strive for, I’ve come to the conclusion that I can’t watch 8 hours of media on a 30-minute journey, so why try to cram that much media in anyway? The number of options available in HandBrake is both impressive and extremely well laid out. My only gripe is the lack of control over the resolution; however, on closer inspection, a brave soul can change these details in the .config/presets file. In conclusion, if you need a simple DVD -> file converter, HandBrake definitely makes a great choice.


Sign-off : I HATE MY PHONE

pete

So we come to the end of another issue of GeekDeck and with it, the sign-off: the place where someone gets the last word. Surprise surprise, it’s me again. So what have I got to moan and whinge about this time? Mobile phones. Now don’t get me wrong, I have a mobile, or cell if you live in some parts of the world. I actually have two, if you count my work phone. However, what I can’t get over is just how bad the signal can be sometimes.

It’s not even as if I’m roaming in a car from base station to base station. I could understand it then, and would fully accept missed and dropped calls, lack of signal coverage and quality issues. However, when I’m in my place of work, and my wife is at home, in a fairly built up area, I do not expect to have to call her 3 times to finish a 5 minute conversation. That, my friend, is utterly ridiculous. We’ve found with our house that upstairs is generally better than downstairs. Hence, if you want to send a message in our house, you compose it downstairs and then run up to the top of the stairs, waving your arms around as if you were trying out for the international semaphore Olympics.

“However, when I’m in my place of work, and my wife is at home, in a fairly built up area, I do not expect to have to call her 3 times to finish a 5 minute conversation.„

It’s ridiculous that a technology as “mature” as mobile phones should have such issues when trying to perform the task for which it was designed and manufactured. Am I in a remote area? No. Am I driving around, really really fast, in my Ferrari? I wish. Am I phoning someone in another country, a thousand million miles away? No, I don’t have enough friends in my homeland, let alone in other countries. The fact that I can’t even finish a simple conversation without hearing the cheery little jingle that my phone seems to take such pleasure in playing to me when a call finishes isn’t just annoying, it’s purely outrageous.

Mobile phone technology was sold to the public under two pretences, neither of which I have yet seen 100% proof of: 1) that it is completely safe, and 2) that it is reliable. The first of these really, really gets my goat. Even I would have had health concerns whilst developing such a device, and would have made damn sure that adequate testing was performed so that ten years later we’re not still asking, “Wow, this is great, but is it frying my brain?” or “Do I now have cancer?” The problem as I see it, and forgive me if I’m being naive, is that the technology was rushed to market, probably by competing manufacturers. Once again, money seems to be the root of the problem; or actually not money, but greed.

The same is true for wireless technology. Though it seems to be getting a little better now, the market was initially flooded with different specifications and terminologies, some of which worked well together, some of which didn’t. I understand the nature of business, but it’s a great shame when the emphasis on profitability exceeds that on customer service. After all, would you rather ship 1,000,000 units in the first quarter only for people to realise how bad your product really is, or would you rather start off slower at 10,000 units because you took the time to get it right, and reap the benefit of being trumpeted as “the company that actually got it right first go”? Customer satisfaction is a big thing, and though some companies pay great attention to it, it is surprising how many out there don’t. The worrying thing for me is that these companies are still flourishing, meaning the market is still being saturated with bad products.

“The worrying thing for me is that these companies are still flourishing, meaning the market is still being saturated with bad products.„

I digressed slightly to wireless and other technologies; however, I’m hoping that I’m not alone in my root thought here about the very nature of our industry. The fact is, poor products affect many people: the consumer, who is left stranded with something they have paid good money for that doesn’t work; the technicians, both working for the company and on behalf of the consumer, who have to deal with angry end users; and, overall, the company itself. Though I know this sign-off article won’t change the world, maybe, just maybe, there will be a few designers/coders/project managers who read it and think, you know what, you’ve got a point. Rushing things to market may be good in the short term, but the longer-term picture may not be so rosy. A prime example of this would be the latest offering from Microsoft. Vista was touted to be the next big thing, and although it certainly looked like it walked the walk, the fact that Microsoft have had to extend the availability of XP, allow people to downgrade from Vista to XP, and, if rumours are correct, bundle an XP VM with Windows 7, just proves the point that the big Vista push wasn’t all that worth it.

I’ve considered switching phone provider in an effort to achieve better call quality, but part of me thinks I shouldn’t have to bother. I’ve also considered getting rid of my cell altogether and instead using a VoIP softphone to talk to my wife. Whilst this is a great idea in theory, it’s just not as portable at the moment, at least not with the current technologies. If I wanted to do that, I’d probably have to rely on yet another piece of wireless, mobile technology, and to be honest that’s something I just don’t have any trust in right now.

Editor’s Letter : A brave new start


Dear Geeks,

Welcome to the first issue of GeekDeck, a new zine/blog thingy that I’m hoping will whet your appetite for all things geek. You may be wondering why GeekDeck was started, and why it’s going to be different from every other technological zine/blog thingy out there? (Note to self: find a better word for zine/blog thingy.) Well, I can’t tell you it’s going to be the best thing since MOSFET transistors, but I can tell you that we intend to do our best to cover as many different areas as possible.

Thinking about it the other day, I decided the main purpose of GeekDeck was not so much to provide people with interesting articles to read, but to entice them in to read one article and, at the same time, hopefully get them to read another from a completely different section. The overall plan is to expand people’s geek palates: get them interested in things that they otherwise might never have stumbled upon.

We’re aiming to release an issue every month, which is where it’s more like a magazine. However, we’re using a blog to do it, hence where it’s more like a blog. At the moment, the team is six strong. These guys will form the core of GeekDeck, and in time we’ll start opening it up so that anyone can contribute articles to the zine/blog thingy. (The problem is I just hate the word ezine… ewwghgh, gives me the shivers.)

On a slightly different note, we’ve decided to release a PDF version of GeekDeck too. This PDF will look a lot more like a traditional printed magazine. The aim of the dual release is to enable people who like fancy magazines to have the PDF, and people who prefer RSS/text-based versions to be happy too.

Well, I think I’ve rambled on enough for the first editorial. I really do hope you enjoy the first issue, and if you do, blog about it, digg it, slashdot it, spread the word, because as you can imagine, in the early days we’ll need all the help we can get.

The GeekDeck Team

Review : Cherry Picks of the Month

A new month has started and with it a new selection of interesting applications has caught my fancy. I’m always on the lookout for new and innovative ideas for graphical interfaces, making it a point to flip through web sites and magazines just to see what other people are doing out there.

This month’s selection includes a couple of games, a spreadsheet, a great dynamic debugging tool and a new way of managing your files.

Professor Fizzwizzle is a fun, mind-expanding puzzle game where you take control of the diminutive genius, Professor Fizzwizzle. You must help the professor use his brains and his gadgets to solve each exciting level. Do you have what it takes to get past the Rage-Bots and bring the prof back to his lab? My oldest daughter started playing the demo for this addictive game and soon enough father and daughter were huddled together debating how best to complete the puzzles.

Professor Fizzwizzle

I also recommend FizzBall by the same company, which also received high scores from my daughter. Have you heard the old cliché “learn by playing”? This game will definitely validate it for you and your kids.

Pyspread is a cross-platform spreadsheet application that is based on and written in the programming language Python.

Pyspread - The power of Python in a spreadsheet

Can you imagine being able to write your own “formulas” in Python and use them in a spreadsheet? Ohhh, the possibilities…

Parasite is a debugging and development tool that runs inside your GTK+ application’s process. It can inspect your application, giving you detailed information on your UI, such as the widget hierarchy, X window IDs, widget properties, and more. You can modify properties on the fly in order to experiment with the look of your UI. In other words, it is like Firefox’s Firebug extension, but for GTK+ applications. Salivating yet?

Checking out Eye of GNOME with Parasite

In order to poke around Eye of GNOME and uncover all the layers that make up the UI, I ran the following from the command line:

GTK_MODULES=gtkparasite eog

I could now freely inspect every single component of the interface and interact with them via a very handy Python shell at the bottom of the screen. I highly recommend it!

Finally, GNOME Zeitgeist, a very interesting way of managing files on your desktop. My dad recently bought a brand new computer (a Mac, actually) and paid an extra fee to have a technician come to his place and transfer all of his content, thousands of digital photos and songs accumulated through the years, from the old computer to the newer one. Once all of his files were safely transferred and the old computer conveniently put to use as a doorstop, he was presented with an interesting problem: he had absolutely no idea where his media was!

Smart file management with GNOME Zeitgeist

This is where an application like GNOME Zeitgeist comes in handy. The physical location of files in your operating system should not be of concern to a desktop user. Ask a user when he took a picture or recorded a song, and you may well get an answer: “It was the first week of March, 2009”. Now ask which folder/partition that file resides in, and you’ll be out of luck! Being able to search by date and tags is a very interesting way to deal with the proliferation of multimedia files accumulated through the years, and this project is well worth watching (literally, watch the video here).

Industry : Time is an illusion


They say time heals all wounds, they say that patience is a virtue, they say that learning the ancient art of yoga will allow you to travel through space and time. OK, so the last one may be a bit of a fib, but has anyone who ever made these ridiculous claims ever worked in the IT industry?

It’s hard to find a single day when you don’t have to wait on a computer to do something. Whether it’s booting, saving, loading, initialising, saving again, copying, pasting, deleting, undoing, formatting, installing… you always find yourself sitting there, wishing it could go just that little bit faster. Do you realise that if you added up all the time you sat on your rear end in a week waiting for a PC to “work”, you could have watched an entire season of Jack Bauer saying dammit and throwing a wobbler at terrorists? OK, I get it, enough with the exaggerating; gee, you’re a tough crowd to please, I’m the one out on a limb here, not you.

It’s got to the stage where we are so obsessed with making the most out of every minute on our PC that it affects our ability to multitask. Why? Picture this. You start your installation and hit the go button, and it gives you an estimate of 2 hours. OK, not a great start, but you accept the fact it won’t finish any time soon, and quickly leave the PC to start your next task. Great usage of time, right? Wrong! What you don’t know is that shortly after you left your installation running, it popped up and asked you if you were really, really sure you wanted to install TinkleTown’s YouSmell 2007. So upon returning to your desk you find a) you’ve run out of coffee, b) you’ve just wasted 2 hours of installation time, and c) there’s now no chance of being able to install it before you leave, because it’s 5:22 and you have to swing by the pet store on the way home because your pooch has contracted an exotic infection from eating a spark plug you accidentally left in your slipper.

We now have to babysit our longer running tasks because if we don’t it really does hamper our productivity. It’s not just the fault of grandfather time for making 100GB of data take so long to copy, it’s also the fault of those annoying estimates, which either change more often than a celebrity’s weight, or are so far off the mark it makes you seriously wonder if these people ever managed to learn how to use a clock at all. The following is total 100% pure fact: I left a backup running over a 3-day weekend once, and came in on the Tuesday to find it had apparently been running for 126 days. Say whaaaaaaat?

“We now have to babysit our longer running tasks because if we don’t it really does hamper our productivity„

I digress. I apologise. My point in writing this was to discuss whether or not all this waiting makes IT professionals more patient people. It’s weird, but it’s something I’ve sat and thought about for a while. I’m the kind of person who orders something online and checks the tracking number at every stage to see if it’s been picked, placed on the lorry, dispatched, sorted, dispatched again… etc. Sitting here writing this, it’s kind of like expecting to see a real-time progress bar for the dispatch of my order. I want to know when it’s going to arrive, and if it doesn’t arrive when it’s supposed to, boy do I get mad.

Thinking about this further, it’s clear to see that it could be due to the fact that my expectations, on a day-to-day basis, are constantly being shattered. In IT, more than anything else I believe, we are told things will be one way when really they’re another. Think about it: technical support… did that even exist before IT came into existence? Technology requires support because of its fragility. The thousands of howtos/tutorials/guides fill us with hope that the instructions provided are going to leave us in configurational bliss. How often that turns out to be false.

Unfortunately, we’ve become used to being lied to by IT software and hardware vendors as well. The new version of SolarEdam is better than ever (false). It runs twice as fast as the previous version (true if you’ve bought it with a new PC). It has an enhanced, intuitive user interface (we’ve made the buttons bigger). It is much more stable (sure, if you run it in our test environment with no user input). It has increased security (we ask you if we’re allowed to do everything, because we can’t make intelligent decisions ourselves). And it has many more applications available for it (because we’ve monopolised the market even further).

Some of the problem lies in the fact that we are often running such diverse and varied configurations of systems and software that it makes it almost impossible to cover every eventuality. The field of IT is so large that you just wouldn’t know that if you install the 4.3.2a version of TurdTacular SE, you can’t run A-dopey Nightmare Creator, as there is a typo in one which overwrites a registry key used by the other 😉 So are my hopes and dreams for an accurate, truth-telling, fast IT world fanciful? Can they ever be realised?

However, even if we know that deep down nothing is what it seems, we’re still filled with some kind of hope that maybe this time will be different. Maybe it’s about more than just being patient or impatient. Maybe it’s about the difference between hope and despair. We as IT professionals have faith in the technology we work with day in, day out. Some of us believe so blindly in our systems that if anyone tells us there is an issue, we try and find any reason for the fault other than “we made a mistake”. It’s almost religious, and we know how often that causes problems in the world. We’ve become addicted to the failure which we experience on a day-to-day basis, so much so that when we do achieve victory in a certain area, more often than not we’re reduced to *meh*.

The IT industry is almost like a macro-culture. Speaking with some people in the industry recently, I was surprised at how many of them feel the same way I do. When I started this article I expected it to be met with a 50/50 mix of “Ain’t that the truth” and “Oh, you have it soooo utterly wrong.” What I actually encountered was different. It felt like getting into a hot bath after a long day staring at millions of tiny squares lighting up. Just out of interest: spending 8 hours looking at a 1280×1024 screen, at a refresh rate of 70Hz, means you’re typically going to see 2,642,411,520,000 of the little blighters in one day. If that’s not dedication to a cause, I don’t know what is.
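That figure checks out, assuming every one of the 70 refreshes per second repaints the full 1280×1024 frame:

```python
# Pixels displayed over an 8-hour day at 1280x1024, 70 Hz,
# assuming each refresh repaints every pixel of the frame.
pixels_per_frame = 1280 * 1024   # 1,310,720 pixels per frame
refreshes = 70 * 8 * 60 * 60     # 70 Hz over 8 hours
total_pixels = pixels_per_frame * refreshes
print(total_pixels)  # 2642411520000
```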

“We’ve created this huge industry to make things faster and more efficient, but we require an even greater industry to support it„

I’ve taken a roundabout look at time, patience, faith, hope and despair: a pretty deep set of words to be covered in an article about IT. But have we come to a conclusion? I’m not ready to answer that yet. Call me a tease, but I’d like to entertain some more ideas about patience and time-wasting. You see, another part of the discussion goes like this: “Computers make things faster.” No. Computers can make some things faster if they are implemented by people familiar with the processes and techniques of computational optimisation. Put an end user with no training in computational optimisation in front of a computer, ask them to take a process and automate it, and just sit back and see exactly how slow they can make it. Sometimes it doesn’t even extend that far. Ever seen a user take data from a spreadsheet, print it out and manually work through 50 pages of it because they don’t know how to perform the process automatically? I have, by the way, many times.

Now we’re hinging on something big: the whole concept of computing is flawed. We’ve created this huge industry to make things faster and more efficient, but we require an even greater industry to support it. Yet we all understand this; some of us even argue vehemently against it. We don’t want to believe it. Has it all been in vain? Have we ourselves come full circle? I’ve noticed I started off with a rather lighthearted approach and descended into attacking the very culture I live in. Yes, I get frustrated. Sometimes I want to tear my own head off at my minuscule mistakes. Sometimes I want to slap myself in the face and shout, “Do you even know what you’re doing?” However, for all the frustrations, the pitfalls and the perils, to me, IT is fun. I love talking about it, understanding it, and working with it. So call me hypocritical, call me stupid, call me whatever the hell you want, but if you’re reading this, chances are you know exactly what I mean.

Take care, nut-job!

Interview : Popey the sailor man

Fighting hard against the forces of evil, Popey the sailor man travels the high seas of software, hunting down pirates and cramming copies of the GNU General Public License down their throats till they bleed FSF. Well… not really… Alan Pope is one of the nicest guys I’ve ever met in the Free Software world. He’s been consistently advocating free software and working on open source projects for many years now. I first met him at a HantsLUG meet about three or four years ago. Speaking to him both then and online, I’ve always been stunned at how down to earth he is. It made sense, then, to grill his brains over a roasting fire for the first ever interview section in GeekDeck.

So, I’ve known you for a good few years now, and I’ve seen you get involved in many projects in the OSS community, but, how did it all start?

Back in the mid nineties I was working in a local college, looking after some of the IT systems, when a student first mentioned Linux to me. I was aware of the concepts of Freeware, Public Domain and Shareware, having bought floppy disks through the post and downloaded software from bulletin boards, but I’d not come across Linux or the GPL. To be honest the guy didn’t sell it well, he told me about some new system that came on a zillion floppy disks and took quite a bit of effort to get up and running. I dismissed this as a silly idea. How things change.

Over the following years I’d seen mention of Linux in computer magazines and even saw books on the subject with CDROMs attached. I think the first distro I tried out was in around 1995 and might have been Red Hat or Caldera which came with a book I bought.

Eventually I started using Red Hat on a server in the garage at home and then moved over to Red Hat Linux on the desktop a little while before Windows XP came out. I don’t remember how, but soon after that I discovered the Hampshire Linux User Group, their mailing list and the IRC channel. I attended a couple of meetings, one at a college and one at a local pub, and as a result made some very good friends whom I still very much value today.

In about 2002 I moved from Red Hat to Debian, and in 2004 from Debian to Ubuntu. I now run Ubuntu on every desktop, laptop and server I own, with one exception being my home firewall which runs ipcop. Since using Ubuntu I’ve been supporting new users, helping to organise and sponsor events and generally help out where I can.

A journey like that of a few other Linux users I know, and I must admit I hit the “quite a bit of effort to set up” wall myself. Do you think Linux has the same selling problems with today’s distros?

I think Linux has a legacy with some people that it needs to work hard to shake off. I frequently get people telling me that Linux is hard to install, hard to use, requires intimate command-line expertise, and there are no drivers for it. None of that is true. We all know modern Linux systems ship with more drivers built in than any other productive supported operating system, but getting that message out is difficult when users still find their USB webcam, video card, printer, scanner, wifi dongle or whatever device doesn’t work out of the box with Linux.

“I frequently get people telling me that Linux is hard to install, hard to use…. None of that is true„

I recently bought an HP printer which worked out of the box in Ubuntu. I plugged it all in and turned it on. By the time I’d sat down in front of my PC it had already installed the drivers for the printer, fax and scanner. I was up and running with that printer faster than any other printer on any other platform. That shouldn’t be surprising, it should be normal, the standard, the way things work.

We have seen the proliferation of graphical tools that abstract away the command line on Linux, making use of the shell less necessary. Installers have become ridiculously easy to use, and the desktop managers look and feel as good as, if not better than, any other computer system’s.

One of the biggest issues, I think, is people’s resistance to change. We have a hard time selling Linux to people as an alternative because they already have a system which (for the most part) works. People on other platforms have to contend with random application popups telling them they need to download updates, viruses and spyware, intensive system scans and other obtrusive system maintenance tasks which get in the way of using their computer.

Unfortunately most people seem to have accepted this as the norm, as if all computer systems are like this. We can keep telling them this isn’t the case, and with things like Software Freedom Day, LUG meetings and Free CDs we have some of the tools at hand to be able to achieve that. It’s an uphill struggle though, and it will take a long time.

Do you think that open source as a whole has progressed as Linux Distros have?

I’d say more so. Look at the usage of Mozilla Firefox, Audacity and VLC on non-Linux platforms. Of course they’re all popular on Linux, but because they’re open source they can be ported to run in other environments, and they’re pretty popular as it goes. Mozilla Firefox probably has a larger share of the browser market than Linux has of the desktop, so I’d consider that pretty successful.

Then there are the open source libraries and “under the covers” applications which people don’t see but which get used very heavily. FFmpeg is a good example of this; it’s used in pretty much every video conversion utility you can find.

It seems that Open Source does better when people don’t shout about it being so. Many people use Firefox, VLC and the like without ever knowing what open source really is. Do you think this is a secret key to success, or does it bother you that the real root of the “cause” is hidden?

I think most people flat out don’t understand or even care about whether code is open source or not. They just want something that works, works quickly and doesn’t get in the way of what they’re doing. Firefox and VLC are great examples of that.

I do think the term “Open Source” has gained a lot of traction over the last few years, and at least in part that has to do with common applications on Windows. However Linux and Linux based applications are also gaining mind-share too. Maybe the two-pronged ‘attack’ is the way forward. Get new users used to Open Source software on their platform of choice, then the transition to another platform will be smoother.

“I would of course prefer it if I could use entirely free software on my computer today, but I’m not about to bury my laptop in the sand and live in a cave until that day happens.„

I’m quite a pragmatic kind of guy. I use free software on Windows, and I use non-free software on Linux. I tend to use whatever is the best solution for the problem at hand. I’ll continue to use the right proprietary tools for the job until those tools are replaced by suitable open source ones. I would of course prefer it if I could use entirely free software on my computer today, but I’m not about to bury my laptop in the sand and live in a cave until that day happens.

Heheh, I know what you mean there.  Do you think there is anything to be said for the more “purist” attitude, or in fact does it do more harm than good?

There is a place in this world for Richard Stallman (GNU) and Matt Lee (FSF). I happen to agree with some of their politics, but not their approach. Whilst I would like to be able to run purely free software on my computers, the fact is that there is software I need to do my job which is not free and open. Should I change my job? Perhaps, but currently the job I have pays for my mortgage and feeds my two children. I could force my entire family to make significant sacrifices to enable me to run only free software on my computers. I’m not about to do that; does that make me a bad person?

Not in my eyes.  I’ve noticed you’ve been quite involved with media in Linux, i.e. audio/video.  Do you think we’re at a stage now where this is supported OOTB, both for playback and editing?

Having not used any “real” audio or video editors on other platforms, it’s difficult for me to tell whether we’re doing better or worse than them, to be honest. Having said that, I’ve seen training videos about iMovie and other Mac-based video and audio editors and they look pretty slick.

As far as audio editing goes, Audacity and Ardour are fantastic products. The Ubuntu Podcast from the UK Local Community team is edited using those two packages alone, and I think we end up with a pretty good result. We could no doubt do better, but I don’t think that’s down to the tools we use, more the people using them. “PICNIC” as a friend tells me – “Problem In Chair, Not In Computer”.

“I could force my entire family to make significant sacrifices to enable me to run only free software on my computers. I’m not about to do that, does that make me a bad person?„

In the video editing arena it seems we have a plethora of half-finished editors. Rather than one solid product we have one that’s good for grabbing DV from cameras, another that’s good for stitching together video and doing simple effects, and yet another that has a good preview option. Having seen a couple of screencasts about the video editor built into Blender I gave it a go, and I was blown away. Whilst the interface is somewhat obscure in places, once you’re familiar with it, the video editing capabilities are in excess of anything else on Linux that I’ve tried. Take a look at these screencasts and decide for yourself.

I take it by “real” you were meaning commercial/proprietary applications?

No I meant “professional grade”.

Ahh…I see.  What do you see as the next big challenge for Linux to overcome and is there anything us users can do?  Spreading the word springs to mind, which you do with the ubuntu-uk podcast.

I don’t think there’s one single silver bullet for increasing adoption of Linux. I see a three-pronged attack, which starts with more OEMs supplying Linux pre-installed on a diverse range of their machines, and providing solid support for those systems. Secondly, more software vendors need to embrace Linux as a platform for their products, whether Open Source or not. Already the very largest software vendors, including SAP, IBM and Oracle, have products that run on Linux; this needs to filter down to other software vendors. Finally, hardware vendors need to ensure their devices are compatible with Linux out of the box.

None of them are easy for anyone, and with just one of them we won’t “win”, but with a combination of all three, I think we’ve got a much better chance than without.

Do you think that a greater adoption in the home market will inevitably lead to greater adoption in the business environment?  Does it matter?

Not tremendously, no.

It will certainly raise awareness. I like the idea that people have a diverse set of skills: knowing how to use a Windows PC, OSX or a box running Linux is a bit like being able to ride a bike, drive a manual (stick shift) car or an automatic. It’s a set of skills which can be useful in numerous situations. I’ll let you figure out which one is which.

Windows usage in the workplace certainly helped adoption in the home. The vast amounts of revenue Microsoft made from corporate customers enabled them to target home users also. Perhaps that’s the approach we should take.

I work in large companies that implement new business systems all the time. These systems have odd interfaces with strange navigation systems and bizarre terminology. What if the change wasn’t in a back-end system, but on the desktop? People would accept it, if it was deemed beneficial for the company, shareholders, customers and employees.

Look at it this way, if you worked for a company as an office clerk, and it was decided to replace Microsoft Windows with Apple OSX on the desktop, you might complain but at the end of the day it would be made to work by those in charge. You would hopefully be given training and coaching on the new platform and you might be less productive initially until you get over the different interface, paradigms and workflow, but you’d get it in the end because you’d be using it every day. People learn, they do change, and they will accept the new way of working, it happens every day in businesses all around the world.

Now replace “Apple OSX” with “Ubuntu Linux”. No difference.

Large companies seem scared of open source, is the support model largely to blame?

I’m not so sure they are scared of open source. They may be scared of change for change’s sake, where they can’t see an immediate benefit to their bottom line. The old “Nobody ever got fired for buying IBM” seems to have morphed into “Nobody ever got fired for buying proprietary software”.

There are huge global multi-billion dollar companies that use free and open source software in their main business systems. These systems are on the back-end and as such are rarely seen by the hundreds of thousands of users.

Fact is, if you’re on the internet in any way whatsoever, be that browsing, watching videos, sending email or instant messaging, you’re using open source software whether you like it or not. Take away the wealth of openly licensed software such as Apache, BIND, Perl, PHP, Sendmail, Linux & *BSD and countless others and the Internet – and a significant chunk of the world’s businesses – would take a step back to the 1960s in an instant.

I guess it’s really a matter of knowledge. Many large companies probably run a fair amount of open source software that only gets known about when things go wrong. It’s been a great interview Alan, and GeekDeck thanks you very much for taking the time to respond to the questions. Is there anything else you’d like to add?

Just thanks for the chat, and all the best with GeekDeck.

Gaming : Top Gaming Moments

This is a little bit odd for me.  I usually write about computing, or the open source community, or Linux, or anything computery, if that is even a word.  Today, I’m tackling a slightly different topic: gaming.  As a prelude to another article I’m going to write, I wanted to discuss my favorite moments in gaming.  From using the Atari 800, to firing up the PS3, this is a short journey through what I believe are my defining moments in the gaming arena.  The events are not grouped in any way, and run roughly chronologically.

Hardest Game – Ollie’s Follies – Atari 800

This game was one of the longest loading on the Atari 800, and in fact I think there must have been something wrong with our tape, as you could almost guarantee that it wouldn’t load.  Ollie’s Follies has got to be one of the hardest games I ever played.  It was a platform game where the entire level was contained in one screen.  Though the controls were responsive, timing was so crucial that it was almost impossible to get far into the game.  And by crucial, I’m kinda understating it.  Imagine being able to snap your fingers shut on a pin travelling past you at 50 mph.  Of course the other thing to note is that I was only about 13 years old.  I’d also missed out on much of the coin-op era and so my reflexes hadn’t toughened up yet.  Oh ok fine…..I’m making excuses…..but it was freakin’ hard!

Most Satisfying Moment – Super Mario Land – Game Boy

I think this was probably the first console game that I ever completed.  Unlike some of the other rather unknown titles I have presented, SML is fairly well known.  Coupled with the fact that there were no saves, and that it was a fairly lengthy game, completing SML for the first time was an exceedingly satisfying moment.  Nintendo hit the nail right on the head in most respects in this fine gem of the platform genre, but one part shines for me above the rest.  Music.  The themes that play through the game are great without a doubt, however it has to be the ending theme which made me feel most satisfied.  As I sit here now writing this, I can still hum the tune in my head, even though I haven’t played the game for probably over 10 years.  Flying through the clouds in the ending sequence listening to that music was one of the most surreal and satisfying moments in my gaming life.

Seamless Integration – Final Fantasy VIII – PS One

If there ever was a game which hit the point of seamless integration of FMV and gameplay, then FF VIII was it.  I can still recall the moment that my wife and I fired it up for the first time.  The graphics were absolutely breathtaking, something at which Square Enix have become absolute masters.  And then it happened.  The FMV, which I believe was showing some kind of water craft arriving on a shoreline, slowed and our character got out.  Then we sat there.  Nothing happened.  As I recall the water continued to lap on the shoreline, but nothing else happened.  We waited for a good minute, thinking the game had crashed or that there was a bug in the FMV.  It was only then we realised that as the FMV had drawn to a close, the last few frames had become the background of the in-game graphics.  I touched the controller sticks, and the character burst into life.  It’s these levels of sheer detail that make some game makers stand out above all the rest.

Scariest Moment – Silent Hill 3 – PS2

My word, talk about brown trousers time.  I’m not talking about the game in general.  It is a pretty freaky game, but overall not that scary; however, there is one bit of Silent Hill 3 that I just adore for the way it made me feel the first time I played it through.  Having played the previous Silent Hill games, I pretty much knew what to expect, but this threw me completely off guard.  I’ll say one word.  Mirrors.  Ok, I’ll say a few more, since 99% of you have no clue what I’m talking about.  You walk into a room……the door locks behind you…..nothing particularly scary there.  Then you walk around the room and look at one of the walls, which is in fact a gigantic mirror.  As you walk about the room in a state of some confusion, all of a sudden your character’s reflection stops.  A tad freaky.  Then a bath tub which is sitting in the room starts to fill up with blood and goo, but only in the reflection.  It spills out all over the floor, at which point you’re running around literally wetting yourself.  It touches your reflection and your alter ego starts to decay.  All the time you’re thinking “Oh my word, let me out, let me out, let me out”, and then perhaps the freakiest thing of all: nothing happens to you whatsoever.  You’re sure you’re gonna die.  You’re sure you’ve taken a wrong turn somewhere, but then…..all is well.

Most Impressive Graphics – Killzone 2 – PS3

Following on from the mention of seamless integration, Killzone 2 probably has to be the most spectacular demo I have ever played.  The initial sequence, with the close encounter with a friendly craft, turns into proper gameplay in a way normally reserved for flamboyant Royal visits, and by that I mean with the utmost finesse.  I was watching the beginning sequence thinking, is this an FMV or is it in-game graphics?  I got my answer when my HUD appeared.  Couple that with the most awesome AI I think I have ever seen.  These guys are clever.  They seek cover and will fling their gun over their head and blind fire to try to hit you.  They try to flank you.  They don’t exit from hiding at regular intervals.  Heck, I don’t even think these guys would ever use the same toilet twice.  Absolutely marvellous demo.  The graphics are absolutely amazing; attention to detail has not been skimped on at all.  In one scene inside a building, light is streaming through some windows.  If you’re standing in the right place you get some awesome lighting effects off of your helmet.

Presentation – LittleBigPlanet – PS3

Well, it has to be said that LBP is without doubt one of the most beautifully presented games on the face of the planet.  The choice of Stephen Fry for the narration was, in my book, absolutely brilliant.  I must have watched the intro sequence many times, purely because I love the concept and I just love the way it’s been presented to the audience.  The simplicity of there being a world created from the dreams of all its inhabitants is probably nothing new, but it’s the last line of the introductory narration that always gets me: “And you can go there now”.  The guys at Mm have certainly done a grand job here.  The attention to detail is fantastic and I just love the cries of “Oh man, you can even see……do……make X do……”  Sheer genius.

Longevity – Destruction Derby – PC

This has got to have been one of the games I played most whilst growing up.  I would sit in front of the screen for hours upon hours smashing cars up till they exploded.  I like to tell people that I just love the physics of things smashing into each other, but deep down I think it’s just something that boys never grow out of.  It was pretty fun, too, to thwack a car in the tail at just the right point to spin it round 360 degrees and hurl the driver into a high pitched scream.  I think in all honesty the reason I loved playing the game so much was that, although the levels were often very samey, because it was based on real physics you’d very rarely get a round that was identical to the last one.  That’s the beauty of entropy, it always increases.  Tell that to the young ones.

Culture : The tale of the inaccurate TV show


As I was finishing up my last article on the almost religious nature of the IT industry, I turned my attention to thinking about the next article. Some people say I should finish up the first one before I move on, that somehow letting my concentration slip at such a crucial time in a literary masterpiece’s lifetime is both irresponsible and unforgivable. To those people I would like to say “Shut up”. I’m guessing that the number of people that fall into that category is so insignificant that the crux of that whole rant was pretty moot anyway.

Man! Digression already and I haven’t even begun to describe the subject of this article yet. I was drifting in and out of thought about the media. I had recently been quite angered by a program broadcast to the mass media which featured, in one segment, some “hacking” (and I use the term very, very loosely; perhaps “attempted computer misuse” would have been a better term for it) and detailed some steps with which to secure your PC against unwanted attackers.


To call the program irresponsible and deeply flawed is probably an understatement for me, but then I do tend to get pretty excited when something annoys me; it usually leads to another article, you see. However, the program in question displayed a lack of responsibility by detailing information that lured people into a false sense of security. Chumps! I thought. It dawned on me then: why were they doing a segment on computer security anyway, when their usual banter was fairly well confined to talking about gadgets and all things technologically niche? Hacking and computer security have gotten a lot of press these days, and in my opinion a lot of bad press. Most instances that are being attributed to computer security breaches are actually due to people being either a) stupid, b) careless, or c) a fantastic combination of both, which probably resulted in them being fired quicker than selling fake memory sticks on eBay.

So what exactly does the media gain from this? They rile up society into thinking that hackers are everywhere, then give them false information about how to protect themselves. An example of this was the program in question talking about how you should put a password on your Windows XP PC, as it is then unable to be accessed by people unless they know the aforementioned secret password. Granted, it’s a little better than the Windows 98 days of being able to remove the pwl file which stored all the passwords to the user accounts. I mean seriously, who would ever class that as a good idea? It’s about as secure as etching your PIN on to your ATM card. Oh I know, how about making it really secure: let’s ROT13 the PIN number first!* It’s a well known fact that the standard Windows login password, and the standard Linux root password come to that, do absolutely nothing to safeguard the files on your PC. I knew people that at 14 were able to boot from a Puppy Linux CD to recover files from broken Windows/Linux installs, and I’m betting there are people even younger than that who know what they’re doing now.

“To call the program irresponsible and deeply flawed is probably an understatement for me„

People who advocate the use of a login password to “protect” their PCs data against a large number of threats should be taken aside and lightly beaten with a paddle until they discover the error of their ways. Obviously the more vigorous the beating, the shorter the amount of time taken to learn their lesson, however being an advocate of peaceful resolution, a light tapping would surely eventually give the desired effect, even if the result of all the tapping was mild wood burn leading to infection and finally blood poisoning.

The problem is in effect related to my last article. Time. People want a quick fix. If you tell someone they have to read a 900 page manual before being able to properly use their PC securely, they are going to politely tell you where you can stick your 900 page manual. However, it’s all due to the fragility of our technologies. If we block off enough ports and lock down the OS enough, we obtain a secure system for your average end user and below. The problem with this is that the aforementioned system is so crippled that its usage is severely limited. We get to the age old trade off of Security vs. Convenience.

It’s a no-brainer really. Make a system completely open-ended and loose and it’ll have more holes in it than your old man’s sweater. Start to secure it, and the usability takes a nose dive. The funny thing is, if I drew an imaginary graph of convenience vs security, the graph wouldn’t do exactly what you’d expect. You might expect a nice linear relationship: as the security increases, convenience decreases. Then you hit a magical point I affectionately like to call the point of subversion. You see, on the convenience axis there is a line, a threshold if you will: the end-user stupidity threshold. If convenience dips below this line, a user will take steps to make the system more usable to them. Oh how helpful, you may be thinking. Nine times out of ten, it’s not. The reason for the name, the point of subversion, is that this is where users begin to subvert security. Let’s follow a case study…come on boys and girls, gather round…..everyone got their carton of milk and sarcasm suppression hats?  Excellent, then here we go.

We have a small company, we’ll call them Aturd Technologies. They start off with an office and 4 PCs. As the company grows, they introduce passwords to the system. The graph maintains its shape. Then they introduce access control; still the graph roughly maintains its shape. Then they mandate password changes every 30 days. Bingo. We hit the threshold. 40% of users are now incapable of remembering their password correctly, and so write it on a post-it.

We obtain our first tooth in the graph, and indeed in the mouth of the end user, determined to bite the IT department’s loving and generous hand. The graph continues, security increasing slightly, users being educated, when Aturd Technologies decides to implement proxy servers. Another tooth, as users start to bring in home laptops. You get the general idea.
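If you fancy seeing those “teeth” in code, here is a toy model of the idea. To be clear, the threshold value and the “half the excess gets undone” rule are invented purely for illustration; nothing here is a real security metric.

```python
# Toy model of the convenience-vs-security graph described above.
# All figures are invented for illustration only.

SUBVERSION_THRESHOLD = 40  # the "end-user stupidity threshold"

def convenience(security: int) -> int:
    """Naive linear trade-off: every unit of security added
    costs one unit of convenience (both on a 0-100 scale)."""
    return 100 - security

def effective_security(security: int) -> int:
    """Past the point of subversion, users work around controls
    (passwords on post-its, personal laptops past the proxy),
    so each extra control buys less than it appears to."""
    if convenience(security) >= SUBVERSION_THRESHOLD:
        return security  # graph behaves as you'd expect
    # a "tooth": half the security added past the threshold
    # is undone by user workarounds
    excess = security - (100 - SUBVERSION_THRESHOLD)
    return security - excess // 2

print(effective_security(30))  # below the threshold: prints 30
print(effective_security(80))  # past it: only 70 "sticks"
```

The point of the sketch is simply that nominal security and effective security diverge once you cross the threshold; Aturd Technologies’ 30-day password rotation is the moment the two curves part company.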

The key to this is education. Maybe this is what the media is trying to achieve. The problem is, they typically educate John Baggins with information about dangers and threats which aren’t so pertinent to him, i.e. threats confined to corporate security, and then try to fix the problem by giving some wishy-washy advice which is about as useful as a sledgehammer made from cucumber pulp.

What we need is one of two things: a system that doesn’t break or get attacked (never going to happen), or an end user that understands all the avenues of attack and their associated mitigation techniques (very few of these rare gems actually exist in the real world; much more common is the “I think I know everything about everything, but I don’t even know what TCP/IP stands for” type). So again we have to settle for a happy medium. For me, anyway, that means a) good solid education of users, without introducing false hopes (Product X isn’t the only “real” solution out there), and b) locking a user’s system down sufficiently well.

“Make a system completely open-ended and loose and it’ll have more holes in it than your old man’s sweater. Start to secure it, and the usability takes a nose dive„

It extends into the area of blame too. Recently I was on a train whilst a user was looking at a highly confidential report from their workplace. I actually contacted the workplace and made them aware of the issue. The person on the other end of the phone seemed far more interested in finding out who the user was, as opposed to what I had seen and how.

Unfortunately, we live in a dangerous world where security is all around us. It’s part of almost everybody’s lives, yet so often people don’t understand the reasons why they have a rotating password, or a CAPTCHA on a contact form. People generally hate doing something without understanding why. This is where a lot of education goes wrong, in my opinion. To adapt a well known phrase: “Give a user security advice and they’ll use it, just for a day. But give them the understanding of what the guidelines mean, and they’re far less likely to put your bits and bytes in the hands of attackers.”

* Before I get a deluge of emails telling me that ROT13 doesn’t apply to numbers: what a surprise, I already know. It’s called sarcasm. There was another example of it in this sentence. Can you spot where it is? Answers on a postcard.

Feature : Summer’s here, code up!

This year I’m hoping to take part in the Google Summer of Code, hereafter referred to as GSoC for brevity and memory saving reasons.  I mentored in GSoC back in 2007 and I thought it would be nice to run a small feature on it, looking at the advantages, disadvantages, loves and losses of the scheme.

To introduce, for those of you that have been living in caves for the past few years, GSoC is a scheme to get students doing some real paid coding work over the summer months.  Google pick a certain number of Open Source mentoring organisations, and each organisation is then assigned a number of projects for students to work on.  Students currently receive $4500 for completing the program, and the mentoring organisation receives $400 for each student who finishes.  “$4500!!!! That’s a hefty sum of cash,” I hear you say.  Well, yes it is, but it’s also great motivation to actually get something done.  Unlike some of the development that goes on in the Open Source community, which is very ad-hoc in nature, GSoC requires a fairly rigid planning document, detailing what goals are to be achieved, time-lines, etc.  In short, it introduces the student to real project planning.

Obviously the goals must be achievable and also must be useful to the mentoring organisation.  There’s no point submitting a student application to improve the ability of GIMP to produce RSS feeds of all the actions you make to your images, because chances are 99.999% of users are not going to want it, even if you did spend all summer creating it.

Mentoring organisations provide a “mentor” for the student, who guides them through the whole process and steps in when they feel the student needs a little help.  Students benefit from having someone with more experience take them under their wing for a while, whilst mentors often gain some experience in managing people/resources.

Well, I noticed the activity when reading the Planet Ubuntu feed and thought it would be a good opportunity to become more active in Open Source Software. From there, I was able to “attach” to a mentor and with a few recommendations and ideas I was lucky enough to get in. Summer IS for fun, and I had a great time programming and learning how to work with an online community.

For me it was the perfect transition from unconfident programmer to knowing I could at least do something that could make a difference. It was also a great motivator. My wife found it hard to think that buying a computer was worth its investment, but when we took the money I got from Google Summer of Code, our views and ideas of a computer totally changed. It was no longer just about entertainment and education; it was about a tool that could help me and my family make money. A beautiful thing indeed!

Jason Brower – GSoC 2007 Student

It’s a great way for people to get rewarded for contributing good solid code to a community project, but does it get abused?  Students get paid for the work that they do over the summer period, but the terms of the agreement state nothing about what happens thereafter and rightly so.  You don’t want students getting tied in to years of work for no benefit.  Of course if students want to do that, then it becomes the root of the drive behind Open Source.  However, is it abuse when a student states from the outset that they are only going to do what has been specified in the task, and nothing else?  I have seen several instances where students have flat out refused to maintain the code after it’s completed.  Whilst this isn’t against the letter of the GSoC law, I kinda feel that it is against the spirit.  It could be argued that, “Hey, welcome to the world of real work, you only get paid to do what you’ve been asked to do.”  I understand this.  After all I do work in the IT field, but it really does seem to be going against the grain of Open Source in general, swimming against the current.

Then there are problems on the mentor side too.  There have been several students who have been left floundering because they can’t get hold of their mentor or their mentor never responds to emails.  Mentors should remember that they have made an agreement to help the student, if they can’t commit the time, then realistically they should give up the position to someone who can, however this doesn’t always happen.  It’s sad, but I remember one student who several weeks into the scheme found out his mentor had left, and he had no one else left to mentor him.

At the MoinMoin Wiki project, we are pleased that we can participate in Google Summer of Code 2009 again. This is the 4th time for us and our SoC projects were mostly a success in the past – we enjoyed mentoring the students, the students learned a lot from us (real world coding in Python, testing, using DVCS, group work and communication within the F/OSS community, …) and it brought the MoinMoin project big steps forward, because the student developers could focus on bigger projects for a few months (instead of the main developers, who are usually just using their spare time and don’t do MoinMoin Wiki development as a full-time job).

We are very thankful that Google is funding students to help us and that we can help getting them into F/OSS development – this is definitely much better than grilling hamburgers in your summer vacation.

Thomas Waldmann – Project Admin Moin 2009

For some students, this isn’t really an issue.  To be honest, many of them get involved in Open Source in their first or second years of university, and by the time they go to get involved in GSoC they are already well established programmers with hundreds of hours under their belt.  To them GSoC is a vital part of maintaining their education by paying for fees etc.  They don’t necessarily need a mentor, but formally and ritually it is a good thing to have.

Problems aside, the list of organisations included in this year’s GSoC is as varied as the last, including big names such as WordPress, MySQL and Moin, along with smaller but equally important ones such as Abiword, BlueZ and the Etherboot project.  It’s exceedingly important that we invest in the future developers of the Open Source realm, and a hearty commendation must go to Google for funding what is surely an expensive investment.  1000 student projects multiplied by $4500 is not a small sum of money.  Agreed, for the Google giant it’s probably a small drop in the ocean, however kudos to Google for organising and maintaining the event year in, year out.
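For a rough sense of scale, here is the back-of-the-envelope sum using only the figures quoted in this article ($4500 per student, $400 per mentoring organisation for each finisher, and roughly 1000 student projects); treat the result as an estimate, not an official Google figure.

```python
# Rough cost of a GSoC round, using the figures quoted above.
STUDENT_STIPEND = 4500   # paid to each student who completes
ORG_PAYMENT = 400        # paid to the org per finished student
PROJECTS = 1000          # approximate number of student projects

total = PROJECTS * (STUDENT_STIPEND + ORG_PAYMENT)
print(f"Approximate outlay: ${total:,}")  # -> Approximate outlay: $4,900,000
```

So even before Google’s own administration costs, the program runs to nearly five million dollars a year.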

Having participated in Google’s Summer of Code in the past, I learned a lot of cool stuff. Not only did it improve my programming skills, it also made me think of new ways to do things and gave me new insights on collaborative work with people you don’t meet in person.

When last year’s GSoC ended, I already knew I was going to apply again this year. I was lucky that the organization I wanted to work with got accepted and that I can now continue to work on the things I didn’t have the time to do last year. This year, I am far more confident with respect to what I think I can accomplish because I already know many of the internals of the software I am (hopefully) going to work with.

While money certainly is a motivator, it is not the main reason for my participation. There are some things I am excited about that I want to do; And if I’m allowed, I will do these exact things (and even get paid for them! Now isn’t that great? :-))

Christopher Denter – Prospective Student 2009

I can think of only a few other organisations that have initiatives to help cater for and nurture the younger Open Source generations.  Had GSoC been around when I was in university, and had I actually been interested in Open Source at the time, I would have jumped at the chance to earn $4500 doing something that I loved doing.  Don’t get me wrong, I still spent a vast amount of time on the computer, but I was a very commercialised little boy, drawing CGI images in TrueSpace and making music with Logic Audio.

As a previous mentor I can certainly vouch for the effectiveness of the program.  It gives people a new way to think about things, on both sides of the fence.  Students learn from mentors just as much as mentors learn from students.  If that doesn’t happen something is seriously wrong.  It gives students a motivation, something credible and impressive to put on their CV, and sometimes opens up new opportunities that they may have never previously had access to.  It often integrates them even more into the community and really gives them a sense of worth, a feeling of achievement, from start to finish.

Having said this, there are still those who, for whatever reason, don’t finish.  Whether it be a lack of mentor input, a lack of talent, or even plain bone-idleness, there are a few that don’t make it through the gauntlet that is GSoC.  After all, it isn’t a walk in the park for some.  In fact, for some it’s downright difficult.  Project planning isn’t something that comes naturally to all.  Many people are used to working in an ad-hoc way and, to be honest, sometimes fight against the idea of authority.

Overall, for all its good points and bad, GSoC is a fantastic program.  If you are in full-time education, you’ve missed the boat this year, but look out for GSoC next year.  The student I mentored last year has gone on to do many great things, and I feel very privileged to have been a part of that.  Yes, it sounds sappy, but it’s totally true.

Gaming : ‘Home’ isn’t where the heart is

markPlaystation Home is Sony’s somewhat delayed response to the ever increasing social networking phenomenon. It’s also an attempt to attract its share of the ever expanding audience of the ‘casual gamer’. With Sony’s entry, all three of the main consoles now have their own social networking infrastructure. The Wii has the ‘lightest’ of the three networks, mainly focusing on ‘offline’ networking, putting the emphasis on groups of people playing together around one TV, but it is noted for the fact that it was the first to introduce avatars to a wider audience. This is not to say that the Wii doesn’t have online facilities though. You can message friends, send them your home-made avatars and, with some of the more recent games, play against each other online. In recent months Microsoft has pushed its New Xbox Experience, essentially an update to the console’s user interface, the ‘dashboard’, granting access to a wealth of community material such as film trailers, interviews and reviews, all from the main interface. The new update also follows the example set by the Wii in allowing users to create avatars of themselves.

So, where does Sony’s ‘Home’ fit into things? They’ve attempted to integrate the best bits of Nintendo’s and Microsoft’s efforts, but with a slightly different emphasis. With both the Xbox and the Wii, social interaction often occurs after a game has been chosen and loaded, which, whilst providing the user with like-minded individuals, can often limit social diversity. There is no central place to meet new people. Sony has taken note of this and has produced a stand-alone environment that exists without any association to a game. This environment consists of a virtual world, similar in nature to that of the ever popular Second Life, in which users create an avatar of themselves. It’s through this avatar that users can explore the world. Right from the start it’s obvious that Sony have tried to implement a virtual version of real world social interaction. Every avatar has an apartment, there’s a bowling alley, an arcade, a shopping mall and a cinema. All places where Sony’s key demographic are likely to interact socially.

“Upon leaving the shopping mall a small remote control helicopter heads straight for me, exploding inches from my face. I don’t ask”

So is it any good?… Frankly, in my opinion, no. My first use of ‘Home’ was marred right from the word go by the horrendous amount of loading needed. I signed up and waited 10 minutes for the installation files to download. I then waited again whilst the files installed. ‘Great, I’m in’ I thought. After a quick tutorial and a very limited ‘create an avatar’ task, you’re left to look around your rather minimalist apartment. I went to leave the apartment and guess what… ‘Now downloading plaza’. By now 30 minutes had passed and all I’d done was stack a few pieces of furniture and attempt to throw chairs off the balcony (within ‘Home’ obviously, but by this point I was considering it for real). Out in the plaza I was struck by two thoughts. The first was how empty the place was, and the second was how limited the ‘create an avatar’ function actually is. I was faced with about 20 or so people, of which about half looked similar to my avatar. Most of these people were dancing, randomly, with no music. ‘OK… moving on’ I thought, ‘Oh, cinema, I’ll go there’… ‘Now downloading cinema’… grrrrrrrrrr. The experience hadn’t started well.

So, to the cinema I went. Once inside (10 minutes later) I get the sneaking suspicion that a new film is about to be released. I could watch a constantly looping trailer of the new film Watchmen, a teaser trailer for the new film Watchmen, a 2 minute ‘making of’ the new film Watchmen, a short action scene from the new film Watchmen and about a million posters for the new film Watchmen. I left the virtual cinema with a strange urge to go to a real cinema to watch some kind of film, the name escapes me… Leaving the cinema means going back to the plaza once again. ‘Ahh, at least this has already downloaded’ I thought. Well, yes, but… ‘Plaza loading…’ ZZZZzzzzzzzzz.

So, back in the plaza. Another avatar runs up to me and sends a private message in German. I slowly but politely reply, stating that I don’t know any German. (A painful task, as text input is never quick with a joypad, and I refuse to spend a fortune on the aftermarket keypad addition for the joypad. Yes, I could use a USB keyboard, it’s just a shame I don’t own one!). The reply to that was in German. I laboriously reply once again and apologise. They reply, once again, in German, but this time in capitals with many exclamation marks, the avatar flailing its arms wildly… I run, deciding to hide in the bowling alley, forgetting I haven’t downloaded it yet. I hit the invisible wall at the door that refuses entry until it downloads, and look back to see the German avatar chasing after me. I run back to the cinema whilst the bowling alley downloads. That was close. Whilst there I find out about this wonderful new film called Watchmen, don’t know if you’ve heard about it but I damn well have!!!! Anyway, I head to the bowling alley once it’s installed and think about going bowling. Thinking about bowling is as far as I get. I reach the top of the stairs and cast my ‘virtual’ eye over the bowling alley. This virtual world is open to greater Europe, and how many bowling lanes do they supply?… six… genius. I walk up to one game and start to write a message asking to join, only for another avatar to run up, controlled by someone with far more supple thumbs, who manages to type ‘hello, please could I join your game’ in the time it’s taken me to write ‘Hi’. Not wishing to look like the last kid to be picked for sports, I leave promptly. Wow, that was worth watching the Watchmen trailer for the 7th time for.

“By now 30 minutes had passed and all I’d done was stack a few pieces of furniture and attempted to throw chairs off the balcony”

So, I finally arrive at the last place, the shopping mall. Now I know why the apartment is minimalist and the avatars are similar. The shops within the mall all sell virtual clothes and furniture. £4 for a virtual sofa for my virtual apartment… Excuse me? £4 for a virtual item that does nothing but sit there? On to the next shop. £2 for a new virtual shirt? There are shops in real shopping malls that sell real shirts for that much!! I look at what’s free. Wow, what a coincidence: costumes from the new film Watchmen. I stand there, watching carbon copies of my avatar arrive, get changed, and leave as carbon copies of Watchmen characters. I laugh and walk out (minus the costumes). Upon leaving the shopping mall a small remote control helicopter heads straight for me, exploding inches from my face. I don’t ask. Realising that there are more pressing things in real life, like choosing which fragrance of deodorant to put on and finding my car keys, I leave ‘Home’.

Whilst I think you’ll agree that all of the above would make for one interesting day out if it were real, I really couldn’t be bothered to go through all of that on purpose in the virtual world when in reality I’ve decided to allocate time in front of my Playstation to play a game. I’m not knocking the principle. I’m just as likely to be on Facebook or some form of messenger program as the next person, I just don’t think Home is worth it. There’s nothing in there I can’t already do with Google, YouTube, a few chat rooms, a few simple browser-based games and possibly Amazon, all of which are available anyway if you have an internet connection. Using the modern wonders that are the mouse, the keyboard and a multi-tab browser, it’d also be a damn sight quicker! In its favour, it is presented very well, with good graphics and some very nice touches to the virtual world. It also has massive potential, as other areas, shops and games could simply be tacked on with a future download; however, as it stands, it’s a rather hollow experience. It’s easy to see why Microsoft and Nintendo decided to dispense with the visuals and stick with a simple menu-based system. The Xbox system I find particularly good, as choice content is displayed on the main screen as ‘headlines’ next to your options to load a game. You are free to ignore them and load your game, or explore the ‘headlines’ further. The Wii has its ‘Channels’ that sit on the front screen. You view the channel or you don’t, it’s up to you.  You let the content do the talking, not the visuals (or random Germans).

Is it fair to criticise a free addition? Not really, but you can’t help wondering whether the time and money being pumped into Home’s development couldn’t be better spent elsewhere. Anyway… I’m off to a real cinema… Watchmen looks quite good.

Education : Open Source at the Cutting Edge of Science Education

So I’m not your traditional computer geek. I’ve never played Dungeons & Dragons or World of Warcraft. I’ve never even seen a single episode of Battlestar Galactica. What I am is a nerd, a science nerd. I’m a PhD Chemistry student and my research has nothing to do with computers. However, open source software does have a long tradition at universities. I’m reminded of things like BSD (Berkeley Software Distribution) and the MIT license, the number of .edu domains hosting Linux/Unix OS mirrors, and the sheer volume of open-source software that started out as grad school projects. The open-source development model fits in fairly well with the “academic freedom” that universities stand for. It also fits well with the model of scientific inquiry. Ideas like peer review, constant empirical testing, and an emphasis on experimentation play right into the way open-source software developers often approach their work.

So one would think that open-source software would be simply ubiquitous in science education. However, my experience, at least in the field of Chemistry, is quite the opposite. You can find lots of hard-core research applications that grad students or research scientists might use, but when it comes to teaching undergraduates in particular, I very rarely see open-source software used. In my department, in fact, it’s been non-existent until this semester. I’d like to share two personal examples of how open-source software is helping push university science education, and a little bit about why I think open-source software is going to revolutionize science education in many ways.

This semester I’m the teaching assistant for a Physical Chemistry Laboratory class. I’m teaching mostly chemical engineers, and they mostly feel it’s just a tad like watching an old black-and-white movie: boring, antiquated, and irrelevant. One of the things we’ve done this semester to liven things up a bit is add a lab involving a Scanning Tunneling Microscope (STM). I’ll spare you, dear reader, all the gory details, but in short it lets us take pictures of molecules. It’s really pretty cool, and the students get to work on a brand new instrument. The STM itself costs around $10,000 USD, but the company wanted to sell us image processing software for ~$3,000 USD. The professor in charge of the class called up an STM expert at another university who had been using the same instrument for teaching and asked if the software was worth it. He said, “Nope, just use this open-source program called Gwyddion”. Cool: we save $3,000 and get a better working program that is cross-platform, imports the native file format of the data acquisition program, and is easy enough for students to start using with only a few minutes of instruction.

There are two important aspects of this software to me. First, it was developed by scientists, for scientists. So much of the time when I use proprietary science software, I can just tell that it was designed by somebody who knew more about programming than about the science the program was trying to accomplish. Gwyddion has a community of users and developers around it to support and guide the project. Second, it’s free and cross-platform. It may not seem like a huge deal, but a lot of the time science software costs more than $1,000 USD, which makes it impossible to let students take it home. Usually we make them check out CDs (and there are generally not very many) for a few hours at a time in the department’s computer lab. Students hate that, and so they really never explore. Being able to say “you can download this for free at home, on Windows, Mac, or Linux, and play around with it” is really a big deal.

The second personal example I’d like to talk about is an up-and-coming molecular editing and visualization project called Avogadro. Like Gwyddion, it is cross-platform and developed by scientists (covering biochemistry, molecular physics, and applied mathematics). I got involved with the project while looking for something to replace the software my department is currently using to teach first-year students about quantum mechanics.

Our current software was released in 1996 and cost something around $1,000. It crashes constantly, students hate it, and I end up teaching a “how to deal with computers” lab rather than the chemistry the students need to learn. I found that Avogadro had all the features needed for the first half of the lab, but was missing a couple of key features for the second half. Since I had been in contact with the developers on the mailing list and IRC (and had even contributed a patch here and there), I asked them if it would be feasible to add the features we needed. They were interested in the idea and started working on getting the needed code roughed out. To make a long story a bit shorter, starting in the fall semester we will have a program that will not aggravate the students, will have much better features, and that they can install at home to do homework with.

What I love about this story is that the open-source development model let me, as an educator, get involved in the development of the tools I need to help students learn some rather difficult material. The developers were interested not in what they were getting paid to code, but in what their users’ needs were. And as teachers use their software, they will continue to refine features that will drive education further.

I hope I’ve been able to give you a little glimpse of some aspects of open-source software that have great potential for improving science education. It is more modern because it is generally faster paced, more accessible because of its low or zero cost and because it is often cross-platform, and more science-like because it’s written by scientists and lets users get involved in design and development.