Have we already reached Desktop nirvana?

Recently I’ve been taking a look at Windows 7 and, though I don’t want this to turn into a flame war, I have to say I’ve been less than impressed with the latest offering from Microsoft. They have fixed one bug that annoyed me to distraction, but other than that I really can’t see what all the fuss is about. However, I don’t want to get bogged down in this. What I really want to talk about is the way the desktop has changed over the last two decades.

My interest was sparked by the fact that I spent a considerable amount of time tuning Windows 7 to look and feel like my current Windows XP setup. I then realised that I had done the same on Windows XP to make it look like 98. But why? Is it because I am an old fuddy-duddy now? Is it because I am unable to embrace change? Is it because I cannot take the time to learn something new? I hope not!

I’m forced to think about the other side of the fence: the Linux world. I have used a large variety of Linux distros: Ubuntu, Fedora, RedHat, CentOS, Knoppix, BackTrack, Foresight and Puppy, to name the ones I have used regularly. I have had no problem adjusting to these, or to newer versions of them. So what’s the problem with Windows 7? Why do I constantly feel the need to live in days gone by? I guess the ultimate question is in the title of the post: have we already reached desktop nirvana?

This is all very interesting. Go all the way back to Windows 3.1 and the change we encountered moving to Windows 95: the difference was vast. It changed the entire way we used the desktop. Now move from Windows 95 to 98, and there are no real major differences in the design or layout of the desktop; the same goes from 98 to XP, save for a few updates to the taskbar and window grouping. Now we go from XP to 7, and I have to say that for me it’s more of the same. There is nothing really new; the groupings work a little differently, but I missed having the text labels for programs. I usually work with more than 40 windows open at a time. Call me messy, but it’s just how I work; the nature of my job means I flit from task to task all day, every day.

So have we seen the last of desktop development? Of course not! I’m not for one minute saying that there will never be any more advances. What I am saying is that all of the “nice” new features, the ring switching, the window previews and the rest, are just “nice to have”, and they eat resources. We should be making improvements to the desktop that offer us better workflow and more enhanced capabilities, but without the need for a machine that is three to four times as powerful as its predecessor. I still recall that the compiz engine, when it first came out, ran acceptably on an old MX440, at a time when I had just bought a GeForce 7300 GT OC. I was stunned.

You have to wonder about the real motivations for change here. Call me cynical, but the Linux desktop seems to move and evolve with demand from the users. We ask for things to change, and when that volume hits a critical mass, change happens. OK, it’s not always this way, and sometimes design leads take the bold decision to try something new, but for the majority of development this seems to me to be how it happens. Now take a look at the Microsoft way of thinking and it’s a little different, but then so is their business model. They thrive on people buying new versions of the software, so it’s in their best interest to get people “hooked” on new features.

Microsoft sell training and they sell certifications, so a portion of their revenue is directly dependent on how good a job they do of changing things sufficiently to require more training. Look at the shift from Office 2003 to Office 2007. As I said, call me cynical, but this way of thinking also benefits the PC manufacturers. It enables them to push bigger, better and faster machines, which will ultimately all run at the same speed once they become loaded with the next generation of system-hogging operating systems.

I digress. Apologies. Going back to our idea of a desktop nirvana: the root notion of a desktop hasn’t changed in many, many years. True, the mobile market is starting to make us rethink things, but it is still rooted in the idea of windows, files, folders, icons and desktops. It amazes me that in all this time we have not really come up with a single new methodology for using a computer that has been accepted and implemented. Cue some references to projects X, Y and Z. I understand people have probably tried, but I’m forced to conclude that for the foreseeable future, the desktop is as good as it gets. Yes, we’ll get things like wobbly windows, snappy left and right thingies and the like, but the fundamental desktop model doesn’t look like it’s going anywhere just yet.

But maybe this all stems from the fact that files and folders, the root method by which we store information, need an overhaul. The model is based heavily on the office of days gone by, where you had reams of paper files in folders, and that made sense while we were still in a transitional period. Now, though, it’s causing issues. People can’t get to their information quickly enough. I personally want a tag-based file system. I want to be able to type something like “iso, 6 months ago, ubuntu” and have it instantly bring back an ISO image of Ubuntu that I had 6 months ago. What I don’t want, however, is a tracker system that has to use up system resources to keep an index of the files. I want this built in. I want the world I know.
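To make the idea concrete, here’s a rough sketch of what such a tag query could look like, with a plain Python dict standing in for metadata the filesystem itself would ideally maintain (the paths, tags and dates here are all made up for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical in-memory tag index: path -> tags plus a timestamp.
# A real implementation would live in the filesystem itself, as argued above.
INDEX = {
    "/data/ubuntu-desktop.iso": {
        "tags": {"iso", "ubuntu"},
        "added": datetime.now() - timedelta(days=180),
    },
    "/data/holiday-photos.tar": {
        "tags": {"photos", "backup"},
        "added": datetime.now() - timedelta(days=30),
    },
}

def query(tags, newer_than=None, older_than=None):
    """Return paths whose tag set contains all requested tags,
    optionally filtered by when the file was added."""
    hits = []
    for path, meta in INDEX.items():
        if not set(tags) <= meta["tags"]:
            continue
        if newer_than and meta["added"] < newer_than:
            continue
        if older_than and meta["added"] > older_than:
            continue
        hits.append(path)
    return hits

# "iso, 6 months ago, ubuntu": anything tagged iso+ubuntu added ~6 months back
print(query({"iso", "ubuntu"}, older_than=datetime.now() - timedelta(days=150)))
```

The point is that the lookup is by meaning, not by location; no directory tree is walked at all.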

Let me know what you guys think.

If a virtual tree falls in a virtual forest, does it bit shift?

Hi guys. It’s been a long time since I wrote in here. I’m busy with numerous other projects, but I was thinking about something the other day that I just couldn’t pass up the opportunity to discuss. Now it should be noted before I begin that I’m not an expert in any of the topics I am discussing, but I’m offering the subjects up to see what you guys think. So don your philosophical hats, and let’s take a weird trip into the essence of existence.

Starting simply
Let’s start with a simple concept. I have some data. This data is the word ‘happy’. Now, if I have that data stored somewhere and I can retrieve it, it could be said that I possess that data. Hopefully you’re all happy with this (pun intended). If someone else takes a copy of the same data, they could be said to possess the same data as me. If they have done this without my consent, we could naturally assume that they have stolen this data, since they now possess the same data as me.

As previously mentioned, I’m not an expert in any of these fields, but this seems to be borne out by the current state of law and affairs. If someone copies a piece of software or a video file without the consent of the owner of the data (and let us note here that the owner is not necessarily the possessor of the data), then they are liable to be prosecuted for stealing the data.

Speaking in riddles
Now that we have that nailed down, let’s mix things up a little. Consider that we have our word, ‘happy’, and that we encrypt it with a very simple encryption algorithm, implemented in the following manner. First, turn each individual letter into a numeric counterpart, mapping a-z to 1-26. This gives a = 1, b = 2 and so on.

Now we come up with an encryption “key”. This key will be the same length as the data we are trying to encrypt; in this case let’s use the word ‘weird’. We apply the same numeric conversion to our “key”, add the two words together numerically, one letter at a time, and then turn the resulting numbers back into letters via the same mapping.

h a p p y == 8 1 16 16 25

w e i r d == 23 5 9 18 4

++

? f y ? ? == 31 6 25 34 29

The problem here is that some of our numbers are over 26 and so can’t be represented. To rectify this we’ll make it so that if we go over 26, we just subtract 26 from the result. The final encrypted version then becomes:

e f y h c == 5 6 25 8 3

Though that was laborious for some of you, it was necessary in order to proceed to the next step in my musings. So we now have ‘efyhc’. Anyone looking at that “word” isn’t going to have a clue what it means. That’s the purpose of encryption, right? To “hide” the data.
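For anyone who wants to play along, the whole scheme fits in a few lines of Python. This is just a sketch of the toy algorithm described above, not production crypto:

```python
# Letter-addition cipher: a-z map to 1-26, key letters are added to
# plaintext letters, and anything over 26 wraps by subtracting 26.

def to_num(word):
    return [ord(c) - ord('a') + 1 for c in word]

def to_word(nums):
    return ''.join(chr(n - 1 + ord('a')) for n in nums)

def encrypt(plain, key):
    out = []
    for p, k in zip(to_num(plain), to_num(key)):
        n = p + k
        if n > 26:
            n -= 26  # wrap-around, as described above
        out.append(n)
    return to_word(out)

print(encrypt('happy', 'weird'))  # -> 'efyhc'
```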

Possession is 9/10ths of the law
What’s interesting now, though, is this: if I only hold the encrypted version of the data, does the data still exist? Harking back to the whole “if a tree falls in the forest and no one is around, does it make a sound?” argument, it’s actually surprisingly similar. Without the encryption key, the string ‘efyhc’ is essentially just random data.

What is it that separates it from actual random data? The goal of an encryption algorithm is to make the cipher text indecipherable: in essence, to make it as random as possible, so that no patterns exist. To all intents and purposes we could call this a random string. After all, it could turn up in a random string quite easily.

That sad little man inside me wanted to satisfy this, and wrote a little python script. Over several runs of the script, my trusty precious data took between 1 second and 1 minute to turn up. Now remember that with a truly random number generator my word could have turned up first. It could also never have turned up at all, no matter how long I ran it. That, my friends, is the beauty of random.
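For the curious, here’s a sketch of that experiment. My original script is lost to time, so this is a reconstruction of the idea rather than the actual code:

```python
import random
import string

def search_random(target, max_tries=1_000_000, rng=random):
    """Draw random lowercase strings of the same length as `target`
    until one matches; return how many draws it took, or None if the
    budget runs out (which is also a perfectly possible outcome)."""
    n = len(target)
    for tries in range(1, max_tries + 1):
        candidate = ''.join(rng.choice(string.ascii_lowercase) for _ in range(n))
        if candidate == target:
            return tries
    return None

# A 2-letter target turns up after ~676 draws on average; a 5-letter
# word like 'efyhc' needs ~26**5 (about 12 million) draws on average,
# hence the wildly variable wait.
print(search_random('ef'))
```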

Going back to our idea of the existence of the data: imagine now that we destroy our encryption key. In our example the real data is pretty easy to remember, but let’s now assume it isn’t. Assume it is a large document. If we destroy our encryption key, does the data still exist?

Build me a wardrobe squire
So do you have an answer? It’s a little different to its physical analogy. Often people refer to encryption as locking something away, ensuring people don’t get access to it. Though the end goal is the same, securing the data from people we don’t want to have access, the mechanisms are quite different. In the physical world, if we put a padlock on something, the physical item still exists. We may be denied access to it, but it still “physically” exists. What this means is that, given enough effort, we can gain access to it, by cutting off the lock, blowing a hole in a safe or some other means. We may not even know what the physical object is, but we know it exists.

In the digital world, the “object”, for want of a better word, our data, is transformed into something different: something which bears no resemblance to the original at all, at least if we use a good enough encryption algorithm. We spoke earlier of the effort to make our data look like a good old random number sequence; in the physical world, it’s interesting to consider how one would actually implement encryption of physical objects. Perhaps the best example I can think of is flat-packed furniture. Here, the parts in the pack would be the cipher text and the instructions would be the key. We shouldn’t be able to make the furniture (gain access to the data) without the instructions.

The difference between the two, encryption and flat-pack furniture, then really just becomes the difference between the physical and virtual worlds, which we are all already aware of.

Give me back my stuff!!
So let’s turn our attention back to possession for a while, and consider the duality of the physical and virtual worlds together. If I possess just the flat-pack furniture but not the instructions, do I possess a wardrobe? Similarly, if I just possess the cipher text but not the encryption key, do I really possess the data?

Hmmmm?!

Well, here is where some differences lie. And we are very shortly going to leave our faithful mahogany companion behind and concentrate on more “virtual” things. If we possess the cipher text, and do not have, or have destroyed, the encryption key, the data cannot exist. As we stated earlier, the cipher text is just random characters.

But wait!

The cipher text surely holds the best possible chance of getting the data back again? We just need to know what the secret key is. Well, let’s think about that for a second. How secret is that key? If we model the cipher text as a random string of characters, and the encryption key as a random string of characters, then something interesting happens. The key to possessing the data isn’t knowing the key. It’s knowing which key and which cipher text go together.

But that’s not fair!!
Remember our super secret word earlier? Shhhh, don’t tell anyone… ‘happy’. When we paired it with our secret choice of encryption key, ‘weird’, we came up with our cipher text, ‘efyhc’. We rested. Our secret was safe. No one would be able to get our data back without first knowing our secret choice of encryption key. Correct?

Well, as it turns out not exactly. Consider the following.

w o z d t == 23 15 26 4 20

o n j n u == 15 14 10 14 21

--

h a p p y == 8 1 16 16 25

Hold the phone!!! That’s our very secret data, right there in the open. But how? As it turns out, I wrote a small program to take two random strings of data and use one as the cipher text and one as the key. I then did the reverse of our first example, i.e. I decrypted by subtracting the key from the cipher text, adding 26 back whenever the result dropped below 1. Et voilà!! Without knowing either the key or the cipher text, I have managed to come up with the real data.

Turning this back to the real problem at hand, I also wrote a script to take the known cipher text and generate random strings of characters to use as a “key”. I quickly found my secret key, and subsequently my original data. This should be no surprise to us. As I mentioned before, neither the cipher text nor the key is secret. We already know them to be random strings of data.
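In fact, for this cipher we don’t even need to search: a “matching” key can be computed directly, because the key is just the cipher text minus whatever plaintext we want to find. A sketch, reusing the mapping from earlier:

```python
def to_num(word):
    return [ord(c) - ord('a') + 1 for c in word]

def to_word(nums):
    return ''.join(chr(n - 1 + ord('a')) for n in nums)

def forge_key(cipher, wanted):
    """For the letter-addition cipher above, derive the key that
    'decrypts' `cipher` into any plaintext we choose."""
    out = []
    for c, w in zip(to_num(cipher), to_num(wanted)):
        n = c - w
        if n < 1:
            n += 26  # wrap, mirroring the subtract-26 rule in encryption
        out.append(n)
    return to_word(out)

print(forge_key('efyhc', 'happy'))  # -> 'weird', the original key
print(forge_key('efyhc', 'smile'))  # an equally "valid" key for a different plaintext
```

Every same-length plaintext has some key that produces it from ‘efyhc’, which is exactly why the cipher text on its own tells us nothing.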

So the problem isn’t so much being able to generate the data; the astute reader will have realised by now that we could just use a random string to obtain the original data, without any kind of encryption process. No, the real problem is being able to validate the data. Without the original, we can’t be sure that our version is an accurate copy of it. So, in short, to possess the data, we need to possess the data.

Existence and possession Part 2
Now, the revelation there may actually appear not to be much of a revelation. To possess the data we must possess the data. Seems pretty obvious. Let’s just think for a second, though. What we really mean is this: the encrypted form of the data, on its own, doesn’t contain the data. It is part of a random string which, when “energised” with the “key”, produces the original data. One must possess both pieces of the puzzle to possess the original data.

With this in mind it’s interesting to think about theft a little more. If I steal an encrypted version of Apple’s flagship product, but I don’t possess the decryption key, then really I possess nothing more than a random string of data. Otherwise, the mere fact that I possessed a random number generator would be enough to make me liable to prosecution.

I’ll leave you with a final thought, though: if you possessed a true random number generator that was capable of running for infinity, and you printed everything it produced, you’d be in possession of everything that ever has been, is being and will be produced. Mind blowing 🙂

My old bike (of 4 months)

I was riding to work yesterday when… boom. I almost died. Seriously, if I had been riding on the road, I don’t think I would be here today. I thank God that I was on a cycle path at the time. So, just to share with you all: my “old” bike, of 4 months, had a little accident.

It’s not even like I thrashed this bike at all. I took good care of it; it was my work bike after all. But a little trip to the bike shop and I got it replaced with a new model. Tomorrow I’ll test it out.


It’s alive!! – Well almost…

Last night I finished off some of the more complicated parts of Ethestra, like the quantisation techniques for note length, position, velocity and pitch. The pitch area still needs some work, as I’m planning to make it support multiple chords; to do so requires pulling in some data from an old friend of mine called Tigla.

Tigla was an app I tried to write a while back to let me play guitar chords on my Nokia N810, and have them sound realistic. At heart it was just a sample player, using 6 wav files with offsets for each note, but it was also something else. I had written a fairly sophisticated chord library tool, which could take a name like Em, AM or A7 and return a list of allowed notes in that scale. Well, the idea is to reuse that with Ethestra.
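As a flavour of what such a lookup does, here’s a toy sketch of my own; this is not Tigla’s actual code, and the interval tables cover only three chord qualities:

```python
# Chromatic scale starting from A; sharps only, for simplicity.
NOTES = ['A', 'A#', 'B', 'C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#']

# Semitone offsets from the root for each chord quality.
INTERVALS = {
    '':  [0, 4, 7],        # major triad
    'm': [0, 3, 7],        # minor triad
    '7': [0, 4, 7, 10],    # dominant seventh
}

def chord_notes(name):
    """'Em' -> ['E', 'G', 'B'], 'A7' -> ['A', 'C#', 'E', 'G'], etc."""
    if len(name) > 1 and name[1] == '#':
        root, quality = name[:2], name[2:]
    else:
        root, quality = name[:1], name[1:]
    start = NOTES.index(root)
    return [NOTES[(start + i) % 12] for i in INTERVALS[quality]]

print(chord_notes('Em'))  # -> ['E', 'G', 'B']
print(chord_notes('A7'))  # -> ['A', 'C#', 'E', 'G']
```

A sequencer can then restrict generated pitches to whatever list comes back for the current chord.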

Fear not, though, here is the first rendering of Ethestra v0.65. It will be very interesting to see how v0.7 turns out 🙂

Ethestra Update

So, the project has been continuing nicely recently. I decided it would be awesome if the end user (me) could add filters to the traffic for each instrument. I only wanted a single scapy instance sniffing for traffic, so I had to implement the filter in Ethestra, as opposed to using scapy’s built-in filter system.

This left me with a problem. I had no idea how to write a parser. After thinking about it for a while, and talking to some coder friends, it became clear that if I could get the filter into the format below, I could evaluate it fairly easily.

[["ip", "==", "10.2.6.3"], "AND", ["sport", "==", "25"]]
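Once the filter is in that nested form, evaluation can be a small recursive walk. Here’s a sketch, with the packet modelled as a plain dict; the field names and supported operators are illustrative, not Ethestra’s actual set:

```python
# Comparison operators the evaluator understands (illustrative subset).
OPS = {
    '==': lambda a, b: str(a) == str(b),
    '!=': lambda a, b: str(a) != str(b),
}

def evaluate(expr, packet):
    """Recursively evaluate either a [field, op, value] leaf or a
    [sub-expr, 'AND'/'OR', sub-expr] compound against a packet dict."""
    if len(expr) == 3 and isinstance(expr[0], str):
        field, op, value = expr
        return OPS[op](packet.get(field), value)
    left, joiner, right = expr
    if joiner == 'AND':
        return evaluate(left, packet) and evaluate(right, packet)
    return evaluate(left, packet) or evaluate(right, packet)

flt = [["ip", "==", "10.2.6.3"], "AND", ["sport", "==", "25"]]
print(evaluate(flt, {"ip": "10.2.6.3", "sport": 25}))   # True
print(evaluate(flt, {"ip": "10.2.6.3", "sport": 80}))   # False
```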

This turned out to be true. With a little help from pyparsing, I can now enter filters like this.

ip == 10.2.6.3 AND sport == 25

Listing expected soon…

Taking the project one step further

Some of you will have read my post yesterday about my new pet project, now code named “Ethestra”, a combination of Ethernet and Orchestra. What!! All the good names were taken!!

I put the system on in the background whilst I worked for an entire morning, and I have to say, I felt very relaxed. It was kind of an ambient/zen type feeling. Very soothing.

So I got to thinking about how to take the project further. Is there anywhere we can take this? Of course. I have just spent the last 2 hours at my keyboard, and now have a fully functional tracker-type midi sequencer in python. It has no GUI of course. A simple drum rhythm is defined as a pattern variable, like so.


pattern = [
    (0, 0x48, 0x60, 4),
    (2, 0x48, 0x60, 4),
    (8, 0x48, 0x60, 4),

    (16, 0x4B, 0x60, 4),

    (0, 0x4E, 0x60, 4),
    (8, 0x4E, 0x60, 4),
    (16, 0x4E, 0x60, 4),
    (24, 0x4E, 0x60, 4),
    (32, 0x4E, 0x60, 4),
    (40, 0x4E, 0x60, 4),
    (48, 0x4E, 0x60, 4),
    (56, 0x4E, 0x60, 4),
]

OK, so it’s not pretty, but the ultimate goal of the project is for the network data itself to create the patterns. That being said, it can at least create something musical. A sample of the sequencer alone is here (this is not network music yet).
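For the curious, here’s a sketch of how such a pattern might be expanded into timed note-on/note-off events for a midi library. I’m assuming the tuple fields are (step, MIDI note, velocity, length in steps); the real Ethestra layout may differ:

```python
def expand(pattern, steps_per_beat=4, bpm=120):
    """Turn (step, note, velocity, length) tuples into a time-sorted
    list of (seconds, event, note, velocity) events."""
    seconds_per_step = 60.0 / bpm / steps_per_beat
    events = []
    for step, note, velocity, length in pattern:
        on_time = step * seconds_per_step
        events.append((on_time, 'note_on', note, velocity))
        events.append((on_time + length * seconds_per_step, 'note_off', note, 0))
    return sorted(events)

# A small slice of the drum pattern above.
pattern = [
    (0, 0x48, 0x60, 4),
    (2, 0x48, 0x60, 4),
    (8, 0x48, 0x60, 4),
]
for ev in expand(pattern):
    print(ev)
```

A playback loop would then simply sleep until each event’s timestamp and send the corresponding midi message.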

Listening to my network traffic

I know this is nothing new, and that people have done it before, but I wondered what it would sound like if I hooked up my network interface to my speakers. No, not quite that low level, but using scapy, an awesome python network toolkit, and python bindings for portmidi, I was able to hook the two up. I then piped the midi to ZynAddSubFX for the audio creation.

Now, I hope you’re not expecting a symphony, cos you ain’t gonna get one. What you do get is a kind of ethereal sound, with sudden flurries of activity. I apologise for the slight pops and such; I need to tweak the audio settings and run it on a faster CPU, but you get the idea. The plinky-plonky noise is www data. The low-level AAHs that you only hear occasionally are DNS calls.

About 20 seconds in, you’ll hear some drums kick in; that’s ICMP data, that is. I set a ping going to give it a rhythm. It’s all very spacey and weird. At the moment most of the instruments cycle up a very small arpeggio. One packet turns a note on, another turns it off. It starts at one note, then the next, jumps up, then jumps up again, before going back to the original position.
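The arpeggio cycling might look something like this; the offsets here are my guess at the “next note, jump up, jump up, back again” shape described, not the actual values used:

```python
from itertools import cycle

def arpeggio(base, offsets=(0, 2, 7, 12)):
    """Yield the next MIDI note in a small repeating cycle of offsets
    from a base note; each arriving packet advances one position."""
    for off in cycle(offsets):
        yield base + off

notes = arpeggio(60)  # middle C as the base note
print([next(notes) for _ in range(6)])  # -> [60, 62, 67, 72, 60, 62]
```

In the real thing, a scapy sniff callback would pull the next note from a generator like this and send the note-on/note-off pair over portmidi.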

I would be interested to hear what you think. There is also another instrument in there doing my jabber conversations. Heheheh. Enjoy.

It took about 2 hours from scratch to code it and figure out how to use scapy and portmidi. Quite fun. Once the code is in a better state, I’ll probably even post it up.

The file is here

LOST Finale : A review from a die hard fan

So, I watched the finale of LOST last night. Did it answer everything? Most certainly not. Did it answer enough to pass as an ending? Probably. It seems to me that the writers decided, as many people have stated, that there simply could not be a logical explanation for everything. Bear in mind that logical does not have to mean credible. We ended up with a kind of supernatural ending, which in a way I was expecting anyway.

There were so many aspects of the show that were mysterious, so many ‘what the heck’ moments, but in the end most of these were swept under the carpet and we were left with a rather limp ending, which in all honesty could have been tacked onto the end of any season and still made sense. There was literally no need to introduce the numbers, or time travel, or any of the other wacky and weird things which LOST was known for. Of course they enhanced the journey, but that wasn’t the point. The writers seemed to be egging us on, making us think that in the end all of these totally barmy ideas would be boiled down into one single “ah-ha” moment.

It was resolutions like “You became a mother”, which Jacob tells us was the reason for Kate’s name being scratched off the wall, so simple and yet so right, that made me love the show. Little subtle hints like this were abnormally absent. There were no real answers in the finale. When they brought in the Daniel Faraday storyline, with the experiments and physics, I became a little more excited. Finally there was a glimmer of hope that the whole premise of the show could be explained in some way with a scientific background.

Alas, none of this was explained. It felt as if the ending could have been applied to the end of any of the seasons and the show would have been completed just fine. It was like a failsafe, seemingly conceived from the beginning, which rendered everything that had happened in between moot. Things just didn’t matter. The Dharma Initiative didn’t matter. The numbers didn’t matter, though there were some hints to their origin in some extra material. Shoddy work, in my opinion; it would have been much better to include the origins of those numbers in the main show.

Everyone knew there was never going to be a full explanation for everything. We had to take it for granted that the electromagnetic energy had strange effects (remember the “box”, people?), but at least give us something as a parting gift. Something which showed us that everything had a purpose. Something that explained some of the origins that were made so much of during the seasons and then faded away into the ether in the finale. From a spiritual point of view, yeah, they probably didn’t matter. The journey is what was important, but for all those people who won’t or can’t feel that way about the characters, it felt unfair to leave them with nothing more than a bad taste in their mouths at the end of this 6-year-long journey.

Shortly before the end, my wife and I were discussing possible endings, and I have to say I think she came up with a much better one than the one we were given. Though it may have been slightly more predictable (well, we thought of it, didn’t we?), it seemed more LOST-like. There would ultimately be some sort of struggle at the end, as we saw, but then all the main characters would flee the island, leaving Jack and Locke sitting together on a beach, in much the same way that we saw Jacob and Smokey earlier in the season. It would have been fun to supplement this with a plane flying overhead or a ship wrecking on the rocks, and the pair looking at each other. It would have lent itself to the cyclic nature of the show, which seemed so prevalent.

Overall, the ending was acceptable, but by no means good, or even great. It’s such a shame that the writers didn’t take the opportunity to really go all out on this and give us an ending we’d never forget (for a good reason). We were quite keen to buy all the seasons on DVD once the finale was over. Now, I don’t think we’ll bother. Disappointing, people, very disappointing.

Gaming : (N)othing new (AT) (AL)l??

Have Microsoft broken all the boundaries? As I was perusing the net the other day, I came across a video on the BBC website showing an application of Project Natal that Microsoft had been demonstrating at E3. The introduction to the video claimed that this was something pretty special, and I’ll have to be honest, at first glance it certainly did seem a little too good. I thought it would be interesting to take a look at the video and analyse it a little. For those of you who haven’t seen it, or indeed can’t, I’ll give a short text description here.

The video starts with a woman walking up to a screen and greeting a small child who is playing on a swing. He walks over and greets her back. They then enter a discussion where the woman, Claire, questions the boy, Milo, as to whether he has done his homework or not. The boy changes emotion, putting his head down and walking with hunched shoulders, not looking at Claire at all. The narrator points this out and describes a technology whereby Milo can recognise Claire’s emotions and vice versa. Interesting.

As we continue, Claire offers to help Milo with his homework. He throws her a pair of goggles, which obviously can’t permeate through the screen into the real world, but Claire stoops to pick up the virtual goggles. He tells her to put the glasses on, and she uses her hands to make goggle-like shapes in front of her eyes. Milo acknowledges this, and the camera then shifts to look into a pool of water, where Claire is now able to interact, waving her hands in front of the screen to make small waves in the water. After this she decides to help Milo and draws him an orange fish on a piece of paper. She shows this to a device above the screen, and Milo reaches up and grabs what appears to be a copy of the drawing from above the screen. We hear him exclaim that it is orange shortly before the video finishes.

Clever stuff, I hear you cry. Well, yes and no; I feel that in some sense the video may be misrepresenting what is actually going on in front of our eyes. Now don’t get me wrong, the Natal framework certainly looks impressive, but I wanted to take a look at current technologies and see whether there is actually anything new in this at all.

First of all we have facial recognition: Milo clearly recognises Claire and responds to her by name. Though facial recognition hasn’t been perfected, many machines are able to tell the difference between several faces. Head tracking and face tracking are things that even digital cameras can do nowadays, so this doesn’t surprise me. To be honest, let’s look at the market for this framework. It’s largely going to be used for home entertainment. Owing to that fact, the number of faces it has to differentiate between is likely to be small, often consisting of two adults of differing gender along with two children separated in age by a few years. I’ll admit I’m stereotyping a little here, but it’s nothing to be concerned about; most families are going to have similar differentiations between the various occupants.

Moving on from this we have the voice recognition. Voice recognition hasn’t received a huge technological boost of late, but it’s still good enough for recognising a few keywords. Extending this to the Natal framework, it’s hard to see whether the conversation is free-form or scripted. Listening to the narrator speak about the project, and watching a few things on the screen, it concerns me that the video is little more than a glorified script. What makes me say this? The fact that the narrator explains that every time the pair of goggles is thrown at the interactee, they stoop down to pick them up. This seems to indicate that events are not at all free-flowing and still rely on a large amount of pre-scripted effort. This is further confirmed by the faint but still visible on-screen symbol showing how to make the goggles gesture, and again at the beginning of the demonstration, where it appears Claire has been prompted to wave to Milo. It seems the Natal system is driven by gestures and symbols. What did intrigue me is that as Milo skips off to the pond, he mentions in conversation, “I don’t know until I try, do I?” This seemed a rather out-of-the-blue sentence and could indicate more realism in the whole system, or simply a string of random phrases that Milo may utter after discussing homework.

The emotional state of Milo is something which is touted quite heavily by the narrator in this video. He claims that Milo is able to recognise emotions in the interactee and is also able to exhibit emotions back. The second claim is a little easier to stomach. It’s entirely possible to put modifiers on the motion sequences to make them look happy or sad; dropping the head and slouching forward is nothing special. The former claim is more difficult to stomach. Just how can Milo recognise emotions from the interactee? In the video we do not actually see any evidence of this, but it could possibly be achieved by monitoring the person’s own stance and features of their voice. Milo’s voice does indeed seem to change with his emotion, varying considerably depending on his “emotion”. This could be achieved quite easily by having a number of responses, happy, sad or surprised, selected on the basis of keywords from the voice recognition and emotion analysis from stance and possibly face.

The next subject is one which, unless the system is really limited, I can’t fully explain. The synthesis of speech is actually really good. Along with speech recognition, this appears to be an area which has been lacking in technological development in recent years. It could be that the demonstration has pre-scripted lines which Milo can speak, or it could be that the words are generated on the fly. The Natal sensor is apparently equipped with a multi-array microphone, which enables it to do acoustic source localisation and noise suppression and could aid the speech recognition, but the speech synthesis would probably be handled by software on the console.

Next comes the interaction with water. To my mind, this is the easiest portion of the demonstration. There are a few nice touches, but again there is nothing ground-breaking here. The sensor in Natal is apparently capable of doing 3D full-body motion capture of up to 4 people. Taking Claire’s movements and making her ripple the water really is child’s play. It was, however, refreshing to see her reflection in the water. Presumably the RGB camera in the sensor is used to map video onto a plane, which is then “rippled”. Though not technically impressive, this was honestly one of my favourite parts of the demonstration video. The camera is also used to take a quick photo when Claire draws a picture of a fish for Milo. Though we hear Milo exclaim that it’s orange, the video ends before we can see whether he recognises it as a fish or not. Assuming that Milo is expecting to see a certain set of shapes, it isn’t beyond the realms of possibility for the software to pick out rudimentary shapes from the drawing and convert them for Milo to process.

Some of you reading this who have watched the video may be thinking that I’m being a little harsh and that the video was pretty amazing. I’m not denying that the video was impressive. However, after my first watch I decided that I wanted to dig a little deeper, and not take everything at face value. I wanted to see whether Microsoft were bringing anything ground-breaking to the market. In my personal opinion the technologies behind this are nothing new at all. What Natal does appear to bring is a way to amalgamate all of these technologies into a single package. If the API behind this is as good as the demonstration video, then it will be very interesting to see what the XBox 360 has to offer once Natal is released. To be honest, it is all going to hinge on what Microsoft do with the technology. Having a great technical demo is one thing, but being able to turn that into an immersive gaming experience is a completely different thing altogether. After all, we all have virtual reality now, don’t we? Oh… yeah… what did happen to that?

Humour: The wonderful wizard of letter writing

Few could argue that our lives haven’t been bettered by the introduction of our favourite pointy-hatted friend, the Wizard. So let’s take a look at the world without the virtual sorcerer.

Dave was sat at his desk. He’d been mulling over the problem for a while now, but he just couldn’t quite get it right. Top? Bottom? It just didn’t make sense. The middle would make it look awful. Eventually after much huffing and puffing he sat bolt upright and called across the office. “Wizard!! Oi Wizard.” Nothing. It was time for something a little more drastic. Dave flung a stapler over a partition and shouted, “OI…MR POINTY.” A rather strangely dressed man appeared on a wheely chair and hurled himself towards Dave’s desk. He spun the chair as he went, hoping to impress or at least announce his arrival.

“Tada….I’m here….What can I do for you?” The man looked over at Dave’s desk and exclaimed with an over-emphasised amount of joy, “It looks to me like you’re trying to write a letter.” Dave nodded grimly; he hated himself for asking the idiot over, but he really was stuck.
“So what seems to be the problem bub?” asked the Sorcerer.
Dave took hold of the letter opener on his desk and pointed it towards the ‘wizard’. “Call me bub again…..and….” he stumbled whilst he fought for the right words….”I’ll cut the point off that bloody hat of yours.” The Wizard looked stunned and slowly but delicately took off his black pointed hat and hid it behind his back. Dave sat back down at his desk and the wizard moved towards him…wheeling the chair slowly. “I’m just having trouble with one part,” said Dave. “I can’t for the life of me remember where the signature goes.”

The wizard suddenly flung himself back in his chair and roared with laughter. “Now that’s something I can help you with me-laddo,” he exclaimed. The letter opener was once again raised and the wizard’s eyes widened slightly. After a short stab in the air by the angry office worker, the blunt blade was once again lowered. “Right, let’s have a look at what we have so far.” Dave reluctantly gave the sheet of paper to Wizard. Wizard started looking over it, quietly humming a happy little tune to himself. Dave started slowly and rhythmically banging his head on the desk. Wizard obviously couldn’t hear himself so he hummed a little louder, and Dave combined the head banging with the addition of fingers in his ears.

Wizard quickly took a glance around, made sure Dave wasn’t looking, and then gingerly pulled the front of his trousers away from his stomach. His other hand swiftly picked up Dave’s letter and stuffed it down his pants. He found a blank piece of paper and tapped Dave on the shoulder. “Shall we begin?” he said. Dave seemed a little shocked, but then nothing about the strange little man surprised him anymore.
“What do you mean, begin?” he asked. The little man gave a short strange little smile and then continued, “You’ve enlisted the help of a wizard now, we must start everything with a blank slate.” Dave looked angry. It wasn’t surprising, even though the letter wasn’t long it had taken him a fair amount of time to compose it.
“But…but…what happened to my letter?” He asked.
“Destroyed,” said Wizard.
“WHAT!!” Shouted Dave. The little man was starting to really get on his nerves. The wizard tried to reassure him.
“Don’t worry, we’ll start it all over again,” he chuckled to himself, “and get it right this time.” Dave was tired. It had taken all morning to write that letter and he wasn’t about to write it all over again by himself. “So, let’s see, first of all we need the letter body.”
Dave frowned. “You mean the recipient?”
“No,” said Wizard. “We start with the message body.”
Dave looked at him in disbelief. “What kinda screwed up way of writing a letter is that?” Unfazed, the wizard just replied joyfully, “I don’t believe you’ve been to Wizard School……Nope…..Well I have.”

The two of them worked for the next 15 minutes, Dave with his head on the desk, and Wizard writing feverishly. Dave managed a short glance at the paper. Then he suddenly stood up, pointed to the page and shouted, “What the heck is that?”
“Why that would be the letter ‘a’ good Sir,” replied Wizard.
“That’s a 9!!” spat Dave. Wizard tried to smudge it out and start the letter again.
“Oh man you’re making it worse!!! I swear if this takes much longer I’m going to give you such a beating.” shouted Dave.

When they’d finally finished the body of the letter Wizard looked over at Dave and asked, “How do you want to sign off?”
Dave scratched his chin and replied, “With the warmest regards, sounds about right.”
Wizard’s face dropped. “Sorry,” he replied. “You can choose from either ‘With Love’, ‘Yours Sincerely’ or ‘Yours Faithfully’.” Dave slammed his hands down on the desk. This was not going well. It had taken just as long to get the stupid wizard involved as it had for him to do half the letter himself.
“Why can’t I choose what I want to write?” he asked in desperation.
Wizard patted him on the back as he stood there panting. “You’ve never written a letter using the wizard before have you?” Dave slumped in the chair and just replied “Yours Faithfully.”

“Your name?” asked Wizard.
“Don’t be stupid, dimwit!!” was all Dave could reply.

“I’ll just put David,” said the magician. Dave sat up again and then slammed his hand down on the paper. He looked tired now. His eyes were wired and his hair a complete mess. The ironic thing was that the Wizard was supposed to have been a quick end to a long and boring job. Dave extended the blunt blade in threat once again.
Through gritted teeth David hissed, “I’ve written this letter twice now, thanks to you. I am signing it myself.”
Wizard began to protest, “I’m afraid Section 3.2 of the wizard code states that no user may input anything into the document itself until the Wizard has completed the task.” The head banging commenced once more, accompanied this time by fists too.

“Now”, said Wizard, finishing off signing the letter from D-a-y-v-e-d. “Who is the delightful letter going to?”

Dave responded, “Jean Kiln, Michael Simmons, Marty Beanham….”
Wizard held up his hand. “Woah, woah, woah….” he laughed. “You said more than one name.”
“Yes,” replied Dave, once again dumbfounded at the weird little man. “That’s because I want to send it to more than one person.”
“Oh, I’m afraid you can’t do that with a letter”, said Wizard.
“Why the hell NOT??” Dave’s forehead was throbbing now.
“Well,” started Wizard, “it’s not in the spirit of a letter. I think what you’re looking for is more like spam. That requires a level 3 wizard who’s studied in the ancient art of Advanced Correspondence.”

Dave sat there for a few minutes. His brain wasn’t quite working. He couldn’t understand what this meant. What had he done to deserve this? “So what are my options?” He said, finally breaking the silence.
“Well, you could start the letter again?” replied Wizard, getting up and jostling his trousers.
“Can I have that piece of paper,” Dave started, “or is that a stupid question?” Wizard did the weird smile again. “Sorry,” he replied, “official Wizard stationery.”
Wizard tried to subtly insert the sheet into his pants, but Dave noticed him this time. “You’re a ….. you’re a real weirdo!!!” he shouted. “What the heck are you doing now???”
The little man started wheeling away, but he replied nonetheless. “My personal shredder is broken, so under Wizard rules I have to put all data corresponding to your request somewhere you will never be able to see it again.” He paused, stopped wheeling and then added, “I could have chosen to burn it, I guess.”
“I HOPE YOU GET PAPERCUTS”

The Rise and Fall of 3D Films – According to cbx33

Maybe I need to get out more but I sometimes find myself musing over the most stupid things. Right now? It’s 3D films. Before I dive into the meat of the article, I thought I’d take a few minutes to give you some background as to my expertise in this area. I would like to tell you that I attended film school, have a degree in film studies and work in the industry as a producer and director of Hollywood movies. No, you don’t understand, I really really would like to tell you that. Unfortunately the truth is far from it. I like films. About one in forty that I watch, I see at the cinema; the rest I just see on a standard TV.

A few weeks ago, could be a month or more for all I know, (time recently has become something of a fleeting beast), we decided to go see our first 3D film. We picked something…..different, something we expected would make the most out of 3D, seeing as much of it was fabricated by those tiny little miracles we call CPUs, we saw…..Alice In Wonderland. Hoping that the film would be a perfect combination of 3D-ness, wit and humour, we left, tails between our legs, licking our wounds, as the battle for cinema supremacy was ultimately lost in screen 4, row U.

As I think back on it, it had promised to be a rather stupendous outing and as we queued for and received our swanky looking glasses, I couldn’t help but feel a little like the first time I ever fired up a BluRay disc but we’ll leave that experience for another time gentle reader. The glasses weren’t particularly comfortable, as can be expected when they had been manufactured with all the right criteria in mind……cheap, durable and recyclable, but they weren’t too uncomfortable either. Kind of like sitting with your back on a radiator. You know it’s gonna make you sick, you just don’t know how long it’ll take.

Perhaps a little overexcited I sat down in my seat and immediately put on my glasses. My wife leaned over to me and whispered, “I’m sure all the trailers and adverts won’t be in 3D as well.” I continued looking at the glowing screen in front of me, unashamed and resolute and was positively brimming over with smug-ability when the first advert appeared on screen in glorious 3D. It was actually for SkyTV. It meandered through a few shots of their new 3D service, (available in autumn) and ended with their logo slowly leaving the screen and hovering about 6 feet in front of me.

At that point I had to try so very very hard not to reach out and touch it. The geek in me knew that doing so would ruin the illusion and would forever mar my perception of 3D films, but the kid inside screamed “It’s floating, it’s floating dammit, and it’s all yours……quick grab it…….go on touch it.” I resisted and instead used some of my super-smug to turn to my beloved and say, in my most sarcastic of tones, “I think you’re right, they wouldn’t show 3D trailers before a 3D film would they…….that would be stupid.” No, it wasn’t big, or clever to say what I did. Was the extra boost of smug I received worth it…….No. Not at all. I am sorry darling!

As the film started, the effect of 3D was presented in all its glory. I wanted to ooooh. I wanted to ahhhh. I wanted to run into the office the very next day and shout “I’m in love.” However, suffice to say it wasn’t totally what I was expecting. Despite the hype of a film being released “ONLY in 3D” I can’t actually see a reason for me wanting to see another. Talking with several other people about the same subject, I get the feeling that I’m not alone in this.

First, the good. It’s a nice gimmick. Being able to see the world they are trying to portray with an extra dimension does indeed make it feel somewhat “special”. However the gimmick seems to wear off around 15 minutes into the feature. The bad overwhelmingly outweighs the good. Many people I’ve spoken to have cited a feeling of tension and headaches whilst watching, and it’s not surprising when you consider what’s actually going on. Just because the film is presented in 3D doesn’t mean that it really is 3D. What do I mean by this? Well, the film itself is still just two 2D images, one for each eye, which your brain superimposes over each other in order to give the illusion of a 3D world. The effect works great, but only if you don’t change your focus. So your brain is constantly fighting between _wanting_ to refocus in order to look around in a 3D world, because that’s what it’s used to, and _struggling_ to keep focus on the 2D plane to maintain the illusion.
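To put some rough numbers on the illusion: where a point appears to float follows from similar triangles between your two eyes and the pair of images on the screen. Here’s a back-of-the-envelope sketch; the viewing distance and disparities are entirely made up, and the function is my own illustration, not any industry formula:

```python
def perceived_depth(screen_distance_m, disparity_m, eye_separation_m=0.065):
    """Similar-triangles estimate of how far away a stereo image pair appears.

    disparity_m > 0: crossed disparity (object appears in front of the screen);
    disparity_m = 0: object appears on the screen plane itself.
    """
    return (eye_separation_m * screen_distance_m) / (eye_separation_m + disparity_m)

# Zero disparity sits on the screen plane...
on_screen = perceived_depth(15.0, 0.0)   # 15.0 m away
# ...while a modest crossed disparity pulls the image well in front of it.
in_front = perceived_depth(15.0, 0.45)   # ≈ 1.9 m away
```

The catch, and the source of the headaches: however close the geometry places the image, your eyes still have to keep their focus at the screen distance, so the disparity and the focal cue permanently disagree.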

Herein lies another caveat. I don’t like being restricted. I have been given a glorious 3D world to wander around inside, but the focus for a particular frame has already been decided for me. I want to focus on that blade of grass in the corner, the one that is being presented as existing a mere 5 feet from my eyes, but I am not allowed. Don’t get me wrong, it’s not a limitation imposed by the filmmakers, but by the technology. The films are not _shot_ in 3D; they are merely recorded with two identical cameras, capturing two 2D planes. It is an illusion, and as such it has limitations, but in this situation, those limitations annoy me.

As we all sat there, wide eyed, fighting the fierce focus fatigue, I noticed my wife lifting her glasses off her nose and trying to watch the film _a capella_. I turned to her and asked if she was OK, hoping she’d forgotten my smugness earlier. “I’m bored of the 3D,” she said, “I wanted to see if I could watch it without.” For me, that just summed it up. Couple that with the fact that the viewport we are presented with inherently has a self-destructive effect, and you have a recipe for an unhappy little camper. Objects at the edge of the frame are often things like grass or trees, things which the filmmakers use to enhance the 3D effect. This does add real depth, but the problem is the depth is instantly destroyed when the object hits the edge of the frame. As a result, my poor little brain can’t seem to distinguish whether the grass is really 5 feet in front of me, or whether it’s 50-60 feet away and stuck to the edge of the cinema screen. Consequently, it does something in the middle: it looks fuzzy, it’s green, and my brain just says grass.

As the movie drew to a close I must admit I was a little relieved. It certainly wasn’t what it had promised to be. I wasn’t drawn into another world, I wasn’t sitting on the edge of my seat while weird and wonderful creatures literally came out of the screen at me. I was sitting watching a film which had simply been enhanced with a “special” effect. As the final credits rolled up off the top of the screen, yes I am one of those annoying customers who waits right until the end of the film, all that kept me sitting there was the possibility of maybe, just maybe seeing the SkyTV logo again. This time it’s mine.