... by Matt Jones and Chris Heathcote from Nokia. Unfortunately I couldn't capture the best bit of this presentation, the moment when Matt and Chris held out their phones and touched them - blip - and digitally swapped business cards. Gorgeous.
In the beginning was paper tape: completely opaque. Then came the command line: an arrow, a cursor, and you had to remember the incantations. This led to text-based programming. Then the WIMP: windows through a window. Very abstract stuff, lots of guesswork – “moving this moves that”. Programming, however, remained in the command line. Then we started losing things. The moment you could overlap a window, you had no spatial memory of where everything was. There are various limited solutions to this – the task bar, apps bar, Exposé..
But the world is not a computer. We need new ways of controlling and understanding our digital world - you can’t use a mouse with a mobile phone, or tap on a keyboard at a bus stop. Also digital is really hard. Pick up a spanner and you know which end does what: this is called ‘affordance’ in user interaction speak. Digital interactions don’t have affordances ... the Apple window bar has three traffic lights, and you have no idea what they do until you click them.
Everyone says ubiquitous computing is 20 years out, no matter when you say it. I say it’s here already, it’s just not evenly distributed. Computers are everywhere, [in the form of mobile phones] and they are starting to talk to each other – pity we can’t talk to them. So what can we do?
We can play to our human strengths instead of computer strengths, to user models instead of system models. When we’re out in the world, when we’re out there in that big messy world where people get told off for jaywalking as I did yesterday, we have to play to strengths that we use to deal with the physical world. We are situated, meaning we are somewhere – here, in the world. We are embodied. Our senses do not live in abstract bubble space. We have opposable thumbs. We’re great at pulling levers and pressing buttons and swinging from trees. We can touch. The thing we’re working on at Nokia is to use much more touch-based interactions to deal with this digital world. So using these very direct interfaces.. pulling a lever and seeing something happen is one of the most satisfying things in the world..
So this is Paul Dourish [pictures]. He works in San Diego. Paul wrote a book called Where The Action Is. This puts forward a case for using this embodied approach – being in the world – to deal with complicated digital interfaces. This is where the action is:
[image of guy at home doing one-armed breakdancing handstands on his DDR mat]
Dance Dance Revolution is the cutting edge of interaction at the moment. This is not quite what we’re proposing, but it uses this approach of pushing the interactions back out into the real world.. there’s a term, extelligence, for knowledge that lives out in the world rather than in your head, knowledge you can read straight off the world itself.. extelligent interactions are ones where the knowledge is in the world. [Did I get that right?] You can only push a door one way. You can only use the spanner one way. It saves you the effort of having to think these things through, unless it’s poorly designed, which is what Don Norman was complaining about.
Social legibility: if I see you doing something in the world, I can copy you.
So some things we talked a little about last year: one thing that’s in short supply is computing cycles in our heads. We have to use our attention wisely. Important info has to bubble up. The other caveat to tangible embodied computing: it’s tiring! If you have to touch, reach etc, it’s exhausting. If you’ve played six hours of WoW or six hours of DDR, the difference is pretty apparent to anybody. We have to consider the physical limits we have.
So what’s out there now?
This is all happening now. It isn’t done in research places, you can buy it all:
Tablets. Tablet PC. Audiopad. Jazz Mutant.
Smart furniture: yes, it’s actually happening. The Drift Table. Sensitive Object (put one or two mics on a surface and write an interface onto that surface – you can build your own interfaces for whatever you like).
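Sensitive Object’s actual method isn’t described here, but one common way to turn mics-on-a-surface into an interface is time-difference-of-arrival: you locate a tap by comparing when its vibration reaches each microphone. A toy one-dimensional sketch of that idea (the wave speed and geometry are invented for illustration, not Sensitive Object’s algorithm):

```python
# Hypothetical sketch: locate a tap on a strip of tabletop between
# two contact mics from the difference in vibration arrival times.

SPEED = 1000.0  # assumed wave speed in the surface material, m/s

def tap_position(t_left: float, t_right: float, mic_distance: float) -> float:
    """Position of the tap, in metres from the left mic, on a 1-D strip."""
    # The tap is closer to whichever mic hears it first; half the
    # path-length difference shifts it away from the midpoint.
    delta = (t_right - t_left) * SPEED      # path-length difference, metres
    return (mic_distance - delta) / 2.0

# A tap 0.3 m from the left mic on a 1 m strip reaches the left mic
# after 0.0003 s and the right mic (0.7 m away) after 0.0007 s.
x = tap_position(t_left=0.0003, t_right=0.0007, mic_distance=1.0)
```

With more microphones the same trick extends to two dimensions, which is what makes an arbitrary surface usable as an input device.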
All-seeing eyes: EyeToy. They can now map the whole human form, so the games are going to get a lot richer in their interaction. Digital pens (writing direct to a database, faster form-filling). AR. Human PacMan: people run around the uni with a computer backpack on and they can see these dots and ghosts, while everyone else thinks they’re going mad..
Passive information display: the internet toaster. Ambient devices: let you see stock prices etc. visually. Ambient devices mainly leverage the work of Ishii-san at MIT… you’re able to pay attention to something before you’re able to think about paying attention to it. The stuff we can process in our periphery. The sense of being able to take in info without focusing on it.
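As a toy illustration of the ambient-display idea, a single number can be collapsed into a lamp colour that the periphery picks up without focal attention. The thresholds and colours below are invented for illustration, not any shipping Ambient device’s mapping:

```python
# Invented sketch: map a stock index's percentage change to a
# glanceable lamp colour for a peripheral, ambient display.

def ambient_colour(change_pct: float) -> tuple[int, int, int]:
    """Return an (R, G, B) lamp colour for a daily percentage change."""
    if change_pct <= -1.0:
        return (255, 0, 0)      # sharply down: red
    if change_pct < 1.0:
        return (255, 191, 0)    # roughly flat: amber
    return (0, 255, 0)          # sharply up: green
```

The point is the compression: the display deliberately throws away detail so the remaining signal can be absorbed without thinking about it.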
Smart Objects: haptics, force feedback. 2D barcodes. Natalie Jeremijenko used force feedback to let people understand the stock market crash. She built a robot donkey that you could ride, that explained the movements of the stock market. She made economists ride the data.
So – we’re working on NFC, near-field communications. Initially with Sony and Philips, and now Microsoft and Samsung and Logitech are on board. It’s touch technology. It works at about 5 cm away from the phone. It’s hard to make this phone talk to this computer, but if I could TOUCH this phone to the computer and it knew what it meant, it’d be a lot easier.
We’re putting NFC tags - tags hold 1 kilobyte of data, like a URL or your address or something - in covers of phones at the moment. Being able to touch something to something else gives you something that almost never happens: you can cut the number of user interactions by an order of magnitude. 10 clicks down to one touch. This is a radical improvement.
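One kilobyte goes a long way because NFC tags typically store data as compact NDEF records, in which common URL prefixes are abbreviated to a single byte. As a hedged sketch (the layout follows the NFC Forum URI record type as I understand it; the URL itself is invented), here is how small a URL record can be:

```python
# Illustrative sketch of how a 1 KB NFC tag easily holds a URL:
# an NDEF URI record abbreviates common prefixes to one byte.

URI_PREFIXES = {  # subset of the URI RTD abbreviation table
    0x01: "http://www.",
    0x02: "https://www.",
    0x03: "http://",
    0x04: "https://",
}

def encode_uri_record(uri: str) -> bytes:
    """Build a single short NDEF URI record (MB=ME=SR=1, TNF=well-known)."""
    code, rest = 0x00, uri  # 0x00 means "no abbreviation"
    for c, prefix in URI_PREFIXES.items():
        if uri.startswith(prefix):
            code, rest = c, uri[len(prefix):]
            break
    payload = bytes([code]) + rest.encode("utf-8")
    header = bytes([0xD1,           # flags: MB|ME|SR, TNF = 0x01 (well-known)
                    0x01,           # type length: 1 byte ("U")
                    len(payload)])  # payload length (short record, < 256)
    return header + b"U" + payload

record = encode_uri_record("http://www.example.com")
assert len(record) <= 1024  # comfortably inside a 1 KB tag
```

The whole URL fits in 16 bytes here, which is why a contact card or address also fits on a tag with room to spare.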
[Chris touches his phone to Matt's tagged conference name card & the phone reads the data from the tag]
You can put these tags wherever you like. This is not new technology though, this is more or less the same technology as you find in corporate ID cards that open doors ... and what we think is slightly more exciting is.. I can pretend to be a tag. I can take some info from my phone.. [touches matt’s] – and he now has that contact info.
So anyone who has ever sent anyone contact data or a picture, you know how many clicks this is usually. But here.. you just touch this phone to that phone and it's done … oh and lights blink! One thing we’re really happy about is.. consumer electronics companies don’t have a particularly great record in doing things to allow users to create. NFC has - in the standard - the ability for users to create tags. You can write into the world. We’re really happy that end users are going to be able to both use this stuff and write on tags too.
So, as we said, it’s almost commercialised, but .. this is a conference full of hackers. So what can we do with this stuff? These days it’s really really easy to glue stuff together. Computers make it easy to take inputs and manipulate, push stuff out there. Computers are everywhere. Physical computing – gameboys, mobile phones – all hackable. There are far more inputs these days. Matt Webb astutely noticed that powerbooks have accelerometers built in...
There’s a lot more outputs these days. Dotdotdot is a display for your phone that can be put on a backpack. It lets other people know who’s just called you. It’s cheap, uses Bluetooth, and you can hack this stuff up. I think one of the biggest changes is AirPort Express. The music follows you when you walk around the house.. if Apple were to just let people have programmable access to speakers in iTunes, we could change this today.
Q: when you were doing the touching demo, that’s obviously a great step forward to transfer files. How much setup work is necessary to specify which data you’re transferring?
Matt: not very much. Whatever’s in focus, you transfer. Or you dive down, select "give", it puts it in the DMZ and over it goes.