Emily Evans | March 29th 2012 | @Innovation
Technology Frontiers made me think about the difference between human beings and the technology that we build. On the morning of the first day I listened to Vint Cerf and Bran Ferren in conversation. They talked about artificial intelligence, including the difference between perceived intelligence and conscious experience (the Turing test).
For example, Vint mentioned that Google needs to get better at figuring out the context of a question. Ask both Google and the person standing next to you “how do I get to the nearest pub?” and, because you get an answer from both of them, Google and your neighbour might appear to have done the same thing. In fact, they do very different things. To answer the question, both of them first need to work out the context – where you are. The way that a computer infers context is very different to how a brain does it. A computer might use satellites in this example, whereas your neighbour's brain would do something far more complicated, synthesising a huge amount of sensory information and referring to abstract concepts (for example the concepts of a pub and of a person asking about a pub).
On stage, Vint talked about Google’s self-driving cars (yes). They’re another example of how two completely different kinds of intelligence can be employed to do the same thing. The cars must work out the difference between a roadside pedestrian, who might cross the road, and a tree, which won’t. To do this they access a bank of data on the internet about that particular bit of roadside and determine whether there is a tree in that spot. A human driver looks at the roadside and uses their abstract concept of a tree to identify whether or not this is one. It’s actually incredibly tricky to get a computer to do this. Tom Standage told me about a computer that was trained to spot tanks. The trainers spent ages teaching this computer about the various features of tanks, then showing it photos that included them and photos that didn’t. It seemed to be picking them out with impressive accuracy. Then they noticed that all the pictures with tanks had been taken on cloudy days, and all those without tanks on sunny days. The computer was just distinguishing between different shades of grey. What feels simple to us as human beings is spectacularly difficult to replicate in a computer.
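The tank story describes what machine-learning people call a spurious correlation: the model latches onto an accidental feature of the training data (here, overall brightness) rather than the thing you meant it to learn. Here is a minimal toy sketch of that failure mode — the images, brightness values, and "classifier" are all invented for illustration, not anything from the original anecdote:

```python
import random

random.seed(0)

# Hypothetical toy "images": each is just a list of pixel brightness values.
# Tank photos were all taken on cloudy days (dark); empty scenes on sunny
# days (bright) -- the accidental correlation from the anecdote.
def fake_image(mean_brightness):
    return [min(255, max(0, random.gauss(mean_brightness, 10)))
            for _ in range(64)]

tank_photos = [fake_image(80) for _ in range(50)]    # cloudy -> dark
empty_photos = [fake_image(170) for _ in range(50)]  # sunny -> bright

def brightness(img):
    return sum(img) / len(img)

# The "tank detector" the trainers thought had learned tank shapes is,
# in effect, just a brightness threshold fitted to the training data.
threshold = (sum(map(brightness, tank_photos)) / 50 +
             sum(map(brightness, empty_photos)) / 50) / 2

def predict_tank(img):
    return brightness(img) < threshold

# It looks perfect on photos like the ones it was trained on...
train_acc = (sum(predict_tank(i) for i in tank_photos) +
             sum(not predict_tank(i) for i in empty_photos)) / 100
print(train_acc)

# ...but a tank photographed on a sunny day fools it completely.
sunny_tank = fake_image(170)
print(predict_tank(sunny_tank))
```

The classifier scores perfectly on the original photo sets, yet calls a bright "tank" image tank-free, because brightness, not tank-ness, is all it ever learned.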
Hugh Herr pointed out that our bodies are in some respects so much more complicated than machines, but in others you could say they are simpler. He explained that his amazing prostheses allow him to move naturally because they were built to work in the same way as his biological legs. Human legs apparently work in a very non-intuitive way – if we didn’t have our own bodies as examples then nobody would think to build a leg like that. But emulating his seemingly complex biological legs turned out to be a beautifully simple way to create his agile prostheses.
Hugh’s talk was about augmenting our bodies and minds with technology. He talked not just about the difference between humans and machines but about how machines change us. He asked what it means to be human, and as technology continues to evolve the answer to that question is likely to keep changing.