Note: The 3rd of December 2011 marked the second anniversary of The Box Move, which we forgot (again). In the last year, we released the first issue of our publication and, as the original team graduates, have handed over to a fresh team that is working on the second issue. Yay us!
In 1974, Thomas Nagel put before us the question: ‘What is it like to be a bat?’ He held that no matter how well we understood the nature of bats, how they behaved, how they interacted with their environment and how they operated, we could never really know what it is like to be a bat without experiencing it.
The Kinect device opens up this exact possibility, among dozens of others. Kinect allows us to see the world purely in terms of differences in depth. In a manner of speaking, Nagel’s question is answered by Kinect, since it gives us a view of the world that is very, very similar to a bat’s conception of it.
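To make "seeing in terms of depth" concrete, here is a minimal sketch using a made-up 8x8 depth frame (real Kinect frames are 640x480 arrays of distances in millimetres); the frame values and the 2 m threshold are assumptions for illustration, not Kinect API calls. The point is that with depth data, separating a nearby object from the background is a single comparison:

```python
import numpy as np

# Hypothetical depth frame (millimetres): a flat background at 4 m
# with a nearer object at 1.2 m. A real sensor would fill this array
# for us; here we fabricate it to show the idea.
frame = np.full((8, 8), 4000)
frame[2:6, 3:5] = 1200  # the nearer object

# Segment everything closer than 2 m. With colour images this would
# require full object recognition; with depth it is one threshold.
near_mask = frame < 2000
print(near_mask.sum())         # how many pixels belong to the near object
print(frame[near_mask].min())  # distance to the nearest surface, in mm
```

This is why depth sensing is such a shortcut for robots: "is something in front of me, and how far away is it?" falls out of the data directly.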
So, now you may ask how this really matters from the perspective of everyday people. How does it benefit us? Well, here’s how: ever since we began thinking of creating robots that would be useful to us, we have been thwarted by the problem of how these robots would recognize the objects in their surroundings. A task like moving from the ground floor to the top floor of my school’s building would take a very good robot a long time to solve if it did not know the building inch by inch beforehand. Even then, the problem gets harder if we ask the robot to do all this during rush hour, when actual people are moving around. Until very recently, our best shot at enabling robots to perform these seemingly trivial tasks was computer vision aided by multiple other sensors, which ended up being far too expensive for the robots to be of any practical use.
But robotics is not all: depth-sensing technology makes interacting with any kind of computer much easier, because the computer’s perception of objects is now closer to our own. In a way, we have come one step closer to computers figuring out what it is like to be a human. It is now easier for us to make computers that recognize our gestures and postures, and to create programs that help us more and more every day. The best example of this is Kinect on the Microsoft Xbox 360, which allows users to play games in a most unique fashion by actually performing the actions. The computer can now understand those actions more easily and respond to them by varying the scenario of the game!
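As a rough sketch of how gesture recognition becomes simple once the computer has body data: depth sensors like Kinect can report skeleton joint positions, and a gesture such as "hand raised" reduces to comparing coordinates. The joint names and coordinates below are invented for illustration (image coordinates with y growing downward), not an actual Kinect SDK structure:

```python
# Hypothetical skeleton joints as (x, y) image coordinates, y grows downward.
joints = {"head": (320, 100), "right_hand": (420, 60), "left_hand": (250, 300)}

def hand_raised(joints, hand):
    # A hand counts as "raised" when it sits above the head,
    # i.e. its y coordinate is smaller.
    return joints[hand][1] < joints["head"][1]

print(hand_raised(joints, "right_hand"))  # the right hand is above the head
print(hand_raised(joints, "left_hand"))   # the left hand hangs below it
```

A game can poll a check like this every frame and vary the scenario accordingly, which is exactly the kind of interaction the paragraph above describes.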
This increased interaction between man and machine is a step towards our vision of the future, in which we have virtual changing rooms, a wide range of robots, and computers that read our actions and body language the way our fellow humans do, and respond to them in the most appropriate manner. If everything we do is thought of as an advancement towards a goal, this has to be one of the larger, more significant steps that bring us much closer to it.