Robots haven’t developed anywhere near as quickly as computers have, because building them involves so many challenges. Let’s look at a couple of them.
The first challenge for robots is perceiving their environment. They do this, of course, through sensors. Humans have a pretty robust set of sensors of our own, and generally speaking, robots can measure things with far more precision than we can. But that doesn’t mean they are better at perceiving things than we are. Take the problem of vision, which is a hard one for robots. The number of things that go on in your brain when you glance down a hallway is complex in the extreme; a description of how your brain performs that minor miracle would require pages of technobabble about rods and cones and layers. So even though you can clamp the highest-definition camera money can buy onto a robot, that doesn’t give it a high degree of perception. Rather, it just provides a whole lot of data that the robot has to make sense of.
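To get a feel for just how much unlabeled data a camera hands the robot, consider a single 1920×1080 grayscale frame: roughly two million raw brightness values, with nothing in them saying "hallway" or "chair." The sketch below is pure arithmetic, not a real camera interface; the dimensions and frame rate are just typical values chosen for illustration.

```python
# A camera doesn't deliver "a hallway"; it delivers a grid of numbers.
# Model one grayscale frame as rows of brightness values (all zeros
# here -- the content doesn't matter, only the sheer volume).
WIDTH, HEIGHT, FPS = 1920, 1080, 30

frame = [[0] * WIDTH for _ in range(HEIGHT)]   # one number per pixel
values_per_frame = sum(len(row) for row in frame)
values_per_second = values_per_frame * FPS

print(values_per_frame)    # 2073600 unlabeled numbers in one frame
print(values_per_second)   # 62208000 numbers per second at 30 fps
```

Every one of those sixty-odd million numbers per second arrives with no meaning attached; turning them into "that's a doorway" is the hard part.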
Robots also have trouble figuring out where they are. Roboticists don’t really even have best practices for how to do this; it varies by situation. Often a robot is tasked with making a map of where it is, then keeping track of where it is on that map. This probably doesn’t sound too hard, because we do it effortlessly. But imagine the problem from the point of view of a robot. You’re a robot, you’ve been dropped into a room, and you see a chair and a footstool. But since the chair and the footstool can be moved, you can’t use them as anchors. You are in a constant existential crisis of “Did I move? Or did the chair move?” As such, you have to constantly redraw your map. Building a map and figuring out where you are on that map is called SLAM (simultaneous localization and mapping). It isn’t an insurmountable problem by a long shot, but it’s just another one of the many things that make the job of being a roboticist a challenging one.
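To make the chair-or-me dilemma concrete, here is a toy one-dimensional sketch of the SLAM idea. It is purely illustrative: real SLAM systems use probabilistic filters or graph optimization, and the function name, the 50/50 blend, and all the input numbers below are invented for this example. The robot dead-reckons its position from noisy odometry, builds a "map" consisting of one landmark's estimated position, and uses that map to correct its own pose.

```python
# Toy 1D "SLAM" sketch: dead-reckon from noisy odometry while
# estimating a single landmark's position from range readings,
# then use the landmark to correct the pose.

def toy_slam(odometry, ranges):
    """odometry[i]: reported forward movement at step i (noisy).
    ranges[i]: measured distance to the landmark at step i.
    Returns (estimated pose, estimated landmark position)."""
    pose = 0.0              # believed position, starting at the origin
    landmark = 0.0
    landmark_samples = []   # one landmark estimate per range reading
    for move, rng in zip(odometry, ranges):
        pose += move                          # dead reckoning: drift accumulates
        landmark_samples.append(pose + rng)   # each reading implies a landmark spot
        landmark = sum(landmark_samples) / len(landmark_samples)
        # Localization: blend dead reckoning with where the averaged
        # landmark says we must be (landmark minus measured range).
        pose = 0.5 * pose + 0.5 * (landmark - rng)
    return pose, landmark

# True motion is 1.0 per step toward a landmark at 10.0; the odometry
# here is noisy while the range readings happen to be exact.
pose, landmark = toy_slam([1.1, 0.9, 1.05], [9.0, 8.0, 7.0])
```

Even this cartoon shows the circularity described above: the landmark estimate depends on the robot's pose, and the pose correction depends on the landmark, which is exactly why the "localization" and the "mapping" in SLAM have to happen simultaneously.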