We’ve spent a lot of time on Fw:Thinking talking about how to make machines better understand our intentions. Ideally, our devices will not only detect but anticipate our needs, providing the things we want even before we’re aware that we want them. But what about understanding the machines? How do we learn why a robot chose a specific pathway over every other option?
Anyone who has watched a Roomba meander across a floor has probably wondered what sort of sorcery guided the robot’s decisions. Its path can often appear random and inefficient. But the truth is that robots must make a series of decisions to find the optimal path at any given moment. It may not make sense to us, but it works for the robot.
To better illustrate a robot’s decision-making process, MIT researchers have created what sounds like a fun projection system. The system links up to a robot similar to a Roomba. When the robot detects an obstacle in its path, that information goes to the projection system, which projects a red dot at the location of the obstacle. Various pathways are also projected onto the floor, representing all the different options the robot has at any given time. A green pathway represents the best, most efficient route.
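To get a feel for the kind of choice the projector is visualizing, here is a toy sketch in Python: several candidate paths get scored, and the cheapest one is the "green" route. The path representation, the cost function, and the obstacle penalty are all hypothetical illustrations, not the actual MIT system.

```python
# Toy path-selection sketch (hypothetical; not MIT's actual algorithm).
# Paths are lists of (x, y) waypoints; cost = length + penalty per obstacle hit.
import math

def path_length(path):
    """Sum of straight-line distances between consecutive waypoints."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def path_cost(path, obstacles, penalty=100.0):
    """Length plus a heavy penalty for every waypoint on an obstacle."""
    hits = sum(1 for p in path if p in obstacles)
    return path_length(path) + penalty * hits

def best_path(candidates, obstacles):
    """Return the lowest-cost candidate -- the route to draw in green."""
    return min(candidates, key=lambda p: path_cost(p, obstacles))

obstacles = {(1, 1)}  # where the projector would put its red dot
candidates = [
    [(0, 0), (1, 1), (2, 2)],  # direct, but crosses the obstacle
    [(0, 0), (0, 2), (2, 2)],  # detour around it
    [(0, 0), (2, 0), (2, 2)],  # another detour
]
print(best_path(candidates, obstacles))  # → [(0, 0), (0, 2), (2, 2)]
```

The direct diagonal is shortest, but the obstacle penalty makes it the worst option, so the first detour wins. A real robot would score paths with sensor data and far richer cost terms, but the basic pick-the-cheapest-option logic is the same.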
The system isn’t just about learning why a robot chooses to do what it does. It’s also meant to detect errors in the robot’s programming. If the robot isn’t picking the best option or makes choices that put it at risk, programmers can search for the underlying problem and attempt to address it. While this might seem trivial with a robot vacuum, it quickly becomes crucial with other robot form factors, such as flying drones.
Ultimately, knowing more about how a robot chooses to carry out a specific function helps engineers design better robots. It may turn out that a program that seemed fine in the concept stage isn’t working out in practice. Or it may allow engineers to continuously improve efficiency to bring about that future I mentioned earlier. At the very least, it might help answer questions that robots without mouths cannot themselves answer — like why my Roomba is stalking my cat.