5 challenges for roboticists in the next 10 years
Monica Nicolescu and Kostas Alexis share the same vision: in 10 years, robots will be working alongside humans in some of our most important workplaces. They won't yet be human-like in their capabilities or their personalities, but they'll be able to accomplish specific, defined tasks. These robots, designed to fill a particular niche in a workplace, are called service robots. Service robots have been around for a while - just think of the Roomba - but their potential range of uses is growing rapidly as technology improves.
Nicolescu and Alexis are focused on fairly different applications: Alexis wants to design robots that work in environments that can be hazardous for humans, while Nicolescu is focused on robots that can work alongside humans in everyday environments like homes and offices. But the challenges they are tackling in getting robots ready for the real world are quite similar.
Operating in challenging environments
Some of the most promising applications for robots require them to work in environments with limited visibility and unknown hazards, such as disaster sites, nuclear storage facilities, or oil and gas fields.
"In these applications, failure is not accepted," says Alexis, an assistant professor of computer science and engineering. "That means the robot has to be able to operate in visually degraded conditions. The robot has to be robust against environmental conditions. If it's an aerial robot, it's wind. If it's a ground robot, it's terrain anomalies. There are all sorts of different robotic configurations in order to address the challenges of one or the other domain."
"I think service robots will be the first kind of robot that you will see extensively in live use because the need is obvious. They want to do a job that’s very tedious, it’s dirty, it’s dull, it’s dangerous, it costs a lot of money, and robots can potentially do it better and safer."
In addition to navigational challenges, poor visibility impacts a robot's ability to evaluate what it is seeing and decide how to respond to it. For many service robots, that's a key capability necessary for deciding when to investigate a particular area more closely.
"Robots essentially are probabilistic machines. They have to infer the condition of the world and then they make a decision based on that," Alexis says. "As we go to more and more challenging and more and more unknown environments, these probabilities go down, which means that we are less and less certain. We have to have robots that have the intelligence to react on how certain they are about their environment."
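Alexis's point about probabilistic machines can be made concrete with a minimal sketch. The code below applies Bayes' rule to fuse noisy "obstacle" readings; all sensor reliabilities and readings are invented for illustration, not drawn from any real robot. In clear conditions a reliable sensor drives the belief toward certainty in a few readings, while in visually degraded conditions the same readings barely move it.

```python
# Hypothetical illustration: fusing noisy sensor readings with Bayes' rule.
# All probabilities below are invented for the sketch.

def bayes_update(prior: float, reading: bool, p_hit: float, p_false: float) -> float:
    """Update the belief that an obstacle is present given one sensor reading.

    p_hit   = P(sensor reports "obstacle" | obstacle really there)
    p_false = P(sensor reports "obstacle" | no obstacle)
    """
    if reading:
        num = p_hit * prior
        den = p_hit * prior + p_false * (1.0 - prior)
    else:
        num = (1.0 - p_hit) * prior
        den = (1.0 - p_hit) * prior + (1.0 - p_false) * (1.0 - prior)
    return num / den

belief = 0.5  # start maximally uncertain
# Clear conditions: a reliable sensor (assumed 95% hit rate) converges fast.
for reading in [True, True, True]:
    belief = bayes_update(belief, reading, p_hit=0.95, p_false=0.05)
print(f"clear conditions: {belief:.3f}")

belief = 0.5
# Degraded conditions (smoke, dust): the same readings move the belief far less.
for reading in [True, True, True]:
    belief = bayes_update(belief, reading, p_hit=0.6, p_false=0.4)
print(f"degraded conditions: {belief:.3f}")
```

The gap between the two final beliefs is exactly the effect Alexis describes: in unknown environments the probabilities degrade, so an intelligent robot must act differently when its confidence stays low.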
Dealing with uncertainty
"I think the biggest issue right now is still the uncertainty of the real world," says Nicolescu, a professor of computer science and engineering. "That's what is preventing us from throwing robots out in the real world and having them do whatever we want. They will work perfectly in the lab. But you change things a little bit for them, you've already realized you have a whole host of other problems to solve. A lot of advances have been made in these algorithms, so we're able to push the limit a little bit more. So slowly we're getting there."
One of the key issues involved is the so-called symbol grounding problem, which has to do with the way robots assign labels to objects, translating between the input they get from cameras and other sensors and the way those objects are represented in their control systems.
"There's still a lot of questions regarding the nature of knowledge representation," says Nicolescu. "Kids learn from a very small age and they know how objects react. These are laws of nature that we learn implicitly. There's not a universally accepted way of representing this kind of information in a robot's brain."
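One toy sketch of the symbol grounding problem is the translation step itself: turning raw detector output (labels, confidence scores, coordinates) into symbolic facts a planner can reason over. The detections, thresholds, and predicate names below are invented for illustration; real systems differ widely, which is Nicolescu's point about there being no universally accepted representation.

```python
# A toy sketch of one face of the symbol grounding problem: mapping raw
# detector output to symbolic predicates. All values here are invented.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # what the vision system thinks it sees
    score: float  # detector confidence, 0..1
    x: float      # position in the robot's frame, metres
    y: float

def ground_symbols(detections, min_score=0.7, reach=1.0):
    """Map detections to predicates like ('within_reach', 'mug')."""
    facts = set()
    for d in detections:
        if d.score < min_score:
            continue  # too uncertain to assert anything symbolically
        facts.add(("object", d.label))
        if (d.x ** 2 + d.y ** 2) ** 0.5 <= reach:
            facts.add(("within_reach", d.label))
    return facts

scene = [
    Detection("mug", 0.92, 0.4, 0.2),    # confident and close: fully grounded
    Detection("chair", 0.88, 2.5, 1.0),  # confident but out of reach
    Detection("cable", 0.35, 0.3, 0.1),  # too uncertain: never becomes a symbol
]
print(ground_symbols(scene))
```

Even this trivial version shows where the hard questions live: the thresholds are arbitrary, the low-confidence cable simply vanishes from the robot's symbolic world, and nothing encodes the implicit physics a child learns, such as how those objects behave when touched.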
"I think we will be quite advanced in 10 years in the area of service robots, especially robots in hospitals or health care centers. The robots will know exactly where the rooms are. Staff will know the robots are around, so the environment will be accommodating to them. In that application, the algorithms we have right now would be successful."
Because questions about knowledge representation are so tricky, the current generation of robots tends to be designed for highly specific tasks or highly constrained environments, limiting the range of knowledge they need to draw on to accomplish their goal.
"Modern AI can focus very well on mastery. I know how to grasp this coffee mug or something like this. But generalizing the problem is actually what robotics is mostly about," says Alexis. "We need solutions that are generic so they work in every environment. Having a robot that deals with all the complexity of the world, it's a completely different class of challenge."
Responding in real time
Dealing with that uncertainty poses a mechanical challenge for robots as well. Current algorithms incorporate probabilistic approaches to the world with continuous feedback so the robot can adjust as it gains additional information about the environment.
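The predict-sense-correct loop described above can be sketched with a one-dimensional Kalman filter, a standard form of this probabilistic feedback. The noise figures and measurements below are illustrative, not taken from any particular platform: each cycle, motion adds uncertainty, a noisy sensor reading removes some of it, and the estimate's variance shrinks as information accumulates.

```python
# Minimal sketch of a continuous predict/correct loop using a 1-D Kalman
# filter. The noise variances and measurements are illustrative only.

def kalman_step(x, p, u, z, q=0.05, r=0.4):
    """One cycle: predict motion u, then correct with measurement z.

    x, p : current position estimate and its variance
    q, r : process and measurement noise variances (assumed values)
    """
    # Predict: moving adds uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Correct: the Kalman gain weights the sensor by how much we trust it.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0  # start unsure of position
measurements = [0.55, 1.08, 1.46, 2.02]  # noisy readings as the robot advances
for z in measurements:
    x, p = kalman_step(x, p, u=0.5, z=z)
    print(f"estimate={x:.2f}  variance={p:.3f}")
```

The shrinking variance is what lets a robot commit to motion, and it is why each cycle of sensing and re-planning has to finish quickly: the loop only helps if it runs faster than the world changes.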
"That requires a lot of speed, not just of processing, we have that, but speed in terms of actual robot movement, and the robots are still not there," says Nicolescu. "If you see the robots we have in the lab, and those are state-of-the-art platforms that everybody's using, they tend to move relatively slow, and the reason for that is that they have to take into account all the obstacles around them and plan to avoid them. If you want to have the robot move its arm out of the way because something happens, like a person would do, a robot won't be able to always do that effectively, sometimes because of mechanical constraints and speed of reaction, some other times due to limitations in sensing."
Similarly, perception algorithms may require several seconds to accurately recognize an object.
"We have algorithms that work very well but they are also all very slow," says Nicolescu. "If you want to use one of those on a robot that has to react in real time, you don't have the luxury of waiting several seconds to detect that there's a car coming."
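The trade-off Nicolescu describes is often handled as a time-budget decision. The sketch below, with entirely invented latency numbers, shows the idea: when the accurate detector cannot deliver an answer before the robot must act, the system falls back to a coarser but faster one, and if nothing fits the budget, the only safe action is to stop.

```python
# Illustrative sketch (invented numbers): picking a perception path that
# fits the robot's reaction deadline rather than blocking on a slow model.

ACCURATE_DETECTOR_LATENCY = 2.5  # seconds, assumed for a heavyweight model
FAST_DETECTOR_LATENCY = 0.05     # seconds, assumed for a coarse one
REACTION_DEADLINE = 0.2          # seconds available before the robot must act

def choose_detector(deadline=REACTION_DEADLINE):
    """Pick the most accurate detector that still fits the time budget."""
    if ACCURATE_DETECTOR_LATENCY <= deadline:
        return "accurate"
    if FAST_DETECTOR_LATENCY <= deadline:
        return "fast"  # coarser result, but delivered in time to react
    return "stop"      # no detector fits: the only safe action is to halt

print(choose_detector())
```

With these assumed numbers the fast path wins, which captures the point of the quote: a detector that takes several seconds is simply not an option when a car is coming.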
However, platform design is a relatively fast-moving component of robotics, and Nicolescu expects mechanical constraints to decrease over the next decade.
"Honestly, several years ago we didn't know we will have the type of robots that we have right now in the lab," says Nicolescu. "I can see people coming up with platforms that are going to be faster and more agile over time. I think that is going to come."
Keeping robots secure
Hackers taking control of a robot, especially one that's moving around in the world or accessing sensitive areas in hospitals or businesses, is a real security concern, and one that cybersecurity experts will need to address.
"There are lots of things that we need to consider in terms of security," Nicolescu says. "If someone breaks into your robot and takes over your robot they can do damage. Robots are not yet fully secure from this point of view."
Gaining public acceptance
For robots to truly work alongside humans, humans have to welcome them, and Alexis believes the next 10 years will be a critical period for that process.
"It's not linear progress," Alexis says. "It takes a lot of time to reach the level where people accept new technology, but after this threshold is broken then people tend to use this technology more and more, so then the researchers get excited and then it's a feedback loop."
During that time, Alexis believes roboticists need to be actively engaged in conversation with policy makers and societal stakeholders to help the public understand robotic technology and craft smart policy that optimizes the social benefits of robots.
"There has never been in history a major change in technology without a major change in the way society works," says Alexis. "So the best thing is to openly discuss the expected change. It would be false to say that there is absolutely no problem with automation and autonomy poses no threat. It's a major breakthrough and it has a lot to offer as long as people make proper use of it."