The system uses an electroencephalography (EEG) monitor to record brain activity, allowing it to detect when the user notices an error in a robot's object-sorting task. Machine-learning algorithms let the system classify brain waves in 10 to 30 milliseconds.
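The article doesn't describe the classification pipeline itself, but a common way to detect this kind of signal within that time budget is a band-pass filter followed by a simple linear classifier. The sketch below is only a hypothetical illustration of that general approach, not the CSAIL team's code; the sampling rate, filter band, and classifier choice are all assumptions.

```python
# Hypothetical sketch of fast EEG epoch classification (not the study's pipeline).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256  # assumed sampling rate in Hz

def preprocess(epoch):
    """Band-pass a channels x samples epoch to 1-10 Hz and flatten it."""
    b, a = butter(4, [1, 10], btype="bandpass", fs=FS)
    return filtfilt(b, a, epoch, axis=-1).ravel()

def train_detector(epochs, labels):
    """Fit a linear classifier on labeled ("error" / "no_error") epochs."""
    X = np.array([preprocess(e) for e in epochs])
    clf = LinearDiscriminantAnalysis()
    clf.fit(X, labels)
    return clf

def detect_errp(clf, epoch):
    """Classify one short epoch; a linear model keeps this well under a few ms."""
    return clf.predict([preprocess(epoch)])[0]
```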
“Imagine being able to instantaneously tell a robot to do a certain action, without needing to type a command, push a button or even say a word,” Daniela Rus, CSAIL director and the paper’s senior author, said in a news release. “A streamlined approach like that would improve our abilities to supervise factory robots, driverless cars and other technologies we haven’t even invented yet.”
Other EEG-based robots require the user to look at a computer screen to get the robot to perform desired actions. MIT reports that the training process, and the need to modulate one's thoughts in a prescribed way, can be more demanding than necessary, especially for people supervising navigation or construction tasks that already require intense concentration.
The research team wanted to address that problem by making the process feel more natural, relying on brain signals called error-related potentials (ErrPs). These signals are generated when the brain notices a mistake. When the robot indicates the move it plans to make, the system uses ErrPs to determine whether the user agrees with that choice.
“As you watch the robot, all you have to do is mentally agree or disagree with what it is doing,” said Rus. “You don’t have to train yourself to think in a certain way — the machine adapts to you, and not the other way around.”
ErrP signals are weak, so the system has to be fine-tuned enough to classify them and incorporate them into a feedback loop for the user. The researchers are also working on detecting secondary errors, which occur when the system fails to notice the user's first correction.
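To make that feedback-loop idea concrete, here is a hedged sketch of how a detected error signal might be folded back into a binary sorting choice, with a second window for the secondary errors described above. The robot and EEG interfaces (`propose_bin`, `other_bin`, `read_eeg_epoch`, and the 500 ms window) are hypothetical placeholders, not part of the published system.

```python
# Hedged sketch of the ErrP feedback loop; all interfaces are hypothetical.
def supervised_sort(robot, read_eeg_epoch, detect_errp):
    """Let a human observer veto the robot's proposed bin via ErrPs."""
    choice = robot.propose_bin()                  # robot signals its intended bin
    robot.indicate(choice)                        # observer watches the proposal

    if detect_errp(read_eeg_epoch(duration_ms=500)) == "error":
        choice = robot.other_bin(choice)          # binary task: switch bins
    else:
        # Secondary-error idea: if the first ErrP was missed, the observer's
        # reaction to the uncorrected move can produce a second signal that
        # a later window may still catch.
        robot.indicate(choice)
        if detect_errp(read_eeg_epoch(duration_ms=500)) == "error":
            choice = robot.other_bin(choice)

    robot.place_object(choice)
```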
“If the robot’s not sure about its decision, it can trigger a human response to get a more accurate answer,” said CSAIL research scientist Stephanie Gil. “These signals can dramatically improve accuracy, creating a continuous dialogue between human and robot in communicating their choices.”
Gil added that the system can't yet detect secondary errors in real time, but she expects it to reach roughly 90% accuracy once the model is fully developed. The team also believes future versions of the system could let the robot handle more complex, multiple-choice tasks.
Andres Salazar-Gomez, a Boston University PhD candidate and co-author on the paper, said that the system could benefit people who can't communicate verbally. A series of binary choices, for example, could be used to spell out words.
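As a purely hypothetical illustration of that idea, each yes/no response (for example, agreement versus a detected ErrP) can halve a set of candidate letters, so a 26-letter alphabet needs at most five binary answers per character:

```python
# Hypothetical binary-choice speller; the decision source (e.g., ErrP vs.
# agreement) is assumed, not taken from the study.
import string

def spell_letter(user_says_yes):
    """`user_says_yes(subset)` returns True if the target letter is in `subset`."""
    candidates = list(string.ascii_uppercase)
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        if user_says_yes(half):            # "Is your letter in this half?"
            candidates = half
        else:
            candidates = candidates[len(half):]
    return candidates[0]

# Example: 26 letters take at most 5 yes/no answers (2**5 = 32 >= 26).
target = "R"
print(spell_letter(lambda subset: target in subset))  # -> "R"
```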
“This work brings us closer to developing effective tools for brain-controlled robots and prostheses,” said Wolfram Burgard, a professor of computer science at the University of Freiburg. “Given how difficult it can be to translate human language into a meaningful signal for robots, work in this area could have a truly profound impact on the future of human-robot collaboration.”
The study was published online by MIT and was funded in part by Boeing and the National Science Foundation.