The scientists placed sensors on people’s fingers to record pulse amplitude, a measure of arousal, while they sat in a driving simulator. An algorithm used those recordings to learn to predict an average person’s pulse amplitude at each moment on the course. It then used those “fear” signals as a guide while learning to drive through the virtual world: if a human would be scared here, it might muse, “I’m doing something wrong.” (Hutson, M. (2019). Scientists teach computers fear—to make them better drivers. Science magazine.)
This makes intuitive sense; algorithms have no idea what humans fear, nor even what “fear” is. This project takes human fight-or-flight physiological data and uses it to train an autonomous driving algorithm, giving it a sense of what we feel when we face anxiety-producing situations. The system can use those fear signals to identify more quickly when it is moving into dangerous territory, adjusting its behaviour to be less risky.
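One way to picture how such fear signals could guide a learning agent is as a penalty subtracted from the task reward, so risky states are discouraged before any crash occurs. The sketch below is hypothetical: the article does not publish the system's code, and the `fear_model`, `shaped_reward`, and toy state representation here are illustrative assumptions, not the researchers' implementation.

```python
# Hypothetical sketch: shaping an agent's reward with a "fear" signal.
# In the study, a model was trained to predict an average person's pulse
# amplitude at each moment on the course; here a toy stand-in plays that role.

def shaped_reward(task_reward, state, fear_model, weight=0.5):
    """Subtract a weighted fear term so anxiety-inducing states are
    penalized even before they lead to an actual failure."""
    return task_reward - weight * fear_model(state)

def toy_fear_model(state):
    """Stand-in predictor of human arousal (0 = calm, 1 = terrified).
    Fear rises with speed and with proximity to an obstacle."""
    speed, distance_to_obstacle = state
    return min(1.0, speed / (distance_to_obstacle + 1e-6) / 10.0)

# A fast approach toward a nearby obstacle earns less reward than
# cruising slowly with plenty of clearance, nudging the agent to be cautious.
risky = shaped_reward(1.0, (20.0, 5.0), toy_fear_model)
safe = shaped_reward(1.0, (5.0, 50.0), toy_fear_model)
print(risky, safe)
```

The design point is that the fear term provides a dense, anticipatory training signal: the agent is corrected while merely approaching danger, rather than only after a rare, catastrophic outcome.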
There are interesting potential use cases in healthcare; surgery, for example. When algorithms train on simulations or games, errors do not lead to high-stakes consequences. But when we trust machines to make potentially life-threatening choices, we would like them to be more circumspect and risk-averse. One of the challenges is getting them to incorporate a human’s perception of risk into the decision-making process. Learning that cutting a particular artery will likely lead to death can be done by cutting that artery hundreds of times (in simulations) and noting the outcome. This gives us a process whereby the algorithm “senses”