By James R. Doty, MD
The automotive industry is on a quest to create fully functioning, safe, and responsive autonomous vehicles enhanced by artificial intelligence (AI). Today’s AI is a function of machine learning and neural networks that rely almost completely on training and repetition. This makes sense for repeatable tasks, but what about for higher-order skills, such as driving, that require more than mere mechanized logic? Humans do not drive purely from an autonomic response to input. We add emotional intelligence, intuition, and morality as decisive factors while we’re behind the wheel.
At an early age, I was introduced to four mindfulness techniques: relaxation, taming the mind, opening the heart, and clarifying the intent. Over the years, I’ve spoken often about the importance of the third lesson, which I didn’t quite grasp until later in my life: opening the heart. What I learned is that nurturing compassion and its connectedness to all of life clarifies the last practice of setting intention. For when the heart is open, we are able to think more about longer-lasting and purpose-driven goals, such as cultivating relationships, helping others, and growing more aware of the intricate beauty in our shared experiences. By opening the heart, we are capable of thinking beyond cost-benefit analysis, beyond selfish desires, and into support for others and the greater good.
Today, artificial intelligence faces the same challenge. As a neurosurgeon and founder of the Center for Compassion and Altruism Research and Education at Stanford University, I am fascinated by the possibility that one day AI may not only mimic human sensory perception, but also human compassion. Currently, AI has been trained to relax, focus, and clarify — but it hasn’t been trained to open its heart. Together with AEye — a company that mimics human perception to create an artificial perception platform for autonomous vehicles — we are leading the discussion to change that. But we need your help. It is our collective responsibility, as compassionate humans, to initiate a dialogue with those at the helm of this technology, so we may consider the “behaviors” that will ultimately be written into self-driving software programs for potentially dangerous situations. In other words: what will be the car’s guiding ethics? To make truly safe autonomous vehicles, will we need to “teach” them empathy and compassion?
Can we train AI to have compassion?
As I outline in Part I of this article, Think Like a Robot, Perceive Like a Human, we have been able to give autonomous vehicles human-like perception. But can we give them a heart? And should we? This is where the debate begins. Self-driving vehicles are achieving the ability to identify relevant information, process it, and respond accordingly, as a human would. Now, we must consider how to incorporate human-like empathy and compassion in their decision-making — for blind technology without compassion is ruthless. If we want computers to have compassion, we must allow space and time to build the software appropriately, to prevent processes that blind us from our humanity.
The first way to train computers to perceive with compassion is to give them enough time to infer the intent of movement. Is that a child standing on the sidewalk with potentially unpredictable behavior? Is there a ball in the road and a person following it? Is a blind person crossing the intersection, unaware of the approaching vehicle? As humans, we process these evaluations first, allowing us to see not only an object, but a person. We are compassionate in our understanding that a child may run into the street, or an individual may be unaware of a vehicle approaching the intersection. This ultimately allows us to drive with intention, taking on responsibility for the safety of other people, in addition to ourselves. This is more advanced than conventional AI, which is programmed and trained to track objects (or “blobs”) in a cold, repetitive way.
The second way to give computers compassion is to develop AI with situational awareness. Situational awareness means that a driver understands the need to approach an intersection with caution, as people may be crossing the street. Conventional AI in autonomous vehicles lacks this type of perception. However, innovative companies like AEye build their sensors to have situational awareness, allowing autonomous vehicles not only to have capabilities that we take for granted in human perception (like differentiating between how we maneuver our vehicles through an urban area versus driving along a country road), but to have intuition, compassion, and an understanding of possible intent. For example, if the system’s LiDAR sensor identifies a large object in front of the vehicle, the camera and computer vision algorithms work in unison to more fully investigate the scene, identify whether it is a truck, an ambulance, or a school bus, and initiate an appropriate response (such as slowing down in anticipation of the school bus stopping). Building situational awareness into self-driving systems inherently builds in social morals.
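The LiDAR-then-camera loop described above can be sketched in a few lines of Python. To be clear, this is an illustrative assumption on my part, not AEye’s actual software: the `classify` stub, the size threshold, and the response table are all hypothetical placeholders for far more sophisticated perception and planning systems.

```python
# Illustrative sketch of a situational-awareness loop: a LiDAR return flags
# a large object, a (stubbed) vision classifier labels it, and the planner
# maps that label to a cautious response. All names here are hypothetical.

def classify(lidar_size_m: float, camera_label: str) -> str:
    """Stub for camera/computer-vision fusion: trust the camera's label
    only when LiDAR confirms the object is large enough to matter."""
    if lidar_size_m >= 2.0:   # hypothetical size threshold, in meters
        return camera_label
    return "small_object"

# The "appropriate response" lookup — where situational context lives.
RESPONSES = {
    "school_bus": "slow_down",       # anticipate the bus stopping
    "ambulance": "yield",            # clear a path
    "truck": "maintain_distance",
    "small_object": "monitor",
}

def plan(lidar_size_m: float, camera_label: str) -> str:
    label = classify(lidar_size_m, camera_label)
    return RESPONSES.get(label, "brake")  # unknown object: default to caution

print(plan(3.5, "school_bus"))  # -> slow_down
print(plan(0.5, "school_bus"))  # -> monitor (LiDAR says it's too small)
```

The point of the sketch is the `RESPONSES` table: which labels the system distinguishes, and how cautiously it reacts to each, is a design decision — and that is where the “social morals” get written in.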
Third, if we look at our own behavior (as a species and as individuals), we see that we continually act upon our assumptions (rules of thumb or shared knowledge), which are guided by feedback and effects witnessed from past behavior. Choosing the most compassionate decision is determined by the context of a single moment, and this context is determined by our unique ability to efficiently assess our environment. Therefore, in the situation of driving a vehicle, for our own survival and for the empathetic survival of others, we make calculated risks. As examples: turning left at an intersection, estimating the required speed to merge with oncoming traffic, predicting the best moment to change lanes on the highway, even simply driving in one city versus another. Each of these scenarios requires knowledge of different contexts (situational awareness) and rules guided by previous experiences (assumptions). Our perspective of the world is built by our unique history, which in turn leads to better, more compassionate perception.
An AI ‘Trolley Problem’
The AI algorithm that coordinates self-driving car responses must make decisions, which at times may not be easy, even for a human. Consider a scenario where a cat runs into the path of an autonomous vehicle on a crowded city street. What is the vehicle programmed to do? Option 1) it hits the cat. This decision doesn’t impact the safety of the humans in the vehicle, thus the AI optimizes for the safety of humans overall (sorry, Kitty). Option 2) it brakes hard or swerves to avoid hitting the cat. Although this decision would spare the cat’s life, it could potentially cause a serious accident, which would harm humans and cause traffic delays. Option 3) it develops a sophisticated algorithm that calculates the potential risk of stopping/swerving for the cat and determines the optimal outcome before deciding. But in this scenario, how many dimensions can be considered simultaneously? My choice would be Option 3, as I would opt (if I can) to save the cat. But this poses another ethical conundrum: who determines the programmed decision the vehicle would make?
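Option 3 amounts to an expected-cost calculation. Here is a minimal sketch, with the caveat that every number in it — the probabilities, the cost weights, even the list of actions — is a made-up assumption, not a real safety policy:

```python
# Toy expected-cost model for the cat scenario. The actions, probabilities,
# and cost weights below are illustrative assumptions only.

ACTIONS = {
    # action: (P(cat is harmed), P(humans are harmed))
    "continue":   (0.95, 0.00),  # Option 1: hit the cat
    "brake_hard": (0.10, 0.30),  # Option 2: risk a rear-end collision
    "swerve":     (0.05, 0.40),  # Option 2: risk leaving the lane
}

COST_CAT = 1.0      # relative cost assigned to harming the cat
COST_HUMAN = 100.0  # harming humans weighted far more heavily

def expected_cost(p_cat: float, p_human: float) -> float:
    return p_cat * COST_CAT + p_human * COST_HUMAN

def choose_action() -> str:
    # Option 3: pick the action that minimizes expected cost
    return min(ACTIONS, key=lambda a: expected_cost(*ACTIONS[a]))

print(choose_action())  # -> continue
```

With these particular weights, the car hits the cat. Raise `COST_CAT`, or lower the collision probabilities, and the choice flips — which is precisely the ethical conundrum: whoever sets the numbers sets the car’s morals.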
As hundreds of thousands of autonomous vehicles enter our streets, will we need a standard definition of compassion shared by all vehicles so they can predict behavior based on situational awareness? Or will compassion be a feature that is offered to differentiate one service from another? Should the vehicle owner, the car company, or an independent entity define a vehicle’s ethics? How about its level of empathy and compassion? These are all questions that have yet to be answered.
‘A man without ethics is a wild beast loosed upon this world.’ — Albert Camus
The danger of our machinery mimicking human perception without compassion is that, if AI works correctly, it eventually won’t require human tinkering. Thus, if we don’t know how to open AI’s heart and do not prioritize certain contextual data now, we will create a world in which we get from Point A to Point B by ruthless calculations, which could potentially result in immense destruction along the way. Therefore, I amend the Camus quote, above: “A machine without ethics is a wild beast loosed upon this world.”
Unlike machines, we humans use our minds and our hearts to make decisions, cultivated by knowledge and perspective based on personal experience. The AI technology being developed today must be sensitive to that, and we must consider setting clear intentions when writing these programs from the beginning. While typical software programs have been calibrated to identify a cost-benefit to many decisions, we will soon face challenges that pose new moral issues where the cost may be a life.
Artificial intelligence will only be as sensitive, compassionate and aware as we design it to be. Humans are caring and kind, and we must incorporate the best parts of humanity (our compassion and empathy) into our artificial intelligence. Otherwise, we risk blindly building a future full of cold, calculating, and ruthless technology. Now is the time to recognize this need for the lesson I learned late in life: to open the heart and make compassion the priority — to make compassion our culture. It’s our responsibility to design our computers in this same light. Therefore, we must move beyond artificial intelligence. We must create artificial humanity.
James R. Doty, MD, is a clinical professor in the Department of Neurosurgery at Stanford University School of Medicine. He is also the founder and director of the Center for Compassion and Altruism Research and Education at Stanford University, of which His Holiness the Dalai Lama is the founding benefactor. He works with scientists from a number of disciplines examining the neural bases for compassion and altruism. He holds multiple patents and is the former CEO of Accuray. Dr. Doty is the New York Times bestselling author of Into the Magic Shop: A Neurosurgeon’s Quest to Discover the Mysteries of the Brain and the Secrets of the Heart. He is also the senior editor of the recently released Oxford Handbook of Compassion Science.