In this series of interviews, I ask scientists, engineers, and ethicists how technology might change our future. We had these conversations during the research for my book, Welcome to the Future (Quarto, 2021).
Interview 1 – Ayanna Howard
Dr. Ayanna Howard is a roboticist and dean of the College of Engineering at Ohio State University. She has built robots and AI systems for NASA’s Jet Propulsion Laboratory and founded her own robotics company, Zyrobotics. She is the author of Sex, Race, and Robots: How to Be Human in the Age of AI and serves on the board of the Partnership on AI, an organization that educates the public about AI and works toward AI that will benefit all of humanity. I spoke with her in December 2020.
In my book, Welcome to the Future, I describe a world in which robots do all the work while people get to live life like a permanent vacation. What do you think of this idea?
One of the problems with this is that work defines us. We are not designed to just read books all day — we’d actually be miserable if that’s all we did. The question is, given that we have to work, what kind of work will we do? That work could be painting [artwork], it could be writing books, but I don’t think we’ve designed an educational system that will keep us productive and happy in this type of world.
How do we make sure the robots or AI we design benefit all of humanity?
Whenever you’re designing new technology, you need to have diverse voices contributing to it. You need diverse voices talking about what should be done, how to mitigate harms, and how to do things for the good of humanity. There are different nationalities, races, genders, sexual identities — all of these make us uniquely human. AI has to understand that uniqueness to work effectively for all humans.
How does bias work its way into AI?
Bias in AI may come from data that represents our own historical wrongs. The fact is that a lot of women [in the US] did not go to work until the 1940s or ’50s. If your AI system learns from this data, the ideal job for a woman might seem to be “mom.” But that’s kind of wrong right now. Another example is that if a child’s parents went to college, it’s more likely that the child will go to college. So an AI admissions system might learn to treat kids in these two groups differently. It’s up to us to make sure that AI doesn’t treat people differently based on characteristics they are just born into.
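Howard’s admissions example can be made concrete with a toy illustration. The sketch below (in Python, with entirely made-up numbers; it is not based on any real admissions system or on Howard’s own work) shows how a naive model that simply memorizes historical outcome rates per group will reproduce whatever skew is in its training data.

```python
from collections import defaultdict

# Hypothetical, made-up historical admissions records:
# (parents_went_to_college, admitted). The data is skewed: applicants whose
# parents attended college were admitted far more often, reflecting past
# inequities rather than anything about the applicants themselves.
historical_records = (
    [(True, True)] * 80     # parents went to college, admitted
    + [(True, False)] * 20  # parents went to college, rejected
    + [(False, True)] * 30  # parents did not go to college, admitted
    + [(False, False)] * 70 # parents did not go to college, rejected
)

# A naive "model" that learns only the historical admission rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [admitted, total]
for parents_college, admitted in historical_records:
    counts[parents_college][0] += admitted
    counts[parents_college][1] += 1

for group, (admitted, total) in counts.items():
    print(f"parents_went_to_college={group}: "
          f"learned admit rate = {admitted / total:.0%}")
```

Running this prints a learned admit rate of 80% for one group and 30% for the other. A system that ranked applicants this way would systematically favor people based on a trait they were born into, which is exactly the failure mode Howard describes.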
The AI of the future will face tricky ethical situations. For example, a self-driving car might have to decide whose life to protect if an accident is unavoidable. How do we design systems to make these choices?
With self-driving cars, accidents will happen at a lower rate, but they will still happen. The question becomes — what happens when a car has to make a decision about fatalities or accidents? For example, what if a small child jumps into the street with a ball? That happens all the time. The choice might be to hit the child or swerve into a wall, hurting your passengers. So what do you decide? [With human drivers] that decision is grounded in people’s belief systems. With self-driving cars, whose belief systems are being programmed in? Some countries have a very large disparity between “haves” and “have-nots,” and “have-nots” may be more disposable. In some countries, women or girls may be more disposable. Right now we don’t have the ability to download our ethical values into a self-driving car, but when we do, do we want European values or US values? We’re trying to figure that out as a field. We can at least start with things that everyone sort of agrees on, like you shouldn’t just kill someone or take something outright.
If we ever have robot servants, how should we treat them? If they have no emotions or consciousness, is it still bad to treat them as lesser beings?
What we’ve shown is that when we start treating robots as an underclass, that starts to translate to our interactions with people. If you have a robot maid, you might yell and demand and bark orders. Then when you have a human assistant, you’re more likely than not to start treating them the same way. We reprimand kids who throw objects, even if the object isn’t damaged and the wall isn’t breaking. That behavior is a reflection of the person who is throwing.
Right, so if you mistreat robots, you’re a person who mistreats, and that’s not good. On the other side, is it acceptable for people to trust, befriend, or even love machines that can’t love them back?
There are individuals who say there is a problem with that. But we know robots can relieve loneliness for older adults. If your daughter or son is not coming to visit, is it better to have [a robot] to bond with or better to be depressed and have no one to bond with?
Can you tell me about a project you’re working on that you’re proud of?
We have a system now that can predict someone’s level of trust in a robotic system. Now we’re working on ways for the system to increase trust or decrease trust, so people will use systems but will not become over-reliant. If people have a good experience with robots, they stop questioning what the system is saying and doing. We see this on social media. Those are AI agents feeding us information, and many people believe it. This is a problem [when that information isn’t accurate]. My whole philosophy is that we need to empower people [so they don’t become over-reliant on AI or robots].
Do you think we will ever develop artificial general intelligence or even superintelligence?
Superintelligence might happen in my lifetime if I live long enough, but I don’t think it will happen within the next ten years.
What are the biggest dangers of smarter AI?
I actually think the biggest dangers are unequal outcomes for people. AI could expand the digital divide. What if all the benefits of AI, such as personal tutors, health care, and so on, require you to be of a certain socioeconomic status? Not everyone will be OK. But if done right, AI could help close this gap. For example, we could use AI as a collaborative partner to do retraining and education. If my job as a chef or cook goes away, maybe an AI system can retrain me in nutrition.
Any last words?
Developing beneficial AI is everyone’s responsibility. Even if you don’t think any of this applies to you – it does.