The human driver

As the ability to harness the power of artificial intelligence grows, so does the need to consider the difficult decisions and trade-offs humans make about privacy, bias, ethics, and safety.

Computer scientists Michael Kearns (right) and Aaron Roth (second from left) are at the forefront of the effort to ensure engineers are building algorithms that reflect society’s values, and to help translate those values into specific instructions for a computer program. Their book, “The Ethical Algorithm,” will be published in November.

As artificial intelligence has moved from the realm of science fiction into everyday applications, the thrilling possibilities—and the potential for problems—have drawn most of the interest.

Already, some AI-enabled practices have raised serious concerns, like the ability to create deepfake videos to put words in someone’s mouth, or the growing use of facial recognition technology in public places. Automated results that turned out to reflect racial or gender bias have prompted some to say the programs themselves are racist.

But the problem is more accidental than malicious, says Penn computer scientist Aaron Roth. An algorithm is a tool, like a hammer—but while it would make no sense to talk about an “ethical” hammer, it’s possible to make an algorithm better through more thoughtful design.

“It wouldn’t be a moral failure of the hammer if I used it to hit someone. The ethical lapse would be my own,” he says. “But the harms that algorithms ultimately do are several degrees removed from the human beings, the engineers, who are designing them.”

Roth and other experts acknowledge that getting humans to train machines to emphasize fairness, privacy, and safety is a huge challenge. Already, researchers across disciplines, from engineering and computer science to philosophy and sociology, are working to translate those vague social norms into practical instructions for computer programs. That means asking some hard questions, Roth says.

“Of course, regulation and legal approaches have an important role to play, but I think that by themselves they are woefully insufficient,” says Roth, whose book, “The Ethical Algorithm,” with Penn colleague Michael Kearns will be published in November.

The sheer size of the data sets can make transparency difficult, he adds, even as it makes errors easier to surface.


“These aren’t new issues. It’s just that it’s sort of easy to ignore them when humans are making the decisions,” he says.

Christopher Yoo, a Penn Law professor whose work revolves around the intersection of law and technology, emphasizes that like any tool, artificial intelligence has its proper uses and its limits. The focus, he says, should be on building in more care, transparency, and accountability, to make sure the people working with the code are using algorithms and predictive analytics in the right circumstances and not asking too much of them.

“One of my fears is that we will hold algorithms to a perfection standard, because we forget that the alternative is not perfect,” Yoo says. “There is no tool that can’t be misused.

“There should be a human in the loop. We have to find the right balance. And we need to be training students as to what that is.”

Unintended consequences

Before a society can make these decisions in an informed way, Roth says, it’s important to know what’s possible. We might worry about data privacy while wearing a fitness tracker on our wrists, using Waze to find a quick route home, or taking shopping advice from Walmart.

Rakesh Vohra’s work unites economics, computer science, and electrical engineering. He says the growing use of algorithms could have implications for competition among firms and even pricing. It’s unclear, he says, how regulations could best protect consumers.

But the ubiquity of recommendations for products, movies, music, and more carries its own potential downside, says Rakesh Vohra, a Penn Integrates Knowledge professor whose game theory work combines economics, computer science, and engineering.

Voice-driven activities make up a burgeoning share of the market, he says, whether it’s telling your Google Home device to play your local NPR station or asking Alexa to order you more toilet paper.

That example presents an interesting possibility, he says: Because you’ve used your voice, there’s no list of 10 brands to scroll through, and Alexa might read only a couple of top suggestions. Now, whoever owns the voice access is the gatekeeper, so to speak, with the ability to recommend those selected brands.

“That puts the gatekeeper in a powerful position, because that might mean that if I’m selling toilet paper, unless I have a relationship with the gatekeeper, my toilet paper is not coming to the top of the list,” says Vohra, who is the co-director of Penn’s Warren Center for Network & Data Sciences.

Amazon produces some of its own goods, but is also paid to recommend certain products to you. Google has the technology, but not the products to sell, and a huge retailer like Walmart has the products but not the technology. With only one or two choices coming out of the Alexa speaker, will some companies get an unfair advantage?

An additional possibility, Vohra says, is that the sophisticated pricing algorithms deployed by different companies competing for the same market—for example, airlines—could learn to coordinate on raising prices. This concept, called algorithmic price fixing, could be difficult to detect. If discovered, who is at fault? The firms? The algorithm’s designer?

“It raises a lot of competition issues,” he says. “Does this sort of thing need regulation, and if so, how would you regulate it?”
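
One way to see the worry is with a deliberately simplified toy in Python, not a model of how any real airline or retail pricing system works: two independently written algorithms that each follow a rule like “match a rival’s cut, otherwise charge the high price” can sustain high prices with no explicit agreement between the firms.

    # Deliberately simplified toy, not a model of any real pricing system.
    # Each algorithm follows: match a rival's cut, otherwise charge the high price.
    HIGH, LOW = 100.0, 80.0

    def reprice(my_price, rival_price):
        # Match an undercut immediately; once the rival is back in line,
        # return to the high price.
        return rival_price if rival_price < my_price else HIGH

    price = {"A": HIGH, "B": HIGH}
    for day in range(8):
        mover, rival = ("A", "B") if day % 2 == 0 else ("B", "A")
        if day == 2:
            price[mover] = LOW   # firm A experiments with a unilateral cut
        else:
            price[mover] = reprice(price[mover], price[rival])
        print(f"day {day}: A={price['A']:.0f}  B={price['B']:.0f}")
    # The cut is matched within a day and gains nothing, so both algorithms
    # drift back to the high price: sustained high prices with no explicit
    # agreement for a regulator to find.

Whether behavior like this counts as price fixing, and who would be liable for it, is exactly the regulatory question Vohra raises.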

Bias, or a mirror of society?

It’s been well-documented during the past several years that some algorithms are demonstrably unfair to one or more groups, whether they’re spitting out parole decisions or picking job candidates. The obvious question is why.

Philosopher Lisa Miracchi works in the realm of epistemology and cognitive science. She’s collaborating with roboticists to embed better intelligence into machines, but says there’s no replacing human judgment anytime soon. “Humans are robust reasoners in a way that we have no idea how to make AIs or robots,” she says.

“We know when there’s something formal like a computer program that outputs a recommendation, people tend to assume that it’s somehow more objective, somehow more value-neutral because it’s a computer program,” says Lisa Miracchi, a philosophy professor whose work in epistemology and cognitive science has carried over to helping roboticists create more intelligent machines.

“But if there are biases in data sets or the way the data sets are collected, they’re going to be carried through or even magnified by the program.”

Kearns, a computer scientist and the founding director of the Warren Center, says the answer is less about racism or sexism than a computer program that hasn’t been taught to emphasize fairness or de-emphasize bias.

“You shouldn’t expect fairness to emerge for free. In fact, you should expect the opposite,” Kearns says. “The thing about an algorithm is that it’s going to do something in every single specific situation.”

But what does it mean to be fair? Even if a given group—whether it’s a small community, single nation, or global population—can agree on a set of fairness standards, those standards can sometimes conflict with, or even contradict, one another.

As more data about all of us becomes available, Kearns says, even established standards can become outdated. One instance is anti-discrimination laws that keep race off a loan or credit application in order to combat racial bias. When those laws were written, the only data credit scoring companies had was what was on the application, so it made sense to think excluding race from that information could reduce discrimination.


“Now, we know so much about people that there are all kinds of proxies for race,” Kearns says. “We can figure your race out from content you like on Facebook, or your ZIP code.”

Given that, some anti-discrimination laws may no longer make sense as written, because simply leaving race or gender off an application no longer keeps that information out of the data.
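
To make the proxy problem concrete, here is a minimal sketch in Python using entirely synthetic data; the scikit-learn classifier and the made-up “ZIP code” and income features are purely illustrative. A model that is never shown the protected attribute still recovers it from a correlated field.

    # Minimal sketch with synthetic data: drop the protected attribute,
    # and a correlated "proxy" feature still lets a model reconstruct it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    group = rng.integers(0, 2, size=n)                           # protected attribute, never shown to the model
    zip_area = np.where(rng.random(n) < 0.9, group, 1 - group)   # hypothetical ZIP-code feature, 90% correlated
    income = rng.normal(50 + 10 * group, 15, size=n)             # loosely correlated feature

    X = np.column_stack([zip_area, income])   # "application" data with the protected attribute excluded
    X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)

    clf = LogisticRegression().fit(X_train, g_train)
    print("Accuracy predicting the excluded attribute:", clf.score(X_test, g_test))

On this synthetic data the classifier recovers the excluded attribute roughly 90 percent of the time, which is the sense in which leaving race off an application no longer keeps it out of the model.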

Who decides, and how?

Think back: How many times have you typed in your email address and checked a box accepting the terms and conditions of an application without reading them? You’re making a trade-off each time: Your information, even if it’s just the basics, for a discount, or the chance to see your friends’ photos.

The questions computer scientists and other experts are grappling with involve much more serious trade-offs. Privacy might be a minor issue if you’re handing out diaper coupons, but a major concern if you’re teaching a computer to read MRIs.


Deepfake videos, which use artificial intelligence to manipulate faces, speech, and other details, are a concern because of their ability to mislead. In this clip, Facebook founder and CEO Mark Zuckerberg appears to say words he’s never said. Social media platforms have struggled to handle these types of videos because they can be difficult to detect.


“This is something that regulators are going to have to grapple with,” Roth says. “You can’t just say, ‘Your algorithm has to be fair,’ or ‘Your algorithm has to protect privacy’ if we continue to be vague about what those terms mean. You have to be exceedingly mathematically precise. The first lesson is, in order to actually get algorithms that satisfy these goals, you have to think really hard about what you mean—and be more precise than a philosopher or a regulator would be.”

Another important consideration: Each trade-off has consequences down the line, and they can be significant, Kearns says.

“It might be that to build a predictive model for some disease in a way that that model guarantees the privacy of the medical records that went into it, maybe I suffer a 25% degradation in my accuracy of predicting the disease. So, by asking for privacy, more people will die because I’ve constrained myself to build a model that does this other thing also,” he says. “What needs to happen next is a real dialogue with stakeholders and policymakers. I can show that trade-off even in a quantitative way, but should I be the one who decides the right balance?”

Engineer George Pappas says building a foundation for AI that’s robust and safe will help earn the public’s trust in technology like self-driving cars.

“The biggest challenge now is that we are using AI in the wild,” he says. “While we can tag faces on Facebook, creating reliable and robust maps in real time that fully capture the complexity of traffic at a congested intersection remains a challenge.

“Trusting your advertising to AI is one thing. Trusting your life to AI is another.”

For example, a human driver can decide whether to risk being rear-ended to stop for a pedestrian who pops out into traffic. In that situation, what should a self-driving car be designed to do? Pappas, too, sees the need for engineers, ethicists, and psychologists to work together on setting standards.


“This is a very hard challenge,” Pappas says. “Can we categorize or model all possible interactions between driverless cars, human driver cars, bicycles, or kids crossing the street in an urban environment? What are the principles of such human-robot interactions, and how can we encode them in our programming when driverless cars encounter such life-critical dilemmas?

“Even if technically we can advance our programming to the level that such difficult choices can be made, we, as a society, need to have a discussion about what we expect from driverless cars in such situations.”

The path forward

Kearns and Roth wrote “The Ethical Algorithm” to be accessible to consumers, and to encourage scientists to engage with these difficult questions. Roth says he’s heartened by the progress computer scientists have made on the topic of data privacy in recent years, and hopes it’s a model for addressing concerns about fairness, transparency, and safety in AI.

Computer scientist Aaron Roth (left) says he’s hopeful the programming community will find solutions to issues involving privacy, safety, and fairness. “There is bad algorithmic behavior because these are hard problems, but there’s a big community of people working on it,” he says.

That progress centers on the concept of “differential privacy,” in which algorithms are written so that an observer can learn essentially nothing about whether any one individual’s data was included. The idea began in obscure math papers, Roth says, and has been refined to the point where the United States Census Bureau is using it to safeguard the privacy of citizens’ responses to the 2020 census.
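
Roughly, the mechanism can be sketched in a few lines of Python using the standard Laplace mechanism on a count query; the epsilon value and the query below are illustrative, not the Census Bureau’s actual system.

    import numpy as np

    rng = np.random.default_rng()

    def private_count(records, predicate, epsilon=0.5):
        """Differentially private count of records matching `predicate`.

        Any one person changes the true count by at most 1, so adding
        Laplace noise with scale 1/epsilon makes it statistically hard
        to tell whether that person's record was included at all.
        """
        true_count = sum(1 for r in records if predicate(r))
        return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

    # Illustrative query over synthetic records.
    people = [{"age": int(a)} for a in rng.integers(18, 90, size=1000)]
    print(private_count(people, lambda p: p["age"] >= 65))

Because adding or removing any one person changes the true count by at most 1, the noise masks that person’s presence while the statistic stays useful in aggregate.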

“It’s a transfer of not just technology but fundamental ideas from the engineering side to the policy side,” says Roth, who has worked as a consultant for both Apple and Facebook.

“For algorithmic fairness, we’re not there yet. We don’t agree on what the right definitions are—there are almost as many definitions of fairness as there are papers about fairness—but this is sort of what the privacy literature looked like 10 years ago. It would be premature to say there are solutions right now, but looking at what happened with differential privacy, it’s a road map.”

Ultimately, Miracchi says, much of the concern in the popular media is based on speculative concepts, not capabilities that currently exist. For the time being, people still have the advantage over machines.

“Humans are robust reasoners in ways that we have no idea how to make AIs or robots,” she says. “The kind of flexibility that human reasoners exhibit, the way we very subtly take into account the meanings of our thoughts, the ways in which we can incorporate new information in a way that’s flexible—we have no idea how to program systems to do that.

“They’re no replacement for human judgment and they won’t be anytime soon.”

Michael Kearns is the National Center Professor of Management & Technology in the Department of Computer and Information Science in the School of Engineering and Applied Science at the University of Pennsylvania and the founding director of the Warren Center for Network and Data Sciences. Along with Aaron Roth, Kearns is the co-author of “The Ethical Algorithm,” a book about socially aware algorithm design.

Lisa Miracchi is an assistant professor in the Department of Philosophy in the School of Arts and Sciences.

George Pappas is the UPS Foundation Professor and Chair of the Department of Electrical and Systems Engineering, with secondary appointments in the departments of Computer and Information Science and Mechanical Engineering and Applied Mechanics, in the School of Engineering and Applied Science.

Aaron Roth is the Class of 1940 Bicentennial Term Associate Professor of Computer and Information Science in the School of Engineering and Applied Science at the University of Pennsylvania. Along with Michael Kearns, Roth is the co-author of “The Ethical Algorithm,” a book about socially aware algorithm design.

PIK professor Rakesh Vohra is the George A. Weiss and Lydia Bravo Weiss University Professor with appointments in the School of Engineering and Applied Science and the School of Arts and Sciences. He is also the co-director of the Warren Center for Network and Data Sciences.

Christopher Yoo is the John H. Chestnut Professor of Law, Communication, and Computer and Information Science and director of the Center for Technology, Innovation, and Competition at the University of Pennsylvania Law School.

Penn Today writer Erica K. Brockmeier contributed reporting.