Facing the Ethics of AI

Craig Wing MBA’10 would like you to know that ostriches don’t actually stick their heads in the sand when frightened. “It’s an urban legend,” he says. “Coming from South Africa, I can tell you that it is not true.”

But he’s perfectly happy to accept the validity of the metaphor, when applied to companies and individuals who fail to grapple with the changing technological landscape: “That strategy is not a strategy. Sticking your head in the sand and hoping that if you ignore it, the problem will go away. That’s not what you should do.”

As a futurist with a background in engineering, Wing is drawn to a different sort of metaphorical bird. “My job is to say, what are some of the black swans? What are some of the signals? What are the potential big disruptions coming our way, and how do we think about those things?”

Tomorrow’s Ethics Today

These questions also intrigue Babson lecturer of information technology Clare Gillan, who is slated to moderate a panel at AI World 2018 in Boston on Dec. 5. The “Emerging Standards for Ethics in AI: Fact or Fiction?” panel will address such topics as how machines may influence society’s view of ethics and whether advancements in AI are amplifying old moral questions or introducing new ones.

The panelists include David Weinberger of Harvard’s Berkman Klein Center for Internet & Society and William Kassler of IBM Watson Health.

Wing and Gillan both voice concerns about the brave new ethical dilemmas posed by AI and other emerging technologies—as well as hope that such technologies will ultimately lead to a more equitable future.

Wing visited campus in November to deliver a lecture titled “Artificial Intelligence or Augmented Inequality: How Technology May Increase the Divide.” In front of a crowd of about 80 members of the Babson community, the animated Wing posited a number of questions and scenarios that seemed to be pulled straight from sci-fi, including:

  • China using big data to judge the trustworthiness of its citizens (Wing termed it “Black Mirror-like,” referring to the dystopian TV series)
  • The breeding of a “super race” with CRISPR technology
  • Sophia, the female robot granted citizenship by Saudi Arabia (and hence having more rights than a human female)
  • The possibility of “living forever” in the form of a digital avatar designed to mimic an individual’s thoughts, feelings, and speech

Far-fetched as they sound, each of these already exists or is on the verge of existing, and along with them comes a raft of ethical challenges.

Next Stop for the Trolley Problem

Wing explored some of those challenges, along with some more old-school ethical questions of the type that have bedeviled undergraduates for generations. (How do you know you’re human? Does a dog or a rat have a soul or consciousness?)

He cited as an example the so-called trolley problem, a long-standing question in ethics that asks whether it’s justified to take one life (by choosing to divert a trolley) if doing so will save the lives of 10 others stuck on the alternate trolley track.

Challenging as that dilemma might be, imagine how much more complicated the question becomes when the “trolley” is actually a self-driving car that must be programmed or taught to make such choices.

In conversation later, Gillan—who attended the Wing lecture—asked, “What do we train the car to do, and who will take responsibility for this decision? Today, we trust that as humans, we’ll make the best decision possible in the moment if ever faced with the trolley problem or another dire situation where there’s no good answer. As we train machines to perform tasks autonomously, we must anticipate such ‘impossible decisions.’”

The question isn’t merely an ethical one but a logistical one as well, she points out. “When you are leaving these decisions up to a machine such as an autonomous car and something goes wrong, do we blame the human backup driver who wasn’t paying attention? The software developer? Those who trained the car? The company that built the car?”

Deciding where the power to make those decisions should reside is no easy matter. “I always say I’d love to see our federal government have a secretary of the future and focus more on these issues,” says Gillan. But as government seems unwilling or unable to take the lead, other organizations, such as the World Economic Forum or IEEE, have stepped in. “They’re all over these issues right now,” says Gillan.

Wing isn’t sure who ought to set the standards, but he’s certain that the subject needs to be addressed sooner rather than later.

“Let’s have the conversations, let’s have the transparency, let’s give people the information,” he says. Otherwise, if history is any guide, we may simply blunder into the technological frontier without a map.

What About Work?

But ethical concerns about the future of AI and other technologies are by no means limited to setting standards or avoiding unintended consequences.

Invariably, the changing landscape of employment becomes part of the conversation as well. The questions often boil down to this: In the future, will robots take our jobs? Both Wing and Gillan say it’s not as simple as that.

Our present moment is sometimes called “the fourth industrial revolution,” and like the iterations that preceded it, this revolution will inevitably have its casualties: There’s not much demand today for buggy-whip salesmen or Gutenberg press operators.

The typical analysis suggests that though some jobs will go away, more will be created in their stead—but the latter may not always be an obvious consequence of or replacement for the former.

Wing points to the demise of the secretary. “Secretaries used to type out letters and all that kind of stuff because the boss was too busy. Suddenly, the internet came, and with it the use of email. At that point in time, it could be argued that this email thing was taking away the job of secretaries, but it has subsequently created a huge sector around software development.”

He says that while jobs are lost in the short term, there have always been gains in the long term, especially in the creation of new sectors. “Think about the rise of manufacturing from the ‘demise’ of agriculture and the 30-fold increase in job creation,” he says. But he’s not entirely confident that model will continue to hold. “As these sectors are created, are we skilling up people to fill that void? I think that’s actually the right question to be asking, and that’s the piece I see a lot of companies, specifically CEOs, are missing.”

Robots—or AI—are sometimes touted as a means to relieve people from doing tedious, repetitive work. But, says Gillan, what of those who take pleasure and pride in work that may be automated in the future (or indeed may be automated today)? “If somebody is happy working in a sub shop making pizzas and subs, which can be done by robots, where do they go?” she says.

“People want to feel productive,” she continues. “So what we need to do is find a way to make people productive in this evolving digital world. Job experts studying these trends tell people to get digital skills, be creative, do things that even intelligent machines don’t do so well. We are preparing our students to land on the positive side of change, but there are a lot of people out there making a living doing repetitive or predictable tasks.”

“And AI and robotics won’t stop there. Cognitive systems such as IBM Watson are being trained to create recipes and paintings. It’s important to remember that AI will make some jobs better and create new jobs. Still, it’s a big challenge.”

Although gaining 21st-century skills is oft-heard advice, Wing has a different slant. He, too, suggests pursuits that robots haven’t yet mastered, but, somewhat counterintuitively, he advocates for soft skills like emotional intelligence, compassion, and empathy, rather than, say, coding or engineering.

In the future, he says, even such traditionally secure jobs as doctor or lawyer will be automated. But, he says, “Managing and creative functions will be hard to replace with technology.”

Here, he says, he sees a particular opening for those with a Babson education: “Fortunately for Babson, I think, a lot of that is about managing people. What fields will they be in? That’s TBD, but I think the irreplaceable jobs will be about managing people, showing respect, showing empathy—the human side of things.”

And Gillan sees future opportunity for those who can be comfortable in fast-changing, often ambiguous situations—skills, she says, that are developed and emphasized at Babson.
