Many philosophers have a pragmatic view of the world, and it would appear that Alex John London, a professor of philosophy and director of the Center for Ethics and Policy at Carnegie Mellon University, counts himself among this group.

London, the first afternoon speaker at the day-long VM summit, addressed the topics of engagement, ethics and leadership as they relate to Artificial Intelligence (AI). But first, he cautioned attendees against framing their thinking about the opportunities and consequences of AI solely in terms of the distant future.

“I think the focus on the far future is a ‘wizard,’” he said, quoting Homer Simpson from an episode of “The Simpsons.” “It obscures the way the decisions that people like you and your firms make now will determine which future we bring about. It also affects what values we build into the systems we are creating and how we deploy those systems in our society, so that they determine the kinds of interactions we have, less with the machines and more with how these new innovations affect the way that we relate to and interact with one another.”

Instead, London said his work and focus are on the “transitional problems,” or the AI-related issues that may arise in the next five to 10 years. “The near- to mid-term problems are appearing now and these are the problems that are going to create adverse effects or ethical problems now … and that need our attention,” he explained.

The kinds of questions his ethics and policy group is addressing include how AI-human interactions should be handled, how reliable AI is, and how people can be assured that the AI systems they create will do what they want them to do and can be relied upon and trusted.

“These systems already encode values. So the question is not whether we encode values into these systems, the question is which values do we encode in the systems?” he said. “And how do we ensure that they are not just narrow, or a limited set of values, but rather that they have the hooks to interface with larger social values in the greater ecosystem in which these systems are deployed?”

There also are social challenges along the path to ethically integrating autonomous systems into society, including how to assure that this integration leads to a more inclusive social order and does not serve “as a very powerful tool for ‘commodification’ or social domination or repression,” he explained.

When queried on the role of ethics in AI by VM’s Andrew Karp (r), Alex John London, professor at Carnegie Mellon University, said, “The people who are developing AI are very concerned with value questions.”

However, there is time now to consider all of these issues and challenges. London said he does not expect a fully autonomous system to be deployed “in the near term.” “The future that we have now is going to be highly dependent on human/AI interaction,” he said. “And the quality of the product that we get is going to depend on our ability to make sure those interactions are seamless, fruitful and not inefficient.”

As a result, a challenge today for rolling out AI is to ensure the human/machine systems are “well-paired and that we’re mixing and matching the strengths of the human with the strengths of the machine and not having them work at cross purposes,” London explained.

Another AI-related challenge, London explained, is that unlike a traditional computer program, in which the operations to be performed are explicitly programmed into the system, machine learning is far more inscrutable to the user. “The user and the programmer may not understand how that machine does what it does,” which may make creating effective human/machine teams even more difficult.

As AI continues to advance through further iterations, London said he believes a primary issue will be whether the systems are deployed fairly, “so that firms can advance their interests, but that we don’t leave important social values withering.”

“We want to make sure that the future that we inherit is a more just and inclusive future, and not one that supports exclusion and in which people are reduced to data points that are mined to the advantage of a lot of nameless or faceless corporations. That is the dystopia scenario,” he said.

“The utopia is that we use all of these things to personalize medicine and to have systems to advance our opportunities that ameliorate our limitations, enhance our abilities and allow us to have better relationships with each other,” he added. “That’s not going to happen on its own and it’s not going to happen because machine learning was invented. It’s only going to happen if we make an explicit effort to use those tools to that end,” he concluded.

— Mark Tosh