Journalist, ex-DARPA head, CASBS director talk artificial intelligence

Nov. 14, 2017, 12:33 a.m.

Pulitzer Prize-winning tech journalist John Markoff. Former director of the Defense Advanced Research Projects Agency (DARPA) Arati Prabhakar. Director of the MIT Media Lab’s Ethics Initiative, the Venerable Tenzin Priyadarshi. What do these figures have in common?

They are all fellows at Stanford’s Center for Advanced Studies in the Behavioral Sciences (CASBS), where they are spending a year exploring the societal implications of technology. All three fellows will discuss these issues at an upcoming symposium titled “AI, Automation and Society.” The event will take place Tuesday at 5:30 p.m. at the Center and will also be live-streamed online.

The Daily visited CASBS to talk to two of the fellows and CASBS director Margaret Levi about AI and automation as the topics relate to the Center’s work.

Optimism on AI’s impact

One of the first questions people ask about artificial intelligence (AI) is simple: Is a robot going to take my job? The answer, according to Levi: It’s impossible to know right now, but that outcome is unlikely. In the past half century, Levi said, only one job has truly disappeared: that of the elevator operator. Even if some professions become automated, she argued, new roles will likely replace them.

She noted the historical precedent for this.

“A lot of jobs disappeared with the advent of the automobile,” Levi said. “We don’t see the need for as many blacksmiths as we once did.”

Levi also pointed to types of jobs for which society may need robots because people will not take them on. These undesirable roles typically involve difficult or dangerous physical labor. Looking to the future, Levi cited the need for infrastructure development, where robots could play a critical part; already, machines are used to navigate mines and sewers.

But the future of automation may lie in a less physically demanding field: caretaking. Markoff’s 2015 book “Machines of Loving Grace” pays particular attention to how robots could help address the aging crisis.

Markoff said that conversation can help ward off dementia in the elderly, yet seniors are frequently left alone in front of television sets for extended periods. Markoff hopes that conversational robots might one day aid older people who now lack companionship.

Markoff also sees applications for such machines in education. When Markoff was growing up, his mother worked with low-income students in East Palo Alto who fell behind their peers in language skills by the time they entered kindergarten and faced tough odds of getting back on track. Markoff imagines that affordable AI might one day help such children develop their verbal skills.

For the CASBS fellows, AI is not the stuff of futuristic science fiction; rather, its applications bear on thorny issues in contemporary society. While Markoff has often focused on robots’ caretaking potential, Prabhakar has conducted research on using machine learning to better manage the radio spectrum, allowing people to use wireless data more efficiently.

As self-driving cars began cruising around Silicon Valley, Prabhakar christened the first self-driving ship, which can travel across the ocean for months without any sailors.

Concerns

Despite their optimism about AI’s place in society, the fellows expressed some worries about how humans could choose to use the technology.

Prabhakar is concerned about the way people talk about AI.

“If you listen to the conversation,” she explained, “it sounds like there’s a ‘them,’ [that] there are these machines, and they’re going to do these things. But that’s really almost an excuse for not dealing with the humans. It’s really an expression of the humans that designed and built those things and the humans that choose to use them.”

The CASBS fellows acknowledged plenty of opportunities to misuse automation, as everyone from scammers to terrorists gains ever-more-powerful tools.

Markoff, for instance, emphasized the dangers of efforts to recreate human emotion and even particular voices via machine learning.

“There will be a day where your mom will call you and tell you she forgot her bank account password, but it won’t be your mom,” he said. “I can guarantee you that will happen.”

However, Markoff said, the most pressing issues with AI may not be the ones that researchers anticipate.

As an example, Markoff said that most plans for self-driving cars include an architecture that would allow vehicles to communicate with one another. To many technologists, this interconnectedness offers an elegant way to avoid accidents: cars that can talk to each other can steer clear of each other more easily. But to Markoff, such a system would pose a serious cybersecurity threat. If a malicious actor hacked it, they could cause large-scale devastation.

Prabhakar is intimately familiar with security concerns. Her team at DARPA often weighed the negative outcomes that new technologies might cause, but it did not shy away from pursuing those technologies, she said. In fact, she felt it was her responsibility to explore such novel avenues of research.

“That was our job,” she said. “That’s what we do for our country. It’s important that we discover and drive those areas before anyone else on the planet does.”

In addition to concerns about how AI could be misused, Levi, Prabhakar and Markoff all expressed unease over the power of technology companies, particularly when those companies have more information than the general population. Both Markoff and Prabhakar stressed the importance of understandable AI: If companies are going to design machines that make decisions for consumers, they said, consumers should be able to figure out how those decisions are made.

“Increasingly we’re surrounded by a soup of algorithms that aren’t transparent,” Markoff said. “Look at all those fetching algorithms on our iPhones that give us all kinds of advice.”

He noted that he trusts Siri because that system doesn’t operate on an advertising business model. Because he knows Siri is not covertly trying to sell him something, he said, he is more willing to rely on its input.

CASBS fellows will explore these topics in more depth at Tuesday’s symposium, where they will encourage audience members to think carefully about the role AI and automation play in their own lives.

After all, they said, consumers will often be the ones to decide whether the benefits of automation outweigh the risks.

Prabhakar admitted that even she sometimes struggles to balance caution over AI with an appreciation for what it can do.

“With the [Amazon] Echo, my husband keeps saying, ‘It’s listening to everything we’re talking about,’” Prabhakar said. “I keep saying, ‘Yeah, but it plays my favorite music. It likes Van Morrison, so that’s okay!’ So I think you want to be aware of the tradeoffs you’re making.”


Contact Mini Racker at mracker ‘at’ stanford.edu.


