A.I.

I recently watched the first broadcast of Revolution, a collaboration between MSNBC and Recode, “Google and YouTube Changing the World.” Kara Swisher and Ari Melber interviewed Sundar Pichai and Susan Wojcicki, the CEOs of Google and YouTube, respectively. The wide-ranging discussion covered immigration, sexism in Silicon Valley, the changing nature of jobs, artificial intelligence, and the impact of social media on our democracy.

The final question posed to the two CEOs was how they saw 2018. Both Pichai and Wojcicki were optimistic. They believed that their companies would help bring the world closer together and foster understanding and learning. According to Wojcicki, 2017 showed them the challenges, and they have laid the foundation to improve.

I don’t agree. Yes, there is enormous potential for bringing people together. I am in a book club whose membership spans 33 states and 8 countries; without social media, we would not exist. What the elections did, however, was bring like-minded people together and pull them even farther apart from those with different points of view. Earlier in the broadcast, Pichai said that many legislators would discuss the issue of immigration reasonably, but that the system pulls them apart. What I didn’t hear were suggestions for changing this system. Do tech companies have a role in enabling that change?

Another segment of the show dealt with the effects of technology on jobs (see my post, “Will Your Job Be Replaced by a Robot?”). Driverless cars will put 2-3 million people out of work. Jobs once thought safe from redundancy because of their complexity, such as those of accountants, radiologists, and journalists, are now being performed by computers. Pichai and Wojcicki talked about retraining and about investments they have made to prepare people for emerging tech jobs. But they didn’t address the fact that our education systems are out of date and rigid. As the pace of change accelerates, schools and colleges are unlikely to adapt quickly enough to address a problem of this magnitude.

They did point to jobs being created, like those of the contractors and tradespeople who connect with potential customers through TaskRabbit. But these jobs come without benefits, creating another social problem. Is the tech industry responsible for the social consequences of its inventions?

Daniel Lurie, CEO of Tipping Point Community, an anti-poverty non-profit, said that these companies have a role, if not an obligation, to address poverty and homelessness, but that he didn’t see that reflected among leadership across the industry.

In a letter in today’s NY Times, Oreoluwa Babarinsa disagreed with this point of view: “…the previous generation of technologies (from Google forward) all uncritically tried to solve social and political problems with technology, a fool’s errand. This sort of techno-optimism needs to die on the vine. Stop looking to technology to fix society. Go out and engage.”

The show’s segment on artificial intelligence started with a clip from 2001: A Space Odyssey, in which HAL, the computer, informs the astronauts that he “cannot allow” them to disconnect him. Although this takeover by a machine is still science fiction, its likelihood is top of mind for many leaders in technology and the sciences, like Elon Musk and Stephen Hawking, who see the as-yet-unrestrained development of A.I. as a threat.

Pichai compared A.I. to fire and electricity in its capacity to change our lives and noted that it is already present in commonly used apps like language translation and Google Photos. Although I may find the proliferation of these applications daunting, what I find frightening is the potential for machine learning to grow exponentially, beyond what we can comprehend. As machines begin to teach themselves, gaining speed and complexity, they can quickly surpass our understanding. Nor is it a question of “if,” but of “when.”

In “Can A.I. Be Taught to Explain Itself?” Cliff Kuang writes, “…artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the ‘black box’ problem – the inability to discern exactly what machines are doing when they’re teaching themselves novel skills – and it has become a central problem in artificial-intelligence research.” Kuang says that humans won’t trust a decision that cannot be explained to them: “Even if a machine made perfect decisions, a human would still have to take responsibility for them – and if the machine’s rationale was beyond reckoning, that could never happen.” I would ask, “Why not? Why won’t we rely on a machine we believe is smarter than we are to make decisions?”

The positive vision of A.I. is that it will solve climate change, hunger, and disease, or what Swisher said the tech industry calls the “happy, shiny future.” The inventor Ray Kurzweil is a leader in this field. He believes that we will see “artificial super intelligence” (ASI) by 2045, but that we will not lose control: “ASI is emerging from many diverse efforts and will be deeply integrated into our civilization’s infrastructure. Indeed, it will reflect our values because it will be us.”

Which brings me back to whether Pichai and Wojcicki’s optimism is deserved. Whose values are “our” values? A group of friends and I spent this afternoon talking about A.I. Using the grid in “The AI Revolution: Our Immortality or Extinction,” published two years ago on Wait But Why, we placed ourselves along the y-axis of sooner or later and the x-axis of pessimism or optimism. Out of nine participants, only one was in the “Confident Corner,” believing that ASI is coming soon but that it is a cause for optimism. Everyone else was either squarely on or near “Anxious Avenue,” believing that it is coming soon and pessimistic about the results.

Ordinarily an optimist, I do not believe we are prepared, or even willing, to confront the leadership challenges raised by A.I.