Consciousness

This past weekend I attended the Authors Guild Foundation’s first Words, Ideas and Thinkers Festival, “Reimagining America.” Novelists, historians, journalists, and scientists confronted racism, climate change, the future of the Supreme Court, foreign policy, and identity and belonging, all in the context of their writing. It was by turns depressing (Is there the political will to address the crisis of climate change?) and inspiring.

The opening speaker was Dan Brown, author of The Da Vinci Code, whose topic was “When Religion Meets Science.” He was knowledgeable, much funnier than I expected, and thought-provoking. According to Brown, scientists he has consulted believe that it won’t be long before we develop a machine with consciousness.

My first thought was “I hope I’m not here to see it,” but at the rate of scientific discovery Brown described, it may occur in my lifetime. I dread the moment because I think that the development of AI has already exceeded our capacity for ethical and moral decision-making. Algorithms that manipulate knowledge and behavior frighten me. Or maybe it’s just that I remember HAL, the malevolent computer from 2001: A Space Odyssey. And I wonder, if we create a machine with consciousness, is the machine intended to serve humanity or its creator?

But what really brought me up short was Brown’s musing about the questions such a machine might ask. “If we put a machine with consciousness alone in a dark room,” he asked, “would it start asking the ‘big’ questions, like Who am I? and What is the meaning of my existence?”

As science has increasingly explained natural phenomena, we no longer need religion to explain thunder, or earthquakes, or night and day. But the “big” questions, such as “Is there life after death?”, are still unanswerable by science; for these we still turn to religion. It is hard to fathom a machine sharing in this quest for answers. Webster’s defines consciousness as “the state of being characterized by sensation, emotion, volition, and thought.” If the machine asks questions of this magnitude, how might it choose to act on the answers it perceives? Will different machines think and act differently?

I recently read the novel Atomic Anna, by Rachel Barenbaum, about a nuclear physicist who is building a time-travel machine to avert the Chernobyl disaster and to save her granddaughter. The book asks, “Just because you can change the past, does it mean you should?”

Just because we can change the future, does it mean we should?