I finally had a chance to listen to Neil deGrasse Tyson interview inventor, futurist, and AI soothsayer Ray Kurzweil on the season premiere of StarTalk, and it turned out to be pretty much what I expected: Kurzweil rigidly answered the host’s hypothetical questions with robotic confidence, guest neuroscientist Gary Marcus questioned the scientific validity of Kurzweil’s predictions, and Pulitzer Prize-winning author and professor of cognitive science Douglas Hofstadter compared Kurzweil’s worldview to lunacy and dog excrement intermingled with a few scientifically sound, reality-based predictions.
I’m not going to comment on the relative accuracy of Kurzweil’s science, but I do think the ethics of AI and nanotechnology will be among the most pressing issues of the next twenty years, and thus worth exploring here, even in vague terms.
Ethics has historically almost always lagged behind technology, so I was especially heartened to hear Dr. Marcus vow to meet these demands as the founder of aiforgood.com. Kurzweil touts neurocognitive nanobots as a foregone conclusion and a positive step in evolution, and he seems equally optimistic about how the technology will be used, even as he discusses the moral imperative that accompanies it. Dr. Marcus is more measured about both the predictions and the barriers that currently exist, taking a critical view of Moore’s law (or, as Dr. Marcus calls it, Moore’s Trend), which describes the exponential acceleration of technological capability. The truth is that this acceleration has in fact slowed, so extrapolating from the initial trend may not reliably predict future paces of advancement. Dr. Marcus also acknowledges the dark side of this unharnessed technological advancement: stopping short of omens about grey goo, he notes the potential for abuse by terrorists, tyrannical governments, and other evildoers, and stresses the need for early regulation and ethical codes, since unrestrained technology often does proceed quickly.
Perhaps my biggest problem with Kurzweil’s predictions is not the scientific validity of the ideas he so adamantly proposes but his use of “we” to denote those who will be using, benefiting from, and engaging in “The Singularity” (the forthcoming date, which he proposes as the year 2045, at which machines will function, reproduce, and blend with human biology). Perhaps he addresses this in his books (which, full disclosure, I have not yet read), but I believe there will continue to be large swaths of the population who have no interest in merging with machines. From uncontacted tribes to the Amish to modern-day hippies and naturalists, I don’t foresee the entire human race necessarily jumping aboard this invasive technology.
The other major flaw (albeit one unrelated to ethics) in the predictions espoused in the interview was a variation of immortality achieved by creating digital copies of brain scans. Kurzweil describes it as “uploading to ‘the cloud,’” but the idea is that the electrical impulses that travel the synapses of the brain can be technologically cloned to operate through computer systems. I don’t have a major problem with this idea except for the use of the word immortality, or the notion that anything in this universe escapes a finite conclusion. Eventually the sun is going to swell into a red giant and engulf the Earth, and even if that is somehow avoided, most competing theoretical physics models (including those favored by host Neil deGrasse Tyson) conclude with an eventual demise of the universe. In my view, it’s hard for immortality to endure the end of the universe.
But the idea does illuminate an interesting question to be explored in the future: if The Singularity does occur as Kurzweil describes, what will we ultimately fear? Death…or unrelenting existence past a natural lifecycle?