Random House
Review by Walter Cummins
It turns out that Yuval Noah Harari, in his latest book, Nexus, isn't a complete fatalist. But one has to read to the end to find some hope that humans can agree to mitigate the danger of Artificial Intelligence (AI) taking over the creation of the information that controls humanity. Until that point, the book provides example after example of the destructiveness of distorted pre-AI information throughout human history. Along the way, Harari drops in many warnings that things could be much worse with a self-contained digital system, independent of human intervention, making people superfluous and essentially helpless under the domination of inhumane algorithms. His hope:
The good news is that if we eschew complacency and despair, we are capable of creating balanced information networks that will keep their own power in check. […] we must abandon both the naive and the populist views of information, put aside our fantasies of infallibility, and commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms. That is perhaps the most important takeaway this book has to offer.
Given all the historical examples of information failures Harari presents up to this final section, that hope is a long shot, one that would require a radical enlightenment in human behavior unlike any our species has ever experienced. He states, “This book has argued that the fault isn’t with our nature but with our information networks.” Over the centuries, the result of our networks has been power rather than wisdom. But can we really forgive our nature for falling prey to misleading information that blinds us to the reality around us? Why are we so easily misled?
Harari’s many examples of how humans have succumbed to flawed, dangerous, and outright wrong information are, for me, the real subject of Nexus, despite the warnings about the potential power of AI. His last-minute hopeful plea reads like the behavioral equivalent of a Hail Mary pass.
Regardless of AI, the history of human-generated information is problematic, according to Harari. He warns us against accepting the naïve view of accumulated information as steps in human progress, because much of what we call information has little to do with the real world: “… information has no essential link to truth, and its role in history isn’t to represent a preexisting reality.” In fact, it often convinces us of erroneous beliefs through misinformation, disinformation, and the consequences of ignored information.
But all of Harari’s historical explanations of bad information are prologue to his crucial warning about the even more severe consequences of information created by Artificial Intelligence: “The rise of intelligent machines that can make decisions and create new ideas means that for the first time in history power is shifting away from humans and toward something else.” He worries that machines will become more intelligent than humans, but without feelings. “Since the current information revolution is more momentous than any previous information revolution, it is likely to create unprecedented realities on an unprecedented scale.”
Harari is hardly the first thinker to warn humanity of the grave dangers of AI; Stephen Hawking was another. Others have argued against such concerns. A reader of Nexus can divide the book into two parts: one exposing the significant consequences of misused information throughout history, the other speculating that the triumph of AI will be much worse. But even if the speculative part is dismissed, the record of centuries of distorted information gives reason for serious concern.
Beyond Harari’s AI warning, the historical examples he uses to demonstrate the misunderstanding and misuse of information are compelling in themselves because he knows so much about historical events throughout the world. His account of how the New Testament was compiled reveals that the assembly of information is often an unmoored debate about what matters and what doesn’t. Different branches of Christianity disagree about which texts should be considered sacred. “We do not know the precise reasons why specific texts were endorsed or rejected by different churches, church councils, and church fathers. But the consequences were far-reaching.”
Consider the acceptance of I Timothy, which calls for the full submission of women, and the rejection of the Acts of Paul and Thecla, which gave women leadership roles, “describing how Thecla not only performed numerous miracles but also baptized herself.” Some consider I Timothy a forgery despite its place in the New Testament. Had the Acts of Paul and Thecla been included in the Bible, attitudes toward the role of women, at least in the Western world, might have developed very differently. But one piece of information won and another lost, with significant results.
So much depends on human decisions about which information dominates and what becomes locked in as truth. That’s why closed systems like authoritarian religions and totalitarian governments cement their foundational information and refuse to admit any new or conflicting information that threatens basic beliefs. Even democracies often resist the questioning of accepted beliefs, as seen in American debates about the originalism of the Constitution. Harari would have us accept that “All human political systems are based on fictions, but some admit it, and some do not. Being truthful about the origins of our social order makes it easier to make changes in it.”
Harari holds up self-correcting mechanisms as a positive alternative to information that hardens into dogma, with scientific research as the primary example: findings are tested again and again to confirm their accuracy. But even scientists tend to resist new findings that challenge established theory; the contradiction of Newton’s laws by quantum theory is one example. Still, new paradigms usually emerge: “The self-correcting mechanism embraces fallibility.”
The heart of the problem for civilization, as Harari sees it, is that “While each individual human is typically interested in knowing the truth about themselves and the world, large [information] networks bind members and create order by relying on fictions and fantasies.” AI, self-contained and cut off from external reality, threatens to become a source of even greater fictions and fantasies. What chance do self-correcting mechanisms have against AI when humans are so vulnerable to fabricated beliefs, so reluctant to embrace fallibility?