The Rapid Advancement of AI in Recent Years
AI has taken an enormous leap in recent years. While ordinary people are still grappling with its practical applications, scientists are already contemplating the rise of a super-AI. Some are taking the idea a step further: super-AI could be the reason we have never encountered extraterrestrial life.
Super-Intelligence: The Double-Edged Sword
An artificial super-intelligence (ASI) is not only smarter than humans but is also not bound by human learning speeds. It would surpass us in virtually every domain, potentially marking either a major milestone or our downfall. According to Scottish astronomer Michael Garrett, it could be the “great filter” of the universe: a barrier so difficult to overcome that it prevents most life forms from evolving into spacefaring civilizations.
Answer to the Fermi Paradox
This could explain why our search for extraterrestrial intelligence has yielded nothing. It offers an answer to the Fermi Paradox: why haven’t we found any signs of extraterrestrial civilizations in a universe so vast and old that it could house billions of habitable planets? The reason might be an insurmountable evolutionary obstacle, such as the leap to a super-AI, that prevents civilizations from travelling far into space.
“I believe the emergence of ASI could be such a filter. The rapid advancement of AI, possibly leading to ASI, could mark a crucial stage in a civilization’s development—the transition from a single-planet species to a multiplanetary species,” Garrett explains. “In this process, many civilizations could collapse because AI progresses much faster than our ability to control it or sustainably explore and inhabit our solar system.”
Fear of Conflict
The great power of AI, especially super-AI, is also its potential problem: its self-learning capability. Artificial intelligence can improve its own capabilities at extraordinary speed. This can lead to a variety of problems and could cause the downfall of a civilization before it ever gets the chance to reach other planets. For instance, nations may come to rely so heavily on AI that they eventually transfer power to autonomous AI systems, which then engage in conflicts, deploying weapons and wreaking havoc. This could destroy a civilization entirely, including the super-AI itself.
This downfall could happen quickly, according to the Scottish astronomer. “In this scenario, I estimate that the typical lifespan of a technological civilization is less than a hundred years. That is roughly the period between the moment we became able to detect and send signals into space (1960) and the estimated emergence of ASI on Earth (2040).”
Better Monitoring of AI Development
This is, of course, an incredibly short span on the cosmic scale of billions of years. If we plug this lifespan into the Drake equation, which estimates the number of active, communicative extraterrestrial civilizations in the Milky Way, it suggests that only a handful of intelligent civilizations exist at any given time. Moreover, they might be no more technologically advanced than we are: not advanced enough to find each other.
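To make that Drake-equation step concrete, here is a minimal back-of-the-envelope sketch. It is not taken from Garrett’s study; every parameter value below is an illustrative assumption, chosen only to show how a civilization lifetime L of roughly 100 years pushes the estimate down to a handful.

```python
# Minimal sketch of the Drake equation. All parameter values are
# illustrative assumptions for this example, not figures from the study.
def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    """Estimate N, the number of communicating civilizations in the Milky Way."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

n = drake_equation(
    r_star=1.5,          # assumed star-formation rate (stars per year)
    f_p=1.0,             # assumed fraction of stars with planets
    n_e=0.2,             # assumed habitable planets per planetary system
    f_l=1.0,             # assumed fraction of those that develop life
    f_i=1.0,             # assumed fraction of those that develop intelligence
    f_c=0.1,             # assumed fraction that become detectable (communicating)
    lifetime_years=100,  # L: lifespan capped by the ASI "great filter"
)
print(f"Estimated communicating civilizations: {n:.1f}")  # ~3 with these assumptions
```

With these assumed values the result is about three civilizations at any one time; whatever values one prefers for the other factors, a lifetime L of under a century keeps the count very small, which is the point the paragraph above is making.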
The study primarily aims to warn us to better monitor and control the development of AI, including military AI systems. We must not only prevent malicious use of AI but also ensure that the evolution of artificial intelligence aligns with the long-term survival of our species.
A Crucial Point for Humanity
There is already evidence that people are willingly handing significant power to increasingly intelligent AI because it performs many tasks faster and more effectively without human intervention. Governments, meanwhile, are hesitant to regulate it because they want to maximize the benefits. However, according to the researcher, we must be careful that AI systems do not set off a chain of escalating, destructive events.
“Humanity is at a crucial point in its technological development. Our actions now could determine whether we become a lasting interstellar civilization or succumb to the challenges that AI presents to us.”