The authors of this book are both grandparents.
This book is dedicated to our grandchildren. We are worried for them and for the future of their generation.
In the very first pages, we started with the assumption that human intelligence is something extraordinary, rare, and wonderful. And yet, when we look at the collective intelligence of our species, we are overwhelmed with doubt. “Visiting space aliens,” reads a witty tweet by Neil deGrasse Tyson, astrophysicist and science communicator, “upon seeing humans oppress or kill one another over who they worship, who they sleep with, what side of an arbitrary line they’re born on, or how absorptive their skin is to sunlight, would surely race home & report no sign of intelligent life on Earth.”
Very funny, but also terribly true. Nearly eighty years after Hiroshima, we still have not managed to eliminate the danger of thermonuclear war; instead, we have attributed a peacekeeping power to atomic weapons, on the grounds that mass murder would go hand in hand with mass suicide.
But what is even more extraordinary is that a physical property discovered in 1895 (that carbon dioxide, methane, and other gases trap part of the infrared radiation leaving the Earth), recognized as dangerous as far back as 1959 (when the physicist Edward Teller told oil industry representatives: “a 10 per cent increase in carbon dioxide will be sufficient to melt the icecap and submerge New York”), a cause for alarm at the 1992 Earth Summit in Rio de Janeiro, and the driver of growing climate instability since the early 2000s, should still be up for discussion between those who “believe” and those who don’t. Worse, this “debate” is itself sustained by deliberate misinformation.
Since climatology deals with a large number of unknowns, scientists must attach probabilities to their projections. In 2023, the Sixth Assessment Report of the IPCC (the Intergovernmental Panel on Climate Change, the UN body that brings together thousands of climatologists from all over the world) concluded with “high confidence” that it is now almost impossible for the increase in the planet’s average temperature to remain below the famous 1.5°C threshold relative to the preindustrial era. Since then, that assessment has hardened into near certainty: in the absence of prompt cuts to emissions, the world is on track for warming of about 2.7°C by the end of the century.
The implication is that the climate crisis is a genuine existential risk, perhaps more imminent than any power-hungry algorithmic superintelligence. Those who believe otherwise point out that warming has so far increased only linearly, in step with the rising concentration of carbon dioxide in the atmosphere, which has been objectively true until now.
Climatologists warn, however, that because of positive feedback loops, that is, physical mechanisms by which warming reinforces itself, the process could become nonlinear and slip beyond our control. The huge amounts of methane frozen beneath the Arctic permafrost are one example of this kind of danger: the Siberian permafrost is slowly melting, and methane is a greenhouse gas roughly 80 times more potent than CO₂ over a 20-year horizon. Then there is the Earth’s loss of reflectivity as glaciers and sea ice melt. The oceans cannot absorb much more carbon dioxide beyond what they have already taken up without turning too acidic. And there is the instability of the large ecosystems that sustain the planet as we know it, starting with Amazonia, whose survival, partly because of illegal deforestation, is already considered at risk.
Even if we rule out the chances of extinction, the lives of the younger generations will be dangerously different from those their grandparents knew. Our task is to keep that difference within bearable limits.
At the same time, while the climate crisis accelerates, another transformation is unfolding: the rapid progress of artificial intelligence. When we began working in this field, the idea that machines could converse fluently, translate between languages, write code, draw images, and assist in scientific research at today’s level would have seemed like science fiction. Now large language models and related systems are everyday tools. For some, they are the first signs of a coming “superintelligence”; for others, they are just very sophisticated parrots. For our grandchildren, they will simply be part of the environment—almost as natural as electricity or the internet.
So, what should we think about this new form of intelligence? Should we fear it as a rival, or embrace it as an ally? And what should we tell our grandchildren to study—what to learn, how much, and how deeply—in a world where AI is everywhere? Should they rely on these tools, or deliberately learn to think without them?
Here, we believe, we must make a bet. Not a blind bet, but a rational one—something closer, in spirit, to Pascal’s famous wager.
Blaise Pascal argued that, since we cannot know for certain whether God exists, we are nonetheless forced to live “as if” one of the possibilities were true. If you bet on God’s existence and you are right, the gain is immense; if you are wrong, the loss is limited. If you bet against and you are wrong, the loss is enormous. Given the uncertainty, he said, the rational strategy is to bet on belief.
We face a similar situation with human intelligence in the age of AI. We do not know how far machine intelligence will go, nor exactly how it will compare with ours. We do not know whether AI will remain a set of powerful tools, or whether some future system will exceed us in many domains. But we must choose how to act now, long before we have certainty.
One attitude is resignation: the idea that humans are about to be surpassed, made obsolete, and that our best hope is to get out of the way. Another is denial: to insist that real intelligence or creativity will always remain purely human, and that machines are irrelevant or dangerous intruders. Both attitudes, in different ways, bet against human intelligence.
The third attitude—the one we argue for—is to bet on a long, fruitful collaboration between different kinds of minds. Human and artificial intelligence are not identical; they are complementary. We are embodied, emotional, social, and finite; our intelligence grew out of a long evolutionary history, and it is deeply entangled with our bodies, our cultures, and our need for meaning. Machines, for now, are pattern-finding engines built out of silicon and code, trained on vast quantities of data, tireless, fast, and unembarrassed by repetition.
If we bet that human intelligence will still matter, and that its best future lies in partnership with machine intelligence, several consequences follow.
First, we should not abandon the effort to understand ourselves. On the contrary, the more capable AI systems become, the more urgent it is to deepen neuroscience, psychology, and the science of learning. If machines help us model the brain, we will, at the same time, need our best human theories to understand what those models mean. Understanding intelligence—biological and artificial—will be one of the defining scientific projects of this century.
Second, we should design AI not as a replacement for human judgment but as an amplifier of it. Large language models are a good example. They can generate hypotheses, draft texts, explore options, and compress knowledge at a speed and scale that no human can match. But they do not know what is important. They do not have grandchildren. They do not wake up worried about climate tipping points, or justice, or beauty. Those priorities still have to come from us.
If this is true, then the frontier of AI research will depend, for a long time, on a dialogue between human and machine. Humans will continue to pose the questions, define the goals, judge the answers, and decide which directions are worth pursuing. Machines will be increasingly powerful collaborators: testing vast numbers of possibilities, uncovering patterns we do not see, and suggesting solutions we could not have found alone. Together, these two kinds of intelligence may accomplish much more than either could in isolation.
Third, this collaboration is precisely what we need to confront the climate crisis and other global challenges. Climate science, with its complex models and uncertain feedback loops, already depends on high-performance computing and advanced algorithms. Future climate policy, energy systems, agriculture, and city planning will likely depend on AI tools that can simulate scenarios, optimize resources, and help us navigate difficult trade-offs. But these tools will only be useful if they are guided by human values and institutions capable of acting on their advice.
This is where Pascal’s wager returns in a modern form. Imagine four possibilities:
- We invest in human intelligence and collaboration with AI, and AI turns out to be crucial but not dominant.
- We invest in human intelligence and collaboration with AI, and AI eventually surpasses us in many respects.
- We neglect human intelligence, and AI remains limited.
- We neglect human intelligence, and AI becomes powerful but misaligned with our needs.
In the first two cases, investing in education, research, and wise collaboration yields enormous benefits. In the third, at least we have not weakened ourselves unnecessarily. In the fourth, we have handed power to systems we do not fully understand and are no longer able to govern. The rational bet, once again, is clear.
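The logic of this bet is one of dominance: investing comes out at least as well in every scenario, and strictly better in most. As a minimal sketch, the four possibilities can be collapsed into a two-by-two decision matrix. The payoff numbers below are purely illustrative assumptions of ours, not figures from this book; only their ordering matters for the argument.

```python
# A sketch of the wager as a decision matrix.
# Strategies: "invest" in human intelligence, or "neglect" it.
# Outcomes: AI turns out "modest" (crucial but not dominant) or "dominant".
# Payoff numbers are illustrative assumptions; only their ordering matters.

payoffs = {
    ("invest",  "modest"):   10,   # collaboration yields enormous benefits
    ("invest",  "dominant"):  5,   # we have at least not weakened ourselves
    ("neglect", "modest"):   -2,   # opportunity wasted
    ("neglect", "dominant"): -10,  # ungoverned power we cannot control
}

OUTCOMES = ["modest", "dominant"]

def dominates(a: str, b: str) -> bool:
    """True if strategy a is at least as good as b in every outcome,
    and strictly better in at least one (weak dominance)."""
    diffs = [payoffs[(a, o)] - payoffs[(b, o)] for o in OUTCOMES]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

print(dominates("invest", "neglect"))  # True: investing dominates neglect
```

Because investing dominates, the conclusion does not depend on estimating how likely each AI outcome is, which is precisely what makes the bet rational under uncertainty.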
For our grandchildren, this means that schools and universities, laboratories and libraries, should not wither away in the shadow of AI. They should change, of course. Students will work with AI tutors; researchers will explore hypotheses generated by algorithms; artists and writers will collaborate with models that can remix centuries of culture. But the point is not to replace human curiosity and critical thinking; it is to multiply them.
Seen from this perspective, the progress of AI is not an enemy of human intelligence, but a test of it. If we use AI to spread misinformation, to polarize societies, to accelerate consumption and waste, then we will have failed the test, and the visiting aliens in Neil deGrasse Tyson’s tweet will be right: no sign of intelligent life down here. If we use it to understand our planet, to reduce suffering, to give more people access to knowledge and opportunity, then we will have taken a step toward becoming, finally, as intelligent as we like to think we are.
The stakes could not be higher. The climate system does not negotiate; it responds to physics, not rhetoric. AI systems do not have grandchildren; they respond to objectives, not prayers. We are the ones who must decide what to value, what to preserve, and what kind of future to build.
As grandparents, we hope that when our grandchildren are our age, they will look back and say: they did not get everything right, but at least they tried to use their minds—and their machines—wisely. They bet that human intelligence was still worth cultivating, and they built tools to help it grow. They used those tools to protect the only planet they had.
Even if we cannot be certain of success, that is the bet we must make.
We need more intelligence—not less.