What is the future of our species? As technology becomes more and more a part of our daily lives, we may become something other than human down the line.
Elise Bohan is a senior research scholar at the University of Oxford's Future of Humanity Institute (FHI).
She's written a book, Future Superhuman: Our Transhuman Lives in a Make-or-Break Century.
Transhumanism goes back 30 years, she told Sunday Morning.
“So in roughly 1990, a group of philosophers, like Max More, and some AI scientists, people like Ray Kurzweil and Marvin Minsky, who was one of the very early proponents of artificial intelligence, started talking in a really cogent way about deliberately using science and technology to overcome the limits of human biology, and to become more and other than human.
“Essentially, the thing that distinguishes this from humanism is that humanism is all about using cultural tools, like education, higher learning and the arts, to tap into the better angels of our nature.
“Transhumanists are on board with all of that; they want to be the best version of human that we can possibly be. But they think, well, we have all these really advanced tools now, things like gene editing technologies and artificial intelligence, so why not figure out how far we can push this?”
That could be trying to knock out single-gene disorders such as cystic fibrosis, tackling cancer or slowing the aging process, she says.
“So that we're not as susceptible to the biologically heritable deficits of being human, the things that cause us a lot of suffering and misery and pain.”
The question of ‘playing God’ has troubled us throughout the ages, she says, with in vitro fertilisation, antibiotics and Caesarean sections being examples.
“All of those things would have seemed very extreme and untenable, and perhaps inhuman to some of our ancestors. So, it's interesting that we tend to get more and more comfortable with becoming more and more enmeshed with technology and allowing it to intervene more in our biological states of being in the same way.”
The iPhone would have seemed almost unimaginable even 20 years ago, she says.
“You're going to walk around with a supercomputer in your pocket. And you're going to have these apps, these social media apps where you keep in contact with your friends and family and you send them little memes and little videos, and you're crossing the street at traffic lights, and you don't look up and you're walking around tethered to this device.
“People would have said, that sounds abhorrent. I would never do that. I don't want any part of that.”
It doesn't follow that every possible intervention that we could make is positive, she says.
“There are all sorts of things that we could do, all sorts of technologies that we could build that can be weaponised, can be used for nefarious purposes.”
But common interventions that we make now were a matter of hot debate and concern even just a few years back, she says.
“I think stem cell research was the big one in the early 2000s. That was a huge bioethical debate at the time. And it's kind of petered out of the public consciousness, nobody's particularly bothered about this anymore.
“And I think when we get to more intense interventions, things like reversing the aging process, life extension in humans living longer than 100 or 120 years, I think we will undergo a very similar process of initial aversion, probably quite strong reactions, and maybe even protests, and then quite rapid transition to acceptance as it becomes more mainstream and acceptable and normative.”
New and emerging technologies go through a boom-bust hype cycle, she says.
“We realise often that the technical challenges are a little bit more complex than we thought, also the social integration challenges, you can't overhaul institutions overnight, you can't overhaul human habits overnight.
“I think self-driving cars are a good example of this. We thought we'd have fully self-driving vehicles by now, and of course we do, but unleashing them on all kinds of conditions and all kinds of roads is not yet possible.”
There is a pattern of over-hype then pessimism, she says.
“But what we tend to see is fairly smooth curves of technological progress. When it comes to information technology, you kind of have these S curves going boom and bust over time, but overall, the progress tends to be pretty steady or accelerate.”
However, innate human nature is a brake on some progress, she says.
“We are fundamentally very limited ape-brained creatures who were designed for a Palaeolithic environment, we’re not designed really to be able to operate at a global scale, to handle global diplomacy well, and certainly when it comes to machines that are more capable than us in many domains, and often that are making decisions in a way that doesn't match human cognition, and that we can't fully reverse engineer and understand.
“Just unleashing anything of that nature into the world comes with inherent dangers.”
Nevertheless, she doesn’t think politicians are necessarily effective gatekeepers when it comes to new technology.
“I have to say I would rather it be the Elon Musks and the Jeff Bezoses at the helm of some of this than Jacinda Ardern, Scott Morrison, Boris Johnson, Angela Merkel, any world leader you care to name, if only for the fact that they actually have some robust understanding of what they are unleashing onto the world.”
Humanity has always been at risk of going the same way as the dinosaurs, Bohan says, but the real risk to humanity is not an asteroid strike so much as our own design flaws.
“We've built post-industrial, very technologically advanced civilisations. We have civilisations that now wield nuclear weapons and we've only sat on those technologies for about 70 years. We got very lucky in the 20th century, and we made it through the Cold War kind of unscathed.
“But the idea that we can sit on this level of advanced technology indefinitely, while introducing the capacity to cheaply bioengineer new pathogens and create pandemics that would make Covid-19 look like a picnic. And introducing advanced artificial intelligence into the mix as well. We don't really have the knowledge or the brain capacity to steer our way through this level of complex civilisation for much longer.
“And I think the best example of that, is our failure to be proactive on climate change.”
AI could be the best, or the worst, thing that ever happened to us, she says.
“Certainly, it will be able to help us mine data more effectively and efficiently, massively accelerate the pace of drug discovery, the discovery of new materials that we can use in industrial processes, new methods of farming and manufacturing, new geoengineering protocols. There's no doubt that we need more intelligence than we currently wield to help us solve the problems that we're already sitting with, for sure.”
AI could also usher in an era of abundance, she says.
“Intelligent systems that are at the helm of finance and at the administrative level of corporations will just generate so much more wealth, so much more global GDP, which means that the ability to redistribute that money takes away the necessity for people to do a nine-to-five.
“I think it will make the universal basic income proposals actually tenable, that we will have enough to go around and as Elon [Musk] says this can be a time of radical abundance.”
Seamless natural language processing is not far off and AI could play a role in future human relationships, she says.
“It's already pretty good and I think it will play an incredible role in human companionship. It'll be a wonderful thing for aged care as well.”