Futurism – the attempt at systematic, thoughtful speculation about the likely future – can be divided into smaller and larger questions.
Many of the ‘small’ questions are admittedly very interesting. Here are some examples.
Will China continue to grow in influence and strength? When will humans colonise Mars? Will increasing technological automation drive more and more people out of work? Will currencies converge (like a larger Euro-zone) or fragment? Is nuclear fusion ever going to prove viable? When will computers outplay humans at the game of Go, or drive cars better than humans, or create art better than humans? What will happen to the rate of population growth, and the rate of resource depletion? Will religion decline or resurge? Which of our present-day habits will our descendants look back on with the same disdain that we now feel towards (say) slavery and cigarette smoking? Will money and campaign finance play an ever more domineering role in politics?
I call these questions ‘small’ only because there are even larger questions which frame any overall analysis of the future. In particular, I see three groups of questions as particularly pressing:
- The question of existential risk: Is it feasible that human civilisation could dramatically collapse within the next few decades – as a result of (e.g.) economic meltdown, rapidly changing climate, military or terrorist escapades, horrific weaponry or diseases, and/or rogue tech? Could we actually be living in the end times?
- The question of transhuman potential: Is it feasible that tech enhancements in the next few decades could radically transform and elevate human performance and experience – making us substantially smarter, stronger, healthier, longer-lived – potentially creating as big a step-up in capability as in the prehistoric jump from ape to human?
- The question of resource allocation: If transhuman potential lies within our grasp, should we indeed try to grasp it? Contrariwise, is any effort to accelerate transhumanism an indulgence, a distraction, or (even worse) a catalyst for disaster rather than progress? If there are credible risks of existential collapse, where should we actually be grasping? Which topics deserve the lion’s share of our collective attention, investment, analysis, and effort?
These are the questions I seek to have debated at the meetings of the London Futurists, which I organise once every few weeks.
The questions defy any simple responses, but for what it’s worth, summary versions of my own answers are as follows:
- The threat of existential collapse is real. Human ingenuity and perseverance have led us through many major crises in the past, but there’s nothing guaranteed about our ability to survive even larger, more wicked, faster-breaking crises in the near future.
- Technology is progressing at a remarkable rate, and that rate is likely to accelerate. Powerful combinations of nano-tech, AI, personal genetics, synthetic biology, robotics, and regenerative medicine, coupled with significantly improved understanding of diet and mental health (e.g. mindfulness), could indeed see the emergence of “Humanity+” amidst the struggles of the present day. But there’s nothing inevitable about it.
- Humanity+ (also known as “transhumanism”) is not only possible; it is highly desirable – so long as the increased ‘external’ strengths of new human individuals and societies are balanced by matching increases in ‘internal’ strengths such as kindness, open-mindedness, and sociability. As I’ve written before, we need increased wisdom as well as increased smartness, and an increased desire for self-mastery as well as an increased ability to transcend limits.
Humanity+ is desirable (as well as possible) because I see the enhanced humans of the near future, with their much greater collective wisdom – improved versions of you and me – as our best bet to guard against the very real threats of existential risk.
Speakers at the London Futurists meetings address different parts of this overall rich mix of existential risk and transhuman opportunity. As befits healthy debate, the speakers take different viewpoints. Some of these speakers are what can be called “professional futurists”, often hired by businesses to help them consider scenarios for the evolution of technology, business, and products. Other speakers are what can be called “activists”, who personally commit large amounts of their time and energy to bringing about one or more aspects of a desirable transhuman future.
The speaker on Sunday 2nd September, Aubrey de Grey, falls into the second category. As noted on the webpage for the event,
Dr. Aubrey de Grey is a biomedical gerontologist based in Cambridge, UK and Mountain View, California, USA, and is the Chief Science Officer of SENS Foundation, a California-based 501(c)(3) charity dedicated to combating the aging process. He is also Editor-in-Chief of Rejuvenation Research, the world’s highest-impact peer-reviewed journal focused on intervention in aging.
Aubrey’s talk is entitled “Regenerative medicine for aging”. Note: this is not just about life extension – allowing longer lifespans. It is about health extension – allowing longer healthy lifespans, with correspondingly large benefits in reduced healthcare costs worldwide. As Aubrey writes,
In this talk I will explain why therapies that can add 30 healthy years to the remaining lifespan of typical 60-year-olds may well arrive within the next few decades.
If you’d like to find out more about Aubrey’s thinking and accomplishments, let me point you at two sources:
- The SENS Foundation website contains a wealth of detailed information.
- A series of short video interviews with Aubrey was recently posted online by Adam A. Ford. The interviews were conducted in Melbourne, Australia, in May this year, in conjunction with the Humanity+ @Melbourne conference. Titles include “Addressing criticisms and concerns”, “Science and technology”, “Prediction”, “Are we programmed to die?”, “Aging and suffering”, “Replacement parts”, “SENS”, and so on.
Alternatively, if you’re in or near London, by all means drop into the meeting 🙂
(We’re planning to record it and make the video available afterwards, for people unable to join on the day.)
I would have thought that we should try to correct for two things: a) the very human propensity of some to over-rate risk and under-estimate adaptability, and b) the fact that the history of our species tells us that periodic threats of some magnitude (mostly related to disease, famine, or war) do appear but are never species-existential, while the history of all species suggests that every species ultimately succumbs existentially – but only after very long periods, and only when the existential threat is truly catastrophic, external, and beyond any species’ ability to manage.
We are in danger of panicking about existential risk because we confuse a risk that is existential for many people (but not all) with a risk that is existential for the species – when the species, as a species, may actually be strengthened by any serious crisis, so long as it is not truly species-critical, in which case the odds are that there is literally nothing to be done (we neither caused it nor can do anything about it). Such an existential crisis may come tomorrow (unlikely) or in a million years. In other words, we should stop panicking and enjoy life while we can.
As for the increased wisdom: if it does not include the ability to face the elementary reality of existential extinction, then it is a very odd sort of wisdom, and seems to me a form of denial and anxiety. Of course, we should continue to work to improve our existence through science and innovation, but nearly every innovation has negative consequences as well as positive ones, losers as well as winners, and none of them may reasonably be regarded as cost-free species improvements – merely as adaptations to new but always unstable environments.
For example, attempts to impose ‘wisdom’ and ‘morality’ through enhancement have the cost of removing autonomy and adaptability to extreme conditions. In the end, trans-human thought descends into religion and the naturalistic fallacy because it cannot cope with who we actually are – and are likely to be, regardless of technology, for some considerable time to come. It is a project of improvement that fails to see how interesting and unique we are unimproved.
The essentialist pre-definition of the ‘good human’, when set against the world, is the error of every idealism, ideology, and religion – and it is deeply dangerous to the actual, living humanity that we have.
Comment by Tim Pendry — 3 September 2012 @ 8:53 pm
Which is not an argument against trans-humanism but only against an excessively essentialist interpretation of it …
Comment by Tim Pendry — 3 September 2012 @ 8:58 pm
Hi Tim,
How do you assess the analyses of (e.g.) Jared Diamond, Thomas Homer-Dixon, and Joseph Tainter, showing that a number of human societies experienced major collapses (including extinction) largely on account of self-inflicted measures? Those collapses applied only locally, because societies were only local at the time. But given the interconnectedness of modern society, the impact nowadays is likely to be much more global.
Comment by David Wood — 3 September 2012 @ 9:04 pm
Thank you for a very interesting post / read.
So much to say – but I’m aware that such commentary is but a single degree of the global “circle of opinion” which our communication technology now promotes, and upon which our children’s grandchildren will found changes in human behaviour and culture.
This future is perhaps beyond our ingrained / restricted / entrenched current viewpoints…
For example, “books” are held in high regard in all societies, but future generations will doubtless see them as tortuously slow, one-dimensional media with an unacceptably high environmental cost. Yet the application of different techniques and approaches to the content of “books” will have a far higher impact than the removal of the paper medium – not simply in terms of speed, but in the influence on direction and strategy taken from the knowledge within those contents. (“Voting directly for political policies whilst reading the manifesto, anybody?”) And yet we still have no generally accepted name for such a change – does “eBook” really represent it? With regard to the real changes coming from our use of technology, we are still embryonic; we haven’t even connected the whole world yet!
With regard to “threats” to species-wide existence, I feel the event that ends our species will not be man-made: it will probably be volcanic, not precisely predictable, and planet-wide.
Volcanoes like Katla (http://www.volcano.si.edu/world/volcano.cfm?vnum=1702-03=), sitting beneath an icecap 750m thick, “will” at some point erupt, perhaps producing many years of unpredictable “global” impact, which could act as a catalyst for other species-threatening behaviour.
What will humans do during five years of global blackout from volcanic ash clouds? No sun, continual acid rain, no food, polluted water… perhaps even the start of a new ice age.
There are quite a number of “Katla”-sized future events that planet Earth has yet to experience – but all our knowledge says “it’s not if, but when”.
Thanks for the mental stimulation 🙂
James
Comment by James Booth — 4 September 2012 @ 10:16 am
Hi James,
I see the existence of planetary existential threats, such as massive submerged supervolcanoes, as another reason why humanity needs to rapidly increase its mastery of the kinds of super-powerful technologies (e.g. involving re-engineered nano-materials) that can be wisely applied to geo-engineer the planet.
In parallel, though, we need to keep our eye on the man-made existential threats too.
This isn’t going to be easy!
// David W.
Comment by David Wood — 16 September 2012 @ 9:24 am