dw2

28 May 2018

Tug Life IV: Beware complacency

Filed under: Events, futurist — Tags: — David Wood @ 10:28 am

What does the collision of creativity, media and tech mean for humans?

That’s the overall subject for a series of events, Tug Life IV, being run from 12-15 June by Tug, a Shoreditch-based digital marketing agency.

As the ‘IV’ in the name suggests, it’s the fourth year such a series of events has been held – each time, as part of the annual London Tech Week.

I was one of the speakers last year – at Tug Life III – when my topic was “What happens to humans as machines become more embedded in our lives?”

I enjoyed that session so much that I’ve agreed to be one of the speakers in the opening session of Tug Life IV this year. It’s taking place on the morning of Tuesday 12th June, on the topic “What are we doing with technology? Is it good for us? What should we do about it? As individuals? As businesses? As government?”

Other speakers for this session will include representatives from TalkToUs.AI, Microsoft, and Book of the Future. To register to attend, click here. Note that, depending on the availability of tickets, you can sign up for as many – or as few – of the Tug Life IV events as best match your own areas of interest and concern.

Ahead of the event, I answered some questions from Tug’s Olivia Lazenby about the content of the event. Here’s a lightly edited transcript of the conversation:

Q: What will you be talking about at Tug Life IV?

I’ll be addressing the questions, “Is technology good for us? And what should we do about it?”

I’ll be outlining three types of scenarios for the impact of technology on us, over the next 10-25 years:

  1. The first is business as usual: technology has, broadly, been good for us in the past, and will, broadly, continue to be good for us in the future.
  2. The second is social collapse: technology will get out of hand, and provoke a set of unintended adverse consequences, resulting in humanitarian tragedy.
  3. The third is sustainable abundance for all, in which technology enables a huge positive leap for society and humanity.

In my talk, I’ll be sharing my assessment of the probabilities for these three families of scenario, namely, 10%, 30%, and 60%, respectively.

Q: Would you agree that the conversation around the future of technology has become increasingly polarised and sensationalised?

It’s good that the subject is receiving more airtime than before. But much of the coverage remains, sadly, at a primitive level.

Some of the coverage is playing for shock value – clickbait etc.

Other coverage is motivated by ideologies which are, frankly, well past their sell-by date – ideologies such as biological exceptionalism.

Finally, another distortion is that quite a few of the large mainstream consultancies are seeking to pass on blandly reassuring messages to their clients, in order to bolster their “business as usual” business models. I view much of that advice as irresponsible – similar to how tobacco industry spokespeople used to argue that we don’t know for sure that smoking causes cancer, so let’s keep calm and carry on.

Q: How can we move past the hysteria and begin to truly understand – and prepare for – how technology might shape our lives in the future?

We need to raise step by step the calibre of the conversation about the future. Two keys here are agile futurism and collaborative futurism. There are too many variables involved for any one person – or any one discipline – to be able to figure things out by themselves. The model of Wikipedia is a good one on which to build, but it’s only a start. I’m encouraging people to cooperate in the development of something I call H+Pedia.

My call to action is for people to engage more with the communities of futurists, transhumanists, and singularitarians, who are, thankfully, advancing a collaborative discussion that is progressing objective evaluations of the credibility, desirability, and actionability of key future scenarios. Let’s put aside the distractions of the present in order to more fully appreciate the huge opportunities and huge threats technology is about to unleash.

24 May 2018

Overdue – blog design refresh

Filed under: WordPress — Tags: , — David Wood @ 11:09 am

My training as a software engineer led me to be cautious about making changes in complex software systems. I learned from experience that “simple” changes often had unexpected effects elsewhere in a system.

For that reason, I have shied away, for too long, from a task that needed my attention – improving the readability of this blog.

The font in use on this blog was too small. That made it harder for people to read. Visitors to this blog have mentioned this point on occasion, but I repeatedly put off the task of fixing it.

But this morning, I took the plunge, and found the part of the WordPress design settings that allowed me to change the font. It’s a lot bigger now.

Fearing side-effects, I’ve checked how various previous postings are displayed. So far, I haven’t noticed anything that’s become broken as a result…

… apart from the fact that adjacent lines of text, in multi-line paragraphs, now appear too close together. I’ve tried customising the CSS for this blog, using syntax like the following:

p {
line-height: 1.5;
}

but nothing I’ve typed there has had any effect. Hmm.
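For what it’s worth, a common reason a rule like that has no visible effect is that the theme’s own stylesheet wins on specificity, so the custom rule is silently overridden. A more specific selector, or an !important flag, will often do the trick. The .entry-content class below is a guess at a typical WordPress post-body class, not something confirmed from this theme’s markup:

```css
/* More specific than a bare `p`, so likelier to override the theme's rule.
   `.entry-content` is a common WordPress post-body class -- an assumption,
   not verified against this particular theme. */
.entry-content p {
    line-height: 1.5 !important;
}
```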

Anyway – I hope this change increases your enjoyment of reading this blog!

Footnote: so far as I know, there will be no change to what people see if they are reading these posts via email, or on a mobile device. The change only takes effect when using a desktop browser.

“The People vs. Democracy” – a quick review

Filed under: books, politics, RSA — Tags: , , , , — David Wood @ 1:08 am

If you’re interested in politics, then I recommend the video recording of yesterday’s presentation at London’s RSA by Yascha Mounk.

Mounk is a Lecturer on Government at Harvard University, a Senior Fellow at New America, a columnist at Slate, and the host of The Good Fight podcast. He’s also the author of the book “The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It” which I finished reading yesterday. The RSA presentation provides a good introduction to the ideas in the book.

The book marshals a set of arguments in a compelling way, which I hadn’t seen lined up in that way before. It provides some very useful insight as to the challenges being posed to the world by the growth of “illiberal democracy” (“populism”). It also explains why these challenges might be more dangerous than many commentators have tended to assume.

(“Don’t worry about the rise of strong men like Trump”, these commentators say. “Sure, Trump is obnoxious. But the traditions of liberal democracy are strong. The separation of powers is such that the excesses of any would-be autocrat will surely be tamed”. Alas, there’s no “surely” about it.)

Here’s how Mounk’s book is described on its website:

The world is in turmoil. From India to Turkey and from Poland to the United States, authoritarian populists have seized power. As a result, Yascha Mounk shows, democracy itself may now be at risk.

Two core components of liberal democracy—individual rights and the popular will—are increasingly at war with each other. As the role of money in politics soared and important issues were taken out of public contestation, a system of “rights without democracy” took hold. Populists who rail against this say they want to return power to the people. But in practice they create something just as bad: a system of “democracy without rights.”

The consequence, Mounk shows in The People vs. Democracy, is that trust in politics is dwindling. Citizens are falling out of love with their political system. Democracy is wilting away. Drawing on vivid stories and original research, Mounk identifies three key drivers of voters’ discontent: stagnating living standards, fears of multiethnic democracy, and the rise of social media. To reverse the trend, politicians need to enact radical reforms that benefit the many, not the few.

The People vs. Democracy is the first book to go beyond a mere description of the rise of populism. In plain language, it describes both how we got here and where we need to go. For those unwilling to give up on either individual rights or the popular will, Mounk shows, there is little time to waste: this may be our last chance to save democracy.

I liked the book so much that I’ve modified some of the slides in the presentation I’ll be giving myself this evening (Thursday 24th May), to include ideas from Mounk’s work.

One drawback of the book, however, is that the solutions it offers, although “worthy”, seem unlikely to stir sufficient popular engagement to become turned from idea into reality. I believe something bigger is needed.

14 May 2018

The key questions about UBIA

The first few times I heard about the notion of Universal Basic Income (UBI), I said to myself, that’s a pretty dumb idea.

Paying people without them doing any work is going to cause big problems for society, I thought. It’s going to encourage laziness, and discourage enterprise. Why should people work hard, if the fruits of their endeavour are taken away from them to be redistributed to people who can’t be bothered to work? It’s not fair. And it’s a recipe for social decay.

But since my first encounters with the idea of UBI, my understanding has evolved a long way. I have come to see the idea, not as dumb, but as highly important. Anyone seriously interested in the future of human society ought to keep abreast of the discussion about UBI:

  • What are the strengths and (yes) the weaknesses of UBI?
  • What alternatives could be considered, that have the strengths of UBI but avoid its weaknesses?
  • And, bearing in mind that the most valuable futurist scenarios typically involve the convergence (or clash) of several different trend analyses, what related ideas might transform our understanding of UBI?

For these reasons, I am hosting a day-long London Futurists event at Birkbeck College, Central London, on Saturday 2nd June, with the title “Universal Basic Income and/or Alternatives: 2018 update”.

The event is defined by the question,

What do we know, in June 2018, about Universal Basic Income and its alternatives (UBIA), that wasn’t known, or was less clear, just a few years ago?

The event website highlights various components of that question, which different speakers on the day will address:

  • What are the main risks and issues with the concept of UBIA?
  • How might the ideas of UBIA evolve in the years ahead?
  • If not a UBI, what alternatives might be considered, to meet the underlying requirements which have led many people to propose a UBI?
  • What can we learn from the previous and ongoing experiments in Basic Income?
  • What are the feasible systems (new or increased taxes, or other means) to pay for a UBIA?
  • What steps can be taken to make UBIA politically feasible?
  • What is a credible roadmap for going beyond a “basic” income towards enabling attainment of a “universal prosperity” by everyone?

As you can see from the event website, an impressive list of speakers have kindly agreed to take part. Here’s the schedule for the day:

09:30: Doors open
10:00: Chair’s welcome: The questions that deserve the most attention: David Wood
10:15: Opening keynote: Basic Income – Making it happen: Prof Guy Standing
11:00: Implications of Information Technology: Prof Joanna Bryson
11:30: Alternatives to UBI – Exploring the Possibilities: Rohit Talwar, Helena Calle and Steve Wells
12:15: Q&A involving all morning speakers
12:30: Break for lunch (lunch not provided)

14:00: Basic Income as a policy and a perspective: Barb Jacobson
14:30: Implications of Artificial Intelligence on UBIA: Tony Czarnecki
15:00: Approaching the Economic Singularity: Calum Chace
15:30: What have we learned? And what should we do next? David Wood
16:00-16:30: Closing panel involving all speakers
16:30: Event closes. Optional continuation of discussion in nearby pub

A dumb idea?

In the run-up to the UBIA 2018 event, I’ll make a number of blogposts anticipating some of the potential discussion on the day.

First, let me return to the question of whether UBI is a dumb idea. Viewing the topic from the angle of laziness vs. enterprise is only one possible perspective. As is often the case, changing your perspective provides much-needed insight.

Instead, let’s consider the perspective of “social contract”. Reflect on the fact that society already provides money to people who aren’t doing any paid work. There are basic pension payments for everyone (so long as they are old enough), basic educational funding for everyone (so long as they are young enough), and basic healthcare provisions for people when they are ill (in most countries of the world).

These payments are part of what is called a “social contract”. There are two kinds of argument for having a social contract:

  1. Self-interested arguments: as individuals, we might need to take personal benefit of a social contract at some stage in the future, if we unexpectedly fall on hard times. What’s more, if we fail to look after the rest of society, the rest of society might feel aggrieved, and rise up against us, pitchforks (or worse) in hand.
  2. Human appreciation arguments: all people deserve basic stability in their life, and a social contract can play a significant part in providing such stability.

What’s harder, of course, is to agree which kind of social contract should be in place. Whole libraries of books have been written on that question.

UBI can be seen as fitting inside a modification of our social contract. It would be part of what supporters say would be an improved social contract.

Note: although UBI is occasionally suggested as a replacement for the entirety of the current welfare system, it is more commonly (and, in my view, more sensibly) proposed as a replacement for only some of the current programmes.

Proponents of UBI point to two types of reason for including UBI as part of a new social contract:

  1. Timeless arguments – arguments that have been advanced in various ways by people throughout history, such as Thomas More (1516), Montesquieu (1748), Thomas Paine (1795), William Morris (1890), Bertrand Russell (1920), Erich Fromm (1955), Martin Luther King (1967), and Milton Friedman (1969)
  2. Time-linked arguments – arguments that foresee drastically changed circumstances in the relatively near future, which increase the importance of adopting a UBI.

Chief among the time-linked arguments is that the direct and indirect effects of profound technological change are likely to transform the work environment in unprecedented ways. Automation, powered by AI that is increasingly capable, may eat into more and more of the skills that we humans used to think were “uniquely human”. People who expected to earn money by doing various tasks may find themselves unemployable – robots will do these tasks more reliably, more cheaply, and with greater precision. People who spend some time retraining themselves in anticipation of a new occupation may find that, over the same time period, robots have gained the same skills faster than humans.

That’s the argument for growing technological unemployment. It’s trendy to criticise this argument nowadays, but I find the criticisms to be weak. I won’t repeat all the ins and outs of that discussion now, since I’ve covered them at some length in Chapter 4 of my book Transcending Politics. (An audio version of this chapter is currently available to listen to, free of charge, here.)

A related consideration talks, not about technological unemployment, but about technological underemployment. People may be able to find paid work, but that work pays considerably less than they expected. Alternatively, their jobs may have many rubbishy aspects. In the terminology of David Graeber, increasing numbers of jobs are “bullshit jobs”. (Graeber will be speaking on that very topic at the RSA this Thursday. At time of writing, tickets are still available.)

Yet another related concept is that of the precariat – people whose jobs are precarious, since they have no guarantee of the number of hours of work they may receive in any one week. People in these positions would often prefer to be able to leave these jobs and spend a considerable period of time training for a different kind of work – or starting a new business, with all the risks and uncertainties entailed. If a UBI were available to them, it would give them the stability to undertake that personal voyage.

How quickly will technological unemployment and technological underemployment develop? How quickly will the proportion of bullshit jobs increase? How extensive and socially dangerous will the precariat become?

I don’t believe any futurist can provide crisp answers to these questions. There are too many unknowns involved. However, equally, I don’t believe anyone can say categorically that these changes won’t occur (or won’t occur any time soon). My personal recommendation is that society needs to anticipate the serious possibility of relatively rapid acceleration of these trends over the next couple of decades. I’d actually put the probability of a major acceleration in these trends over the next 20 years as greater than 50%. But even if you assess the odds more conservatively, you ought to have some contingency plans in mind, just in case the pace quickens more than you expected.

In other words, the time-linked arguments in favour of exploring a potential UBI have considerable force.

As it happens, the timeless arguments may gain increased force too. If it’s true that the moral arc of history bends upwards – if it’s true that moral sensibilities towards our fellow humans increase over the passage of time – then arguments which at one time fell below society’s moral radar can gain momentum in the light of collective experience and deliberative reflection.

An impractical idea?

Many people who are broadly sympathetic to the principle of UBI nevertheless consider the concept to be deeply impractical. For example, here’s an assessment by veteran economics analyst John Kay, in his recent article “Basic income schemes cannot work and distract from sensible, feasible and necessary welfare reforms”:

The provision of a universal basic income at a level which would provide a serious alternative to low-paid employment is impossibly expensive. Thus, a feasible basic income cannot fulfil the hopes of some of the idea’s promoters: it cannot guarantee households a standard of living acceptable in a modern society, it cannot compensate for the possible disappearance of existing low-skilled employment and it cannot eliminate “bullshit jobs”. Either the level of basic income is unacceptably low, or the cost of providing it is unacceptably high. And, whatever the appeal of the underlying philosophy, that is essentially the end of the matter.

Kay offers this forthright summary:

Attempting to turn basic income into a realistic proposal involves the reintroduction of elements of the benefit system which are dependent on multiple contingencies and also on income and wealth. The outcome is a welfare system which resembles those that already exist. And this is not surprising. The complexity of current arrangements is not the result of bureaucratic perversity. It is the product of attempts to solve the genuinely difficult problem of meeting the variety of needs of low-income households while minimising disincentives to work for households of all income levels – while ensuring that the system established for that purpose is likely to sustain the support of those who are required to pay for it.

I share Piachaud’s conclusion that basic income is a distraction from sensible, feasible and necessary welfare reforms. As in other areas of policy, it is simply not the case that there are simple solutions to apparently difficult issues which policymakers have hitherto been too stupid or corrupt to implement.

Supporters of UBI have rebuttals to this analysis. Some of these rebuttals will no doubt be presented at the UBIA 2018 event on 2nd June.

One rebuttal seeks to rise above “zero sum” considerations. Injecting even a small amount of money into everyone’s hands can have “multiplier” effects, as that new money passes in turn through several people’s hands. One person’s spending is another person’s income, ready for them to spend in turn.
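The multiplier logic above can be sketched numerically. The marginal propensity to consume (MPC) of 0.8 here is a purely illustrative assumption, not a measured value:

```python
# Illustrative Keynesian spending multiplier: each recipient spends a
# fraction (the MPC) of what they receive, and that spending becomes
# someone else's income in turn.

MPC = 0.8          # illustrative assumption, not a measured value
injection = 1_000  # initial basic-income payment, in currency units

# Total spending is the geometric series: injection * (1 + MPC + MPC^2 + ...)
total_rounds = sum(injection * MPC**k for k in range(100))

# Closed form of the same series: injection / (1 - MPC)
total_closed = injection / (1 - MPC)

print(round(total_rounds, 2))   # approaches 5000.0
print(round(total_closed, 2))   # 5000.0
```

On these (hypothetical) numbers, a 1,000-unit injection generates roughly five times that amount in total spending as the money circulates.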

Along similar lines, Professor Guy Standing, who will be delivering the opening keynote at UBIA 2018, urges readers of his book Basic Income: And How We Can Make It Happen to consider positive feedback cycles: “the likely impact of extra spending power on the supply of goods and services”. As he says,

In developing countries, and in low-income communities in richer countries, supply effects could actually lower prices for basic goods and services. In the Indian basic income pilots, villagers’ increased purchasing power led local farmers to plant more rice and wheat, use more fertilizer and cultivate more of their land. Their earnings went up, while the unit price of the food they supplied went down. The same happened with clothes, since several women found it newly worthwhile to buy sewing machines and material. A market was created where there was none before.

A similar response could be expected in any community where there are people who want to earn more and do more, alongside people wanting to acquire more goods and services to improve their living standard.

(I am indebted to Standing’s book for many other insights that have influenced my thinking and, indeed, points raised in this blogpost. It’s well worth reading!)

There’s a broader point that needs to be raised, about the “prices for basic goods and services”. Since a Basic Income needs to cover payments for these goods and services, two approaches are possible:

  1. Seek to raise the level of Basic Income payments
  2. Seek to lower the cost of basic goods and services.

I believe both approaches should be pursued in parallel. The same technologies of automation that pose threats to human employment also hold the promise for creating goods and services at significantly lower costs (and with higher quality). However, any such reduction in cost sits in tension with the prevailing societal focus on boosting economic prices (and increasing GDP). It is for this reason that we need a change of societal values as well as changes in the mechanics of the social contract.

The vision of goods and services having prices approaching zero is, by the way, sometimes called “the Star Trek economy”. Futurist Calum Chace – another of the UBIA 2018 speakers – addresses this topic is his provocatively titled book The Economic Singularity: Artificial intelligence and the death of capitalism. Here’s an extract from one of his blogposts, a “un-forecast” (Chace’s term) for a potential 2050 scenario, “Future Bites 7 – The Star Trek Economy”, featuring Lauren (born 1990):

The race downhill between the incomes of governments and the costs they needed to cover for their citizens was nerve-wracking for a few years, but by the time Lauren hit middle age it was clear the outcome would be good. Most kinds of products had now been converted into services, so cars, houses, and even clothes were almost universally rented rather than bought: Lauren didn’t know anyone who owned a car. The cost of renting a car for a journey was so close to zero that the renting companies – auto manufacturers or AI giants and often both – generally didn’t bother to collect the payment. Money was still in use, but was becoming less and less necessary.

As a result, the prices of most asset classes had crashed. Huge fortunes had been wiped out as property prices collapsed, especially in the hot-spot cities, but few people minded all that much as they could get whatever they needed so easily.

As you may have noticed, the vision of a potential future “Star Trek” economy is part of the graphic design for UBIA 2018.

I’ll share one further comment on the question of the affordability of UBI. Specifically, I’ll quote some comments made by Guardian writer Colin Holtz in the wake of the discovery of the extent of tax evasion revealed by the Panama Papers. The article by Holtz has the title “The Panama Papers prove it: America can afford a universal basic income”. Here’s an extract:

If the super-rich actually paid what they owe in taxes, the US would have loads more money available for public services.

We should all be able to agree: no one should be poor in a nation as wealthy as the US. Yet nearly 15% of Americans live below the poverty line. Perhaps one of the best solutions is also one of the oldest and simplest ideas: everyone should be guaranteed a small income, free from conditions.

Called a universal basic income by supporters, the idea has attracted support throughout American history, from Thomas Paine to Martin Luther King Jr. But it has also faced unending criticism for one particular reason: the advocates of “austerity” say we simply can’t afford it – or any other dramatic spending on social security.

That argument dissolved this week with the release of the Panama Papers, which reveal the elaborate methods used by the wealthy to avoid paying back the societies that helped them to gain their wealth in the first place…

While working and middle-class families pay their taxes or face consequences, the Panama Papers remind us that the worst of the 1% have, for years, essentially been stealing access to Americans’ common birthright, and to the benefits of our shared endeavors.

Worse, many of those same global elite have argued that we cannot afford to provide education, healthcare or a basic standard of living for all, much less eradicate poverty or dramatically enhance the social safety net by guaranteeing every American a subsistence-level income.

The Tax Justice Network estimates the global elite are sitting on $21–32tn of untaxed assets. Clearly, only a portion of that is owed to the US or any other nation in taxes – the highest tax bracket in the US is 39.6% of income. But consider that a small universal income of $2,000 a year to every adult in the US – enough to keep some people from missing a mortgage payment or skimping on food or medicine – would cost only around $563bn each year.
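As a rough sanity check on the figures in that extract (the adult-population figure below is simply what the quoted numbers imply, not a census value):

```python
# Back-of-the-envelope check of the Holtz quote: a $2,000/year payment
# to every US adult, costing "around $563bn", implies this many adults.

payment_per_adult = 2_000   # dollars per year, from the quote
total_cost = 563e9          # dollars per year, from the quote

implied_adults = total_cost / payment_per_adult
print(f"{implied_adults / 1e6:.1f} million adults")  # 281.5 million adults

# For scale against the quoted $21-32tn stock of untaxed offshore assets:
low, high = 21e12, 32e12
print(f"annual cost is {total_cost/high:.1%} to {total_cost/low:.1%} of that stock")
```

In other words, the quoted annual cost is on the order of 2-3% of the Tax Justice Network’s estimate of the untaxed asset stock, which is the comparison Holtz is gesturing at.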

This takes us from the question of affordability to the question of political feasibility. Read on…

A politically infeasible idea?

A potential large obstacle to adopting UBI is that powerful entities within society will fight hard against it, being opposed to any idea of increased taxation and a decline in their wealth. These entities don’t particularly care that the existing social contract provides a paltry offering to the poor and precarious in society – or to those “inadequates” who happen to lose their jobs and their standing in the economy. The existing social contract provides them personally (and those they consider their peers) with a large piece of the cake. They’d like to keep things that way, thank you very much.

They defend the current setup with ideology. The ideology states that they deserve their current income and wealth, on account of the outstanding contributions they have made to the economy. They have created jobs, or goods, or services of one sort or another, that the marketplace values. And no-one has any right to take their accomplishments away from them.

In other words, they defend the status quo with a theory of value. In order to overcome their resistance to UBIA, I believe we’ll need to tackle this theory of value head on, and provide a better theory in its place. I’ll pick up that thread of thought shortly.

But an implementation of UBI doesn’t need to happen “big bang” style, all at once. It can proceed in stages, starting with a very low level, and (all being well) ramping up from there in phases. The initial payment from UBI could be funded from new types of tax that would, in any case, improve the health of society:

  • A tax on financial transactions (sometimes called a “Tobin tax”) – that will help to put the brakes on accelerated financial services taking place entirely within the financial industry (without directly assisting the real economy)
  • A “Greenhouse gas tax” (such as a “carbon tax”) on activities that generate greenhouse gas pollution.

Continuing the discussion

The #ubia channel in the newly created London Futurists Slack workspace awaits comments on this topic. For a limited time, members and supporters of London Futurists can use this link to join that workspace.

5 May 2018

Humans: The solution, or the problem?

Filed under: Transcending Politics — Tags: , , , — David Wood @ 3:33 pm

Silicon Valley seems to think that we’re somehow going to compensate for humanity’s faults with digital technologies. I don’t think humans are obsolete. I don’t think humans are the problem, I think humans are the solution.

These words reached my inbox earlier today, as part of a Nesta interview of technology writer Douglas Rushkoff.

The sentiment expressed in these words strikes me as naive – dangerously naive.

Any worldview that ignores the problematic aspects of human nature risks unwittingly enabling the magnification of these flaws, as technology puts ever more power in our hands.

Think of the way that Fox News, with the support of a network of clever social media agitators, has been magnifying many of the uglier human inclinations – resulting in the human calamity of Trumpistan. That’s an example of what can happen if the flaws within humanity aren’t properly handled. It’s an example of twenty-first century technology making human problems worse.

Just because we can, correctly, assess humans as having a great deal of positive potential, this doesn’t mean we should become blind to the harmful tendencies that coexist with our favourable tendencies – and which (if we’re not careful) might overwhelm these tendencies.

Here are some examples of our harmful tendencies:

Conflict

  • Abuse of power: we humans are often too ready to exploit the power we temporarily hold, for example in personal relationships with subordinates or colleagues
  • Confirmation bias: we divert our attention from information that would challenge or negate our own pet theories or the commonly accepted paradigms of our culture; we clutch at any convenient justification for ignoring or distorting such information
  • Dysfunctional emotions: we are prone to being dominated by emotional spasms – of anger, self-righteousness, possessiveness, anxiety, despair, etc – to the extent that we are often unable to act on our better judgements
  • Overconfidence: we tend to assess ourselves as having above-average abilities; we also often assume that our core beliefs are more likely to be true than an objective evaluation would suggest
  • In-group preference: we are liable to prejudice in favour of people who seem “like us” (by whatever criteria), and against people who appear to fall outside our group; this drives unnecessary conflict, and can also mean we miss the best opportunities
  • Inertia: we cling onto possessions, habits, and processes that have served us well in the past, and which might conceivably be useful to us at some time in the future, even if these attachments reduce our room for manoeuvre or damage our openness to new experiences
  • Herd mentality: we too readily fall into line with what we perceive our peers are thinking or doing, even though our conscience is telling us that a different path would be better
  • Loss of perspective: we fail to pay attention to matters that should be of long-term importance to us, and instead become dominated by grudges, personal vindictiveness, fads, and other distractions.

Many of these characteristics are likely to have bestowed some evolutionary advantage on our ancestors, in the very different circumstances in which they lived – similar to the way that a sweet tooth made good sense in prehistoric times. These characteristics are far less useful in today’s world, with its vastly increased complexity and connectivity, where individual mistakes can be magnified onto a global scale.

Other characteristics on the list probably never had much direct utility, but they existed as side-effects of yet other character traits that were themselves useful. Evolution was constrained in terms of the character sets it could create; it lacked complete flexibility. However, we humans possess a much greater range of engineering tools. That opens the way for the conscious, thoughtful re-design of our character set.

The project described in the article that caught my attention this morning – the “Team Human” project – needs in my view to be more open to what some in Silicon Valley are proposing (but which the article scorns), namely the use of technology to assist:

  • The strengthening of positive human tendencies
  • The taming of negative human tendencies.

Of course, technology cannot do these things by itself. But it can, very definitely, be part of the solution. Some examples:

  • Education of all sorts can be enhanced by technology such as interactive online video courses that adapt their content to the emerging needs of each different user
  • Vivid experiences within multi-sensory virtual reality worlds can bring home to people the likely consequences of their current personal trajectories (from both first-person and third-person points of view), and allow them to rehearse changes in attitude
  • The reasons why meditation, yoga, and hypnosis can have beneficial results are now more fully understood than before, enabling major improvements in the efficacy of these practices
  • Prompted by alerts generated by online intelligent assistance software, real-world friends can connect at critical moments in someone’s life, in order to provide much-needed personal support
  • Information analytics can resolve some of the long-running debates about which diets – and which exercise regimes – are the ones that will best promote all-round health for given individuals.

And there are some more radical possibilities:

  • New pharmacological compounds – sometimes called “smart drugs”
  • Gentle stimulation of the brain by a variety of electromagnetic methods – something that has been trialled by the US military
  • Alteration of human biology more fundamentally, by interventions at the genetic, epigenetic, or microbiome level
  • The use of intelligent assistance software that monitors our actions and offers us advice in a timely manner, similar to the way that a good personal friend will occasionally volunteer wise counsel; intelligent assistants can also strengthen our positive characteristics by wise selection of background music, visual imagery, and “thought for the day” aphorisms to hold in mind.

What I’m describing here is the vision of transhumanism – the vision that humanity can and should take wise and profound advantage of technology to transcend the damaging limitations and drawbacks imposed by the current circumstances of human nature. As a result, humans will be able to transition, individually and collectively, towards a significantly higher stage of life – a life with much improved quality.

And here’s a formulation from 1990 by the founder of the modern transhumanist movement, philosopher Max More:

Transhumanism is a class of philosophies of life that seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values.

Any attempt to “reprogram society to better serve humans” that fails to follow this transhumanist advice – any project that turns its back on the radical transformational potential of science and technology – is leaving itself dangerously underpowered.

In short: the journey to a healthier society inevitably involves transhumanism. Without transhumanism, Team Human isn’t going to make it.

Note: For a fuller examination of the ideas in this blogpost, see my new book Transcending Politics, especially Chapter 12, “Humans and Superhumans”, and Chapter 1, “Vision and roadmap”.

Picture source: TheDigitalArtist and JoeTheStoryTeller.

3 May 2018

Recommended: The Longevity Code

If you’re interested in the latest advice on how to extend your healthspan, you should read The Longevity Code by Kris Verburgh.

The full title of the book is “The Longevity Code: Secrets to Living Well for Longer, from the Front Lines of Science”.

The book has the following description (on Goodreads):

Medical doctor and researcher Kris Verburgh is fast emerging as one of the world’s leading research authorities on the science of aging. The Longevity Code is Dr. Verburgh’s authoritative guide on why and how we age — and on the four most crucial areas we have control over, to slow down, and even reverse, the aging process.

We learn why some animal species age hardly at all while others age and die very quickly, and about the mechanisms at work that slowly but definitely cause our bodies to age, making us susceptible to heart attack, stroke, cancer, pneumonia and/or dementia.

Dr. Verburgh devotes the last third of The Longevity Code to what we can do to slow down the process of aging. He concludes by introducing and assessing the wide range of cutting-edge developments in anti-aging technology, the stuff once only of science fiction: new types of vaccines, and the use of mitochondrial DNA, CRISPR proteins, stem cells, and more.

In the course of researching and writing my own book The Abolition of Aging, I read dozens of different books on broadly similar topics. (For a partial list, scan the online copy of the Endnotes for that book.)

However, I found The Longevity Code to address a number of issues in ways that were particularly compelling and engaging:

  1. Persuasive advice on how to modify diet and lifestyle, now, in order to increase your likelihood of remaining healthy long enough to benefit from forthcoming rejuvenation therapies (therapies which Verburgh lists as “Step 4” of a four-stage “longevity staircase”)
  2. A compelling analysis of different “theories of aging”, in Chapter 1 of his book, including the implications of the notably different lifespans of various animals that seem on first sight to have similar biology
  3. A down-to-earth matter-of-fact analysis, in Chapter 4 of his book, on the desirability of living longer lives.

The first of these points is an area where I have often struggled, in the Q&A portions of my own presentations on The Abolition of Aging, to give satisfactory answers to audience questions. I now have better answers to offer!

Allowable weakness

One “allowable weakness” of the book is that the author repeats himself on occasion – especially when it comes to making recommendations on diet and food supplements. I say this is “allowable” because his messages deserve repetition, in a world where there is an abundance of apparent expert dietary advice that is, alas, confusing, contradictory, and often compromised (due to the influence of vested interests – as Verburgh documents).

Table of Contents

The table of contents gives a good idea of what the book contains:

  1. Why do we age?
    • Making room?
    • Dying before growing old
    • Young and healthy, old and sick
    • Sex and aging
  2. What causes aging?
    • Proteins
    • Carbohydrates
    • Fats
    • Our energy generators and their role in life, death, and aging
    • Shoelaces and string
    • Other causes, and conclusion
  3. The longevity staircase
    • Avoid deficiencies
    • Stimulate hormesis
    • Reduce growth stimulation
    • Reverse the aging process
  4. Some thoughts about aging, longevity, and immortality
    • Do we really want to grow that old?
    • A new society?
  5. Recipes
  6. Afterword

About Kris Verburgh

You can read more about the author on the bio page of his website. Here’s a brief extract:

Kris Verburgh (born 1986) graduated magna cum laude as a medical doctor from the University of Antwerp, Belgium.

Dr. Verburgh is a researcher at the Center Leo Apostel for Interdisciplinary Studies (CLEA) at the Free University Brussels (VUB) and a member of the Evolution, Complexity and Cognition group at the Free University of Brussels.

Dr. Verburgh’s fields of research are aging, nutrition, metabolism, preventive medicine and health. In this context, he created a new scientific discipline, called ‘nutrigerontology’, which studies the impact of nutrition on the aging process and aging-related diseases.

Additionally, he has a profound interest in new technologies that will disrupt medicine, health(care) and our lifespans. He follows the new trends and paradigm shifts in medicine and biotechnology and how they are impacted by the fourth industrial revolution.

Verburgh wrote his first science book when he was 16 years old. By age 25, he had written three science books.

Dr. Verburgh gives talks on new developments and paradigm shifts in medicine, healthcare and the science of aging. He gave lectures for the European Parliament, Google, Singularity University, various academic institutes, organizations and international companies.

And I’d be delighted to host him at London Futurists, when schedules allow!

6 March 2018

Transcending left and right?

(The following consists of short extracts from Chapter 1, “Vision and Roadmap”, of my new book Transcending Politics.)

One of the most destructive elements of current politics is its divisiveness. Politicians form into warring parties which then frequently find fault with each other. They seek to damage the reputation of their adversaries, throwing lots of mud in the hope that at least some of it will stick. Whereas disagreement is inherent in political process, what would be far better is if politicians could disagree without being disagreeable.

The division between “left” and “right” is particularly long-established. The pioneering transhumanist philosopher F.M. Esfandiary, who later changed his name to FM-2030, lamented this division in his 1977 book Up-Wingers:

To transcend more rapidly to higher levels of evolution we must begin by breaking out of the confinement of traditional ideologies.

We are at all times slowed down by the narrowness of Right-wing and Left-wing alternatives. If you are not conservative you are liberal. If not right of centre you are left of it or middle of the road.

Our traditions comprise no other alternatives. There is no ideological or conceptual dimension beyond conservative and liberal – beyond Right and Left.

Right and Left – even the extreme Left – are traditional frameworks predicated on traditional premises striving in obsolete ways to attain obsolete goals.

Esfandiary’s answer was a different dimension: “Up” – the optimistic embrace of radical technological possibility for positive human transformation:

How do you identify Space scientists who this very day are working with new sets of premises to establish communities in other worlds? Are they Right-wing or Left? Are they conservative or liberal?…

These and other breakthroughs are outside the range of all the traditional philosophical social economic political frameworks. These new dimensions are nowhere on the Right or on the Left. These new dimensions are Up.

Up is an entirely new framework whose very premises and goals transcend the conventional Right and Left…

The Right/Left establishment wants to maintain an evolutionary status quo. It is resigned to humanity’s basic predicament. It simply strives to make life better within this predicament.

Up-Wingers are resigned to nothing. We accept no human predicament as permanent, no tragedy as irreversible; no goals as unattainable.

The term “Up” dovetails with Esfandiary’s evident interest in the exploration of space. We should raise our thinking upwards – towards the stars – rather than being constrained by small-mindedness.

Professor Steve Fuller of the University of Warwick and legal expert Veronika Lipinska take these ideas further in their 2014 book The Proactionary Imperative: A Foundation for Transhumanism, in which they explore “the rotation of the ideological axis”, from left/right to up/down. Fuller and Lipinska provide some fascinating historical background and provocative speculations about possible futures – including a section on “the four styles of playing God in today’s world”.

I share the view that there are more important questions than the left-right split that has dominated politics for so long. Esfandiary was correct to highlight the question of whether to embrace (“Up”) or to reject (“Down”) the potential of new technology to dramatically enhance human capabilities.

But the “Up” decision to embrace the potential for transhuman enhancements still leaves many other decisions unresolved. People who identify as being up-wing are torn between being “right-leaning upwingers” and being “left-leaning upwingers”:

  • The former admire the capabilities of a free market
  • The latter admire the safety net of a welfare system
  • The former mistrust the potential over-reach of politicians
  • The latter mistrust the actions of profit-seeking corporations
  • The former wish to uphold as much individual freedom as possible
  • The latter wish to uphold as much social solidarity as possible
  • The former are keen to reduce taxation
  • The latter are keen to increase equality of opportunity
  • The former point to the marvels that can be achieved by competitive-minded self-made individuals
  • The latter point to the marvels that can be achieved by collaboration-minded progressive coalitions.

I identify myself as a technoprogressive more than a technolibertarian. Individual freedoms are important, but the best way to ensure these is via wise collective agreement on appropriate constraints. Rather than seeking minimal government and minimal taxation, you’ll see in the pages ahead that I argue for appropriate government and appropriate taxation.

However, I’m emphatically not going to advocate that left-leaning transhumanists should somehow overcome or defeat right-leaning transhumanists. The beliefs I listed as being characteristic of right-leaning transhumanists all contain significant truths – as do the beliefs I listed for left-leaning transhumanists. The task ahead is to pursue policies that respect both sets of insights. That’s what I mean when describing the Transpolitica initiative as “integrative”. Rather than “either-or” it’s “both-and”.


1 March 2018

Pragmatically envisioning better humans

Filed under: Transcending Politics — David Wood @ 12:00 pm

Is it possible to significantly improve politics, over the course of, say, the next dozen years, without first significantly improving human nature?

(The following consists of short extracts from Chapter 12, “Humans and Superhumans”, of my new book Transcending Politics.)

In this chapter, I’ll look at four different answers to this question:

  1. We shouldn’t try to improve human nature; that’s the route to hell
  2. We can have a better politics without any change in human nature
  3. Improving human nature will turn out to be relatively straightforward; let’s get cracking
  4. Improving human nature will be difficult but is highly desirable; we need to carefully consider the potential scenarios, with an open mind, and then make our choices…

The technoprogressive transformation of society and human nature that I envision will build upon the product management insight that it’s more important to analyse the intended outcome of a transformation than to become over-enthused by potential means to carry out that transformation. That is, the specification must come first, and then the implementation. Otherwise the implementation might develop inertia of its own. In that case, we’ll get technology for technology’s sake – answers looking for questions, rather than the other way round.

Accordingly, let’s now take a moment to explore features of the human character that there’s a strong case to seek to improve. Then we can move on to consider potential ways to carry out such improvements.

The character features I’m aiming to list are those which, if they are not tamed, threaten to combine in devastating ways with the greater powers that technology as a whole is putting in our hands. These features include:

  • Dysfunctional emotions: we are prone to being dominated by emotional spasms – of anger, self-righteousness, possessiveness, anxiety, despair, etc – to the extent that we are often unable to act on our better judgements
  • Overconfidence: we tend to assess ourselves as having above-average abilities; we also often assume that our core beliefs are more likely to be true than an objective evaluation would suggest
  • Confirmation bias: we divert our attention from information that would challenge or negate our own pet theories or the commonly accepted paradigms of our culture; we clutch at any convenient justification for ignoring or distorting such information
  • Abuse of power: we are too ready to exploit the power we temporarily hold, for example in personal relationships with subordinates or colleagues, and in the process damage other people – and often our own longer-term interests too
  • In-group preference: we are liable to prejudice in favour of people who seem “like us” (by whatever criteria), and against people who appear to fall outside our group; this drives unnecessary conflict, and can also mean we miss the best opportunities
  • Over-attachment: we cling onto things that might conceivably be useful to us at some time in the future, even if these attachments reduce our room for manoeuvre or damage our openness to new experiences
  • Herd mentality: we too readily fall into line with what we perceive our peers are thinking or doing, even though our conscience is telling us that a different path would be better
  • Loss of perspective: we fail to pay attention to matters that should be of long-term importance to us, and instead become dominated by grudges, personal vindictiveness, fads, and other distractions.

Many of these characteristics are likely to have bestowed some evolutionary advantage on our ancestors, in the very different circumstances in which they lived. They are far less useful in today’s world, with its vastly increased complexity and connectivity, where individual mistakes can be magnified onto a global scale.

Other characteristics on the list probably never had much direct utility, but they existed as side-effects of yet other character traits that were themselves useful. Evolution was constrained in terms of the character sets it could create; it lacked complete flexibility. However, we humans possess a much greater range of engineering tools. That opens the way for the conscious, thoughtful re-design of our character set.

Some critics of transhumanism respond that they prefer to keep human nature as it is, thank you very much, with all our quirks and foibles. These features are said to enable creativity, fun, imagination, diversity, and so on. My response is to point again to the character flaws listed earlier. These are not “quirks” or “foibles”. Nor can they be described as “allowable weaknesses”. They are dangerous weaknesses. And as such, they deserve serious attention from us. Can we find ways to dial down these character flaws, without (at the same time) inducing adverse side-effects?

Transhumanists are by no means the first set of thinkers to desire these changes in human nature. Philosophers, religious teachers, and other leaders of society have long called for humans to overcome the pull of “attachment” (desire), self-centredness, indiscipline, “the seven deadly sins” (pride, greed, lust, envy, gluttony, wrath, and sloth), and so on. Where transhumanism goes beyond these previous thinkers is in highlighting new methods that can now be used, or will shortly become available, to assist in the improvement of character.

Collectively these methods can be called “cognotech”. They will boost our all-round intelligence: emotional, rational, creative, social, spiritual, and more. Here are some examples:

  • New pharmacological compounds – sometimes called “smart drugs”
  • Gentle stimulation of the brain by a variety of electromagnetic methods – something that has been trialled by the US military
  • Alteration of human biology more fundamentally, by interventions at the genetic, epigenetic, or microbiome level
  • Vivid experiences within multi-sensory virtual reality worlds that bring home to people the likely consequences of their current personal trajectories (from both first-person and third-person points of view), and allow them to rehearse changes in attitude
  • The use of “intelligent assistance” software that monitors our actions and offers us advice in a timely manner, similar to the way that a good personal friend will occasionally volunteer wise counsel; intelligent assistants can also strengthen our positive characteristics by wise selection of background music, visual imagery, and “thought for the day” aphorisms to hold in mind.

Technological progress can also improve the effectiveness of various traditional methods for character improvement:

  • The reasons why meditation, yoga, and hypnosis can have beneficial results are now more fully understood than before, enabling major improvements in the efficacy of these practices
  • Education of all sorts can be enhanced by technology such as interactive online video courses that adapt their content to the emerging needs of each different user
  • Prompted by alerts generated by online intelligent assistants, real-world friends can connect at critical moments in someone’s life, in order to provide much-needed personal support
  • Information analytics can resolve some of the long-running debates about which diets – and which exercise regimes – are the ones that will best promote all-round health for given individuals…

It’s worth stressing some key differences between this kind of transhumanist initiative, on the one hand, and the idealist political campaigns of Stalin, Hitler, Mao, Pol Pot and others (covered earlier in the chapter). The transhumanist initiative is committed to:

  • Open review, so that problems arising can be noticed and addressed promptly
  • An experimental approach, to discover what actually works in reality, rather than just sounding good in theory
  • An agile framework, in which feedback is sought on a regular basis, so that knowledge can accumulate quickly via a “fail fast” process
  • Easy access by all members of society to the set of ideas that are under discussion, in order to promote a wider appreciation of any emerging risks or opportunities
  • Giving priority to data, rather than to anecdote, supposition, or ideology
  • Embracing diversity as far as possible, with hard constraints being imposed only when matters are seen to be particularly central
  • Integrating viewpoints from many different perspectives, rather than insisting on there being only “one true way” forwards.

The technoprogressive feedback cycle

One criticism of the initiative I’ve just outlined is that it puts matters the wrong way round.

I’ve been describing how individuals can, with the aid of technology as well as traditional methods, raise themselves above their latent character flaws, and can therefore make better contributions to the political process (either as voters or as actual politicians). In other words, we’ll get better politics as a result of getting better people.

However, an opposing narrative runs as follows. So long as our society is full of emotional landmines, it’s a lot to expect people to become more emotionally competent. So long as we live in a state of apparent siege, immersed in psychological conflict, it’s a big ask for people to give each other the benefit of the doubt, in order to develop new bonds of trust. Where people are experiencing growing inequality, a deepening sense of alienation, a constant barrage of adverts promoting consumerism, and an increasing foreboding about an array of risks to their wellbeing, it’s not reasonable to urge them to make the personal effort to become more compassionate, thoughtful, tolerant, and open-minded. They’re more likely to become angry, reactive, intolerant, and closed-minded. Who can blame them? Therefore – so runs this line of reasoning – it’s more important to improve the social environment than to urge the victims of that social environment to learn to turn the other cheek. Let’s stop obsessing about personal ethics and individual discipline, and instead put every priority on reducing the inequality, alienation, consumerist propaganda, and risk perception that people are experiencing. Instead of fixating upon possibilities for technology to rewire people’s biology and psychology, let’s hurry up and provide a better social safety net, a fairer set of work opportunities, and a deeper sense that “we’re all in this together”.

I answer this criticism by denying that it’s a one-way causation. We shouldn’t pick just a single route of influence – either that better individuals will result in a better society, or that a better society will enable the emergence of better individuals. On the contrary, there’s a two way flow of influence.

Yes, there’s such a thing as psychological brutalisation. In a bad environment, the veneer of civilisation can quickly peel away. Youngsters who would, in more peaceful circumstances, instinctively help elderly strangers to cross the road, can quickly degrade in times of strife into obnoxious, self-obsessed bigots. But that path doesn’t apply to everyone. Others in the same situation take the initiative to maintain a cheery, contemplative, constructive outlook. Environment influences the development of character, but doesn’t determine it.

Accordingly, I foresee a positive feedback cycle:

  • With the aid of technological assistance, more people – whatever their circumstances – will be able to strengthen the latent “angelic” parts of their human nature, and to hold in check the latent “diabolic” aspects
  • As a result, at least some citizens will be able to take wiser policy decisions, enabling an improvement in the social and psychological environment
  • The improved environment will, in turn, make it easier for other positive personal transformations to occur – involving a larger number of people, and having a greater impact.
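As a purely illustrative sketch – with made-up parameters, not measurements – the two-way reinforcement in this cycle can be expressed as a pair of coupled update rules:

```python
# Toy model of the positive feedback cycle sketched above. "character" and
# "environment" are abstract 0-to-1 scores; every constant here is an
# illustrative assumption, chosen only to demonstrate mutual reinforcement.

def simulate(steps=10, tech_boost=0.05, env_to_char=0.1, char_to_env=0.1):
    character, environment = 0.2, 0.2  # arbitrary starting levels
    history = [(character, environment)]
    for _ in range(steps):
        # technological assistance plus a better environment both lift character...
        character = min(1.0, character + tech_boost + env_to_char * environment)
        # ...and better character feeds back into a better environment
        environment = min(1.0, environment + char_to_env * character)
        history.append((round(character, 3), round(environment, 3)))
    return history

trajectory = simulate()
print(trajectory[0], "->", trajectory[-1])  # both scores rise together
```

Setting `env_to_char` to zero recovers the one-way causation that the criticism above assumes; with both couplings positive, each improvement amplifies the other – which is the point of the cycle.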

One additional point deserves to be stressed. The environment that influences our behaviour involves not just economic relationships and the landscape of interpersonal connections, but also the set of ideas that fill our minds. To the extent that these ideas give us hope, we can find extra strength to resist the siren pull of our diabolic nature. These ideas can help us focus our attention on positive, life-enhancing activities, rather than letting our minds shrink and our characters deteriorate.

This indicates another contribution of transhumanism to building a comprehensively better future. By painting a clear, compelling image of sustainable abundance, credibly achievable in just a few decades, transhumanism can spark revolutions inside the human heart…

(To read more, follow the links from the Transpolitica website.)

7 December 2017

The super-opportunities and super-risks of super-AI

Filed under: AGI, Events, risks, Uncategorized — David Wood @ 7:29 pm

2017 has seen more discussion of AI than any preceding year.

There have even been a number of meetings – 15, to be precise – in the UK Houses of Parliament, of the APPG AI – an “All-Party Parliamentary Group on Artificial Intelligence”.

According to its website, the APPG AI “was set up in January 2017 with the aim to explore the impact and implications of Artificial Intelligence”.

In the intervening 11 months, the group has held 7 evidence meetings, 4 advisory group meetings, 2 dinners, and 2 receptions. 45 different MPs, along with 7 members of the House of Lords and 5 parliamentary researchers, have been engaged in APPG AI discussions at various times.


Yesterday evening, at a reception in Parliament’s Cholmondeley Room & Terrace, the APPG AI issued a 12 page report with recommendations in six different policy areas:

  1. Data
  2. Infrastructure
  3. Skills
  4. Innovation & entrepreneurship
  5. Trade
  6. Accountability

The headline “key recommendation” is as follows:

The APPG AI recommends the appointment of a Minister for AI in the Cabinet Office

The Minister would have a number of different responsibilities:

  1. To bring forward the roadmap which will turn AI from a Grand Challenge to a tool for untapping UK’s economic and social potential across the country.
  2. To lead the steering and coordination of: a new Government Office for AI, a new industry-led AI Council, a new Centre for Data Ethics and Innovation, a new GovTech Catalyst, a new Future Sectors Team, and a new Tech Nation (an expansion of Tech City UK).
  3. To oversee and champion the implementation and deployment of AI across government and the UK.
  4. To keep public faith high in these emerging technologies.
  5. To ensure UK’s global competitiveness as a leader in developing AI technologies and capitalising on their benefits.

Overall I welcome this report. It’s a definite step in the right direction. Via a programme of further evidence meetings and workshops planned throughout 2018, I expect real progress can be made.

Nevertheless, it’s my strong belief that most of the public discussion on AI – including the discussions at the APPG AI – fails to appreciate the magnitude of the potential changes that lie ahead. There’s insufficient awareness of:

  • The scale of the opportunities that AI is likely to bring – opportunities that might better be called “super-opportunities”
  • The scale of the risks that AI is likely to bring – “super-risks”
  • The speed at which it is possible (though by no means guaranteed) that AI could transform itself via AGI (Artificial General Intelligence) to ASI (Artificial Super Intelligence).
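To see why the third point matters, here’s a deliberately simple sketch – not a prediction, and with an invented improvement rate – of how recursive self-improvement can compress timescales:

```python
# Illustrative sketch only: if each generation of an AI designs a successor
# that is slightly better at the task of design itself, capability grows
# exponentially rather than linearly. The 10% rate is an arbitrary assumption.

def generations_to_reach(target, capability=1.0, improvement=0.1):
    """Count design generations until capability exceeds target, assuming
    each generation multiplies capability by (1 + improvement)."""
    n = 0
    while capability < target:
        capability *= 1 + improvement
        n += 1
    return n

# Doubling capability takes 8 generations at 10% per step; a thousand-fold
# gain takes only about nine times as many, because the growth compounds.
print(generations_to_reach(2))     # 8
print(generations_to_reach(1000))  # 73
```

The same compounding logic underlies I.J. Good’s “intelligence explosion” argument mentioned below: once the improvement rate itself starts improving, the later steps arrive far faster than intuition expects.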

These are topics that I cover in some of my own presentations and workshops. The events organisation Funzing have asked me to run a number of seminars with the title “Assessing the risks from superintelligent AI: Elon Musk vs. Mark Zuckerberg…”

DW Dec Funzing Singularity v2

The reference to Elon Musk and Mark Zuckerberg reflects the fact that these two titans of the IT industry have spoken publicly about the advent of superintelligence, taking opposing views on the balance of opportunity vs. risk.

In my seminar, I take the time to explain their differing points of view. Other thinkers on the subject of AI that I cover include Alan Turing, IJ Good, Ray Kurzweil, Andrew Ng, Eliezer Yudkowsky, Stuart Russell, Nick Bostrom, Isaac Asimov, and Jaan Tallinn. The talk is structured into six sections:

  1. Introducing the contrasting ideas of Elon Musk and Mark Zuckerberg
  2. A deeper dive into the concepts of “superintelligence” and “singularity”
  3. From today’s AI to superintelligence
  4. Five ways that powerful AI could go wrong
  5. Another look at accelerating timescales
  6. Possible responses and next steps

At the time of writing, I’ve delivered this Funzing seminar twice. Here’s a sampling of the online reviews:

Really enjoyed the talk, David is a good presenter and the presentation was very well documented and entertaining.

Brilliant eye opening talk which I feel very effectively conveyed the gravity of these important issues. Felt completely engaged throughout and would highly recommend. David was an excellent speaker.

Very informative and versatile content. Also easy to follow if you didn’t know much about AI yet, and still very insightful. Excellent Q&A. And the PowerPoint presentation was of great quality and attention was spent on detail putting together visuals and explanations. I’d be interested in seeing this speaker do more of these and have the opportunity to go even more in depth on specific aspects of AI (e.g., specific impact on economy, health care, wellbeing, job market etc). 5 stars 🙂

Best Funzing talk I have been to so far. The lecture was very insightful. I was constantly tuned in.

Brilliant weighing up of the dangers and opportunities of AI – I’m buzzing.

If you’d like to attend one of these seminars, three more dates are in my Funzing diary:

Click on the links for more details, and to book a ticket while they are still available 🙂

30 November 2017

Technological Resurrection: An idea ripe for discussion

Like it or not, humans are becoming as gods. Where will this trend lead?

How about the ability to bring back to life people who died centuries ago, and whose bodies have long since disintegrated?

That’s the concept of “Technological Resurrection”, which is covered in the recent book of the same name by the Dallas, Texas-based researcher Jonathan A. Jones.

The book carries the subtitle “A thought experiment”. It’s a book that can, indeed, lead readers to experiment with new kinds of thoughts. If you are ready to leave your normal comfort zone behind, you may find a flurry of unexpected ideas emerging in your mind as you dip into its pages. You’re likely also to encounter considerable emotional turmoil en route.

The context

Here’s the context. Technology is putting within human reach more and more of the capabilities that were thought, in former times, to be the preserve of divine beings:

  • We’re not omniscient, but Google has taken us a long way in that direction
  • We’re not yet able to create life at will, but our skills with genomic engineering are proceeding apace
  • Evolution need no longer proceed blindly, via Darwinian Russian roulette, but can benefit from conscious intelligent design (by humans, for humans)
  • Our ability to remake nature is being extended by our ability to remake human nature
  • We can enable the blind to see, the deaf to hear, and the lame to walk
  • Thanks to medical breakthroughs, we can even bring the dead back to life – that is, the cessation of heart and breath need no longer herald an early grave

But that’s just the start. It’s plausible that, sooner or later, humanity will create artificial superintelligence with powers that are orders of magnitude greater than anything we currently possess. These enhanced powers would bring humanity even closer to the domain of the gods of bygone legends. These powers might even enable technological resurrection.

Some details

In more detail: Profound new engineering capabilities might become available that can bridge remote sections of space and time – perhaps utilising the entanglement features of quantum physics, perhaps creating and exploiting relativistic “wormholes”, or perhaps involving unimagined novel scientific principles. These bridges might allow selected “copying” of consciousness from just before the moment of death, into refined bodies constructed in the far future ready to receive such consciousness. As Jonathan Jones explores, this copying might take place in ways that circumvent the time travel paradoxes that often feature in science fiction.

That’s a lot of “mights” and “maybes”. However, when contemplating the range of ideas for what might happen to consciousness after physical death, it would be wise to include this option. Beyond our deathbed, we might awaken to find ourselves in a state akin to paradise – surrounded by resurrected family and friends. Born 1945, died 2020, resurrected 2085? Born 1895, died 1917, resurrected 2087?

The book contains a number of speculative short stories to whet readers’ appetites to continue this exploration. These stories add colour to what is already a colourful, imaginative book. The artistic license is grounded in a number of solid references to science, philosophy, psychology, and history. For example, there’s a particularly good section on Russian “cosmist” thinkers. There’s a review of how films and novels have dealt with similar ideas over the decades. And the book is brought up to date with a discussion of contemporary transhumanists, including Ray Kurzweil, Ben Goertzel, Jose Cordeiro, and Giulio Prisco.

Futurists like to ask three questions about forthcoming scenarios. Are they credible (as opposed to being mere flights of fantasy)? Are they actionable, in that individual human actions could alter their probability of occurring? And are they desirable?

All three questions get an airing in the pages of the book Jonathan Jones has written. To keep matters short, for now I’ll focus on the third question.

The third question

The idea of technological resurrection could provide much-needed solace for people whose lives otherwise seem wretched. Perhaps death will cease to be viewed as a one-way ticket to eternal oblivion. What’s more, the world might benefit mightily from a new common quest to advance human capability, safely, beyond the existential perils of modern social angst, towards being able to make technological resurrection a reality. That’s a shared purpose which would help humanity transcend our present-day pettiness. It’s a route to make humanity truly great.

However, from other points of view, the idea of technological resurrection could be viewed as an unhelpful distraction. Similar to how religion was criticised by Karl Marx as being “the opium of the people” – an illusory “pie in the sky when you die” – the vague prospect of technological resurrection could dissuade people from taking important steps to secure or improve their long-term health prospects. It might hold them back from:

  • Investigating and arranging cryonics support standby services
  • Channelling funds and resources to those researchers who may be on the point of abolishing aging
  • Encouraging the adoption of health-promoting lifestyles, economic policies, and beneficial diets and supplements
  • Accelerating the roll-out of technoprogressive measures that will raise people around the world out of relative poverty and into relative prosperity

Finally, the idea of technological resurrection may also fill some minds with dread and foreboding – if they realise that devious, horrible actions from their past, which they believed were secret, might become more widely known by a future superintelligence. If that superintelligence has the inclination to inflict a punitive (hellish) resurrection, well, things gain a different complexion.

There’s a great deal more that deserves to be said about technological resurrection. I’m already thinking of organising some public meetings on this topic. In the meantime, I urge readers to explore the book Jonathan Jones has written. That book serves up its big ideas in ways that are playful, entertaining, and provocative. But the ideas conveyed by the light-hearted text may live in your mind long after you have closed the book.

PS I’ve addressed some of these questions from a different perspective in Chapter 12, “Radical alternatives”, of my own 2016 book “The Abolition of Aging”.
