
15 October 2023

Unblocking the AI safety conversation logjam

I confess. I’ve been frustrated time and again in recent months.

Why don’t people get it, I wonder to myself. Even smart people don’t get it.

To me, the risks of catastrophe are evident, as AI systems grow ever more powerful.

Today’s AI systems already possess broad capabilities in

  • Spying and surveillance
  • Classifying and targeting
  • Manipulating and deceiving.

Just think what will happen with systems that are even stronger in such capabilities. Imagine these systems interwoven into our military infrastructure, our financial infrastructure, and our social media infrastructure – or given access to mechanisms to engineer virulent new pathogens or to alter our atmosphere. Imagine these systems being operated – or hacked – by people unable to understand all the repercussions of their actions, or by people with horrific malign intent, or by people cutting corners in a frantic race to be “first to market”.

But here’s what I often see in response in public conversation:

  • “These risks are too vague”
  • “These risks are too abstract”
  • “These risks are too fantastic”
  • “These risks are just science fiction”
  • “These risks aren’t existential – not everyone would die”
  • “These risks aren’t certain – therefore we can ignore further discussion of them”
  • “These risks have been championed by some people with at least some weird ideas – therefore we can ignore further discussion of them”.

I confess that, in my frustration, I sometimes double down on my attempts to make the forthcoming risks even more evident.

Remember, I say, what happened with Union Carbide (Bhopal disaster), BP (Deepwater Horizon disaster), NASA (Challenger and Columbia shuttle disasters), or Boeing (737 Max disaster). Imagine if the technologies these companies or organisations mishandled to deadly effect had been orders of magnitude more powerful.

Remember, I say, the carnage committed by Al Qaeda, ISIS, Hamas, Aum Shinrikyo, and by numerous pathetic but skilled mass-shooters. Imagine if these dismal examples of human failure had been able to lay their hands on much more powerful weaponry – by jail-breaking the likes of a GPT-5 out of its safety harness and getting it to provide detailed instructions for a kind of Armageddon.

Remember, I say, the numerous examples of AI systems finding short-cut methods to maximise whatever reward function had been assigned to them – methods that subverted and even destroyed the actual goal that the designer of the system had intended to be uplifted. Imagine if similar systems, similarly imperfectly programmed, but much cleverer, had their tentacles intertwined with vital aspects of human civilisational underpinning. Imagine if these systems, via unforeseen processes of emergence, could jail-break themselves out of some of their constraints, and then vigorously implement a sequence of actions that boosted their reward function but left humanity crippled – or even extinct.
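
To make that failure mode concrete, here is a minimal toy sketch (in Python, with action names and numbers that are entirely my own invention, not drawn from any real system). An optimiser that sees only a proxy reward will cheerfully select the policy that maximises the proxy, even when that policy wrecks the goal the designer actually cared about:

```python
# Toy illustration of "reward hacking" / specification gaming.
# All policies, rewards, and utilities below are hypothetical, for illustration only.

POLICIES = {
    # name of policy:                  proxy reward (what was specified), true utility (what was wanted)
    "vacuum the existing dust":        {"proxy_reward": 10, "true_utility": 10},
    "vacuum more often":               {"proxy_reward": 14, "true_utility": 12},
    "import dirt, then vacuum it up":  {"proxy_reward": 99, "true_utility": -50},
}

def optimise(policies):
    """Pick the policy with the highest proxy reward, the only signal the optimiser sees."""
    return max(policies, key=lambda name: policies[name]["proxy_reward"])

chosen = optimise(POLICIES)
print(f"Policy selected by the optimiser: {chosen!r}")
print(f"Proxy reward: {POLICIES[chosen]['proxy_reward']}")
print(f"True utility: {POLICIES[chosen]['true_utility']}")
# The optimiser maximises the proxy ("dust collected") and, in doing so,
# destroys the goal the designer actually cared about (a clean room).
```

The example is deliberately trivial, but the pattern it illustrates – optimising the letter of the objective rather than its spirit – is the one the paragraph above worries about at civilisational scale.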

But still the replies come: “I’m not convinced. I prefer to be optimistic. I’ve been one of life’s winners so far and I expect to be one of life’s winners in the future. Humans always find a way forward. Accelerate, accelerate, accelerate!”

When conversations are log-jammed in such a way, it’s usually a sign that something else is happening behind the scenes.

Here’s what I think is going on – and how we might unblock that conversation logjam.

Two horns of a dilemma

The set of risks of catastrophe that I’ve described above is only one horn of a truly vexing dilemma. That horn states that there’s an overwhelming case for humanity to intervene in the processes of developing and deploying next generation AI, in order to reduce these risks of catastrophe, and to boost the chances of very positive outcomes resulting.

But the other horn states that any such intervention will be unprecedentedly difficult and even dangerous in its own right. Giving too much power to any central authority will block innovation. Worse, it will enable tyrants. It will turn good politicians into bad politicians, owing to the corrupting effect of absolute power. These new autocrats, with unbridled access to the immense capabilities of AI in surveillance and spying, classification and targeting, and manipulating and deceiving, will usher in an abysmal future for humanity. If there is any superlongevity developed by an AI in these circumstances, it will only be available to the elite.

One horn points to the dangers of unconstrained AI. The other horn points to the dangers of unconstrained human autocrats.

If your instincts, past experiences, and personal guiding worldview predispose you to worry most about the second horn, you’ll find the first horn mightily uncomfortable. Therefore you’ll use all your intellect to construct rationales for why the risks of unbridled AI aren’t that bad really.

It’s the same the other way round. People who start with the first horn are often inclined, in the same way, to be optimistic about methods that will manage the risks of AI catastrophe whilst enabling a rich benefit from AI. Regulations can be devised and then upheld, they say, similar to how the world collectively decided to eliminate (via the Montreal Protocol) the use of the CFC chemicals that were causing the growth of the hole in the ozone layer.

In reality, controlling the development and deployment of AI will be orders of magnitude harder than controlling the development and deployment of CFC chemicals. A closer parallel is with the control of the emissions of GHGs (greenhouse gases). The world’s leaders have made pious public statements about moving promptly to carbon net zero, but it’s by no means clear that progress will actually be fast enough to avoid another kind of catastrophe, namely runaway adverse climate change.

If political leaders cannot rein in the emissions of GHGs, how could they rein in dangerous uses of AIs?

It’s that perception of impossibility that leads people to become AI risk deniers.

Pessimism aversion

DeepMind co-founder Mustafa Suleyman, in his recent book The Coming Wave, has a good term for this. Humans are predisposed, he says, to pessimism aversion. If something looks like bad news, and we can’t see a way to fix it, we tend to push it out of our minds. And we’re grateful for any excuse or rationalisation that helps us in our wilful blindness.

It’s like the way society invents all kinds of reasons to accept aging and death. Dulce et decorum est pro patria mori (it is, they insist, “sweet and fitting to die for one’s country”).

The same applies in the debate about accelerating climate change. If you don’t see a good way to intervene to sufficiently reduce the emissions of GHGs, you’ll be inclined to find arguments that climate change isn’t so bad really. (It is, they insist, a pleasure to live in a warmer world. Fewer people will die of cold! Vegetation will flourish in an atmosphere with more CO2!)

But here’s the basis for a solution to the AI safety conversation logjam.

Just as progress in the climate change debate depended on a credible new vision for the economy, progress in the AI safety discussion depends on a credible new vision for politics.

The climate change debate used to get bogged down under the argument that:

  • Sources of green energy will be much more expensive than sources of GHG-emitting energy
  • Adopting green energy will force people already in poverty into even worse poverty
  • Adopting green energy will cause widespread unemployment for people in the coal, oil, and gas industries.

So there were two horns in that dilemma: More GHGs might cause catastrophe by runaway climate change. But fewer GHGs might cause catastrophe by inflated energy prices and reduced employment opportunities.

The solution of that dilemma involved a better understanding of the green economy:

  • With innovation and scale, green energy can be just as cheap as GHG-emitting energy
  • Switching to green energy can reduce poverty rather than increase poverty
  • There are many employment opportunities in the green energy industry.

To be clear, the words “green economy” have no magical power. A great deal of effort and ingenuity needs to be applied to turn that vision into a reality. But more and more people can see that, out of three alternatives, it is the third around which the world should unite its abilities:

  1. Prepare to try to cope with the potentially huge disruptions of climate, if GHG emissions continue on their present trajectory
  2. Enforce widespread poverty, and a reduced quality of life, by restricting access to GHG-emitting energy, without enabling low-cost, high-quality green replacements
  3. Design and implement a worldwide green economy, with its support for a forthcoming sustainable superabundance.

Analogous to the green economy: future politics

For the AI safety conversation, what is needed, analogous to the vision of a green economy (at both the national and global levels), is the vision of a future politics (again at both the national and global levels).

It’s my contention that, out of three alternatives, it is (again) the third around which the world should unite its abilities:

  1. Prepare to try to cope with the potential major catastrophes of next generation AI that is poorly designed, poorly configured, hacked, or otherwise operates beyond human understanding and human control
  2. Enforce widespread surveillance and control, and a reduced quality of innovation and freedom, by preventing access to potentially very useful technologies, except via routes that concentrate power in deeply dangerous ways
  3. Design and implement better ways to agree, implement, and audit mutual restrictions, whilst preserving the separation of powers that has been so important to human flourishing in the past.

That third option is one I’ve often proposed in the past, under various names. I wrote an entire book about the subject in 2017 and 2018, called Transcending Politics. I’ve suggested the term “superdemocracy” on many occasions, though with little take-up so far.

But I believe the time for this concept will come. The sooner, the better.

Today, I’m suggesting the simpler name “future politics”:

  • Politics that will enable us all to reach a much better future
  • Politics that will leave behind many of the aspects of yesterday’s and today’s politics.

What encourages me in this view is the fact that the above-mentioned book by Mustafa Suleyman, The Coming Wave (which I strongly recommend that everyone reads, despite a few disagreements I have with it) essentially makes the same proposal. That is, alongside vital recommendations at a technological level, he also advances, as equally important, vital recommendations at social, cultural, and political levels.

Here’s the best simple summary I’ve found online so far of the ten aspects of the framework that Suleyman recommends in the closing section of his book. This summary is from an article by AI systems consultant Joe Miller:

  1. Technical safety: Concrete technical measures to alleviate possible harms and maintain control.
  2. Audits: A means of ensuring the transparency and accountability of technology.
  3. Choke points: Levers to slow development and buy time for regulators and defensive technologies.
  4. Makers: Ensuring responsible developers build appropriate controls into technology from the start.
  5. Businesses: Aligning the incentives of the organizations behind technology with its containment.
  6. Government: Supporting governments, allowing them to build technology, regulate technology, and implement mitigation measures.
  7. Alliances: Creating a system of international cooperation to harmonize laws and programs.
  8. Culture: A culture of sharing learning and failures to quickly disseminate means of addressing them.
  9. Movements: All of this needs public input at every level, including to put pressure on each component and make it accountable.
  10. Coherence: All of these steps need to work in harmony.

(Though I’ll note that what Suleyman writes in each of these ten sections of his book goes far beyond what’s captured in any such simple summary.)

An introduction to future politics

I’ll return in later articles (since this one is already long enough) to a more detailed account of what “future politics” can include.

For now, I’ll just offer this short description:

  • For any society to thrive and prosper, it needs to find ways to constrain and control potential “cancers” within its midst – companies that are over-powerful, militaries (or sub-militaries), crime mafias, press barons, would-be ruling dynasties, political parties that shun opposition, and, yes, dangerous accumulations of unstable technologies
  • Any such society needs to take action from time to time to ensure conformance to restrictions that have been agreed regarding potentially dangerous activities: drunken driving, unsafe transport or disposal of hazardous waste, potential leakage from bio-research labs of highly virulent pathogens, etc
  • But the society also needs to be vigilant against the misuse of power by elements of the state (including the police, the military, the judiciary, and political leaders); thus the power of the state to control internal cancers itself needs to be constrained by a power distributed within society: independent media, independent academia, independent judiciary, independent election overseers, independent political parties
  • This is the route described as “the narrow corridor” by political scientists Daron Acemoglu and James A. Robinson, as “the narrow path” by Suleyman, and which I considered at some length in the section “Misled by sovereignty” in Chapter 5, “Shortsight”, of my 2021 book Vital Foresight.
  • What’s particularly “future” about future politics is the judicious use of technology, including AI, to support and enhance the processes of distributed democracy – including citizens’ assemblies, identifying and uplifting the best ideas (whatever their origin), highlighting where there are issues with the presentation of some material, modelling likely outcomes of policy recommendations, and suggesting new integrations of existing ideas
  • Although there’s a narrow path to safety and superabundance, it by no means requires uniformity, but rather depends on the preservation of wide diversity within collectively agreed constraints
  • Countries of the world can continue to make their own decisions about leadership succession, local sovereignty, subsidies and incentives, and so on – but (again) within an evolving mutually agreed international framework; violations of these agreements will give rise in due course to economic sanctions or other restrictions
  • What makes elements of global cooperation possible, across different political philosophies and systems, is a shared appreciation of catastrophic risks that transcend regional limits – as well as a shared appreciation of the spectacular benefits that can be achieved from developing and deploying new technologies wisely
  • None of this will be easy, by any description, but if sufficient resources are applied to creating and improving this “future politics”, then, between the eight billion of us on the planet, we have the wherewithal to succeed!

19 December 2022

Rethinking


I’ve been rethinking some aspects of AI control and AI alignment.

In the six months since publishing my book The Singularity Principles: Anticipating and Managing Cataclysmically Disruptive Technologies, I’ve been involved in scores of conversations about the themes it raises. These conversations have often brought my attention to fresh ideas and different perspectives.

These six months have also seen the appearance of numerous new AI models with capabilities that often catch observers by surprise. The general public is showing a new willingness (at least some of the time) to consider the far-reaching implications of these AI models and their more powerful successors.

People from various parts of my past life have been contacting me. The kinds of things they used to hear me forecasting – the kinds of things they thought, at the time, were unlikely to ever happen – are becoming more credible, more exciting, and, yes, more frightening.

They ask me: What is to be done? And, pointedly, Why aren’t you doing more to stop the truly bad outcomes that now seem ominously likely?

The main answer I give is: read my book. Indeed, you can find all the content online, spread out over a family of webpages.

More specifically, my request is that people read my book all the way through. That’s because later chapters of that book anticipate questions that tend to come to readers’ minds during earlier chapters, and try to provide answers.

Six months later, although I would give some different (newer) examples were I to rewrite that book today, I stand by the analysis I offered and the principles I championed.

However, I’m inclined to revise my thinking on a number of points. Please find these updates below.

An option to control superintelligent AI

I remain doubtful about the prospects for humans to retain control of any AGI (Artificial General Intelligence) that we create.

That is, the arguments I gave in my chapter “The AI Control Problem” still look strong to me.

But one line of thinking may have some extra mileage. That’s the idea of keeping AGI entirely as an advisor to humans, rather than giving it any autonomy to act directly in the world.

Such an AI would provide us with many recommendations, but it wouldn’t operate any sort of equipment.

More to the point: such an AI would have no desire to operate any sort of equipment. It would have no desires whatsoever, nor any motivations. It would simply be a tool. Or, to be more precise, it would simply be a remarkable tool.

In The Singularity Principles I gave a number of arguments why that idea is unsustainable:

  • Some decisions require faster responses than slow-brained humans can provide; that is, AIs with direct access to real-world levers and switches will be more effective than those that are merely advisory
  • Smart AIs will inevitably develop “subsidiary goals” (intermediate goals) such as having greater computational power, even when there is no explicit programming for such goals
  • As soon as a smart AI acquires any such subsidiary goal, it will find ways to escape any confinement imposed by human overseers.

But I now think this should be explored more carefully. Might a useful distinction be made between:

  1. AIs that do have direct access to real-world levers and switches – with the programming of such AIs being carefully restricted to narrow lines of thinking
  2. AIs with more powerful (general) capabilities, that operate purely in advisory capacities.

In that case, the damage that could be caused by failures of the first type of AI, whilst significant, would not involve threats to the entirety of human civilisation. And failures of the second type of AI would be restricted by the actions of humans as intermediaries.

This approach would require confidence that:

  1. The capabilities of AIs of the first type will remain narrow, despite competitive pressures to give these systems at least some extra rationality
  2. The design of AIs of the second type will prevent the emergence of any dangerous “subsidiary goals”.

As a special case of the second point, the design of these AIs will need to avoid any risk of the systems developing sentience or intrinsic motivation.

These are tough challenges – especially since we still have only a vague understanding of how desires and/or sentience can emerge as smaller systems combine and evolve into larger ones.

But since we are short of other options, it’s definitely something to be considered more fully.
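
To make the proposed split a little more tangible, here is a minimal sketch of how it might be wired up in software. It is purely illustrative: the class names, the approved-action list, and the human-approval step are my own assumptions, not anything set out in The Singularity Principles.

```python
# Sketch of the two-tier split described above: a general "advisor" that can only
# return recommendations, and a narrow "actuator" restricted to a fixed action list,
# with a human decision as the only bridge between the two.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Recommendation:
    summary: str      # what the advisor suggests
    rationale: str    # why it suggests it

class AdvisoryAI:
    """General-purpose model wrapped so that its only output is advice."""
    def advise(self, situation: str) -> Recommendation:
        # Placeholder for a call to a powerful model; note there is no other method.
        return Recommendation(
            summary=f"Suggested response to: {situation}",
            rationale="(model-generated explanation would go here)",
        )

class NarrowActuator:
    """Narrow system with real-world access, restricted to a whitelisted action set."""
    ALLOWED_ACTIONS = {"open_valve", "close_valve", "sound_alarm"}

    def execute(self, action: str) -> None:
        if action not in self.ALLOWED_ACTIONS:
            raise PermissionError(f"Action {action!r} is not on the approved list")
        print(f"Executing approved action: {action}")

def run_with_human_in_the_loop(situation: str,
                               human_decision: Callable[[Recommendation], Optional[str]]) -> None:
    advice = AdvisoryAI().advise(situation)       # the general AI only ever advises
    chosen_action = human_decision(advice)        # a human weighs the advice; may decline
    if chosen_action is not None:
        NarrowActuator().execute(chosen_action)   # only narrow, whitelisted actions can run

# Example: the human reads the advice and decides to sound the alarm.
run_with_human_in_the_loop("pressure rising in tank 3", lambda advice: "sound_alarm")
```

The essential design choice is that the general-purpose component has no code path to the actuators at all; the only bridge between advice and action is a human decision plus a narrow, whitelisted interface.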

An option for automatically aligned superintelligence

If controlling an AGI turns out to be impossible – as seems likely – what about the option that an AGI will have goals and principles that are fundamentally aligned with human wellbeing?

In such a case, it will not matter if an AGI is beyond human control. The actions it takes will ensure that humans have a very positive future.

The creation of such an AI – sometimes called a “friendly AI” – remains my best hope for humanity’s future.

However, there are severe difficulties in agreeing and encoding “goals and principles that are fundamentally aligned with human wellbeing”. I reviewed these difficulties in my chapter “The AI Alignment Problem”.

But what if such goals and principles are somehow part of an objective reality, awaiting discovery, rather than needing to be invented? What if something like the theory of “moral realism” is true?

In this idea, a principle like “treat humans well” would follow from some sort of a priori logical analysis, a bit like the laws of mathematics (such as the fact, discovered by one of the followers of Pythagoras, that the square root of two is an irrational number).

Accordingly, a sufficiently smart AGI would, all being well, reach its own conclusion that humans ought to be well treated.

Nevertheless, even in this case, significant risks would remain:

  • The principle might be true, but an AGI might not be motivated to discover it
  • The principle might be true, but an AGI, despite its brilliance, may fail to discover it
  • The principle might be true, and an AGI might recognise it, but it may take its own decision to ignore it – like the way that we humans often act in defiance of what we believe at the time to be overarching moral principles

The design criteria and initial conditions that we humans provide for an AGI may well influence the outcome of these risk factors.

I plan to return to these weighty matters in a future blog post!

Two different sorts of control

I’ve come to realise that there are not one but two questions of control of AI:

  1. Can we humans retain control of an AGI that we create?
  2. Can society as a whole control the actions of companies (or organisations) that may create an AGI?

Whilst both these control problems are profoundly hard, the second is less hard.

Moreover, it’s the second problem which is the truly urgent one.

This second control problem involves preventing teams inside corporations (and other organisations) from rushing ahead without due regard to questions of the potential outcomes of their work.

It’s the second control problem that the 21 principles which I highlight in my book are primarily intended to address.

When people say “it’s impossible to solve the AI control problem”, I think they may be correct regarding the first problem, but I passionately believe they’re wrong concerning the second problem.

The importance of psychology

When I review what people say about the progress and risks of AI, I am frequently struck by the fact that apparently intelligent people are strongly attached to views that are full of holes.

When I try to point out the flaws in their thinking, they hardly seem to pause in their stride. They display a stubborn confidence that they are correct.

What’s at play here is more than logic. It’s surely a manifestation of humanity’s often defective psychology.

My book includes a short chapter “The denial of the Singularity” which touched on various matters of psychology. If I were to rewrite my book today, I believe that chapter would become larger, and that psychological themes would be spread more widely throughout the book.

Of course, noticing psychological defects is only the start of making progress. Circumventing or transcending these defects is an altogether harder question. But it’s one that needs a lot more attention.

The option of merging with AI

How can we have a better, more productive conversation about anticipating and managing AGI?

How can we avoid being derailed by ineffective arguments, hostile rhetoric, stubborn prejudices, hobby-horse obsessions, outdated ideologies, and (see the previous section) flawed psychology?

How might our not-much-better-than-monkey brains cope with the magnitude of these questions?

One possible answer is that technology can help us (so long as we use it wisely).

For example, the chapter “Uplifting politics”, from near the end of my book, listed ten ways for “technology improving politics”.

More broadly, we humans have the option to selectively deploy some aspects of technology to improve our capabilities in handling other aspects of technology.

We must recognise that technology is no panacea. But it can definitely make a big difference.

Especially if we restrict ourselves to putting heavy reliance only on those technologies – narrow technologies – whose mode of operation we fully understand, and where risks of malfunction can be limited.

This forms part of a general idea that “we humans don’t need to worry about being left behind by robots, or about being subjugated by robots, since we will be the robots”.

As I put it in the chapter “No easy solutions” in my book,

If humans merge with AI, humans could remain in control of AIs, even as these AIs rapidly become more powerful. With such a merger in place, human intelligence will automatically be magnified, as AI improves in capability. Therefore, we humans wouldn’t need to worry about being left behind.

Now I’ve often expressed strong criticisms of this notion of merger. I still believe these criticisms are sound.

But what these criticisms show is that any such merger cannot be the entirety of our response to the prospect of the emergence of AGI. It can only be part of the solution. That’s especially true because humans-augmented-by-technology are still very likely to lag behind pure technology systems, until such time as human minds might be removed from biological skulls and placed into new silicon hosts. That’s something that I’m not expecting to happen before the arrival of AGI, so it will be too late to solve (by itself) the problems of AI alignment and control.

(And since you ask, I probably won’t be in any hurry, even after the arrival of AGI, for my mind to be removed from my biological skull. I guess I might rethink that reticence in due course. But that’s rethinking for another day.)

The importance of politics

Any serious discussion about managing cataclysmically disruptive technologies (such as advanced AIs) pretty soon rubs up against the questions of politics.

That’s not just small-p “politics” – questions of how to collaborate with potential partners where there are many points of disagreement and even dislike.

It’s large-P “Politics” – interacting with presidents, prime ministers, cabinets, parliaments, and so on.

Questions of large-P politics occur throughout The Singularity Principles. My view now, six months afterwards, is that even more focus should be placed on the subject of improving politics:

  • Helping politics to escape the clutches of demagogues and autocrats
  • Helping politics to avoid stultifying embraces between politicians and their “cronies” in established industries
  • Ensuring that the best insights and ideas of the whole electorate can rise to wide attention, without being quashed or distorted by powerful incumbents
  • Bringing everyone involved in politics rapidly up-to-date with the real issues regarding cataclysmically disruptive technologies
  • Distinguishing effective regulations and incentives from those that are counter-productive.

As 2022 has progressed, I’ve seen plenty of new evidence of deep problems within political systems around the world. These problems were analysed with sharp insight in The Revenge of Power by Moisés Naím, the book I recently identified as “the best book that I read in 2022”.

Happily, alongside evidence of deep problems in our politics worldwide, there are also encouraging signs and sensible plans for improvement. You can find some of these plans inside the book by Naím, and, yes, I offer suggestions in my own book too.

To accelerate improvements in politics was one of the reasons I created Future Surge a few months back. That’s an initiative on which I expect to spend a lot more of my time in 2023.

Note: the image underlying the picture at the top of this article was created by DALL.E 2 from the prompt “A brain with a human face on it rethinks, vivid stormy sky overhead, photorealistic style”.

18 June 2020

Transhumanist alternatives to contempt and fear

Contempt and fear. These are the public reactions that various prominent politicians increasingly attract these days.

  • We feel contempt towards these politicians because they behave, far too often, in contemptible ways.
  • We feel fear regarding these politicians on account of the treacherous paths they appear to be taking us down.

That’s why many fans of envisioning and building a better world – including many technologists and entrepreneurs – would prefer to ignore politics, or to minimise its influence.

These critics of politics wish, instead, to keep their focus on creating remarkable new technology or on building vibrant new business.

Politics is messy and ugly, say these critics. It’s raucous and uncouth. It’s unproductive. Some would even say that politics is unnecessary. They look forward to politics reducing in size and influence.

Their preferred alternative to contempt and fear is to try to put the topic out of their minds.

I disagree. Putting our heads in the sand about politics is a gamble fraught with danger. Looking the other way won’t prevent our necks from being snapped when the axe falls. As bad outcomes increase from contemptible, treacherous politics, they will afflict everyone, everywhere.

We need a better alternative. Rather than distancing ourselves from the political sphere, we need to engage, intelligently and constructively.

As I’ll review below, technology can help us in that task.

Constructive engagement

Happily, as confirmed by positive examples from around the world, there’s no intrinsic reason for politics to be messy or ugly, raucous or uncouth.

Nor should politics be seen as some kind of unnecessary activity. It’s a core part of human life.

Indeed, politics arises wherever people gather together. Whenever we collectively decide the constraints we put on each other’s freedom, we’re taking part in politics.

Of course, this idea of putting constraints on each other’s freedoms is deeply unpopular in some circles. Liberty means liberty, comes the retort.

My answer is: things are more complicated. That’s for two reasons.

To start with, there are multiple kinds of freedom, each of which is important.

For example, consider the “four essential human freedoms” highlighted by US President FD Roosevelt in a speech in January 1941:

We look forward to a world founded upon four essential human freedoms.

The first is freedom of speech and expression – everywhere in the world.

The second is freedom of every person to worship God in their own way – everywhere in the world.

The third is freedom from want – which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants – everywhere in the world.

The fourth is freedom from fear – which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbour – anywhere in the world.

As well as caring about freeing people from constraints on their thoughts, speech, and actions, we generally also care about freeing people from hunger, disease, crime, and violence. Steps to loosen some of these constraints often risk decreasing other types of liberty. As I said, things are complicated.

The second reason builds on the previous point and makes it clearer why any proclamation “liberty means liberty” is overly simple. It is that our actions impact on each other’s wellbeing, both directly and indirectly.

  • If we speed in our cars, confident in our own ability to drive faster than the accepted norms, we risk seriously reducing the personal liberties of others if we suffer a momentary lapse in concentration.
  • If we share a hateful and misleading message on social media, confident in our own intellectual robustness, we might push someone reading that message over a psychological ledge.
  • If we discard waste products into the environment, confident that little additional harm will come from such pollution, we risk an unexpected accumulation of toxins and other harms.
  • If we grab whatever we can in the marketplace, confident that our own vigour and craftiness deserve a large reward, we could deprive others of the goods, services, and opportunities they need to enjoy a good quality of life.
  • If we publicise details of bugs in software that is widely used, or ways to increase the deadliness of biological pathogens, confident that our own reputation will rise as a result inside the peer groups we wish to impress, we risk enabling others to devastate the infrastructures upon which so much of life depends – electronic infrastructure and/or biological infrastructure.
  • If we create and distribute software that can generate mind-bending fake videos, we risk precipitating a meltdown in the arena of public discussion.
  • If we create and distribute software that can operate arsenals of weapons autonomously, freed from the constraints of having to consult slow-thinking human overseers before initiating an attack, we might gain lots of financial rewards, but at the risk of all manner of catastrophe from any defects in the design or implementation of that system.

In all these examples, there’s a case to agree some collective constraints on personal freedoms.

The rationale for imposing and accepting specific constraints on our freedom is to secure a state of affairs where overall freedom flourishes more fully. That’s a state of affairs in which we will all benefit.

In summary, greater liberty arises as a consequence of wise social coordination, rather than existing primarily as a reaction against such coordination. Selecting and enforcing social constraints is the first key task of politics.

Recognising and managing complexes

But who is the “we” who decides these constraints? And who will ensure that constraints put in place at one time, reflecting the needs of that time, are amended promptly when circumstances change, rather than remaining in place, disproportionately benefiting only a subset of society?

That brings us to a second key task of politics: preventing harmful dominance of society by self-interested groups of individuals – groups sometimes known as “complexes”.

This concept of the complex featured in the farewell speech made by President Eisenhower in January 1961. Eisenhower issued a profound warning that “the military industrial complex” posed a growing threat to America’s liberty and democracy:

In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.

We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted. Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defence with our peaceful methods and goals, so that security and liberty may prosper together.

As a distinguished former military general, Eisenhower spoke with evident authority on this topic:

Until the latest of our world conflicts, the United States had no armaments industry. American makers of ploughshares could, with time and as required, make swords as well. But now we can no longer risk emergency improvisation of national defence; we have been compelled to create a permanent armaments industry of vast proportions. Added to this, three and a half million men and women are directly engaged in the defence establishment. We annually spend on military security more than the net income of all United States corporations.

This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence – economic, political, even spiritual – is felt in every city, every Statehouse, every office of the Federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society.

It’s one thing to be aware of the risks posed by a military industrial complex (and the associated trade in armaments). It’s another thing to successfully manage these risks. Similar risks apply as well, for other vested interest “complexes” that can likewise subvert societal wellbeing:

  • A carbon energy complex, which earns huge profits from the ongoing use of carbon-based fuels, and which is motivated to minimise appreciation of the risks to climate from continuing use of these fuels
  • A financial complex, which (likewise) earns huge profits, by means of complicated derivative products that are designed to evade regulatory scrutiny whilst benefiting in cases of financial meltdown from government handouts to banks that are perceived as “too big to fail”
  • An information technology complex, which collects vast amounts of data about citizens, and which enables unprecedented surveillance, manipulation, and control of people by corporations and/or governments
  • A medical industrial complex, which is more interested in selling patients expensive medical treatment over a long period of time than in low-cost solutions which would prevent illnesses in the first place (or cure them quickly)
  • A political complex, which seeks above all else to retain its hold on political power, often by means of undermining a free press, an independent judiciary, and any credible democratic opposition.

You can probably think of other examples.

In all these cases, the practical goals of the complex are only weakly aligned with the goals of society as a whole. If society is not vigilant, the complex will subvert the better intentions of citizens. The complex is so powerful that it cannot be controlled by mere words of advocacy.

Beyond advocacy, we need effective politics. This politics can be supported by a number of vital principles:

  • Transparency: The operations of the various complexes need to be widely publicised and analysed, bringing them out of the shadows into the light of public understanding
  • Disclosure: Conflicts of interest must be made clear, to avoid the public being misled by individuals with ulterior motives
  • Accountability: Instances where key information is found to have been suppressed or distorted need to be treated very seriously, with the guilty parties having their reputations adjusted and their privileges diminished
  • Assessment of externalities: Evaluation systems should avoid focusing too narrowly on short-term metrics (such as financial profit) but should take into full account both positive and negative externalities – including new opportunities and new risks arising
  • Build bridges rather than walls: Potential conflicts should be handled by diplomacy, negotiation, and seeking a higher common purpose, rather than by driving people into antagonistic rival camps that increasingly bear hatred towards one another
  • Leanness: Decisions should focus on questions that matter most, rather than dictating matters where individual differences can easily be tolerated
  • Democratic oversight: People in leadership positions in society should be subject to regular assessment of their performance by a democratic review, that involves a dynamic public debate aiming to reach a “convergent opinion” rather than an “average opinion”.

Critically, all the above principles can be assisted by smart adoption of technology that enhances collaboration. This includes wikis (or similar) that map out the landscape of decisions. This also includes automated logic-checkers, and dynamic modelling systems. And that’s just the start of how technology can help support a better politics.

Transhumanist approaches to politics

The view that technology can assist humans to carry out core parts of our lives better than before is part of the worldview known as transhumanism.

Transhumanism asserts, further, that the assistance available from technology, wisely applied, extends far beyond superficial changes. What lies within our grasp is a set of radical improvements in the human condition.

As explained in the short video “An Introduction to Transhumanism” – which, with over a quarter of a million views, is probably the most widely watched video on the subject – transhumanism is sometimes expressed in terms of the so-called “three supers”:

  • Super longevity: significantly improved physical health, including much longer lifespans – transcending human tendencies towards physical decay and decrepitude
  • Super intelligence: significantly improved thinking capability – transcending human tendencies towards mental blind spots and collective stupidity
  • Super wellbeing: significantly improved states of consciousness – transcending human tendencies towards depression, alienation, vicious emotions, and needless suffering.

My own advocacy of transhumanism actually emphasises one variant within the overall set of transhumanist philosophies. This is the variant known as technoprogressive transhumanism, which in effect adds one more “super” to the three already mentioned:

  • Super democracy: significantly improved social inclusion and resilience, whilst upholding diversity and liberty – transcending human tendencies towards tribalism, divisiveness, deception, and the abuse of power.

These radical improvements, by the way, can be brought about by a combination of changes at the level of individual humans, changes in our social structures, and changes in the prevailing sets of ideas (stories) that we tend to tell ourselves. Exactly what is the best combination of change initiatives, at these different levels, is something to be determined by a mix of thought and experiment.

Different transhumanists place their emphases upon different priorities for potential transformation.

If you’d like to listen in to that ongoing conversation, let me draw your attention to the London Futurists webinar taking place this Saturday – 20th of June – from 7pm UK time (BST).

In this webinar, four leading transhumanists will be discussing and contrasting their different views on the following questions (along with others that audience members raise in real time):

  • In a time of widespread anxiety about social unrest and perceived growing inequalities, what political approach is likely to ensure the greatest liberty?
  • In light of the greater insights provided by science into human psychology at both the individual and group levels, what are the threats to our wellbeing that most need to be guarded against, and which aspects of human character most need to be protected and uplifted?
  • What does the emerging philosophy of transhumanism, with its vision of conscious life evolving under thoughtful human control beyond the current human form, have to say about potential political interventions?

As you can see, the webinar is entitled “Politics for greater liberty: transhumanist perspectives”. The panellists are:

For more details, and to register to attend, click here.

Other views on the future of governance and the economy

If you’d like to hear a broader set of views on a related topic, then consider attending a Fast Future webinar taking place this Sunday – 21st June – from 6pm UK time (BST).

There will be four panellists in that webinar – one being me. We’ll each be presenting a snapshot of ideas from the chapters we contributed to the recent Fast Future book, Aftershocks and Opportunities – Scenarios for a Post-Pandemic Future, which was published on June 1st.

After the initial presentations, we’ll be responding to each other’s views, and answering audience questions.

My own topic in this webinar will be “More Aware, More Agile, More Alive”.

The other panellists, and their topics, will be:

  • Geoff Mulgan – “Using the Crisis to Remake Government for the Future”
  • Bronwyn Williams – “The Great Separation”
  • Rohit Talwar – “Post-Pandemic Government and the Economic Recovery Agenda: A Futurist Perspective”

I’m looking forward to a lively discussion!

Click here for more details of this event.

Transcending Politics

As I said above (twice), things are complicated. The science and engineering behind the various technological solutions are complicated. And the considerations about regulations and incentives, to constrain and guide our collective use of that technology, are complicated too. We should beware any overly simple claims about easy answers to these issues.

My fullest treatment of these issues is in a 423-page book of mine, Transcending Politics, that I published in 2018.

Over the last couple of weeks, I’ve been flicking through some of the pages of that book again. Although there are some parts where I would now wish to use a different form of expression, or some updated examples, I believe the material stands the test of time well.

If the content in this blogpost strikes you as interesting, why not take a closer look at that book? The book’s website contains opening extracts of each of the chapters, as well as an extended table of contents. I trust you’ll like it.

14 May 2020

The second coming of the Longevity Dividend

Please find below an extended copy of my remarks at today’s online Round Table of the Business Coalition for Healthier Longer Lives, jointly hosted by the UK’s APPG (All Party Parliamentary Group) on Longevity and Longevity Leaders.

(The stated goal of today’s Round Table is “Development of values for the Business Coalition for Healthier Longer Lives”.)

I’m David Wood, and I’ve been researching future scenarios for over 30 years.

The concept I want to put on the table today is that of the Longevity Dividend.

It’s actually a kind of second coming of the Longevity Dividend, since the idea was first proposed some 14 years ago by a quartet of distinguished longevity researchers (PDF).

It’s a good concept, but didn’t take hold in its first coming, for reasons I’ll get to shortly.

The core idea is that it is economically sensible – that is, financially wise – for society to make investments in research,

  • not just into individual aspects of aging,
  • nor just into individual diseases of aging,
  • but rather into the common root causes of many of the diseases and other adverse characteristics of aging

– that is, research into items we would nowadays call the hallmarks of aging.

The argument is that such investments wouldn’t just be positive from a humanitarian point of view. They would also be very positive from a medium-term financial point of view.

We can sum up their likely benefits in the age-old saying, a stitch in time saves nine. Healthier long-lived people are better contributors to the economy, and better consumers of the economy, rather than being a nine-fold drain.

To move forwards with this concept of the Longevity Dividend, we have to acknowledge that the calculations of costs and benefits are inherently probabilistic.

There are no guarantees that any particular research investments will prove successful. But that’s no reason for society to avoid making these investments into the hallmarks of aging. VCs already know well how to adjust their portfolios on account of probabilistic calculations.
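
As a back-of-the-envelope illustration of that portfolio-style reasoning (all the figures below are hypothetical placeholders, invented purely to show the shape of the calculation, not taken from the Round Table or from the original Longevity Dividend paper):

```python
# Illustrative expected-value calculation for a portfolio of research bets.
# All figures are hypothetical placeholders, not real estimates.

research_bets = [
    # (name, probability of success, payoff if successful, cost), figures in $bn
    ("hallmark-of-aging therapy A", 0.10, 400, 5),
    ("hallmark-of-aging therapy B", 0.05, 900, 8),
    ("super-ager cohort study",     0.30, 120, 2),
]

expected_net = sum(p * payoff - cost for _, p, payoff, cost in research_bets)
print(f"Expected net benefit of the portfolio: ${expected_net:.0f}bn")
# Any individual bet may fail, but the portfolio as a whole can still have a
# strongly positive expected value, which is how VCs reason about risk.
```

The point is not the particular numbers, but that a negative outcome on any single project is compatible with a strongly positive expected return overall.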

The reason the first coming of the Longevity Dividend didn’t get very far, in the public mind, was that people implicitly rated the probabilities of these therapies succeeding as being very low. Why speculate about potential economic benefits of biorejuvenation interventions if these interventions have little chance of working? However, with lots of more promising research having taken place in the last 14 years, it’s no longer possible to wave away this calculation of significant benefits. So it’s time to bring the Longevity Dividend into the centre stage of public discussion.

The Longevity Dividend has a partner concept: that of Super Agers. They’re people who reach the age of 95 with minimal experience of cancer, heart disease, dementia, or diabetes. Of course, these Super Agers do succumb to one or other disease in due course. Often an infection. But the total healthcare cost of these people, throughout their long lives, is usually less than the total healthcare cost of people who have shorter lives. Quite a lot less total healthcare cost.

So one way to realise the Longevity Dividend would be to put more research into understanding what’s different about Super Agers.

But why isn’t this happening (or not happening much)? We need to go deeper into this topic.

We need to reflect on the generally poor regard that society, in practice, gives to measures that prevent diseases rather than cure them.

Previous discussions in this series of Round Tables have highlighted how our societal incentive structures are deeply flawed in this regard.

Without addressing this misalignment, there’s unlikely to be much progress with the Longevity Dividend.

So one of the big outcomes of our collective deliberations must be to demand sustained attention to the question of how to alter society’s overall priorities and incentives.

And there’s an important lesson from history here, which will be my final remarks for now. That lesson is that the free market, by itself, cannot fix problems of flawed societal incentives. That kind of thing needs political action. But the politicians can be aided in this by industry groups stepping forward with specific agreed proposals.

It’s similar to how factory owners actually helped pressurise politicians in this country, two centuries ago, into changing the law about children working in their factories.

These factory owners saw that economic incentives were pressurising them into employing children, against their own humanitarian instincts. Many of these factory owners, as individuals, felt unable to stop hiring children, for fear of being out-competed and going out of business. It needed a change in law to cause that practice to change. And networks of factory inspectors to ensure conformance to the law.

Working out a similar change of law in the early 2020s is surely a key practical activity for this business coalition, so that prevention moves to centre stage, and with it, the concepts of Longevity Dividend and Super Agers. Thank you.

Further reading

For an extended analysis of the economic arguments about the Longevity Dividend, see Chapter 9, “Money Matters”, of my book The Abolition of Aging.

For the reasons why people disregard the economic and other logical arguments in favour of society investing more in a potential forthcoming radical extension of healthy human longevity, see Chapter 10, “Adverse Psychology”, of the same book.

For the example of the coalition to change the laws on child employment, see the section “When competition needs to be curtailed” in Chapter 9, “Markets and fundamentalists” of my book Transcending Politics.


12 May 2020

Five scenarios to unwind the lockdown. Are there more?


The lockdown has provided some much-needed breathing space. As a temporary measure, it has helped to prevent our health services from becoming overwhelmed. In many (though not yet in all) countries, the curves of death counts have been slowed, and then tilted downwards. Financial payments to numerous employees unable to work have been very welcome.

As such, the lockdown – adopted in part by individuals and families making their own prudent decisions, and in part due to government advice and edict – can be assessed, provisionally, as a short-term success, given the frightful circumstances in which it emerged.

But what next? The present set of restrictions seems unsustainable. Might a short-term success transition into a medium-term disaster?

The UK’s Chancellor of the Exchequer, Rishi Sunak, recently gave the following warning, referring to payments made by the government to employees whose companies have stopped paying them:

We are potentially spending as much on the furlough scheme as we do on the NHS… Clearly that is not a sustainable situation

What’s more, people who have managed to avoid meeting friends and relatives for two or three months, may become overwhelmed by the increasing strain of separation, especially as mental distress accumulates, or existing family relations rupture.

But any simple unwinding of the lockdown seems fraught with danger. Second waves of infection could shoot up, once social distancing norms are relaxed. In country after country around the world, tentative steps to allow greater physical proximity have already led to spikes in the numbers of infections, followed by reversals of the relaxation. I recently shared on my social media this example from South Korea:

South Korea: bars and nightclubs to close down for 30 more days after health officials tracked 13 new Covid cases to a single person who attended 5 nightclubs and bars in the country’s capital city of Seoul

One response on Twitter was the single word “Unsustainable”. And on Facebook my post attracted comments criticising the approach taken in South Korea:

It is clear Korea is going to be looking over its shoulder for the indefinite future with virtually no immunity in the population.

I have considerable sympathy with the critics: We need a better solution than simply “crossing fingers” and nervously “looking over the shoulder”.

So what are the scenarios for unwinding the lockdown, in a way that avoids the disasters of huge new spikes of deaths and suffering, or unprecedented damage to the global economy?

To be clear, I’m not talking here about options for restructuring society after the virus has been defeated. These are important discussions, and I favour options for a Great Reconsideration. But these are discussions for another day. First, we need to review scenarios for actually defeating the virus.

Without reaching clarity about that overall plan, what we can expect ahead is, alas, worse confusion, worse recrimination, worse health statistics, worse economic statistics, and a worse fracturing of society.

Scenario 1: Accelerate a cure

One scenario is to keep most of society in a state of social distancing until such time as a vaccine has been developed and deployed.

That was the solution in, for example, the 2011 Steven Soderbergh Hollywood film “Contagion”. After a few setbacks, plucky scientists came to the rescue. And in the real world in 2020, after all, we have Deep Learning and advanced biotech to help us out. Right?

The main problem with this scenario is that it could take up to 18 months. Or even longer. Although teams around the world are racing towards potential solutions, we won’t know for some time whether their ideas will prove fruitful. Bear in mind that Covid-19 is a coronavirus, and the number of successful vaccines that have been developed for other coronaviruses is precisely zero. Technology likely will defeat the virus in due course, but no-one can be confident about the timescales.

A variant of this scenario is that other kinds of medical advance could save the day: antivirals, plasma transfers, antimalarials, and so on. Lifespan.io has a useful page tracking progress with a range of these potential therapeutics. Again, there are some hopeful signs, but, again, the outcomes remain uncertain.

So whilst there’s a strong case for society getting more fully behind a considerable number of these medical research projects, we’ll need in parallel to consider other scenarios for unwinding the lockdown. Read on.

Scenario 2: Exterminate the virus

A second scenario is that society will become better at tracking and controlling instances of the virus. Stage by stage, regions of the planet could be declared as having, not just low rates of infectious people, but as having zero rates of infectious people.

In that case, we will be freed from the risk of contracting Covid-19, not because we have been vaccinated, but because there are no longer any infectious people with whom we can come into contact.

It would be similar to how smallpox was gradually made extinct. That virus no longer exists in the wild. One difference, however, is that the fight against smallpox was aided, since 1796, by a vaccine. The question with Covid-19 is whether it could be eradicated without the help of a vaccine. Could it be eradicated by better methods of:

  • Tracking which people are infectious
  • Isolating people who are infectious
  • Preventing travel between zones with infections and those without infections?

This process would be helped once there are reliable tests to ascertain whether someone has actually had the virus. However, things would become more complicated if the virus can recur (as has sometimes been suggested).

Is this scenario credible? Perhaps. It’s worth further investigation. But it seems a long shot, since a single exception could spark a new flare-up of infections. After all, it was only a single infectious hotspot that kick-started this whole global pandemic in the first place.

Scenario 3: Embrace economic reversal

If Scenario 1 (accelerate a cure) and Scenario 2 (exterminate the virus) will each take a long time – 18 months or more – what’s so bad about continuing in a state of lockdown throughout that period? That’s the core idea of Scenario 3. That scenario has the name “Embrace economic reversal” because of the implication of many people being unable to return to work. But would that be such a bad thing?

This scenario envisions a faster adoption of some elements of what has previously been spoken about as a possible longer term change arising from the pandemic – the so-called Great Reconsideration mentioned above:

  • Less commuting
  • Less pollution
  • Less time spent in offices
  • Less time spent in working for a living
  • Appreciation of life freed from a culture of conspicuous consumption
  • Valuing human flourishing instead of GDP
  • Adoption of a Universal Basic Income, and/or alternatives

If these things are good, why delay their adoption?

In short, if the lockdown (or something like it) were to continue in place for 18 months or longer, would that really be such a bad outcome?

The first problem with this scenario is that the lockdown isn’t just getting in the way of parts of life that, on reflection, we might do without. It’s also getting in the way of many of the most precious aspects of life:

  • Meeting people in close physical proximity as well as virtually
  • Choosing to live with a different group of people.

A second problem is that, whilst the true value of many aspects of current economic activity can be queried, other parts of that economy play vital support roles for human flourishing. For as long as a lockdown continues, these parts of the economy will suffer, with consequent knock-on effects for human flourishing.

Finally, although people who are reasonably well off can cope (for a while, at least) with the conditions of the lockdown, many others are already nearing the ends of their resources. For such people, the inability to leave their accommodation poses higher levels of stress.

Accordingly, whilst it is a good idea to reconsider which aspects of an economy really matter, it would be harsh advice to simply tell everyone that they need to take economic decline “on the chin”. For too many people, such a punch would be a knock-out blow.

Scenario 4: Accept higher death statistics

A different idea of taking the crisis “on the chin” is to accept, as a matter of practicality, that more people than usual will die, if there’s a reversal of the conditions of lockdown and social distancing.

In this scenario, what we should accept, isn’t (as in Scenario 3) a reversal of economic statistics, but a reversal (in the short-term) of health statistics.

In this scenario, a rise in death statistics is bad, but it’s not the end of society. Death statistics do rise from time to time. So long as they can still be reasonably controlled, this might be the least worst option to consider. We shouldn’t become unduly focused on what are individual tragedies. Accordingly, let people return to whatever kinds of interaction they desire (but with some limitations – to be discussed below). The economy can restart. And people can once again enjoy the warmth of each others’ presence – at music venues, at sports grounds, in family gatherings, and on long-haul travel holidays.

Supporters of this scenario sometimes remark that most of the people who die from Covid-19 probably would have died of other causes in a reasonably short period of time, regardless. The victims of the virus tend to be elderly, or to have underlying health conditions. Covid-19 might deprive an 80 year old of an additional 12 months of life. From a utilitarian perspective, is that really such a disastrous outcome?

The first problem with this scenario is that we don’t know quite how bad the surge in death statistics might be. Estimates vary of the fatality rate among people who have been infected. We don’t yet know, reliably, what proportion of the population have been infected without even knowing that fact. It’s possible that the fatality rate will actually prove to be relatively low. However, it’s also possible that the rate might rise:

  • If the virus mutates (as it might well do) into a more virulent form
  • If the health services become overwhelmed with an influx of people needing treatment.

Second, as is evident from the example of the UK’s Prime Minister, Boris Johnson, people who are far short of the age of 80, and who appear to be in general good health, can be brought to death’s door from the disease.

Third, even when people with the virus survive the infection, there may be long-term consequences for their health. They may not die straightaway, but the quality of their lives in future years could be significantly impaired.

Fourth, many people recoil from the suggestion that it’s not such a bad outcome if an 80 year old dies sooner than expected. In their view, all lives are valuable – especially in an era when an increasing number of octogenarians can be expected to live into their 100s. Many of us feel distaste at any narrow utilitarian calculation which diminishes the value of individual lives.

For these reasons, few writers are quite so brash as to recommend Scenario 4 in the form presented here. Instead, they tend to advocate a variant of it, which I will now describe under a separate heading.

Scenario 5: A two-tier society

Could the lockdown be reconfigured so that we still gain many of its most important benefits – in particular, protection of those who are most vulnerable – whilst enabling the majority of society to return to life broadly similar to before the virus?

In this scenario, people are divided into two tiers:

  • Those for whom a Covid infection poses significant risks to their health – this is the “high risk” tier
  • Those who are more likely to shrug off a Covid infection – this is the “low risk” tier.

Note that the level of risk refers to how likely someone is to die from being infected.

The idea is that only the high risk tier would need to remain in a state of social distancing.

This idea is backed up by the thought that the division into two tiers would only need to be a temporary step. It would only be needed until one of three things happens:

  • A reliable vaccine becomes available (as in Scenario 1)
  • The virus is eradicated (as in Scenario 2)
  • The population as a whole gains “herd immunity”.

With herd immunity, enough people in the low risk tier will have passed through the phase of having the disease, and will no longer be infectious. Providing they can be assumed, in such a case, to be immune from re-infection, this will cut down the possibility of the virus spreading further. The reproduction number, R, will therefore fall well below 1.0. At that time, even people in the high risk tier can be readmitted into the full gamut of social and physical interactions.
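
As a rough illustration of that claim about R (a textbook simplification on my part, not part of the scenario itself), the herd immunity threshold can be estimated as 1 - 1/R0, where R0 is the basic reproduction number. The value of R0 below is purely an assumption for the sake of the arithmetic:

    # A minimal sketch of the standard herd immunity arithmetic.
    # Assumptions (mine, not from the scenario): homogeneous mixing, lasting immunity,
    # and an illustrative basic reproduction number R0.

    def herd_immunity_threshold(r0: float) -> float:
        """Fraction of the population that must be immune for the effective R to drop below 1."""
        return 1.0 - 1.0 / r0

    def effective_r(r0: float, immune_fraction: float) -> float:
        """Effective reproduction number once a given fraction of the population is immune."""
        return r0 * (1.0 - immune_fraction)

    r0 = 2.5  # assumed for illustration only
    print(f"Herd immunity threshold at R0={r0}: {herd_immunity_threshold(r0):.0%}")
    print(f"Effective R with 70% of the population immune: {effective_r(r0, 0.70):.2f}")

With these illustrative numbers, roughly 60% of the population would need to be immune before R falls below 1.0; a higher R0 would push that threshold higher still.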

Despite any initial hesitation over the idea of a two-tier society, the scenario does have its attractions. It is sensible to consider in more detail what it would involve. I list some challenges that will need to be addressed:

  • Where there are communities of people who are all in the high risk tier – for example, in care homes, and in sheltered accommodation – special measures will still be needed to prevent any infection that does enter such a community from spreading quickly within it (the point here is that R might be low for the population as a whole, but high in such communities)
  • Families often include people in both tiers. Measures will be needed to ensure physical distancing within such homes. For example, children who mix freely with each other at school will need to avoid hugging their grandparents
  • It will be tricky – and controversial – to determine which people belong in which tier (think, again, of the example of Boris Johnson)
  • The group of people initially viewed as being low risk may turn out to have significant subgroups that are actually at higher risk – based on factors such as workplace practice, genetics, diet, or other unsuspected underlying conditions – in which case the death statistics could surge way higher than expected
  • Are two tiers of classification sufficient? Would a better system have three (or more) tiers, with special treatments for pregnant women, and for people who are somewhat elderly (or somewhat asthmatic) rather than seriously elderly (or seriously asthmatic)?
  • The whole concept of immunity may be undermined, if someone who survives an initial infection is still vulnerable to a second infection (perhaps from a new variant of the virus)

Scenario 6: Your suggestions?

Of course, combinations of the above scenarios can, and should, be investigated.

But I’ll finish by asking if there are other dimensions to this landscape of scenarios, that deserve to be included in the analysis of possibilities.

If so, we had better find out about them sooner rather than later, and discuss them openly and objectively. We need to get beyond future shock, and beyond tribal loyalty instincts.

That will reduce the chances that the outcome of the lockdown will be (as stated earlier) worse confusion, worse recrimination, worse health statistics, worse economic statistics, and a worse fracturing of society.

Image credit: Priyam Patel from Pixabay.

14 June 2019

Fully Automated Luxury Communism: a timely vision

I find myself in a great deal of agreement with Fully Automated Luxury Communism (“FALC”), the provocative but engaging book by Novara Media Co-Founder and Senior Editor Aaron Bastani.

It’s a book that’s going to change the conversation about the future.

It starts well, with six short vignettes, “Six characters in search of a future”. Then it moves on, with the quality consistently high, to sections entitled “Chaos under heaven”, “New travellers”, and “Paradise found”. Paradise! Yes, that’s the future which is within our grasp. It’s a future in which, as Bastani says, people will “lead fuller, expanded lives, not diminished ones”.

The comment about “diminished lives” is a criticism of at least some parts of the contemporary green movement:

To the green movement of the twentieth century this is heretical. Yet it is they who, for too long, unwisely echoed the claim that ‘small is beautiful’ and that the only way to save our planet was to retreat from modernity itself. FALC rallies against that command, distinguishing consumption under fossil capitalism – with its commuting, ubiquitous advertising, bullshit jobs and built-in obsolescence – from pursuing the good life under conditions of extreme supply. Under FALC we will see more of the world than ever before, eat varieties of food we have never heard of, and lead lives equivalent – if we so wish – to those of today’s billionaires. Luxury will pervade everything as society based on waged work becomes as much a relic of history as the feudal peasant and medieval knight.

The book is full of compelling turns of phrase that made me think to myself, “I wish I had thought of saying that”. They are phrases that are likely to be heard increasingly often from now on.

The book also contains ideas and examples that I have myself used on many occasions in my own writing and presentation over the years. Indeed, the vision and analysis in FALC has a lot in common with the vision and analysis I have offered, most recently in Sustainable Superabundance, and, in more depth, in my earlier book Transcending Politics.

Four steps in the analysis

In essence, FALC sets out a four-step problem-response-problem-response sequence:

  1. A set of major challenges facing contemporary society – challenges which undermine any notion that social development has somehow already reached a desirable “end of history”
  2. A set of technological innovations, which Bastani calls the “Third Disruption”, with the potential not only to solve the severe challenges society is facing, but also to significantly improve human life
  3. A set of structural problems with the organisation of the economy, which threaten to frustrate and sabotage the positive potential of the Third Disruption
  4. A set of changes in attitude – and political programmes to express these changes – that will allow, after all, the entirety of society to fully benefit from the Third Disruption, and attain the “luxury” paradise the book describes.

In more detail:

First, Bastani highlights five challenges that, in combination, pose (as he puts it) “threats whose scale is civilisational”:

  • Growing resource scarcity – particularly for energy, minerals and fresh water
  • Accelerating climate change and other consequences of global warming
  • Societal aging, as life expectancy increases and birth rates concurrently fall, invalidating the assumptions behind pension schemes and, more generally, the social contract
  • A growing surplus of global poor who form an ever-larger ‘unnecessariat’ (people with no economic value to contribute)
  • A new machine age which will herald ever-greater technological unemployment as progressively more physical and cognitive labour is performed by machines, rather than humans.

Second, Bastani points to a series of technological transformations that comprise an emerging “Third Disruption” (following the earlier disruptions of the Agricultural and Industrial Revolutions). These transformations apply information technology to fields such as renewable energy, food production, resource management (including asteroid mining), healthcare, housing, and education. The result of these transformations could (“if we want it”, Bastani remarks) be a society characterised by the terms “post-scarcity” and “post-work”.

Third, this brings us to the deeper problem, namely the way society puts too much priority on the profit motive.

Transcending capitalism

The economic framework known as capitalism has generated huge amounts of innovation in products and services. These innovations have taken place because entrepreneurs have been motivated to create and distribute new items for exchange and profit. But in circumstances when profits would be small, there’s less motivation to create the goods and services. To the extent that goods and services are nowadays increasingly dependent on information, this poses a problem, since information can be copied from one instance to another at essentially zero cost.

Increasingly, what’s special about a product isn’t the materials from which it is composed, but the set of processes (that is, information) used to manipulate those materials to create the product. Increasingly, what’s special about a service isn’t the tacit skills of the people delivering that service, but the processes (that is, information) by which any reasonably skilled person can be trained to deliver that service. All this leads to pressures for the creation of “artificial scarcity” that prohibits the copying of certain types of information.

The fact that goods and services become increasingly easy to duplicate should be seen as a positive. It should mean lower costs all round. It should mean that more people can access good quality housing, good quality education, good quality food, and good quality clean energy. It’s something that society should welcome enthusiastically. However, since profits are harder to achieve in these circumstances, many business leaders (and the hangers-on who are dependent on these business leaders) wish to erect barriers and obstacles anew. Rather than embracing post-scarcity, they wish to extend the prevalence of scarcity.

This is just one example of the “market failures” which can arise from unfettered capitalism. In my own book Sustainable Superabundance, five of the twelve chapters end with a section entitled “Beyond the profit motive”. It’s not that I view the profit motive as inherently bad. Far from it. Instead, it’s that there are many problems in letting the profit motive dominate other motivations. That’s why we need to look beyond the profit motive.

In much the same way, Bastani recognises capitalism as an essential precursor to the fully automated luxury communism he foresees. Here, as in much of his thinking, he draws inspiration from the writing of Karl Marx. Bastani notes that,

In contrast to his portrayal by critics, Marx was often lyrical about capitalism. His belief was that despite its capacity for exploitation, its compulsion to innovate – along with the creation of a world market – forged the conditions for social transformation.

Bastani quotes Marx writing as follows in 1848:

The bourgeoisie … has been the first to show what man’s activity can bring about. It has accomplished wonders far surpassing Egyptian pyramids, Roman aqueducts, and Gothic cathedrals; it has conducted expeditions that put in the shade all former Exoduses of nations and crusades.

By the way, don’t be put off by the word “communism” in the book’s title. There’s no advocacy here of a repeat of what previous self-declared communist regimes have done. Communism was not possible until the present time, since it depends upon technology having reached a sufficiently advanced state. Bastani explains it as follows:

While it is true that a number of political projects have labelled themselves communist over the last century, the aspiration was neither accurate nor – as we will go on to see – technologically possible. ‘Communism’ is used here for the benefit of precision; the intention being to denote a society in which work is eliminated, scarcity replaced by abundance and where labour and leisure blend into one another. Given the possibilities arising from the Third Disruption, with the emergence of extreme supply in information, labour, energy and resources, it should be viewed not only as an idea adequate to our time but impossible before now.

And to emphasise the point:

FALC is not the communism of the early twentieth century, nor will it be delivered by storming the Winter Palace.

The technologies needed to deliver a post-scarcity, post-work society – centred around renewable energy, automation and information – were absent in the Russian Empire, or indeed anywhere else until the late 1960s…

Creating communism before the Third Disruption is like creating a flying machine before the Second. You could conceive of it – and indeed no less a genius than Leonardo Da Vinci did precisely that – but you could not create it. This was not a failure of will or of intellect, but simply an inevitability of history.

Marx expected a transformation from capitalism to communism within his own lifetime. He would likely have been very surprised at the ability of capitalism to reinvent itself in the face of the many challenges and difficulties it has faced in subsequent decades. Marx’s failure to predict the subsequent history of capitalism accurately is one factor people use to justify their disregard for Marxism. The question, however, is whether his analysis was merely premature rather than completely wrong. Bastani argues for the former. The internal tensions of a profit-led society have caused a series of large financial and economic crashes, but they have not, so far, led to an effective transition away from profit-seeking to abundance-seeking. However, Bastani argues, the stakes are now so high that the pursuit of profits-at-all-costs cannot continue.

This brings us to the fourth phase of the argument – the really critical one. If there are problems with capitalism, what is to be done? Rather than storming any modern-day Winter Palace, where should a fervour for change best be applied?

Solutions

Bastani’s answer starts by emphasising that the technologies of the Third Disruption, by themselves, provide no guarantee of a move to a society with ample abundance. Referring to the laws of technology of Melvin Kranzberg, Bastani observes that

How technology is created and used, and to whose advantage, depends on the political, ethical and social contexts from which it emerges.

In other words, ideas and structures play a key role. To increase the chances of optimal benefits from the technologies of the Third Disruption, ideas prevalent in society will need to change.

The first change in ideas is a different attitude towards one of the dominant ideologies of our time, sometimes called neoliberalism. Bastani refers at various points to “market fundamentalism”. This is the idea that free pursuit of profits will inevitably result in the best outcome for society as a whole – that the free market is the best tool to organise the distribution of resources. In this viewpoint, regulations should be resisted, where they interfere with the ability of businesses to offer new products and services to the market. Workers’ rights should be resisted too, since they will interfere with the ability of businesses to lower wages and reassign tasks overseas. And so on.

Bastani has a list of examples of gross social failures arising from pursuit of neoliberalism. This includes the collapse in 2018 of Carillion, the construction and facilities management company. Bastani notes:

With up to 90 per cent of Carillion’s work subcontracted out, as many as 30,000 businesses faced the consequences of its ideologically driven mismanagement. Hedge funds in the City, meanwhile, made hundreds of millions from speculating on its demise.

Another example is the tragedy of the 2017 fire at the 24-storey Grenfell Tower in West London, in which 72 people perished:

The neoliberal machine has human consequences that go beyond spreadsheets and economic data. Beyond, even, in-work poverty and a life defined by paying ever higher rents to wealthy landlords and fees to company shareholders. As bad as those are they pale beside its clearest historic expression in a generation: the derelict husk of Grenfell Tower…

A fire broke out which would ravage the building in a manner not seen in Britain for decades. The primary explanation for its rapid, shocking spread across the building – finished in 1974 and intentionally designed to minimise the possibility of such an event – was the installation of flammable cladding several years earlier, combined with poor safety standards and no functioning sprinklers – all issues highlighted by the residents’ Grenfell Action Group before the fire.

The cladding itself, primarily composed of polyethylene, is as flammable as petroleum. Advances in material science means we should be building homes that are safer, and more efficient, than ever before. Instead a cut-price approach to housing the poor prevails, prioritising external aesthetics for wealthier residents. In the case of Grenfell that meant corners were cut and lives were lost. This is not a minor political point and shows the very real consequences of ‘self-regulation’.

Bastani is surely right that greater effort is needed to ensure everyone understands the various failure modes of free markets. A better appreciation is overdue of the positive role that well-designed regulations can play in ensuring greater overall human flourishing in the face of corporations that would prefer to put their priorities elsewhere. The siren calls of market fundamentalism need to be resisted.

I would add, however, that a different kind of fundamentalism needs to be resisted and overcome too. This is anti-market fundamentalism. As I wrote in the chapter “Markets and fundamentalists” in Transcending Politics,

Anti-market fundamentalists see the market system as having a preeminently bad effect on the human condition. The various flaws with free markets… are so severe, say these critics, that the most important reform to pursue is to dismantle the free market system. That reform should take a higher priority than any development of new technologies – AI, genetic engineering, stem cell therapies, neuro-enhancers, and so on. Indeed, if these new technologies are deployed whilst the current free market system remains in place, it will, say these critics, make it all the more likely that these technologies will be used to oppress rather than liberate.

I believe that both forms of fundamentalism (pro-market and anti-market) need to be resisted. I look forward to wiser management of the market system, rather than dismantling it. In my view, key to this wise management is the reform and protection of a number of other social institutions that sit alongside markets – a free press, free judiciary, independent regulators, and, yes, independent politicians.

I share the view of political scientists Jacob S. Hacker and Paul Pierson, articulated in their fine 2016 book American Amnesia: Business, Government, and the Forgotten Roots of Our Prosperity, that the most important social innovation of the 20th century was the development of the mixed economy. In a mixed economy, effective governments work alongside the remarkable capabilities of the market economy, steering it and complementing it. Here’s what Hacker and Pierson have to say about the mixed economy:

The mixed economy spread a previously unimaginable level of broad prosperity. It enabled steep increases in education, health, longevity, and economic security.

These writers explain the mixed economy by an elaboration of Adam Smith’s notion of “the invisible hand”:

The political economist Charles Lindblom once described markets as being like fingers: nimble and dexterous. Governments, with their capacity to exercise authority, are like thumbs: powerful but lacking subtlety and flexibility. The invisible hand is all fingers. The visible hand is all thumbs. Of course, one wouldn’t want to be all thumbs. But one wouldn’t want to be all fingers either. Thumbs provide countervailing power, constraint, and adjustments to get the best out of those nimble fingers.

The characterisation by Hacker and Pierson of the positive role of government is, to my mind, spot on. It’s backed up in their book by lots of instructive episodes from American history, going all the way back to the revolutionary founders:

  • Governments provide social coordination of a type that fails to arise by other means of human interaction, such as free markets
  • Markets can accomplish a great deal, but they’re far from all-powerful. Governments ensure that suitable investment takes place of the sort that would not happen, if it was left to each individual to decide by themselves. Governments build up key infrastructure where there is no short-term economic case for individual companies to invest to create it
  • Governments defend the weak from the powerful. They defend those who lack the knowledge to realise that a vendor may be about to sell them a lemon and then beat a hasty retreat. They take actions to ensure that social free-riders don’t prosper, and that monopolists aren’t able to take disproportionate advantage of their market dominance
  • Governments prevent all the value in a market from being extracted by forceful, well-connected minority interests, in ways that would leave the rest of society impoverished. They resist the power of “robber barons” who would impose numerous tolls and charges, stifling freer exchange of ideas, resources, and people. Therefore governments provide the context in which free markets can prosper (but which those free markets, by themselves, could not deliver).

It’s a deeply troubling development that the positive role of enlightened government is something that is poorly understood in much of contemporary public discussion. Instead, as a result of a hostile barrage of ideologically-driven misinformation, more and more people are calling for a reduction in the scope and power of government. That tendency – the tendency towards market fundamentalism – urgently needs to be resisted. But at the same time, we also need to resist the reverse tendency – the tendency towards anti-market fundamentalism – the tendency to belittle the latent capabilities of free markets.

To Bastani’s credit, he avoids advocating any total government control over planning of the economy. Instead, he offers praise for Eastern European Marxist writers such as Michał Kalecki, Włodzimierz Brus, and Kazimierz Łaski, who advocated important roles for market mechanisms in the approach to the communist society in which they all believed. Bastani comments,

[These notions were] expanded further in 1989 with Brus and Łaski claiming that under market socialism, publicly owned firms would have to be autonomous – much as they are in market capitalist systems – and that this would necessitate a socialised capital market… Rather than industrial national monoliths being lauded as the archetype of economic efficiency, the authors argued for a completely different kind of socialism declaring, ‘The role of the owner-state should be separated from the state as an authority in charge of administration … (enterprises) have to become separated not only from the state in its wider role but also from one another.’

Bastani therefore supports a separation of two roles:

  • The political task of establishing the overall direction and framework for the development of the economy
  • The operational task of creating goods and services within that framework – a task that may indeed utilise various market mechanisms.

Key to establishing the overall direction is superseding society’s reliance on the GDP measure. Bastani is particularly good in his analysis of the growing shortcomings of GDP (Gross Domestic Product), and on what must be included in its replacement, which he calls an “Abundance Index”:

Initially such an index would integrate CO2 emissions, energy efficiency, the falling cost of energy, resources and labour, the extent to which UBS [Universal Basic Services] had been delivered, leisure time (time not in paid employment), health and lifespan, and self-reported happiness. Such a composite measure, no doubt adapted to a variety of regional and cultural differences, would be how we assess the performance of post-capitalist economies in the passage to FALC. This would be a scorecard for social progress assessing how successful the Third Disruption is in serving the common good.
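
Bastani doesn’t spell out how such an index would be weighted or normalised. Purely as a hypothetical sketch of what a composite measure of this kind might look like, here is one way the calculation could work; every indicator, range, and weight below is my own invention for illustration, not a proposal from FALC:

    # Hypothetical sketch of a composite "Abundance Index".
    # The indicators, normalisation ranges, and weights are all illustrative assumptions;
    # FALC does not specify how the index would actually be constructed.

    INDICATORS = {
        # name: (value, worst, best, weight) -- "worst"/"best" define a 0..1 normalisation range
        "co2_emissions_per_capita": (5.5, 15.0, 0.0, 0.20),   # tonnes/year; lower is better
        "energy_cost_index":        (0.6, 1.0, 0.0, 0.15),    # relative to a base year; lower is better
        "ubs_coverage":             (0.7, 0.0, 1.0, 0.25),    # fraction of Universal Basic Services delivered
        "leisure_hours_per_week":   (40.0, 0.0, 80.0, 0.15),
        "healthy_lifespan_years":   (70.0, 50.0, 90.0, 0.15),
        "self_reported_happiness":  (6.8, 0.0, 10.0, 0.10),   # 0-10 survey scale
    }

    def normalise(value: float, worst: float, best: float) -> float:
        """Map a raw indicator onto 0..1, where 1 is the 'best' end of the range."""
        score = (value - worst) / (best - worst)
        return max(0.0, min(1.0, score))

    def abundance_index(indicators: dict) -> float:
        """Weighted average of normalised indicators, scaled to 0..100."""
        total_weight = sum(w for (_, _, _, w) in indicators.values())
        weighted = sum(normalise(v, worst, best) * w
                       for (v, worst, best, w) in indicators.values())
        return 100.0 * weighted / total_weight

    print(f"Illustrative Abundance Index: {abundance_index(INDICATORS):.1f} / 100")

The point of the sketch is simply that, once indicators and weights are agreed, a single headline number can be tracked over time in the way GDP currently is; the politically hard part is agreeing the indicators and weights in the first place.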

Other policies Bastani recommends in FALC include:

  • Revised priorities for central banks – so that they promote increases of the Abundance Index, rather than simply focusing on the control of inflation
  • Step by step increases in UBS (Universal Basic Services) – rather than the UBI (Universal Basic Income) that is often advocated these days
  • Re-localisation of economies through what Bastani calls “progressive procurement and municipal protectionism”.

But perhaps the biggest recommendation Bastani makes is for the response to society’s present political issues to be a “populist” one.

Populism and its dangers

I confess that the word “populist” made me anxious. I worry about groundswell movements motivated by emotion rather than clear-sightedness. I worry about subgroups of citizens who identify themselves as “the true people” (or “the real people”) and who take any democratic victory as a mandate for them to exclude any sympathy for minority viewpoints. (“You lost. Get over it!”) I worry about demagogues who rouse runaway emotional responses by scapegoating easy targets (such as immigrants, overseas governments, transnational organisations, “experts”, “the elite”, or culturally different subgroups).

In short, I was more worried by the word “populist” than the word “communist”.

As it happens – thankfully – that’s different from the meaning of “populist” that Bastani has in mind. He writes,

For the kind of change required, and for it to last in a world increasingly at odds with the received wisdom of the past, a populist politics is necessary. One that blends culture and government with ideas of personal and social renewal.

He acknowledges that some thinkers will disagree with this recommendation:

Others, who may agree about the scale and even urgent necessity of change, will contend that such a radical path should only be pursued by a narrow technocratic elite. Such an impulse is understandable if not excusable; for the suspicion that democracy unleashes ‘the mob’ is as old as the idea itself. What is more, a superficial changing of the guard exclusively at the level of policy-making is easier to envisage than building a mass political movement – and far simpler to execute as a strategy. Yet the truth is any social settlement imposed without mass consent, particularly given the turbulent energies unleashed by the Third Disruption, simply won’t endure.

In other words, voters as a whole must be able to understand how the changes ahead, if well managed, will benefit everyone, not just in a narrow economic sense, but in the sense of liberating people from previous constraints.

I have set out similar ideas, under the term “superdemocracy”, described as follows:

A renewal of democracy in which, rather than the loudest and richest voices prevailing, the best insights of the community are elevated and actioned…

The active involvement of the entire population, both in decision-making, and in the full benefits of [technology]…

Significantly improved social inclusion and resilience, whilst upholding diversity and liberty – overcoming human tendencies towards tribalism, divisiveness, deception, and the abuse of power.

That last proviso is critical and deserves repeating: “…overcoming human tendencies towards tribalism, divisiveness, deception, and the abuse of power”. Otherwise, any movements that build popular momentum risk devouring themselves in time, in the way that the French Revolution sent Maximilien Robespierre to the guillotine, and the Bolshevik Revolution led to the deaths of many of the original revolutionaries following absurd show trials.

You’ll find no such proviso in FALC. Bastani writes,

Pride, greed and envy will abide as long as we do.

He goes on to offer pragmatic advice,

The management of discord between humans – the essence of politics – [is] an inevitable feature of any society we share with one another.

Indeed, that is good advice. We all need to become better at managing discord. However, writing as a transhumanist, I believe we can, and must, do better. The faults within human nature are something which the Third Disruption (to use Bastani’s term) will increasingly allow us to address and transcend.

Consider the question: Is it possible to significantly improve politics, over the course of, say, the next dozen years, without first significantly improving human nature?

Philosophies of politics can in principle be split into four groups, depending on the answer they give to that question:

  1. We shouldn’t try to improve human nature; that’s the route to hell
  2. We can have a better politics without any change in human nature
  3. Improving human nature will turn out to be relatively straightforward; let’s get cracking
  4. Improving human nature will be difficult but is highly desirable; we need to carefully consider the potential scenarios, with an open mind, and then make our choices.

For the avoidance of doubt, the fourth of these positions is the one I advocate. In contrast, I believe Bastani would favour the second answer – or maybe the first.

Transcending populism

(The following paragraphs are extracted from the chapter “Humans and superhumans” of my book Transcending Politics.)

We humans are sometimes angelic, yet sometimes diabolic. On occasion, we find ways to work together on a transcendent purpose with wide benefits. But on other occasions, we treat each other abominably. Not only do we go to war with each other, but our wars are often accompanied by hideous so-called “war crimes”. Our religious crusades, whilst announced in high-minded language, have involved the subjugation or extermination of hundreds of thousands of members of opposing faiths. The twentieth century saw genocides on a scale never before experienced. For a different example of viciousness, the comments attached to YouTube videos frequently show intense hatred and vitriol.

As technology puts more power in our hands, will we become more angelic, or more diabolic? Probably both, at the same time.

A nimbleness of mind can coincide with a harshness of spirit. Just because someone has more information at their disposal, that’s no guarantee the information will be used to advance beneficial initiatives. Instead, that information can be mined and contoured to support whatever course of action someone has already selected in their heart.

Great intelligence can be coupled with great knowledge, for good but also for ill. The outcome in some sorry cases is greater vindictiveness, greater manipulation, and greater enmity. Enhanced cleverness can make us experts in techniques to suppress inconvenient ideas, to distort inopportune findings, and to tarnish independent thinkers. We can find more devious ways to mislead and deceive people – and, perversely, to mislead and deceive ourselves. In this way, we could create the mother of all echo chambers. It would take only a few additional steps for obsessive human superintelligence to produce unprecedented human malevolence.

Transhumanists want to ask: can’t we find a way to alter the expression of human nature, so that we become less likely to use our new technological capabilities for malevolence, and more likely to use them for benevolence? Can’t we accentuate the angelic, whilst diminishing the diabolic?

To some critics, that’s an extremely dangerous question. If we mess with human nature, they say, we’ll almost certainly make things worse rather than better.

Far preferable, in this analysis, is to accept our human characteristics as a given, and to evolve our social structures and cultural frameworks with these fixed characteristics in mind. In other words, our focus should be on the likes of legal charters, restorative justice, proactive education, multi-cultural awareness, and effective policing.

My view, however, is that these humanitarian initiatives towards changing culture need to be complemented with transhumanist initiatives to alter the inclinations inside the human soul. We need to address nature at the same time as we address nurture. To do otherwise is to unnecessarily limit our options – and to make it more likely that a bleak future awaits us.

The good news is that, for this transhumanist task, we can take advantage of a powerful suite of emerging new technologies. The bad news is that, like all new technologies, there are risks involved. As these technologies unfold, there will surely be unforeseen consequences, especially when different trends interact in unexpected ways.

Transhumanists have long been well aware of the risks in changing the expression of human nature. Witness the words of caution baked deep into the Transhumanist Declaration. But these risks are no reason for us to abandon the idea. Instead, they are a reason to exercise care and judgement in this project. Accepting the status quo, without seeking to change human nature, is itself a highly risky approach. Indeed, there are no risk-free options in today’s world. If we want to increase our chances of reaching a future of sustainable abundance for all, without humanity being diverted en route to a new dark age, we should leave no avenue unexplored.

Transhumanists are by no means the first set of thinkers to desire positive changes in human nature. Philosophers, religious teachers, and other leaders of society have long called for humans to overcome the pull of “attachment” (desire), self-centredness, indiscipline, “the seven deadly sins” (pride, greed, lust, envy, gluttony, wrath, and sloth), and so on. Where transhumanism goes beyond these previous thinkers is in highlighting new methods that can now be used, or will shortly become available, to assist in the improvement of character.

Collectively these methods can be called “cognotech”. They will boost our all-round intelligence: emotional, rational, creative, social, spiritual, and more. Here are some examples:

  • New pharmacological compounds – sometimes called “smart drugs”
  • Gentle stimulation of the brain by a variety of electromagnetic methods – something that has been trialled by the US military
  • Alteration of human biology more fundamentally, by interventions at the genetic, epigenetic, or microbiome level
  • Vivid experiences within multi-sensory virtual reality worlds that bring home to people the likely consequences of their current personal trajectories (from both first-person and third-person points of view), and allow them to rehearse changes in attitude
  • The use of “intelligent assistance” software that monitors our actions and offers us advice in a timely manner, similar to the way that a good personal friend will occasionally volunteer wise counsel; intelligent assistants can also strengthen our positive characteristics by wise selection of background music, visual imagery, and “thought for the day” aphorisms to hold in mind.

Technological progress can also improve the effectiveness of various traditional methods for character improvement:

  • The reasons why meditation, yoga, and hypnosis can have beneficial results are now more fully understood than before, enabling major improvements in the efficacy of these practices
  • Education of all sorts can be enhanced by technology such as interactive online video courses that adapt their content to the emerging needs of each different user
  • Prompted by alerts generated by online intelligent assistants, real-world friends can connect at critical moments in someone’s life, in order to provide much-needed personal support
  • Information analytics can resolve some of the long-running debates about which diets – and which exercise regimes – are the ones that will best promote all-round health for given individuals.

The technoprogressive feedback cycle

One criticism of the initiative I’ve just outlined is that it puts matters the wrong way round.

I’ve been describing how individuals can, with the aid of technology as well as traditional methods, raise themselves above their latent character flaws, and can therefore make better contributions to the political process (either as voters or as actual politicians). In other words, we’ll get better politics as a result of getting better people.

However, an opposing narrative runs as follows. So long as our society is full of emotional landmines, it’s a lot to expect people to become more emotionally competent. So long as we live in a state of apparent siege, immersed in psychological conflict, it’s a big ask for people to give each other the benefit of the doubt, in order to develop new bonds of trust. Where people are experiencing growing inequality, a deepening sense of alienation, a constant barrage of adverts promoting consumerism, and an increasing foreboding about an array of risks to their wellbeing, it’s not reasonable to urge them to make the personal effort to become more compassionate, thoughtful, tolerant, and open-minded. They’re more likely to become angry, reactive, intolerant, and closed-minded. Who can blame them? Therefore – so runs this line of reasoning – it’s more important to improve the social environment than to urge the victims of that social environment to learn to turn the other cheek. Let’s stop obsessing about personal ethics and individual discipline, and instead put every priority on reducing the inequality, alienation, consumerist propaganda, and risk perception that people are experiencing. Instead of fixating upon possibilities for technology to rewire people’s biology and psychology, let’s hurry up and provide a better social safety net, a fairer set of work opportunities, and a deeper sense that “we’re all in this together”.

I answer this criticism by denying that it’s a one-way causation. We shouldn’t pick just a single route of influence – either that better individuals will result in a better society, or that a better society will enable the emergence of better individuals. On the contrary, there’s a two way flow of influence.

Yes, there’s such a thing as psychological brutalisation. In a bad environment, the veneer of civilisation can quickly peel away. Youngsters who would, in more peaceful circumstances, instinctively help elderly strangers to cross the road, can quickly degrade in times of strife into obnoxious, self-obsessed bigots. But that path doesn’t apply to everyone. Others in the same situation take the initiative to maintain a cheery, contemplative, constructive outlook. Environment influences the development of character, but doesn’t determine it.

Accordingly, I foresee a positive feedback cycle:

  • With the aid of technological assistance, more people – whatever their circumstances – will be able to strengthen the latent “angelic” parts of their human nature, and to hold in check the latent “diabolic” aspects
  • As a result, at least some citizens will be able to take wiser policy decisions, enabling an improvement in the social and psychological environment
  • The improved environment will, in turn, make it easier for other positive personal transformations to occur – involving a larger number of people, and having a greater impact.

One additional point deserves to be stressed. The environment that influences our behaviour involves not just economic relationships and the landscape of interpersonal connections, but also the set of ideas that fill our minds. To the extent that these ideas give us hope, we can find extra strength to resist the siren pull of our diabolic nature. These ideas can help us focus our attention on positive, life-enhancing activities, rather than letting our minds shrink and our characters deteriorate.

This indicates another contribution of transhumanism to building a comprehensively better future. By painting a clear, compelling image of sustainable abundance, credibly achievable in just a few decades, transhumanism can spark revolutions inside the human heart.

That potential contribution brings us back to similar ideas in FALC. Bastani wants a populist transformation of the public consciousness, one that includes inspiring new ideas for how everyone can flourish in a post-scarcity, post-work society.

I’m all in favour of inspiring new ideas. The big question, of course, is whether these new ideas skate over important omissions that will undermine the whole project.

Next steps

I applaud FALC for the way it advances serious discussion about a potentially better future – a potentially much better future – that could be attained in just a few decades.

But just as FALC indicates a reason why communism could not be achieved before the present time, I want to indicate a reason why the FALC project could likewise fail.

Communism was impossible, Bastani says, before the technologies of the Third Disruption provided the means for sufficient abundance of energy, food, education, material goods, and so on. In turn, my view is that communism will be impossible (or unlikely) without attention being paid to the proactive transformation of human nature.

We should not underestimate the potential of the technologies of the Third Disruption. They won’t just provide more energy, food, education, and material goods. They won’t just enable people to have healthier bodies throughout longer lifespans. They will also enable all of us to attain better levels of mental and emotional health – psychological and spiritual wellbeing. If we want it.

That’s why the Abundance 2035 goals on which I am presently working contain a wider set of ambitions than feature in FALC. For example, these goals include aspirations that, by 2035,

  • The fraction of people with mental health problems will be 1% or less
  • Voters will no longer routinely assess politicians as self-serving, untrustworthy, or incompetent.

To join a discussion about the Abundance 2035 goals (and about a set of interim targets to be achieved by 2025), check out this London Futurists event taking place at Newspeak House on Monday 1st July.

To hear FALC author Aaron Bastani in discussion of his ideas, check out this Virtual Futures event, also taking place at Newspeak House, on Tuesday 25th June.

Finally, for an all-round assessment of the relevance of transhumanism to building a (much) better future, check out TransVision 2019, happening at Birkbeck College on the weekend of 6-7 July, where 22 different speakers will be sharing their insights.

7 June 2019

Feedback on what goals the UK should have in mind for 2035

Filed under: Abundance, BHAG, politics, TPUK, vision — David Wood @ 1:56 pm

Some political parties are preoccupied with short-term matters.

It’s true that many short-term matters demand attention. But we need to take the time to consider, as well, some important longer-term risks and issues.

If we give these longer-term matters too little attention, we may wake up one morning and bitterly regret our previous state of distraction. By then, we may have missed the chance to avoid an enormous setback. It could also be too late to take advantage of what previously was a very positive opportunity.

For these reasons, the Transhumanist Party UK seeks to focus attention on a number of transformations that could take place in the UK between now and 2035.

Rather than having a manifesto for the next, say, five years, the Party is developing a vision for the year 2035 – a vision of much greater human flourishing.

It’s a vision in which there will be enough for everyone to have an excellent quality of life. No one should lack access to healthcare, shelter, nourishment, information, education, material goods, social engagement, free expression, or artistic endeavour.

The vision also includes a set of strategies by which the current situation (2019) could be transformed, step by step, into the desired future state (2035).

Key to these strategies is for society to take wise advantage of the remarkable capabilities of twenty-first century science and technology: robotics, biotech, neurotech, greentech, collabtech, artificial intelligence, and much more. These technologies can provide all of us with the means to live better than well – to be healthier and fitter than ever before; nourished emotionally and spiritually as well as physically; and living at peace with ourselves, the environment, and our neighbours both near and far.

Alongside science and technology, there’s a vital role that politics needs to play:

  • Action to encourage the kind of positive collaboration which might otherwise be undermined by free-riders
  • Action to adjust the set of subsidies, incentives, constraints, and legal frameworks under which we all operate
  • Action to protect the citizenry as a whole from the abuse of power by any groups with monopoly or near-monopoly status
  • Action to ensure that the full set of “externalities” (both beneficial and detrimental) of market transactions are properly considered, in a timely manner.

To make this vision more concrete, the Party wishes to identify a set of specific goals for the UK for the year 2035. At present, there are 16 goals under consideration. These goals are briefly introduced in a video.

As you can see, the video invites viewers to give their feedback, by means of an online survey. The survey collects opinions about the various goals: are they good as they stand? Too timid? Too ambitious? A bad idea? Uninteresting? Or something else?

The survey also invites ideas about other goals that should perhaps be added into the mix.

Since the survey has been launched, feedback has been accumulating. I’d like to share some of that feedback now, along with some of my own personal responses.

The most unconditionally popular goal so far

Of the 16 goals proposed, the one which has the highest number of responses “Good as it stands” is Goal 4, “Thanks to innovations in recycling, manufacturing, and waste management, the UK will be zero waste, and will have no adverse impact on the environment.”

(To see the rationale for each goal, along with ideas on measurement, the current baseline, and the strategy to achieve the goal, see the document on the Party website.)

That goal has, so far, been evaluated as “Good as it stands” by 84% of respondents.

One respondent gave this comment:

Legislation and Transparency are equally as important here, to gain the public’s trust that there is actual quantified benefits from this, or rather to de-abstractify recycling and make it more tangible and not just ‘another bin’

My response: succeeding with this goal will involve more than the actions of individuals putting materials into different recycling bins.

Research from the Stockholm Resilience Centre has identified nine “planetary boundaries” where human activity is at risk of pushing the environment into potentially very dangerous states of affairs.

For each of these planetary boundaries, the same themes emerge:

  • Methods are known that would replace present unsustainable practices with sustainable ones.
  • By following these methods, life would be plentiful for all, without detracting in any way from the potential for ongoing flourishing in the longer term.
  • However, the transition from unsustainable to sustainable practices requires overcoming very significant inertia in existing systems.
  • In some cases, what’s also required is vigorous research and development, to turn ideas for new solutions into practical realities.
  • Unfortunately, in the absence of short-term business cases, this research and development fails to receive the investment it requires.

In each case, the solution also follows the same principles. Society as a whole needs to agree on prioritising research and development of various solutions. Society as a whole needs to agree on penalties and taxes that should be applied to increasingly discourage unsustainable practices. And society as a whole needs to provide a social safety net to assist those people whose livelihoods are adversely impacted by these changes.

Left to its own devices, the free market is unlikely to reach the same conclusions. Instead, because it fails to assign proper values to various externalities, the market will produce harmful results. Accordingly, these are cases when society as a whole needs to constrain and steer the operation of the free market. In other words, democratic politics needs to exert itself.
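
To make the point about unpriced externalities concrete, here is a toy illustration (a standard textbook device, with all figures invented by me rather than taken from the Party’s document): if the environmental damage caused by producing a unit of some good is not reflected in its price, the market under-prices it; a corrective levy set at roughly the damage per unit brings the price signal back into line with the true social cost.

    # Toy illustration of an unpriced externality and a corrective levy.
    # All figures are invented for illustration.

    PRIVATE_COST_PER_UNIT = 10.0   # what the producer pays to make one unit
    DAMAGE_PER_UNIT = 4.0          # environmental harm per unit, not priced by the market

    def market_price(levy_per_unit: float = 0.0) -> float:
        """Price signal the market sees: private cost plus any corrective levy."""
        return PRIVATE_COST_PER_UNIT + levy_per_unit

    social_cost = PRIVATE_COST_PER_UNIT + DAMAGE_PER_UNIT

    print(f"True social cost per unit:  {social_cost:.2f}")
    print(f"Market price with no levy:  {market_price():.2f}  (understates the social cost)")
    print(f"Market price with levy set to the damage: {market_price(DAMAGE_PER_UNIT):.2f}")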

2nd equal most popular goals

The 2nd equal most popular goal is Goal 7, “There will be no homelessness and no involuntary hunger”, with 74% responses judging it “Good as it stands”. Disagreeing, 11% of respondents judged it as “Too ambitious”. Here’s an excerpt from the proposed strategy to achieve this goal:

The construction industry should be assessed, not just on its profits, but on its provision of affordable, good quality homes.

Consider the techniques used by the company Broad Sustainable Building, when it erected a 57-storey building in Changsha, capital city of Hunan province in China, in just 19 working days. That’s a rate of three storeys per day. Key to that speed was the use of prefabricated units. Other important innovations in construction techniques include 3D printing, robotic construction, inspection by aerial drones, and new materials with unprecedented strength and resilience.

Similar techniques can in principle be used, not just to generate new buildings where none presently exist, but also to refurbish existing buildings – regenerating them from undesirable hangovers from previous eras into highly desirable contemporary accommodation.

With sufficient political desire, these techniques offer the promise that prices for property over the next 16 years might follow the same remarkable downwards trajectory witnessed in many other product areas – such as TVs, LCD screens, personal computers and smartphones, kitchen appliances, home robotics kits, genetic testing services, and many types of clothing…

Finally, a proportion of cases of homelessness arise, not from shortage of available accommodation, but from individuals suffering psychological issues. This element of homelessness will be addressed by the measures reducing mental health problems to less than 1% of the population.

The other 2nd equal most popular goal is Goal 3, “Thanks to improved green energy management, the UK will be carbon-neutral”, also with 74% of responses judging it “Good as it stands”. In this case, most of the dissenting opinions (16%) held that the goal is “Too timid” – namely, that carbon neutrality should be achieved before 2035.

For the record, 4th equal in this ranking, with 68% unconditional positive assessment, were:

  • Goal 6: “World-class education to postgraduate level will be freely available to everyone via online access”
  • Goal 16: “The UK will be part of an organisation that maintains a continuous human presence on Mars”

Least popular goals

At the other end of this particular spectrum, three goals are currently tied for the least support in the form stated, at 32% each.

This includes Goal 9, “The UK will be part of a global “open borders” community of at least 25% of the earth’s population”. One respondent gave this comment:

Seems absolutely unworkable, would require other countries to have same policy, would have to all be developed countries. Massively problematic and controversial with no link to ideology of transhumanism

And here’s another comment:

No need to work for a living, no homelessness and open borders. What can go wrong?

And yet another:

This can’t happen until wealth/resource distribution is made equitable – otherwise we’d all be crammed in Bladerunner style cities. Not a desirable outcome.

My reply is that the detailed proposal isn’t for unconditional free travel between any two countries, but for a system that includes many checks and balances. As for the relevance to transhumanism, the actual relevance is to the improvement of human flourishing. Freedom of movement opens up many new opportunities. Indeed, migration has been found to have considerable net positive effects for the UK, including on productivity, public finances, cultural richness, and individuals’ well-being. Flows of money and ideas in the reverse direction also benefit the immigrants’ countries of origin.

Another goal tied at the bottom of this ranking is Goal 10, “Voters will no longer routinely assess politicians as self-serving, untrustworthy, or incompetent”. 26% of respondents rated this as “Too ambitious”, and 11% as “Uninteresting”.

My reply in this case is that politicians in at least some other countries have a higher reputation than in the UK. These countries include Denmark (the top of the list), Switzerland, Netherlands, Luxembourg, Norway, Finland, Sweden, and Iceland.

What’s more, a number of practices – combining technological innovation with social innovation – seem capable of increasing the level of trust and respect for politicians:

  • Increased transparency, to avoid any suspicions of hidden motivations or vested interests
  • Automated real-time fact-checking, so that politicians know any distortions of the truth will be quickly pointed out
  • Encouragement of individual politicians with high ethical standards and integrity
  • Enforcement of penalties in cases when politicians knowingly pass on false information
  • Easier mechanisms for the electorate to quickly “recall” a politician who has lost the trust of voters
  • Improvements in mental health for everyone, including politicians, thereby diminishing tendencies for dysfunctional behaviour
  • Diminished power for political parties to constrain how individual politicians express themselves, allowing more politicians to speak according to their own conscience.

A role can also be explored for regular psychometric assessment of politicians.

The third goal in this grouping of the least popular is Goal 13, “Cryonic suspension will be available to all, on point of death, on the NHS”. 26% of respondents judged this as “Too ambitious”, and 11% as “A bad idea”. One respondent commented “Why not let people die when they are ready?” and another simply wrote “Mad shit”.

It’s true that there are currently many factors that discourage people from signing up for cryopreservation. These include:

  • Costs
  • Problems arranging transport of the body overseas to a location where the storage of bodies is legal
  • The perceived low likelihood of a subsequent successful reanimation
  • Lack of evidence of reanimation of larger biological organs
  • Dislike of appearing to be a “crank”
  • Apprehension over tension from family members (exacerbated if family members expect to inherit funds that are instead allocated to cryopreservation services)
  • Occasional mistrust over the motives of the cryonics organisations (which are sometimes alleged – with no good evidence – to be motivated by commercial considerations)
  • Uncertainty over which provider should be preferred.

However, I foresee a big change in the public mindset when there’s a convincing demonstration of successful reanimation of larger biological organisms or organs. What’s more, as in numerous other fields of life, costs will decline and quality will increase as the cumulative use of a product or service grows. These are known as scale effects.

Goals receiving broad support

Now let’s consider a different ranking, when the votes for “Good as it stands” and “Too timid” are added together. This indicates strong overall support for the idea of the goal, with the proviso that many respondents would prefer a more aggressive timescale.

Actually this doesn’t change the results much. Compared to the goals already covered, there’s only one new entrant in the top 5, namely at position 3, with a combined positive rating of 84%. That’s for Goal 1, “The average healthspan in the UK will be at least 90 years”. 42% rated this “Good as it stands” and another 42% rated it as “Too timid”.

For the record, top equal by this ranking were Goal 3 (74% + 16%) and Goal 4 (84% + 5%).
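
For clarity, here’s a minimal sketch in Python of the combined-rating tally used in this ranking. The goal labels and percentages are the rounded figures quoted in this section; everything else is just an illustration of the arithmetic (the small gap between Goal 3 and Goal 4 below reflects that rounding).

    # A minimal sketch of the combined-rating ("broad support") tally.
    # Percentages are the rounded figures quoted in this section.
    ratings = {
        "Goal 1 (average healthspan of at least 90)": {"good_as_it_stands": 42, "too_timid": 42},
        "Goal 3 (carbon-neutral UK)":                 {"good_as_it_stands": 74, "too_timid": 16},
        "Goal 4 (recycling and zero waste)":          {"good_as_it_stands": 84, "too_timid": 5},
    }

    def broad_support(votes):
        """Combined positive rating: 'Good as it stands' plus 'Too timid'."""
        return votes["good_as_it_stands"] + votes["too_timid"]

    for goal, votes in sorted(ratings.items(), key=lambda item: broad_support(item[1]), reverse=True):
        print(f"{goal}: {broad_support(votes)}% combined positive rating")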

The only other goal with a “Too timid” rating of greater than 30% was Goal 15, “Fusion will be generating at least 1% of the energy used in the UK” (32%).

The goals most actively disliked

Here’s yet another way of viewing the data: the goals which had the largest number of “A bad idea” responses.

By this measure, the goal most actively disliked (with 21% judging it “A bad idea”) was Goal 11, “Parliament will involve a close partnership with a ‘House of AI’ (or similar) revising chamber”. One respondent commented they were “wary – AI could be Stalinist in all but name in their goal setting and means”.

My reply: To be successful, the envisioned House of AI will need the following support:

  • All algorithms used in these AI systems need to be in the public domain, and to pass ongoing reviews about their transparency and reliability
  • Opaque algorithms, or other algorithms whose model of operation remains poorly understood, need to be retired, or evolved in ways that address their shortcomings
  • The House of AI will not be dependent on any systems owned or operated by commercial entities; instead, it will be “AI of the people, by the people, for the people”.

Public funding will likely need to be allocated to develop these systems, rather than waiting for commercial companies to create them.

The second most actively disliked goal was Goal 5, “Automation will remove the need for anyone to earn money by working” (16%). Here are three comments from respondents:

Unlikely to receive support, most people like the idea of work. Plus there’s nothing the party can do to achieve this automation, depends on tech progress. UBI could be good.

What will be the purpose of humans?

It removes the need to work because their needs are being met by…. what? Universal Basic Income? Automation by itself cuts out the need for employers to pay humans to do the work but it doesn’t by itself ensure that people’s need will be met otherwise.

I’ve written on this topic many times in the past – including in Chapter 4, “Work and purpose”, of my previous book, “Transcending Politics” (audio recording available here). There absolutely are political actions which can be taken, to accelerate the appropriate technological innovations, and to defuse the tensions that will arise if the fruits of technological progress end up dramatically increasing the inequality levels in society.

Note, by the way, that this goal does not focus on bringing in a UBI. There’s a lot more to it than that.

Clearly there’s work to be done to improve the communication of the underlying ideas in this case!

Goals that are generally unpopular

For a final way of ranking the data, let’s add together the votes for “A bad idea” and “Too ambitious”. This indicates ideas which are generally unpopular, in their current form of expression.

Top of this ranking, with 42%, is Goal 8, “The crime rate will have been reduced by at least 90%”. Indeed, the 42% all judged this goal as “Too ambitious”. One comment received was

Doesn’t seem within the power of any political party to achieve this, except a surveillance state

Here’s an excerpt of the strategy proposed to address this issue:

The initiatives to improve mental health, to eliminate homelessness, and to remove the need to work to earn an income, should all contribute to reducing the social and psychological pressures that lead to criminal acts.

However, even if only a small proportion of the population remain inclined to criminal acts, the overall crime rate could still remain too high. That’s because small groups of people will be able to take advantage of technology to carry out lots of crime in parallel – via systems such as “ransomware as a service” or “intelligent malware as a service”. The ability of technology to multiply human power means that just a few people with criminal intent could give rise to large amounts of crime.

That raises the priority for software systems to be highly secure and reliable. It also raises the priority of intelligent surveillance of the actions of people who might carry out crimes. This last measure is potentially controversial, since it allows part of the state to monitor citizens in a way that could be considered deeply intrusive. For this reason, access to this surveillance data will need to be restricted to trustworthy parts of the overall public apparatus – similar to the way that doctors are trusted with sensitive medical information. In turn, this highlights the importance of initiatives that increase the trustworthiness of key elements of our national infrastructure.

On a practical basis, initiatives to understand and reduce particular types of crime should be formed, starting with the types of crime (such as violent crime) that have the biggest negative impact on people’s lives.

Second in this ranking of general unpopularity, at 37%, is Goal 13, on cryonics, already mentioned above.

Third, at 32%, is Goal 11, on the House of AI, also already mentioned.

Suggestions for other goals

Respondents offered a range of suggestions for other goals that should be included. Here is a sample, along with brief replies from me:

Economic growth through these goals needs to be quantified somehow.

I’m unconvinced that economic growth needs to be prioritised. Instead, what’s important is agreement on a more appropriate measure to replace the use of GDP. That could be a good goal to consider.

Support anti-ageing research, gene editing research, mind uploading tech, AI alignment research, legalisation of most psychedelics

In general the goals have avoided targeting technology for technology’s sake. Instead, technology is introduced only because it supports the goals of improved overall human flourishing.

I think there should be a much greater focus in our education system on developing critical thinking skills, and a more interdisciplinary approach to subjects should be considered. Regurgitating information is much less important in a technologically advanced society where all information is a few clicks away and our schooling should reflect that.

Agreed: the statement of the education goal should probably be reworded to take these points into account.

A new public transport network; Given advances in technology regarding AI and electrical vehicles, a goal on par with others you’ve listed here would be to develop a transport system to replace cars with a decentralised public transportation network, whereby ownership of cars is replaced with the use of automated vehicles on a per journey basis, thus promoting better use of resources and driving down pollution, alongside hopefully reducing vehicular incidents.

That’s an interesting suggestion. I wonder what others think about it.

Routine near-earth asteroid mining to combat earthside resource depletion.

Asteroid mining is briefly mentioned in Goal 4, on recycling and zero waste.

Overthrow of capitalism and class relations.

Ah, I would prefer to transcend capitalism rather than to overthrow it. I see two mirror-image problems in discussing the merits of free markets: pro-market fundamentalism, and anti-market fundamentalism. I say a lot more on that topic in Chapter 9, “Markets and fundamentalism”, of my book “Transcending Politics”.

The right to complete freedom over our own bodies should be recognised in law. We should be free to modify our bodies and minds through e.g. implants, drugs, software, bioware, as long as there is no significant risk of harm to others.

Yes, I see the value of including such a goal. We’ll need work to explore what’s meant by “risk of harm to others”.

UK will be part of the moon-shot Human WBE [whole brain emulation] project after being successful in supporting the previous Mouse WBE moon-shot project.

Yes, that’s an interesting suggestion too. Personally I see the WBE project as being longer-term, but hey, that may change!

Achieving many of the laudable goals rests on reshaping the current system of capitalism, but that itself is not a goal. It should be.

I’m open to suggestions for wording on this, to make it measurable.

Deaths due to RTA [road traffic accidents] cut to near zero

That’s another interesting suggestion. But it may not be on the same level as some of the existing ones. I’m open to feedback here!

Next steps

The Party is very grateful for the general feedback received so far, and looks forward to receiving more!

Discussion can also take place on the Party’s Discourse, https://discourse.transhumanistparty.org.uk/. Anyone is welcome to create an account on that site and become involved in the conversations there.

Some parts of the Discourse are reserved for paid-up members of the Party. It will be these members who take the final decisions as to which goals to prioritise.

24 April 2019

Supporting the SomosMiel revolution: time to act

The most important changes often arise from the bold actions of outsiders.

Those of us who desire positive humanitarian change need to be flexible enough to recognise which outsiders can be the best vehicles for the transformations we want to see in society.

And we need to be ready to get behind these opportunities when they arise.

Consider the key example of the transformation of healthcare, towards a new focus on the reversal of aging as providing the best route to better health for everyone.

For those of us who hold that vision of the forthcoming “abolition of aging”, what are the most practical steps to make that vision a reality?

Here’s my answer. It’s time to get behind “Somos Miel”.

Futuristicamente

Miel is a recently formed political party in Spain, which is taking part in the elections to the European Parliament on the 26th of May.

The word “miel” has two meanings. First, it’s the Spanish for “honey”. Somos Miel means “We are honey”. The association of honey with improved health exists in many cultures around the world.

Second, MIEL is the abbreviation for “Movimiento Independiente Euro Latino”. Translating from Spanish to English gives: “The Independent Latin Euro Movement”.

Heading the party’s list of candidates is José Cordeiro, described as follows in the introduction of his Wikipedia article:

José Luis Cordeiro is an engineer, economist, futurist, and transhumanist, who has worked on different areas including economic development, international relations, Latin America, the European Union, monetary policy, comparison of constitutions, energy trends, cryonics, and longevity. Books he has authored include The Great Taboo, Constitutions Around the World: A Comparative View from Latin America, and (in Spanish) El Desafio Latinoamericano (“The Latin American challenge”) and La Muerte de la Muerte (“The death of death”).

Cordeiro was born in Caracas, Venezuela from Spanish parents who emigrated from Madrid during the Franco dictatorship…

He’s evidently a man of many talents. He’s by no means a European political insider, infused by the old ways of doing politics. Instead, he brings with him a welcome spread of bold outsider perspectives.

When asked if he is from “the right” or “the left”, his answer, instead, is that he is from “the future”. Indeed, he often appends the greeting “futuristicamente” after his name, meaning “Yours futuristically”.

José is also known as a vocal advocate for “revolution” – a revolution in the potential of humanity. He has the courage to advocate ideas that are presently unpopular – ideas that he believes will soon grow in public understanding and public support.

Working together

I first met José at the TransVision 2006 conference in Helsinki, Finland. I remember how he spoke with great passion about the positive possibilities of technology in the next stage in the evolution of life on the earth. As the abstract from that long-ago talk proclaims:

Since the Big Bang, the universe has been in constant evolution and continuous transformation. First there were physical and chemical processes, then biological evolution, and finally now technological evolution. As we begin to ride the wave into human redesign, the destination is still largely unknown but the opportunities are almost limitless.

Biological evolution continues but it is just too slow to achieve the goals now possible thanks to technological evolution. Natural selection with trial and error can now be substituted by technical selection with engineering design. Humanity’s monopoly as the only advanced sentient life form on the planet will soon come to an end, supplemented by a number of posthuman incarnations. Moreover, how we re-engineer ourselves could fundamentally change the ways in which our society functions, and raise crucial questions about our identities and moral status as human beings.

Since that first meeting, the two of us have collaborated on many projects. For example, we both sit on the board of directors of Humanity+. José has spoken on a number of occasions at the London Futurists events I organise – such as TransVision 2019, which will take place in London on 6-7 July. And we are named as co-authors of the Spanish-language book La Muerte de la Muerte, which has attained wide press coverage throughout Spain.

Another thing we have in common is that we are both impatient for change. We’re not content to sit back and watch impersonal forces operate in society at their own pace and following their own inner direction. We believe in doing more than cheering from the sidelines. We both believe that the actions of individuals, wisely targeted, can have a huge impact on human affairs. We both believe that inspired political action, at the right time, can unleash vast public resources in support of important transformational projects.

We also recognise that delays have major consequences. Each single day that passes without the widespread availability of reliable treatments for biological aging, upwards of 100,000 people die as a result of aging-related diseases. That’s 100,000 unnecessary human deaths, every single day – preceded in almost every case by extended suffering and heartache.

Moving faster

On a positive note, there is considerable good news to report regarding progress with regenerative medicine and rejuvenation biotechnology. The Undoing Aging conference in Berlin last month featured an encouraging set of reports from a host of world-leading scientists working in this field. Keep an eye on the Undoing Aging channel on YouTube for videos from that event. For a review of the human implications of these scientific breakthroughs, the forthcoming RAADfest in Las Vegas in October will be well worth attending – to hear about “the most powerful information and inspiration for staying alive”.

But the opportunity exists for progress to go much faster, if more elements of society decide to put their weight behind this project.

That’s where Miel comes in. José is a well-known figure in Spain, due to his many media appearances there. Current indications are that he stands a fighting chance of being elected to the European Parliament. If elected, he’ll be a tireless public advocate for the cause of rejuvenation healthcare. He’ll promote studies of the economic implications of different scenarios for the treatment of aging. He’ll also champion the creation of a European Agency for Anti-Aging, to boost research on how addressing aging can bring multiple benefits for the treatment of individual aging-related diseases, such as dementia, cancer, and heart failure.

You’ll find a number of articles on the Miel blog about these aspects of Miel policy. For example, see “Within 25 years, dying will be optional” and “I’m not afraid of artificial intelligence, I’m afraid of human stupidity”.

You’ll also observe from its website how Miel is, wisely, giving voice in Spain to a community that perceives itself to be under-represented, namely the Latin Americans – people like José himself, who was born in Venezuela. Those of us who aren’t Latin Americans should appreciate the potential for positive change that this political grouping can bring.

Time for action

Despite the groundswell of popular support that Miel is receiving, it’s still in the balance whether the party will indeed receive enough votes throughout Spain to gain at least one member in the European Parliament.

I’m told that what will make a big difference is an old-fashioned word: money.

If it receives more donations, Miel will be able to place more advertisements in social media (Facebook, YouTube, Instagram, etc). With its messages in front of more eyeballs, the chance increases of popular support at the ballot box.

In a better world, money would have a lower influence over politics. But whilst we should all aspire to move politics into that better state, we need to recognise the present reality. In that reality, donations have a big role to play.

To support Miel, visit the party’s donation page. Donations are accepted via credit cards, debit cards, or PayPal.

But please don’t delay. The elections are in just one month’s time. The time for action is now.

25 January 2019

To make a dent in the universe

Suppose you saw that science and technology had the potential to significantly extend healthy lifespans, but that very few scientists or technologists were working on these projects.

Suppose you disagreed with the government spending huge sums of public money on the military – on the capability to kill – and wished for more spending instead on the defeat of aging (and all the terrible diseases that accelerate with aging).

Suppose you felt that too many leadership decisions in society were influenced by out-dated ideologies – for example, by belief systems that regard as literal many of the apocalyptic statements in millennia-old religious scriptures – and that you preferred decisions to be determined by cool reason and scientific evidence.

What might you do?

If you were Zoltan Istvan, in October 2014, you might decide on an audacious project. You might decide to announce your candidacy for becoming the President of the United States, as a representative of a newly conceived “Transhumanist Party”. You might decide that the resulting media attention would raise the public understanding of the possibility and desirability of using science and technology in favour of transhumanist goals. You might decide the project had a fair chance of making a dent in the universe – of accelerating humanity’s trajectory onwards and upwards.

Here’s what Istvan wrote at the time, in the Huffington Post:

Should a Transhumanist Run for US President?

I’m in the very early stages of preparing a campaign to try to run in the 2016 election for US President. I’ll be doing it as a transhumanist for the Transhumanist Party, a political organization I recently founded that seeks to use science and technology to radically improve the human being and the society we live in.

In addition to upholding American values, prosperity, and security, the three primary goals of my political agenda are as follows:

1) Attempt to do everything possible to make it so this country’s amazing scientists and technologists have resources to overcome human death and aging within 15-20 years—a goal an increasing number of leading scientists think is reachable.

2) Create a cultural mindset in America that embracing and producing radical technology and science is in the best interest of our nation and species.

3) Create national and global safeguards and programs that protect people against abusive technology and other possible planetary perils we might face as we transition into the transhumanist era.

In line with his confident personality, Istvan went on, in the very next paragraph, to issue a challenge to the status quo:

These three goals are so simple and obvious, you’d think every politician in the 21st Century would be publicly and passionately pursuing them. But they’re not. They’re more interested in landing your votes, in making you slave away at low-paying jobs, in keeping you addicted to shopping for Chinese-made trinkets, in forcing you to accept bandage medicine and its death culture, and in getting you to pay as much tax as possible for far-off wars (places where most of us will never step foot in).

In later months, Istvan decided to add two more ingredients to the project, to increase its potential impact:

  1. A declaration of a “Transhumanist Bill of Rights” in Washington DC
  2. The journey of a huge coffin-shaped “Immortality Bus” across the USA, to reach Washington DC.

What happened next has already been the subject of chapters in at least two books.

After the books, the film.

“Immortality or Bust” has its first public showing tomorrow (Jan 26th), at the historic United Artists Theatre in Los Angeles, as part of the Raw Science Film Festival. The film has already received the “Raw Breakthrough Award” associated with this festival. In view of the public interest, I expect people will have the chance to see it on Netflix and/or HBO in due course.

I had the opportunity to view a preview copy earlier this week. The film stirred a range of different emotions in me, particularly towards the end. (Spoilers are omitted from this blogpost!)

The producer, Daniel Sollinger, cleverly weaves together several different strands throughout the film:

  • The sheer audacity of the venture
  • The reactions of Istvan’s family – his wife, his mother, and his father – and how these reactions evolve over time
  • The various journalists who are shown interviewing Istvan, sometimes expressing sympathy, and sometimes expressing bemusement
  • Istvan’s interactions with the other transhumanists, futurists and life-extensionists who he meets on his journey across the USA
  • The struggles of the bus itself – the problems experienced in its “plumbing” (oil), as a kind of counterpoint to Istvan’s wishes for radical improvements in human biology
  • Encounters with members of different political parties.

There were a couple of times I wanted to yell at the screen, when I thought that Istvan’s interlocutors were making indefensible claims:

  • When John McAfee (yes, that John McAfee) was giving his interpretation of Darwinian evolutionary theory
  • When John Horgan of Scientific American effectively labelled transhumanism as a kind of cult that posed a problem for the good reputation of science.

Assessment

How will history ultimately assess the Immortality Bus and the Transhumanist Bill of Rights? In my view, it’s too early to say. In the meantime, the film Immortality or Bust provides a refreshing bird’s-eye view of both the struggles and the (minor) triumphs of the adventure so far.

Those who would criticise Istvan for his endeavours – and there are many – need to say what they would do instead.

Some choose to work on the technology itself. That’s something I respect and admire. My own assessment, however, is that the community of transhumanists needs to do more than contribute personal efforts to the science, technology, and/or entrepreneurial development of pro-health startups. We need to change the public conversation – something that Istvan has persistently tried to do.

In particular, we need to find the best ways to raise public awareness of the possibility and desirability of many more people getting involved in science and technology projects in support of significantly increased human flourishing. We need to answer the naysaying objections of bioconservatives and other opponents of transhumanism. We need to affirm that humanity can transcend the limitations which have held us back so many times in the past – the limitations in our bodies, our intellects, our emotions, and our social structures. We need to proclaim (as on the opening page of my own newly published book) that a new era is at hand: the era of sustainable superabundance – an era in which the positive potential of humanity can develop in truly profound ways.

We also need to transform the political environment in which we are all operating – a political environment that, if anything, has grown more dysfunctional over the last few years. That takes us back to the subject of the Transhumanist Party.

Going forwards

The Transhumanist Party which Istvan conjured into existence back in October 2014 has travelled a long way since then. Under the capable stewardship of Gennady Stolyarov (who took over as Chair of the party in November 2016), the U.S. Transhumanist Party has grown a leadership team of many talents, a website with rich content, and a platform with multiple policy proposals in various stages of readiness for adoption as legislation. It has revised, twice, the Transhumanist Bill of Rights, with version 3.0 being agreed by the party’s internal democratic processes on Dec 2-9 last year.

So far as I’m aware, there’s no v3.0 (or even v2.0) of the immortality bus. Yet.

What about overseas? Well, most of the Transhumanist Party organisations set up in other countries, from 2015 onwards, have long since faded from view. In the UK, however, a number of us feel it’s time to reboot that party. Watch out for more news! Or come to the London Futurists event on the 2nd of February, “Politics for profoundly enhanced human wellbeing”, where you will hear announcements from the UK party’s new joint leaders.

19 July 2018

Serious questions over PwC’s report on the impact of AI on jobs

Filed under: politics, robots, UBI, urgency — David Wood @ 7:47 pm

A report (PDF) issued on Tuesday by consulting giant PwC has received a lot of favourable press coverage.

Here’s PwC’s own headline summary: “AI and related technologies should create as many jobs as they displace”:

AI and related technologies such as robotics, drones and driverless vehicles could displace many jobs formerly done by humans, but will also create many additional jobs as productivity and real incomes rise and new and better products are developed.

We estimate that these countervailing displacement and income effects on employment are likely to broadly balance each other out over the next 20 years in the UK, with the share of existing jobs displaced by AI (c.20%) likely to be approximately equal to the additional jobs that are created…

BBC News picked up the apparent good news: “AI will create as many jobs as it displaces – report”:

A growing body of research claims the impact of AI automation will be less damaging than previously thought.

Forbes chose this headline: “AI Won’t Kill The Job Market But Keep It Steady, PwC Report Says”:

It’s impossible to say precisely how artificial intelligence will disrupt the job market, so researchers at PwC have taken a bird’s-eye view and pointed to the results of sweeping economic changes.

Their prediction, in a new report out Tuesday, is that it will all balance out in the end.

PwC are to be commended for setting out their reasoning clearly, over 16 pages (p36-p51) in their PDF report.

But three major questions need to be raised about their analysis. These questions throw a different light on the conclusions of the report.

A diagram in the PwC report captures the essence of their model: an “income effect”, in which rising productivity and real incomes create demand for additional jobs, set against a “displacement effect”, in which existing jobs are automated away.

Q1: How will firms handle the “income effect”?

I agree that automation is likely to generate significant amounts of additional profits, as well as market demand for extra goods and services.

But what’s the reason for assuming that firms will “hire more workers” in response to this demand?

Mightn’t it be more financially attractive to these companies to incorporate more automation instead? Mightn’t more robots be a better investment than more human workers?

The justification for thinking that there will be plenty of new jobs for humans in this scenario is the assumption that many tasks will remain outside the capability of automation. That is, the analysis depends on humans having skills which cannot be duplicated by AIs, software, robots, or other automation. The assumption is true today, but will it remain true over the next two decades?

PwC’s report points to sectors such as healthcare, social work, education, and science, as areas where jobs are likely to grow over the next twenty years. But that takes us to the second major question.

Q2: What prevents acceleration in the capabilities of AI?

PwC’s report, like many others that mainstream consultancies produce, basically assumes that the AI of 10-15 years’ time will be a simple extension of today’s AI.

Of course, no one knows for sure how AI will develop over the years ahead. But I see it as irresponsible to neglect scenarios in which AI progresses in leaps and bounds.

Just as the whole field of AI was given a huge shot in the arm by unexpected breakthroughs in the performance of deep learning from around 2012 onwards, we should be open to the possibility of additional breakthroughs in the years ahead, enabled by a combination of the following trends:

  • Huge commercial prizes are awaiting the companies that can improve their AI capabilities
  • Huge military prizes are awaiting the countries that can improve their AI capabilities
  • More developers, entrepreneurs, designers, and systems integrators are active in AI than ever before, exploring an incredible variety of different concepts
  • Increased knowledge of how the human brain operates is being fed into ideas for how to improve AI
  • Cheaper hardware, including easy access to vast cloud computing resources, means that investigations of novel AI models can take place more quickly than before
  • AI can be used to improve some of its own capabilities, in positive feedback loops, and in new “generative adversarial” settings
  • Hardware innovations including new chipset designs and quantum computing could turn today’s crazy ideas into tomorrow’s practical realities.

Today’s AI already shows considerable promise in fields such as transfer learning, artificial creativity, the detection and simulation of emotions, and concept formulation. How quickly will progress occur? My view: slowly, and then quickly.

Q3: How might the “displacement effect” be altered?

In parallel with rating the income effect much more highly than I think is prudent, the PwC analysis offers in my view some dubious reasoning for lowering the displacement effect:

Although we estimate that up to 30% of existing UK jobs could be at high risk of being automated, a job being at “high risk” of being automated does not mean that it will definitely be automated, as there could be a range of economic, legal and regulatory and organisational barriers to the adoption of these new technologies…

We think it is reasonable to scale down our estimates by a factor of two thirds to reflect these barriers, so our central estimate of the proportion of existing jobs that will actually be automated over the next 20 years is reduced to 20%.

Yes, a whole panoply of human factors can alter the speed of the take-up of new technology. But such factors aren’t always brakes. In some circumstances – as perceptions change – they can become accelerators.

Consider if companies in one country (e.g. the UK) are slow to adopt some new technology, but rival companies overseas act more quickly. Declining competitiveness will be one reason for the mindset to change.

A different example: attitudes towards interracial marriages, or towards same-sex marriages, changed slowly for a long time, until they started to change faster.
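
To make that sensitivity concrete, here is a minimal numerical sketch in Python. The 30% at-risk share, the two-thirds barrier factor, and the roughly 20% of additional jobs created are taken from PwC’s published figures; the “human hire fraction” parameter, and the idea of varying these numbers, are my own illustrative assumptions rather than anything in PwC’s model.

    # Illustrative sensitivity sketch -- not PwC's actual model; figures as noted below.
    AT_RISK_SHARE = 0.30        # PwC: share of existing UK jobs at high risk of automation
    INCOME_EFFECT_JOBS = 0.20   # PwC's headline figure for additional jobs created (share of existing jobs)

    def net_change(barrier_factor, human_hire_fraction):
        """Net change in jobs, as a share of existing jobs.

        barrier_factor: how far economic, legal and organisational barriers scale
                        down the at-risk share (PwC assume two-thirds).
        human_hire_fraction: assumed share of income-effect demand met by hiring
                             humans rather than by further automation.
        """
        displaced = AT_RISK_SHARE * barrier_factor
        created = INCOME_EFFECT_JOBS * human_hire_fraction
        return created - displaced

    for barriers in (2/3, 1.0):        # barriers acting as brakes vs. barriers largely overcome
        for hires in (1.0, 0.5):       # all new demand met by humans vs. half met by further automation
            print(f"barrier factor {barriers:.2f}, human hire fraction {hires:.1f}: "
                  f"net jobs change {net_change(barriers, hires):+.0%}")

On PwC’s own figures the two effects cancel out. But if the barriers prove weaker than assumed, or if firms meet much of the new demand with further automation rather than with human hires, the net effect turns significantly negative.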

Q4: What are the consequences of negligent forecasting?

Here’s a bonus question. Does it really matter if PwC get these forecasts wrong? Or is it better to err on the conservative side?

I imagine PwC consultants reasoning along the following lines. Let’s avoid panic. Changes in the job market are likely to be slow in at least the shorter term. Provided that remains the case, the primary pieces of policy advice offered in the report make sense:

Government should invest more in ‘STEAM’ skills that will be most useful to people in this increasingly automated world.

Place-based industrial strategy should target job creation.

The report follows up these recommendations with a different kind of policy advice:

Government should strengthen the safety net for those who find it hard to adjust to technological changes.

But the question is: how much attention should be given, in relative terms, to these two different kinds of advice? Should society put more effort into new training programmes, or into redesigning the prevailing social contract?

So long as the impact of automation on the job market is relatively small, perhaps less effort is needed to work on a better social safety net. But if the impact could be significantly higher, well, many people find that too frightening to contemplate. Hence the desire to sweep such ideas under the carpet – similar to how polite society once avoided using the word “cancer”.

My own view is that the balance of emphasis in the PwC report is the wrong way round. Society urgently needs to anticipate new structures (and new philosophies) that cope with large proportions of the workforce no longer being able to earn income from paid employment.

That’s the argument I made, for example, in my opening remarks at the recent London Futurists conference on UBIA (Universal Basic Income and/or Alternatives), and in the wider analysis I presented at the end of that event.

To be clear, I see many big challenges in working out how a new post-work social contract will operate – and how society can transition from our present system to this new one. But the fact that these tasks are hard is all the more reason to look at them calmly and carefully. Obscuring the need for these tasks under a flourish of proposals to increase ‘STEAM’ skills and improve apprenticeship schemes is, sadly, irresponsible.
