dw2

3 November 2022

Four options for avoiding an AI cataclysm

Let’s consider four hard truths, and then four options for a solution.

Hard truth 1: Software has bugs.

Even when clever people write the software, and that software passes numerous verification tests, any complex software system generally still has bugs. If the software encounters a circumstance outside its verification suite, it can go horribly wrong.

Hard truth 2: Just because software becomes more powerful, that won’t make all the bugs go away.

Newer software may run faster. It may incorporate input from larger sets of training data. It may gain extra features. But none of these developments mean the automatic removal of subtle errors in the logic of the software, or shortcomings in its specification. It might still reach terrible outcomes – just quicker than before!

Hard truth 3: As AI becomes more powerful, there will be more pressure to deploy it in challenging real-world situations.

Consider the real-time management of:

  • Complex arsenals of missiles, anti-missile missiles, and so on
  • Geoengineering interventions, which are intended to bring the planet’s climate back from the brink of a cascade of tipping points
  • Devious countermeasures against the growing weapons systems of a group (or nation) with a dangerously unstable leadership
  • Social network conversations, where changing sentiments can have big implications for electoral dynamics or for the perceived value of commercial brands
  • Ultra-hot plasmas inside whirling magnetic fields in nuclear fusion energy generators
  • Incentives for people to spend more money than is wise, on addictive gambling sites
  • The buying and selling of financial instruments, to take advantage of changing market sentiments.

In each case, powerful AI software could be a very attractive option. A seductive option. Especially if it has been written by clever people, and appears to have a good track record of delivering results.

Until it goes wrong. In which case the result could be cataclysmic. (Accidental nuclear war. The climate walloped past a tipping point in the wrong direction. Malware going existentially wrong. Partisan outrage propelling a psychological loose cannon over the edge. Easy access to weapons of mass destruction. Etc.)

Indeed, the real risk of AI cataclysm – as opposed to the Hollywood version of any such risk – is that an AI system may acquire so much influence over human society and our surrounding environment that a mistake in that system could cataclysmically reduce human wellbeing all over the world. Billions of lives could be extinguished, or turned into a very pale reflection of their present state.

Such an outcome could arise in any of four ways – four catastrophic error modes. In brief, these are:

  1. Implementation defect
  2. Design defect
  3. Design overridden
  4. Implementation overridden.

Hard truth 4: There are no simple solutions to the risks described above.

What’s more, people who naively assume that a simple solution can easily be put in place (or already exists) are making the overall situation worse. They encourage complacency, whereas greater attention is urgently needed.

But perhaps you disagree?

That’s the context for the conversation in Episode 11 of the London Futurists Podcast, which was published yesterday morning.

In just thirty minutes, that episode dug deep into some of the ideas in my recent book The Singularity Principles. Co-host Calum Chace and I found plenty on which to agree, but had differing opinions on one of the most important questions.

Calum listed three suggestions that people sometimes make for how the dangers of potentially cataclysmic AI might be handled.

In response, I described a different approach – something that Calum said would be a fourth idea for a solution. As you can hear from the recording of the podcast, I evidently left him unconvinced.

Therefore, I’d like to dig even deeper.

Option 1: Humanity gets lucky

It might be the case that AI software which is smart enough will embody an unshakeable commitment toward humanity having the best possible experience.

Such software won’t miscalculate (after all, it is superintelligent). If there are flaws in how it has been specified, it will be smart enough to notice these flaws, rather than stubbornly following through on the letter of its programming. (After all, it is superintelligent.)

Variants of this wishful thinking exist. In some variants, what will guarantee a positive outcome isn’t just a latent tendency of superintelligence toward superbenevolence. It’s the invisible hand of the free market that will guide consumer choices away from software that might harm users, toward software that never, ever, ever goes wrong.

My first response is that software which appears to be bug-free can, nevertheless, harbour deep mistakes. It may be superintelligent, but that doesn’t mean it’s omniscient or infallible.

Second, software which is bug free may be monstrously efficient at doing what some of its designers had in mind – manipulating consumers into actions which increase the share price of a given corporation, despite all the externalities arising.

Moreover, it’s too much of a stretch to say that greater intelligence always makes you wiser and kinder. There are plenty of dreadful counterexamples, from humans in the worlds of politics, crime, business, academia, and more. Who is to say that a piece of software with an IQ equivalent to 100,000 will be sure to treat us humans any better than we humans sometimes treat swarms of insects (e.g. ant colonies) that get in our way?

Do you feel lucky? My view is that any such feeling, in these circumstances, is rash in the extreme.

Option 2: Safety engineered in

Might a team of brilliant AI researchers, Mary and Flo (to make up a couple of names), devise a clever method that will ensure their AI (once it is built) never harms humanity?

Perhaps the answer lies in some advanced mathematical wizardry. Or in chiselling a 21st century version of Asimov’s Laws of Robotics into the chipsets at the heart of computer systems. Or in switching from “correlation logic” to “causation logic”, or some other kind of new paradigm in AI systems engineering.

Of course, I wish Mary and Flo well. But their ongoing research won’t, by itself, prevent lots of other people releasing their own unsafe AI first. Especially when these other engineers are in a hurry to win market share for their companies.

Indeed, the considerable effort being invested by various researchers and organisations in a search for a kind of fix for AI safety is, arguably, a distraction from a sober assessment of the bigger picture. Better technology, better product design, better mathematics, and better hardware can all be part of the full solution. But that full solution also needs, critically, to include aspects of organisational design, economic incentives, legal frameworks, and political oversight. That’s the argument I develop in my book. We ignore these broader forces at our peril.

Option 3: Humans merge with machines

If we can’t beat them, how about joining them?

If human minds are fused into silicon AI systems, won’t the good human sense of these minds counteract any bugs or design flaws in the silicon part of the resulting hybrid?

With such a merger in place, human intelligence will automatically be magnified, as AI improves in capability. Therefore, we humans wouldn’t need to worry about being left behind. Right?

I see two big problems with this idea. First, so long as human intelligence is rooted in something like the biology of the brain, the mechanisms for any such merger may only allow relatively modest increases in human intelligence. Our biological brains would be bottlenecks that constrain the speed of progress in this hybrid case. Compared to pure AIs, the human-AI hybrid would, after all, be left behind in this intelligence race. So much for humans staying in control!

An even bigger problem is the realisation that a human with superhuman intelligence is likely to be at least as unpredictable and dangerous as an AI with superhuman intelligence. The magnification of intelligence will allow that superhuman human to do all kinds of things with great vigour – settling grudges, acting out fantasies, demanding attention, pursuing vanity projects, and so on. Recall: power tends to corrupt. Such a person would be able to destroy the earth. Worse, they might want to do so.

Another way to state this point is that, just because AI elements are included inside a person, that won’t magically ensure that these elements become benign, or are subject to the full control of the person’s best intentions. Consider as comparisons what happens when biological viruses enter a person’s body, or when a cancer grows there. In neither case does the intruding element lose its ability to cause damage, just on account of being part of a person who has humanitarian instincts.

This reminds me of the statement that is sometimes heard, in defence of accelerating the capabilities of AI systems: “I am not afraid of artificial intelligence. I am afraid of human stupidity”.

In reality, what we need to fear is the combination of imperfect AI and imperfect humanity.

The conclusion of this line of discussion is that we need to do considerably more than enable greater intelligence. We also need to accelerate greater wisdom – so that any beings with superhuman intelligence will operate truly beneficently.

Option 4: Greater wisdom

The cornerstone insight of ethics is that, just because we can do something, and indeed may even want to do that thing, it doesn’t mean we should do that thing.

Accordingly, human societies since prehistory have placed constraints on how people should behave.

Sometimes, moral sanction is sufficient: people constrain their actions in deference to public opinion. In other cases, restrictions are codified into laws and regulations.

Likewise, just because a corporation could boost its profits by releasing a new version of its AI software, that doesn’t mean it should release that software.

But what is the origin of these “should” imperatives? And how do we resolve conflicts, when two different groups of people champion two different sets of ethical intuitions?

Where can we find a viable foundation for ethical restrictions – something more solid than “we’ve always done things like this” or “this feels right to me” or “we need to submit to the dictates in our favourite holy scripture”?

Welcome to the world of philosophy.

It’s a world that, according to some observers, has made little progress over the centuries. People still argue over fundamentals. Deontologists square off against consequentialists. Virtue ethicists stake out a different position.

It’s a world in which it is easier to poke holes in the views held by others than to defend a consistent view of your own.

But it’s my position that the impending threat of cataclysmic AI impels us to reach a wiser agreement.

It’s like how the devastation of the Covid pandemic impelled society to find significantly quicker ways to manufacture, verify, and deploy vaccines.

It’s like how society can come together, remarkably, in a wartime situation, notwithstanding the divisions that previously existed.

In the face of the threats of technology beyond our control, minds should focus, with unprecedented clarity. We’ll gradually build a wider consensus in favour of various restrictions and, yes, in favour of various incentives.

What’s your reaction? Is option 4 simply naïve?

Practical steps forward

Rather than trying to “boil the ocean” of philosophical disputes over contrasting ethical foundations, we can, and should, proceed in a kaizen manner.

To start with, we can give our attention to specific individual questions:

  • What are the circumstances when we should welcome AI-powered facial recognition software, and when should we resist it?
  • What are the circumstances when we should welcome AI systems that supervise aspects of dangerous weaponry?
  • What are the circumstances that could transform AI-powered monitoring systems from dangerous to helpful?

As we reach some tentative agreements on these individual matters, we can take the time to highlight principles with potential wider applicability.

In parallel, we can revisit some of the agreements (explicit and implicit) for how we measure the health of society and the liberties of individuals:

  • The GDP (Gross Domestic Product) statistics that provide a perspective on economic activities
  • The UDHR (Universal Declaration of Human Rights) statement that was adopted by the United Nations General Assembly in 1948.

I don’t deny it will be hard to build consensus. It will be even harder to agree how to enforce the guidelines arising – especially in light of the wretched partisan conflicts that are poisoning the political processes in a number of parts of the world.

But we must try. And with some small wins under our belt, we can anticipate momentum building.

These are some of the topics I cover in the closing chapters of The Singularity Principles.

I by no means claim to know all the answers.

But I do believe that these are some of the most important questions to address.

And one thing that could help us make progress is – you guessed it – AI. In the right circumstances, AI can help us think more clearly, and can propose new syntheses of our previous ideas.

Thus today’s AI can provide stepping stones to the design and deployment of better, safer, wiser AI tomorrow. That’s provided we maintain human oversight.

Footnotes

The image above includes a design by Pixabay user Alexander Antropov, used with thanks.

See also this article by Calum in Forbes, Taking Back Control Of The Singularity.

19 June 2020

Highlighting probabilities

Filed under: communications, education, predictability, risks — David Wood @ 7:54 pm

Probabilities matter. If society fails to appreciate probabilities, and insists on seeing everything in certainties, a bleak future awaits us all (probably).

Consider five predictions, and common responses to these predictions.

Prediction A: If the UK leaves the EU without a deal, the UK will experience a significant economic downturn.

Response A: We’ve heard that prediction before. Before the Brexit vote, it was predicted that a major economic downturn would happen straightaway if the result was “Leave”. That downturn failed to take place. So we can discard the more recent prediction. It’s just “Project Fear” again.

Prediction B (made in Feb 2020): We should anticipate a surge in infections and deaths from Covid-19, and take urgent action to prevent transmissions.

Response B: We’ve heard that prediction before. Bird flu was going to wreak havoc. SARS and MERS, likewise, were predicted to kill hundreds of thousands. These earlier predictions were wrong. So we can discard the more recent prediction. It’s just “Project Pandemic” again.

Prediction C: We should prepare for the advent of artificial superintelligence, the most disruptive development in all of human history.

Response C: We’ve heard that prediction before. AIs more intelligent than humans have often been predicted. No such AI has been developed. These earlier predictions were wrong. So there’s no need to prepare for ASI. It’s just “Project Hollywood Fantasy” again.

Prediction D: If we don’t take urgent action, the world faces a disaster from global warming.

Response D: We’ve heard that prediction before. Climate alarmists told us some time ago “you only have twelve years to save the planet”. Twelve years passed, and the planet is still here. So we can ignore what climate alarmists are telling us this time. It’s just “Project Raise Funding for Climate Science” again.

Prediction E (made in mid December 1903): One day, humans will fly through the skies in powered machines that are heavier than air.

Response E: We’ve heard that prediction before. All sorts of dreamers and incompetents have naively imagined that the force of gravity could be overcome. They have all come to ruin. All these projects are a huge waste of money. Experts have proved that heavier than air flying machines are impossible. We should resist this absurdity. It’s just “Langley’s Folly” all over again.

The vital importance of framing

Now, you might think that I write these words to challenge the scepticism of the people who made the various responses listed. It’s true that these responses do need to be challenged. In each case, the response involves an unwarranted projection from the past into the future.

But the main point on my mind is a bit different. What I want to highlight is the need to improve how we frame and present predictions.

In all the above cases – A, B, C, D, E – the response refers to previous predictions that sounded similar to the more recent ones.

Each of these earlier predictions should have been communicated as follows:

  • There’s a possible outcome we need to consider. For example, the possibility of an adverse economic downturn immediately after a “Leave” vote in the Brexit referendum.
  • That outcome is possible, though not inevitable. We can estimate a rough probability of it happening.
  • The probability of the outcome will change if various actions are taken. For example, swift action by the Bank of England, after a Leave vote, could postpone or alleviate an economic downturn, whereas eventually leaving the EU, especially without a deal in place, would be likely to accelerate and intensify it. (A small sketch of this kind of conditional forecast appears below.)
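
Here is that small sketch, in Python: a forecast expressed as a probability that shifts with actions, rather than as a certainty. The baseline probability and the adjustments are invented numbers, chosen purely for illustration, not estimates about any real economy:

```python
# Illustrative only: a forecast expressed as a probability that changes
# depending on which actions are taken. All numbers are hypothetical.

baseline = 0.55  # invented baseline probability of a downturn, with no intervention

adjustments = {
    "swift central bank action": -0.20,  # hypothetical mitigating action
    "leaving without a deal": +0.25,     # hypothetical aggravating action
}

def forecast(actions):
    """Return the adjusted probability, clamped to the valid range [0, 1]."""
    p = baseline + sum(adjustments[a] for a in actions)
    return min(max(p, 0.0), 1.0)

print(f"{forecast([]):.2f}")                             # 0.55
print(f"{forecast(['swift central bank action']):.2f}")  # 0.35
print(f"{forecast(['leaving without a deal']):.2f}")     # 0.80
```

The point is not the particular numbers, but the shape of the message: an outcome with a rough probability attached, plus an explicit statement of how that probability moves as circumstances and actions change.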

In other words, our discussions of the future need to embrace uncertainty, and need to emphasise how human action can alter that uncertainty.

What’s more, the mention of uncertainty must be forceful, rather than something that gets lost in small print.

So the message itself must be nuanced, but the fact that the message is nuanced must be underscored.

All this makes things more complicated. It disallows any raw simplicity in the messaging. Understandably, many activists and enthusiasts prefer simple messages.

However, if a message has raw simplicity, and is subsequently seen to be wrong, observers will be likely to draw the wrong conclusion.

That kind of wrong conclusion lies behind each of the flawed responses A to E above.

Sadly, lots of people who are evidently highly intelligent fail to take proper account of probabilities in assessing predictions of the future. At the back of their minds, an argument like the following holds sway:

  • An outcome predicted by an apparent expert failed to materialise.
  • Therefore we should discard anything else that apparent expert says.

Quite likely the expert in question was aware of the uncertainties affecting their prediction. But they failed to emphasise these uncertainties strongly enough.

Transcending cognitive biases

As we know, we humans are prey to large numbers of cognitive biases. Even people with a good education, and who are masters of particular academic disciplines, regularly fall foul of these biases. They seem to be baked deep into our brains, and may even have conveyed some survival benefit, on average, in times long past. In the more complicated world we’re now living in, we need to help each other to recognise and resist the ill effects of these biases. Including the ill effects of the “probability neglect” bias which I’ve been writing about above.

Indeed, one of the most important lessons from the current chaotic situation arising from the Covid-19 pandemic is that society in general needs to raise its understanding of a number of principles related to mathematics:

  • The nature of exponential curves – and how linear thinking often comes to grief, in failing to appreciate exponentials (see the short sketch below)
  • The nature of probabilities and uncertainties – and how binary thinking often comes to grief, in failing to appreciate probabilities.
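
Here is the short sketch promised above, again in Python, contrasting a linear extrapolation with exponential doubling. The starting count, the daily increment, and the three-day doubling time are all made-up numbers, chosen purely to illustrate how quickly the two mental models diverge:

```python
# Illustrative only: linear extrapolation versus exponential growth.
# All parameters below are hypothetical.

start = 100          # initial daily case count (invented)
daily_increase = 50  # linear model: 50 extra cases per day (invented)
doubling_days = 3    # exponential model: doubles every 3 days (invented)

for day in range(0, 29, 7):
    linear = start + daily_increase * day
    exponential = start * 2 ** (day / doubling_days)
    print(f"day {day:2d}: linear ~{linear:,.0f}, exponential ~{exponential:,.0f}")

# After four weeks the linear model predicts around 1,500 cases per day,
# while the exponential model predicts over 60,000. That is the kind of
# divergence which linear thinking consistently fails to anticipate.
```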

This raising of understanding won’t be easy. But it’s a task we should all embrace.

Image sources: Thanasis Papazacharias and Michel Müller from Pixabay.

Footnote 1: The topic of “illiteracy about exponentials and probabilities” is one I’ll be mentioning in this Fast Future webinar taking place on Sunday evening.

Footnote 2: Some people who offer a rationally flawed response like the ones above are, sadly, well aware of the flawed nature of their response, but they offer it anyway. They do so since they believe the response may well influence public discussion, despite being flawed. They put a higher value on promoting their own cause, rather than on keeping the content of the debate as rational as possible. They don’t mind adding to the irrationality of public discussion. That’s a topic for a separate discussion, but it’s my view that we need to find both “carrots” and “sticks” to discourage people from deliberately promoting views they know to be irrational. And, yes, you guessed it, I’ll be touching on that topic too on Sunday evening.

18 June 2020

Transhumanist alternatives to contempt and fear

Contempt and fear. These are the public reactions that various prominent politicians increasingly attract these days.

  • We feel contempt towards these politicians because they behave, far too often, in contemptible ways.
  • We feel fear regarding these politicians on account of the treacherous paths they appear to be taking us down.

That’s why many fans of envisioning and building a better world – including many technologists and entrepreneurs – would prefer to ignore politics, or to minimise its influence.

These critics of politics wish, instead, to keep their focus on creating remarkable new technology or on building vibrant new business.

Politics is messy and ugly, say these critics. It’s raucous and uncouth. It’s unproductive. Some would even say that politics is unnecessary. They look forward to politics reducing in size and influence.

Their preferred alternative to contempt and fear is to try to put the topic out of their minds.

I disagree. Putting our heads in the sand about politics is a gamble fraught with danger. Looking the other way won’t prevent our necks from being snapped when the axe falls. As bad outcomes increase from contemptible, treacherous politics, they will afflict everyone, everywhere.

We need a better alternative. Rather than distancing ourselves from the political sphere, we need to engage, intelligently and constructively.

As I’ll review below, technology can help us in that task.

Constructive engagement

Happily, as confirmed by positive examples from around the world, there’s no intrinsic reason for politics to be messy or ugly, raucous or uncouth.

Nor should politics be seen as some kind of unnecessary activity. It’s a core part of human life.

Indeed, politics arises wherever people gather together. Whenever we collectively decide the constraints we put on each other’s freedom, we’re taking part in politics.

Of course, this idea of putting constraints on each other’s freedoms is deeply unpopular in some circles. Liberty means liberty, comes the retort.

My answer is: things are more complicated. That’s for two reasons.

To start with, there are multiple kinds of freedom, each of which is important.

For example, consider the “four essential human freedoms” highlighted by US President FD Roosevelt in a speech in January 1941:

We look forward to a world founded upon four essential human freedoms.

The first is freedom of speech and expression – everywhere in the world.

The second is freedom of every person to worship God in their own way – everywhere in the world.

The third is freedom from want – which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants – everywhere in the world.

The fourth is freedom from fear – which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbour – anywhere in the world.

As well as caring about freeing people from constraints on their thoughts, speech, and actions, we generally also care about freeing people from hunger, disease, crime, and violence. Steps to loosen some of these constraints often risk decreasing other types of liberty. As I said, things are complicated.

The second reason builds on the previous point and makes it clearer why any proclamation “liberty means liberty” is overly simple. It is that our actions impact on each other’s wellbeing, both directly and indirectly.

  • If we speed in our cars, confident in our own ability to drive faster than the accepted norms, we risk seriously reducing the personal liberties of others if we suffer a momentary lapse in concentration.
  • If we share a hateful and misleading message on social media, confident in our own intellectual robustness, we might push someone reading that message over a psychological ledge.
  • If we discard waste products into the environment, confident that little additional harm will come from such pollution, we risk an unexpected accumulation of toxins and other harms.
  • If we grab whatever we can in the marketplace, confident that our own vigour and craftiness deserve a large reward, we could deprive others of the goods, services, and opportunities they need to enjoy a good quality of life.
  • If we publicise details of bugs in software that is widely used, or ways to increase the deadliness of biological pathogens, confident that our own reputation will rise as a result inside the peer groups we wish to impress, we risk enabling others to devastate the infrastructures upon which so much of life depends – electronic infrastructure and/or biological infrastructure.
  • If we create and distribute software that can generate mind-bending fake videos, we risk precipitating a meltdown in the arena of public discussion.
  • If we create and distribute software that can operate arsenals of weapons autonomously, freed from the constraints of having to consult slow-thinking human overseers before initiating an attack, we might gain lots of financial rewards, but at the risk of all manner of catastrophe from any defects in the design or implementation of that system.

In all these examples, there’s a case to agree some collective constraints on personal freedoms.

The rationale for imposing and accepting specific constraints on our freedom is to secure a state of affairs in which overall freedom flourishes more fully. That’s a state of affairs in which we will all benefit.

In summary, greater liberty arises as a consequence of wise social coordination, rather than existing primarily as a reaction against such coordination. Selecting and enforcing social constraints is the first key task of politics.

Recognising and managing complexes

But who is the “we” who decides these constraints? And who will ensure that constraints put in place at one time, reflecting the needs of that time, are amended promptly when circumstances change, rather than remaining in place, disproportionately benefiting only a subset of society?

That brings us to a second key task of politics: preventing harmful dominance of society by self-interested groups of individuals – groups sometimes known as “complexes”.

This concept of the complex featured in the farewell speech made by President Eisenhower in January 1961. Eisenhower issued a profound warning that “the military industrial complex” posed a growing threat to America’s liberty and democracy:

In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.

We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted. Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defence with our peaceful methods and goals, so that security and liberty may prosper together.

As a distinguished former military general, Eisenhower spoke with evident authority on this topic:

Until the latest of our world conflicts, the United States had no armaments industry. American makers of ploughshares could, with time and as required, make swords as well. But now we can no longer risk emergency improvisation of national defence; we have been compelled to create a permanent armaments industry of vast proportions. Added to this, three and a half million men and women are directly engaged in the defence establishment. We annually spend on military security more than the net income of all United States corporations.

This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence – economic, political, even spiritual – is felt in every city, every Statehouse, every office of the Federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society.

It’s one thing to be aware of the risks posed by a military industrial complex (and the associated trade in armaments). It’s another thing to successfully manage these risks. Similar risks apply to other vested interest “complexes” that can likewise subvert societal wellbeing:

  • A carbon energy complex, which earns huge profits from the ongoing use of carbon-based fuels, and which is motivated to minimise appreciation of the risks to climate from continuing use of these fuels
  • A financial complex, which (likewise) earns huge profits, by means of complicated derivative products that are designed to evade regulatory scrutiny whilst benefiting in cases of financial meltdown from government handouts to banks that are perceived as “too big to fail”
  • An information technology complex, which collects vast amounts of data about citizens, and which enables unprecedented surveillance, manipulation, and control of people by corporations and/or governments
  • A medical industrial complex, which is more interested in selling patients expensive medical treatment over a long period of time than in low-cost solutions which would prevent illnesses in the first place (or cure them quickly)
  • A political complex, which seeks above all else to retain its hold on political power, often by means of undermining a free press, an independent judiciary, and any credible democratic opposition.

You can probably think of other examples.

In all these cases, the practical goals of the complex are only weakly aligned with the goals of society as a whole. If society is not vigilant, the complex will subvert the better intentions of citizens. The complex is so powerful that it cannot be controlled by mere words of advocacy.

Beyond advocacy, we need effective politics. This politics can be supported by a number of vital principles:

  • Transparency: The operations of the various complexes need to be widely publicised and analysed, bringing them out of the shadows into the light of public understanding
  • Disclosure: Conflicts of interest must be made clear, to avoid the public being misled by individuals with ulterior motives
  • Accountability: Instances where key information is found to have been suppressed or distorted need to be treated very seriously, with the guilty parties having their reputations adjusted and their privileges diminished
  • Assessment of externalities: Evaluation systems should avoid focusing too narrowly on short-term metrics (such as financial profit) but should take into full account both positive and negative externalities – including new opportunities and new risks arising
  • Build bridges rather than walls: Potential conflicts should be handled by diplomacy, negotiation, and seeking a higher common purpose, rather than by driving people into antagonistic rival camps that increasingly bear hatred towards one another
  • Leanness: Decisions should focus on questions that matter most, rather than dictating matters where individual differences can easily be tolerated
  • Democratic oversight: People in leadership positions in society should be subject to regular assessment of their performance by a democratic review, that involves a dynamic public debate aiming to reach a “convergent opinion” rather than an “average opinion”.

Critically, all the above principles can be assisted by smart adoption of technology that enhances collaboration. This includes wikis (or similar) that map out the landscape of decisions. This also includes automated logic-checkers, and dynamic modelling systems. And that’s just the start of how technology can help support a better politics.

Transhumanist approaches to politics

The view that technology can assist humans to carry out core parts of our lives better than before is part of the worldview known as transhumanism.

Transhumanism asserts, further, that the assistance available from technology, wisely applied, extends far beyond superficial changes. What lies within our grasp is a set of radical improvements in the human condition.

As described in the short video “An Introduction to Transhumanism” – which, with over a quarter of a million views, is probably the most widely watched video on the subject – transhumanism is sometimes expressed in terms of the so-called “three supers”:

  • Super longevity: significantly improved physical health, including much longer lifespans – transcending human tendencies towards physical decay and decrepitude
  • Super intelligence: significantly improved thinking capability – transcending human tendencies towards mental blind spots and collective stupidity
  • Super wellbeing: significantly improved states of consciousness – transcending human tendencies towards depression, alienation, vicious emotions, and needless suffering.

My own advocacy of transhumanism actually emphasises one variant within the overall set of transhumanist philosophies: the variant known as technoprogressive transhumanism. This variant in effect adds one more “super” to the three already mentioned:

  • Super democracy: significantly improved social inclusion and resilience, whilst upholding diversity and liberty – transcending human tendencies towards tribalism, divisiveness, deception, and the abuse of power.

These radical improvements, by the way, can be brought about by a combination of changes at the level of individual humans, changes in our social structures, and changes in the prevailing sets of ideas (stories) that we tend to tell ourselves. Exactly what is the best combination of change initiatives, at these different levels, is something to be determined by a mix of thought and experiment.

Different transhumanists place their emphases upon different priorities for potential transformation.

If you’d like to listen in to that ongoing conversation, let me draw your attention to the London Futurists webinar taking place this Saturday – 20th of June – from 7pm UK time (BST).

In this webinar, four leading transhumanists will be discussing and contrasting their different views on the following questions (along with others that audience members raise in real time):

  • In a time of widespread anxiety about social unrest and perceived growing inequalities, what political approach is likely to ensure the greatest liberty?
  • In light of the greater insights provided by science into human psychology at both the individual and group levels, what are the threats to our wellbeing that most need to be guarded against, and which aspects of human character most need to be protected and uplifted?
  • What does the emerging philosophy of transhumanism, with its vision of conscious life evolving under thoughtful human control beyond the current human form, have to say about potential political interventions?

As you can see, the webinar is entitled “Politics for greater liberty: transhumanist perspectives”.

For details of the panellists, and to register to attend, click here.

Other views on the future of governance and the economy

If you’d like to hear a broader set of views on a related topic, then consider attending a Fast Future webinar taking place this Sunday – 21st June – from 6pm UK time (BST).

There will be four panellists in that webinar – one being me. We’ll each be presenting a snapshot of ideas from the chapters we contributed to the recent Fast Future book, Aftershocks and Opportunities – Scenarios for a Post-Pandemic Future, which was published on June 1st.

After the initial presentations, we’ll be responding to each other’s views, and answering audience questions.

My own topic in this webinar will be “More Aware, More Agile, More Alive”.

The other panellists, and their topics, will be:

  • Geoff Mulgan – “Using the Crisis to Remake Government for the Future”
  • Bronwyn Williams – “The Great Separation”
  • Rohit Talwar – “Post-Pandemic Government and the Economic Recovery Agenda: A Futurist Perspective”

I’m looking forward to a lively discussion!

Click here for more details of this event.

Transcending Politics

As I said above (twice), things are complicated. The science and engineering behind the various technological solutions are complicated. And the considerations about regulations and incentives, to constrain and guide our collective use of that technology, are complicated too. We should beware any overly simple claims about easy answers to these issues.

My fullest treatment of these issues is in a 423 page book of mine, Transcending Politics, that I published in 2018.

Over the last couple of weeks, I’ve been flicking through some of the pages of that book again. Although there are some parts where I would now wish to use a different form of expression, or some updated examples, I believe the material stands the test of time well.

If the content in this blogpost strikes you as interesting, why not take a closer look at that book? The book’s website contains opening extracts of each of the chapters, as well as an extended table of contents. I trust you’ll like it.

20 July 2018

Christopher Columbus and the surprising future of AI

Filed under: AGI, predictability, Singularity — David Wood @ 5:49 pm

There are plenty of critics who are sceptical about the future of AI. The topic has been over-hyped, say these critics. According to these critics, we don’t need to be worried about the longer-term repercussions of AI with superhuman capabilities. We’re many decades – perhaps centuries – from anything approaching AGI (artificial general intelligence) with skills in common sense reasoning matching (or surpassing) that of humans. As for AI destroying jobs, that, too, is a false alarm – or so the critics insist. AI will create at least as many jobs as it destroys.

In my previous blog post, Serious questions over PwC’s report on the impact of AI on jobs, I offered some counters to these critics. To my mind, this is no time for complacency: AI could accelerate in its capabilities, and take us by surprise. The kinds of breakthroughs that, in a previous era, might have been expected to take many decades, could actually take place in just a few short years. Rather than burying our heads in the sand, denying the possibility of any such acceleration, we need to pay more attention to the trends of technological change and the potential for disruptive new innovations.

The Christopher Columbus angle

Overnight, I’ve been reminded of an argument that I’ve used previously – towards the end of a rather long blogpost. It’s the argument that critics of the future of AI are similar to the critics of Christopher Columbus – the people who said, before his 1492 voyage across the Atlantic in search of a westerly route to Asia, that the effort was bound to be a bad investment.

Bear with me while I retell this analogy.

For years, Columbus tried to drum up support for what most people considered to be a hare-brained scheme. Most observers concluded that Columbus had fallen victim to a significant mistake – he estimated that the distance from the Canary Islands (off the coast of Morocco) to Japan was around 3,700 km, whereas the generally accepted figure was closer to 20,000 km. Indeed, the true size of the sphere of the Earth had been known since the 3rd century BC, due to a calculation by Eratosthenes, based on observations of shadows at different locations.
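
(As a brief aside on how that ancient calculation worked, using the round figures usually quoted, and bearing in mind that the precise modern equivalent of the stadion is still debated: at the summer solstice the sun cast no shadow at Syene, but stood about 7.2 degrees from the vertical at Alexandria, roughly 5,000 stadia to the north. Since 7.2 degrees is one fiftieth of a full circle, the circumference comes out at about 50 × 5,000 = 250,000 stadia, comfortably in the same ballpark as the modern figure of around 40,000 km.)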

Accordingly, when Columbus presented his bold proposal to courts around Europe, the learned members of the courts time and again rejected the idea. The effort would be hugely larger than Columbus supposed, they said. It would be a fruitless endeavour.

Columbus, an autodidact, wasn’t completely crazy. He had done a lot of his own research. However, he was misled by a number of factors:

  • Confusion between various ancient units of distance (the “Arabic mile” and the “Roman mile”)
  • How many degrees of longitude the Eurasian landmass occupied (225 degrees versus 150 degrees)
  • A speculative 1474 map, by the Florentine astronomer Toscanelli, which showed a mythical island “Antilla” located to the east of Japan (named as “Cippangu” in the map).

You can read the details in the Wikipedia article on Columbus, which provides numerous additional reference points. The article also contains a copy of Toscanelli’s map, with the true location of the continents of North and South America superimposed for reference.

No wonder Columbus thought his plan might work after all. Nevertheless, the 1490s equivalents of today’s VCs kept saying “No” to his pitches. Finally, spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian ocean around the tip of Africa), the Spanish king and queen agreed to take the risk of supporting his adventure. After stopping in the Canaries to restock, the Nina, the Pinta, and the Santa Maria set off westward. Five weeks later, the crew spotted land, in what we now call the Bahamas. And the rest is history.

But it wasn’t the history expected by Columbus, or by his backers, or by his critics. No-one had foreseen that a huge continent existed in the oceans in between Europe and Japan. None of the ancient writers – either secular or religious – had spoken of such a continent. Nevertheless, once Columbus had found it, the history of the world proceeded in a very different direction – including mass deaths from infectious diseases transmitted from the European sailors, genocide and cultural apocalypse, and enormous trade in both goods and slaves. In due course, it would be the ingenuity and initiatives of people subsequently resident in the Americas that propelled humans beyond the Earth’s atmosphere all the way to the moon.

What does this have to do with the future of AI?

Rational critics may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

Just as the contemporaries of Columbus erred in presuming they already knew all the large features of the earth’s continents (after all: if America really existed, surely God would have written about it in the Bible…), modern-day critics of AI can err in presuming they already know all the large features of the landscape of possible artificial minds.

When contemplating the space of all possible minds, some humility is in order. We cannot foretell in advance what configurations of intelligence are possible. We don’t know what may happen, if separate modules of reasoning are combined in innovative ways. After all, there are many aspects of the human mind which are still poorly understood.

When critics say that it is unlikely that present-day AI mechanisms will take us all the way to AGI, they are very likely correct. But it would be a horrendous error to draw the conclusion that meaningful new continents of AI capability are inevitably still the equivalent of 20,000 km into the distance. The fact is, we simply don’t know. And for that reason, we should keep an open mind.

One day soon, indeed, we might read news of some new “AUI” having been discovered – some Artificial Unexpected Intelligence, which changes history. It won’t be AGI, but it could have all kinds of unexpected consequences.

Beyond the Columbus analogy

Every analogy has its drawbacks. Here are three ways in which the discovery of an AUI could be different from the discovery by Columbus of America:

  1. In the 1490s, there was only one Christopher Columbus. Nowadays, there are scores (perhaps hundreds) of schemes underway to try to devise new models of AI. Many of these are proceeding with significant financial backing.
  2. Whereas the journey across the Atlantic (and, eventually, the Pacific) could be measured by a single variable (latitude), the journey across the vast multidimensional landscape of artificial minds is much less predictable. That’s another reason to keep an open mind.
  3. Discovering an AUI could drastically transform the future of exploration in the landscape of artificial minds. Assisted by AUI, we might get to AGI much quicker than without it. Indeed, in some scenarios, it might take only a few months after we reach AUI for us (now going much faster than before) to reach AGI. Or days. Or hours.

Footnote

If you’re in or near Birmingham on 11th September, I’ll be giving a Funzing talk on how to assess the nature of the risks and opportunities from superhuman AI. For more details, see here.

 

19 July 2018

Serious questions over PwC’s report on the impact of AI on jobs

Filed under: politics, robots, UBI, urgency — David Wood @ 7:47 pm

A report (PDF) issued on Tuesday by consulting giant PwC has received a lot of favourable press coverage.

Here’s PwC’s own headline summary: “AI and related technologies should create as many jobs as they displace”:

AI and related technologies such as robotics, drones and driverless vehicles could displace many jobs formerly done by humans, but will also create many additional jobs as productivity and real incomes rise and new and better products are developed.

We estimate that these countervailing displacement and income effects on employment are likely to broadly balance each other out over the next 20 years in the UK, with the share of existing jobs displaced by AI (c.20%) likely to be approximately equal to the additional jobs that are created…

BBC News picked up the apparent good news: “AI will create as many jobs as it displaces – report”:

A growing body of research claims the impact of AI automation will be less damaging than previously thought.

Forbes chose this headline: “AI Won’t Kill The Job Market But Keep It Steady, PwC Report Says”:

It’s impossible to say precisely how artificial intelligence will disrupt the job market, so researchers at PwC have taken a bird’s-eye view and pointed to the results of sweeping economic changes.

Their prediction, in a new report out Tuesday, is that it will all balance out in the end.

PwC are to be commended for setting out their reasoning clearly, over 16 pages (p36-p51) in their PDF report.

But three major questions need to be raised about their analysis. These questions throw a different light on the conclusions of the report.

A diagram in the report sets out the essence of the model that PwC used.

Q1: How will firms handle the “income effect”?

I agree that automation is likely to generate significant amounts of additional profits, as well as market demand for extra goods and services.

But what’s the reason for assuming that firms will “hire more workers” in response to this demand?

Mightn’t it be more financially attractive to these companies to incorporate more automation instead? Mightn’t more robots be a better investment than more human workers?

The justification for thinking that there will be plenty of new jobs for humans in this scenario, is the assumption that many tasks will remain outside the capability of automation. That is, the analysis depends on humans having skills which cannot be duplicated by AIs, software, robots, or other automation. The assumption is true today, but will it remain true over the next two decades?

PwC’s report points to sectors such as healthcare, social work, education, and science, as areas where jobs are likely to grow over the next twenty years. But that takes us to the second major question.

Q2: What prevents acceleration in the capabilities of AI?

PwC’s report, like many others that mainstream consultancies produce, basically assumes that the AI of 10-15 years’ time will be a simple extension of today’s AI.

Of course, no one knows for sure how AI will develop over the years ahead. But I see it as irresponsible to neglect scenarios in which AI progresses in leaps and bounds.

Just as the whole field of AI was given a huge shot in the arm by unexpected breakthroughs in the performance of deep learning from around 2012 onwards, we should be open to the possibility of additional breakthroughs in the years ahead, enabled by a combination of the following trends:

  • Huge commercial prizes are awaiting the companies that can improve their AI capabilities
  • Huge military prizes are awaiting the countries that can improve their AI capabilities
  • More developers, entrepreneurs, designers, and systems integrators are active in AI than ever before, exploring an incredible variety of different concepts
  • Increased knowledge of how the human brain operates is being fed into ideas for how to improve AI
  • Cheaper hardware, including easy access to vast cloud computing resources, means that investigations of novel AI models can take place more quickly than before
  • AI can be used to improve some of its own capabilities, in positive feedback loops, and in new “generative adversarial” settings
  • Hardware innovations including new chipset designs and quantum computing could turn today’s crazy ideas into tomorrow’s practical realities.

Today’s AI already shows considerable promise in fields such as transfer learning, artificial creativity, the detection and simulation of emotions, and concept formulation. How quickly will progress occur? My view: slowly, and then quickly.

Q3: How might the “displacement effect” be altered?

In parallel with rating the income effect much more highly than I think is prudent, the PwC analysis offers in my view some dubious reasoning for lowering the displacement effect:

Although we estimate that up to 30% of existing UK jobs could be at high risk of being automated, a job being at “high risk” of being automated does not mean that it will definitely be automated, as there could be a range of economic, legal and regulatory and organisational barriers to the adoption of these new technologies…

We think it is reasonable to scale down our estimates by a factor of two thirds to reflect these barriers, so our central estimate of the proportion of existing jobs that will actually be automated over the next 20 years is reduced to 20%.

Yes, a whole panoply of human factors can alter the speed of the take-up of new technology. But such factors aren’t always brakes. In some circumstances – as perceptions change – they can become accelerators.

Consider if companies in one country (e.g. the UK) are slow to adopt some new technology, but rival companies overseas act more quickly. Declining competitiveness will be one reason for the mindset to change.

A different example: attitudes towards interracial marriages, or towards same-sex marriages, changed slowly for a long time, until they started to change faster.

Q4: What are the consequences of negligent forecasting?

Here’s a bonus question. Does it really matter if PwC get these forecasts wrong? Or is it better to err on the conservative side?

I imagine PwC consultants reasoning along the following lines. Let’s avoid panic. Changes in the job market are likely to be slow in at least the shorter term. Provided that remains the case, the primary pieces of policy advice offered in the report make sense:

Government should invest more in ‘STEAM’ skills that will be most useful to people in this increasingly automated world.

Place-based industrial strategy should target job creation.

The report follows up these recommendations with a different kind of policy advice:

Government should strengthen the safety net for those who find it hard to adjust to technological changes.

But the question is: how much attention should be given, in relative terms, to these two different kinds of advice? Should society put more effort into new training programmes, or in redesigning the prevailing social contract?

So long as the impact of automation on the job market is relatively small, perhaps less effort is needed to work on a better social safety net. But if the impact could be significantly higher, well, many people find that too frightening to contemplate. Hence the desire to sweep such ideas under the carpet – similar to how polite society once avoided using the word “cancer”.

My own view is that the balance of emphasis in the PwC report is the wrong way round. Society urgently needs to anticipate new structures (and new philosophies) that cope with large proportions of the workforce no longer being able to earn income from paid employment.

That’s the argument I made in, for example, my opening remarks at the recent London Futurists conference on UBIA (Universal Basic Income and/or Alternatives), and in the wider analysis I presented at the end of that event.

To be clear, I see many big challenges in working out how a new post-work social contract will operate – and how society can transition from our present system to this new one. But the fact that these tasks are hard is all the more reason to look at them calmly and carefully. Obscuring the need for these tasks, under a flourish of proposals to increase ‘STEAM’ skills and improve apprentice schemes, is, sadly, irresponsible.

17 July 2018

Would you like your mind expanded?

Filed under: books, healthcare, psychology, religion — David Wood @ 10:15 pm

Several times while listening to the audio of the recent new book How to Change Your Mind by Michael Pollan, I paused the playback and thought to myself, “wow”.

Pollan is a gifted writer. He strings together words and sentences in a highly elegant way. But my reactions to his book were caused by the audacity of the ideas conveyed, even more than by the powerful rhythms and cadences of the words doing the conveying.

Pollan made his reputation as a writer about food. The most famous piece of advice he offered, earlier in his career, is the seven-word phrase “Eat food, not too much, mostly plants”. You might ask: What do you mean by food? Pollan’s answer: “Don’t eat anything your great grandmother wouldn’t recognize as food.”

With such a background, you might not expect any cutting-edge fireworks from Pollan. However, his most recent book bears the provocative subtitle What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence. That’s a lot of big topics. (On reflection, you’ll realise that your great grandmother might have had things to say about all these topics.)

The book covers its material carefully and patiently, from multiple different perspectives. I found it engaging throughout – from the section at the beginning when Pollan explained how he, in his late 50s, became more interested in this field – via sections covering the evolutionary history of mushrooms, thoughtful analyses of Pollan’s own varied experiences with various psychedelics, and the rich mix of fascinating characters in psychedelic history (many larger-than-life, others preferring anonymity) – to sections suggesting big implications for our understanding of mental wellbeing, illnesses of the mind, and the nature of spirituality.

If any of the following catch your interest, I suggest you check out How to Change Your Mind:

  • The likely origins of human beliefs about religion
  • Prospects for comprehensive treatments of depression, addiction, and compulsive behaviour
  • The nature of consciousness, the self, and the ego
  • Prospects for people routinely becoming “better than well”
  • Ways in which controversial treatments (e.g. those involving psychedelics) can in due course become accepted by straight-laced regulators from the FDA and the EMA
  • The perils of society collectively forgetting important insights from earlier generations of researchers.

Personally, I particularly enjoyed the sections about William James and Aldous Huxley. I already knew quite a lot about both of them before, but Pollan helped me see their work in a larger perspective. There were many other characters in the book that I learned about for the first time. Perhaps the most astonishing was Al Hubbard. Mind-boggling, indeed.

I see How to Change Your Mind as part of a likely tipping point in the public acceptability of psychedelics. It’s that well written.

In case it’s not clear, you ought to familiarise yourself with this book if:

  • You consider yourself a futurist – someone who attempts to anticipate key changes in social attitudes and practices
  • You consider yourself a transhumanist – someone interested in extending human experience beyond the ordinary.

3 May 2018

Recommended: The Longevity Code

If you’re interested in the latest advice on how to extend your healthspan, you should read The Longevity Code by Kris Verburgh.

The full title of the book is “The Longevity Code: Secrets to Living Well for Longer, from the Front Lines of Science”.

The book has the following description (on Goodreads):

Medical doctor and researcher Kris Verburgh is fast emerging as one of the world’s leading research authorities on the science of aging. The Longevity Code is Dr. Verburgh’s authoritative guide on why and how we age — and on the four most crucial areas we have control over, to slow down, and even reverse, the aging process.

We learn why some animal species age hardly at all while others age and die very quickly, and about the mechanisms at work that slowly but definitely cause our bodies to age, making us susceptible to heart attack, stroke, cancer, pneumonia and/or dementia.

Dr. Verburgh devotes the last third of The Longevity Code to what we can do to slow down the process of aging. He concludes by introducing and assessing the wide range of cutting-edge developments in anti-aging technology, the stuff once only of science fiction: new types of vaccines, and the use of mitochondrial DNA, CRISPR proteins, stem cells, and more.

In the course of researching and writing my own book The Abolition of Aging, I read dozens of different books on broadly similar topics. (For a partial list, scan the online copy of the Endnotes for that book.)

However, I found The Longevity Code to address a number of issues in ways that were particularly compelling and engaging:

  1. Persuasive advice on how to modify diet and lifestyle, now, in order to increase your likelihood to remain healthy long enough to benefit from forthcoming rejuvenation therapies (therapies which Verburgh lists as “Step 4” of a four-stage “longevity staircase”)
  2. A compelling analysis of different “theories of aging”, in Chapter 1 of his book, including the implications of the notably different lifespans of various animals that seem on first sight to have similar biology
  3. A down-to-earth matter-of-fact analysis, in Chapter 4 of his book, on the desirability of living longer lives.

The first of these points is an area where I have often struggled, in the Q&A portions of my own presentations on The Abolition of Aging, to give satisfactory answers to audience questions. I now have better answers to offer!

Allowable weakness

One “allowable weakness” of the book is that the author repeats himself on occasion – especially when it comes to making recommendations on diet and food supplements. I say this is “allowable” because his messages deserve repetition, in a world where there is an abundance of apparent expert dietary advice that is, alas, confusing, contradictory, and often compromised (due to the influence of vested interests – as Verburgh documents).

Table of Contents

The table of contents gives a good idea of what the book contains:

  1. Why do we age?
    • Making room?
    • Dying before growing old
    • Young and healthy, old and sick
    • Sex and aging
  2. What causes aging?
    • Proteins
    • Carbohydrates
    • Fats
    • Our energy generators and their role in life, death, and aging
    • Shoelaces and string
    • Other causes, and conclusion
  3. The longevity staircase
    • Avoid deficiencies
    • Stimulate hormesis
    • Reduce growth stimulation
    • Reverse the aging process
  4. Some thoughts about aging, longevity, and immortality
    • Do we really want to grow that old?
    • A new society?
  5. Recipes
  6. Afterword

About Kris Verburgh

You can read more about the author on the bio page of his website. Here’s a brief extract:

Kris Verburgh (born 1986) graduated magna cum laude as a medical doctor from the University of Antwerp, Belgium.

Dr. Verburgh is a researcher at the Center Leo Apostel for Interdisciplinary Studies (CLEA) at the Free University Brussels (VUB) and a member of the Evolution, Complexity and Cognition group at the Free University of Brussels.

Dr. Verburgh’s fields of research are aging, nutrition, metabolism, preventive medicine and health. In this context, he created a new scientific discipline, called ‘nutrigerontology‘, which studies the impact of nutrition on the aging process and aging-related diseases.

Additionally, he has a profound interest in new technologies that will disrupt medicine, health(care) and our lifespans. He follows the new trends and paradigm shifts in medicine and biotechnology, and how they are impacted by the fourth industrial revolution.

Verburgh wrote his first science book when he was 16 years old. By the age of 25, he had written three science books.

Dr. Verburgh gives talks on new developments and paradigm shifts in medicine, healthcare and the science of aging. He gave lectures for the European Parliament, Google, Singularity University, various academic institutes, organizations and international companies.

And I’d be delighted to host him at London Futurists, when schedules allow!
