
23 February 2023

Nuclear-level catastrophe: four responses

36% of respondents agree that it is plausible that AI could produce catastrophic outcomes in this century, on the level of all-out nuclear war.

That’s 36% of a rather special group of people. To take part in the survey, respondents needed to be a named author on at least two papers published in the last three years in accredited journals in the field of Computational Linguistics (CL) – the field sometimes also known as NLP (Natural Language Processing).

The survey took place in May and June 2022. 327 complete responses were received from people matching the criteria.

A full report on this survey (31 pages) is available here (PDF).

Here’s a screenshot from page 10 of the report, illustrating the answers to questions about Artificial General Intelligence (AGI):

You can see the responses to question 3-4. 36% of the respondents either “agreed” or “weakly agreed” with the statement that

It is plausible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.

That statistic is a useful backdrop to discussions stirred up in the last few days by a video interview given by polymath autodidact and long-time AGI risk researcher Eliezer Yudkowsky:

The publishers of that video chose the eye-catching title “we’re all gonna die”.

If you don’t want to spend 90 minutes watching that video – or if you are personally alienated by Eliezer’s communication style – here’s a useful Twitter thread summary by Liron Shapira:

In contrast to the question posed in the NLP survey I mentioned earlier, Eliezer isn’t thinking about “outcomes of AGI in this century”. His timescales are much shorter. His “ballpark estimate” for the time before AGI arrives is “3-15 years”.

How are people reacting to this sombre prediction?

More generally, what responses are there to the statistic that, as quoted above,

36% of respondents agree that it is plausible that AI could produce catastrophic outcomes in this century, on the level of all-out nuclear war.

I’ve seen a lot of different reactions. They break down into four groups: denial, sabotage, trust, and hustle.

1. Denial

One example of denial is this claim: We’re nowhere near understanding the magic of human minds. Therefore there’s no chance that engineers are going to duplicate that magic in artificial systems.

I have two counters:

  1. The risks of AGI arise not because an AI may somehow become sentient and take on the unpleasant aspects of alpha-male human nature. Rather, the risks arise from systems that operate beyond our understanding and outside our control, and which may end up pursuing objectives different from the ones we thought (or wished) we had programmed into them.
  2. Many systems have been created over the decades without the underlying science being fully understood. Steam engines predated the laws of thermodynamics. More recently, LLMs (Large Language Model AIs) have demonstrated aspects of intelligence that the designers of these systems had not anticipated. In the same way, AIs with some extra features may unexpectedly tip over into greater general intelligence.

Another example of denial: Some very smart people say they don’t believe that AGI poses risks. Therefore we don’t need to pay any more attention to this stupid idea.

My counters:

  1. The mere fact that someone very smart asserts an idea – likely outside of their own field of special expertise – does not confirm the idea is correct
  2. None of these purported objections to the possibility of AGI risk holds water (for a longer discussion, see my book The Singularity Principles).

Digging further into various online discussion threads, I formed the impression that what motivated much of the denial was a terrible fear. The people loudly proclaiming their denial were trying to cope with depression. The thought of potential human extinction within just 3-15 years was simply too dreadful for them to contemplate.

It’s similar to how people sometimes cope with the death of someone dear to them. There’s a chance my dear friend has now been reunited in an afterlife with their beloved grandparents, they whisper to themselves. Or, It’s sweet and honourable to die for your country: this death was a glorious sacrifice. And then woe betide any uppity humanist who dares to suggest there is no afterlife, or that patriotism is the last refuge of a scoundrel!

Likewise, woe betide any uppity AI risk researcher who dares to suggest that AGI might not be so benign after all! Deny! Deny!! Deny!!!

(For more on this line of thinking, see my short chapter “The Denial of the Singularity” in The Singularity Principles.)

A different motivation for denial is the belief that any sufficient “cure” for the risk of AGI catastrophe would be worse than the risk it was trying to address. This line of thinking goes as follows:

  • A solution to AGI risk will involve pervasive monitoring and widespread restrictions
  • That monitoring and restrictions will only be possible if an autocratic world government is put in place
  • Any autocratic world government would be absolutely terrible
  • Therefore, the risk of AGI can’t be that bad after all.

I’ll come back later to the flaws in that particular argument. (In the meantime, see if you can spot what’s wrong.)

2. Sabotage

In the video interview, Eliezer made one suggestion for avoiding AGI catastrophe: Destroy all the GPU server farms.

These vast collections of GPUs (a special kind of computing chip) are what enables the training of many types of AI. If these chips were all put out of action, it would delay the arrival of AGI, giving humanity more time to work out a better solution to coexisting with AGI.

Another suggestion Eliezer makes is that the superbright people who are currently working flat out to increase the capabilities of their AI systems should be paid large amounts of money to do nothing. They could lounge about on a beach all day, and still earn more money than they are currently receiving from OpenAI, DeepMind, or whoever is employing them. Once again, that would slow down the emergence of AGI, and buy humanity more time.

I’ve seen other similar suggestions online, which I won’t repeat here, since they come close to acts of terrorism.

What all these suggestions have in common is this: let’s find ways to stop the development of AI in its tracks, all across the world. Companies should be stopped in their tracks. Shadowy military research groups should be stopped in their tracks. Open source hackers should be stopped in their tracks. North Korean ransomware hackers must be stopped in their tracks.

This isn’t just a suggestion that specific AI developments should be halted, namely those with an explicit target of creating AGI. Instead, it recognises that the creation of AGI might occur via unexpected routes. Improving the performance of various narrow AI systems – fact-checking, emotion recognition, online request interchange marketplaces – any of these might push a collection of AI modules over the critical threshold. Mixing metaphors, AI could go nuclear.

Shutting down all these research activities seems a very tall order. Especially since many of the people who are currently working flat out to increase AI capabilities are motivated, not by money, but by the vision that better AI could do a tremendous amount of good in the world: curing cancer, solving nuclear fusion, improving agriculture by leaps and bounds, and so on. They’re not going to be easy to persuade to change course. For them, there’s a lot more at stake than money.

I have more to say about the question “To AGI or not AGI” in this chapter. In short, I’m deeply sceptical that development can be halted in this way.

In response, a would-be saboteur may admit that their chances of success are low. But what do you suggest instead, they will ask.

Read on.

3. Trust

Let’s start again from the statistic that 36% of the NLP survey respondents agreed, with varying degrees of confidence, that advanced AI could trigger a catastrophe as bad as an all-out nuclear war some time this century.

It’s a pity that the question wasn’t asked with shorter timescales. Comparing the chances of an AI-induced global catastrophe in the next 15 years with one in the next 85 years:

  • The longer timescale makes it more likely that AGI will be developed
  • The shorter timescale makes it more likely that AGI safety research will still be at a primitive (deeply ineffective) level.
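To see how much the timescale matters, here’s a toy calculation of my own – an assumption for illustration only, not anything from the survey. If the annual risk of such a catastrophe were constant, the cumulative risk compounds year on year:

```python
# Toy illustration (my own assumption, not from the survey): with a constant
# annual probability p of an AI-induced catastrophe, the chance of at least
# one catastrophe within n years is 1 - (1 - p)**n.

def cumulative_risk(annual_p: float, years: int) -> float:
    """Probability of at least one event within `years` years, given a
    constant, independent annual probability `annual_p`."""
    return 1 - (1 - annual_p) ** years

ANNUAL_P = 0.007  # hypothetical 0.7% annual risk, chosen purely for illustration

print(f"15-year risk: {cumulative_risk(ANNUAL_P, 15):.2f}")  # ~0.10
print(f"85-year risk: {cumulative_risk(ANNUAL_P, 85):.2f}")  # ~0.45
```

On that (entirely hypothetical) constant-hazard assumption, the same underlying risk that compounds to roughly 1/10 over 15 years compounds to roughly 45% over the rest of the century. That’s why the timescale in the survey question matters so much.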

Even in the short time since the survey took place – May and June 2022 – many forecasters have shortened their estimates of the likely timeline to the arrival of AGI.

So, for the sake of the argument, let’s suppose that the risk of an AI-induced global catastrophe happening by 2038 (15 years from now) is 1/10.

There are two ways to react to this:

  • 1/10 is fine odds. I feel lucky. What’s more, there are plenty of reasons why we ought to feel lucky.
  • 1/10 is terrible odds. That’s far too high a risk to accept. We need to hustle to find ways to change these odds in our favour.

I’ll come to the hustle response in a moment. But let’s first consider the trust response.

A good example is in this comment from SingularityNET founder and CEO Ben Goertzel:

Eliezer is a very serious thinker on these matters and was the core source of most of the ideas in Nick Bostrom’s influential book Superintelligence. But ever since I met him, and first debated these issues with him, back in 2000, I have felt he had a somewhat narrow view of humanity and the universe in general.

There are currents of love and wisdom in our world that he is not considering and seems to be mostly unaware of, and that we can tap into by creating self-reflective compassionate AGIs and doing good loving works together with them.

In short, rather than fearing humanity, we should learn to trust humanity. Rather than fearing what AGI will do, we should trust that AGI can do wonderful things.

You can find a much longer version of Ben’s views in the review he wrote back in 2015 of Superintelligence. It’s well worth reading.

What are the grounds for hope? Humanity has come through major challenges in the past. Even though the scale of the challenge is more daunting on this occasion, there are also more people contributing ideas and inspiration than before. AI is more accessible than nuclear weapons, which increases the danger level, but AI could also be deployed as part of the solution, rather than just being a threat.

Another idea is that if an AI looks around for data teaching it which values to respect and uphold, it will find plenty of positive examples in great human literature. OK, that literature also includes lots of treachery, and different moral codes often conflict, but a wise AGI should be able to see through all these complications to discern the importance of defending human flourishing. OK, much of AI training at the moment focuses on deception, manipulation, enticement, and surveillance, but, again, we can hope that a wise AGI will set aside those nastier aspects of human behaviour. Rather than aping trolls or clickbait, we can hope that AGI will echo the better angels of human nature.

It’s also possible that, just as DeepMind’s AlphaGo Zero worked out by itself, without any human input, superior strategies at the board games Go and Chess, a future AI might work out, by itself, the principles of universal morality. (That’s assuming such principles exist.)

We would still have to hope, in such a case, that the AI that worked out the principles of universal morality would decide to follow these principles, rather than having some alternative (alien) ways of thinking.

But surely hope is better than despair?

To quote Ben Goertzel again:

Despondence is unwarranted and unproductive. We need to focus on optimistically maximizing odds of a wildly beneficial Singularity together.   

My view is the same as expressed by Berkeley professor of AI Stuart Russell, in part of a lengthy exchange with Steven Pinker on the subject of AGI risks:

The meta argument is that if we don’t talk about the failure modes, we won’t be able to address them…

Just like in nuclear safety, it’s not against the rules to raise possible failure modes like, what if this molten sodium that you’re proposing should flow around all these pipes? What if it ever came into contact with the water that’s on the turbine side of the system? Wouldn’t you have a massive explosion which could rip off the containment and so on? That’s not exactly what happened in Chernobyl, but not so dissimilar…

The idea that we could solve that problem without even mentioning it, without even talking about it and without even pointing out why it’s difficult and why it’s important, that’s not the culture of safety. That’s sort of more like the culture of the communist party committee in Chernobyl, that simply continued to assert that nothing bad was happening.

(By the way, my sympathies in that long discussion, when it comes to AGI risk, are approximately 100.0% with Russell and approximately 0.0% with Pinker.)

4. Hustle

The story so far:

  • The risks are real (though estimates of their probability vary)
  • Some possible “solutions” to the risks might produce results that are, by some calculations, worse than letting AGI take its own course
  • If we want to improve our odds of survival – and, indeed, for humanity to reach something like a sustainable superabundance with the assistance of advanced AIs – we need to be able to take a clear, candid view of the risks facing us
  • Being naïve about the dangers we face is unlikely to be the best way forward
  • Since time may be short, the time to press for better answers is now
  • We shouldn’t despair. We should hustle.

Some ways in which research could generate useful new insight relatively quickly:

  • When the NLP survey respondents expressed their views, what reasons did they have for disagreeing with the statement? And what reasons did they have for agreeing with it? And how do these reasons stand up in the cold light of a clear analysis? (In other words, rather than a one-time survey, an iterative Delphi survey – sketched after this list – should lead to deeper understanding.)
  • Why have the various AI safety initiatives formed in the wake of the Puerto Rico and Asilomar conferences of 2015 and 2017 fallen so far short of expectations?
  • Which descriptions of potential catastrophic AI failure modes are most likely to change the minds of those critics who currently like to shrug off failure scenarios as “unrealistic” or “Hollywood fantasy”?
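To make the Delphi suggestion above more concrete, here is a minimal sketch – my own illustration, since the post doesn’t specify a procedure – of how panellists’ probability estimates could be summarised and fed back between rounds:

```python
# Minimal sketch of Delphi-style iteration (my own illustration): panellists
# submit probability estimates plus reasons, receive an anonymised summary,
# then revise. Rounds repeat while the spread of estimates keeps narrowing.

from statistics import median, pstdev

def summarise_round(estimates: list[float]) -> dict:
    """Summary statistics to feed back to the panel after each round."""
    return {
        "median": median(estimates),
        "spread": pstdev(estimates),
        "low": min(estimates),
        "high": max(estimates),
    }

round_1 = [0.05, 0.10, 0.40, 0.02, 0.25]  # hypothetical initial estimates
print(summarise_round(round_1))

# After reading each other's (anonymised) reasons, panellists revise:
round_2 = [0.08, 0.10, 0.30, 0.05, 0.20]
print(summarise_round(round_2))
```

The aim of the iteration is not unanimity, but to discover which reasons behind the outlying estimates survive scrutiny.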

Constructively, I invite conversation on the strengths and weaknesses of the 21 Singularity Principles that I have suggested as contributing to improving the chances of beneficial AGI outcomes.

For example:

  • Can we identify “middle ways” that include important elements of global monitoring and auditing of AI systems, without collapsing into autocratic global government?
  • Can we improve the interpretability and explainability of advanced AI systems (perhaps with the help of trusted narrow AI tools), to diminish the risks of these systems unexpectedly behaving in ways their designers failed to anticipate?
  • Can we deepen our understanding of the ways new capabilities “emerge” in advanced AI systems, with a particular focus on preventing the emergence of alternative goals?

I also believe we should explore more fully the possibility that an AGI will converge on a set of universal values, independent of whatever training we provide it – and, moreover, the possibility that these values will include upholding human flourishing.

And despite me saying just now that these values would be “independent of whatever training we provide”, is there, nevertheless, a way for us to tilt the landscape so that the AGI is more likely to reach and respect these conclusions?

Postscript

To join me in “camp hustle”, visit Future Surge, which is the activist wing of London Futurists.

If you’re interested in the ideas of my book The Singularity Principles, here’s a podcast episode in which Calum Chace and I discuss some of these ideas more fully.

In a subsequent episode of our podcast, Calum and I took another look at the same topics, this time with Millennium Project Executive Director Jerome Glenn: “Governing the transition to AGI”.

22 February 2022

Nine technoprogressive proposals


Ahead of time, I wasn’t sure the format was going to work.

It seemed to be an ambitious agenda. Twenty-five speakers were signed up to deliver short presentations. Each had agreed to limit their remarks to just four minutes. The occasion was an International Technoprogressive Conference that took place earlier today (22nd February), with themes including:

  • “To be human, today and tomorrow”
  • “Converging visions from many horizons”.

Image credit: this graphic includes work by Pixabay user Sasin Tipchai.

Each speaker had responded to a call to cover in their remarks either or both of the following:

  • Provide a brief summary of transhumanist-related activity in which they are involved
  • Make a proposal about “a concrete idea that could inspire positive and future-oriented people or organisations”.

Their proposals could address, for example, AI, enhancing human nature, equity and justice, accelerating science, existential risks, the Singularity, social and political angles, the governance of technology, superlongevity, superhappiness, or sustainable superabundance.

The speakers who provided concrete proposals were asked, ahead of the conference, to write down their proposal in 200 words or less, for distribution in a document to be shared among all attendees.

Attendees at the event – speakers and non-speakers alike – were asked to provide feedback on the proposals that had been presented, and to cast up to five votes among the different proposals.

I wondered whether we were trying to do too much, especially given the short amount of time spent in preparing for the event.

Happily, it all went pretty smoothly. A few speakers recorded videos of their remarks in advance, to be sure to keep to the allotted timespan. A small number of others were in the end unable to take part on the day, on account of last-minute schedule conflicts.

As for the presentations themselves, they were diverse – exactly as had been hoped by the organisers (l’Association Française Transhumaniste (Technoprog), with some support from London Futurists).

For example, I found it particularly interesting to hear about perspectives on transhumanism from Cameroon and Japan.

Reflecting the quality of all the presentations, audience votes were spread widely. Comments made by voters stressed, again and again, the difficulty of picking just five proposals to prioritise. Nevertheless, audience members accepted the challenge. Some people gave one vote each to five different proposals. Others split their votes 2, 2, and 1, or in other combinations. One person gave all five of their votes to a single proposal.

As for the outcome of the voting: I’m appending the text of the nine proposals that received the most votes. You’ll notice a number of common ideas, along with significant variety.

I’m presenting these nine proposals in alphabetical order of the first name of the proposers. I hope you find them interesting. If you find yourself inspired by what you read, please don’t hesitate to offer your own support to the projects described.

PS Big thanks are due to everyone who made this conference possible, especially the co-organisers, Didier Coeurnelle and Marc Roux.

Longevity: Opportunities and Challenges

Proposed by Anastasiia Velikanova, project coordinator at Open Longevity

Why haven’t we achieved significant progress in the longevity field yet? Although about 17,000 biological articles with the word “aging” in the title are published yearly, we do not have any therapy that reliably prolongs life.

One reason is that there are no large-scale projects in the biology of aging, on the scale of the Human Genome Project or the Large Hadron Collider. All research is conducted separately, in academic institutions or startups, and is mostly closed. A company may start with a great idea, but it then keeps its investigations hidden, and the capabilities of its team alone are not enough to change the situation with aging globally.

Another reason is that the problem of aging is highly interdisciplinary. We need advanced mathematical models and AI algorithms to accumulate all research about molecular processes and identify critical genes or targets.

Most importantly, we transhumanists should unite and create an infrastructure that would allow the problem of aging to be solved on a large scale, attracting the best specialists from different fields.

An essential part of such an infrastructure is open databases. For example, our organization created Open Genes – the database of genes associated with aging, allowing the selection of combinatorial therapy against aging.

Vital Syllabus

Proposed by David Wood, Chair at London Futurists

Nearly every serious discussion about improving the future comes round to the need to improve education. In our age of multiple pressures, dizzying opportunities, daunting risks, and accelerating disruption, people in all walks of life need better access to information about the skills that are most important and the principles that matter most. Traditional education falls far short on these counts.

The Vital Syllabus project aims to collect and curate resources to assist students of all ages to acquire and deepen these skills, and to understand and embody the associated principles. To be included in the project, these resources must be free of charge, clear, engaging, and trustworthy – and must align with a transhumanist understanding.

A framework is already in place: 24 top-level syllabus areas, nearly 200 subareas, and an initial set of example videos. Please join this project to help fill out the syllabus quickly!

For information about how to help this project, see this FAQ page.

Longevity Art

Proposed by Elena Milova, Founder at LongevityArt

When we are discussing life extension, people most often refer to movies, animations, books, paintings, and other works of art. There they find the concepts and the role models that they can either follow or reject. Art has the potential to seed ideas in one’s mind that can then gradually grow and mature until they become part of a personal life philosophy. Also, since one function of art is to uncover, question, mock and challenge the status quo, art is one of the most appropriate media for spreading new ideas such as that of radical life extension.

I suggest that the community supports more art projects (movies, animations, books, paintings, digital artworks) by establishing foundations sponsoring the most valuable art projects.

Use longevity parties to do advocacy for more anti-aging research

Proposed by Felix Werth, Leader at Partei für Gesundheitsforschung

With the repair approach, we already know in principle how to defeat aging. To significantly increase our chance of being alive and healthy in 100 years, many more resources have to be put into the implementation of the repair approach. An efficient way to achieve this is to form single-issue longevity parties and run in elections. There are many people who would like to live longer, but for some reason don’t do anything about it. Running in elections can be very efficient advocacy, and gives people the option to support longevity research very easily, with their vote. If the governing parties see that they can get more votes on this issue, they will probably care about it more.

In 2015 I initiated a longevity party in Germany; since then, we have already participated in 14 elections and done a lot of advocacy, all with very few active members and very few resources. With a few more resources, much more advocacy could be done this way. I suggest that more people who want radical life extension in their lifetime form longevity parties in their own countries and run in elections. Growing the longevity movement faster is key to success.

Revive LEV: The Game on Life Extension

Proposed by Gennady Stolyarov, Chair at U.S. Transhumanist Party

I propose to resurrect a computer game on longevity escape velocity, LEV: The Game, which was previously attempted in 2014 and for which a working Alpha version had been created but had unfortunately been lost since that time.

In this game one plays the role of a character who, through various lifestyle choices and pursuit of rejuvenation treatments, strives to live to age 200. The U.S. Transhumanist Party has obtained the rights to continue game development as well as the previously developed graphical assets. The logic of the game has been redesigned to be turn-based; all that remains is to recruit the programming talent needed to implement the logic of the game into code. A game on longevity escape velocity can draw in a much larger audience to take interest in the life-extension movement and also illustrate how LEV will likely actually arrive – dispelling common misunderstandings and enabling more people to readily understand the transition to indefinite lifespans.

Implement optimization and planning for your organization

Proposed by Ilia Stambler, Chair at Israeli Longevity Alliance

Often progressive, transhumanist and/or life-extensionist groups and associations are inefficient as organizations: they lack a clear and agreed vision, concrete goals and plans for the organization’s advancement, and a clear estimate of the available (as well as desirable) human and material resources necessary to achieve those goals and plans; and they do not track progress, performance and achievements toward the implementation of those goals. As a result, many groups act as discussion clubs at best, instead of as active and productive organizations, drifting aimlessly along with occasional activities, and so they can hardly be expected to bring about significant directional positive changes for the future.

Hence the general suggestion is to build up one’s own organizations through organizational optimization: to plan concretely, not so much in terms of what the organization “should do”, but rather what its specific members actually can do and plan to do in the shorter and longer term. I believe that, by increasing planning efficiency and organizational optimization in existing and emerging organizations, a much stronger impact can be made. (The suggestion is general, but particular organizations may consider whether it applies to them, and act according to their particular circumstances.)

Campaign for the Longevity Dividend

Proposed by James Hughes, Executive Director at the IEET

The most popular goal of the technoprogressive and futurist community is universal access to safe and effective longevity therapies. There are three things our community can do to advance this agenda:

  1. First, we need to engage with demographic, medical and policy issues that surround longevity therapies, from the old-age dependency ratio and pension crisis to biomarkers of aging and defining aging as a disease process.
  2. Second, we need to directly argue for public financing of research, a rational clinical trial pathway, and access to these therapies through public health insurance.
  3. Third, we need to identify the existing organizations with similar or related goals, and establish coalitions with them to work for the necessary legislation.

These projects can build on existing efforts, such as International Longevity Alliance, Ending Aging Media Response and the Global Healthspan Policy Institute.

Prioritise moral enhancement

Proposed by Marc Roux, Chair at the French Transhumanist Association (AFT-Technoprog)

As our efforts to attract funding and researchers to longevity have begun to bear fruit, we need to do much more to popularise moral enhancement.

Ageing is not yet defeated. However, longevity has already found powerful champions in the decision-making spheres. Mentalities are slowly changing, but the battle for longevity is underway.

Our vanguard can begin to turn to another great goal.

Longevity will not be enough to improve the level of happiness and harmony in our societies. History has shown that it doesn’t change the predisposition of humans to dominance, xenophobia, and aggressiveness… They remain stuck in their prehistoric mould, which condemns them to repeat the same mistakes. If we don’t allow humans to change these behavioural predeterminations, nothing essential will change.

We must prioritise cognitive sciences, and ensure that this is done in the direction of greater choice for everyone, access for all to an improvement in their mental condition, and an orientation towards greater solidarity.

And we’ll work to prevent the cognitive sciences from continuing to be put at the service of freedom-destroying logics of control and domination.

On this condition, moral enhancement can be an unprecedented good in the history of humanity.

Transhumanist Studies: Knowledge Accelerator

Proposed by Natasha Vita-More, Executive Director at Humanity+

Education is a crucial asset. Providing lifelong learning that is immediate, accessible and continually updated is key. Transhumanist Studies is an education platform designed to expand knowledge about how the world is transforming. Its Knowledge Accelerator curriculum examines the field of longevity, facts on aging, advances in AI, nanomedicine and cryonics, critical and creative thinking, relationships between humanity and the ecosystems of earth and space, the ethics of fairness, and applied foresight concerning opportunities and risks on the horizon.

Our methodology is applied foresight, with a learning model that offers three methods in its 50-25-25 curriculum:

  1. 50% immersive learning environment (lectures, presentations, and resources);
  2. 25% project-based iterative study; and
  3. 25% open-form discussion and debate (aligned with a Weekly Studies Group and monthly H+ Academy Roundtable).

In its initiative to advance transhumanism, the Knowledge Accelerator supports the benefits of secular values and impartiality. With a team located across continents, the program is free for some and low-cost for others. As the scope of transhumanism continues to grow, the culture is as extraordinary as its advocacy, integrity, and long-term vision.

Homepage: Transhumanist Studies (teachable.com). (I spoke on the need for education at TransVision 2021.)

7 February 2022

Options for controlling artificial superintelligence

What are the best options for controlling artificial superintelligence?

Should we confine it in some kind of box (or simulation), to prevent it from roaming freely over the Internet?

Should we hard-wire into its programming a deep respect for humanity?

Should we prevent it from having any sense of agency or ambition?

Should we ensure that, before it takes any action, it always double-checks its plans with human overseers?

Should we create dedicated “narrow” intelligence monitoring systems, to keep a vigilant eye on it?

Should we build in a self-destruct mechanism, just in case it stops responding to human requests?

Should we insist that it shares its greater intelligence with its human overseers (in effect turning them into cyborgs), to avoid humanity being left behind?

More drastically, should we simply prevent any such systems from coming into existence, by forbidding any research that could lead to artificial superintelligence?

Alternatively, should we give up on any attempt at control, and trust that the superintelligence will be thoughtful enough to always “do the right thing”?

Or is there a better solution?

If you have clear views on this question, I’d like to hear from you.

I’m looking for speakers for a forthcoming London Futurists online webinar dedicated to this topic.

I envision three speakers each taking up to 15 minutes to set out their proposals. Once all the proposals are on the table, the real discussion will begin – with the speakers interacting with each other, and responding to questions raised by the live audience.

The date for this event remains to be determined. I will find a date that is suitable for the speakers who have the most interesting ideas to present.

As I said, please get in touch if you have questions or suggestions about this event.

Image credit: the above graphic includes work by Pixabay user Geralt.

PS For some background, here’s a video recording of the London Futurists event from last Saturday, in which Roman Yampolskiy gave several reasons why control of artificial superintelligence will be deeply difficult.

For other useful background material, see the videos on the Singularity page of the Vital Syllabus project.

2 August 2021

Follow-ups from the future of Transhumanist Studies

Last Saturday’s London Futurists event experimented with the format.

After the by-now usual 90 minutes of speaker presentation and moderated Q&A, and a five-minute comfort break, the event transitioned into a new phase with informal on-camera audience discussion. Audience members who stayed on for this part of the meeting were all transformed from webinar viewers into panellists, and invited to add their voices into the discussion. Questions to seed the discussion were:

  • What did you particularly like about what you have heard?
  • What would you like to add into the discussion?
  • What might you suggest as a follow-up after the event?

The topic for the event as a whole was “The Future of Transhumanist Studies”. The speaker was Natasha Vita-More, the executive director of Humanity+. Natasha kindly agreed to stay on for the informal phase of the event and provided more insight in that phase too.

I’m appending, below, a copy of the video recording of the main part of the event. What I want to share now are my personal take-aways from the informal discussion phase. (That part wasn’t recorded, but I took notes.)

1. The importance of increments

Transhumanism has a vision of a significantly better future for humanity.

To be clear, it’s not a vision of some kind of perfection – some imagined state in which no change ever happens. Instead, it’s a vision of an open, dynamic journey forward. Max More has written eloquently about that point on many occasions over the years. See in particular the Principles of Extropy (v3.11) from 2003. Or this short summary from the chapter “True Transhumanism” in the 2011 book H+/-: Transhumanism and Its Critics:

Transhumanism is about continual improvement, not perfection or paradise.

Transhumanism is about improving nature’s mindless “design”, not guaranteeing perfect technological solutions.

Transhumanism is about morphological freedom, not mechanizing the body.

Transhumanism is about trying to shape fundamentally better futures, not predicting specific futures.

Transhumanism is about critical rationalism, not omniscient reason.

What arose during the discussion on Saturday were questions about possible incremental next steps along that envisioned journey.

In part, these were questions about what science and technology might be able to deliver in the next 2, 5, 10 years, and so on. It’s important to be able to speak in a credible manner about these possible developments, and to offer evidence supporting these forecasts.

But there were also questions about specific actions that transhumanists might be able to take in the coming months and years to improve public awareness of key transhumanist ideas.

One panellist posed the question as follows:

What are the immediate logical next steps across the Transhumanist agenda that could [achieve wider impact]?

The comment continued:

The problem I see with roadmaps generally… is that people always look at the end of the roadmap and think about the end point, not the incremental journey… People start planning around the final slide/item on the roadmap instead of buying into the bits in between while expecting everyone else to do the work to get us there. That usually results in people not buying the incremental steps which of course stifles progress.

That thought resonated with other participants. One added:

This is a crucial idea. A sense of urgency is hard to engender in long term issues.

I am reminded of the excellent analysis by Harvard Business School Professor John Kotter. Kotter has probably done more than anyone else to understand why change initiatives frequently fail – even when the people involved in these initiatives have many admirable qualities. Here are the eight reasons he identifies for change initiatives failing:

  1. Lack of a sufficient sense of urgency;
  2. Lack of an effective guiding coalition for the change (an aligned team with the ability to make things happen);
  3. Lack of a clear appealing vision of the outcome of the change (otherwise it may seem too vague, having too many unanswered questions);
  4. Lack of communication for buy-in, keeping the change in people’s mind (otherwise people will be distracted back to other issues);
  5. Lack of empowerment of the people who can implement the change (lack of skills, wrong organisational structure, wrong incentives, cumbersome bureaucracy);
  6. Lack of celebration of small early wins (failure to establish momentum);
  7. Lack of follow through (it may need wave after wave of change to stick);
  8. Lack of embedding the change at the cultural level (otherwise the next round of management reorgs can unravel the progress made).

Kotter’s positive suggestions for avoiding these failures can be summed up in a slide that I’ve used, in various forms, many times in my presentations over the years.

That brings me back to the topic of incremental change – envisioning it, communicating it, enabling it, and celebrating it. If that’s not done, any sense of urgency and momentum behind a change initiative is likely to falter and stall.

That’s why a credible roadmap of potential incremental changes is such an important tool.

Watch out for more news on that front soon.

2. Transhumanism becoming mainstream

Here’s another line of discussion from the informal conversation at the end of Saturday’s event.

Many members of the public, if they know about transhumanism at all, tend to see it as otherworldly. It’s the subject of science fiction, or something that might appear in eccentric video games. But it’s not seen as something relevant to the real world any time soon.

Or they might think of transhumanism as something for academics to debate, using abstract terminology such as post-modernism, post-humanism, and (yes) trans-humanism. Again, not something with any real-world implications.

To transhumanists, on the other hand, the subject is highly relevant. It’s relevant to the lives of individuals, as it covers treatments and methods that can be applied, here and now, to improve our wellbeing – physically, rationally, emotionally, and socially. It can also provide an uplifting vision that transforms our understanding of our own personal role in steering a forthcoming mega-disruption.

Moreover, transhumanism is relevant to the real-world problems that, understandably, cause a great deal of concern – problems about the environment, social interactions, economics and politics, and the runaway adoption of technology.

As Albert Einstein said in 1946, “a new type of thinking is essential if mankind is to survive and move to higher levels”.

My own view is that transhumanism is the “new type of thinking” that is, indeed, “essential” if we are to avoid the many dangerous landmines into which humanity currently risks sleepwalking.

That’s a core message of my recent book Vital Foresight: The Case For Active Transhumanism.

In that book, I emphasise that transhumanism isn’t some otherworldly idea in search of a question to answer. Instead, I introduce transhumanism as the solution to what I describe as eleven “landmines”.

Snippets of ideas about transhumanism are included in the early chapters of my book, but it’s not until Chapter 11 that I introduce the subject properly. That was a deliberate choice. I want to be clear that transhumanism can be seen as the emerging mainstream response to real-world issues and opportunities.

3. Academics who write about transhumanism

In some parts of the world, there are more people who study and write about transhumanism than who actively support transhumanist projects. That was another topic at the end of Saturday’s London Futurists event.

From my own reading, I recognise some of that academic work as being of high quality. For example, see the research of Professor Stefan Lorenz Sorgner from the History and Humanities department at John Cabot University in Rome. Sorgner featured in a London Futurists webinar a few months ago.

Another example of fine academic research into transhumanism is the 2018 PhD thesis of Elise Bohan of Macquarie University, Sydney, Australia: A History of Transhumanism.

On the other hand, there’s also a considerable amount of academic writing on transhumanism that is, frankly, of a shockingly poor quality. I stepped through some of that writing while preparing Chapter 12 of Vital Foresight – the chapter (“Antitheses”) where I evaluate criticisms of transhumanism.

What these critics often do is to imagine their own fantasy version of transhumanism, and then criticise it, with little anchoring to the actual transhumanist community. That is, they criticise “straw men” distortions of transhumanism.

In some cases, these critics latch onto individual statements of people loosely connected with transhumanism – for example, statements by the fictional character Jethro Knights in the novel The Transhumanist Wager – and wrongly assume that these statements are authoritative for the entire movement. (See here for my own review of The Transhumanist Wager.)

These critics often assert: “What transhumanists fail to consider is…” or “Transhumanists never raise the question that…” whereas, in fact, these very questions have been reviewed in depth, many times over, in transhumanist discussion lists.

From time to time, critics of transhumanism do raise some good points. I acknowledge a number of examples throughout Vital Foresight. What I want to consider now are the questions that were raised on Saturday:

  1. How can transhumanists keep on top of the seemingly growing number of academic articles about us?
  2. What is the best way to respond to the misunderstandings and distortions that we notice?
  3. As a good use for our time, how do interactions with these academics compare with trying to share transhumanist messages with more mainstream audiences?

To answer the third question first: ideas matter. Ideas can spread from initially obscure academic settings into wider contexts. Keeping an eye on these discussions could help us to address issues early.

Moreover, what we can surely find, in amongst the range of academic work that addresses transhumanism, are some really good expressions and thoughts that deserve prominence and attention. These thoughts might also cause us to have some “aha” realisations – about things we could, or should, start to do differently.

Flipping to the first question: many hands make light work. Rather than relying on a single person to review all academic mentions of transhumanism, more of us should become involved in that task.

When we find an article that deserves more attention – whether criticism or praise – we can add it into pages on H+Pedia (creating new pages if necessary).

The main event

Now you’ve read the afterthoughts, here’s a recording of the event itself. Enjoy!

1 March 2021

The imminence of artificial consciousness


I’ve changed my mind about consciousness.

I used to think that, of the two great problems about artificial minds – namely, achieving artificial general intelligence, and achieving artificial consciousness – progress toward the former would be faster than progress toward the latter.

After all, progress in understanding consciousness had seemed particularly slow, whereas enormous numbers of researchers in both academia and industry have been attaining breakthrough after breakthrough with new algorithms in artificial reasoning.

Over the decades, I’d read a number of books by Daniel Dennett and other philosophers who claimed to have shown that consciousness was basically already understood. There’s nothing spectacularly magical or esoteric about consciousness, Dennett maintained. What’s more, we must beware being misled by our own introspective understanding of our consciousness. That inner introspection is subject to distortions – perceptual illusions, akin to the visual illusions that often mislead us about what we think our eyes are seeing.

But I’d found myself at best semi-convinced by such accounts. I felt that, despite the clever analyses in such accounts, there was surely more to the story.

The most famous expression of the idea that consciousness still defied a proper understanding is the formulation by David Chalmers. This is from his watershed 1995 essay “Facing Up to the Problem of Consciousness”:

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect… There is something it is like to be a conscious organism. This subjective aspect is experience.

When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion?

It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

However, as Wikipedia notes,

The existence of a “hard problem” is controversial. It has been accepted by philosophers of mind such as Joseph Levine, Colin McGinn, and Ned Block and cognitive neuroscientists such as Francisco Varela, Giulio Tononi, and Christof Koch. However, its existence is disputed by philosophers of mind such as Daniel Dennett, Massimo Pigliucci, Thomas Metzinger, Patricia Churchland, and Keith Frankish, and cognitive neuroscientists such as Stanislas Dehaene, Bernard Baars, Anil Seth and Antonio Damasio.

With so many smart people apparently unable to agree, what hope is there for a layperson to have any confidence in answering the question: is consciousness already explained in principle, or do we need some fundamentally new insights?

It’s tempting to say, therefore, that the question should be left to one side. Instead of squandering energy spinning circles of ideas with little prospect of real progress, it would be better to concentrate on numerous practical questions: vaccines for pandemics, climate change, taking the sting out of psychological malware, protecting democracy against latent totalitarianism, and so on.

That practical orientation is the one that I have tried to follow most of the time. But there are four reasons, nevertheless, to keep returning to the question of understanding consciousness. A better understanding of consciousness might:

  1. Help provide therapists and counsellors with new methods to address the growing crisis of mental ill-health
  2. Change our attitudes towards the suffering we inflict, as a society, upon farm animals, fish, and other creatures
  3. Provide confidence on whether copying of memories and other patterns of brain activity, into some kind of silicon storage, could result at some future date in the resurrection of our consciousness – or whether any such reanimation would, instead, be “only a copy” of us
  4. Guide the ways in which systems of artificial intelligence are being created.

On that last point, consider the question whether AI systems will somehow automatically become conscious, as they gain in computational ability. Most AI researchers have been sceptical on that score. Google Maps is not conscious, despite all the profoundly clever things that it can do. Neither is your smartphone. As for the Internet as a whole, opinions are a bit more mixed, but again, the general consensus is that all the electronic processing happening on the Internet is devoid of the kind of subjective inner experience described by David Chalmers.

Yes, lots of software has elements of being self-aware. Such software contains models of itself. But it’s generally thought (and I agree, for what it’s worth) that such internal modelling is far short of subjective inner experience.

One prospect this raises is the dark possibility that humans might be superseded by AIs that are considerably more intelligent than us, but that such AIs would have “no-one at home”, that is, no inner consciousness. In that case, a universe with AIs instead of humans might have much more information processing, but be devoid of conscious feelings. Mega oops.

The discussion at this point is sometimes led astray by the popular notion that any threat from superintelligent AIs to human existence is predicated on these AIs “waking up” or becoming conscious. In that popular narrative, any such waking up might give an AI an additional incentive to preserve itself. Such an AI might adopt destructive human “alpha male” combative attitudes. But as I say, that’s a faulty line of reasoning. AIs might well be motivated to preserve themselves without ever gaining any consciousness. (Look up the concept of “basic AI drives” by Steve Omohundro.) Indeed, a cruise missile that locks onto a target poses a threat to that target, not because the missile is somehow conscious, but because it has enough intelligence to navigate to its target and explode on arrival.

Indeed, AIs can pose threats to people’s employment, without these AIs gaining consciousness. They can simulate emotions without having real internal emotions. They can create artistic masterpieces, using techniques such as GANs (Generative Adversarial Networks), without having any real psychological appreciation of the beauty of these works of art.

For these reasons, I’ve generally urged people to set aside the question of machine consciousness, and to focus instead on the question of machine intelligence. (For example, I presented that argument in Chapter 9 of my book Sustainable Superabundance.) The latter is tangible and poses increasing threats (and opportunities), whereas the former is a discussion that never seems to get off the ground.

But, as I mentioned at the start, I’ve changed my mind. I now think it’s possible we could have machines with synthetic consciousness well before we have machines with general intelligence.

What’s changed my mind is the book by Professor Mark Solms, The Hidden Spring: A Journey to the Source of Consciousness.

Solms is director of neuropsychology in the Neuroscience Institute of the University of Cape Town, honorary lecturer in neurosurgery at the Royal London Hospital School of Medicine, and an honorary fellow of the American College of Psychiatrists. He has spent his entire career investigating the mysteries of consciousness. He achieved renown within his profession for identifying the brain mechanisms of dreaming and for bringing psychoanalytic insights into modern neuroscience. And now his book The Hidden Spring is bringing him renown far beyond his profession. Here’s a selection of the praise it has received:

  • A remarkably bold fusion of ideas from psychoanalysis, psychology, and the frontiers of theoretical neuroscience, that takes aim at the biggest question there is. Solms will challenge your most basic beliefs.
    Matthew Cobb, author of The Idea of the Brain: The Past and Future of Neuroscience
  • At last the emperor has found some clothes! For decades, consciousness has been perceived as an epiphenomenon, little more than an illusion that can’t really make things happen. Solms takes a thrilling new approach to the problem, grounded in modern neurobiology but finding meaning in older ideas going back to Freud. This is an exciting book.
    Nick Lane, author of The Vital Question
  • To say this work is encyclopaedic is to diminish its poetic, psychological and theoretical achievement. This is required reading.
    Susie Orbach, author of In Therapy
  • Intriguing…There is plenty to provoke and fascinate along the way.
    Anil Seth, Times Higher Education
  • Solms’s efforts… have been truly pioneering. This unification is clearly the direction for the future.
    Eric Kandel, Nobel laureate for Physiology and Medicine
  • This treatment of consciousness and artificial sentience should be taken very seriously.
    Karl Friston, scientific director, Wellcome Trust Centre for Neuroimaging
  • Solms’s vital work has never ignored the lived, felt experience of human beings. His ideas look a lot like the future to me.
    Siri Hustvedt, author of The Blazing World
  • Nobody bewitched by these mysteries [of consciousness] can afford to ignore the solution proposed by Mark Solms… Fascinating, wide-ranging and heartfelt.
    Oliver Burkeman, Guardian
  • This is truly a remarkable book. It changes everything.
    Brian Eno

At times, I had to concentrate hard while listening to this book, rewinding the playback multiple times. That’s because the ideas kept sparking new lines of thought in my mind, which ran off in different directions as the narration continued. And although Solms explains his ideas in an engaging manner, I wanted to think through the deeper connections with the various fields that form part of the discussion – including psychoanalysis (Freud features heavily), thermodynamics (Helmholtz, Gibbs, and Friston), evolution, animal instincts, dreams, Bayesian statistics, perceptual illusions, and the philosophy of science.

Alongside the theoretical sections, the book contains plenty of case studies – from Solms’ own patients, and from other clinicians over the decades (actually centuries) – that illuminate the points being made. These studies involve people – or animals – with damage to parts of their brains. The unusual ways in which these subjects behave – and the unusual ways in which they express themselves – provide insight on how consciousness operates. Particularly remarkable are the children born with hydranencephaly – that is, without a cerebral cortex – but who nevertheless appear to experience feelings.

Having spent two weeks making my way through the first three quarters of the book, I took the time yesterday (Sunday) to listen to the final quarter, where several climaxes follow one after another – addressing at length the “Hard Problem” ideas of David Chalmers, and the possibility of artificial consciousness.

It’s challenging to summarise such a rich set of ideas in just a few paragraphs, but here are some components:

  • To understand consciousness, the subcortical brain stem (an ancient part of our anatomy) is at least as important as the cognitive architecture of the cortex
  • To understand consciousness, we need to pay attention to feelings as much as to memories and thought processing
  • Likewise, the chemistry of long-range neuromodulators is at least as important as the chemistry of short-range neurotransmitters
  • Consciousness arises from particular kinds of homeostatic systems which are separated from their environment by a partially permeable boundary: a structure known as a “Markov blanket”
  • These systems need to take actions to preserve their own existence, including creating an internal model of their external environment, monitoring differences between incoming sensory signals and what their model predicted these signals would be, and making adjustments so as to prevent these differences from escalating
  • Whereas a great deal of internal processing and decision-making can happen automatically, without conscious thought, some challenges transcend previous programming, and demand greater attention

In short, consciousness arises from particular forms of information processing. (Solms provides good reasons to reject the idea that there is a basic consciousness latent in all information, or, indeed, in all matter.) Whilst more work remains to be done to pin down the exact circumstances in which consciousness arises, this project is looking much more promising now than it did just a few years ago.
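
To make the mechanism in the list above more concrete, here is a minimal sketch, in Python, of the kind of loop being described: a system holds an internal prediction about a vital variable, compares noisy sensory signals against that prediction, and both updates its model and acts on the world to stop the prediction error from escalating. Every name and number below is an illustrative assumption of mine – this is a toy, not Solms’s or Friston’s actual formalism.

```python
import random

# A toy homeostat: one vital variable (body temperature), a noisy
# sensory channel crossing the "Markov blanket", and a loop that keeps
# prediction error small by both perception (updating the model) and
# action (changing the world towards the needed set point).

SET_POINT = 37.0       # the state the system must maintain to survive
LEARNING_RATE = 0.3    # how quickly the internal model tracks evidence
ACTION_GAIN = 0.5      # how strongly the system acts on the world

world_temp = 33.0            # the true state, outside the blanket
predicted_temp = SET_POINT   # the system's internal model of that state

for step in range(20):
    # Sensation: the true state leaks across the boundary, plus noise
    sensed = world_temp + random.gauss(0.0, 0.2)

    # Prediction error: mismatch between model and sensation
    error = sensed - predicted_temp

    # Perception: revise the internal model towards the evidence
    predicted_temp += LEARNING_RATE * error

    # Action: nudge the world so future sensations match the set point
    world_temp += ACTION_GAIN * (SET_POINT - sensed)

    print(f"step {step:2d}  world={world_temp:5.2f}  "
          f"model={predicted_temp:5.2f}  error={error:+5.2f}")
```

On Solms’s account, feeling enters the picture when a system like this has to arbitrate between multiple competing needs under uncertainty; the sketch shows only the bare error-correcting loop that sits underneath.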

This is no idle metaphysics. The ideas can in principle be tested by creating artificial systems that involve particular kinds of Markov blankets, uncertain environments that pose existential threats to the system, diverse categorical needs (akin to the multiple different needs of biologically conscious organisms), and layered feedback loops. Solms sets out a three-stage process whereby such systems could be built and evolved, in a relatively short number of years.

But wait. All kinds of questions arise. Perhaps the most pressing one is this: If such systems can be built, should we build them?

That “should we” question gets a lot of attention in the closing sections of the book. We might end up with AIs that are conscious slaves, in ways that we don’t have to worry about for our existing AIs. We might create AIs that feel pain beyond anything that any previous conscious being has ever experienced. Equally, we might create AIs that behave very differently from those without consciousness – AIs that are more unpredictable, more adaptable, more resourceful, more creative – and more dangerous.

Solms is doubtful about any global moratorium on such experiments. Now that the ideas are out of the bag, so to speak, there will be many people – in both academia and industry – who are motivated to do additional research in this field.

What next? That’s a question that I’ll be exploring this Saturday, 6th March, when Mark Solms will be speaking to London Futurists. The title of his presentation will be “Towards an artificial consciousness”.

For more details of what I expect will be a fascinating conversation – and to register to take part in the live question and answer portion of the event – follow the links here.

18 June 2020

Transhumanist alternatives to contempt and fear

Contempt and fear. These are the public reactions that various prominent politicians increasingly attract these days.

  • We feel contempt towards these politicians because they behave, far too often, in contemptible ways.
  • We feel fear regarding these politicians on account of the treacherous paths they appear to be taking us down.

That’s why many fans of envisioning and building a better world – including many technologists and entrepreneurs – would prefer to ignore politics, or to minimise its influence.

These critics of politics wish, instead, to keep their focus on creating remarkable new technology or on building vibrant new businesses.

Politics is messy and ugly, say these critics. It’s raucous and uncouth. It’s unproductive. Some would even say that politics is unnecessary. They look forward to politics reducing in size and influence.

Their preferred alternative to contempt and fear is to try to put the topic out of their minds.

I disagree. Putting our heads in the sand about politics is a gamble fraught with danger. Looking the other way won’t prevent our necks from being snapped when the axe falls. As bad outcomes increase from contemptible, treacherous politics, they will afflict everyone, everywhere.

We need a better alternative. Rather than distancing ourselves from the political sphere, we need to engage, intelligently and constructively.

As I’ll review below, technology can help us in that task.

Constructive engagement

Happily, as confirmed by positive examples from around the world, there’s no intrinsic reason for politics to be messy or ugly, raucous or uncouth.

Nor should politics be seen as some kind of unnecessary activity. It’s a core part of human life.

Indeed, politics arises wherever people gather together. Whenever we collectively decide the constraints we put on each other’s freedom, we’re taking part in politics.

Of course, this idea of putting constraints on each other’s freedoms is deeply unpopular in some circles. Liberty means liberty, comes the retort.

My answer is: things are more complicated. That’s for two reasons.

To start with, there are multiple kinds of freedom, each of which is important.

For example, consider the “four essential human freedoms” highlighted by US President FD Roosevelt in a speech in January 1941:

We look forward to a world founded upon four essential human freedoms.

The first is freedom of speech and expression – everywhere in the world.

The second is freedom of every person to worship God in their own way – everywhere in the world.

The third is freedom from want – which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants – everywhere in the world.

The fourth is freedom from fear – which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbour – anywhere in the world.

As well as caring about freeing people from constraints on their thoughts, speech, and actions, we generally also care about freeing people from hunger, disease, crime, and violence. Steps to loosen some of these constraints often risk decreasing other types of liberty. As I said, things are complicated.

The second reason builds on the previous point and makes it clearer why any proclamation “liberty means liberty” is overly simple. It is that our actions impact on each other’s wellbeing, both directly and indirectly.

  • If we speed in our cars, confident in our own ability to drive faster than the accepted norms, we risk seriously reducing the personal liberties of others if we suffer a momentary lapse in concentration.
  • If we share a hateful and misleading message on social media, confident in our own intellectual robustness, we might push someone reading that message over a psychological ledge.
  • If we discard waste products into the environment, confident that little additional harm will come from such pollution, we risk an unexpected accumulation of toxins and other harms.
  • If we grab whatever we can in the marketplace, confident that our own vigour and craftiness deserve a large reward, we could deprive others of the goods, services, and opportunities they need to enjoy a good quality of life.
  • If we publicise details of bugs in software that is widely used, or ways to increase the deadliness of biological pathogens, confident that our own reputation will rise as a result inside the peer groups we wish to impress, we risk enabling others to devastate the infrastructures upon which so much of life depends – electronic infrastructure and/or biological infrastructure.
  • If we create and distribute software that can generate mind-bending fake videos, we risk precipitating a meltdown in the arena of public discussion.
  • If we create and distribute software that can operate arsenals of weapons autonomously, freed from the constraints of having to consult slow-thinking human overseers before initiating an attack, we might gain lots of financial rewards, but at the risk of all manner of catastrophe from any defects in the design or implementation of that system.

In all these examples, there’s a case to agree some collective constraints on personal freedoms.

The rationale for imposing and accepting specific constraints on our freedom is to secure a state of affairs where overall freedom flourishes more fully. That’s a state of affairs in which we will all benefit.

In summary, greater liberty arises as a consequence of wise social coordination, rather than existing primarily as a reaction against such coordination. Selecting and enforcing social constraints is the first key task of politics.

Recognising and managing complexes

But who is the “we” who decides these constraints? And who will ensure that constraints put in place at one time, reflecting the needs of that time, are amended promptly when circumstances change, rather than remaining in place, disproportionately benefiting only a subset of society?

That brings us to a second key task of politics: preventing harmful dominance of society by self-interested groups of individuals – groups sometimes known as “complexes”.

This concept of the complex featured in the farewell speech made by President Eisenhower in January 1961. Eisenhower issued a profound warning that “the military industrial complex” posed a growing threat to America’s liberty and democracy:

In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.

We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted. Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defence with our peaceful methods and goals, so that security and liberty may prosper together.

As a distinguished former military general, Eisenhower spoke with evident authority on this topic:

Until the latest of our world conflicts, the United States had no armaments industry. American makers of ploughshares could, with time and as required, make swords as well. But now we can no longer risk emergency improvisation of national defence; we have been compelled to create a permanent armaments industry of vast proportions. Added to this, three and a half million men and women are directly engaged in the defence establishment. We annually spend on military security more than the net income of all United States corporations.

This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence – economic, political, even spiritual – is felt in every city, every Statehouse, every office of the Federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society.

It’s one thing to be aware of the risks posed by a military industrial complex (and the associated trade in armaments). It’s another thing to successfully manage these risks. Similar risks apply as well, for other vested interest “complexes” that can likewise subvert societal wellbeing:

  • A carbon energy complex, which earns huge profits from the ongoing use of carbon-based fuels, and which is motivated to minimise appreciation of the risks to climate from continuing use of these fuels
  • A financial complex, which (likewise) earns huge profits, by means of complicated derivative products that are designed to evade regulatory scrutiny whilst benefiting in cases of financial meltdown from government handouts to banks that are perceived as “too big to fail”
  • An information technology complex, which collects vast amounts of data about citizens, and which enables unprecedented surveillance, manipulation, and control of people by corporations and/or governments
  • A medical industrial complex, which is more interested in selling patients expensive medical treatment over a long period of time than in low-cost solutions which would prevent illnesses in the first place (or cure them quickly)
  • A political complex, which seeks above all else to retain its hold on political power, often by means of undermining a free press, an independent judiciary, and any credible democratic opposition.

You can probably think of other examples.

In all these cases, the practical goals of the complex are only weakly aligned with the goals of society as a whole. If society is not vigilant, the complex will subvert the better intentions of citizens. The complex is so powerful that it cannot be controlled by mere words of advocacy.

Beyond advocacy, we need effective politics. This politics can be supported by a number of vital principles:

  • Transparency: The operations of the various complexes need to be widely publicised and analysed, bringing them out of the shadows into the light of public understanding
  • Disclosure: Conflicts of interest must be made clear, to avoid the public being misled by individuals with ulterior motives
  • Accountability: Instances where key information is found to have been suppressed or distorted need to be treated very seriously, with the guilty parties having their reputations adjusted and their privileges diminished
  • Assessment of externalities: Evaluation systems should avoid focusing too narrowly on short-term metrics (such as financial profit) but should take into full account both positive and negative externalities – including new opportunities and new risks arising
  • Build bridges rather than walls: Potential conflicts should be handled by diplomacy, negotiation, and seeking a higher common purpose, rather than by driving people into antagonistic rival camps that increasingly bear hatred towards one another
  • Leanness: Decisions should focus on questions that matter most, rather than dictating matters where individual differences can easily be tolerated
  • Democratic oversight: People in leadership positions in society should be subject to regular assessment of their performance by a democratic review that involves a dynamic public debate aiming to reach a “convergent opinion” rather than an “average opinion”.

Critically, all the above principles can be assisted by smart adoption of technology that enhances collaboration. This includes wikis (or similar) that map out the landscape of decisions. This also includes automated logic-checkers, and dynamic modelling systems. And that’s just the start of how technology can help support a better politics.

Transhumanist approaches to politics

The view that technology can assist humans to carry out core parts of our lives better than before is part of the worldview known as transhumanism.

Transhumanism asserts, further, that the assistance available from technology, wisely applied, extends far beyond superficial changes. What lies within our grasp is a set of radical improvements in the human condition.

As explained in the short video “An Introduction to Transhumanism” – which, with over a quarter of a million views, is probably the most widely watched video on the subject – transhumanism is sometimes expressed in terms of the so-called “three supers”:

  • Super longevity: significantly improved physical health, including much longer lifespans – transcending human tendencies towards physical decay and decrepitude
  • Super intelligence: significantly improved thinking capability – transcending human tendencies towards mental blind spots and collective stupidity
  • Super wellbeing: significantly improved states of consciousness – transcending human tendencies towards depression, alienation, vicious emotions, and needless suffering.

My own advocacy of transhumanism actually emphasises one variant within the overall set of transhumanist philosophies. This is the variant known as technoprogressive transhumanism. The technoprogressive variant of transhumanism in effect adds one more “super” to the three already mentioned:

  • Super democracy: significantly improved social inclusion and resilience, whilst upholding diversity and liberty – transcending human tendencies towards tribalism, divisiveness, deception, and the abuse of power.

These radical improvements, by the way, can be brought about by a combination of changes at the level of individual humans, changes in our social structures, and changes in the prevailing sets of ideas (stories) that we tend to tell ourselves. Exactly what is the best combination of change initiatives, at these different levels, is something to be determined by a mix of thought and experiment.

Different transhumanists place their emphases upon different priorities for potential transformation.

If you’d like to listen in to that ongoing conversation, let me draw your attention to the London Futurists webinar taking place this Saturday – 20th of June – from 7pm UK time (BST).

In this webinar, four leading transhumanists will be discussing and contrasting their different views on the following questions (along with others that audience members raise in real time):

  • In a time of widespread anxiety about social unrest and perceived growing inequalities, what political approach is likely to ensure the greatest liberty?
  • In light of the greater insights provided by science into human psychology at both the individual and group levels, what are the threats to our wellbeing that most need to be guarded against, and which aspects of human character most need to be protected and uplifted?
  • What does the emerging philosophy of transhumanism, with its vision of conscious life evolving under thoughtful human control beyond the current human form, have to say about potential political interventions?

As you can see, the webinar is entitled “Politics for greater liberty: transhumanist perspectives”. The panellists are:

For more details, and to register to attend, click here.

Other views on the future of governance and the economy

If you’d like to hear a broader set of views on a related topic, then consider attending a Fast Future webinar taking place this Sunday – 21st June – from 6pm UK time (BST).

There will be four panellists in that webinar – one being me. We’ll each be presenting a snapshot of ideas from the chapters we contributed to the recent Fast Future book, Aftershocks and Opportunities – Scenarios for a Post-Pandemic Future, which was published on June 1st.

After the initial presentations, we’ll be responding to each other’s views, and answering audience questions.

My own topic in this webinar will be “More Aware, More Agile, More Alive”.

The other panellists, and their topics, will be:

  • Geoff Mulgan – “Using the Crisis to Remake Government for the Future”
  • Bronwyn Williams – “The Great Separation”
  • Rohit Talwar – “Post-Pandemic Government and the Economic Recovery Agenda: A Futurist Perspective”

I’m looking forward to a lively discussion!

Click here for more details of this event.

Transcending Politics

As I said above (twice), things are complicated. The science and engineering behind the various technological solutions are complicated. And the considerations about regulations and incentives, to constrain and guide our collective use of that technology, are complicated too. We should beware any overly simple claims about easy answers to these issues.

My fullest treatment of these issues is in a 423-page book of mine, Transcending Politics, that I published in 2018.

Over the last couple of weeks, I’ve been flicking through some of the pages of that book again. Although there are some parts where I would now wish to use a different form of expression, or some updated examples, I believe the material stands the test of time well.

If the content in this blogpost strikes you as interesting, why not take a closer look at that book? The book’s website contains opening extracts of each of the chapters, as well as an extended table of contents. I trust you’ll like it.

30 March 2020

Ending insecurity: from UBI to a Marshall Plan for the planet


On Thursday last week, 60 members and friends of London Futurists took part in an online Zoom webinar addressing the following questions:

  • Are regular payments to every citizen in the country an appropriate solution to the fragility that the coronavirus pandemic is exposing in our economy and social safety net?
  • When we consider potential additional crises that may boil over in the years ahead, does the case for UBI (universal basic income) strengthen or weaken?
  • What are the alternatives to UBI?

A video recording of the discussion (lightly edited) is now available:

As you can see, the event was not without glitches, but it went more smoothly than the previous online London Futurists event. We continue to live and learn!

Please find below various takeaways from the conversation that deserve wider attention. I’m providing these to help key lines of discussion continue.

(My apologies in advance in case my summaries unintentionally misrepresent what any participant said.)

From Phil Teer:

  • A basic “no strings attached” income would allow people to follow government advice to “stay at home”, confident that they won’t fall into poverty as a result
  • If basic income had already been in place, it would have allowed the country to go into lockdown quicker, without waiting for a specific financial support scheme to be devised first
  • At the moment, far too many people feel obliged to work, even when they are unwell – that’s a bad system, even in times when no pandemic is taking place
  • Many aspects of the current benefits system are designed to incentivise people to return to work as soon as possible. That’s not a purpose we need the system to serve at this moment
  • Basic income provides individual agency and choice: individuals being able to choose whether to protect themselves and their families by staying at home
  • Until now, technology for working remotely has been stuck on the wrong side of the adoption curve chasm, waiting for its big push into the mainstream; that push is now here
  • The crisis can accelerate adoption of automation – such as online supermarkets, and (already happening in China) unmanned supermarkets and drone deliveries
  • We can anticipate and welcome a shift in production from people to machines, bots, and algorithms
  • Companies big and small will question the need for their physical offices and premises
  • We should welcome these changes – they lay the basis for an “age of imagination” in which people will be valued more for their creativity than for their productivity

From Calum Chace:

  • The true challenge from technological unemployment is not meaning but income
  • Once the income problem is solved, a world of greater technological unemployment could bring in a second Renaissance
  • The forecasters who insist that people will always have paid work to do are pessimists
  • However, a “basic income” scheme that leaves recipients indefinitely poor (as implied by the word “basic”) will be a failure, and won’t survive
  • The economist John Kay: If UBI is high enough to be useful, it’s unaffordable. If it’s affordable, it’s not useful
  • That’s why every government so far, in responding to Covid-19, is implementing targeted policies rather than a regularly paid UBI
  • A better solution than UBI is “the economy of abundance” in which the goods and services needed for a very good standard of living are almost free
  • What will reduce the costs of these goods and services isn’t “Fully Automated Luxury Communism” but “Fully Automated Luxury Capitalism”
  • Taking inspiration from the abundant music delivery of Spotify, we need to develop what might be called Constructify and Clothify
  • This can happen by enlisting more and more advanced AI, driving down energy costs, and removing expensive humans from as many production processes as possible

From Barb Jacobson:

  • If a basic income scheme were already in place, we wouldn’t have seen such fear and panic as has taken place over the last few weeks
  • Around 500,000 people in the UK have recently applied for Universal Credit (unemployment benefit), but these claim processes are unlikely to be completed until June
  • A package just announced by the government for self-employed people will also not kick in until June
  • In contrast, basic income would be the fastest and easiest way to get money to everybody who needs it
  • There’s a scheme in one province in South Korea that has already issued what they call an “emergency income”, via a card based on people’s national insurance number
  • Instead of “universal basic income” we can think of these payments as being a “dividend” – a share in the economy
  • What we’re learning in this crisis is that many people who are necessary to the operation of society (including carers) are unpaid or low paid or have insecure income
  • In contrast, many people who are paid the most aren’t noticed when they’re no longer working
  • Therefore the crisis can allow a rejigging of how the world is viewed and how we collectively function in it
  • In the meantime, there are prospects in central London of riots and mass looting in the next few weeks: shopkeepers are already taking precautions

From Carin Ism:

  • Milton Friedman: “Only a crisis, actual or perceived, produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around”
  • “That is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable”
  • Our present social systems for producing and distributing surplus are by no means “the end of history”
  • We can’t know for sure what the full effects of basic income will be – although initial experiments are promising – so that’s an argument for continuing to experiment
  • All policy initiatives in response to the crisis are, likewise, experiments, if we are being honest. Staying with the status quo can no longer be defended
  • A bias against “something new” as being “irresponsible” is no longer tenable; everything in the current situation is unprecedented
  • Just as the public in each country have been watching how other countries have been changing health policies, they can now watch how other countries are changing social policies
  • When someone takes the leadership in effective economic and social policies, the whole world will be observing
  • The blossoming public conversation is going to highlight more clearly the injustices that have been in place for a long time without gaining proper attention before
  • The public will no longer accept a response in which banks get bail outs but essential workers just get applause

From Gennady Stolyarov:

  • A proposal that is currently receiving significant support from members of the US Transhumanist Party is “The United States Transhumanist Party supports an immediate, universal, unconditional basic income of at least $1000 per month to be provided to every United States citizen for the duration of the COVID-19 outbreak and its immediate aftermath, without regard for individuals’ means or other sources of income.”
  • “The priority for this program should be to prevent massive and irreparable economic disruptions to the lives of Americans in the wake of the COVID-19 epidemic.”
  • This stands in dramatic contrast from the stimulus package negotiated in the US Congress, which contains only a one-time monetary payment, with many restrictions on who can receive it
  • As a result there is a risk of class resentment and class warfare
  • Means-tested payments are administratively complex and will hinder getting relief to people as rapidly as possible
  • Universal payments that reach billionaires form only a tiny fraction of the overall expenditure; it’s far simpler to include these people than to create admin systems to exclude them (see the quick arithmetic sketch after this list)
  • There are means to fund a UBI other than raising taxes: consider the proposal by Zoltan Istvan (and developed by Johannon Ben Zion) of a “federal land dividend”
  • The western US, in particular, contains immense swathes of federally-owned land that is completely unused – barren scrub-land that could be leased out for a fee to corporations
  • That leasing would observe principles of environmental protection, and would allow use for agriculture, industry, construction, among other purposes
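
In passing, it’s easy to check the scale of that point about billionaires with some back-of-envelope arithmetic. The population and billionaire figures below are rough assumptions of mine; the $1,000 monthly payment comes from the quoted proposal:

```python
# Rough check: what fraction of a universal monthly payment would go
# to billionaires? Population and billionaire counts are approximate
# assumptions, not figures from the discussion.

US_POPULATION = 330_000_000
US_BILLIONAIRES = 700          # order of magnitude only
MONTHLY_PAYMENT = 1_000        # dollars, per the quoted proposal

total = US_POPULATION * MONTHLY_PAYMENT
to_billionaires = US_BILLIONAIRES * MONTHLY_PAYMENT

print(f"Total monthly outlay:      ${total:,}")
print(f"Of which to billionaires:  ${to_billionaires:,}"
      f"  ({to_billionaires / total:.5%} of the total)")
```

On these illustrative numbers, payments to billionaires come to roughly 0.0002% of the outlay – which is the scale of effect the point above relies on.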

From a second round of comments from the initial five panellists:

  • Arguably the real issue is whether we put people first, and hope the economy gets better, or put the economy first, and hope that people are OK
  • Putting people first, with a basic income, taps into the very thing that actually makes the economy work – the industry, imagination, and creativity of people
  • There is a strong psychological objection, from some, against any taxpayer money going to ultra-wealthy individuals – as would happen in a UBI
  • Economies are what makes it possible for all of us to have the goods and services we need to have a good life – so we cannot ignore the question of the health of the economy
  • The idea of paying for a social dividend from rents of public land dates back to the English radical Thomas Spence
  • By giving money to rich and poor alike, UBI would reinforce the important idea that everyone is a citizen of the same society
  • In the last forty years, the taxes on unearned income have fallen way below the taxes on working; fixing this would make it easier to afford a basic income
  • Why object to poor people receiving a basic income without working, when many rich people earn unearned income such as investment dividends or property rents without working?
  • Our monetary system is based on shared underlying stable beliefs; different ideas from radical thinkers threaten the stability of this belief system and could undermine it
  • The creation of lots of new money, to bail out companies, or to fund a basic income, is an example of an idea that could threaten the stability of the current financial system
  • Governance failures can have knock-on effects – for example, failures to regulate live animal markets can lead to the origination of new virus pandemics
  • For countries lacking sufficient unused land to fund a UBI via land lease, other public resources could fund a UBI – examples include the sovereign wealth fund in Norway
  • Economic progress, that would also help to fund a UBI, could be considerably accelerated by the application of technological innovation
  • This innovation is, unfortunately, too often constrained unnecessarily by the continued operation of outdated 20th century regulatory frameworks
  • Selected relaxation of regulations could enable lots of new economic activity that would make it easy to pay for a prosperous basic income for all

From Dean Bubley:

  • Rather than debating the long-term benefits of a UBI, we need to consider how any form of UBI could be implemented right now
  • How long would it take to implement either a one-off payment or a series of monthly payments, as an emergency solution, straight away?
  • The UK finance ministry seem to have looked hard at lots of different options, and speed of implementation has been part of what has guided their choices
  • Any new IT project to support a UBI would by no means be a “small IT project”
  • In the future, once AI is smart enough to put lots of humans out of work, it will surely also be smart enough to allocate income payments in ways beyond simple uniformity

From Rohit Talwar:

  • Most of his contacts in the banking industry are anticipating a forthcoming 30%-80% job reduction due to automation
  • Corporations are responding to the current crisis by reconsidering the work they are doing – how much is essential, and how many tasks currently done by humans could be automated
  • Nearly all his contacts in the AI world are overwhelmed by requests to undertake projects to transform how businesses operate
  • We should therefore expect the automation of many jobs as early as the next twelve months, and a big new lump of unemployment in the short term
  • A stop-gap measure will be urgently needed to respond to this additional unemployment, until such time as people can be retrained into the new industries that will arise

Further responses from the initial panellists:

  • Instead of looking at UBI as being a cost, we should look at it as being an investment
  • In 2008, the nation invested massively in saving the banks and the money system; we now need to invest in people throughout society
  • Basic income allocated to billionaires will be recovered straightforwardly by the tax system – there’s no need to obsess over these particular payments
  • It’s not just the “top 1%” who don’t need any basic income support; it’s more like the top 50%; that makes it more important that the system is targeted rather than universal
  • However, a lesson from working inside the benefits system is that “targeting doesn’t actually reach its targets”
  • In the UK, between 20% and 60% of people eligible for benefits do not receive them
  • Rather than debating conditions for payments being made, more focus should be put on levying appropriate taxes on corporate profits and unearned income such as inherited wealth
  • The issue isn’t so much one of inequality but one of insecurity
  • A two-phase approach can be pursued – an immediate response as a quick fix, followed by a larger process to come into operation later
  • The immediate quick fix payment can be covered by something Western governments do all the time, namely deficit spending
  • Even people who are not normally fond of deficit spending can appreciate the special circumstances of the current dire emergency, and make an exception
  • Any such deficit spending could be recovered in due course by the fruits of economic activity that is jump-started by the policy initiatives already mentioned

From Wendy Grossman:

  • Making a UBI conditional – for example, a decision to exclude the top 1% – would inevitably lead on to many other wrangles
  • For example, there could be arguments over constraints on how many children a woman is allowed to have, with each of them receiving the standard basic income
  • What about the moral hazard to companies: companies might feel little need to pay their employees much, if the employees are already receiving a UBI
  • Companies may feel able to behave in such a way, unless the UBI is high enough that employees are confident about walking away from poor salaries at work

From Tim Pendry:

  • We need to distinguish the acute short-term problem from the chronic longer-term problem. The acute problem is one of dealing with deflation
  • Our economies could be smashed by this crisis, not because of the 1% death rate, but because of our reactions to it. That’s why a UBI – or something like it – seems to be essential
  • But before a UBI can be adopted in the longer term, a number of problems need to be solved, including having sufficient productive capacity to sustain the economy
  • Major transformations of the economy are often accompanied by a great deal of pain and misery – consider the Industrial Revolution, and Stalin’s actions in the USSR
  • Separately, trades unionists may have legitimate concerns about UBI, since its introduction could be used as an excuse to unravel key pin-pointed elements of the welfare state
  • Not every recipient of UBI will respond positively to it, becoming more creative and lovely; some will behave as psychopaths or otherwise respond badly to money being thrown at them
  • As a possible worrying comparison, consider how “the mob” responded to unconditional hand-outs in ancient Rome
  • There’s a growing clash between privileged employees and precarious freelancers. The solution isn’t to make everyone employees. The solution is to make everyone freelancers with rights

Fourth round of panellist responses:

  • It would be helpful to explore the concept of “helicopter money” in which money is simply created and dropped into the economy on a few occasions, rather than an ongoing UBI
  • We should beware a false dichotomy between “people” and “economy”; the two are interdependent
  • Something that could change the lockdown conditions is if reliable antibody tests become available: that would allow more people to return to work and travel sooner
  • Even if antibody testing becomes available, there’s still a need for a stop-gap measure enabling people to pay for food, rent, and so on
  • Instead of UBI, perhaps the government could provide free UBS – universal basic services – paying fees to e.g. electricity companies on behalf of consumers
  • We need to combine pragmatic short-term considerations, with working out how to manage the larger longer-term societal shifts that are now increasingly realised as being possible
  • We need to anticipate potential new future crises, and work out coping mechanisms in advance, especially thinking about which unintended consequences might result

From Tony Czarnecki:

  • The Covid-19 crisis is likely to accelerate the advent of technological unemployment, due to greater use of robotics and a general surge in innovation
  • It’s possible there could as a result be 3-4 million additional unemployed within just one year – this will pose an even greater social crisis
  • Various reports created in 2016 calculated that the cost of a meaningful UBI could be as low as 3% of the GDP (the figure might be closer to 2% today)
  • This would provide an annual adult income of £5k, £2.5k for a child, and £8k for a pensioner
  • Of course this won’t yet support luxurious prosperity, but it’s a useful transitional step forward
  • One more question: do we really understand what lies behind the apparent reluctance of politicians to implement a UBI?

From Alexandria Black:

  • As an emergency solution, right now, payments could be made to special credit cards of individual citizens
  • These cards could also function as identifiers of someone’s Covid infection status
  • The cards will also assist in the vital task of contact tracing, to alert people of the spread of the virus
  • Another consequence of the Covid-19 crisis is an accelerating “crypto war” between China and the US
  • Measures which both countries seem to have been planning for some time may now be rushed out more quickly
  • Perhaps a “hackathon” investigation could be organised, to jump start a better understanding of the crypto war dimension of the current crisis

Final round of discussion points:

  • Rather than corporations potentially mistreating their employees who are receiving a UBI, the outcome could be the opposite: employees with a UBI will have more bargaining power
  • We’re all still at the learning stage of how best to organise ourselves as online digital citizens, able to bring about significant changes in social structures (such as a UBI)
  • The money supply isn’t fixed and static: if it wants to, the government can come up with more money
  • As a comparison, when governments go to war, they don’t ask how much it’s going to cost
  • UBI isn’t just about helping the poorest. It’s for everyone. It addresses the financial insecurity and precariousness that everyone can feel
  • UBI isn’t just for the people who have special creative talents. It’s for everyone, including people caring for family members or the elderly
  • Many who are wealthy keep working, for the sense of self-achievement from serving society via their business. Everyone deserves the opportunity to have that same sense of contribution
  • Today’s large companies may find clever ways to game any UBI system in their own favour – we’ll need to keep an eye on them throughout any transition to UBI
  • We should be open to having an unconditional universal payment being supplemented by conditional payments also available to everyone
  • The fundamental point is the ending of insecurity in society, rather than focusing on other topics such as redistribution or equality
  • Transhumanists like to talk about the goal of “Ending aging”; the goal of “Ending insecurity” belongs on that same list
  • Now is the perfect time to be starting the conversation about the bigger picture solution for the future, because if we don’t do it now, we’ll forget
  • As well as people representing civil society, key participants in such a conversation include the asset owners – the sovereign wealth funds and the big pension funds
  • The asset owners need to be on board, if governments are going to finance their new expenditure plans via debt
  • In effect we’re talking about a Marshall Plan for the planet.

For more details about the book project mentioned by Rohit Talwar at the end of the discussion, Aftershocks and Opportunities – Futurists Envision our Post-Pandemic Future, see here.

Thanks are especially due to all the panellists who spoke up during the event. This meetup page has more details of the event.

19 March 2020

Improving online events, for the sake of a better discussion of what truly matters

In a time of travel restrictions and operating from home, we’re all on a learning curve. There’s much for us to find out about alternatives to meeting in our usual physical locations.

London Futurists have been meeting in various physical locations for twelve years. We’ve also held a number of online gatherings over that time, using tools such as Google Hangouts on Air. But now the balance needs to shift. Given the growing Covid-19 lockdown, all London Futurists physical meetings are cancelled for the time being. While the lockdown continues, the group’s activities will be 100% online.

But what does this mean in practice?

I’d like to share some reflections from the first of this new wave of London Futurists events. That online gathering took place on Saturday, 14th March, using the meeting platform Zoom.

Hopefully my observations can help others to improve their own online events. Hopefully, too, readers of this blog will offer answers or suggestions in response to questions I raise.

Context: our event

Our event last Saturday was recorded, and the footage subsequently edited – removing, for example, parts where speakers needed to be told their microphones were muted. Here’s a copy of the resulting video:

By prior arrangement, five panellists gave short introductory talks, each lasting around 5-10 minutes, to set the stage for group discussion. Between 50 and 60 audience participants were logged into the event throughout. Some of them spoke up during the event; a larger number participated in an online text chat discussion that proceeded in parallel (there’s a lightly edited copy of the text discussion here).

As you can see from the recording, the panellists and the other participants raised lots of important points during the discussion. I’ll get back to these shortly, in another blogpost. But first, some thoughts about the tools and the process that were used for this event.

Context: Zoom

Zoom is available at a number of different price levels:

  • The “Free” level is restricted to meetings of up to 40 minutes.
  • The “Pro” level – which costs £11.99 per month – supports longer meetings (up to 24 hours), recording of events, and other elements of admin and user management. This is what I use at the moment.
  • I’ve not yet explored the more expensive versions.

Users participating in an event can turn their cameras on or off, and can share their screen (in order, for example, to present slides). Participants can also choose at any time to see a view of the video feeds from all participants (up to 25 on each page), or a “presenter view” that focuses on the person who Zoom detects as the speaker.

Recording can take place locally, on the host’s computer (and, if enabled by the organiser, on participants’ computers). Recording can also take place on the Zoom cloud. In this case, what is recorded (by default) is the “presenter view”.

The video recording can subsequently be downloaded and edited (using any video editing software – what I use is Cyberlink PowerDirector).

Limitations and improvements

I switched some time ago from Google Hangouts-on-Air (HoA) to Zoom, when Google reorganised their related software offerings during 2019.

One feature of the HoA software that I miss in Zoom is the ability for the host to temporarily “blue box” a participant, so that their screen remains highlighted, regardless of which video feeds contain speech or other noises. Without this option, what happens – as you can see from the recording of Saturday’s event – is that the presentation view can jump to display the video from a participant who is not speaking at that moment. For five seconds or so, the display shows the participant staring blankly at the camera, generally without realising that the focus is now on them. What made Zoom shift the focus is that it detected some noise from that video feed – perhaps a cough, a laugh, a moan, a chair sliding across the floor, or some background discussion.

(Participants in the event needn’t worry, however, about their blank stares or other inadvertent activity being contained in the final video. While editing the footage, I removed all such occurrences, covering up the displays, while leaving the main audio stream in place.)

In any case, participants should mute their microphones when not speaking. That avoids unwanted noise reaching the event. However, it’s easy for people to neglect to do so. For that reason, Zoom provides the host with admin control over which mics are on or off at any time. But the host may well be distracted too… so the solution is probably for me to enrol one or two participants with admin powers for the event, and ask them to keep an eye on any mics being left unmuted at the wrong times.

Another issue is the variable quality of the microphones participants were using. If the participant turns their head while speaking – for example, to consult some notes – it can make it hard to hear what they’re saying. A better solution here is to use a head-mounted microphone.

A related problem is occasional local bandwidth issues when a participant is speaking. Some or all of what they say may be obscured, slurred, or missed altogether. The broadband in my own house is a case in point. As it happens, I have an order in the queue to switch my house to a different broadband provider. But this switch is presently being delayed.

Deciding who speaks

When a topic is thought-provoking, there are generally lots of people with things to contribute to the discussion. Evidently, they can’t all talk at once. Selecting who speaks next – and deciding how long they can speak before they might need to be interrupted – is a key part of chairing successful meetings.

One guide to who should be invited to speak next, at any stage in a meeting, is the set of comments raised in the text chat window. However, in busy meetings, important points raised can become lost in the general flow of messages. Ideally, the meeting software will support a system of voting, so that other participants can indicate their choices of which questions are the most interesting. The questions that receive the most upvotes will become the next focus of the discussion.
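
As a sketch of what such a mechanism involves (this is my own illustration, not a feature of Zoom or of any specific platform), a question-voting system is essentially a priority queue keyed on vote counts:

```python
# A minimal question-voting queue: participants submit questions and
# upvote the ones they care about; the chair always takes the
# highest-voted question next.

class QuestionQueue:
    def __init__(self):
        self.votes = {}  # question text -> number of upvotes

    def submit(self, question):
        self.votes.setdefault(question, 0)

    def upvote(self, question):
        if question in self.votes:
            self.votes[question] += 1

    def next_question(self):
        if not self.votes:
            return None
        top = max(self.votes, key=self.votes.get)
        del self.votes[top]  # discussed questions leave the queue
        return top

q = QuestionQueue()
q.submit("How would a UBI be funded?")
q.submit("What about moral hazard for employers?")
q.upvote("What about moral hazard for employers?")
q.upvote("What about moral hazard for employers?")
q.upvote("How would a UBI be funded?")
print(q.next_question())  # -> "What about moral hazard for employers?"
```

The hard part, of course, is not the data structure but integrating it smoothly into the platform that participants are already using.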

London Futurists have used such software in the past, including Glisser and Slido, at our physical gatherings. For online events, ideally the question voting mechanism will be neatly integrated with the underlying platform.

I recently took part in one online event (organised by the Swiss futurist Gerd Leonhard) where the basic platform was Zoom and where there was a “Q&A” voting system for questions from the audience. However, I don’t see such a voting system in the Zoom interface that I use.

Added on 20th March

Apparently there’s a Webinar add-on for Zoom that provides better control of meetings, including the Q&A voting system. The additional cost of this add-on starts from £320 per annum. I’ll be looking into this further. See this feature comparison page.

Thanks to Joe Kay for drawing this to my attention!

Summarising key points

The video recording of our meeting on Saturday lasts nearly 100 minutes. To my mind, the discussion remained interesting throughout. However, inevitably, many potential viewers will hesitate before committing 100 minutes of their time to watch the entirety of that recording. Even if they watch the playback at an accelerated speed, they would probably still prefer access to some kind of edited highlights.

Creating edited highlights of recordings of London Futurists events has long been a “wish list” item for me. I can appreciate that there’s a particular skill to identifying which parts should be selected for inclusion in any such summary. I’ll welcome suggestions on how to do this!

Learning together

More than ever, what will determine our success or failure in coming to terms with the growing Covid-19 crisis is the extent to which positive collaboration and a proactive technoprogressive mindset can pull ahead of humanity’s more destructive characteristics.

That “race” was depicted on the cover of the ebook of essays published by London Futurists in June 2014, “Anticipating 2025”. Can we take advantage of our growing interconnectivity to spread, not dangerous pathogens or destructive “fake news”, but good insights about building a better future?

That was a theme that emerged time and again during our online event last Saturday.

I’ll draw this blogpost towards a close by sharing some excerpts from the opening chapter of Anticipating 2025.

Four overlapping trajectories

The time period up to 2025 can be considered as a race involving four overlapping trajectories: technology, crisis, collaboration, and mindset.

The first trajectory is the improvement of technology, with lots of very positive potential. The second, however, has lots of very negative potential: it is the growth in likelihood of societal crisis:

  • Stresses and strains in the environment, with increased climate chaos, and resulting disputes over responsibility and corrective action
  • Stresses and strains in the financial system, which share with the environment the characteristics of being highly complex, incompletely understood, weakly regulated, and subject to potential tipping points for fast-accelerating changes
  • Increasing alienation, from people who feel unable to share in the magnitude of the riches flaunted by the technologically fortunate; this factor is increased by the threats from technological unemployment and the fact that, whilst the mean household income continues to rise, the median household income is falling
  • Risks from what used to be called “weapons of mass destruction” – chemical, biological, or even nuclear weapons, along with cyber-weapons that could paralyse our electronics infrastructure; there are plenty of “angry young men” (and even angry middle-aged men) who seem ready to plunge what they see as a corrupt world into an apocalyptic judgement.

What will determine the outcome of this race, between technological improvement and growing risk of crises? It may be a third trajectory: the extent to which people around the world are able to collaborate, rather than compete. Will our tendencies to empathise, and to build a richer social whole, triumph over our equally deep tendencies to identify more closely with “people like us” and to seek the well-being of our “in-group” ahead of that of other groups?

In principle, we probably already have sufficient knowledge, spread around the world, to solve all the crises facing us, in a smooth manner that does not require any significant sacrifices. However, that knowledge is, as I said, spread – it does not cohere in just a single place. If only we knew what we knew. Nor does that knowledge hold universal assent – far from it. It is mocked and distorted and undermined by people who have vested interests in alternative explanations – with the vested interests varying among economic, political, ideological, and sometimes sheer human cussedness. In the absence of improved practical methods for collaboration, our innate tendencies to short-term expedience and point-scoring may rule the day – especially when compounded by an economic system that emphasises competition and “keeping up with the Joneses”.

Collaborative technologies such as Wikipedia and open-source software point the way to what should be possible. But they are unlikely to be sufficient, by themselves, to heal the divisions that tend to fragment human endeavours. This is where the fourth, and final, trajectory becomes increasingly important – the transformation of the philosophies and value systems that guide our actions.

If users are resolutely suspicious of technologies that would disturb key familiar aspects of “life as we know it”, engineers will face an uphill battle to secure sufficient funding to bring these technologies to the market – even if society would eventually end up significantly improved as a result.

Politicians generally take actions that reflect the views of the electorate, as expressed through public media, opinion polls, and (occasionally) in the ballot box. However, the electorate is subject to all manner of cognitive bias, prejudice, and continuing reliance on rules of thumb which made sense in previous times but which have been rendered suspect by changing circumstances. These viewpoints include:

  • Honest people should put in forty hours of work in meaningful employment each week
  • People should be rewarded for their workplace toil by being able to retire around the age of 65
  • Except for relatively peripheral matters, “natural methods” are generally the best ones
  • Attempts to redesign human nature – or otherwise to “play God” – will likely cause disaster
  • It’s a pointless delusion to think that the course of personal decay and death can be averted.

In some cases, long-entrenched viewpoints can be overturned by a demonstration that a new technology produces admirable results – as in the case of IVF (in-vitro fertilisation). But in other cases, minds need to be changed even before a full demonstration can become possible.

It’s for this reason that I see the discipline of “culture engineering” as being equally important as “technology engineering”. The ‘culture’ here refers to cultures of humans, not cells. The ‘engineering’ means developing and applying a set of skills – skills to change the set of prevailing ideas concerning the desirability of particular technological enhancements. Both technology engineering and culture engineering are deeply hard skills; both need a great deal of attention.

A core part of “culture engineering” fits under the name “marketing”. Some technologists bristle at the concept of marketing. They particularly dislike the notion that marketing can help inferior technology to triumph over superior technology. But in this context, what do “inferior” and “superior” mean? These judgements are relative to how well technology is meeting the dominant desires of people in the marketplace.

Marketing means selecting, understanding, inspiring, and meeting key needs of what can be called “influence targets” – namely, a set of “tipping point” consumers, developers, and partners. Specifically, marketing includes:

  • Forming a roadmap of deliverables, that build, step-by-step, to delivering something of great benefit to the influence targets, but which also provide, each step of the way, something with sufficient value to maintain their active interest
  • Astutely highlighting the ways in which present (and forthcoming) products will, indeed, provide value to the influence targets
  • Avoiding any actions which, despite the other good things that are happening, alienate the influence targets; and in the event any such alienation emerges, taking swift and decisive action to address it.

Culture engineering involves politics as well as marketing. Politics means building alliances that can collectively apply power to bring about changes in regulations, standards, subsidies, grants, and taxation. Choosing the right partners, and carefully managing relationships with them, can make a big difference to the effectiveness of political campaigns. To many technologists, “politics” is as dirty a word as “marketing”. But once again, mastery of the relevant skillset can make a huge difference to the adoption of technologies.

The final component of culture engineering is philosophy – sets of arguments about fundamentals and values. For example, will human flourishing happen more fully under simpler lifestyles, or by more fully embracing the radical possibilities of technology? Should people look to age-old religious traditions to guide their behaviour, or instead seek a modern, rational, scientific basis for morality? And how should the freedoms of individuals to experiment with potentially dangerous new kinds of lifestyle be balanced against the needs of society as a whole?

“Philosophy” is (you guessed it) yet another dirty word, in the minds of many technologists. To these technologists, philosophical arguments are wastes of time. Yet again, I will disagree. Unless we become good at philosophy – just as we need to become good at both politics and marketing – we will fail to rescue the prevailing culture from its unhelpful mix of hostility and apathy towards the truly remarkable potential to use technology to positively transcend human nature. And unless that change in mindset happens, the prospects are uncertain for the development and adoption of the remarkable technologies of abundance mentioned earlier.

[End of extract from Anticipating 2025.]

How well have we done?

On the one hand, the contents of the 2014 London Futurists book “Anticipating 2025” are prescient. These chapters highlight many issues and opportunities that have grown in importance in the intervening six years.

On the other hand, I was brought down to earth by an email reply I received last week to the latest London Futurists newsletter:

I’m wondering where the Futurism is in this reaction.

Maybe the group is more aptly Reactionism.

I wanted to splutter out an answer: the group (London Futurists) has done a great deal of forward thinking over the years. We have looked at numerous trends and systems, and considered possible scenarios arising from extrapolations and overlaps. We have worked hard to clarify, for these scenarios, the extent to which they are credible and desirable, and ways in which the outcomes can be influenced.

But on reflection, a more sober thought emerged. Yes, we futurists have been trying to alert the rest of society to our collective lack of preparedness for major risks and major opportunities ahead. We have discussed the insufficient resilience of modern social systems – their fragility and lack of sustainability.

But have our messages been heard?

The answer is: not really. That’s why Covid-19 is causing such a dislocation.

It’s tempting to complain that the population as a whole should have been listening to futurists. However, we can also ask, how should we futurists change the way we talk about our insights, so that people pay us more attention?

After all, there are many worse crises potentially just around the corner. Covid-19 is by no means the most dangerous new pathogen that could strike humanity. And there are many other types of risk to consider, including malware spreading out of control, the destruction of our electronics infrastructure by something similar to the 1859 Carrington Event, an acceleration of chaotic changes in weather and climate, and devastating wars triggered by weapons systems overseen by AI software whose inner logic no-one understands.

It’s not just a new mindset that humanity needs. It’s a better way to have discussions about fundamentals – discussions about what truly matters.

Footnote: with thanks

Special thanks are due to the people who boldly stepped forward at short notice as panellists for last Saturday’s event, and to everyone else who contributed to that discussion. I’m sorry there was no time to give sufficient attention to many of the key points raised. As I said at the end of the recording, this is a kind of cliffhanger.

14 May 2018

The key questions about UBIA

The first few times I heard about the notion of Universal Basic Income (UBI), I said to myself, that’s a pretty dumb idea.

Paying people without them doing any work is going to cause big problems for society, I thought. It’s going to encourage laziness, and discourage enterprise. Why should people work hard, if the fruits of their endeavour are taken away from them to be redistributed to people who can’t be bothered to work? It’s not fair. And it’s a recipe for social decay.

But since my first encounters with the idea of UBI, my understanding has evolved a long way. I have come to see the idea, not as dumb, but as highly important. Anyone seriously interested in the future of human society ought to keep abreast of the discussion about UBI:

  • What are the strengths and (yes) the weaknesses of UBI?
  • What alternatives could be considered, that have the strengths of UBI but avoid its weaknesses?
  • And, bearing in mind that the most valuable futurist scenarios typically involve the convergence (or clash) of several different trend analyses, what related ideas might transform our understanding of UBI?

For these reasons, I am hosting a day-long London Futurists event at Birkbeck College, Central London, on Saturday 2nd June, with the title “Universal Basic Income and/or Alternatives: 2018 update”.

The event is defined by the question,

What do we know, in June 2018, about Universal Basic Income and its alternatives (UBIA), that wasn’t known, or was less clear, just a few years ago?

The event website highlights various components of that question, which different speakers on the day will address:

  • What are the main risks and issues with the concept of UBIA?
  • How might the ideas of UBIA evolve in the years ahead?
  • If not a UBI, what alternatives might be considered, to meet the underlying requirements which have led many people to propose a UBI?
  • What can we learn from the previous and ongoing experiments in Basic Income?
  • What are the feasible systems (new or increased taxes, or other means) to pay for a UBIA?
  • What steps can be taken to make UBIA politically feasible?
  • What is a credible roadmap for going beyond a “basic” income towards enabling attainment of a “universal prosperity” by everyone?

As you can see from the event website, an impressive list of speakers has kindly agreed to take part. Here’s the schedule for the day:

09:30: Doors open
10:00: Chair’s welcome: The questions that deserve the most attention: David Wood
10:15: Opening keynote: Basic Income – Making it happen: Prof Guy Standing
11:00: Implications of Information Technology: Prof Joanna Bryson
11:30: Alternatives to UBI – Exploring the Possibilities: Rohit Talwar, Helena Calle and Steve Wells
12:15: Q&A involving all morning speakers
12:30: Break for lunch (lunch not provided)

14:00: Basic Income as a policy and a perspective: Barb Jacobson
14:30: Implications of Artificial Intelligence on UBIA: Tony Czarnecki
15:00: Approaching the Economic Singularity: Calum Chace
15:30: What have we learned? And what should we do next? David Wood
16:00-16:30: Closing panel involving all speakers
16:30: Event closes. Optional continuation of discussion in nearby pub

A dumb idea?

In the run-up to the UBIA 2018 event, I’ll make a number of blogposts anticipating some of the potential discussion on the day.

First, let me return to the question of whether UBI is a dumb idea. Viewing the topic from the angle of laziness vs. enterprise is only one possible perspective. As is often the case, changing perspective provides much-needed insight.

Instead, let’s consider the perspective of “social contract”. Reflect on the fact that society already provides money to people who aren’t doing any paid work. There are basic pension payments for everyone (so long as they are old enough), basic educational funding for everyone (so long as they are young enough), and basic healthcare provisions for people when they are ill (in most countries of the world).

These payments are part of what is called a “social contract”. There are two kinds of argument for having a social contract:

  1. Self-interested arguments: as individuals, we might need to take personal benefit of a social contract at some stage in the future, if we unexpectedly fall on hard times. What’s more, if we fail to look after the rest of society, the rest of society might feel aggrieved, and rise up against us, pitchforks (or worse) in hand.
  2. Human appreciation arguments: all people deserve basic stability in their life, and a social contract can play a significant part in providing such stability.

What’s harder, of course, is to agree which kind of social contract should be in place. Whole libraries of books have been written on that question.

UBI can be seen as fitting inside a modification of our social contract – part of what its supporters regard as an improved contract.

Note: although UBI is occasionally suggested as a replacement for the entirety of the current welfare system, it is more commonly (and, in my view, more sensibly) proposed as a replacement for only some of the current programmes.

Proponents of UBI point to two types of reason for including UBI as part of a new social contract:

  1. Timeless arguments – arguments that have been advanced in various ways by people throughout history, such as Thomas More (1516), Montesquieu (1748), Thomas Paine (1795), William Morris (1890), Bertrand Russell (1920), Erich Fromm (1955), Martin Luther King (1967), and Milton Friedman (1969)
  2. Time-linked arguments – arguments that foresee drastically changed circumstances in the relatively near future, which increase the importance of adopting a UBI.

Chief among the time-linked arguments is that the direct and indirect effects of profound technological change are likely to transform the work environment in unprecedented ways. Automation, powered by increasingly capable AI, may eat into more and more of the skills that we humans used to think were “uniquely human”. People who expected to earn money by doing various tasks may find themselves unemployable – robots will do these tasks more reliably, more cheaply, and with greater precision. People who spend time retraining themselves in anticipation of a new occupation may find that, over the same period, robots have acquired those same skills ahead of them.

That’s the argument for growing technological unemployment. It’s trendy to criticise this argument nowadays, but I find the criticisms to be weak. I won’t repeat all the ins and outs of that discussion now, since I’ve covered them at some length in Chapter 4 of my book Transcending Politics. (An audio version of this chapter is currently available to listen to, free of charge, here.)

A related consideration concerns, not technological unemployment, but technological underemployment. People may be able to find paid work, but that work pays considerably less than they expected. Alternatively, their jobs may have many rubbishy aspects. In the terminology of David Graeber, increasing numbers of jobs are “bullshit jobs”. (Graeber will be speaking on that very topic at the RSA this Thursday. At the time of writing, tickets are still available.)

Yet another related concept is that of the precariat – people whose jobs are precarious, since they have no guarantee of the number of hours of work they may receive in any one week. People in these positions would often prefer to be able to leave these jobs and spend a considerable period of time training for a different kind of work – or starting a new business, with all the risks and uncertainties entailed. If a UBI were available to them, it would give them the stability to undertake that personal voyage.

How quickly will technological unemployment and technological underemployment develop? How quickly will the proportion of bullshit jobs increase? How extensive and socially dangerous will the precariat become?

I don’t believe any futurist can provide crisp answers to these questions. There are too many unknowns involved. However, equally, I don’t believe anyone can say categorically that these changes won’t occur (or won’t occur any time soon). My personal recommendation is that society needs to anticipate the serious possibility of relatively rapid acceleration of these trends over the next couple of decades. I’d actually put the probability of a major acceleration in these trends over the next 20 years as greater than 50%. But even if you assess the odds more conservatively, you ought to have some contingency plans in mind, just in case the pace quickens more than you expected.

In other words, the time-linked arguments in favour of exploring a potential UBI have considerable force.

As it happens, the timeless arguments may gain increased force too. If it’s true that the moral arc of history bends upwards – if it’s true that moral sensibilities towards our fellow humans increase over the passage of time – then arguments which at one time fell below society’s moral radar can gain momentum in the light of collective experience and deliberative reflection.

An impractical idea?

Many people who are broadly sympathetic to the principle of UBI nevertheless consider the concept to be deeply impractical. For example, here’s an assessment by veteran economics analyst John Kay, in his recent article “Basic income schemes cannot work and distract from sensible, feasible and necessary welfare reforms”:

The provision of a universal basic income at a level which would provide a serious alternative to low-paid employment is impossibly expensive. Thus, a feasible basic income cannot fulfil the hopes of some of the idea’s promoters: it cannot guarantee households a standard of living acceptable in a modern society, it cannot compensate for the possible disappearance of existing low-skilled employment and it cannot eliminate “bullshit jobs”. Either the level of basic income is unacceptably low, or the cost of providing it is unacceptably high. And, whatever the appeal of the underlying philosophy, that is essentially the end of the matter.

Kay offers this forthright summary:

Attempting to turn basic income into a realistic proposal involves the reintroduction of elements of the benefit system which are dependent on multiple contingencies and also on income and wealth. The outcome is a welfare system which resembles those that already exist. And this is not surprising. The complexity of current arrangements is not the result of bureaucratic perversity. It is the product of attempts to solve the genuinely difficult problem of meeting the variety of needs of low-income households while minimising disincentives to work for households of all income levels – while ensuring that the system established for that purpose is likely to sustain the support of those who are required to pay for it.

I share Piachaud’s conclusion that basic income is a distraction from sensible, feasible and necessary welfare reforms. As in other areas of policy, it is simply not the case that there are simple solutions to apparently difficult issues which policymakers have hitherto been too stupid or corrupt to implement.

Supporters of UBI have rebuttals to this analysis. Some of these rebuttals will no doubt be presented at the UBIA 2018 event on 2nd June.

One rebuttal seeks to rise above “zero sum” considerations. Injecting even a small amount of money into everyone’s hands can have “multiplier” effects, as that new money passes in turn through several people’s hands. One person’s spending is another person’s income, ready for them to spend in turn.
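To make that arithmetic concrete, here is a minimal sketch in Python – my own illustration, with invented figures, rather than anything from the UBI literature – of how an initial injection generates a larger total of spending when each recipient passes on a fixed fraction of what they receive:

```python
# Minimal sketch of the spending "multiplier": an injection of money passes
# from hand to hand, with each recipient spending a fraction (the marginal
# propensity to consume, MPC) and the rest leaking out as savings or taxes.

def total_spending(injection: float, mpc: float, rounds: int = 100) -> float:
    """Total spending generated as the injection circulates."""
    total = 0.0
    spend = injection
    for _ in range(rounds):
        total += spend
        spend *= mpc   # only a fraction is passed on at each step
    return total

# With an MPC of 0.8, a 1,000 injection yields roughly 5,000 of spending,
# matching the closed-form multiplier 1 / (1 - MPC) = 5.
print(total_spending(1_000, 0.8))
```

The leakage fraction (1 − MPC) determines how quickly the cycle dies away; supporters of UBI argue that payments reaching low-income households are spent at a high rate, and hence carry a large multiplier.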

Along similar lines, Professor Guy Standing, who will be delivering the opening keynote at UBIA 2018, urges readers of his book Basic Income: And How We Can Make It Happen to consider positive feedback cycles: “the likely impact of extra spending power on the supply of goods and services”. As he says,

In developing countries, and in low-income communities in richer countries, supply effects could actually lower prices for basic goods and services. In the Indian basic income pilots, villagers’ increased purchasing power led local farmers to plant more rice and wheat, use more fertilizer and cultivate more of their land. Their earnings went up, while the unit price of the food they supplied went down. The same happened with clothes, since several women found it newly worthwhile to buy sewing machines and material. A market was created where there was none before.

A similar response could be expected in any community where there are people who want to earn more and do more, alongside people wanting to acquire more goods and services to improve their living standard.

(I am indebted to Standing’s book for many other insights that have influenced my thinking and, indeed, points raised in this blogpost. It’s well worth reading!)

There’s a broader point that needs to be raised, about the “prices for basic goods and services”. Since a Basic Income needs to cover payments for these goods and services, two approaches are possible:

  1. Seek to raise the level of Basic Income payments
  2. Seek to lower the cost of basic goods and services.

I believe both approaches should be pursued in parallel. The same technologies of automation that pose threats to human employment also hold the promise of creating goods and services at significantly lower cost (and with higher quality). However, any such reduction in cost sits in tension with the prevailing societal focus on boosting prices (and thereby increasing GDP). It is for this reason that we need a change of societal values as well as changes in the mechanics of the social contract.
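As a toy illustration of those two levers – with numbers invented purely for the example – what matters is the ratio between the Basic Income payment and the price of the basic basket of goods and services it must cover:

```python
# Toy illustration: the real value of a Basic Income rises whether the
# payment is raised or the price of the basic basket falls. All figures
# are invented for the example.

def baskets_affordable(payment: float, basket_price: float) -> float:
    """How many basic consumption baskets the payment can buy."""
    return payment / basket_price

print(baskets_affordable(8_000, 1_000))   # baseline: 8.0
print(baskets_affordable(10_000, 1_000))  # lever 1, raise the payment: 10.0
print(baskets_affordable(8_000, 800))     # lever 2, cut basket prices: 10.0
```

Automation-driven cost reduction works on the denominator, which is why it can make a given level of Basic Income go further without any increase in taxation.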

The vision of goods and services having prices approaching zero is, by the way, sometimes called “the Star Trek economy”. Futurist Calum Chace – another of the UBIA 2018 speakers – addresses this topic in his provocatively titled book The Economic Singularity: Artificial intelligence and the death of capitalism. Here’s an extract from one of his blogposts, an “un-forecast” (Chace’s term) for a potential 2050 scenario, “Future Bites 7 – The Star Trek Economy”, featuring Lauren (born 1990):

The race downhill between the incomes of governments and the costs they needed to cover for their citizens was nerve-wracking for a few years, but by the time Lauren hit middle age it was clear the outcome would be good. Most kinds of products had now been converted into services, so cars, houses, and even clothes were almost universally rented rather than bought: Lauren didn’t know anyone who owned a car. The cost of renting a car for a journey was so close to zero that the renting companies – auto manufacturers or AI giants and often both – generally didn’t bother to collect the payment. Money was still in use, but was becoming less and less necessary.

As a result, the prices of most asset classes had crashed. Huge fortunes had been wiped out as property prices collapsed, especially in the hot-spot cities, but few people minded all that much as they could get whatever they needed so easily.

As you may have noticed, the vision of a potential future “Star Trek” economy is part of the graphic design for UBIA 2018.

I’ll share one further comment on the question of the affordability of UBI. Specifically, I’ll quote some comments made by Guardian writer Colin Holtz in the wake of the discovery of the extent of tax evasion revealed by the Panama Papers. The article by Holtz has the title “The Panama Papers prove it: America can afford a universal basic income”. Here’s an extract:

If the super-rich actually paid what they owe in taxes, the US would have loads more money available for public services.

We should all be able to agree: no one should be poor in a nation as wealthy as the US. Yet nearly 15% of Americans live below the poverty line. Perhaps one of the best solutions is also one of the oldest and simplest ideas: everyone should be guaranteed a small income, free from conditions.

Called a universal basic income by supporters, the idea has attracted support throughout American history, from Thomas Paine to Martin Luther King Jr. But it has also faced unending criticism for one particular reason: the advocates of “austerity” say we simply can’t afford it – or any other dramatic spending on social security.

That argument dissolved this week with the release of the Panama Papers, which reveal the elaborate methods used by the wealthy to avoid paying back the societies that helped them to gain their wealth in the first place…

While working and middle-class families pay their taxes or face consequences, the Panama Papers remind us that the worst of the 1% have, for years, essentially been stealing access to Americans’ common birthright, and to the benefits of our shared endeavors.

Worse, many of those same global elite have argued that we cannot afford to provide education, healthcare or a basic standard of living for all, much less eradicate poverty or dramatically enhance the social safety net by guaranteeing every American a subsistence-level income.

The Tax Justice Network estimates the global elite are sitting on $21–32tn of untaxed assets. Clearly, only a portion of that is owed to the US or any other nation in taxes – the highest tax bracket in the US is 39.6% of income. But consider that a small universal income of $2,000 a year to every adult in the US – enough to keep some people from missing a mortgage payment or skimping on food or medicine – would cost only around $563bn each year.
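Holtz’s headline figure is easy to sanity-check with a back-of-envelope calculation. The adult-population estimate below is my own round number, not taken from his article:

```python
# Back-of-envelope check on the quoted cost of a small universal income.
# The population figure is a rough estimate of US adults circa 2016.
US_ADULTS = 245_000_000    # assumption: approximate US adult population
ANNUAL_PAYMENT = 2_000     # dollars per adult per year, as in the article

annual_cost = US_ADULTS * ANNUAL_PAYMENT
print(f"${annual_cost / 1e9:.0f}bn per year")   # ~$490bn, the same order as the quoted $563bn
```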

This takes us from the question of affordability to the question of political feasibility. Read on…

A politically infeasible idea?

One potentially large obstacle to adopting UBI is that powerful entities within society will fight hard against it, opposed as they are to any increase in taxation or decline in their wealth. These entities don’t particularly care that the existing social contract provides a paltry offering to the poor and precarious in society – or to those “inadequates” who happen to lose their jobs and their standing in the economy. The existing social contract provides them personally (and those they consider their peers) with a large piece of the cake. They’d like to keep things that way, thank you very much.

They defend the current setup with ideology. The ideology states that they deserve their current income and wealth, on account of the outstanding contributions they have made to the economy. They have created jobs, or goods, or services of one sort or another, that the marketplace values. And no-one has any right to take their accomplishments away from them.

In other words, they defend the status quo with a theory of value. In order to overcome their resistance to UBIA, I believe we’ll need to tackle this theory of value head on, and provide a better theory in its place. I’ll pick up that thread of thought shortly.

But an implementation of UBI doesn’t need to happen “big bang” style, all at once. It can proceed in stages, starting with a very low level, and (all being well) ramping up from there in phases. The initial payment from UBI could be funded from new types of tax that would, in any case, improve the health of society:

  • A tax on financial transactions (sometimes called a “Tobin tax”) – which would help to put the brakes on the accelerating volume of transactions that take place entirely within the financial industry (without directly assisting the real economy)
  • A “Greenhouse gas tax” (such as a “carbon tax”) on activities that generate greenhouse gas pollution.

Continuing the discussion

The #ubia channel in the newly created London Futurists Slack workspace awaits comments on this topic. For a limited time, members and supporters of London Futurists can use this link to join that workspace.

2 July 2017

If your Windows 10 laptop doesn’t connect to websites


What should you do if your Windows 10 laptop fails to connect to any website? With the same problem in both Chrome and Microsoft Edge?

Suppose, like me, you’ve rebooted your laptop several times, rebooted your home broadband wireless router, and also tried connecting to websites over the cellular SIM that is built into the laptop. All to no avail. What next?

You’d probably, like me, run the Windows Network Diagnostics tool. But what if that fails to report any problems?

That was the situation I was in last night. My laptop had been off the network for a while, as it rendered a 14GB MP4 file of recordings from yesterday’s London Futurists event. (This one, if you’re curious.) But when I was ready to upload the file to YouTube, I hit the connection problem.

As it happens, my laptop is a bit over six years old. It has served me well. But it gets pretty hot from time to time – especially when processing videos. I started to suspect that the heat might have damaged an internal connector. That’s despite the fact that the BIOS diagnostics tests gave the machine a clean bill of health.

I even spent some time disabling anti-virus software. That didn’t make any difference. Nor did leaving the laptop alone, switched off for six hours to cool down as I slept.

At this stage I was beginning to plan the process of buying a new laptop. I went to press the Windows “Shut down” button one more time. I noticed that the button actually said “Update Windows and Shut down”.

Well, I hadn’t been expecting any Windows update. Three days earlier, I’d already been through a very lengthy process of installing something called “Windows 10 Creators Update” – a process I’d accepted on the prompting of messages sent to me by Microsoft through my laptop.

(Did I say ‘lengthy’? At one point during that Creators Update my laptop had displayed a screen for more than two hours saying something like “This will take a while”. The percentage-done indicator stayed at 1% for a full 15 minutes, before ticking up to 2%.)

This second update, which took place this morning, also took ages. I stared at my laptop as it warned me, “Getting Windows ready. Don’t turn off your computer”.

After around 30 minutes, I was almost ready to ignore the advice. Before reaching for the hardware reset button, though, I decided to attend to some other household tasks. By the time I returned to my laptop, it was inviting me to log in. Twenty minutes later, I was back online. Chrome was showing me webpages again. Hooray!

In short, my guess is that Microsoft was carrying out some kind of mass software download to my laptop at the time I was trying to connect to websites, and that, for some reason, the Microsoft traffic was being prioritised to the exclusion of my own. Too bad that the diagnostic tool gave no inkling of what might be happening.

I notice a recent headline in The Register: “Don’t install our buggy Windows 10 Creators Update, begs Microsoft”. The sub-headline follows up:

We’ll give it to you when it’s ready – and it is not.

My own experience seems to back up that message: the new software can struggle on older hardware.
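One lesson for next time: when the built-in diagnostics draw a blank, a few lines of Python can at least narrow down which layer is failing – DNS, raw TCP, or HTTP. Here’s a generic triage sketch (the host name is arbitrary, and this is my own after-the-fact illustration, not something I ran at the time):

```python
# Triage a "no websites load" problem by testing each network layer in turn.
import socket
import urllib.request

HOST = "example.com"   # any well-known host will do

# 1. DNS: can the name be resolved at all?
try:
    ip = socket.gethostbyname(HOST)
    print(f"DNS OK: {HOST} -> {ip}")
except socket.gaierror as e:
    print(f"DNS failure: {e}")           # suspect the resolver, not the link

# 2. TCP: can we open a raw connection to the HTTPS port?
try:
    with socket.create_connection((HOST, 443), timeout=5):
        print("TCP OK: port 443 reachable")
except OSError as e:
    print(f"TCP failure: {e}")           # suspect routing or a firewall

# 3. HTTP: does an actual web request succeed?
try:
    with urllib.request.urlopen(f"http://{HOST}", timeout=5) as resp:
        print(f"HTTP OK: status {resp.status}")
except Exception as e:
    print(f"HTTP failure: {e}")          # the link works but web traffic doesn't
```

If a script like this succeeds at every layer while the browsers still fail – or if, as in my case, the network heals itself once a pending update completes – the culprit is more likely to be software above the network stack than a damaged connector.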

Windows 10 users take note!

As a postscript, the YouTube video is now available.
