
3 December 2023

“6 Mindblowing Predictions about 2024”

Filed under: Abundance, futurist, intelligence, vision — David Wood @ 11:15 am

As we stand on the brink of 2024, the air is electric with anticipation. The future, often shrouded in mystery and conjecture, seems to beckon us with a mischievous grin, promising wonders and revelations that most of us haven’t even begun to imagine. I’m here to pull back the curtain, just a little, to reveal six mind-blowing predictions about 2024 that 99% of people don’t know about. Fasten your seatbelts, for we’re about to embark on a thrilling ride into the unknown!

[ Note: with the exception of this paragraph inside the square brackets, all the text (and formatting) in this article was created by GPT-4, and hasn’t been edited in the slightest by me. I offer this post as an example of what generative AI can achieve with almost no human effort. It’s far from what I would write personally, but it’s comparable to the fluff that seems to earn lots of so-called futurist writers lots of clicks. As for the images, they were all produced by Midjourney. The idea for this article came from this Medium article by Neeramitra Reddy. ]

1. The Rise of Personal AI Companions

Imagine waking up to a friendly voice that knows you better than anyone else, offering weather updates, reading out your schedule, and even cracking a joke or two to kickstart your day with a smile. In 2024, personal AI companions will move from science fiction to everyday reality. These AI entities will be more than just sophisticated algorithms; they’ll be digital confidantes, seamlessly integrating into our daily lives, offering personalized advice, and even helping us stay on top of our mental and physical health.

2. Green Energy Takes a Giant Leap

The year 2024 will witness a monumental shift in the global energy landscape. We’re not just talking about a few more solar panels and wind turbines here. We’re talking about a green energy revolution! Breakthroughs in solar cell technology will make harnessing the sun’s power more efficient than ever. Wind energy will see advancements in turbine designs, making them more powerful and less intrusive. Cities will start to glow with the promise of a cleaner, greener future, as renewable energy becomes more accessible and affordable than ever before.

3. The Emergence of Smart Cities

Picture a city that not only understands your needs but anticipates them. In 2024, the concept of ‘smart cities’ will evolve from buzzword to reality. We’re talking about urban areas equipped with sensors and AI-driven analytics to optimize everything from traffic flow to energy consumption. Public transport will become more efficient, pollution levels will drop, and city life will be smoother and more enjoyable. These smart cities will be a game-changer, significantly enhancing the quality of life for their residents.

4. Breakthroughs in Health Tech: Customized Medicine

Healthcare in 2024 is set to become more personalized than ever. With advancements in genomics and AI, doctors will be able to tailor treatments and medications to each individual’s genetic makeup. This means fewer side effects and more effective treatments. Imagine a world where your medication is designed specifically for you, maximizing its efficacy and minimizing its risks. This personalized approach will revolutionize how we think about medicine and healthcare.

5. The New Space Race: Commercial Space Travel

2024 could well be the year that space tourism takes off, quite literally. With companies like SpaceX and Blue Origin paving the way, we’re looking at the possibility of commercial space travel becoming a reality for those who dare to dream. Think about it – sipping a beverage while gazing at the Earth from space! This new space race isn’t just for the ultra-wealthy; it’s a stepping stone towards making space travel more accessible to everyone.

6. Virtual Reality: The New Frontier of Entertainment

Virtual reality (VR) is set to take the world of entertainment by storm in 2024. With advancements in technology, VR experiences will become more immersive and interactive, transcending the boundaries of traditional entertainment. Imagine being able to step into your favorite movie, interact with characters, or even alter the storyline. VR will offer an escape into fantastical worlds, making our entertainment experiences more intense and personal.

As we gear up for 2024, it’s clear that we’re on the cusp of a new era. An era defined by technological marvels that promise to reshape our world in ways we can barely begin to fathom. These six predictions are just the tip of the iceberg. The future is a canvas of endless possibilities, and 2024 is poised to paint a picture that’s vibrant, exhilarating, and positively mind-blowing.

So, there you have it – a glimpse into the not-so-distant future that’s brimming with potential and promise. As we inch closer to 2024, let’s embrace these changes with open arms and curious minds. The future is ours to shape, and it’s looking brighter than ever!

15 May 2022

A year-by-year timeline to 2045

The ground rules for the worldbuilding competition were attractive:

  • The year is 2045.
  • AGI has existed for at least 5 years.
  • Technology is advancing rapidly and AI is transforming the world sector by sector.
  • The US, EU and China have managed a steady, if uneasy, power equilibrium.
  • India, Africa and South America are quickly on the rise as major players.
  • Despite ongoing challenges, there have been no major wars or other global catastrophes.
  • The world is not dystopian and the future is looking bright.

Entrants were asked to submit four pieces of work. One was a new media piece. I submitted this video:

Another required piece was:

A timeline with entries for each year between 2022 and 2045 giving at least two events (e.g. “X invented”) and one data point (e.g. “GDP rises by 25%”) for each year.

The timeline I created dovetailed with the framework from the above video. Since I enjoyed creating it, I’m sharing my submission here, in the hope that it may inspire readers.
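
As a side note for readers who think in code, the requested structure – at least two events plus one data point for each year from 2022 to 2045 – maps onto a very simple data model. The sketch below is purely illustrative; the names and checks are my own, and not part of the competition brief.

```python
from dataclasses import dataclass, field

@dataclass
class YearEntry:
    """One year of the worldbuilding timeline."""
    year: int
    events: list[str] = field(default_factory=list)  # e.g. "X invented"
    data_point: str = ""                             # e.g. "GDP rises by 25%"

def check_timeline(entries: list[YearEntry]) -> None:
    """Verify the competition constraints: every year from 2022 to 2045
    is present, each with at least two events and a data point."""
    by_year = {entry.year: entry for entry in entries}
    for year in range(2022, 2046):
        entry = by_year.get(year)
        assert entry is not None, f"missing entry for {year}"
        assert len(entry.events) >= 2, f"{year} needs at least two events"
        assert entry.data_point, f"{year} needs a data point"

# One entry, based on the first year of the timeline below; the full list of
# 24 such entries (2022-2045) is what check_timeline would be run against.
example_2022 = YearEntry(
    2022,
    events=[
        "US mid-term elections result in log-jammed US governance",
        "Collapse of a major crypto stablecoin has wide repercussions",
    ],
    data_point="590,000 people killed in violent incidents worldwide",
)
```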

(Note: the content was submitted on 11th April 2022.)

2022

US mid-term elections result in log-jammed US governance, widespread frustration, and a groundswell desire for more constructive approaches to politics.

The collapse of a major crypto “stablecoin” results in much wider adverse repercussions than was generally expected, and a new social appreciation of the dangers of flawed financial systems.

Data point: Number of people killed in violent incidents (including homicides and armed conflicts) around the world: 590,000

2023

Fake news spread by social media, and driven by a new variant of AI, provokes riots in which more than 10,000 people die, leading to much greater interest in a set of “Singularity Principles” that had previously been proposed to steer the development of potentially world-transforming technologies.

G7 transforms into the D16, consisting of the world’s 16 leading democracies, proclaiming a profound shared commitment to champion norms of: openness; free and fair elections; the rule of law; independent media, judiciary, and academia; power being distributed rather than concentrated; and respect for autonomous decisions of groups of people.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 6.4%

2024

South Korea starts a trial of a nationwide UBI scheme, in the first of what will become in later years a long line of increasingly robust “universal citizens’ dividends” schemes around the world.

A previously unknown offshoot of ISIS releases a bioengineered virus. Fortunately, vaccines are quickly developed and deployed against it. In parallel, a bitter cyber war takes place between Iran and Israel. These incidents lead to international commitments to prevent future recurrences.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 38%

2025

Extreme weather – floods and storms – kills tens of thousands of people in both North America and Europe. A major trial of geo-engineering – reflecting solar radiation in the stratosphere – is rushed through, causing global political disagreement and then a renewed determination for tangible shared action on climate change.

The US President appoints a Secretary for the Future as a top-level cabinet position. More US states adopt ranked-choice voting, allowing third parties to grow in prominence.

Data point: Proportion of earth’s habitable land used to rear animals for human food: 38%

2026

A song created entirely by an AI tops the hit parade, and initiates a radical new musical genre.

Groundswell opposition to autocratic rule in Russia leads to the fall from power of the president and a new dedication to democracy throughout countries formerly perceived as being within Russia’s sphere of direct influence.

Data point: Net greenhouse gas emissions (including those from land-use changes): 59 billion tons of CO2 equivalent – an unwelcome record.

2027

Metformin approved for use as an anti-aging medicine in a D16 country. Another D16 country recommends nationwide regular usage of a new nootropic drug.

Exchanges of small numbers of missiles between North and South Korea lead to regime change inside North Korea and a rapprochement between the long-bitter enemies.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 9.2%

2028

An innovative nuclear fusion system, with its design assisted by AI, runs for more than one hour and generates significantly more energy than was put in.

As a result of disagreements about the future of an independent Taiwan, an intense destructive cyber battle takes place. At the end, the nations of the world commit more seriously than before to avoiding any future cyber battles.

Data point: Proportion of world population experiencing mental illness or dissatisfied with the quality of their mental health: 41%

2029

A trial of an anti-aging intervention in middle-aged dogs is confirmed to have increased remaining life expectancy by 25% without causing any adverse side effects. Public interest in similar interventions in humans skyrockets.

The UK rejoins a reconfigured EU, as an indication of support for sovereignty that is pooled rather than narrow.

Data point: Proportion of world population with formal cryonics arrangements: 1 in 100,000

2030

Russia is admitted into the D40 – a newly expanded version of the D16. The D40 officially adopts the “Index of Human Flourishing” as a more important metric than GDP, and agrees a revised version of the Universal Declaration of Human Rights, brought up to date with transhuman issues.

First permanent implant in a human of an artificial heart with a new design that draws all required power from the biology of the body rather than any attached battery, and whose pace of operation is under the control of the brain.

Data point: Net greenhouse gas emissions (including those from land-use changes): 47 billion tons of CO2 equivalent – a significant improvement

2031

An AI discovers and explains a profound new way of looking at mathematics, DeepMath, leading in turn to dramatically successful new theories of fundamental physics.

Widespread use of dynamically re-programmed nanobots to treat medical conditions that would previously have been fatal.

Data point: Proportion of world population regularly taking powerful anti-aging medications: 23%

2032

First person reaches the age of 125. Her birthday celebrations are briefly disrupted by a small group of self-described “naturality advocates” who chant “120 is enough for anyone”, but that group has little public support.

D40 countries put in place a widespread “trustable monitoring system” to cut down on existential risks (such as spread of WMDs) whilst maintaining citizens’ trust.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 35.7% 

2033

For the first time since the 1850s, the US President comes from a party other than the Republicans or the Democrats.

An AI system is able to convincingly pass the Turing test, impressing even the previous staunchest critics with its apparent grasp of general knowledge and common sense. The answers it gives to questions of moral dilemmas also impress previous sceptics.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 58%

2034

The D90 (expanded from the D40) agrees to vigorously impose Singularity Principles rules to avoid inadvertent creation of dangerous AGI.

Atomically precise synthetic nanoscale assembly factories have come of age, in line with the decades-old vision of nanotechnology pioneer Eric Drexler, and are proving to have just as consequential an impact on human society as AI.

Data point: Net greenhouse gas *removals*: 10 billion tons of CO2 equivalent – a dramatic improvement

2035

A novel written entirely by an AI reaches the top of the New York Times bestseller list, and is widely celebrated as being the finest piece of literature ever produced.

Successful measures to remove greenhouse gases from the atmosphere, coupled with wide deployment of clean energy sources, lead to a declaration of “victory over runaway climate change”.

Data point: Proportion of earth’s habitable land used to rear animals for human food: 4%

2036

A film created entirely by an AI, without any real human actors, wins Oscar awards.

The last major sceptical holdout, a philosophy professor from an Ivy League university, accepts that AGI now exists. The pope gives his blessing too.

Data point: Proportion of world population with cryonics arrangements: 24%

2037

The last instances of the industrial-scale slaughter of animals for human consumption take place, on account of the worldwide adoption of cultivated (lab-grown) meat.

AGI convincingly explains that it is not sentient, and that it has a very different fundamental structure from that of biological consciousness.

Data point: Proportion of world population who are literate: 99.3%

2038

Rejuvenation therapies are in wide use around the world. “Eighty is the new fifty”. First person reaches the age of 130.

Improvements made by the AGI upon itself effectively raise its IQ a hundredfold, taking it far beyond the comprehension of human observers. However, the AGI provides explanatory educational material that allows people to understand vast new sets of ideas.

Data point: Proportion of world population who consider themselves opposed to AGI: 0.1%

2039

An extensive set of “vital training” sessions has been established by the AGI, with all citizens over the age of ten participating for a minimum of seven hours per day on 72 days each year, to ensure that humans develop and maintain key survival skills.

Menopause reversal is commonplace. Women who had long ago given up any ideas of bearing another child happily embrace motherhood again.

Data point: Proportion of world population regularly taking powerful anti-aging medications: 99.2%

2040

The use of “mind phones” is widespread: new brain-computer interfaces that allow communication between people by mental thought alone.

People regularly opt to have several of their original biological organs replaced by synthetic alternatives that are more efficient, more durable, and more reliable.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 96%

2041

Shared immersive virtual reality experiences include hyper-realistic simulations of long-dead individuals – including musicians, politicians, royalty, saints, and founders of religions.

The number of miles of journey undertaken by small “flying cars” exceeds that of ground-based powered transport.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 100.0%

2042

First successful revival of a mammal from cryopreservation.

AGI presents a proof of the possibility of time travel, but the safe transit of humans through time would require resources equivalent to building a Dyson sphere around the sun.

Data point: Proportion of world population experiencing mental illness or dissatisfied with the quality of their mental health: 0.4%

2043

First person reaches the age of 135, and declares herself to be healthier than at any time in the preceding four decades.

As a result of virtual reality encounters with avatars of the founders of religions, a number of new systems of philosophical and mystical thinking grow in popularity.

Data point: Proportion of world’s energy provided by earth-based nuclear fusion: 75%

2044

First human baby born from an ectogenetic pregnancy.

Family holidays on the Moon are an increasingly common occurrence.

Data point: Average amount of their waking time that people spend in a metaverse: 38%

2045

First revival of human from cryopreservation – someone who had been cryopreserved ten years previously.

Subtle messages decoded by AGI from far distant stars in the galaxy confirm that other intelligent civilisations exist, and are on their way to reveal themselves to humanity.

Data point: Number of people killed in violent incidents around the world: 59

Postscript

My thanks go to the competition organisers, the Future of Life Institute, for providing the inspiration for the creation of the above timeline.

Readers are likely to have questions in their minds as they browse the timeline above. More details of the reasoning behind the scenarios involved are contained in three follow-up posts.

22 February 2022

Nine technoprogressive proposals

Filed under: Events, futurist, vision — David Wood @ 11:30 pm

Ahead of time, I wasn’t sure the format was going to work.

It seemed to be an ambitious agenda. Twenty-five speakers were signed up to deliver short presentations. Each had agreed to limit their remarks to just four minutes. The occasion was an International Technoprogressive Conference that took place earlier today (22nd February), with themes including:

  • “To be human, today and tomorrow”
  • “Converging visions from many horizons”.

Each speaker had responded to a call to cover in their remarks either or both of the following:

  • Provide a brief summary of transhumanist-related activity in which they are involved
  • Make a proposal about “a concrete idea that could inspire positive and future-oriented people or organisations”.

Their proposals could address, for example, AI, enhancing human nature, equity and justice, accelerating science, existential risks, the Singularity, social and political angles, the governance of technology, superlongevity, superhappiness, or sustainable superabundance.

The speakers who provided concrete proposals were asked, ahead of the conference, to write down their proposal in 200 words or less, for distribution in a document to be shared among all attendees.

Attendees at the event – speakers and non-speakers alike – were asked to provide feedback on the proposals that had been presented, and to cast up to five votes among the different proposals.

I wondered whether we were trying to do too much, especially given the short amount of time spent in preparing for the event.

Happily, it all went pretty smoothly. A few speakers recorded videos of their remarks in advance, to be sure to keep to the allotted timespan. A small number of others were in the end unable to take part on the day, on account of last-minute schedule conflicts.

As for the presentations themselves, they were diverse – exactly as had been hoped by the organisers: l’Association Française Transhumaniste (Technoprog), with some support from London Futurists.

For example, I found it particularly interesting to hear about perspectives on transhumanism from Cameroon and Japan.

Reflecting the quality of all the presentations, audience votes were spread widely. Comments made by voters again and again stressed the difficulty in each picking just five proposals to be prioritised. Nevertheless, audience members accepted the challenge. Some people gave one vote each to five different proposals. Others split them 2, 2, and 1, or in other combinations. One person gave all their five votes to a single proposal.

As for the outcome of the voting: I’m appending the text of the nine proposals that received the most votes. You’ll notice a number of common ideas, along with significant variety.

I’m presenting these nine proposals in alphabetical order of the first name of the proposers. I hope you find them interesting. If you find yourself inspired by what you read, please don’t hesitate to offer your own support to the projects described.

PS Big thanks are due to everyone who made this conference possible, especially the co-organisers, Didier Coeurnelle and Marc Roux.

Longevity: Opportunities and Challenges

Proposed by Anastasiia Velikanova, project coordinator at Open Longevity

Why haven’t we achieved significant progress in the longevity field yet? Although about 17,000 biological articles with the word “aging” in the title are published yearly, we do not have any therapy that reliably prolongs life.

One reason is that there are no large-scale projects in the biology of aging, such as the Human Genome Project or the Large Hadron Collider. All research is conducted separately in academic institutions or startups, and is mostly closed. A company may start with a great idea but keep its investigations hidden, and the capabilities of its team alone are not enough to change the situation with aging globally.

Another reason is that the problem of aging is highly interdisciplinary. We need advanced mathematical models and AI algorithms to accumulate all research about molecular processes and identify critical genes or targets.

Most importantly, we transhumanists should unite and create an infrastructure that would allow the problem of aging to be solved on a large scale, attracting the best specialists from different fields.

An essential part of such an infrastructure is open databases. For example, our organization created Open Genes – the database of genes associated with aging, allowing the selection of combinatorial therapy against aging.

Vital Syllabus

Proposed by David Wood, Chair at London Futurists

Nearly every serious discussion about improving the future comes round to the need to improve education. In our age of multiple pressures, dizzying opportunities, daunting risks, and accelerating disruption, people in all walks of life need better access to information about the skills that are most important and the principles that matter most. Traditional education falls far short on these counts.

The Vital Syllabus project aims to collect and curate resources to assist students of all ages to acquire and deepen these skills, and to understand and embody the associated principles. To be included in the project, these resources must be free of charge, clear, engaging, and trustworthy – and must align with a transhumanist understanding.

A framework is already in place: 24 top-level syllabus areas, nearly 200 subareas, and an initial set of example videos. Please join this project to help fill out the syllabus quickly!

For information about how to help this project, see this FAQ page.

Longevity Art

Proposed by Elena Milova, Founder at LongevityArt

When we are discussing life extension, people most often refer to movies, animations, books, paintings, and other works of art. There they find the concepts and the role models that they can either follow or reject. Art has the potential to seed ideas in one’s mind that can then gradually grow and mature until they become part of a personal life philosophy. Also, since one function of art is to uncover, question, mock and challenge the status quo, art is one of the most appropriate media for spreading new ideas such as that of radical life extension.

I suggest that the community supports more art projects (movies, animations, books, paintings, digital artworks) by establishing foundations sponsoring the most valuable art projects.

Use longevity parties to do advocacy for more anti-aging research

Proposed by Felix Werth, Leader at Partei für Gesundheitsforschung

With the repair approach, we already know in principle how to defeat aging. To increase significantly our chance of being alive and healthy in 100 years, many more resources have to be put into the implementation of the repair approach. An efficient way to achieve this is to form single-issue longevity parties and run in elections. There are many people who would like to live longer, but for some reason don’t do anything about it. Running in elections can be very efficient advocacy, and gives people the option to support longevity research very easily with their vote. If the governing parties see that they can get more votes with this issue, they will probably care about it more.

In 2015 I initiated a longevity party in Germany, and since then we have already participated in 14 elections and done a lot of advocacy, all with very few active members and very few resources. With a little more in the way of resources, much more advocacy could be done this way. I suggest that more people who want radical life extension in their lifetime form longevity parties in their own countries and run in elections. Growing the longevity movement faster is key to success.

Revive LEV: The Game on Life Extension

Proposed by Gennady Stolyarov, Chair at U.S. Transhumanist Party

I propose to resurrect a computer game on longevity escape velocity, LEV: The Game, which was previously attempted in 2014 and for which a working Alpha version had been created but had unfortunately been lost since that time.

In this game one plays the role of a character who, through various lifestyle choices and pursuit of rejuvenation treatments, strives to live to age 200. The U.S. Transhumanist Party has obtained the rights to continue game development as well as the previously developed graphical assets. The logic of the game has been redesigned to be turn-based; all that remains is to recruit the programming talent needed to implement the logic of the game into code. A game on longevity escape velocity can draw in a much larger audience to take interest in the life-extension movement and also illustrate how LEV will likely actually arrive – dispelling common misunderstandings and enabling more people to readily understand the transition to indefinite lifespans.
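
A small aside, not part of the proposal above: the core dynamic such a game would convey – longevity escape velocity – can be sketched in a few lines of turn-based logic. Everything below (names, numbers, the assumed rate of progress) is invented purely for illustration.

```python
def play_lev_game(start_age: int = 40, start_expectancy: float = 85.0,
                  max_turns: int = 200) -> str:
    """Toy turn-based sketch of longevity escape velocity (LEV).

    Each turn the character ages by one year, while rejuvenation treatments
    add a compounding number of extra years of life expectancy. LEV is
    reached once expectancy grows by at least one year per year of ageing.
    All numbers here are invented for illustration only.
    """
    age, expectancy = start_age, start_expectancy
    yearly_gain = 0.2          # extra years of expectancy gained this turn
    for _ in range(max_turns):
        age += 1
        expectancy += yearly_gain
        if age >= expectancy:
            return f"Died at age {age}"
        if yearly_gain >= 1.0:
            return f"Longevity escape velocity reached at age {age}"
        yearly_gain *= 1.05    # assumed compounding progress in rejuvenation
    return f"Still going at age {age} (expectancy {expectancy:.0f})"

print(play_lev_game())   # with these toy numbers: LEV reached around age 74
```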

Implement optimization and planning for your organization

Proposed by Ilia Stambler, Chair at Israeli Longevity Alliance

Often, progressive, transhumanist and/or life-extensionist groups and associations are inefficient as organizations: they lack a clear and agreed vision, concrete goals and plans for the organization’s advancement, and a clear estimate of the available (as well as desirable) human and material resources necessary to achieve those goals and plans; and they do not track progress, performance and achievements toward the implementation of those goals. As a result, many groups act rather as discussion clubs at best, instead of active and productive organizations, drifting aimlessly along occasional activities, and so they can hardly be expected to bring about significant directional positive changes for the future.

Hence the general suggestion is to build up one’s own organizations through organizational optimization, planning concretely not so much in terms of what the organization “should do”, but rather what its specific members actually can and plan to do in the shorter and longer term. I believe that, by increasing planning efficiency and organizational optimization in existing and emerging organizations, a much stronger impact can be made. (The suggestion is general, but particular organizations may see whether it applies to them, and act according to their particular circumstances.)

Campaign for the Longevity Dividend

Proposed by James Hughes, Executive Director at the IEET

The most popular goal of the technoprogressive and futurist community is universal access to safe and effective longevity therapies. There are three things our community can do to advance this agenda:

  1. First, we need to engage with demographic, medical and policy issues that surround longevity therapies, from the old-age dependency ratio and pension crisis to biomarkers of aging and defining aging as a disease process.
  2. Second, we need to directly argue for public financing of research, a rational clinical trial pathway, and access to these therapies through public health insurance.
  3. Third, we need to identify the existing organizations with similar or related goals, and establish coalitions with them to work for the necessary legislation.

These projects can build on existing efforts, such as International Longevity Alliance, Ending Aging Media Response and the Global Healthspan Policy Institute.

Prioritise moral enhancement

Proposed by Marc Roux, Chair at the French Transhumanist Association (AFT-Technoprog)

As our efforts to attract funding and researchers to longevity have begun to bear fruit, we need to do much more to popularise moral enhancement.

Ageing is not yet defeated. However, longevity has already found powerful advocates in decision-making spheres. Mentalities are slowly changing, but the battle for longevity is underway.

Our vanguard can begin to turn to another great goal.

Longevity will not be enough to improve the level of happiness and harmony of our societies. History has shown that it doesn’t change the predisposition of humans to dominance, xenophobia, aggressiveness… They remain stuck in their prehistoric shell, which condemns them to repeat the same mistakes. If we don’t allow humans to change these behavioural predeterminations, nothing essential will change.

We must prioritise cognitive sciences, and ensure that this is done in the direction of greater choice for everyone, access for all to an improvement in their mental condition, and an orientation towards greater solidarity.

And we’ll work to prevent the cognitive sciences from continuing to be put at the service of freedom-destroying logics of control and domination.

On this condition, moral enhancement can be an unprecedented good in the history of humanity.

Transhumanist Studies: Knowledge Accelerator

Proposed by Natasha Vita-More, Executive Director at Humanity+

An education is a crucial asset. Providing lifelong learning that is immediate, accessible and continually updating is key. Transhumanist Studies is an education platform designed to expand knowledge about how the world is transforming. Its Knowledge Accelerator curriculum examines the field of longevity, facts on aging and advances in AI, nanomedicine and cryonics, critical and creative thinking, relationships between humanity and the ecosystems of earth and space, the ethics of fairness, and applied foresight concerning opportunities and risks on the horizon.

Our methodology is applied foresight with a learning model that offers three methods in its 50-25-25 curricula:

  1. 50% immersive learning environment (lectures, presentations, and resources);
  2. 25% project-based iterative study; and
  3. 25% open-form discussion and debate (aligned with a Weekly Studies Group and monthly H+ Academy Roundtable).

In its initiative to advance transhumanism, the Knowledge Accelerator supports the benefits of secular values and impartiality. With a team located across continents, the program is free for some and at a low cost for others. As the scope of transhumanism continues to grow, the culture is as extraordinary as its advocacy, integrity, and long-term vision.

Homepage: Transhumanist Studies (teachable.com). (I spoke on the need for education at TransVision 2021.)

28 May 2018

Tug Life IV: Beware complacency

Filed under: Events, futurist — David Wood @ 10:28 am

What does the collision of creativity, media and tech mean for humans?

That’s the overall subject for a series of events, Tug Life IV, being run from 12-15 June by Tug, a Shoreditch-based digital marketing agency.

As the ‘IV’ in the name suggests, it’s the fourth year such a series of events has been held – each time, as part of the annual London Tech Week.

I was one of the speakers last year – at Tug Life III – when my topic was “What happens to humans as machines become more embedded in our lives?”

I enjoyed that session so much that I’ve agreed to be one of the speakers in the opening session of Tug Life IV this year. It’s taking place on the morning of Tuesday 12th June, on the topic “What are we doing with technology? Is it good for us? What should we do about it? As individuals? As businesses? As government?”

Other speakers for this session will include representatives from TalkToUs.AI, Microsoft, and Book of the Future. To register to attend, click here. Note that, depending on the availability of tickets, you can sign up for as many – or as few – of the Tug Life IV events as best match your own areas of interest and concern.

Ahead of the event, I answered some questions from Tug’s Olivia Lazenby about the content of the event. Here’s a lightly edited transcript of the conversation:

Q: What will you be talking about at Tug Life IV?

I’ll be addressing the questions, “Is technology good for us? And what should we do about it?”

I’ll be outlining three types of scenarios for the impact of technology on us, over the next 10-25 years:

  1. The first is business as usual: technology has, broadly, been good for us in the past, and will, broadly, continue to be good for us in the future.
  2. The second is social collapse: technology will get out of hand, and provoke a set of unintended adverse consequences, resulting in humanitarian tragedy.
  3. The third is sustainable abundance for all, in which technology enables a huge positive leap for society and humanity.

In my talk, I’ll be sharing my assessment of the probabilities for these three families of scenario, namely, 10%, 30%, and 60%, respectively.

Q: Would you agree that the conversation around the future of technology has become increasingly polarised and sensationalised?

It’s good that the subject is receiving more airtime than before. But much of the coverage remains, sadly, at a primitive level.

Some of the coverage is playing for shock value – clickbait etc.

Other coverage is motivated by ideologies which are, frankly, well past their sell-by date – ideologies such as biological exceptionalism.

Finally, another distortion is that quite a few of the large mainstream consultancies are seeking to pass on blandly reassuring messages to their clients, in order to bolster their “business as usual” business models. I view much of that advice as irresponsible – similar to how tobacco industry spokespeople used to argue that we don’t know for sure that smoking causes cancer, so let’s keep calm and carry on.

Q: How can we move past the hysteria and begin to truly understand – and prepare for – how technology might shape our lives in the future?

We need to raise the calibre of the conversation about the future, step by step. Two keys here are agile futurism and collaborative futurism. There are too many variables involved for any one person – or any one discipline – to be able to figure things out by themselves. The model of Wikipedia is a good one on which to build, but it’s only a start. I’m encouraging people to cooperate in the development of something I call H+Pedia.

My call to action is for people to engage more with the communities of futurists, transhumanists, and singularitarians, who are, thankfully, advancing a collaborative discussion that is progressing objective evaluations of the credibility, desirability, and actionability of key future scenarios. Let’s put aside the distractions of the present in order to more fully appreciate the huge opportunities and huge threats technology is about to unleash.

6 March 2018

Transcending left and right?

(The following consists of short extracts from Chapter 1, “Vision and Roadmap”, of my recent book Transcending Politics.)

One of the most destructive elements of current politics is its divisiveness. Politicians form into warring parties which then frequently find fault with each other. They seek to damage the reputation of their adversaries, throwing lots of mud in the hope that at least some of it will stick. Whereas disagreement is inherent in political process, what would be far better is if politicians could disagree without being disagreeable.

The division between “left” and “right” is particularly long-established. The pioneering transhumanist philosopher F.M. Esfandiary, who later changed his name to FM-2030, lamented this division in his 1977 book Up-Wingers:

To transcend more rapidly to higher levels of evolution we must begin by breaking out of the confinement of traditional ideologies.

We are at all times slowed down by the narrowness of Right-wing and Left-wing alternatives. If you are not conservative you are liberal. If not right of centre you are left of it or middle of the road.

Our traditions comprise no other alternatives. There is no ideological or conceptual dimension beyond conservative and liberal – beyond Right and Left.

Right and Left – even the extreme Left – are traditional frameworks predicated on traditional premises striving in obsolete ways to attain obsolete goals.

Esfandiary’s answer was a different dimension: “Up” – the optimistic embrace of radical technological possibility for positive human transformation:

How do you identify Space scientists who this very day are working with new sets of premises to establish communities in other worlds? Are they Right-wing or Left? Are they conservative or liberal?…

These and other breakthroughs are outside the range of all the traditional philosophical social economic political frameworks. These new dimensions are nowhere on the Right or on the Left. These new dimensions are Up.

Up is an entirely new framework whose very premises and goals transcend the conventional Right and Left…

The Right/Left establishment wants to maintain an evolutionary status quo. It is resigned to humanity’s basic predicament. It simply strives to make life better within this predicament.

Up-Wingers are resigned to nothing. We accept no human predicament as permanent, no tragedy as irreversible; no goals as unattainable.

The term “Up” dovetails with Esfandiary’s evident interest in the exploration of space. We should raise our thinking upwards – towards the stars – rather than being constrained with small-mindedness.

Professor Steve Fuller of the University of Warwick and legal expert Veronika Lipinska take these ideas further in their 2014 book The Proactionary Imperative: A Foundation for Transhumanism, in which they explore “the rotation of the ideological axis”, from left/right to up/down. Fuller and Lipinska provide some fascinating historical background and provocative speculations about possible futures – including a section on “the four styles of playing God in today’s world”.

I share the view that there are more important questions than the left-right split that has dominated politics for so long. Esfandiary was correct to highlight the question of whether to embrace (“Up”) or to reject (“Down”) the potential of new technology to dramatically enhance human capabilities.

But the “Up” decision to embrace the potential for transhuman enhancements still leaves many other decisions unresolved. People who identify as being up-wing are torn between being “right-leaning upwingers” and being “left-leaning upwingers”:

  • The former admire the capabilities of a free market
  • The latter admire the safety net of a welfare system
  • The former mistrust the potential over-reach of politicians
  • The latter mistrust the actions of profit-seeking corporations
  • The former wish to uphold as much individual freedom as possible
  • The latter wish to uphold as much social solidarity as possible
  • The former are keen to reduce taxation
  • The latter are keen to increase equality of opportunity
  • The former point to the marvels that can be achieved by competitive-minded self-made individuals
  • The latter point to the marvels that can be achieved by collaboration-minded progressive coalitions.

I identify myself as a technoprogressive more than a technolibertarian. Individual freedoms are important, but the best way to ensure these is via wise collective agreement on appropriate constraints. Rather than seeking minimal government and minimal taxation, you’ll see in the pages ahead that I argue for appropriate government and appropriate taxation.

However, I’m emphatically not going to advocate that left-leaning transhumanists should somehow overcome or defeat right-leaning transhumanists. The beliefs I listed as being characteristic of right-leaning transhumanists all contain significant truths – as do the beliefs I listed for left-leaning transhumanists. The task ahead is to pursue policies that respect both sets of insights. That’s what I mean when describing the Transpolitica initiative as “integrative”. Rather than “either-or” it’s “both-and”.


15 September 2016

Two cheers for “Technology vs. Humanity”

On Saturday I had the pleasure of hosting Swiss futurist Gerd Leonhard at a London Futurists event in central London. The meetup was organised in conjunction with the publication of Gerd’s new book, “Technology vs. Humanity”.


This three minute video from his website gives a fast-paced introduction to Gerd’s thinking:

The subtitle of Gerd’s book indicates the emphasis that comes across loud and clear in its pages: “The coming clash between man and machine”. I have mixed feelings about that emphasis. Yes, a clash between humanity and technology is one of the possible scenarios ahead. But it’s by no means set in stone. If we are smart, much better futures lie ahead. These better futures would see a combination of the best of present-day humanity and the fruits of technological development, to create what I would call a Humanity+ future.

In the Humanity+ future, technology is used to enhance humanity – making us healthier, kinder, smarter, wiser, more compassionate, and more engaged. In contrast, Gerd expects that technology will result in a downgrade of humanity.

The video of Saturday’s London Futurists event records some dialogue on exactly that point. If you’ve got a spare 60 minutes, it’s worth watching the video all the way through. (The Q&A starts after 44 minutes.)

You’ll see that Gerd is an engaging, entertaining presenter, with some stunning visuals.

Hip, hip…

Overall, I am happy to give two cheers to Gerd’s new book – two loud cheers.

The first cheer is that it has many fine examples of the accelerating pace of change. For example, chapter three of his book reviews “ten megashifts”. Gerd starts his presentation with the bold claim that “Humanity will change more in the next 20 years than in the previous 300 years”. He may well be right. Relatedly, Gerd makes a strong case that major change can sneak up on people “gradually and then suddenly”. That’s the nature of exponential change.

The second cheer is even louder than the first one: I completely agree with Gerd that we need to carefully consider the pros and cons of adopting technology in ever greater areas of our lives. He has a brilliant slide in which humans’ attitude towards a fast-improving piece of technology changes from “Magic” to “Manic” and then to “Toxic”. To avoid such progressions, Gerd recommends the formation of something akin to a “Humanity Protection Agency”, similar to the “Environmental Protection Agency” that constrains corporations from polluting and despoiling the environment. Gerd emphasises: just because it is possible to digitise aspects of our lives, it doesn’t mean we should digitise those aspects. More efficient doesn’t always mean better. More profit doesn’t always mean better. More experiences don’t always mean better – and so on. Instead of rushing ahead blindly, we need what Gerd calls “exponentially increased awareness”. He’s completely right.

So I am ready to say, “Hip, hip…” – but I hold back from the third cheer (“hurrah”).

Yes, the book can be a pleasure to read, with its clever turns of phrase and poignant examples. But to my mind, the advice in the book will make things unnecessarily hard for humanity – dangerously hard for humanity. That advice will unnecessarily handicap the “Team Human” which the book says it wants to support.

Specifically:

  • The book has too rosy a view of the present state of human nature
  • The book has too limited a view of the positive potential of technology to address the key shortcomings in human nature.

Let’s take these points one at a time.

Human nature

The book refers to human unpredictability, creativity, emotion, and so on, and insists that these aspects of human nature be protected at all costs. Even though machines might do the same tasks as humans, with greater predictability and less histrionics, it doesn’t mean we should hand these tasks over to machines. Thus far, I agree with the argument.

But humans also from time to time manifest a host of destructive characteristics: short-sightedness, stupidity, vengefulness, tribalism, obstructiveness, spitefulness, and so on. It’s possible that these characteristics were, on the whole, useful to humanity in earlier, simpler stages of civilisation. But in present times, with powerful weaponry all around us, these characteristics threaten to plunge humanity into a new dark age.

(I touched on this argument in a recent Transpolitica blogpost, “Flawed humanity, flawed politics”.)

Indeed, despite huge efforts from people all over the globe, the planet is still headed for a potentially devastating rise in temperature, due to runaway climate change. What’s preventing an adequate response to this risk is a combination of shortcomings in human society, human politics, human economics, and – not least – human nature.

It’s a dangerous folly to overly romanticise human nature. We humans can, at times, be awful brutes. Our foibles aren’t just matters for bemusement. Our foibles should terrify us.


I echo the thoughts expressed in a landmark 2012 Philosophy Now article by Professors Julian Savulescu and Ingmar Persson, “Unfit for the Future: The Urgent Need for Moral Enhancement”:

For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.

But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?…

Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species.

We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities…

Underestimating technology

This brings me to the second point where Gerd’s book misfires: its dogmatic dismissal of the possibility that technology could make any significant improvement in “soft” areas of human life, such as emotional intelligence, creativity, and intuition. The book asserts that whilst software might be able to mimic emotions, these emotions will have no real value. For example, no computer would be able to talk to a two-year-old human child and hold its attention.

This assessment demonstrates a major blindspot regarding the ways in which software can already provide strong assistance for people suffering from autism, self-doubt, early stage dementia, or other emotional or social deficits. As one example, consider a Guardian article from last year, “How robots are helping children with autism”.


Consider also this comment from Dr Lucy Maddox, an NHS clinical psychologist and lecturer:

There are loads of [computer] apps that claim to use psychological principles to increase wellbeing in some way, encouraging you to keep track of your mood, to manage worry, to influence what you dream about … Can an app really distil something useful from psychological research and plug you into some life-influencing wisdom? I think some can…

This discussion brings to mind the similar dismissals, from the 1970s and early 1980s, of the possibility that the technology of in-vitro fertilisation (“test-tube babies”) could result in fully human babies. The suggestion was that any such “devilish” technology would result in babies that somehow lacked souls. Here’s a comment from Philip Ball from New Humanist:

Doubts about the artificial being’s soul are still with us, although more often expressed now in secular terms: the fabricated person is denied genuine humanity. He or she is thought to be soulless in the colloquial sense: lacking love, warmth, human feeling. In a poll conducted for Life in the early days of IVF research, 39 per cent of women and 45 per cent of men doubted that an “in vitro child would feel love for family”. (Note that it is the sensibilities of the child, not of the parents, that are impaired.) A protest note placed on the car of a Californian fertility doctor when he first began offering an IVF service articulated the popular view more plainly: “Test tube babies have no souls.”

In 1978 Leon Kass – later said to be the favourite bioethicist of President George W. Bush – thundered his opposition to in-vitro fertilisation as follows:

More is at stake [with IVF research] than in ordinary biomedical research or in experimenting with human subjects at risk of bodily harm. At stake is the idea of the humanness of our human life and the meaning of our embodiment, our sexual being, and our relation to ancestors and descendants.

These comments by Kass have strong echoes to the themes developed by Gerd in Technology vs. Humanity.

It turned out, contrary to Kass’s dire forecasts, that human society was more than capable of taking in its stride the opportunities provided by IVF technology. Numerous couples found great joy through that technology. Numerous wonderful children were brought into existence in that way.

It ought to be the same, in due course, with the opportunities provided by technologies to enhance our emotional intelligence, our creativity, our intuition, our compassion, our sociability, and so on. Applied wisely and thoughtfully, these technologies will allow the full potential of humanity to be reached – rather than being sabotaged by our innate shortcomings.

Emphatically, I’m not saying we should be rushing into anything. We need to approach the potential offered by these new technologies with great thoughtfulness. And with a more open mind than Gerd displays.

Dogmatism

I found my head shaking in disbelief at many of the paragraphs in Technology vs. Humanity. For example, here’s Gerd’s description of the capabilities of Virtual Reality (VR):

Virtual travel technologies such as Facebook’s Oculus Rift, Samsung VR, and Microsoft’s HoloLens are just beginning to provide us with a very real feeling for what it would be like to raft the Amazon River or climb Mount Fuji. These are already very interesting experiences that will certainly change our way of experiencing reality, of communicating, of working, and of learning… [but] there is still a huge difference between these new ways to experience alternate realities and real life. Picture yourself standing in the middle of a crowded bazaar in Mumbai, India, for just two minutes. Then, compare the memories you would have accumulated in a very short time with those from a much longer but simulated experience using the most advanced systems available today or in the near future. The smells, the sounds and sights – all of these are a thousand times more intense than what even the most advanced gadgetry, fuelled by exponential gains, could ever hope to simulate.

“A thousand times more intense”? More intense than what “the most advanced gadgetry could ever hope to simulate”? Ever?! I see these sweeping claims as evidence of a closed mind. The advice from elsewhere in the book was better: “gradually, and then suddenly”. The intensity of the emotional experience from VR technology is likely to increase gradually, and then suddenly.

Opening the book to another page, my attention is drawn to the exaggeration in another passage, in the discussion of the possibility of ectogenesis (growing a baby outside a woman’s body in an artificial womb):

I believe it would be utterly dehumanising and detrimental for a baby to be born in such a way.

During his presentation at London Futurists, Gerd labelled the technology of ectogenesis as “jerk tech”. In discussion in the Marlborough Arms pub after the meetup, several women attendees remarked that they thought only a man could take such a high-handed, dismissive approach to this technology. They emphasised that they were unsure whether they would personally want to take advantage of ectogenesis, but they thought the possibility should be kept open.

Note: for a book that takes a much more thoughtful approach to the possibilities of using technology to transform genetic choice, I recommend “Babies by Design: The Ethics of Genetic Choice” by Ronald Green.


Transhumanism

The viewpoint I’m advocating, in this review of Technology vs. Humanity, is transhumanism:

…a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.

Oxford philosopher Nick Bostrom puts it like this:

Transhumanists view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways. Current humanity need not be the endpoint of evolution. Transhumanists hope that by responsible use of science, technology, and other rational means we shall eventually manage to become posthuman beings with vastly greater capacities than present human beings have.

One of the best introductions to the ideas of transhumanism is in the evocative “Letter to Mother Nature” written in 1999 by Max More. It starts as follows:

Dear Mother Nature:

Sorry to disturb you, but we humans—your offspring—come to you with some things to say. (Perhaps you could pass this on to Father, since we never seem to see him around.) We want to thank you for the many wonderful qualities you have bestowed on us with your slow but massive, distributed intelligence. You have raised us from simple self-replicating chemicals to trillion-celled mammals. You have given us free rein of the planet. You have given us a life span longer than that of almost any other animal. You have endowed us with a complex brain giving us the capacity for language, reason, foresight, curiosity, and creativity. You have given us the capacity for self-understanding as well as empathy for others.

Mother Nature, truly we are grateful for what you have made us. No doubt you did the best you could. However, with all due respect, we must say that you have in many ways done a poor job with the human constitution. You have made us vulnerable to disease and damage. You compel us to age and die—just as we’re beginning to attain wisdom. You were miserly in the extent to which you gave us awareness of our somatic, cognitive, and emotional processes. You held out on us by giving the sharpest senses to other animals. You made us functional only under narrow environmental conditions. You gave us limited memory, poor impulse control, and tribalistic, xenophobic urges. And, you forgot to give us the operating manual for ourselves!

What you have made us is glorious, yet deeply flawed. You seem to have lost interest in our further evolution some 100,000 years ago. Or perhaps you have been biding your time, waiting for us to take the next step ourselves. Either way, we have reached our childhood’s end.

We have decided that it is time to amend the human constitution.

We do not do this lightly, carelessly, or disrespectfully, but cautiously, intelligently, and in pursuit of excellence. We intend to make you proud of us. Over the coming decades we will pursue a series of changes to our own constitution, initiated with the tools of biotechnology guided by critical and creative thinking. In particular, we declare the following seven amendments to the human constitution…

In contrast, this is what Gerd says about transhumanism (with similar assertions being scattered throughout his book):

Transhumanism, with its lemming-like rush to the edge of the universe, represents the scariest of all present options.

What “lemming-like rush”? Where’s the “lemming-like rush” in the writings of Nick Bostrom (who co-founded the World Transhumanist Association in 1998)? Recall from his definition,

…by responsible use of science, technology, and other rational means we shall eventually manage to become posthuman beings with vastly greater capacities than present human beings have

And consider the sixth proposed “human constitutional amendment” from the letter by Max More:

Amendment No.6: We will cautiously yet boldly reshape our motivational patterns and emotional responses in ways we, as individuals, deem healthy. We will seek to improve upon typical human emotional excesses, bringing about refined emotions. We will strengthen ourselves so we can let go of unhealthy needs for dogmatic certainty, removing emotional barriers to rational self-correction.

As Max emphasised earlier in his Letter,

We do not do this lightly, carelessly, or disrespectfully, but cautiously, intelligently, and in pursuit of excellence

In response to Gerd’s puzzling claim that transhumanists are blind to the potential risks of new technology, let me offer as counter-evidence the nearest thing to a canonical document uniting transhumanist thinking – the “Transhumanist Declaration”. Of its eight clauses, at least half emphasise the potential drawbacks of an uncritical approach to technology:

  1. Humanity stands to be profoundly affected by science and technology in the future. We envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering, and our confinement to planet Earth.
  2. We believe that humanity’s potential is still mostly unrealized. There are possible scenarios that lead to wonderful and exceedingly worthwhile enhanced human conditions.
  3. We recognize that humanity faces serious risks, especially from the misuse of new technologies. There are possible realistic scenarios that lead to the loss of most, or even all, of what we hold valuable. Some of these scenarios are drastic, others are subtle. Although all progress is change, not all change is progress.
  4. Research effort needs to be invested into understanding these prospects. We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented.
  5. Reduction of existential risks, and development of means for the preservation of life and health, the alleviation of grave suffering, and the improvement of human foresight and wisdom should be pursued as urgent priorities, and heavily funded.
  6. Policy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe. We must also consider our moral responsibilities towards generations that will exist in the future.
  7. We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise.
  8. We favour allowing individuals wide personal choice over how they enable their lives. This includes use of techniques that may be developed to assist memory, concentration, and mental energy; life extension therapies; reproductive choice technologies; cryonics procedures; and many other possible human modification and enhancement technologies.

It’s a pity that the editors and reviewers of Gerd’s book did not draw his attention to the many mistakes and misunderstandings of transhumanism that his book contains. My best guess is that the book was produced in a rush. (That would explain the many other errors of fact that are dotted throughout the various chapters.)

To be clear, I accept that many criticisms can be made regarding transhumanism. In an article I wrote for H+Pedia, I collected a total of 18 different criticisms. In that article, I seek to show, in each case:

  • Where these criticisms miss the mark
  • Where these criticisms have substance – so that transhumanists ought to pay attention.

That article – like all other H+Pedia articles – is open for further contributions. Either edit the page directly, or raise comments on the associated “Discussion” page.

The vital need for an improved conversation

The topics covered in Technology vs. Humanity have critical importance. A much greater proportion of humanity’s collective attention should be focused onto these topics. To that extent, I fully support Gerd’s call for an improved global conversation on the risks and opportunities of the forthcoming impact of accelerating technology.

During that conversation, each of us will likely find some of our opinions changing, as we move beyond an initial “future shock” to a calmer, more informed reflection on the possibilities. We need to move beyond both a breathless “gee whiz” and an anguished “oh, this is awful”.

The vision of an improved conversation about the future is what has led me to invest so much of my own time over the years in the London Futurists community.

[Image: London Futurists banner]

More recently, that same vision has led me to support the H+Pedia online wiki – a Humanity+ project to spread accurate, accessible, non-sensational information about transhumanism and futurism among the general public.

[Image: H+Pedia banner]

As the welcome page states,

H+Pedia welcomes constructive contributions from everyone interested in the future of humanity.

By all means get involved! Team Human deserves your support. Team Human also deserves the best information, free of dogmatism, hype, insecurity, or commercial pressures. Critically, Team Human deserves not to be deprived of access to the smart transformational technology of the near future that can become the source of its greatest flourishing.

21 June 2016

5G World Futurist Summit

Filed under: disruption, Events, futurist — Tags: , , , , — David Wood @ 11:30 pm

Intro slide

On Wednesday next week, 29th June, it will be my pleasure to chair the Futurist Summit, which is one of the free-to-attend streams taking place as part of the 5G World event at London’s Olympia.

You can read more about the summit here, and more about the 5G World event here.

The schedule for the summit is as follows:

11:00 Introduction to the Futurist Summit
David Wood – Chair, London Futurists & Principal, Delta Wisdom

11:30 Education 2022 – MOOCs in full use, augmented by AIs doing marking and assessment-setting
Julia Begbie – Deputy Director of Studies – KLC School of Design

12:00 Healthcare 2022 – Digital healthcare systems finally fulfilling the promise that has long been expected of them
Avi Roy – Biomedical Scientist & Research Fellow at the Centre for Advancing Sustainable Medical Innovation (CASMI) – Oxford University

12:30 Finance 2022 – Anticipating a world without physical cash, and in many cases operating without centralised banks
Jeffrey Bower, Digital Finance Specialist, United Nations

13:00 Networking Lunch

14:00 Reinventing urban mobility for new business strategies…self-driving cars and beyond
Stephane Barbier – CEO – Transpolis

14:30 The Future of Smart Cities
Paul Copping – Smart City Advisor – Digital Greenwich, Royal Borough of Greenwich

15:00 The Future of Computer Security and ‘Cybercrime’
Craig Heath, Director, Franklin Heath 

15:30 What happens when virtual reality experiences become more engaging than those in the real world?
Steve Dann, Founder & CEO, Amplified Robot 

16:00 End of Futurist Summit

Speakers slide

Each of the 30-minute slots in the Summit will include a presentation from the speaker followed by audience Q&A.

If you’re in or near London that day, I hope to see many of you at the Summit!

Note that, although the Futurist Summit is free to attend, you need to register in advance for a Free Expo Pass, via the 5G World conference registration page. You’ll probably see other streams at the event that you would also like to attend.

Stop press: Any members of London Futurists can obtain a 50% discount off the price of a full pass to 5G World – if you wish to attend other aspects of the event – by using the Priority Code Partner50 on the registration webpage.

 

 

25 October 2015

Getting better at anticipating the future

History is replete with failed predictions. Sometimes pundits predict too much change. Sometimes they predict too little. Frequently they predict the wrong kinds of change.

Even those forecasters who claim a good track record for themselves sometimes turn out, on closer inspection, to have included lots of wiggle room in their predictions – lots of scope for creative reinterpretation of their earlier words.

Of course, forecasts are often made for purposes other than anticipating the events that will actually unfold. Forecasts can serve many other goals:

  • Raising the profile of the forecaster and potentially boosting book sales or keynote invites – especially if the forecast is memorable, and is delivered in a confident style
  • Changing the likelihood that an event predicted will occur – either making it more likely (if the prediction is enthusiastic), or making it less likely (if the prediction is fearful)
  • Helping businesses and organisations to think through some options for their future strategy, via “scenario analysis”.

Given these alternative reasons why forecasters make predictions, it perhaps becomes more understandable that little effort is made to evaluate the accuracy of past forecasts. As reported by Alex Mayyasi,

Organizations spend staggering amounts of time and money trying to predict the future, but no time or money measuring their accuracy or improving on their ability to do it.

This bizarre state of affairs may be understandable, but it’s highly irresponsible, none the less. We can, and should, do better. In a highly uncertain, volatile world, our collective future depends on improving our ability to anticipate forthcoming developments.

Philip Tetlock

Mayyasi was referring to research by Philip Tetlock, a professor at the University of Pennsylvania. Over three decades, Tetlock has accumulated huge amounts of evidence about forecasting. His most recent book, co-authored with journalist Dan Gardner, is a highly readable summary of his research.

The book is entitled “Superforecasting: The Art and Science of Prediction”. I wholeheartedly recommend it.

Superforecasting

The book carries an endorsement by Nobel laureate Daniel Kahneman:

A manual for thinking clearly in an uncertain world. Read it.

Having just finished this book, I echo the praise it has gathered. The book is grounded in the field of geopolitical forecasting, but its content ranges far beyond that starting point. For example, the book can be viewed as one of the best descriptions of the scientific method – with its elevation of systematic, thoughtful doubt, and its search for ways to reduce uncertainty and eliminate bias. The book also provides a handy summary of all kinds of recent findings about human thinking methods.

“Superforecasting” also covers the improvements in the field of medicine that followed from the adoption of evidence-based medicine (in the face, it should be remembered, of initial fierce hostility from the medical profession). Indeed, the book seeks to accelerate a similar evidence-based revolution in the fields of economic and political analysis. It even has hopes to reduce the level of hostility and rancour that tends to characterise political discussion.

As such, I see the book as making an important contribution to the creation of a better sort of politics.

Summary of “Superforecasting”

The book draws on:

  • Results from four years of online competitions for forecasters held under the Aggregative Contingent Estimation project of IARPA (Intelligence Advanced Research Projects Activity)
  • Reflections from contest participants who persistently scored highly in the competition – people who became known as ‘superforecasters’
  • Insight from the Good Judgement Project co-created by Tetlock
  • Reviews of the accuracy of predictions made publicly by politicians, political analysts, and media figures
  • Other research into decision-making, cognitive biases, and group dynamics.

Forecasters and superforecasters from the Good Judgement Project submitted more than 10,000 predictions over four years in response to questions about the likelihood of specified outcomes happening within given timescales over the following 3-12 months. Forecasts addressed the fields of geopolitics and economics.
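
The accuracy of probability forecasts like these is typically measured with a Brier score – the mean squared difference between the forecast probability and the actual outcome – which is essentially the yardstick used in the forecasting tournaments the book describes (Tetlock reports a closely related variant that sums over both possible outcomes, so its scores run from 0 to 2). Here is a minimal illustrative sketch of my own, not code from the book or from IARPA:

```python
def brier_score(forecasts, outcomes):
    """One common form of the Brier score: mean squared difference between
    forecast probabilities and what actually happened.

    forecasts: probabilities in [0, 1] assigned to "the event will happen"
    outcomes:  0/1 values recording whether each event did happen
    Lower is better: 0.0 is perfect; guessing 50% every time scores 0.25.
    """
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasters answering the same four questions
outcomes      = [1, 0, 0, 1]                 # what actually happened
cautious      = [0.7, 0.3, 0.4, 0.6]         # well-calibrated, moderate probabilities
overconfident = [1.0, 0.0, 0.9, 0.1]         # bold, and badly wrong on the last two

print(brier_score(cautious, outcomes))       # 0.125
print(brier_score(overconfident, outcomes))  # 0.405
```

Scoring of this kind is only possible when forecasts are stated as explicit probabilities with explicit deadlines – which is exactly why the superforecasters’ avoidance of vague terms such as “might” matters.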

The book highlights the following characteristics as the causes of superforecasters’ success (a small illustrative sketch of the aggregation point follows the list):

  • Avoidance of taking an ideological approach, which restricts the set of information that the forecaster considers
  • Pursuit of an evidence-based approach
  • Willingness to search out potential sources of disconfirming evidence
  • Willingness to incrementally adjust forecasts in the light of new evidence
  • The ability to break down estimates into a series of constituent questions that can, individually, be more easily calculated
  • The desire to obtain several different perspectives on a question, which can then be combined into an aggregate viewpoint
  • Comfort with mathematical and probabilistic reasoning
  • Adoption of careful, precise language, rather than vague terms (such as “might”) whose apparent meaning can change with hindsight
  • Acceptance of contingency rather than ‘fate’ or ‘inevitability’ as being the factor responsible for outcomes
  • Avoidance of ‘groupthink’ in which undue respect among team members prevents sufficient consideration of alternative viewpoints
  • Willingness to learn from past forecasting experiences – including both successes and failures
  • A growth mindset, in which personal characteristics and skill are seen as capable of improvement, rather than being fixed.
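
To make the aggregation point concrete, here is a small sketch of my own (illustrative only – not code or parameter values from the Good Judgement Project): several independent probability estimates for the same question are averaged, and the average is then pushed away from 50% (“extremised”), reflecting the idea – reported from the project’s research – that independent forecasters each see only part of the available evidence.

```python
def aggregate(probabilities, a=2.0):
    """Combine several forecasters' probabilities for one question.

    Step 1: take the simple mean of the individual estimates.
    Step 2: push the mean away from 0.5 ("extremising"), on the assumption
            that independent forecasters each saw only part of the evidence.
    The exponent a is a tuning parameter chosen purely for illustration.
    """
    mean = sum(probabilities) / len(probabilities)
    return mean ** a / (mean ** a + (1.0 - mean) ** a)

# Three hypothetical forecasters on the same question
estimates = [0.70, 0.65, 0.80]
print(round(aggregate(estimates), 3))   # 0.865 - more confident than any single input
```

Even this crude combination illustrates why the habit of seeking out several different perspectives pays off.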

(This section draws on material I’ve added to H+Pedia earlier today. See that article for some links to further reading.)

Human pictures

Throughout “Superforecasting”, the authors provide the human backgrounds of the forecasters whose results and methods feature in the book. The superforecasters have a wide variety of backgrounds and professional experience. What they have in common, however – and where they differ from the other contest participants, whose predictions were less stellar – is the set of characteristics given above.

The book also discusses a number of well-known forecasters, and dissects the causes of their forecasting failures. Examples include 9/11, the wars in Iraq, the Bay of Pigs fiasco, and many more. There’s much to learn from all these examples.

Aside: Other ways to evaluate futurists

Australian futurist Ross Dawson has recently created a very different method to evaluate the success of futurists. As Ross explains at http://rossdawson.com/futurist-rankings/:

We have created this widget to provide a rough view of how influential futurists are on the web and social media. It is not intended to be rigorous but it provides a fun and interesting insight into the online influence of leading futurists.

The score is computed from the number of Twitter followers, the Alexa score of websites, and the general Klout metric.

The widget currently lists 152 futurists. I was happy to find my name at #53 on the list. If I finish writing the two books I have in mind to publish over the next 12 months, I expect my personal ranking to climb 🙂

Yet another approach is to take a look at http://future.meetup.com/, the listing (by size) of the Meetup groups around the world that list “futurism” (or similar) as one of their interests. London Futurists, which I’ve been running (directly and indirectly) over the last seven years, features in third place on that list.

Of course, we futurists vary in the kinds of topics we are ready (and willing) to talk to audiences about. In my own case, I wish to encourage audiences away from “slow-paced” futurism, towards serious consideration of the possibilities of radical changes happening within just a few decades. These changes include not just the ongoing transformation of nature, but the possible transformation of human nature. As such, I’m ready to introduce the topic of transhumanism, so that audiences become more aware of the arguments both for and against this philosophy.

Within that particular subgrouping of futurist meetups, London Futurists ranks as a clear #1, as can be seen from http://transhumanism.meetup.com/.

Footnote

Edge has published a series of videos of five “master-classes” taught by Philip Tetlock on the subject of superforecasting:

  1. Forecasting Tournaments: What We Discover When We Start Scoring Accuracy
  2. Tournaments: Prying Open Closed Minds in Unnecessarily Polarized Debates
  3. Counterfactual History: The Elusive Control Groups in Policy Debates
  4. Skillful Backward and Forward Reasoning in Time: Superforecasting Requires “Counterfactualizing”
  5. Condensing it All Into Four Big Problems and a Killer App Solution

I haven’t had the time to view them yet, but if they’re anything like as good as the book “Superforecasting”, they’ll be well worth watching.

10 October 2015

Technological unemployment – Why it’s different this time

On Tuesday last week I joined members of “The Big Potatoes” for a spirited discussion entitled “Automation Anxiety”. Participants became embroiled in questions such as:

  • To what extent will increasingly capable automation (robots, software, and AI) displace humans from the workforce?
  • To what extent should humans be anxious about this process?

The Big Potatoes website chose an image from the marvellously provocative Channel 4 drama series “Humans” to set the scene for the discussion:

[Image: Channel 4 “Humans” advertising hoarding]

“Closer to humans” than ever before, the fictional advertisement says, referring to humanoid robots with multiple capabilities. In the TV series, many humans become deeply distressed at the way their roles are being usurped by these new-fangled entities.

Back in the real world, many critics reject these worries. “We’ve heard it all before”, they assert. Every new wave of technological automation has caused employment disruption, yes, but it has also led to new types of employment. The new jobs created will compensate for the old ones destroyed, the critics say.

I see these critics as, most likely, profoundly mistaken. This time things are different. That’s because of the general purpose nature of ongoing improvements in the algorithms for automation. Machine learning algorithms that are developed with one set of skills in mind turn out to transfer, reasonably straightforwardly, to other sets of skills as well.

The master algorithm

That argument is spelt out in the recent book “The Master Algorithm” by University of Washington professor of computer science and engineering Pedro Domingos.

[Image: “The Master Algorithm” book cover]

The subtitle of that book refers to a “quest for the ultimate learning machine”. This ultimate learning machine can be contrasted with another universal machine, namely the universal Turing machine:

  • The universal Turing machine accepts inputs and applies a given algorithm to compute corresponding outputs
  • The universal learning machine accepts a set of corresponding input and output data, and makes the best possible attempt at inferring the algorithm that would obtain the outputs from the inputs.

For example, given sets of texts written in English, and matching texts written in French, the universal learning machine would infer an algorithm that will convert English into French. Given sets of biochemical reactions of various drugs on different cancers, the universal learning machine would infer an algorithm to suggest the best treatment for any given cancer.
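
As a minimal sketch of that idea (a toy illustration of my own, not code from Domingos’ book): the snippet below gives a “learner” nothing but input–output pairs generated by a hidden rule, and the learner recovers the rule’s coefficients by gradient descent – data in, program out, the same pattern that far more sophisticated learning algorithms scale up to tasks like translation or treatment recommendation.

```python
# Toy "learning machine": infer the rule behind a set of (input, output) pairs.
# The hidden rule is y = 3*x + 2; the learner sees only the pairs, not the rule.

pairs = [(x, 3 * x + 2) for x in range(-10, 11)]   # training data

a, b = 0.0, 0.0            # the learner's current guess: y = a*x + b
learning_rate = 0.002

for _ in range(5000):
    grad_a = grad_b = 0.0
    for x, y in pairs:
        error = (a * x + b) - y        # gap between guess and observed output
        grad_a += 2 * error * x
        grad_b += 2 * error
    a -= learning_rate * grad_a / len(pairs)
    b -= learning_rate * grad_b / len(pairs)

print(round(a, 3), round(b, 3))        # converges towards 3.0 and 2.0
```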

As Domingos explains, there are currently five different “tribes” within the overall machine learning community. Each tribe has its separate origin, and also its own idea for the starting point of the (future) master algorithm:

  • “Symbolists” have their origin in logic and philosophy; their core algorithm is “inverse deduction”
  • “Connectionists” have their origin in neuroscience; their core algorithm is “back-propagation”
  • “Evolutionaries” have their origin in evolutionary biology; their core algorithm is “genetic programming”
  • “Bayesians” have their origin in statistics; their core algorithm is “probabilistic inference”
  • “Analogizers” have their origin in psychology; their core algorithm is “kernel machines”.

(See slide 6 of this Slideshare presentation. Indeed, take the time to view the full presentation. Better still, read Domingos’ entire book.)

What’s likely to happen over the next decade, or two, is that a single master algorithm will emerge that unifies all the above approaches – and, thereby, delivers great power. It will be similar to the progress made by physics as the fundamental forces of nature have gradually been unified into a single theory.

And as that unification progresses, more and more occupations will be transformed, more quickly than people generally expect. Technological unemployment will rise and rise, as software embodying the master algorithm handles tasks previously thought outside the scope of automation.

Incidentally, Domingos has set out some ambitious goals for what his book will accomplish:

The goal is to do for data science what “Chaos” [by James Gleick] did for complexity theory, or “The Selfish Gene” [by Richard Dawkins] for evolutionary game theory: introduce the essential ideas to a broader audience, in an entertaining and accessible way, and outline the field’s rich history, connections to other fields, and implications.

Now that everyone is using machine learning and big data, and they’re in the media every day, I think there’s a crying need for a book like this. Data science is too important to be left just to us experts! Everyone – citizens, consumers, managers, policymakers – should have a basic understanding of what goes on inside the magic black box that turns data into predictions.

People who comment about the likely impact of automation on employment would do particularly well to educate themselves about the ideas covered by Domingos.

Rise of the robots

There’s a second reason why “this time it’s different” as regards the impact of new waves of automation on the employment market. This factor is the accelerating pace of technological change. As more areas of industry become subject to digitisation, they become, at the same time, subject to automation.

That’s one of the arguments made by perhaps the best writer so far on technological unemployment, Martin Ford. Ford’s recent book “Rise of the Robots: Technology and the Threat of a Jobless Future” builds ably on what previous writers have said.

[Image: “Rise of the Robots” book cover]

Here’s a sample of review comments about Ford’s book:

Lucid, comprehensive and unafraid to grapple fairly with those who dispute Ford’s basic thesis, Rise of the Robots is an indispensable contribution to a long-running argument.
Los Angeles Times

If The Second Machine Age was last year’s tech-economy title of choice, this book may be 2015’s equivalent.
Financial Times, Summer books 2015, Business, Andrew Hill

[Ford’s] a careful and thoughtful writer who relies on ample evidence, clear reasoning, and lucid economic analysis. In other words, it’s entirely possible that he’s right.
Daily Beast

Surveying all the fields now being affected by automation, Ford makes a compelling case that this is an historic disruption—a fundamental shift from most tasks being performed by humans to one where most tasks are done by machines.
Fast Company

Well-researched and disturbingly persuasive.
Financial Times

Martin Ford has thrust himself into the center of the debate over AI, big data, and the future of the economy with a shrewd look at the forces shaping our lives and work. As an entrepreneur pioneering many of the trends he uncovers, he speaks with special credibility, insight, and verve. Business people, policy makers, and professionals of all sorts should read this book right away—before the ‘bots steal their jobs. Ford gives us a roadmap to the future.
—Kenneth Cukier, Data Editor for the Economist and co-author of Big Data: A Revolution That Will Transform How We Live, Work, and Think

Ever since the Luddites, pessimists have believed that technology would destroy jobs. So far they have been wrong. Martin Ford shows with great clarity why today’s automated technology will be much more destructive of jobs than previous technological innovation. This is a book that everyone concerned with the future of work must read.
—Lord Robert Skidelsky, Emeritus Professor of Political Economy at the University of Warwick, co-author of How Much Is Enough?: Money and the Good Life and author of the three-volume biography of John Maynard Keynes

If you’re still not convinced, I recommend that you listen to this audio podcast of a recent event at London’s RSA, addressed by Ford.

I summarise the takeaway message in this picture, taken from one of my Delta Wisdom workshop presentations:

Tech unemployment curves

  • Yes, humans can retrain over time, to learn new skills, in readiness for new occupations when their former employment has been displaced by automation
  • However, the speed of improvement of the capabilities of automation will increasingly exceed that of humans
  • Coupled with the general purpose nature of these capabilities, this means that, conceivably, from some time around 2040, very few humans will be able to find paid work.

A worked example: a site carpenter

During the Big Potatoes debate on Tuesday, I pressed the participants to name an occupation that would definitely be safe from incursion by robots and automation. What jobs, if any, will robots never be able to do?

One suggestion that came back was “site carpenter”. In this thinking, unfinished buildings are too complex, and too difficult for robots to navigate. Robots that try to make their way through these buildings, to tackle carpentry tasks, will likely fall down. Or, assuming they don’t fall down, how will they cope with finding out that the reality in the building often varies sharply from the official specification? These poor robots will try to perform some carpentry task, but will get stymied when items are in different places from where they’re supposed to be. Or have different tolerances. Or alternatives have been used. And so on. Such environments are too messy for robots to compute.

My answer is as follows. Yes, present-day robots currently often do fall down. Critics seem to find this hilarious. But this is pretty similar to the fact that young children often fall down, while learning to walk. Or novice skateboarders often fall down, when unfamiliar with this mode of transport. However, robots will learn fast. One example is shown in this video, of the “Atlas” humanoid robot from Boston Dynamics (now part of Google):

As for robots being able to deal with uncertainty and surprises, I’m frankly struck by the naivety of this question. Of course software can deal with uncertainty. Software calculates courses of action statistically and probabilistically, the whole time. When software encounters information at variance with what it previously expected, it can adjust its planned course of action. Indeed, it can take the same kinds of steps that a human would consider – forming new hypotheses, and, when needed, checking back with management for confirmation.
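
To put some flesh on that claim, here is a hedged toy sketch of my own (not a description of any actual robotics stack): a carpentry robot can hold a probability distribution over where a fixture might be, revise that belief with Bayes’ rule when a sensor reading contradicts the blueprint, and only proceed – or escalate to a human – once its confidence justifies it.

```python
# Toy belief update: the blueprint says the fixture is at position "A", but a
# sensor reading suggests otherwise. Instead of failing, the software revises
# its belief and decides whether to proceed, re-measure, or ask a human.

belief = {"A": 0.80, "B": 0.15, "C": 0.05}   # prior: trust the blueprint, with some doubt

# Hypothetical likelihoods: how probable the observed reading would be
# if the fixture were really at each position (it fits B best, A poorly)
likelihood = {"A": 0.1, "B": 0.7, "C": 0.2}

# Bayes' rule: posterior is proportional to prior times likelihood
unnormalised = {pos: belief[pos] * likelihood[pos] for pos in belief}
total = sum(unnormalised.values())
posterior = {pos: p / total for pos, p in unnormalised.items()}

print(posterior)   # roughly {"A": 0.41, "B": 0.54, "C": 0.05}

best, confidence = max(posterior.items(), key=lambda kv: kv[1])
if confidence > 0.90:
    print(f"Proceed on the assumption the fixture is at {best}")
else:
    print("Take another measurement, or check back with the site manager")
```

In this run the software ends up only 54% sure, so it gathers more information rather than ploughing ahead – exactly the “form a new hypothesis, then seek confirmation” behaviour described above.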

The question is a reminder to me that the software and AI community need to do a much better job of communicating the current capabilities of their field, and the likely improvements ahead.

What does it mean to be human?

For me, the most interesting part of Tuesday’s discussion was when it turned to the following questions:

  • Should these changes be welcomed, rather than feared?
  • What will these forthcoming changes imply for our conception of what it means to be human?

To my mind, technological unemployment will force us to rethink some of the fundamentals of the “protestant work ethic” that permeates society. That ethic has played a decisive positive role for the last few centuries, but that doesn’t mean we should remain under its spell indefinitely.

If we can change our conceptions, and if we can manage the resulting social transition, the outcome could be extremely positive.

Some of these topics were aired at a conference in New York City on 29th September: “The World Summit on Technological Unemployment”, which was run by Jim Clark’s World Technology Network.

Robotic Steel Workers

One of the many speakers at that conference, Scott Santens, has kindly made his slides available, here. Alongside many graphs on the increasing “winner takes all” nature of modern employment (in which productivity increases but median income declines), Santens offers a different way of thinking about how humans should be spending their time:

We are not facing a future without work. We are facing a future without jobs.

There is a huge difference between the two, and we must start seeing the difference, and making the difference more clear to each other.

In his blogpost “Jobs, Work, and Universal Basic Income”, Santens continues the argument as follows:

When you hate what you do as a job, you are definitely getting paid in return for doing it. But when you love what you do as a job or as unpaid work, you’re only able to do it because of somehow earning sufficient income to enable you to do it.

Put another way, extrinsically motivated work is work done before or after an expected payment. It’s an exchange. Intrinsically motivated work is work only made possible by sufficient access to money. It’s a gift.

The difference between these two forms of work cannot be overstated…

Traditionally speaking, most of the work going on around us is only considered work, if one gets paid to do it. Are you a parent? Sorry, that’s not work. Are you in paid childcare? Congratulations, that’s work. Are you an open source programmer? Sorry, that’s not work. Are you a paid software engineer? Congratulations, that’s work…

What would enable this transformation is some variant of a “basic income guarantee” – a concept that is introduced in the slides by Santens, and also in the above-mentioned book by Martin Ford. You can hear Ford discuss this option in his RSA podcast, where he ably handles a large number of questions from the audience.

What I found particularly interesting from that podcast was a comment made by Anthony Painter, the RSA’s Director of Policy and Strategy who chaired the event:

The RSA will be advocating support for Basic Income… in response to Technological Unemployment.

(This comment comes about 2/3 of the way through the podcast.)

To be clear, I recognise that there will be many difficulties in any transition from the present economic situation to one in which a universal basic income applies. That transition is going to be highly challenging to manage. But these problems of transition are far better problems to have than dealing with the consequences of vastly increased unemployment and social alienation.

Life is being redefined

Just in case you’re still tempted to dismiss the above scenarios as some kind of irresponsible fantasy, there’s one more resource you might like to consult. It’s by Janna Q. Anderson, Professor of Communications at Elon University, and is an extended write-up of a presentation I heard her deliver at the World Future 2015 conference in San Francisco this July.

Janna Anderson keynote

You can find Anderson’s article here. It starts as follows:

The Robot Takeover is Already Here

The machines that replace us do not have to have superintelligence to execute a takeover with overwhelming impacts. They must merely extend as they have been, rapidly becoming more and more instrumental in our essential systems.

It’s the Algorithm Age. In the next few years humans in most positions in the world of work will be nearly 100 percent replaced by or partnered with smart software and robots —’black box’ invisible algorithm-driven tools. It is that which we cannot see that we should question, challenge and even fear the most. Algorithms are driving the world. We are information. Everything is code. We are becoming dependent upon and even merging with our machines. Advancing the rights of the individual in this vast, complex network is difficult and crucial.

The article is described as being a “45 minute read”. In turn, it contains numerous links, so you could spend a lot longer following the resulting ideas. In view of the momentous consequences of the trends being discussed, that could prove to be a good use of your time.

By way of summary, I’ll pull out a few sentences from the middle of the article:

One thing is certain: Employment, as it is currently defined, is already extremely unstable and today many of the people who live a life of abundance are not making nearly enough of an effort yet to fully share what they could with those who do not…

It’s not just education that is in need of an overhaul. A primary concern in this future is the reinvention of humans’ own perceptions of human value…

[Another] thing is certain: Life is being redefined.

Who controls the robots?

Despite the occasional certainty in this field (as just listed above, extracted from the article by Janna Anderson), there remains a great deal of uncertainty. I share with my Big Potatoes colleagues the viewpoint that technology does not determine social responses. The question of which future scenario will unfold isn’t just a question of cheer-leading (if you’re an optimist) or cowering (if you’re a pessimist). It’s a question of choice and action.

That’s a theme I’ll be addressing next Sunday, 18th October, at a lunchtime session of the 2015 Battle of Ideas. The session is entitled “Man vs machine: Who controls the robots”.


Here’s how the session is described:

From Metropolis through to recent hit film Ex Machina, concerns about intelligent robots enslaving humanity are a sci-fi staple. Yet recent headlines suggest the reality is catching up with the cultural imagination. The World Economic Forum in Davos earlier this year hosted a serious debate around the Campaign to Stop Killer Robots, organised by the NGO Human Rights Watch to oppose the rise of drones and other examples of lethal autonomous warfare. Moreover, those expressing the most vocal concerns around the march of the robots can hardly be dismissed as Luddites: the Elon-Musk funded and MIT-backed Future of Life Institute sparked significant debate on artificial intelligence (AI) by publishing an open letter signed by many of the world’s leading technologists and calling for robust guidelines on AI research to ‘avoid potential pitfalls’. Stephen Hawking, one of the signatories, has even warned that advancing robotics could ‘spell the end of the human race’.

On the other hand, few technophiles doubt the enormous potential benefits of intelligent robotics: from robot nurses capable of tending to the elderly and sick through to the labour-saving benefits of smart machines performing complex and repetitive tasks. Indeed, radical ‘transhumanists’ openly welcome the possibility of technological singularity, where AI will become so advanced that it can far exceed the limitations of human intelligence and imagination. Yet, despite regular (and invariably overstated) claims that a computer has managed to pass the Turing Test, many remain sceptical about the prospect of a significant power shift between man and machine in the near future…

Why has this aspect of robotic development seemingly caught the imagination of even experts in the field, when even the most remarkable developments still remain relatively modest? Are these concerns about the rise of the robots simply a high-tech twist on Frankenstein’s monster, or do recent breakthroughs in artificial intelligence pose new ethical questions? Is the question more about who builds robots and why, rather than what they can actually do? Does the debate reflect the sheer ambition of technologists in creating smart machines or a deeper philosophical crisis in what it means to be human?

As you can imagine, I’ll be taking serious issue with the above claim, from the session description, that progress with robots will “remain relatively modest”. However, I’ll also be arguing for a strong focus on questions of control.

It’s not just a question of whether it’s humans or robots that end up in control of the planet. There’s a critical preliminary question as to which groupings and systems of humans end up controlling the evolution of robots, software, and automation. Should we leave this control to market mechanisms, aided by investment from the military? Or should we exert a more general human control of this process?

In line with my recent essay “Four political futures: which will you choose?”, I’ll be arguing for a technoprogressive approach to control, rather than a technolibertarian one.

Four futures

I wait with interest to find out how much this viewpoint will be shared by the other speakers at this session.

15 September 2015

A wiser journey to a better Tomorrowland

Peter Drucker quote

Three fine books that I’ve recently had the pleasure to finish reading all underscore, in their own ways, the profound insight expressed in 1970 by management consultant Peter Drucker:

The major questions regarding technology are not technical but human questions.

That insight sits alongside the observation that technology has been an immensely important driver of change in human history. The technologies of agriculture, steam, electricity, medicine, and information, to name only a few, have led to dramatic changes in the key metrics of human civilisation – metrics such as population, travel, consumption, and knowledge.

But the best results of technology typically depend upon changes happening in parallel in human practice. Indeed, new general purpose technology sometimes initially results, not in an increase of productivity, but in an apparent decline.

The productivity paradox

Writing in Forbes earlier this year, in an article about the “current productivity paradox in healthcare”, Roy Smythe makes the following points:

There were two previous slowdowns in productivity that were not anticipated, and caused great consternation – the adoption of electricity and the computer. The issues at hand with both were the protracted time it took to diffuse the technology, the problem of trying to utilize the new technology alongside the pre-existing technology, and the misconception that the new technology should be used in the same context as the older one.

Although the technology needed to electrify manufacturing was available in the early 1890s, it was not fully adopted for about thirty years. Many tried to use the technology alongside or in conjunction with steam-driven engines – creating all manner of work-flow challenges, and it took some time to understand that it was more efficient to use electrical wires and peripheral, smaller electrical motors (dynamos) than to connect centrally-located large dynamos to the drive shafts and pulleys necessary to disperse steam-generated power. The sum of these activities resulted in a significant, and unanticipated lag in productivity in industry between 1890 and 1920…

However, in time, these new GPTs (general purpose technologies) did result in major productivity gains:

The good news, however, is substantial. In the two decades following the adoption of both electricity and the computer, significant acceleration of productivity was enjoyed. The secret was in the ability to change the context (in the case of the dynamo, taking pulleys down for example) assisting in a complete overhaul of the business process and environment, and the spawning of the new processes, tools and adjuncts that capitalized on the GPT.

In other words, the new general purpose technologies yielded the best results, not when humans were trying to follow the same processes as before, but when new processes, organisational models, and culture were adopted. These changes took time to conceive and adopt. Indeed, the changes took not only time but wisdom.

Wachter Kotler Naam

The Digital Doctor

Robert Wachter’s excellent book “The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age” provides a dazzling analysis of the ways in which the computerisation of health records – creating so-called EHRs (Electronic Health Records) – is passing through a similar phase of disappointing accomplishment. EHRs are often associated with new kinds of errors, with additional workload burdens, and with interfering in the all-important human relationship between doctor and patient. They’re far from popular with healthcare professionals.

Wachter believes these problems to be temporary: EHRs will live up to their promise in due course – but only once people can set the hype aside. What’s needed is for the designers of healthcare tech products and systems to:

  • Put a much higher priority on ease of use, on simplifying usage patterns, and on redesigning the overall flow of activity
  • Recognise and deal with the multiple complexities of the world of medicine.

For a good flavour of Wachter’s viewpoint, consider this extract from a New York Times opinion article he wrote in March, “Why Health Care Tech Is Still So Bad”,

Last year, I saw an ad recruiting physicians to a Phoenix-area hospital. It promoted state-of-the-art operating rooms, dazzling radiology equipment and a lovely suburban location. But only one line was printed in bold: “No E.H.R.”

In today’s digital era, a modern hospital deemed the absence of an electronic medical record system to be a premier selling point.

That hospital is not alone…

I interviewed Boeing’s top cockpit designers, who wouldn’t dream of green-lighting a new plane until they had spent thousands of hours watching pilots in simulators and on test flights. This principle of user-centered design is part of aviation’s DNA, yet has been woefully lacking in health care software design.

Our iPhones and their digital brethren have made computerization look easy, which makes our experience with health care technology doubly disappointing. An important step is admitting that there is a problem, toning down the hype, and welcoming thoughtful criticism, rather than branding critics as Luddites.

In my research, I found humility in a surprising place: the headquarters of I.B.M.’s Watson team, the people who built the computer that trounced the “Jeopardy!” champions. I asked the lead engineer of Watson’s health team, Eric Brown, what the equivalent of the “Jeopardy!” victory would be in medicine. I expected him to describe some kind of holographic physician, like the doctor on “Star Trek Voyager,” with Watson serving as the cognitive engine. His answer, however, reflected his deep respect for the unique challenges of health care. “It’ll be when we have a technology that physicians suddenly can’t live without,” he said.

I’m reminded of a principle I included in a long-ago presentation, “Enabling simply great mobile phones” (PDF), from 2004:

It’s easy to make something hard;
It’s hard to make something easy…

Smartphones will sell very well provided they allow users to build on, and do more of, the things that caused users to buy phones in the first place (communication and messaging, fashion and fun, and safety and connection) – and provided they allow users to do these things simply, even though the phones themselves are increasingly complex.

As for smartphones, so also for healthcare technology: the interfaces need to protect users from the innumerable complications that lurk under the surface. The greater the underlying complexity, the greater the importance of smart interfaces.

Again as for smartphones, once good human interfaces have been put in place, the results of new healthcare technology can be enormous. The New York Times article by Wachter contains a reminder of vexed issues within healthcare – issues that technology has the power to solve:

Health care, our most information-intensive industry, is plagued by demonstrably spotty quality, millions of errors and backbreaking costs. We will never make fundamental improvements in our system without the thoughtful use of technology.

Tomorrowland

In a different way, Steven Kotler’s new book also brings human considerations to the forefront. The title of the book is “Tomorrowland: Our Journey from Science Fiction to Science Fact”. It’s full of remarkable human interest stories that go far beyond simple cheer-leading for the potential of technological progress.

I had the pleasure to help introduce Steven at a recent event in Campus London, which was co-organised by London Futurists and FutureSelf. Steven appeared by Skype.

[Image: at the Campus London event]

(photos by Kirsten Zverina)

Ahead of the event, I had hoped to be able to finish reading his book, but because of other commitments I had only managed to read the first 25%. That was already enough to convince me that the book departed from any simple formula of techno-optimism.

In the days after the event, I was drawn back to Kotler’s book time and again, as I kept discovering new depth in its stories. Kotler brings a journalist’s perspective to the hopes, fears, struggles, and (yes) remarkable accomplishments of many technology pioneers. For most of these stories, the eventual outcome is still far from clear. Topics covered included:

  • The difficulties in trying to save the Florida Everglades from environmental collapse
  • Highlights from the long saga of people trying to invent flying cars (you can read that excerpt online here)
  • Difficulties and opportunities with different kinds of nuclear energy
  • The potential for technology to provide quick access to the profound feelings of transcendence reported from so-called “out of the body” and “near death experiences”
  • Some unexpected issues with the business of sperm donation
  • Different ways to enable blind people to see
  • Some missed turnings in the possibilities to use psychedelic drugs more widely
  • Options to prevent bio-terrorists from developing pathogens that are targeted at particular individuals.

There’s a video preview for the book:

The preview is a bit breathless for my liking, but the book as a whole provides some wonderfully rounded explorations. The marvellous potential of new technology should, indeed, inspire awe. But that potential won’t be attained without some very clear thinking.

Apex

The third of the disparate trio of books I want to mention is, itself, the final instalment in a trilogy of fast-paced futurist fiction by Ramez Naam.

In “Apex: Connect”, Naam brings to a climactic culmination the myriad chains of human and transhuman drama that started in “Nexus: Install” and ratcheted in “Crux: Upgrade”.

[Image: covers of the Ramez Naam trilogy]

Having been enthralled by the first two books in this trilogy, I was nervous about starting to listen to the third, since I realised it would likely absorb me for most of the next few days. I was right – but the absorption was worth it.

There’s plenty of technology in this trilogy, which is set several decades in the future: enhanced bodies, enhanced minds, enhanced communications, enhanced artificial intelligence. Critically, there is plenty of human frailty too: people with cognitive biases, painful past experiences, unbalanced perspectives, undue loyalty to doubtful causes. Merely the fact of more powerful technology doesn’t automatically make people kinder as well as stronger, or wiser as well as smarter.

Another reason I like Apex so much is that it embraces radical uncertainty. Will superintelligence be a force that enhances humanity, or destroys it? Are regulations for new technology an instrument of oppression, or a means to guide people to more trustworthy outcomes? Should backdoors be built into security mechanisms? How should humanity treat artificial general intelligence, so as to avoid an AGI reaching unpleasant conclusions?

To my mind, too many commentators (in the real world) have pat answers to these questions. They’re too ready to assert that the facts of the matter are clear, and that the path to a better Tomorrowland is evident. But the drama that unfolds in Apex highlights rich ambiguities. These ambiguities require careful thought and wide appreciation. They also require human focus.

Postscript: H+Pedia

In between my other projects, I’m trying to assemble some of the best thinking on the pros and cons of key futurist questions. My idea is to use the new site H+Pedia for that purpose.

[Image: H+Pedia logo]

As a starter, see the page on Transhumanism, where I’ve tried to assemble the most important lines of argument for and against taking a transhumanist stance towards the future. The page includes some common lines of criticism of transhumanism, and points out:

  • Where these criticisms miss the mark
  • Where these criticisms have substance – so that transhumanists ought to pay attention.

In some cases, I offer clear-cut conclusions. But in other cases, the balance of the argument is ambiguous. The future is far from being set in stone.

I’ll welcome constructive contributions to H+Pedia from anyone interested in the future of humanity.

Second postscript:

It’s now less than three weeks to the Anticipating 2040 event, where many speakers will be touching on the themes outlined above. Here’s a 90 second preview of what attendees can expect.

