dw2

21 May 2015

Anticipating 2040: The triple A, triple h+ vision

Abundance Access Action

The following vision arises from discussions with colleagues in the Transhumanist Party.

Abundance

Abundance – sustainable abundance – is just around the corner – provided we humans collectively get our act together.

We have within our grasp a sustainable abundance of renewable energy, material goods, health, longevity, intelligence, creativity, freedom, and positive experience.

This can be attained within one human generation, by wisely accelerating the green technology revolution – including stem cell therapies, 3D printing, prosthetics, robotics, nanotechnology, genetic engineering, synthetic biology, neuro-enhancement, artificial intelligence, and supercomputing.

Access

The rich fruits of technology – abundance – can and should be provided for all, not just for those who manage to rise to the top of the present-day social struggle.

A bold reorganisation of society can and should take place in parallel with the green technology revolution – so that everyone can freely access the education, healthcare, and everything else needed to flourish as a full member of society.

Action

To channel the energies of industry, business, finance, universities, and the media, for a richly positive outcome within the next generation, swift action is needed:

  • Widespread education on the opportunities – and risks – of new technology
  • Regulations and checks to counter short-termist action by incumbent vested interests
  • The celebration and enablement of proactive innovation for the common good
  • The promotion of scientific, rational, evidence-based methods for taking decisions, rather than ideologies
  • Transformation of our democracy so that governance benefits from the wisdom of all of society, and serves the genuine needs of everyone, rather than perpetuating the existing establishment.

Transhumanism 2040

Within one generation – 25 years, that is, by 2040 – human society can and should be radically transformed.

This next step of conscious evolution is called transhumanism. Transhumanists see, and welcome, the opportunity to intelligently redesign humanity, drawing wisely on the best resources of existing humanity.

The Transhumanist Party is the party of abundance, access, and action. It is the party with a programme to transcend (overcome) our ingrained human limitations – limitations of animal biology, primate psychology, antiquated philosophy, and 20th-century social structures.

Transhumanism 2020

As education spreads about the potential for a transhumanist future of abundance, access, and action – and as tangible transhumanist projects are seen to be having an increasingly positive political impact – more and more people will start to identify themselves as transhumanists.

This growing movement will have consequences around the world. For example, in the general election in 2020 in the UK, there may well be, in every constituency, either a candidate from the Transhumanist Party, or a candidate from one of the other parties who openly and proudly identifies as a transhumanist.

The political landscape will never be the same again.

Call to action

To offer support to the Transhumanist Party in the UK (regardless of where you are based in the world), you can join the party by clicking the following PayPal button:

Join now

Membership costs £25 per annum. Members will be invited to participate in internal party discussions of our roadmap.

For information about the Transhumanist Party in other parts of the world, see http://transhumanistpartyglobal.org/.

For a worldwide transhumanist network without an overt political angle, consider joining Humanity+.

To discuss the politics of the future, without any exclusive link to the Transhumanist Party, consider participating in one of the Transpolitica projects – for example, the project to publish the book “Politics 2.0”.

Anticipating the Transhumanist Party roadmap to 2040

Footnote: Look out for more news of a conference to be held in London during Autumn (*), entitled “Anticipating 2040: The Transhumanist Party roadmap”, featuring speakers, debates, open plenaries, and closed party sessions.

If anyone would like to speak at this event, please get in touch.

Anticipating 2040
(*) Possible date is 3-4 October 2015, though planning is presently at a preliminary stage.

 

10 May 2015

When the future of smartphones was in doubt

It’s hard to believe it now. But ten years ago, the future of smartphones was in doubt.

At that time, I wrote these words:

Smartphones in 2005 are roughly where the Internet was in 1995. In 1995, there were, worldwide, around 20-40 million users of the Internet. That’s broadly the same number of users of smartphones there are in the world today. In 1995, people were debating the real value of Internet usage. Was it simply an indulgent plaything for highly technical users, or would it have lasting wider attraction? In 2005, there’s a similar debate about smartphones. Will smartphones remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

That was the opening paragraph in an essay which the Internet site Archive.org has preserved. The original location for the essay, the Symbian corporate website, has long since been retired, having been absorbed inside Nokia infrastructure in 2009 (and, perhaps, being absorbed in turn into Microsoft in 2014).

Symbian Way Back

The entire essay can be found here, warts and all. That essay was the first in a monthly series known as “David Wood Insight” which extended from September 2005 to September 2006. (The entire set still exists on Archive.org – and, for convenience, I’ve made a copy here.)

Ten years later, it seems to me that wearable computers in 2015 are roughly where smartphones were in 2005 (and where the Internet was in 1995). There’s considerable scepticism about their future. Will they remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

Some commentators look at today’s wearable devices, such as Google Glass and Apple Watch, and express disappointment. There are many ways these devices can be criticised. They lack style. They lack “must have” functionality. Their usability leaves a lot to be desired. Battery life is too short. And so on.

But, like smartphones before them – and like the world-wide web ten years earlier – they’re going to get much, much better as time passes. Positive feedback cycles will ensure that happens.

I share the view of Augmented Reality analyst Ori Inbar, who wrote the following a few months ago in an updated version of his “Smart Glasses Market Report”:

When contemplating the evolution of technology in the context of the evolution of humanity, augmented reality (AR) is inevitable.

Consider the innovation cycles of computing from mainframes, to personal computers, to mobile computing, to wearables: It was driven by our need for computers to get smaller, better, and cheaper. Wearables are exactly that – mini computers on track to shrink and disappear on our bodies. In addition, there is a fundamental human desire for larger and sharper displays – we want to see and feel the world at a deeper level. These two trends will be resolved with Augmented Reality; AR extends our natural senses and will become humans’ primary interface for interaction with the world.

If the adoption curve of mobile phones is to repeat itself with glasses – within 10 years, over 1 billion humans will be “wearing.”
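
As a rough illustration of what that kind of adoption curve looks like in numbers, here is a minimal Python sketch of a logistic (“S-shaped”) uptake. The ceiling, starting user base, and growth rate are placeholder assumptions of my own, chosen only to show the shape of the curve; they are not figures from Ori’s report:

    import math

    def smart_glasses_users(year, start_year=2015, ceiling=3e9,
                            initial_users=5e6, growth_rate=0.6):
        """Illustrative logistic adoption curve.

        ceiling, initial_users and growth_rate are placeholder assumptions
        chosen purely to sketch the shape of an S-curve; they are not data
        from the Smart Glasses Market Report.
        """
        t = year - start_year
        a = (ceiling - initial_users) / initial_users
        return ceiling / (1 + a * math.exp(-growth_rate * t))

    for year in (2015, 2020, 2025, 2030):
        print(year, f"{smart_glasses_users(year) / 1e9:.2f} billion users")

With these (assumed) numbers, the curve crawls along for several years and then passes the one-billion-user mark roughly a decade out, which is the general shape the report describes.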

The report is packed with insight – I fully recommend it. For example, here’s Ori’s depiction of four waves of adoption of smart glasses:

Smart Glasses Adoption

(For more info about Augmented Reality and smart glasses, readers may be interested in the forthcoming Augmented World Expo, held 8-10 June at the Santa Clara Convention Centre in Silicon Valley.)

What about ten more years into the future?

All being well, here’s what I might be writing some time around 2025, foreseeing the growing adoption of yet another wave of computers.

If 1995-2005 saw the growth of desktop and laptop computers and the world wide web, 2005-2015 saw the growing ubiquity of smartphones, and 2015-2025 will see the triumph of wearable computers and augmented reality, then 2025-2035 is likely to see the increasingly widespread usage of nanobots (nano-computers) that operate inside our bodies.

The focus of computer innovation and usage will move from portables to mobiles to wearables to insideables.

And the killer app of these embedded nanobots will be internal human enhancement:

  • Biological rejuvenation
  • Body and brain repair
  • Body and brain augmentation.

By 2025, these applications will likely be in an early, rudimentary state. They’ll be buggy, irritating, and probably expensive. With some justification, critics will be asking: Will nanobots remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

28 April 2015

Why just small fries? Why no big potatoes?

Filed under: innovation, politics, Transpolitica, vision — David Wood @ 3:12 pm

Last night I joined a gathering known as “Big Potatoes”, for informal discussion over dinner at the De Santis restaurant in London’s Old Street.

The potatoes in question weren’t on the menu. They were the potential big innovations that politicians ought to be contemplating.

The Big Potatoes group has a tag-line: “The London Manifesto for Innovation”.

As their website states,

The London Manifesto for Innovation is a contribution to improving the climate for innovation globally.

The group first formed in the run-up to the previous UK general election (2010). I blogged about them at that time, here, when I listed the principles from their manifesto:

  • We should “think big” about the potential of innovation, since there’s a great deal that innovation can accomplish;
  • Rather than “small is beautiful” we should keep in mind the slogan “scale is beautiful”;
  • We should seek more than just a continuation of the “post-war legacy of innovation” – that’s only the start;
  • Breakthrough innovations are driven by new technology – so we should prioritise the enablement of new technology;
  • Innovation is hard work and an uphill struggle – so we need to give it our full support;
  • Innovation arises from pure scientific research as well as from applied research – both are needed;
  • Rather than seeking to avoid risk or even to manage risk, we have to be ready to confront risk;
  • Great innovation needs great leaders of innovation, to make it happen;
  • Instead of trusting regulations, we should be ready to trust people;
  • Markets, sticks, carrots and nudges are no substitute for what innovation itself can accomplish.

That was 2010. What has caused the group to re-form now, in 2015, is the question:

Why is so much of the campaigning for the 2015 election preoccupied with small fries, when it could – and should – be concentrating on big potatoes?

Last night’s gathering was facilitated by three of the writers of the 2010 big potato manifesto: Nico Macdonald, James Woudhuysen, and Martyn Perks. The Chatham House Rule that was in place prevents me from quoting participants directly. But the discussion stirred up plenty of thoughts in my own mind, which I’ll share now.

The biggest potato

I share the view expressed by renowned physicist Freeman Dyson, in the book “Infinite in all directions” from his 1985 Gifford lectures:

Technology is… the mother of civilizations, of arts, and of sciences

Technology has given rise to enormous progress in civilization, arts and sciences over recent centuries. New technology is poised to have even bigger impacts on civilization in the next 10-20 years. So why aren’t politicians paying more attention to it?

MIT professor Andrew McAfee takes up the same theme, in an article published in October last year:

History teaches us that nothing changes the world like technology

McAfee spells out a “before” and “after” analysis. Here’s the “before”:

For thousands of years, until the middle of the 18th century, there were only glacial rates of population growth, economic expansion, and social development.

And the “after”:

Then an industrial revolution happened, centred around James Watt’s improved steam engine, and humanity’s trajectory bent sharply and permanently upward

One further quote from McAfee’s article rams home the conclusion:

Great wars and empires, despots and democrats, the insights of science and the revelations of religion – none of them transformed lives and civilizations as much as a few practical inventions

Inventions ahead

In principle, many of the grave challenges facing society over the next ten years could be solved by “a few practical inventions”:

  • Students complain, with some justification, about the costs of attending university. But technology can enable better MOOCs – Massive Online Open Courses – that can deliver high quality lectures, removing significant parts of the ongoing costs of running universities; free access to such courses can do a lot to help everyone re-skill, as new occupational challenges arise
  • With one million people losing their lives to traffic accidents worldwide every year, mainly caused by human driver error, we should welcome the accelerated introduction of self-driving cars
  • Medical costs could be reduced by greater application of the principles of preventive maintenance (“a stitch in time saves nine”), particularly through rejuvenation biotechnology and healthier diets
  • A sustained green tech new deal should push society away from dependency on fuels that emit dangerous amounts of greenhouse gases, resulting in lifestyles that are positive for the environment as well as positive for humanity
  • The growing costs of governmental bureaucracy itself could be reduced by whole-heartedly embracing improved information technology and lean automation.

Society has already seen remarkable changes in the last 10-20 years as a result of rapid progress in fields such as electronics, computers, digitisation, and automation. In each case, the description “revolution” is appropriate. But even these revolutions pale in significance to the changes that will, potentially, arise in the next 10-20 years from extraordinary developments in healthcare, brain sciences, atomically precise manufacturing, 3D printing, distributed production of renewable energy, artificial intelligence, and improved knowledge management.

Indeed, the next 10-20 years look set to witness four profound convergences:

  • Between artificial intelligence and human intelligence – with next generation systems increasingly embodying so-called “deep learning”, “hybrid intelligence”, and even “artificial emotional intelligence”
  • Between machine and human – with smart technology evolving from “mobile” to “wearable” and then to “insideable”, and with the emergence of exoskeletons and other cyborg technology
  • Between software and biology – with programming moving from silicon (semiconductor) to carbon (DNA and beyond), with the expansion of synthetic biology, and with the application of genetic engineering
  • Between virtual and physical – with the prevalence of augmented reality vision systems, augmented reality education via new MOOCs (massive open online courses), cryptocurrencies that remove the need for centralised audit authorities, and lots more.

To take just one example: Wired UK has just reported a claim by Brad Perkins, chief medical officer at Human Longevity Inc., that

A “supercharged” approach to human genome research could see as many health breakthroughs made in the next decade as in the previous century

The “supercharging” involves taking advantage of four converging trends:

“I don’t have a pill” to boost human lifespan, Perkins admitted on stage at WIRED Health 2015. But he has perhaps the next best thing — data, and the means to make sense of it. Based in San Diego, Human Longevity is fixed on using genome data and analytics to develop new ways to fight age-related diseases.

Perkins says the opportunity for humanity — and Human Longevity — is the result of the convergence of four trends: the reduction in the cost of genome sequencing (from $100m per genome in 2000, to just over $1,000 in 2014); the vast improvement in computational power; the development of large-scale machine learning techniques; and the wider movement of health care systems towards ‘value-based’ models. Together these trends are making it easier than ever to analyse human genomes at scale.
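
As a back-of-the-envelope check on the first of those four trends, here is a small Python calculation of the rate of decline implied by a fall from $100m per genome in 2000 to roughly $1,000 in 2014. The arithmetic is my own, applied to the figures quoted above, and it assumes a steady exponential decline, which is a simplification:

    import math

    # Figures quoted in the Wired UK article
    cost_2000 = 100_000_000   # dollars per genome in 2000
    cost_2014 = 1_000         # dollars per genome in 2014
    years = 2014 - 2000

    # Compound annual factor by which the cost fell, assuming a steady decline
    annual_factor = (cost_2000 / cost_2014) ** (1 / years)

    # Time taken for the cost to halve, under the same assumption
    halving_time_months = 12 * math.log(2) / math.log(annual_factor)

    print(f"Cost fell by a factor of roughly {annual_factor:.1f} per year")
    print(f"Implied halving time: roughly {halving_time_months:.0f} months")

On those figures, the cost of sequencing halved roughly every ten months over that period, considerably faster than the 18-24 month halving cadence usually associated with Moore’s Law.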

Small fries

Whilst entrepreneurs and technologists are foreseeing comprehensive solutions to age-related diseases – as well as the rise of smart automation that could free almost every member of society from the need to toil in employment they dislike – what are politicians obsessing about?

Instead of the opportunities of tomorrow, politicians are caught up in the challenges of yesteryear and today. Like a short-sighted business management team obsessed by the next few quarterly financial results but losing sight of the longer term, these politicians are putting all their effort into policies for incremental changes to present-day metrics – metrics such as tax thresholds, the gross domestic product, policing levels, the degree of privatisation in the health service, and the rate of flow of migrants from Eastern Europe into the United Kingdom.

It’s like the restricted vision which car manufacturing pioneer Henry Ford is said to have complained about:

If I had asked people what they wanted, they would have said faster horses.

This is light years away from leadership. It’s no wonder that electors are deeply dissatisfied.

The role of politics

To be clear, I’m not asking for politicians to dictate to entrepreneurs and technologists which products they should be creating. That’s not the role of politicians.

However, politicians should be ensuring that the broad social environment provides as much support as possible to:

  • The speedy, reliable development of those technologies which have the potential to improve our lives so fully
  • The distribution of the benefits of these technologies to all members of society, in a way that preserves social cohesion without infringing individual liberties
  • Monitoring for risks of accidental outcomes from these technologies that would have disastrous unintended consequences.

In this way, politicians help to address the human angle to technology. It’s as stated by management guru Peter Drucker in his 1986 book “Technology, Management, and Society”:

We are becoming aware that the major questions regarding technology are not technical but human questions.

Indeed, as the Transpolitica manifesto emphasises:

The speed and direction of technological adoption can be strongly influenced by social and psychological factors, by legislation, by subsidies, and by the provision or restriction of public funding.

Political action can impact all these factors, either for better or for worse.

The manifesto goes on to set out its objectives:

Transpolitica wishes to engage with politicians of all parties to increase the likelihood of an attractive, equitable, sustainable, progressive future. The policies we recommend are designed:

  • To elevate the thinking of politicians and other leaders, away from being dominated by the raucous issues of the present, to addressing the larger possibilities of the near future
  • To draw attention to technological opportunities, map out attractive roads ahead, and address the obstacles which are preventing us from fulfilling our cosmic potential.

Specific big potatoes that are missing from the discussion

If our political leaders truly were attuned to the possibilities of disruptive technological change, here’s a selection of the topics I believe would find much greater prominence in political discussion:

  1. How to accelerate lower-cost high quality continuous access to educational material, such as MOOCs, that will prepare people for the radically different future that lies ahead
  2. How to accelerate the development of personal genome healthcare, stem cell therapies, rejuvenation biotech, and other regenerative medicine, in order to enable much healthier people with much lower ongoing healthcare costs
  3. How to ensure that a green tech new deal succeeds, rather than continues to fall short of expectations (as it has been doing for the last 5-6 years)
  4. How to identify and accelerate the new industries where the UK can be playing a leading role over the next 5-10 years
  5. How to construct a new social contract – perhaps involving universal basic income – in order to cope with the increased technological unemployment which is likely to arise from improved automation
  6. How society should be intelligently assessing any new existential risks that emerging technologies may unintentionally trigger
  7. How to transition the network of bodies that operate international governance to a new status that is fit for the growing challenges of the coming decades (rather than perpetuating the inertia from the times of their foundations)
  8. How technology can involve more people – and more wisdom and insight from more people – in the collective decision-making that passes for political processes
  9. How to create new goals for society that embody a much better understanding of human happiness, human potential, and human flourishing, rather than the narrow economic criteria that currently dominate decisions
  10. How to prepare everyone for the next leaps forward in human consciousness which will be enabled by explorations of both inner and outer space.

Why small fries?

But the biggest question of all isn’t anything I’ve just listed. It’s this:

  • Why are politicians still stuck in present-day small fries, rather than focusing on the big potatoes?

I’ll be interested in answers to that question from readers. In the meantime, here are my own initial thoughts:

  • The power of inertia – politicians, like the rest of us, tend to keep doing what they’re used to doing
  • Too few politicians have any deep personal insight (from their professional background) into the promise (and perils) of disruptive technology
  • The lack of a specific vision for how to make progress on these Big Potato questions
  • The lack of clamour from the electorate as a whole for answers on these Big Potato questions.

If this is true, we must expect it will take some time for public pressure to grow, leading politicians in due course to pay attention to these topics.

It will be like the growth in capability of any given exponential technology. At first, development takes a long time. It seems as if nothing much is changing. But finally, tipping points are reached. At that stage, it becomes imperative to act quickly. And at that stage, politicians (and their advisors) will be looking around urgently for ready-made solutions they can adapt from think tanks. So we should be ready.
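
The point about exponential growth looking deceptively flat before its tipping point can be made with a toy calculation. The following minimal Python sketch uses entirely hypothetical numbers: a capability that starts at 0.01% of the level needed to disrupt an industry, and doubles every year:

    # Entirely hypothetical numbers, for illustration only
    capability = 0.0001   # fraction of the "disruption threshold"

    for year in range(16):
        marker = "  <- tipping point" if capability >= 1.0 else ""
        print(f"year {year:2d}: {capability:8.2%} of threshold{marker}")
        capability *= 2

For well over a decade, almost nothing seems to be happening; then, within a couple of years, the threshold is crossed, which is exactly when ready-made policy thinking will suddenly be in demand.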

11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” track down and shoot the AGI researcher played by actor Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie, it doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines over the period 1978 to 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000-word essay, Industrial Society and Its Future.

Kaczynski’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article, “As technology swamps our lives, the next Unabombers are waiting for their moment”:

In 2011 a new Mexican group called the Individualists Tending toward the Wild were founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. In 2011, they detonated a bomb at a prominent nano-technology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposition to vaccinations or blood transfusions, or by denying female education).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies for the drawbacks of religion. And it applies, too, for the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140 character-long tweet, we sometimes have to trim the corners of proper spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic license may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novelist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts that are upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space has the potential of stirring up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Criticisms

Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack their own expertise in AGI. They may be experts in black hole physics (Hawking), or in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insights into the likely course of development of AGI. Therefore we shouldn’t pay particular attention to what they say.

A second criticism is that it’s premature to worry about the advent of AGI. AGI is still situated far into the future. In this view, as stated by Demis Hassabis, founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans. AGI will be a tool to humans, under human control, rather than having its own autonomy. This view is represented by this tweet by science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means the end of the matter, and they give no reason to shut down the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all three of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people read the book. It manages to bring a great deal of serious argument to the table, in the course of entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

Elon Musk, one of the people who have raised the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. The same is true of Stephen Hawking and Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider, as just one example, Stuart Russell, who is a computer-science professor at the University of California, Berkeley, and co-author of the 1152-page best-selling textbook “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry – among them Steve Wozniak and Bill Gates.

Wozniak put his worries as follows – in an interview for the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede he has shown remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the comment about timescales. The first is to point out that Demis Hassabis himself sees no reason for any complacency, on account of the potential for AGI to require “many decades” before it becomes a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

N <= 5: No way
5 < N <= 10: Small possibility
10 < N <= 20: > 50%.

In other words, in his view, there’s a greater than 50% chance that artificial general human-level intelligence will be solved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under Chatham House rules). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked for the dates when they were 90% sure human-level AGI would be achieved, 50% sure, and 10% sure. The average answers were:

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.
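
To turn those three data points into a rough probability for any particular year, one simple approach is to interpolate linearly between the surveyed percentiles. The interpolation scheme in this short Python sketch is an illustrative assumption of mine, not something proposed in Superintelligence:

    # Average survey answers reported in Bostrom's Superintelligence:
    # (year, cumulative probability that human-level AGI has been achieved)
    SURVEY_POINTS = [(2022, 0.10), (2040, 0.50), (2075, 0.90)]

    def probability_of_agi_by(year):
        """Crude estimate of P(human-level AGI by `year`), interpolating
        linearly between the surveyed percentiles (my own simplification)."""
        if year <= SURVEY_POINTS[0][0]:
            return SURVEY_POINTS[0][1]    # no attempt to extrapolate below 10%
        if year >= SURVEY_POINTS[-1][0]:
            return SURVEY_POINTS[-1][1]   # ...or above 90%
        for (y0, p0), (y1, p1) in zip(SURVEY_POINTS, SURVEY_POINTS[1:]):
            if y0 <= year <= y1:
                return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

    print(f"P(human-level AGI by 2025): ~{probability_of_agi_by(2025):.0%}")
    print(f"P(human-level AGI by 2050): ~{probability_of_agi_by(2050):.0%}")

On that crude reading, the chance of human-level AGI arriving within the next decade comes out somewhere between 10% and 20%.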

If we respect what this survey says, there’s at least a 10% chance of breakthrough developments within the next ten years. Therefore it’s no real surprise that Hassabis says

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

Third, I’ll give my own reasons for why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” that galvanise society into supporting AGI research much more fully (especially when AGI starts producing big benefits in healthcare diagnosis).

Answering the criticisms: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

For example, physics professor and science populariser Michio Kaku speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “Control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside of those we explicitly anticipated; the response of the software in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere of roughly 40 thousand kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.

Most of the countries where Columbus pitched the idea of his westward journey turned him down – believing instead the figures for the larger circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian ocean around the tip of Africa), the Spanish king and queen agreed to support his adventure. Fortunately for Columbus, a large continent existed en route to Asia, allowing him landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe. Transmission of European diseases compounded the misery.

It may be the same with AGI. Rational observers may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that the Cambridge University CSER (Centre for the Study of Existential Risk) have four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!

11 March 2015

My vision for Humanity+, 2015-2017

Filed under: Humanity Plus, vision — David Wood @ 1:40 pm

The most important task for the worldwide Humanity+ organisation, over the next three years, is to dramatically raise the calibre of public discussion about transhumanism and radical futurism.

As an indication of the status quo of the public discussion about transhumanism, type the words “Transhumanists are” into a Google search bar. Google charmingly suggests the following auto-completions:

  • Transhumanists are stupid
  • Transhumanists are evil
  • Transhumanists are crazy.

Transhumanists Are

These sentiments are at stark variance with what I believe to be the case: transhumanists have an insight that deserves much wider support – an insight that, if acted on, will lead to vast improvements in the quality of life of people all over the planet.

That insight – known as the “central meme of transhumanism” – is that we can and should improve the human condition through technology. Rather than continuing to be diminished by limitations inherited from our evolutionary heritage – limitations in our physiology, our psychology, our philosophy, and our social structures – we can and should take conscious control of the next stage of human evolution. We can and should move from a long phase of Darwinian natural selection to a phase of accelerated intelligent design.

Transhumanists boldly assert, in the FAQ maintained on the Humanity+ website, that

Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.

Transhumanism is the viewpoint that human society should embrace, wisely, thoughtfully, and compassionately, the radical transformational potential of technology. Recent and forthcoming breakthroughs in technology fields such as nanotechnology, synthetic biology, renewable energy, regenerative medicine, brain sciences, big data analytics, robotics, and artificial intelligence can:

  • Enable humans to transcend (overcome) many of the deeply debilitating, oppressive, and hazardous aspects of our lives
  • Allow everyone a much wider range of personal autonomy, choice, experience, and fulfilment
  • Facilitate dramatically improved international relations, social harmony, and a sustainable new cooperation with nature and the environment.

Different opinions

But as I said, most people see things differently. They doubt that technology will change human nature, any time soon. Or, inasmuch as technology might change core aspects of human existence, they fear these changes will be for the worst. Or, if they think technology is likely to improve human experience, they see no need for any “ism” – any philosophy or movement – that promotes such an outcome; instead, they think it will be sufficient to leave technologists and entrepreneurs to get on with the task, unencumbered by philosophical baggage.

I’m very happy to enter discussion on all these points with informed critics of transhumanism – with people who are open to constructive dialogue. That’s a dialogue I wish to promote. That dialogue is, as I see things, a core part of the mission of the Humanity+ organisation.

All too often, however, critics of transhumanism (including the people noticed by Google as thinking that transhumanists are “stupid”, “evil”, and “crazy”) have only a hazy understanding of transhumanism. Worse, all too often the same people have only a hazy idea of the radical transformative potential of accelerating technology. To the extent that these people (who probably form the vast majority of the population) are futurists at all, they are “slow-paced” futurists rather than fast-paced futurists (to use a couple of terms I’ve written about previously). They’re largely oblivious to the far-reaching nature of changes that may take place in the next few decades.

To an extent, we transhumanists and other radical futurists share part of the blame for this situation. In our discussions of the positive transformational potential of technology, we’ve sometimes been collectively guilty of:

  • Presenting these technological developments as more-or-less inevitable, and as happening according to an inviolable timescale (linked over-closely to Moore’s Law)
  • Emphasising only the positive implications of these changes, and giving scant attention to potential negative implications
  • Taking it for granted that these positive benefits will become accessible to everyone, regardless of income, without there being any risk of them primarily benefiting the people who are already powerful and rich.

In other words, our collective advocacy of transhumanism has sometimes suffered from science fiction hype, wishful thinking, and political naivety. The popular negative appraisal of transhumanism stems, in part, from a reaction against these missteps.

A better dialogue

That’s what I believe the Humanity+ organisation can fix. Humanity+ can lead the way in encouraging a wiser, more credible, and more compelling assessment of transhumanism and radical futurism. This will involve multi-dimensional communications – short form and long form, written and video, intellectual and artistic, prose and poetry, serious and humorous, scientific and literary, real-time and recorded, face-to-face and online. As this library of material grows, it will be less and less possible for critics to radically misrepresent the intent and vision of transhumanists. Neutral observers will quickly call them out: you say such-and-such, but the clear evidence is that transhumanists have a much better understanding than that.

As time progresses, more and more people will understand the central messages of transhumanism. They’ll identify with these messages, viewing them as sensible, reasonable, and praiseworthy. And they’ll put more pressure on leaders of all sectors of society to prioritise changes which will accelerate the attainment of the positive evolution of humanity.

Practical steps

The outgoing board of directors of Humanity+ have already sketched a high-level strategic plan which will, in effect, put the organisation in a much better shape to carry out the role I’ve described above. I was part of the team that drew up that plan, and I’m now asking the set of Full Members of the organisation to choose me as one of their preferred candidates for the four elected vacancies on the board.

The strategic plan can be described in terms of five components: stability, speed, scale, vision, and engagement:

  • Stability: Recent changes in the constitution of Humanity+ have been designed to ensure greater stability in the format and membership of the board of directors. Rather than elections being held on an annual basis, the board now operates with a three-year cycle. For each three-year period, five of the directors are appointed to their roles by the outgoing board, and four more are elected by a vote by all Full Members. This hybrid structure seems to me to provide a strong basis for the other changes which I will describe next
  • Speed: For the last few years, Humanity+ has shown some aspects of being a bureaucratic organisation, held back from its true potential by a mix of inertia and unclear (diffuse) vision. By adopting modern principles of lean organisations and exponential organisations – learning from principles of successful business startups – the organisation can, and should, move more quickly. I offer my own experience in getting things done quickly – experience which I have honed over 25 years in the mobile computing and smartphone industry
  • Scale: To have a bigger impact, Humanity+ needs to be able to make better use of its wide network of potential supporters. In part, this involves hiring a Development Director, to improve the financial footing of the organisation. In part, this involves revitalising our structure of chapters, affiliates, and volunteer effort. Finally, this also involves modernising our use of information technology. I expect each of the new board members to play important roles in improving these structures
  • Vision: Perhaps the single most important energiser of action is to have a clear, inspiring, stretch goal – a so-called “massively transformational purpose”. My own personal vision is “transhumanism for all” – something I have spelt out in more detail in my online declaration of interest in being elected to continue my role on the board. In terms of a vision for Humanity+, I offer “dramatically raise the calibre of public discussion about transhumanism and radical futurism” (though I’m open to re-wording). That is, I offer the vision that I’ve described in the opening part of this article
  • Engagement: The public discussion about transhumanism has recently been heating up. Transhumanist ideas are appearing more and more often in popular magazines, including Time, Newsweek, and Bloomberg Markets (as I covered in a recent blogpost). Significant credit is due here to the high-energy work of the recently formed Transhumanist Party, led by Zoltan Istvan. The headline in a recent article in The Leftist Review put it as follows: “The age of transhumanist politics has begun”. As that article goes on to say, “transhumanist politics has momentous growth potential but with uncertain outcomes. The coming years will probably see a dialogue between humanism and transhumanism in — and about — most crucial fields of human endeavor, with strong political implications”. Humanity+ cannot stand aside from this engagement. Over the next few years, our engagement needs to continue to expand – not just in the worlds of science and technology, but also in the worlds of art, economics, and (last but not least) politics. One reason I recently founded the Transpolitica think-tank was to accelerate exactly that kind of dialogue. I’ll be delighted to position Humanity+ as being at the heart of that dialogue, rather than standing at the periphery.

A resilient, long-term contributor

I’ve recently passed the landmark of having organised 100 London Futurists events. As I covered in a previous blogpost, that series of meetings has extended for seven years (March 2008 to March 2015). I mention this as an example of the way I am able to work:

  • Long-term commitment
  • Regular incremental improvements
  • Success via building a collaborative team (including volunteers and regular audience members)
  • Hands-on facilitation and leadership.

That’s the kind of working discipline that I wish to continue to apply on the Humanity+ board.

The endorsements framework on LinkedIn is far from being a watertight reputation management system, but the set of endorsements that my professional colleagues have kindly provided for me surely gives at least some indication of my positive qualities.

For Humanity+ Full Members wishing to check out my personal history and philosophy in more detail, one option is to dip into my book “Smartphones and beyond: lessons from the remarkable rise and fall of Symbian”. Other options are to leaf through the eclectic set of articles on my personal blog (a couple of representative examples are “A muscular new kid on the block” and “Towards inner Humanity+”), and to view the videos on the Delta Wisdom and London Futurists channels on YouTube.

For transhumanists (old and new) who are currently not Full Members of Humanity+, you can find more details here about how to join the organisation. The election runs until midnight PST on 31st March. People who become Full Members up to 24 hours before the end of the election period will be added to the set of electors.


10 March 2015

100 not out: 7 years of London Futurists

When my mouse skimmed across the page of the London Futurists meetup site a few days ago, it briefly triggered a pop-up display that caught my eye. The display summarised my own activities within London Futurists. “Been to 100 Meetups” was the phrase that made me pause. That’s a lot of organising, I thought.

That figure of 100 doesn’t quite tell the full story. The events that I’ve organised under the London Futurists umbrella, roughly once or twice a month, are part of a longer series that go all the way back to the 15th of March 2008. In those days, I used the UK Humanity+ group in Facebook to publicise these events (along with some postings in blogs such as Extrobritannia). I discovered the marvels of Meetup in 2009, and adopted the name “London Futurists” from that time.

Browsing the history of these events in Facebook’s archive, over the seven years from March 2008 to the present day, I see there have been periods of relative activity and periods of relative quiet:

  • 10 events in 2008, 13 in 2009, and 11 in 2010
  • a period of relative quiet, 2011-2012, when more of my time was taken up by projects at my then employer, Accenture
  • 21 events in 2013, and another 21 in 2014
  • 6 events already in 2015.

This long series of events has evolved as time has progressed:

  • Initially they were free to attend, but for the last few years, I’ve charged a £5 entrance fee, to cover the room hire costs
  • We’ve added occasional Hangout-on-Air video events, to complement the in-real-life meetups
  • More recently, we’ve videoed the events, and made the recordings available afterwards.

For example, here’s the video of our most recent event: The winning of the carbon war, featuring speaker Jeremy Leggett. (Note: turn down your volume before listening, as the audio isn’t great on this occasion.)

Another important change over the years is that the set of regular and occasional attendees has grown into a fine, well-informed audience, who reliably ask speakers a probing and illuminating set of questions. If I think about the factors that make these meetups successful, the audience deserves significant credit.

But rather than looking backwards, I prefer to look forwards. As was said of me in a recent profile article in E&T, “David Wood: why the future matters”,

Wood’s contribution to the phenomenon of smart, connected mobile devices has earned him plenty of recognition… While others with a similar track record might consider their mid-50s to be the time to start growing wine or spending afternoons on the golf course, Wood thinks his “next 25 years will take that same vision and give it a twist. I now look more broadly at how technology can help all of us to become smarter and more mobile”.

Thankfully, mainstream media have recently been carrying more and more articles about radical futurist topics that would, until only recently, have been regarded as fringe and irresponsible. These are topics that have regularly been addressed during London Futurists events over the last seven years. To take just one example, consider the idea that technology may soon provide the ability to radically extend healthy human lifespan – perhaps indefinitely:

  • The cover of Time for February 12th displayed a baby, with the accompanying text: This baby could live to be 142 years old. Despatches from the frontiers of longevity
  • The cover of Newsweek on March 5th proclaimed the message Never say die: billionaires, science, and immortality
  • The cover for Bloomberg Markets for April will bear the headline Google wants you to live forever

It’s worth reiterating the quote which starts the Bloomberg Markets article – a quote from Bill Maris, the president and managing director of Google Ventures:

If you ask me today, is it possible to live to be 500? The answer is yes.

Alongside articles on particular transhumanist and radical futurist themes – such as healthy life-extension, superhuman artificial intelligence, and enhanced mental well-being – there has been a recent flurry of general assessments of the growing importance of the transhumanist philosophy. For example, note the article “The age of transhumanist politics has begun” from The Leftist Review a few days ago. Here’s a brief extract:

According to political scientist and sociologist Roland Benedikter, research scholar at the University of California at Santa Barbara, “transhumanist” politics has momentous growth potential but with uncertain outcomes. The coming years will probably see a dialogue between humanism and transhumanism in — and about — most crucial fields of human endeavor, with strong political implications that will challenge, and could change the traditional concepts, identities and strategies of Left and Right.

The age of transhumanist politics may well have begun, but it has a long way to run. And as Benedikter sagely comments, although there is momentous growth potential, the outcome remains uncertain. That’s why the next item in the London Futurists series – the one which will be the 101st meetup in that series – is on the theme “Anticipating tomorrow’s politics”. You can find more details here:

This London Futurists event marks two developments in the political landscape:

  • The launch of the book “Anticipating tomorrow’s politics”
  • The launch of the Transhumanist Party in the UK.

The speakers at this event, Amon Twyman and David Wood, will be addressing the following questions:

  • How should politics change, so that the positive potential of technology can be safely harnessed to most fully improve human society?
  • What are the topics that politicians generally tend to ignore, but which deserve much more attention?
  • How should futurists and transhumanists regard the political process?
  • Which emerging political movements are most likely to catalyse these needed changes?

All being well, a video of that event will be posted online shortly afterwards, for those unable to attend in person. But for those who attend, there will be plenty of opportunity to contribute to the real-time discussion.

Footnote: The UK Humanity+ events were themselves preceded by a series organised by “Estropico”, which stretches back at least as far as 2003. (A fuller history of transhumanism in the UK is being assembled as part of the background briefing material for the Transhumanist Party.)

15 February 2015

Ten years of quantified self

Filed under: books, healthcare — David Wood @ 12:02 am

Ten years. Actually, 539 weeks. I’ve been recording my weight every morning since 23 October 2004, adding a new data point to my chart every weekend.
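As a quick check of that arithmetic, here’s a minimal Python sketch using the two dates mentioned in this post:

```python
from datetime import date

# Dates taken from this post: the first weigh-in, and the latest one described below.
first_weigh_in = date(2004, 10, 23)
latest = date(2015, 2, 14)

elapsed_days = (latest - first_weigh_in).days
print(elapsed_days, elapsed_days // 7)
# About 3,766 days, i.e. 538 elapsed weeks - which gives 539 weekly data points
# if both the first and the latest weekend are counted.
```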

[Chart: 10 years of Quantified Self]

I’ve been recording my weight ever since I read that people who monitor their weight on a regular basis are more likely to avoid it ballooning upwards. The instant feedback allows me to make timely adjustments to my personal health regime. With ten years of experience under my (varyingly-sized) belt, I’m strongly inclined to continue the experiment.

The above chart started life on my Psion Series 5mx PDA. Week after week, I added data, and watched as the chart expanded. Eventually, the graph hit the limits of what could be displayed on a single screen on the S5mx (width = 480 pixels), so I had to split the chart into two. And then three. Finally, after a number of hardware failures in my stock of S5mx devices, I transferred the data into an Excel spreadsheet on my laptop several months ago. Among other advantages, it once again lets me see the entire picture.
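For anyone tempted to keep a similar log, here’s a minimal Python sketch of that “see the entire picture” step. The file name and column layout are assumptions for illustration, not the format of my actual spreadsheet:

```python
import csv
from datetime import datetime

import matplotlib.pyplot as plt

# Hypothetical export of the weight log: one row per weekend,
# with columns "date" (e.g. 2004-10-23) and "kg".
dates, weights = [], []
with open("weight_log.csv") as f:
    for row in csv.DictReader(f):
        dates.append(datetime.strptime(row["date"], "%Y-%m-%d"))
        weights.append(float(row["kg"]))

plt.plot(dates, weights)
plt.ylabel("Weight (kg)")
plt.title("Weekly weigh-ins, 2004 onwards")
plt.show()
```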

This morning, 14th Feb 2015, I saw the scales dip down to a point I had last reached in September 2006. This result seems to confirm the effectiveness of my latest dietary regime – which I’ve been following since July. Over these seven months, I’ve shrunk from a decidedly unhealthy (and unsightly) 97 kg down to 81 kg.

In terms of the BMI metric (Body Mass Index), that’s a reduction from 31.2 – officially “obese” – down to 26.4. 26.4 is still “marginally overweight”, since, for men, the top end of the BMI scale for a “healthy weight for adults” is 24.9. With my height, that would mean a weight of 77 kg. So there’s still a small journey for me to travel. But I’m happy to celebrate this incremental improvement!
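For anyone who wants to check that arithmetic: BMI is simply weight in kilograms divided by the square of height in metres. Here’s a small Python sketch; the height figure is an illustrative approximation rather than an exact measurement, chosen so the results land close to the numbers above (small differences are just rounding):

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight in kg divided by height in metres, squared."""
    return weight_kg / height_m ** 2

height = 1.76  # approximate height in metres, for illustration

print(round(bmi(97, height), 1))   # about 31 - classified as obese (30 or more)
print(round(bmi(81, height), 1))   # about 26 - still above the 24.9 threshold
print(round(24.9 * height ** 2))   # about 77 kg - top of the "healthy" range
```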

The NHS page on BMI issues this sobering advice:

BMI of 30 or more: a BMI above 30 is classified as obese. Being obese puts you at a raised risk of health problems such as heart disease, stroke and type 2 diabetes. Losing weight will bring significant health improvements…

BMI score of 25 or more: your BMI is above the ideal range and this score means you may be overweight. This means that you’re heavier than is healthy for someone of your height. Excess weight can put you at increased risk of heart disease, stroke and type 2 diabetes. It’s time to take action…

As the full chart of my weight over the last ten years shows, I’ve had three major attempts at “action” to achieve a healthier body mass.

The first: For a while in 2004 and 2005, I restricted myself to two Herbalife meal preparations a day – even when I was travelling.

Later, in 2011, I ran across the book by Gary Taubes, “Why We Get Fat: And What to Do About It”, which made a great deal of sense to me. Taubes emphasises that some kinds of calories are more damaging to health than others. Specifically, carbohydrates, such as wheat, change the body metabolism to make it retain more weight. I also read “Wheat belly” by William Davis. Here’s an excerpt from the description of that book:

Renowned cardiologist William Davis explains how eliminating wheat from our diets can prevent fat storage, shrink unsightly bulges and reverse myriad health problems.

Every day we eat food products made of wheat. As a result millions of people experience some form of adverse health effect, ranging from minor rashes and high blood sugar to the unattractive stomach bulges that preventative cardiologist William Davis calls ‘wheat bellies’. According to Davis, that fat has nothing to do with gluttony, sloth or too much butter: it’s down to the whole grain food products so many people eat for breakfast, lunch and dinner.

After witnessing over 2,000 patients regain their health after giving up wheat, Davis reached the disturbing conclusion that wheat is the single largest contributor to the nationwide obesity epidemic – and its elimination is key to dramatic weight loss and optimal health.

In Wheat Belly, Davis exposes the harmful effects of what is actually a product of genetic tinkering being sold to the public as ‘wheat’ and provides readers with a user-friendly, step-by-step plan to navigate a new, wheat-free lifestyle. Benefits include: substantial weight loss, correction of cholesterol abnormalities, relief from arthritis, mood benefits and prevention of heart disease.

As a result, I cut back on carbohydrates – and was pleased to see my weight plummet once again. For a while – until I re-acquired many of my former carb-enjoying habits, whoops.

That takes me to regime number three. This time, I’ve followed the more recent trend known as “5+2”. According to this idea, people can eat normally for, say, five days in the week, and then consume a much reduced number of calories on the other two days (known as “fasting days”). My initial worry about this approach was that I wasn’t sure I’d eat sensible foods on the two low-calorie days.

That’s when I ran across the meal preparations of the LighterLife company. These include soups, shakes, savoury meals, porridge, and bars. Each of these meals is just 150-200 calories. LighterLife suggest that people eat four of these meals on their low-calorie days. The preparations include sufficient protein, fibre, and 100% of the recommended daily intake of key vitamins and minerals.
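To put some rough numbers on a low-calorie day, here’s a back-of-the-envelope sketch; the 2,500 kcal figure is the commonly quoted guideline intake for men, used here purely for comparison rather than as a measurement of my own normal intake:

```python
# Four meal preparations per low-calorie day, each 150-200 kcal.
low_day_min = 4 * 150   # 600 kcal
low_day_max = 4 * 200   # 800 kcal

guideline_day = 2500    # commonly quoted guideline intake for men, for comparison

print(low_day_min, low_day_max)       # 600 800
print(guideline_day - low_day_max)    # at least ~1,700 kcal below the guideline
```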

To be clear, I am not a medical doctor, and I urge anyone who is considering adopting a diet to obtain their own medical advice. I also recognise that different people have different metabolisms, so a diet that works for one person won’t necessarily work for someone else. However, I can share my own personal experience, in case it inspires others to do their own research:

  • Instead of 5+2, I generally follow 3+4. That is, I have four low-calorie days each week, along with three other days in which I tend to indulge myself (except that, on these other days, I still try to avoid consuming too many carbs, such as wheat, bread, rice, and potatoes)
  • On the low-calorie days, I generally eat around 11.30am, 2.30pm, 5.30pm, and 8.30pm
  • If I’m working at home, I’ll include soups, a savoury meal, and shakes; if I’m away from home, I’ll eat three (or four) different bars, which I pack into my backpack at the beginning of the day
  • On the low-calorie days, it’s important to drink as well as to eat, but I avoid any drinks with calories in them. In practice, I find herbal teas very effective at dulling any sense of hunger
  • In addition to eating less, I continue to do a lot of walking (e.g. between Waterloo Station and meeting locations in Central London), as well as other forms of exercise (like on the golf driving range or golf course).

Note: I know that BMI is far from being a complete representation of personal healthiness. However, I view it as a good starting point.

To round off my recommendations for diet-related books that I have particularly enjoyed reading, I’ll add “Mindless eating” by Brian Wansink to the two I mentioned earlier. I listened to the Audible version of that book. It’s hilarious, but thought-provoking, and the research it describes seems very well founded:

Every day, we each make around 200 decisions about eating. But studies have shown that 90% of these decisions are made without any conscious choice. Dr Brian Wansink lays bare the facts about our true eating habits to show that awareness of our patterns can allow us to lose weight effectively and without serious changes to our lives. Dr Wansink’s revelations include:

  • Food mistakes we all make in restaurants, supermarkets and at home
  • How we are manipulated by brand, appearance and parental habits more than price and our choices
  • Our emotional relationship with food and how we can overcome it to revitalise our diets.

Forget calorie counting and starving yourself and learn the truth about why we overeat in this fascinating, innovative guide.


I’ll finish by thanking my friends, family, and colleagues for their gentle and thoughtful encouragement, over the years, for me to keep an eye on my body mass, and on the general goodness of what I eat. “Health is the first wealth”.

8 February 2015

A tale of two cities – and of two speeds

Filed under: Barcelona, Cambridge, futurist, MWC, Singularity University — David Wood @ 12:30 am

The two cities I have in mind are both Spanish: Barcelona in the north of the country, and Seville in the south. They’re each outstanding cities.


I’ll come back to these two cities in a moment. But first, a word about two speeds – two speeds of futurism – slow-paced futurism and fast-paced futurism.

As someone who’s had the word “Futurist” on my personal business card since early 2009, I’m inspired to see more and more people taking the subject of futurism seriously. There’s a widespread awareness, nowadays, that it’s important to analyse future scenarios. If we spend time thinking about the likely developments of current trends, we’ll be better prepared to try to respond to these trends. Instead of being shocked when disruptive forces burst through from being “under the radar” to having major impacts on lifestyles and society, we’ll have been acting to influence the outcome – pushing hard to increase the likelihood of positive changes, and to decrease the likelihood of negative changes.

But it’s my observation that, in many of the meetings I attend and the discussions I observe, the futurism on display is timid and conservative. Well-meaning speakers contemplate a future, ten or twenty years ahead, that is 95% the same as today, but with, say, 5% changes. In these modestly innovative future scenarios, we might have computers that are faster than today’s, screens that are more ubiquitous than today’s, and some jobs will have been displaced by robots and automation. But human nature will be the same in the future as in the past, and the kinds of thing people spend their time doing will be more-or-less the same as they have been doing for the last ten or twenty years too (except, perhaps, faster).

In contrast, I foresee that, within just a couple of decades, it will be very clear to everyone that momentous changes in human nature and human society are at hand (if they have not already taken place):

  • Robots and other forms of automation will be on the point of displacing perhaps 90% of human employees from the workforce – with “creative” jobs and “managerial” jobs being every bit as much at risk as “muscle” jobs
  • Enhanced suites of medical therapies will be poised to enable decades of healthy life extension, and an associated “longevity dividend” financial bonanza (since costs of healthcare will have plummeted)
  • Systems that exist both inside and outside of the human brain will be ready to dramatically increase multiple dimensions of our intelligence – including emotional and spiritual intelligence as well as rational intelligence
  • Virtual reality and augmented reality will be every bit as vivid and compelling as “natural reality”
  • Artificial general intelligence software will be providing convincing new answers to long-standing unsolved questions of science and philosophy
  • Cryonic suspension of people on the point of death will have become pervasive, since the credibility of the possibility of reanimation by future science will have grown much higher.

So whilst I cautiously welcome the slow-paced futurists, I wish more people would realise the immensity of the transformations ahead, and become fast-paced futurists.

One group of people who do have a strong appreciation of the scale of potential future changes are the faculty of Singularity University. In November, I took part in the Singularity University Summit Europe held at the DeLaMar Theater in Amsterdam.


I was already familiar with a lot of the material covered by the different presenters, but – wow:

  • The information was synthesised in a way that was compelling, entertaining, highly credible, and thought-provoking
  • The different sessions dovetailed extremely well together
  • The speakers clearly knew their material, and were comfortable providing good answers to the various questions raised by audience members (including offbeat and tangential questions).

People in the audience told me later that their jaws had been on the floor for nearly the entire two days.

My own reaction was: I should find ways of enabling lots more people to attend future similar Summits. The experience would likely transform them from being slow-paced futurists to fast-paced futurists.

Happily, many Singularity University faculty members are returning to Europe, for the next Summit in the series. This will be taking place from 12-14 March in Seville. You can find the details here.


Sessions at SU Summit Spain will include:

  • Intro to SU and Exponentials – Rob Nail
  • Artificial Intelligence – Neil Jacobstein
  • Robotics – Rob Nail
  • Networks and Computing: Autonomous Cars – Brad Templeton
  • Breakthrough in Digital Biology – Raymond McCauley
  • Future of Medicine – Daniel Kraft
  • Digital Manufacturing – Scott Summit and Nigel Ackland
  • Energy Breakthroughs – Ramez Naam
  • SU Labs – Sandy Miller
  • Global Grand Challenges – Nick Haan
  • Security – David Roberts
  • Institutional Innovation and Scaling from the edge – Salim Ismail

And did I mention that the event is taking place in the fabulous history-laden city of Seville?

As it happens, Summit Spain will be taking place just ten days after another large event that’s also happening in Spain: Mobile World Congress (MWC), held in Barcelona, from 2-5 March. Many readers will know that I’ve been at every MWC since 2002, and I’ve found them to be extremely useful networking events. In my 2014 book Smartphones and beyond, I told the story of my first visit to MWC – which was called “3GSM” at that time, and which was held that year in Cannes, across the border from Spain into France. Unexpected management changes at Symbian, the pioneering smartphone OS company, meant I suddenly had to step into a whole series of press interviews scheduled for that week:

Never having attended 3GSM before, I had a rapid learning curve. Symbian’s PR advisors gave me some impromptu “media training”, to lessen the chance of me fluffing my lines, unwittingly breaching confidentiality restrictions, or otherwise saying something I would subsequently regret. My diary was soon full of appointments to talk to journalists from all over Europe, in the cramped meeting rooms and coffee bars in Cannes. The evenings were bristling with networking events in the yachts which clustered around the dock areas. Happily, when the week was over, there was nothing to regret. Indeed, Symbian’s various PR departments invited me back for numerous interviews at every subsequent 3GSM. In later years, 3GSM changed its name to MWC (Mobile World Congress), and outgrew Cannes, so it relocated instead to Barcelona. I have attended every year since that first sudden immersion in 2002.

But all good things come to an end (so it is said). In recent years, I’ve found MWC to be less compelling. Smartphones, once dramatically different from one year to the next, have slowed down their curve of change. The wellspring of innovation is moving to other industries.

After MWC 2014, I had the privilege to chair a discussion of industry experts in Cambridge, co-hosted by Cambridge Wireless and Accenture, regarding both the highs and lows (the “fiesta” and the “siesta”) of the Barcelona event.

In that panel, the expressions of “siesta” (snooze) were consistently more heartfelt than those of “fiesta” (feast).

When the time came, a few weeks back, for me to decide whether to follow my habit of the last dozen years and book my presence in Barcelona for 2015, I found my heart was no longer inspired by that prospect. I’ve decided not to go.

I’m sure a great deal of important business will happen during these hectic few days at MWC, including some ground-breaking developments in fields such as wearable computing and augmented reality. But that will be slow ground-breaking – whereas it’s my judgement that the world needs, and is headed towards, fast ground-breaking. And Seville, ten days later, is the place to get early warning of these changes. So that’s where I’m headed.

If you’re interested in a preview taster of that early warning – a ninety-minute anticipation of these three days – then please consider attending an event happening at Google’s Campus London on the morning of Thursday 12th February. This preview meeting is free to attend, though attendees need to pre-register, here. The preview on Thursday will:

  • Introduce the rich resources of the Singularity University (SU) community
  • Highlight some of the most dramatic of the technological changes that can be expected in the next few years
  • Answer your questions about SU Summit Spain
  • Conduct a lottery among all attendees, with the winner receiving a free admission ticket to SU Summit Spain.

The speakers I’ll be introducing at the preview will be:

  • Russell Buckley: Mentor, angel investor in 40+ startups, Government advisor, fundraising specialist, and Singularitarian
  • Nick Chrissos: Collaboration CTO, Cisco
  • Luis Rey: Director of the Singularity University Summit Spain.

The preview will start at 9am with tea/coffee and light breakfast. Presentations will start at 9.10am.

Footnote: If you’re interested in how the wireless industry can respond to the threat of being bypassed (or even steamrollered) by innovation arising elsewhere, you should consider registering for the 7th Future of Wireless International Conference, being held by CW (Cambridge Wireless) on 23-24 June. That conference has the timely theme “Wireless is dead. Long live wireless!” I’ll be one of the keynote speakers at the event. Here’s the description of what I’ll be talking about there (taken from the event website):

Wireless disrupted.

Wireless has spent two decades disrupting numerous other industries. But the boot is now on the other foot. This talk anticipates the powerful forthcoming trends that threaten to steamroller the wireless industry, with the well-spring of innovation moving beyond its grasp. These trends include technologies, such as artificial intelligence, next generation robotics, implantable computing, and cyber-security; they also include dramatic social transformations. The talk ends by suggesting some steps to enable a judo-like response to these threats.

9 January 2015

A transhumanist political manifesto?

I’d welcome some feedback. The looming advent of the UK General Election in May this year has focused some minds – minds that are unhappy that too much of the political debate ignores some very important topics: the likelihood of large societal changes occurring in the next 5-10 years, arising from breakthroughs in multiple fields of technology. Politicians, for various reasons, aren’t giving much mind-share to these impending changes.

The outcome of this unhappiness about the current political discussion is the idea of launching a “transhumanist political manifesto”. The current draft of this document is below. (It’s a live document, and may have changed by the time you read this blogpost. You can access the live draft here.)

All being well, a later version of this document will be used as part of a checklist, in the next few months, to publicly evaluate various political parties for their degree of “future-readiness”.

The feedback I’m interested in is this. Especially if you consider yourself a transhumanist (or have some sympathies for that philosophy), what’s your reaction to the points picked out in this draft manifesto? Would you change the content? Or the prioritisation? And to what extent might the content be applicable in other countries, rather than just in the UK?

Preamble: Anticipating tomorrow’s humanity

Prepared by UK Humanity+ (UKH+)

Transhumanism is the viewpoint that human society should embrace, wisely, thoughtfully, and compassionately, the radical transformational potential of technology.

The speed and direction of technological adoption can be strongly influenced by social and psychological factors, by legislation, by subsidies, and by the provision or restriction of public funding. Political action can impact all these factors, either for better or for worse.

UKH+ wishes to engage with politicians of all parties to increase the likelihood of an attractive, equitable, sustainable, progressive future. The policies we recommend are designed:

  • To elevate the thinking of politicians and other leaders, away from being dominated by the raucous issues of the present, to addressing the larger possibilities of the near future
  • To draw attention to technological opportunities, map out attractive roads ahead, and address the obstacles which are preventing us from fulfilling our cosmic potential.

Headlines

UKH+ calls upon politicians of all parties to define and support:

  • Regenerative projects to take full advantage of accelerating technology.

More specifically, we call for:

  • Economic and personal liberation via the longevity dividend
  • An inclusive new social contract in the light of technological disruption
  • A proactionary regulatory system to fast-track innovative breakthroughs
  • Reform of democratic processes with new digital tools
  • Education transformed in readiness for a radically different future
  • A progressive transhumanist rights agenda
  • An affirmative new perspective on existential risks.

Details

1. Regenerative projects to take full advantage of accelerating technology

Anticipating profound change

Accelerating technological progress has the potential to transform lives in the next ten years more profoundly than in any preceding ten year period in history.

Radical technological changes are coming sooner than people think, in technology fields such as nanotechnology, synthetic biology, renewable energy, regenerative medicine, brain sciences, big data analytics, robotics, and artificial intelligence. Together, these technologies will change society in unexpected ways, disrupting familiar patterns of industry, lifestyle, and thinking.

These changes include the potential for exceptional benefits for both the individual and society, as well as the potential for tremendous risk.

Current policymakers rarely address the implications of convergent disruptive technologies. This means they react to each new disruption with surprise, after it appears, rather than anticipating it with informed policy and strategy.

Politicians of all parties urgently need to:

  • Think through the consequences of these changes in advance
  • Take part in a wide public discussion and exploration of these forthcoming changes
  • Adjust public policy in order to favour positive outcomes
  • Support bold regenerative projects to take full advantage of accelerating technology – projects with the uplifting vision and scale of the 1960s Apollo moonshot program.

These bold regenerative projects can galvanise huge collaborative endeavours, by providing a new sense of profound purpose and shared destiny.

Benefits from profound change

The outcomes of these regenerative projects can:

  • Enable humans to transcend (overcome) many of the deeply debilitating, oppressive, and hazardous aspects of our lives
  • Allow everyone a much wider range of personal autonomy, choice, experience, and fulfilment
  • Facilitate dramatically improved international relations, social harmony, and a sustainable new cooperation with nature and the environment.

Managing the regenerative projects

These projects can be funded and resourced:

  • By tapping into a well-spring of positive motivation and discretionary effort which these projects will unleash
  • By benefiting from the longevity dividend, in which less budget will be consumed by end-of-life healthcare
  • From smarter forms of international cooperation, which should reduce costs from efforts duplicated between different countries
  • By progressively diverting funding from military budgets to regenerative budgets
  • By eliminating the loopholes which allow multinational companies to shuffle revenues between countries and thereby avoid paying due taxes
  • From savings from applying principles of automation and Information Technology wherever applicable.

The policies in this manifesto are designed to expedite these positive transformations whilst avoiding adverse consequences.

2. Economic and personal liberation via the longevity dividend

Given adequate resources, human longevity could be enormously extended using technologies which are already broadly understood. Prolonging healthy lifespan would clearly benefit the very large number of citizens concerned, and it would also benefit society by preserving and deepening the experience and wisdom available to solve our various social problems.

Transhumanists aspire to indefinite healthy life extension. Rejuvenation therapies based on regenerative medicine can and should be developed and progressively made available to all citizens. The resulting “longevity dividend” will have large social and economic benefits, as well as personal ones. We do not believe it would impose a dangerous pressure on resources. We call for a bold new moonshot-scale project with the specific goal of ameliorating the degenerative aging process and significantly extending healthy human lifespan.

A practical suggestion is that 20% of the public research funding that currently goes to specific diseases should be reassigned, instead, to researching solutions to aging. In line with the analysis of e.g. SENS, the “ending aging” angle is likely to provide promising lines of research and solutions to many diseases, such as senile dementia (including Alzheimer’s), cancer, heart disease, motor neurone disease, respiratory diseases, and stroke.

3. An inclusive new social contract in the light of technological disruption

Emerging technologies – in particular automation – are likely to impose significant strains on the current economic model. It is far from clear how this will play out, or what the best strategies for response will be. Society and its leaders need to consider and discuss these changes, and draw up plans to deal with different outcome scenarios.

Transhumanists anticipate that accelerating technological unemployment may cause growing social disruption and increased social inequality and alienation. A new social contract is needed, involving appropriate social, educational, and economic support for those who are left with no viable option of ‘earning a living’ due to unprecedented technological change.

A form of “negative income tax” (as proposed by Milton Friedman) or a “basic income guarantee” could provide the basis for this new social contract. Some observers feel it may take a moonshot-scale program to fully design and implement these changes in our social welfare systems. However, political parties around the world have developed promising models, backed up by significant research, for how universal basic income might be implemented in a cost-effective manner. UKH+ urges action based on the best of these insights.

A practical suggestion is to repeat the 1970s Canadian “Mincome” guaranteed income experiment in several different locations, over longer periods than the initial experiments, and to monitor the outcome. Further references can be found here and here.

4. A proactionary regulatory system to fast-track innovative breakthroughs

The so-called “precautionary principle” preferred by some risk-averse policy makers is often self-defeating: seeking to avoid all risks can itself pose many risks. The precautionary principle frequently hinders intelligent innovation. The “proactionary principle” is a better stance, in which risks are assessed and managed in a balanced way, rather than always avoided. Any bias in favour of the status quo should be challenged, with an eye on better futures that can be created.

Transhumanists observe that many potentially revolutionary therapies are under research, but current drug development has become increasingly slow and expensive (as summarised by “Eroom’s law”). Translational research is doing badly, in part due to current drug regulations which are increasingly out of step with public opinion, actual usage, and technology.

In practical terms, UKH+ recommends:

  • Streamlining regulatory approval for new medicines, in line with recommendations by e.g. CASMI in the UK
  • Removing any arbitrary legal distinction between “therapies for ill-health” and “therapies for enhancement”.

We also urge revisions in patent and copyright laws to discourage counter-productive hoarding of intellectual property:

  • Reduce the time periods of validity of patents in certain industry areas
  • Make it much less likely that companies can be granted “obvious” patents that give them a chokehold on subsequent development in an industry area
  • Explore the feasibility of alternative and complementary schemes for facilitating open innovation, such as reputation economies or prize funds.

5. Reform of democratic processes with new digital tools

The underpinnings of a prosperous, democratic, open society include digital rights, trusted, safe identities, and the ability to communicate freely without fear of recrimination or persecution. Transhumanists wish to:

  • Accelerate the development and deployment of tools ensuring personal privacy and improved cyber-security
  • Extend governmental open data initiatives
  • Champion the adoption of “Democracy 2.0” online digital tools to improve knowledge-sharing, fact-checking, and collective decision-making
  • Increase the usefulness and effectiveness of online petitions
  • Restrict the undue influence which finance can have over the electoral and legislative process.

Government policy should be based on evidence rather than ideology:

  • Insights from the study of cognitive biases should be incorporated into decision-making processes
  • New committees and organisations should be designed according to debiasing knowledge, so they are less likely to suffer groupthink
  • AI systems should be increasingly used to support smart decision making.

All laws restricting free-speech based on the concept of “personal offence” should be revoked. Anyone accepted into a country, whether as a visitor or as an immigrant, must confirm that they fully accept the principle of free speech, and renounce any use of legal or extralegal means to silence those who offend their religion or worldview.

6. Education transformed in readiness for a radically different future

A greater proportion of time spent in education and training (whether formal or informal) should be future-focused, exploring

  • Which future scenarios are technically feasible, and which are fantasies
  • Which future scenarios are desirable, once their “future shock” has been accepted
  • What actions can be taken to accelerate the desirable outcomes, and avoid the undesirable ones
  • How to achieve an interdisciplinary understanding of future scenarios
  • How resilience can be promoted, rather than society just having a focus on efficiency
  • How creativity can be promoted, rather than society just having a focus on consumption
  • The intelligent management of risk.

Lifelong training and education should become the norm, with people of all ages learning new skills as the need becomes apparent in the new age of automation. Educational curricula need to be able to adapt rapidly.

We would mandate that each university and educational establishment makes an increasing proportion of its material freely accessible online every year.

Education should take greater advantage of MOOCs, and of the possibility for people to have their knowledge certified without enrolling in a traditional college. MOOCs can be usefully complemented with location-based learning labs (“makerspaces”) that absorb some of the empty space in existing libraries, preserving the “open knowledge” role of libraries and expanding it into “open education and learning”. UKH+ anticipates a time when, apart from lab work, the whole of tertiary education will be delivered online.

7. A progressive transhumanist rights agenda

Transhumanists wish to:

  • Explore the gradual extension of selected human rights to sentient beings, such as primates, that demonstrate a relevant mental life, and also to advanced AIs that need such rights to fulfil their purpose
  • Hasten the adoption of synthetic (in-vitro) meat, and the abolition of cruelty to farm animals.

Transhumanists champion the concept of morphological freedom:

  • The rights of all people, including sexual and gender minorities, to bodily self-determination
  • Free access to modern reproductive technologies, including genetic screening to improve the quality of life, for all prospective parents
  • Making it easier for people, if they so choose, to enter a state of cryonic suspension as their bodies come close to clinical death.

Transhumanists envision support for a radical future for consciousness:

  • Enhanced mental cooperation as minds become more interconnected via brain-to-computer interfaces and other foreseeable brain/mind technologies, making it possible to share qualia rapidly.

8. An affirmative new perspective on existential risks

Some emerging technologies – in particular artificial general intelligence and nanotechnology – are so powerful as to produce changes more dramatic than anything since the agricultural revolution. The outcomes could be extraordinarily positive for humanity, or they could threaten our very existence.

Existing technologies already pose potential catastrophic risks to the well-being of humanity:

  • The risk persists of accidental nuclear warfare
  • Runaway climate change might be triggered by unchecked emissions of greenhouse gases that push global temperatures beyond sudden tipping points.

There are further complications from relatively easy access by alienated, destructive individuals to weapons of mass destruction, including dirty bombs and synthetic pathogens.

Without being complacent, transhumanists believe that sustained human innovation can mitigate all these risks, once they are fully understood. We call for significant resources to be applied to working out how to ensure that the outcomes are positive.

The wise management of the full set of existential risks is likely to involve innovations in technology (e.g. the development and production of cleaner energy sources), economics (e.g. a carbon tax to redress the market failure of unpenalized negative externalities), and politics (e.g. the collaborative creation and enforcement of binding treaties). The end outcome will be the successful harnessing of technologies, both old and new, for the radical enhancement of humanity.

21 October 2014

An exponential approach to peace?

While the information-based world is now moving exponentially, our organizational structures are still very linear (especially larger and older ones)…

We’ve learned how to scale technology… Now it’s time to scale the organization: strategy, structure, processes, culture, KPIs, people and systems


The above messages come from a punchy set of slides that have just been posted on SlideShare by Yuri van Geest. Yuri is the co-author of the recently published book “Exponential Organizations: Why new organizations are ten times better, faster, and cheaper than yours (and what to do about it)”, and the slides serve as an introduction to the ideas in the book. Yuri is also the Dutch Ambassador of the Singularity University (SU), and the Managing Director of the SU Summit Europe which is taking place in the middle of next month in Amsterdam.


Yuri’s slides contain many impressive examples of the rapid decline, over the last few years, in the cost of functionality across different technology sectors – for instance, industrial robots and DNA sequencing.
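The common shape behind charts like these is a roughly constant halving time for cost. As a minimal illustration (the starting cost and halving time below are made-up numbers, not figures taken from the slides):

```python
def projected_cost(initial_cost, halving_time_years, years_ahead):
    """Cost after a period of steady exponential decline."""
    return initial_cost * 0.5 ** (years_ahead / halving_time_years)

# Illustrative figures only - not taken from the slides.
for year in range(0, 11, 2):
    print(year, round(projected_cost(1000.0, 1.5, year), 2))
# With a 1.5-year halving time, a $1,000 cost falls to roughly $10 within a decade.
```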

But what’s even more interesting than the examples of exponential technology are the examples of what the book calls exponential organizations – defined as follows:

An Exponential Organization (ExO) is one whose impact (or output) is disproportionately large — at least 10x larger — compared to its peers because of the use of new organizational techniques that leverage exponential technologies.

Organizations reviewed in the book include Airbnb, GitHub, Google, Netflix, Quirky, Valve, Tesla, Uber, Waze, and Xiaomi. I’ll leave it to you to delve into the slides (and/or the book) to review what these organizations have in common:

  • A “Massive Transformative Purpose” (MTP)
  • A “SCALE” set of attributes (SCALE is an acronym) enabling enhanced “organizational right brain creativity, growth, and acceptance of uncertainty”
  • An “IDEAS” set of attributes (yes, another acronym) enabling enhanced “organizational left brain order, control, and stability”.

I find myself conflicted by some of the examples in the book. For example, I believe there’s a lot more to the decline of the once all-conquering Nokia than the fact that they acquired Navteq instead of Waze. (I tell a different version of the causes of that decline in my own book, Smartphones and beyond. Nevertheless I agree that organizational matters had a big role in what happened.)

But regardless of some queries over details in the examples, the core message of the book rings true: companies will stumble in the face of fast-improving exponential technologies if they persist with “linear organization practice”, including top-down hierarchies, process inflexibility, and a focus on “ownership” and “control”.

The book quotes with approval the following dramatic assertion from David S. Rose, serial entrepreneur and angel investor:

Any company designed for success in the 20th century is doomed to failure in the 21st.

I’d put the emphasis a bit differently: Any company designed for success in the 20th century needs to undergo large structural change to remain successful in the 21st. The book provides advice on what these changes should be – whether the company is small, medium-sized, or large.

A third level of exponential change

I like the change in focus from exponential technology to exponential organizations – more nimble organizational structures that are enabled and even made necessary by the remarkable spread of exponential technologies (primarily those based on information).

However, I’m interested in a further step along that journey – the step to exponential societies.

Can we find ways to take advantage of technological advances, not just to restructure companies, but to restructure wider sets of human relationships? Can we find better ways to co-exist without the threat of armed warfare, and without the periodic outbursts of savage conflict which shatter so many people’s lives?

The spirit behind these questions is conveyed by the explicit mission statement of the Singularity University:

Our mission is to educate, inspire and empower leaders to apply exponential technologies to address humanity’s grand challenges.

Indeed, the Singularity University has set up a Grand Challenge Programme, dedicated to finding solutions to Humanity’s grand challenges. The Grand Challenge framework already encompasses global health, water, energy, environment, food, education, security, and poverty.


Peace Grand Challenge

A few weeks ago, Mike Halsall and I got talking about a slightly different angle that could be pursued in a special Grand Challenge essay contest. Mike is the Singularity University ambassador for the UK, and has already been involved in a number of Grand Challenge events in the UK. The outcome of our discussion was announced on http://londonfuturists.com/peace-grand-challenge/:

Singularity University and London Futurists invite you to submit an essay describing your idea on the subject ‘Innovative solutions for world peace, 2014-2034’.


First prize is free attendance for one person at the aforementioned Singularity University’s European two-day Summit in Amsterdam, November 19th-20th 2014. Note: the standard price of a ticket to this event is €2,000 (plus VAT). The winner will also receive a cash prize of £200 as a contribution towards travel and other expenses.

We’ve asked entrants to submit their essay to the email address lf.grandchallenge@gmail.com by noon on Wednesday 29th October 2014. The winners will be announced no later than Friday 7th November.

Among the further details from the contest website:

  • Submitted essays can have up to 2,000 words. Any essays longer than this will be omitted from the judging process
  • Entrants must be resident in the UK, and must be at least 18 years old on the closing date of the contest
  • Three runners-up will receive a signed copy of the book Exponential Organizations, as well as free attendance at all London Futurists events for the twelve months following the completion of the competition.

At the time of writing, only a handful of essays have been received. That’s not especially surprising: my experience from previous essay contests is that most entrants tend to leave essay submission until the last 24-48 hours (and a large proportion arrive within the final 60 minutes).

But you can look at this from an optimistic perspective: the field is still wide open. Make the effort to write down your own ideas as to how technology can defuse violent flashpoints around the world, or contribute to world peace in some other way within the next 20 years. Let’s collectively advance the discussion of how exponential technology can do more than just help us find a more effective taxi ride or the fastest route to drive to our destination. Let’s figure out ways in which that technology can solve, not just traffic jams, but logjams of conflicting ideologies, nationalist loyalties, class mistrust, terrorists and counter-terrorists bristling with weaponry and counter-weaponry, and so on.

But don’t delay, since the contest entry deadline is at noon, UK time, on the 29th of October. (That deadline is necessary to give the winner time to book travel to the Summit Europe.)

London Futurists looks forward to publishing a selection of the best essays – and perhaps even converting some of the ideas into animated video format, for wider appeal.

Footnote: discounted price to attend the SU Summit Europe

Note: by special arrangement with the Singularity University, a small number of tickets for the Summit Europe are being reserved for the extended London Futurists community in the UK, with a €500 discount. To obtain this discount, use partner code ‘SUMMITUK’ when you register.

 

 
