dw2

12 March 2013

The coming revolution in mental enhancement

Filed under: entrepreneurs, futurist, intelligence, neuroengineering, nootropics, risks, UKH+ — David Wood @ 2:50 pm

Here’s a near-future scenario: Within five years, 10% of people in the developed world will be regularly taking smart drugs that noticeably enhance their mental performance.

It turns out there may be a surprising reason for this scenario to fail to come to pass. I’ll get to that shortly. But first, let’s review why the above scenario would be a desirable one.

As so often, Nick Bostrom presents the case well. Nick is Professor at the Faculty of Philosophy & Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Programme on the Impacts of Future Technology, all at the University of Oxford. He wrote in 2008,

Those who seek the advancement of human knowledge should [consider] kinds of indirect contribution…

No contribution would be more generally applicable than one that improves the performance of the human brain.

Much more effort ought to be devoted to the development of techniques for cognitive enhancement, be they drugs to improve concentration, mental energy, and memory, or nutritional enrichments of infant formula to optimize brain development.

Society invests vast resources in education in an attempt to improve students’ cognitive abilities. Why does it spend so little on studying the biology of maximizing the performance of the human nervous system?

Imagine a researcher invented an inexpensive drug which was completely safe and which improved all‐round cognitive performance by just 1%. The gain would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited from the drug the inventor would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists. Each year the invention would amount to an indirect contribution equal to 100,000 times what the average scientist contributes. Even an Einstein or a Darwin at the peak of their powers could not make such a great impact.

Meanwhile others too could benefit from being able to think better, including engineers, school children, accountants, and politicians.

This example illustrates the enormous potential of improving human cognition by even a tiny amount…
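
To make the arithmetic in Bostrom's example explicit, here is a minimal back-of-the-envelope sketch in Python. The figures are the ones from the quote above; nothing in it is a model of real research productivity.

```python
# Bostrom's back-of-the-envelope argument, made explicit.
# Figures come from the quote above; this is illustrative, not a model.

scientists = 10_000_000  # Bostrom's rough count of scientists worldwide
uplift = 0.01            # a 1% all-round cognitive performance gain

# In aggregate output, a 1% gain across every scientist is roughly
# equivalent to adding this many scientists of average ability:
equivalent_scientists = scientists * uplift
print(f"Equivalent to adding {equivalent_scientists:,.0f} scientists, every year")
```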

The first objection to the above scenario is that it is technically infeasible: no such drug, the critics insist, could possibly exist, and any apparent evidence to the contrary is inevitably suspect. Questions can be raised over the anecdotes shared in the Longecity thread “Ten months of research condensed – A total newbies guide to nootropics” or in the recent Unfinished Man review “Nootropics – The Facts About ‘Smart Drugs’”. After all, the reasoning goes, the brain is too complex. So these anecdotes are likely to involve delusion – whether self-delusion (people being unaware of placebo effects and the like) or delusion by snake oil purveyors who have few scruples in trying to sell their products.

A related objection is that the side-effects of such drugs are unknown or difficult to assess. Yes, there are substances (take alcohol as an example) which can aid our creativity, but with all kinds of side-effects. The whole field is too dangerous – or so it is said.

These objections may have carried weight some years ago, but increasingly they have less force. Other complex aspects of human functionality can be improved by targeted drugs; why not also the brain? Yes, people vary in how they respond to specific drug combinations, but that’s something that can be taken into account. Indeed, more data is being collected all the time.

Evidence of progress in the study of these smart drugs is one thing I expect to feature in an event taking place in central London this Wednesday (13th March).

The event, The Miracle Pill: What do brain boosting drugs mean for the future? is being hosted by Nesta as part of the Policy Exchange “Next big thing” series.

Here’s an extract from the event website:

If you could take a drug to boost your brain-power, would you?

Drugs to enhance human performance are nothing new. Long-haul lorry drivers and aircraft pilots are known to pop amphetamines to stay alert, and university students down caffeine tablets to ward off drowsiness during all-nighters. But these stimulants work by revving up the entire nervous system and the effect is only temporary.

Arguments over smart drugs are raging. If a drug can improve an individual’s performance, and they do not experience side-effects, some argue, it cannot be such a bad thing.

But where will it all stop? Ambitious parents may start giving mind-enhancing pills to their children. People go to all sorts of lengths to gain an educational advantage and eventually success might be dependent on access to these mind-improving drugs…

This event will ask:

  • What are the limits to performance enhancement drugs, both scientifically and ethically? And who decides?
  • Is there a role for such pills in developing countries, where an extra mental boost might make a distinct difference?
  • Does there need to be a global agreement to monitor the development of these pills?
  • Should policymakers give drug companies carte blanche to develop these products or is a stricter regulatory regime required?

The event will be chaired by Louise Marston, Head of Innovation and Economic Growth, Nesta. The list of panellists is impressive:

  • Dr Bennett Foddy, Deputy Director and Senior Research Fellow, Institute for Science and Ethics, Oxford Martin School, University of Oxford
  • Dr Anders Sandberg, James Martin Fellow, Future of Humanity Institute, Oxford Martin School, University of Oxford
  • Dr Hilary Leevers, Head of Education & Learning, the Wellcome Trust
  • Dame Sally Davies, Chief Medical Officer for England.

Under-currents of mistrust

From my own experience in discussing smart drugs that could enhance mental performance, I’m aware that objections to their use often run more deeply than the technical questions covered above. There are often under-currents of mistrust:

  • Reliance on smart drugs is viewed as irresponsible, self-indulgent, or as cheating
  • There’s an association with the irresponsible advocacy of so-called “recreational” mind-altering drugs
  • Surely, it is said, there are more reliable and more honourable ways of enhancing our mental powers
  • Besides, what is the point of simply being able to think faster?

I strongly reject the implication of irresponsibility or self-indulgence. Increased mental capability can be applied to all sorts of important questions, resulting in scientific progress, technological breakthrough, more elegant product development, and social benefit. The argument I quoted earlier, from Nick Bostrom, applies here.

I also strongly reject the “either/or” implication, when people advocate pursuit of more traditional methods of mental enhancement instead of reliance on modern technology. Why can’t we do both? When considering our physical health, we pay attention to traditional concerns, such as diet and rest, as well as to the latest medical findings. It should be the same for our mental well-being.

No, the real question is: does it work? And once it becomes clearer that certain combinations of smart drugs can make a significant difference to our mental prowess, with little risk of unwelcome side effects, the other objections to their use will quickly fade away.

It will be similar to the rapid change in attitudes towards IVF (“test tube babies”). I remember a time when all sorts of moral and theological hand-wringing took place over the possibility of in-vitro fertilisation. This hubristic technology, it was said, might create soul-less monstrosities; only wickedly selfish people would ever consider utilising the treatment. That view was held by numerous devout observers – but quickly faded away, in the light of people’s real-world experience with the resulting babies.

Timescales

This brings us back to the question: how quickly can we expect progress with smart drugs? It’s the 64 million dollar question. Actually it might be a 640 million dollar question. Possibly even more. The entrepreneurs and companies who succeed in developing and marketing good products in the field of mental enhancement stand to tap into very sizeable revenue streams. Pfizer, the developer of Viagra, earned revenues of $509 million in 2008 alone, from that particular enhancement drug. The developers of a Viagra for the mind could reasonably imagine similar revenues.

The barriers here are regulatory as well as technical. But with a rising public interest in the possibility of significant mental enhancement, the mood could swing quickly, enabling much more vigorous investment by highly proficient companies.

The biophysical approach

But there’s one more complication.

Actually this is a positive complication rather than a negative one.

Critics who suggest that there are better approaches to enhancing mental powers than smart drugs might turn out to be right in a way they didn’t expect. The candidate for a better approach is non-invasive electrical and magnetic stimulation of the brain, targeted at specific functional areas.

A variety of “helmets” are already available, or have been announced as being under development.

The start-up website Flow State Engaged raises and answers a few questions on this topic, as follows:

Q: What is tDCS?

A: Transcranial direct-current stimulation (tDCS) is one of the coolest health/self improvement technologies available today. tDCS is a form of neurostimulation which uses a constant, low current delivered directly to the brain via small electrodes to affect brain function.

Q: Is this for real?

A: The US Army and DARPA both currently use tDCS devices to train snipers and drone pilots, and have recorded 2.5x increases in learning rates. This incredible phenomenon of increased learning has been documented by multiple clinical studies as well.

Q: You want one?

A: Today if you want a tDCS machine it’s nearly impossible to find one for less than $600, and you need a prescription to order one. We wanted a simpler cheaper option. So we made our own kit, for ourselves and for all you body hackers out there…
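
To give a feel for the numbers involved, here is a small illustrative sketch of how a tDCS session is typically parameterised in the published studies: a constant current of roughly 1-2 mA for 10-20 minutes, with electrode sites named in the standard 10-20 EEG placement scheme (which also features in the “smarthat” vision later in this post). The class and its names are my own invention, not a real device API, and this is emphatically not a how-to guide.

```python
# Illustrative only: how a tDCS session is typically described in
# published studies. The class is my own sketch, not a real device API.

from dataclasses import dataclass

@dataclass
class TDCSSession:
    current_ma: float   # constant direct current, in milliamps
    duration_min: int   # stimulation time, in minutes
    anode: str          # electrode sites, named in the 10-20 EEG scheme
    cathode: str

    def within_typical_study_range(self) -> bool:
        """Ranges commonly reported in the clinical literature."""
        return 0.5 <= self.current_ma <= 2.0 and 5 <= self.duration_min <= 30

# A montage often used in the literature: anode over left DLPFC (F3),
# cathode over the right supraorbital area (roughly Fp2).
session = TDCSSession(current_ma=1.5, duration_min=20, anode="F3", cathode="Fp2")
print(session, session.within_typical_study_range())
```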

Someone who has made a close personal study of the whole field of nootropics and biophysical approaches (including tDCS) is London-based researcher Andrew Vladimirov.

Back in November, Andrew gave a talk to the London Futurists on “Hacking our wetware: smart drugs and beyond”. It was a well-attended talk that stirred up lots of questions, both in the meeting itself, and subsequently online.

The good news is that Andrew is returning to London Futurists on Saturday 23rd March, where his talk this time will focus on biophysical approaches to “hacking our wetware”.

You can find more details of this meeting here – including how to register to attend.

Introducing the smart-hat

In advance of the meeting, Andrew has shared an alternative vision of the ways in which many people in the not-so-distant future will pursue mental enhancement.

He calls this vision “Towards digital nootropics”:

You are tired, anxious and stressed, and perhaps suffer from a mild headache. Instead of reaching for a pack from Boots, the local pharmacy, you put on a fashionable “smarthat” (a neat variation of an “electrocap”, with comfortable 10-20 scheme placement for both small electrodes and solenoids) or, perhaps, its lighter version – a “smart bandana”.

Your phone detects it and a secure wireless connection is instantly established. A Neurostimulator app opens. You select “remove anxiety”, “anti-headache” and “basic relaxation” options, press the button and continue with your business. In 10-15 minutes all these problems are gone.

However, there is still much to do, and an important meeting is looming. So, you go to the “enhance” menu of the Neurostimulator and browse through the long list of options which include “thinking flexibility”, “increase calculus skills”, “creative imagination”, “lateral brainstorm”, “strategic genius”, “great write-up”, “silver tongue” and “cram before exam” amongst many others. There is even a separate night menu with functionality such as “increase memory consolidation while asleep”. You select the most appropriate options, press the button and carry on with the meeting preparations.

There are still 15 minutes to go, which is more than enough for the desired effects to kick in. If necessary, they can be monitored and adjusted via the separate neurofeedback menu, as the smarthat also provides limited EEG measurement capabilities. You may use a tablet or a laptop instead of the phone for that.
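
For software-minded readers, the flow Andrew describes could be sketched as follows. To be clear, everything in this snippet – the class names, the programme menu, the whole idea of such an API – is my own hypothetical illustration of the vision, not anything that exists today.

```python
# Hypothetical sketch of the "Neurostimulator" flow described above.
# Every name here (SmartHat, Neurostimulator, the programme menu) is
# invented for illustration; no such API exists at the time of writing.

class SmartHat:
    """Stands in for the wearable: electrodes, solenoids, basic EEG."""
    def connect_secure(self) -> None:
        print("Secure wireless link established with smarthat")

    def run_programme(self, name: str, minutes: int) -> None:
        print(f"Stimulating: {name} ({minutes} min)")

    def read_eeg_summary(self) -> dict:
        # The "limited EEG measurement capabilities" in the vision above
        return {"relaxation_index": 0.7, "alertness_index": 0.6}

class Neurostimulator:
    """Stands in for the phone app: a menu of preset programmes."""
    MENU = {"remove anxiety": 10, "anti-headache": 10, "basic relaxation": 15}

    def __init__(self, hat: SmartHat):
        self.hat = hat
        hat.connect_secure()

    def apply(self, *options: str) -> None:
        for option in options:
            self.hat.run_programme(option, self.MENU.get(option, 15))
        print("Neurofeedback:", self.hat.read_eeg_summary())

# Usage, mirroring the scenario in the quote:
app = Neurostimulator(SmartHat())
app.apply("remove anxiety", "anti-headache", "basic relaxation")
```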

A new profession: neuroanalyst

Entrepreneurs reading this article may already have noticed the very interesting business-development opportunities this whole field offers. These same entrepreneurs may pay further attention to the next stage of Andrew Vladimirov’s “Towards digital nootropics” vision of the not-so-distant future:

Your neighbour Jane is a trained neuroanalyst, an increasingly popular trade that combines depth psychology and a variety of advanced non-invasive neurostimulation means. Her machinery is more powerful and sophisticated than your average smartphone Neurostim.

While you lie on her couch with the mindhelmet on, she can induce highly detailed memory recall, including memories of early childhood, to work through as a therapist. With a flick of a switch, she can also awaken dormant mental abilities and skills you’ve never imagined. For instance, you can become a savant for the time it takes to solve some particularly hard problem, and flip back to your normal state as you leave Jane’s office.

Since she is licensed, some ethical modulation options are also at her disposal. For instance, if Jane suspects that you are lying and deceiving her, the mindhelmet can be used to reduce your ability to lie – and you won’t even notice it.

Sounds like science fiction? The bulk of the necessary technology is already there, and with enough effort the vision described could be realised in five years or so.

If you live in the vicinity of London, you’ll have the opportunity to question Andrew on aspects of this vision at the London Futurists meetup.

Smart drugs or smart hats?

Will we one day talk as casually about our smarthats as we currently do about our smartphones? Or will there be more focus, instead, on smart drugs?

Personally I expect we’ll be doing both. It’s not necessarily an either/or choice.

And there will probably be even more dramatic ways to enhance our mental powers that we can currently scarcely conceive.

23 February 2013

Health improvements via mobile phones: achieving scale

Filed under: Accenture, Barcelona, Cambridge, healthcare, mHealth, MWC, partners — David Wood @ 10:27 pm

How can mobile reach its potential to improve both the outcomes and the economics of global health?

That’s the headline question for the panel I’m chairing on Wednesday at the Mobile World Congress (MWC) event in Barcelona.

MWC is an annual conference that celebrates progress with mobile technology. Last year, there were over 67,000 attendees, including:

  • More than 12,000 mobile app developers
  • 3,300+ press members representing 1,500 media outlets from 92 countries
  • CEOs from more than 3,500 companies.

This year, a larger venue is being used, and the attendee numbers are expected to be even larger. Keynote speakers include the CEOs or Presidents of Vodafone, Telefonica, China Mobile, AT&T, Telecom Italia, NTT DoCoMo, Korea Telecom, Deutsche Telekom, Qualcomm, Nokia, General Motors, CNN Digital, American Heart Foundation, Bharti Enterprises, Qtel, Ericsson, Viber Media, Juniper Networks, Dropbox, Foursquare, Deezer, Mozilla, Ubuntu, Tizen, Jolla, and many more.

And in the midst of all that, there’s a panel entitled Health: Achieving Scale through Partnerships – which, in my role as Technology Planning Lead for Accenture Mobility, I’ve been asked to chair.

MWC as a whole generates a lot of excitement about mobile technology – and about relative shifts in the competitive positions of key companies in the industry. However, it strikes me that the subject under discussion in my panel is more profound. Simply put, what we’re discussing is a matter of life and death.

Done well, mobile technology has the potential to enable the delivery of timely healthcare to people who would otherwise be at risk of death. Prompt diagnosis and prompt treatment can spell the difference between a bitterly unpleasant experience and something that is much more manageable.

But more than that: mobile technology has the potential to address very significant financial problems in the delivery of healthcare. Runaway medical bills impact individuals around the planet. According to a 2010 report by the World Health Organisation (PDF):

When people use healthcare services, they often incur high, sometimes catastrophic costs in paying for their care.

In some countries, up to 11% of the population suffers this type of severe financial hardship each year, and up to 5% is forced into poverty. Globally, about 150 million people suffer financial catastrophe annually while 100 million are pushed below the poverty line.

It’s not just individuals who are facing ruinous costs from healthcare. A 2011 study by the World Economic Forum and Harvard University anticipates that productivity losses and medical treatment for diabetes, heart disease and other non-contagious chronic diseases will cost economies $47 trillion by 2030. In the UK, the growing cost of treating diabetes alone is said to be likely to “bankrupt the NHS in 20 years”. In countries around the world, surging costs of healthcare treatment are exceeding the growth rates of the national economies.

In principle, mobile technology has the potential to reduce these trends in a number of ways:

  • By enabling more cost-effective treatments that are less time-consuming and less personally intrusive
  • By enabling earlier detection of medical issues: prevention can be much cheaper than cure!
  • By monitoring compliance with treatment regimes
  • By improving real-time communications within busy, geographically separated teams of clinicians
  • By reducing barriers for people to access information relevant to their health and well-being.

Here, the key phrase is “in principle”. The potential of mobile technology to beneficially transform healthcare has long been recognised. Success stories can indeed be found. This recent NBC news video featuring physician Eric Topol contains some excellent examples of the use of smartphones in medical practice; for my review of Dr Topol’s award-winning book “The Creative Destruction of Medicine”, see my previous blogpost Smartphone technology, super-convergence, and the great inflection of medicine. Nevertheless, the mobile industry is full of people who remain unsure about how quickly this potential can turn into a reality.

Indeed, I regularly encounter people in the mobile industry who have been assigned responsibility in their companies for aspects of “mHealth programmes”, or similar. The recurring refrain that I hear is as follows:

  • The technology seems to work
  • Small-scale pilot trials demonstrate encouraging results
  • But it’s hard to see how these trials can be scaled up into self-sustaining activities – activities which no longer rely on any strategic subsidies
  • Specifically, people wonder how their programmes will ever deliver meaningful commercial revenues to their companies – since, after all, these companies are driven by commercial imperatives.

In this sense, the question of scaling up mobile health programmes is a matter of commercial life-or-death for many managers within the mobile industry. Without credible plans for commercially significant revenues, these programmes may be cut back, and managers risk losing their jobs.

For all these reasons, I see the panel on Wednesday as being highly relevant. Here’s how the MWC organisers describe the panel on the event website:

There are hundreds of live and pilot mHealth deployments currently underway across many and diverse territories, but many of these projects, both commercial and pilot, will remain short term or small scale and will fold once initial funding is exhausted.

To reach scale, mHealth systems must in many cases be designed to integrate with existing health systems. This is not something the mobile industry can achieve alone, despite operators’ expertise and experience in delivering end-to-end services to their customers, and will require strong working partnerships between mobile network operators, health applications and health IT providers.

Speakers in this session will draw upon their own experience to showcase examples of mHealth projects that have gone beyond the small scale and pilot stages.

They will seek to identify best practice in making mHealth sustainable, and will discuss the progress and challenges in partnering for mHealth.

The panellists bring a wealth of different experience to these questions:


  • Pamela Goldberg is CEO of the Massachusetts Technology Collaborative (MassTech), an economic development engine charged with catalyzing technology innovation throughout the Massachusetts Commonwealth. She has an extensive background in entrepreneurship, innovation and finance, and is the first woman to lead the agency in its nearly 30-year history. MassTech is currently advancing technology-based solutions that improve the health care system, expand high-speed Internet access, and strengthen the growth and development of the state’s technology sector.
  • Kirsten Gagnaire is the Global Partnership Director for the Mobile Alliance for Maternal Action (MAMA), where she manages a cross-sector partnership between USAID, Johnson & Johnson, the UN Foundation, the mHealth Alliance and BabyCenter. MAMA is focused on engaging an innovative global community to deliver vital health information to new and expectant mothers through mobile phones. She recently co-led the Ashoka Global Accelerator, focused on getting mid-stage social entrepreneurs in developing countries the support & resources they need to scale their work across multiple countries and continents. These organizations are focused on using innovation and technology to address global health issues. She recently spent a year living in Ghana, where she was the Country Director for the Grameen Foundation and managed a large-scale mobile health project focused on maternal and child health across Ghana.
  • Chris Mulley is a Principal Business Consultant within the Operator Solutions department of ZTE Corporation. He is responsible for the analysis of market and business drivers that feed into the development of cost-effective end-to-end solutions, targeted at major global telecom operators, based on ZTE’s portfolio of fixed-line and wireless infrastructure equipment and ICT platforms. A key part of this role involves informing ZTE Corporation’s strategic approach to the provision of solutions that meet the objectives of the European Commission Digital Agenda for Europe policy initiative for the wide scale adoption of ICT in the provision of e-Health, e-Transport and e-Government across Europe. Chris was instrumental in the establishment of an e-Health collaboration between ZTE Corporation, the Centro Internazionale Radio Medico and Beijing People’s Hospital.
  • Tong En is Deputy General Manager of the Data Service department and Director of the R&D center at China Mobile Communications Corporation (CMCC), JiangSu Company. He has long been engaged in research on mobile communication and IoT-related technologies, and has chaired or participated in more than 10 CMCC research projects. He is a multiple winner of CMCC innovation awards, and has published nearly 20 academic papers.
  • Oscar Gómez is Director of eHealth Product Marketing in Telefónica Digital, where he leads the creation and implementation of a Connected Healthcare proposition to help transform Health and Social Care systems in the light of the challenges they are facing. Oscar has global responsibility for Telefónica’s portfolio of products and solutions in the eHealth and mHealth space. Oscar holds an Executive MBA from Instituto de Empresa, an M.Sc. degree in Telecommunication Engineering from Universidad Politécnica de Madrid and a Diploma in Economics from Universidad Autónoma de Madrid. He graduated in Healthcare Management from IESE in 2012.

In case you’re interested in the topic but you’re not able to attend the event in person, you can follow the live tweet stream for this panel, by tracking the hashtag #mwc13hlt1.

Postscript

Although I passionately believe in the significance of this particular topic, I realise there will be many other announcements, debates, and analyses of deep interest happening at MWC. I’ll be keeping my own notes on what I see as the greatest “hits” and “misses” of the show. These notes will guide me as I chair a “Fiesta or Siesta” debrief session in Cambridge in several weeks’ time. Jointly hosted by Cambridge Wireless and Accenture, on the 12th of March, this event will take place in the Møller Centre at Churchill College, Cambridge. As the event website explains,

Whether you attended Mobile World Congress (MWC), or you didn’t, you will have formed an opinion (or read someone else’s) on the key announcements and themes of this year’s show. “MWC – Fiesta or Siesta?!” will re-create the emotion of Barcelona as we discuss the hits and misses of the 2013 Mobile World Congress, Cambridge Wireless style…

Registration for this “Fiesta or Siesta” event is now open. Knowing many of the panellists personally, I am confident in predicting that sparks will fly in this discussion, and we’ll end up collectively wiser.

22 February 2013

Controversies over singularitarian utopianism

I shouldn’t have been surprised at the controversy that arose.

The cause was an hour-long lecture with 55 slides, ranging far and wide across a set of disruptive near-future scenarios, covering both upside and downside. The basic format of the lecture was: first the good news, and then the bad news. As stated on the opening slide,

Some illustrations of the enormous potential first, then some examples of how adding a high level of ambient stupidity might mean we might make a mess of it.

The speaker was Ian Pearson, described on his company website as “futurologist, conference speaker, regular media guest, strategist and writer”. The website continues, boldly,

Anyone can predict stuff, but only a few get it right…

Ian Pearson has been a full time futurologist since 1991, with a proven track record of over 85% accuracy at the 10 year horizon.

Ian was speaking, on my invitation, at the London Futurists last Saturday. His chosen topic was audacious in scope:

A Singularitarian Utopia Or A New Dark Age?

We’re all familiar with the idea of the singularity, the end-result of rapid acceleration of technology development caused by positive feedback. This will add greatly to human capability, not just via gadgets but also through direct body and mind enhancement, and we’ll mess a lot with other organisms and AIs too. So we’ll have superhumans and super AIs as part of our society.

But this new technology won’t bring a utopia. We all know that some powerful people, governments, companies and terrorists will also add lots of bad things to the mix. The same technology that lets you enhance your senses or expand your mind also allows greatly increased surveillance and control, eventually to the extremes of direct indoctrination and zombification. Taking the forces that already exist, of tribalism, political correctness, secrecy for them and exposure for us, and so on, it’s clear that the far future will be a weird mixture of fantastic capability, spoiled by abuse…

There were around 200 people in the audience, listening as Ian progressed through a series of increasingly mind-stretching technology opportunities. Judging by the comments posted online afterwards, some of the audience deeply appreciated what they heard:

Thank you for a terrific two hours, I have gone away full of ideas; I found the talk extremely interesting indeed…

I really enjoyed this provocative presentation…

Provocative and stimulating…

Very interesting. Thank you for organizing it!…

Amazing and fascinating!…

But not everyone was satisfied. Here’s an extract from one negative comment:

After the first half (a trippy sub-SciFi brainstorm session) my only question was, “What Are You On?”…

Another audience member wrote his own blogpost about the meeting:

A Singularitanian Utopia or a wasted afternoon?

…it was a warmed-over mish-mash of technological cornucopianism, seasoned with Daily Mail-style reactionary harrumphing about ‘political correctness gone mad’.

These are just the starters of negative feedback; I’ll get to others shortly. As I review what was said in the meeting, and look at the spirited ongoing exchange of comments online, some thoughts come to my mind:

  • Big ideas almost inevitably provoke big reactions; this talk had a lot of particularly big ideas
  • In some cases, the negative reactions to the talk arise from misunderstandings, due in part to so much material being covered in the presentation
  • In other cases, I see the criticisms as reactions to the seeming over-confidence of the speaker (“…a proven track record of over 85% accuracy”)
  • In yet other cases, I share the negative reactions the talk generated; my own view of the near-future landscape significantly differs from the one presented on stage
  • In nearly all cases, it’s worth taking the time to progress the discussion further
  • After all, if we get our forecasts of the future wrong, and fail to make adequate preparations for the disruptions ahead, it could make a huge difference to our collective well-being.

So let’s look again at some of the adverse reactions. My aim is to raise them in a way that people who didn’t attend the talk should be able to follow the analysis.

(1) Is imminent transformation of much of human life a realistic scenario? Or are these ideas just science fiction?

The main driver for belief in the possible imminent transformation of human life, enabled by rapidly changing technology, is the observation of progress towards “NBIC” convergence.

Significant improvements are taking place, almost daily, in our capabilities to understand and control atoms (Nano-tech), genes and other areas of life-sciences (Bio-tech), bits (Info-comms-tech), and neurons and other areas of mind (Cogno-tech). Importantly, improvements in these different fields are interacting with each other.

As Ian Pearson described the interactions:

  • Nanotech gives us tiny devices
  • Tiny sensors help neuroscience figure out how the mind works
  • Insights from neuroscience feed into machine intelligence
  • Improving machine intelligence accelerates R&D in every field
  • Biotech and IT advances make body and machine connectable

Will all the individual possible applications of NBIC convergence described by Ian happen in precisely the way he illustrated? Very probably not. The future’s not as predictable as that. But something similar could well happen:

  • Cheaper forms of energy
  • Tissue-cultured meat
  • Space exploration
  • Further miniaturisation of personal computing (wearable computing, and even “active skin”)
  • Smart glasses
  • Augmented reality displays
  • Gel computing
  • IQ and sensory enhancement
  • Dream linking
  • Human-machine convergence
  • Digital immortality: “the under 40s might live forever… but which body would you choose?”

(2) Is a focus on smart cosmetic technology an indulgent distraction from pressing environmental issues?

Here’s one of the comments raised online after the talk:

Unfortunately any respect due was undermined by his contempt for the massive environmental challenges we face.

Trivial contact lens / jewellery technology can hang itself, if our countryside is choked by yoghurt factory fumes.

The reference to jewellery took issue with remarks in the talk such as the following:

Miniaturisation will bring everyday IT down to jewellery size…

Decoration; Social status; Digital bubble; Tribal signalling…

In contrast, the talk positioned greater use of technology as the solution to environmental issues, rather than as something to exacerbate these issues. Smaller (jewellery-sized) devices, created with a greater attention to recyclability, will diminish the environmental footprint. Ian claimed that:

  • We can produce more of everything than people need
  • Improved global land management could feed up to 20 billion people
  • Clean water will be plentiful
  • We will also need less and waste less
  • Long term pollution will decline.

Nevertheless, he acknowledged that there are some short-term problems, ahead of the time when accelerating NBIC convergence can be expected to provide more comprehensive solutions:

  • Energy shortage is a short to mid term problem
  • Real problems are short term.

Where there’s room for real debate is the extent of these shorter-term problems. Discussion on the threats from global warming brought these disagreements into sharp focus.

(3) How should singularitarians regard the threat from global warming?

Towards the end of his talk, Ian showed a pair of scales, weighing up the wins and losses of NBIC technologies and a potential singularity.

The “wins” column included health, growth, wealth, fun, and empowerment.

The “losses” column included control, surveillance, oppression, directionless, and terrorism.

One of the first questions from the floor, during the Q&A period in the meeting, asked why the risk of environmental destruction was not on the list of possible future scenarios. This criticism was echoed by online comments:

The complacency about CO2 going into the atmosphere was scary…

If we risk heading towards an environmental abyss let’s do something about what we do know – fossil fuel burning.

During his talk, I picked up on one of Ian’s comments about not being particularly concerned about the risks of global warming. I asked, what about the risks of adverse positive feedback cycles, such as increasing temperatures triggering the release of vast ancient stores of methane gas from frozen tundra, accelerating the warming cycle further? That could lead to temperature increases that are much more rapid than presently contemplated, along with lots of savage disturbance (storms, droughts, etc).

Ian countered that it was a possibility, but he had the following reservations:

  • He thought these positive feedback loops would only kick into action when baseline temperature rose by around 2 degrees
  • In the meantime, global average temperatures have stopped rising over the last eleven years
  • He estimates he spends a couple of hours every day, keeping an eye on all sides of the global warming debate
  • There are lots of exaggerations and poor science on both sides of the debate
  • Other factors such as the influence of solar cycles deserve more research.

Here’s my own reaction to these claims:

  • The view that global average temperatures have stopped rising is, among serious scientists, very much a minority position; see e.g. this rebuttal on Carbon Brief
  • Even if there’s only a small probability of a runaway spurt of accelerated global warming in the next 10-15 years, we need to treat that risk very seriously – in the same way that, for example, we would be loath to take a transatlantic flight if we were told there was a 5% chance of the airplane disintegrating mid-flight.

Nevertheless, I did not want the entire meeting to divert into a debate about global warming – “that deserves a full meeting in its own right”, I commented, before moving on to the next question. In retrospect, perhaps that was a mistake, since it may have caused some members of the audience to mentally disengage from the meeting.

(4) Are there distinct right-wing and left-wing approaches to the singularity?

Here’s another comment that was raised online after the talk:

I found the second half of the talk to be very disappointing and very right-wing.

And another:

Someone who lists ‘race equality’ as part of the trend towards ignorance has shown very clearly what wing he is on…

In the second half of his talk, Ian outlined changes in norms of beliefs and values. He talked about the growth of “religion substitutes” via a “random walk of values”:

  • Religious texts used to act as a fixed reference for ethical values
  • Secular society has no fixed reference point so values oscillate quickly.
  • 20 years can yield a 180-degree shift
  • e.g. euthanasia, sexuality, abortion, animal rights, genetic modification, nuclear energy, family, policing, teaching, authority…
  • Pressure to conform reinforces relativism at the expense of intellectual rigour

A complicating factor here, Ian stated, was that

People have a strong need to feel they are ‘good’. Some of today’s ideological subscriptions are essentially secular substitutes for religion, and demand same suspension of free thinking and logical reasoning.

A few slides later, he listed examples of “the rise of nonsense beliefs”:

e.g. new age, alternative medicine, alternative science, 21st century piety, political correctness

He also commented that “99% are only well-informed on trivia”, such as fashion, celebrity, TV culture, sport, games, and chat virtual environments.

This analysis culminated with a slide that personally strongly resonated with me: a curve of “anti-knowledge” accelerating and overtaking a curve of “knowledge”:

In pursuit of social compliance, we are told to believe things that are known to be false.

With clever enough spin, people accept them and become worse than ignorant.

So there’s a kind of race between “knowledge” and “anti-knowledge”.

One reason this resonated with me is that it seemed like a different angle on one of my own favourite metaphors for the challenges of the next 15-30 years – the metaphor of a dramatic race:

  • One runner in the race is “increasing rationality, innovation, and collaboration”; if this runner wins, the race ends in a positive singularity
  • The other runner in the race is “increasing complexity, rapidly diminishing resources”; if this runner wins, the race ends in a negative singularity.

In the light of Ian’s analysis, I can see that the second runner is aided by the increase of anti-knowledge: over-attachment to magical, simplistic, ultimately misleading worldviews.

However, it’s one thing to agree that “anti-knowledge” is a significant factor in determining the future; it’s another thing to agree which sets of ideas count as knowledge, and which as anti-knowledge! One of Ian’s slides included the following list of “religion substitutes”:

Animal rights, political correctness, pacifism, vegetarianism, fitness, warmism, environmentalism, anti-capitalism

It’s no wonder that many of the audience felt offended. Why list “warmism” (a belief in human-caused global warming), but not “denialism” (denial of human-caused global warming)? Why list “anti-capitalism” but not “free market fundamentalism”? Why list “pacifism” but not “militarism”?

One online comment made a shrewd observation:

Ian raised my curiosity about ‘false beliefs’ (or nonsense beliefs as Ian calls them) as I ‘believe’ we all inhabit different belief systems – so what is true for one person may be false for another… at that exact moment in time.

And things can change. Once upon a time, it was a nonsense belief that the world was round.

There may be 15% of truth in some nonsense beliefs…or possibly even 85% truth. Taking ‘alternative medicine’ as an example of one of Ian’s nonsense beliefs – what if two of the many reasons it was considered nonsense were that (1) it is outside the world (the system) of science and technology and (2) it cannot be controlled by the pharmaceutical companies (perhaps our high priests of today)?

(5) The role of corporations and politicians in the approach to the singularity

One place where the right-wing / left-wing division becomes more acute is the question of whether anything special needs to be done to control the behaviour of corporations (businesses).

One of Ian’s strong positive recommendations, at the end of his presentation, was that scientists and engineers should become more actively involved in educating the general public about issues of technology. Shortly afterward, the question came from the floor: what about actions to educate or control corporations? Ian replied that he had very little to recommend to corporations, over and above his recommendations to the individuals within these corporations.

My own view is different. From my life inside industry, I’ve seen numerous cases of good people who are significantly constrained in their actions by the company systems and metrics in which they find themselves enmeshed.

Indeed, just as people should be alarmed about the prospects of super-AIs gaining too much power, over and above the humans who created them, we should also be alarmed about the powers that super-corporations are accumulating, over and above the powers and intentions of their employees.

The argument to leave corporations alone finds its roots in ideologies of freedom: government regulation of corporations often has undesirable side-effects. Nevertheless, that’s just an argument for being smarter and more effective in how the regulation works – not an argument to abstain from regulation altogether.

The question of the appropriate forms of collaborative governance remains one of the really hard issues facing anyone concerned about the future. Leaving corporations to find their own best solutions is, in my view, very unlikely to be the optimum approach.

In terms of how “laissez-faire” we should be, in the face of potential apocalypse down the road, I agree with the assessment near the end of Jeremy Green’s blogpost:

Pearson’s closing assertion that in the end our politicians will always wake up and pull us back from the brink of any disaster is belied by many examples of civilisations that did not pull back and went right over the edge to destruction.

Endnote:

After the presentation in Birkbeck College ended, around 40-50 of the audience regrouped in a nearby pub, to continue the discussion. The discussion is also continuing, at a different tempo, in the online pages of the London Futurists meetup. Ian Pearson deserves hearty congratulation for stirring up what has turned out to be an enlightening discussion – even though there’s heat in the comments as well as light!

Evidently, the discussion is far from complete…

20 February 2013

The world’s most eminent sociologist highlights the technological singularity

It’s not every day that the world’s most eminent sociologist reveals himself as having an intense interest in the Technological Singularity, and urges that “Everyone should read the books of Ray Kurzweil”. That’s what happened this evening.

The speaker in question was Lord Anthony Giddens, one of whose many claims to fame is his description as “Tony Blair’s guru”.

His biography states that, “According to Google Scholar, he is the most widely cited sociologist in the world today.”

In support of that claim, a 2009 article in the Times Higher Education supplement notes the following:

Giddens trumps Marx…

A list published today by Times Higher Education reveals the most-cited academic authors of books in the humanities…

As one of the world’s pre-eminent sociologists, Anthony Giddens, the Labour peer and former director of the London School of Economics, will be used to academic accolades.

But even he may be pleased to hear that his books are cited more often than those of iconic thinkers such as Sigmund Freud and Karl Marx.

Lord Giddens, now emeritus professor at LSE and a life fellow at King’s College, Cambridge, is the fifth most-referenced author of books in the humanities, according to the list produced by scientific data analysts Thomson Reuters.

The only living scholar ranked higher is Albert Bandura, the Canadian psychologist and pioneer of social learning theory at Stanford University…

Freud enters the list in 11th place. The American linguist and philosopher Noam Chomsky, who is based at the Massachusetts Institute of Technology and whose political books have a broader readership than some of his peers in the list, is 15th…

Lord Giddens is now 75 years old. Earlier this evening, I saw for myself evidence of his remarkable calibre. He gave an hour-long lecture in front of a packed audience at the London School of Economics, without any notes or slides, and without any hesitation, deviation, or verbal infelicity. Throughout, his remarks bristled with compelling ideas. He was equally competent – and equally fluent – when it came to the question-and-answer portion of the event.


The lecture was entitled “Off the edge of history: the world in the 21st century”. From its description on the LSE website, I had already identified it as relevant to many of the themes that I seek to have discussed in the series of London Futurists meetups that I chair:

The risks we face, and the opportunities we have, in the 21st century are in many respects quite different from those experienced in earlier periods of history. How should we analyse and respond to such a world? What is a rational balance of optimism and pessimism? How can we plan for a future that seems to elude our grasp and in some ways is imponderable?

As the lecture proceeded, I was very pleasantly impressed by the sequence of ideas. I append here a lightly edited copy of the verbatim notes I took on my Psion Series 5mx, supplemented by a few additions from the #LSEGiddens tweet stream. Added afterwards: the LSE has made a podcast available of the talk.

My rough notes from the talk follow… (text in italics are my parenthetical comments)

This large lecture room is completely full, twenty minutes before the lecture is due to start. I’m glad I arrived early!

Today’s topic is work in progress – he’s writing a book on the same topic, “Off the edge of history”.

  • Note this is a very different thesis from “the end of history”.

His starting point is in the subject of geology – a long way from sociology. He’s been working on climate change for the last seven years. It’s the first time he has worked so closely with scientists.

Geologists tend to call the present age “the Holocene age” – the last 12,000 years. But a geologist called Paul Crutzen recommended that we should use a different term for the last 200 years or so – we’re now in the Anthropocene age:

  • In this period, human activity strongly influences nature and the environment
  • This re-orients and restructures the world of geology
  • A great deal of what used to be natural, is natural no longer
  • Human beings are invading nature, in a way that has no precedent
  • Even some apparently natural catastrophes, like tsunamis and volcanoes, might be linked to impacts from humans.

We have continuities from previous history (of course), but so many things are different nowadays. One example is the impacts of new forms of biological threat. Disease organisms have skipped from animals to human beings. New disease organisms are being synthesised.

There are threats facing us, which are in no ways extensions of previous threats.

For example, what is the Internet doing to the world? Is it a gigantic new mind? Are you using the mobile phone, or is the mobile phone using you? There’s no parallel from previous periods. Globally connected electronic communications are fundamentally different from what went before.

When you are dealing with risks you’ve never experienced before, you can’t measure them. You’ll only know for sure when it’s too late. We’re on the edge of history because we are dealing with risks we have never faced before.

Just as we are invading nature, we are invading human nature in a way that’s unprecedented.

Do you know about the Singularity? (A smattering of people in the audience raise their hands.) It’s mind-blowing. You should find out about it:

  • It’s based on a mathematical concept
  • It’s accelerating processes of growth, rapidly disappearing to a far off point very different from today.

Everyone should read the books of Ray Kurzweil – who has recently become an Engineering Director at Google.

Kurzweil’s book makes it clear that:

  • Within our lifetimes, human beings will no longer be human beings
  • There are multiple accelerating rates of change in several different disciplines
  • The three main disciplines contributing to the singularity are nanotech, AI, and biotech
  • All are transforming our understanding of the human body and, more importantly, the human mind
  • This is described by the “Law of accelerating returns”
  • Progress is not just linear but geometrical.

This book opens our minds to multiple possibilities of what it means to be human, as technology penetrates us.

Nanotech is like humans playing God:

  • It’s a level below DNA
  • We can use it to rebuild many parts of the human body, and other artefacts in the world.

Kurzweil states that human beings will develop intelligence which is 100x higher than at present:

  • Because of merging of human bodies with computers
  • Because of the impact of nanotech.

Kurzweil gives this advice: if you are relatively young: live long, in order to live forever:

  • Immortality is no longer a religious concept, it’s now a tangible prospect
  • It could happen in the next 20-40 years.

This is a fantastic expansion of what it means to be human. Importantly, it’s a spread of opportunities and risk.

These were religious notions before. Now we have the real possibility of apocalypse – we’ve had it since the 1950s, when the first thermonuclear weapons were invented. The possibility of immortality has become real too.

We don’t know how to chart these possibilities. None of us know how to fill in that gap.

What science fiction writers were writing 20 years ago, is now in the newspapers everyday. Reading from the Guardian from a couple of days ago:

Paralysed people could get movement back through thought control

Brain implant could allow people to ‘feel’ the presence of infrared light and one day be used to move artificial limbs

Scientists have moved closer to allowing paralysed people to control artificial limbs with their thoughts following a breakthrough in technology…

…part of a series of sessions on advances in brain-machine interfaces, at which other scientists presented a bionic hand that could connect directly to the nerves in a person’s arm and provide sensory feedback of what they were holding.

Until now, neurological prosthetics have largely been demonstrated as a way to restore a loss of function. Last year, a 58-year-old woman who had become paralysed after a stroke demonstrated that she could use a robotic arm to bring a cup of coffee to her mouth and take a sip, just by thinking about it…

In the future…  it might be possible to use prosthetic devices to restore vision – for example, if a person’s visual cortex had been damaged – by training a different part of the brain to process the information.

Or you could even augment normal brain function in non-invasive ways to deliver the information.

We could learn to detect other sorts of signals that we normally don’t see or experience; the perceptual range could increase.

These things are real; these things are happening. There is a kind of geometric advance.

The literature of social scientists has a big division here, between doomsday thinkers and optimists, with respected thinkers in both camps.

Sir Martin Rees is an example of the first category. He wrote a book called “Our final century”:

  • It examines forms of risk that could destroy our society
  • Climate change is a huge existential risk – most people aren’t aware of it
  • Nanotech is another existential risk – grey goo scenario
  • We also have lots of weaponry: drones circulating above the world even as we speak
  • Most previous civilisations have ended in disaster – they subverted themselves
  • For the first time, we have a civilisation on a global scale
  • It could well be our final century.

Optimists include Matt Ridley, a businessman turned scientist, and author of the book “The rational optimist”:

  • Over the course of human civilisation there is progress – including progress in culture, and medical advances.

This is a big division. How do we sort this out? His view: it’s not possible to decide. We need to recognise that we live in a “high opportunity, high risk society”:

  • The level of opportunity and level of risk are both much higher than before
  • But risk and opportunity always intertwine
  • “In every risk there’s an opportunity…” and vice versa
  • We must be aware of the twists and tangles of risk and opportunity – their interpenetration.

Studying this area has led him to change some of his views from before:

  • He now sees the goal of sustainability as a harder thing than before
  • Living within our limits makes sense, but we no longer know what our limits are
  • We have to respect limits, but also recognise that limits can be changed.

For example, could we regard a world population of 9 billion people as an opportunity, rather than just a risk?

  • It would lead us to put lots more focus on food innovation, blue sky tech for agriculture, social reform, etc – all good things.

A few points to help us sort things out:

  1. One must never avoid risk – we live in a world subject to extreme system risk; we mustn’t live in denial of risk in our personal life (like denying the risks of smoking or riding motor cycles) or at a civilisational level
  2. We have to think about the future in a very different way, because the future has become opaque to us; the enlightenment thought was that we would march in and make sense of history (Marx had similar thoughts), but it turns out that the future is actually opaque – for our personal lives too as well as society (he wonders whether the EU will still exist by the time he finishes his book on the future of the EU!)
  3. We’ll have to learn to backcast rather than forecast – to borrow an idea from the study of climate change. We have to think ahead, and then think back.

This project is the grand task of social sciences in the 21st century.

One more example: the possibility of re-shoring of jobs in the US and EU:

  • 3D printing is an unbelievable technological invention
  • 3D printers can already print shoes
  • A printer in an MIT lab can print whole systems – eg in due course a plane which will fly directly out of the computer
  • This will likely produce a revolution in manufacturing – many, many implications.

Final rhetorical question: As we confront this world, should we be pessimists or optimists? This is the same question he used to consider, at the end of the talks he used to give on climate change.

His answer: we should bracket out that opposition; it’s much more important to be rational than either pessimist or optimist:

  • Compare the case of someone with very serious cancer – they need more than wishful thinking. Need rational underpinning of optimism and/or pessimism.

Resounding applause from the audience. Then commence questions and answers.

Q: Are today’s governance structures, at local and national levels, fit to deal with these issues?

A: No. For example, the European Union has proved not to be the vanguard of global governance that we hoped it would be. Climate change is another clear example: twenty years of UN meetings with no useful outcome whatsoever.

Q: Are our human cognitive powers capable to deal with these problems? Is there a role for technology to assist our cognitive powers?

A: Our human powers are facing a pretty difficult challenge. It’s human nature to put off what we don’t have to do today – like 16-year-olds taking up smoking who can’t really see themselves being 40. Maybe a supermind might be more effective.

Q: Although he has given examples where current governance models are failing, are there any bright spots of hope for governance? (The questioner in this case was me.)

A: There are some hopeful signs for economic governance. Surely bankers will not get away with what they’ve done. Movement to address tax havens (“onslaught”) – bring the money back as well as bringing the jobs back. Will require global co-operation. Nuclear proliferation (Iran, Israel) is as dangerous as climate change. The international community has done quite well with non-proliferation, but it only takes one nuclear war for things to go terribly wrong.

Q: What practical advice would he give to the Prime Minister (or to Ed Miliband)?

A: He supports Ed Miliband trying to restructure capitalism; there are similar moves happening in the US too. However, with global issues like these, any individual prime minister is limited in his influence. For better or for worse, Ray Kurzweil has more influence than any politician!

(Which is a remarkable thing to say, for someone who used to work so closely with Prime Minister Tony Blair…)

10 February 2013

Fixing bugs in minds and bugs in societies

Suppose we notice what appears to be bugs in our thinking processes. Should we try to fix these bugs?

Or how about bugs in the way society works? Should we try to fix these bugs too?

As examples of bugs of the first kind, I return to a book I reviewed some time ago, “Kluge: The Haphazard Construction of the Human Mind”. I entitled my review “The human mind as a flawed creation of nature”, and I still stick by that description. In that review, I pulled out the following quote from near the end of the book:

In this book, we’ve discussed several bugs in our cognitive makeup: confirmation bias, mental contamination, anchoring, framing, inadequate self-control, the ruminative cycle, the focussing illusion, motivated reasoning, and false memory, not to mention absent-mindedness, an ambiguous linguistic system, and vulnerability to mental disorders. Our memory, contextually driven as it is, is ill suited to many of the demands of modern life, and our self-control mechanisms are almost hopelessly split. Our ancestral mechanisms were shaped in a different world, and our more modern deliberative mechanisms can’t shake the influence of that past. In every domain we have considered, from memory to belief, choice, language, and pleasure, we have seen that a mind built largely through the progressive overlay of technologies is far from perfect…

These bugs in our mental makeup are far from being harmless quirks or curiosities. They can lead us:

  • to overly trust people who have visual trappings of authority,
  • to fail to make adequate provision for our own futures,
  • to keep throwing money into bad investments,
  • and to jump to all kinds of dangerous premature conclusions.

But should we try to fix these bugs?

The field where the term ‘bug’ was first used in this sense of a mistake, software engineering, provides many cautionary tales of bug fixing going wrong:

  • Sometimes what appears to be a ‘bug’ in a piece of software turns out to be a useful ‘feature’, with a good purpose after all
  • Sometimes a fix introduces unexpected side-effects, which are worse than the bug which was fixed.

I shared an example of the second kind in the “Managing defects” chapter of the book I wrote in 2004-5, “Symbian for software leaders: principles of successful smartphone development projects”:

An embarrassing moment with defects

The first million-selling product that I helped to build was the Psion Series 3a handheld computer. This was designed as a distinct evolutionary step-up from its predecessor, the original Series 3 (often called the “Psion 3 classic” in retrospect)…

At last the day came (several weeks late, as it happened) to ship the software to Japan, where it would be flashed into large numbers of chips ready to assemble into production Series 3a devices. It was ROM version 3.20. No sooner was it sent than panic set in among the development team. Two of us had independently noticed a new defect in the agenda application. If a user set an alarm on a repeating entry, and then adjusted the time of this entry, in some circumstances the alarm would fail to ring. We reasoned that this was a really bad defect – after all, two of us had independently found it.

The engineer who had written the engine for the application – the part dealing with all data manipulation algorithms, including calculating alarm times – studied his code, and came up with a fix. We were hesitant, since it was complex code. So we performed a mass code review: lots of the best brains in the team talked through the details of the fix. After twenty-four hours, we decided the fix was good. So we recalled 3.20, and released 3.21 in its place. To our relief, no chips were lost in the process: the flashing had not yet started.

Following standard practice, we upgraded the prototype devices of everyone in the development team, to run 3.21. As we waited for the chips to return, we kept using our devices – continuing (in the jargon of the team) to “eat our own dog food”. Strangely, there were a few new puzzling problems with alarms on entries. Actually, it soon became clear these problems were a lot worse than the problem that had just been fixed. As we diagnosed these new problems, a sinking feeling grew. Despite our intense care (but probably because of the intense pressure) we had failed to fully consider all the routes through the agenda engine code; the change made for 3.21 was actually a regression on previous behaviour.

Once again, we made a phone call to Japan. This time, we were too late to prevent some tens of thousands of wasted chips. We put the agenda engine code back to its previous state, and decided that was good enough! (Because of some other minor changes, the shipping version number was incremented to 3.22.) We decided to live with this one defect, in order not to hold up production any longer.

We were expecting to hear more news about this particular defect from the Psion technical support teams, but the call never came. This defect never featured on the list of defects reported by end users. In retrospect, we had been misled by the fact that two of us had independently found this defect during the final test phase: this distorted our priority call…

That was an expensive mistake, which seared a cautionary attitude into my own brain, regarding the dangers of last-minute changes to complex software. All seasoned software engineers have similar tales they can tell, from their own experience.
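To make that lesson concrete, here is a minimal sketch, in Python, of the kind of safety net I would now insist on before attempting any last-minute change like the 3.21 fix. To be clear, nothing below is the actual Psion agenda engine code; the function, the data model, and the test cases are all invented for illustration. The idea is simply to pin down the known-good behaviour of complex code in a table of regression tests before touching it, so that a ‘fix’ which breaks other paths fails loudly on the engineer’s desk rather than in tens of thousands of chips.

```python
# Hypothetical illustration only - not the real Psion agenda engine.
from datetime import datetime, timedelta

def next_alarm(entry_start, repeat_days, alarm_lead_minutes, now):
    """Next time an alarm should ring for a repeating agenda entry."""
    assert repeat_days > 0, "this toy model handles repeating entries only"
    occurrence = entry_start
    # Skip past any occurrences whose alarm time has already gone by
    while occurrence - timedelta(minutes=alarm_lead_minutes) <= now:
        occurrence += timedelta(days=repeat_days)
    return occurrence - timedelta(minutes=alarm_lead_minutes)

def test_alarm_regressions():
    now = datetime(1993, 6, 1, 12, 0)
    cases = [
        # (entry start, repeat every N days, alarm lead in minutes, expected alarm)
        (datetime(1993, 6, 1, 9, 0), 1, 15, datetime(1993, 6, 2, 8, 45)),
        (datetime(1993, 6, 1, 13, 0), 7, 0, datetime(1993, 6, 1, 13, 0)),
        (datetime(1993, 5, 31, 23, 50), 1, 30, datetime(1993, 6, 1, 23, 20)),
    ]
    for start, repeat, lead, expected in cases:
        got = next_alarm(start, repeat, lead, now)
        assert got == expected, f"regression: expected {expected}, got {got}"
    print("all alarm regression cases pass")

test_alarm_regressions()
```

Had a table like that existed for the agenda engine – covering the adjusted-repeating-entry path as well as the common ones – the 3.21 regression might well have been caught during that twenty-four-hour review, rather than weeks later in our own dog food.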

If attempts to fix defects in software are often counter-productive, how much more dangerous are attempts to fix defects in our thinking processes – or defects in how our societies operate! At least in the first case, we generally still have access to the source code, and to the design intention of the original software authors. For the other examples, the long evolutionary history that led to particular designs is something at which we can only guess. It would be like trying to fix a software bug that somehow results from the combination of many millions of lines of source code, written decades ago by people who left no documentation and who are not available for consultation.

What I’ve just stated is a version of an argument that conservative-minded thinkers often give, against attempts to try to conduct “social engineering” or “improve on nature”. Tinkering with ages-old thinking processes – or with structures within societies – carries the risk that we fail to appreciate many hidden connections. Therefore (the argument runs) we should desist from any such experimentation.

Versions of this argument appeared, from two different commentators, in responses to my previous blogpost. One put it like this:

The trouble is that ‘cognitive biases and engrained mistakes’ may appear dysfunctional but they are, in fact, evolutionarily successful adaptations of humanity to its highly complex environment. These, including prejudice, provide highly effective means for the resolution of really existing problems in human capacity…

Rational policies to deal with human and social complexity have almost invariably been proved to be inhumane and brutal, fine for the theoretician in the British Library, but dreadful in the field.

Another continued the theme:

I have much sympathy for [the] point about “cognitive biases and engrained mistakes”. The belief that one has identified cognitive bias in another or has liberated oneself from such can be a “Fatal Conceit,” to borrow a phrase from Hayek, and has indeed not infrequently given rise to inhumane treatment even of whole populations. One of my favourite sayings is David Hume’s “the rules of morality are not conclusions of our reason,” which is at the heart of Hayek’s Fatal Conceit argument.

But the conclusion I draw is different. I don’t conclude, “Never try to fix bugs”. After all, the very next sentence from my chapter on “Managing defects” stated, “We eventually produced a proper fix several months later”. Indeed, many bugs do demand urgent fixes. Instead, my conclusion is that bug fixing in complex systems needs a great deal of careful thought, including cautious experimentation, data analysis, and peer review.

The analogy can be taken one more step. Suppose that a software engineer has a bad track record in his or her defect fixes. Despite claiming, each time, to be exercising care and attention, the results speak differently: the fixes usually make things worse. Suppose, further, that this software engineer comes from a particular company, and that fixes from that company have the same poor track record. (To make this more vivid, the name of this company might be “Technocratic solutions” or “Socialista” or “Utopia software”. You can probably see where this argument is going…) That would be a reason for especial discomfort if someone new from that company is submitting code changes in attempts to fix a given bug.

Well, something similar happens in the field of social change. History has shown, in many cases, that attempts at mental engineering and social engineering were counter-productive. For that reason, many conservatives support various “precautionary principles”. They are especially fearful of any social changes proposed by people they can tar with labels such as “technocratic” or “socialist” or “utopian”.

These precautionary principles presuppose that the ‘cure’ will be worse than the ‘disease’. However, I personally have greater confidence in the fast improving power of new fields of science, including the fields that study our mind and brain. These improvements are placing ever greater understanding in our hands – and hence, ever greater power to fix bugs without introducing nasty side-effects.

For these reasons, I do look forward (as I said in my previous posting) to these improvements

helping individuals and societies rise above cognitive biases and engrained mistakes in reasoning… and accelerating a reformation of the political and economic environment, so that the outcomes that are rationally best are pursued, instead of those which are expedient and profitable for the people who currently possess the most power and influence.

Finally, let me offer some thoughts on the observation that “the rules of morality are not conclusions of our reason”. That observation is vividly supported by the disturbing “moral dumbfounding” examples discussed by Jonathan Haidt in his excellent book “The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” (which I briefly reviewed here). But does that observation mean that we should stop trying to reason with people about moral choices?

Here, I’ll adapt comments from my review of “The Moral Landscape: How Science Can Determine Human Values”, by Sam Harris.

That book considers how we might go about finding answers to big questions such as “how should I live?” and “what makes some ways of life more moral than others?”  As some specific examples, how should we respond to:

  • The Taliban’s insistence that the education of girls is an abomination?
  • The stance by Jehovah’s Witnesses against blood transfusion?
  • The prohibition by the Catholic Church of the use of condoms?
  • The legalisation of same-sex relationships?
  • The use of embryonic stem cells in the search for cures of diseases such as Alzheimer’s and Parkinson’s?
  • A would-be Islamist suicide bomber who is convinced that his intended actions will propel him into a paradise of abundant mental well-being?

One response is that such questions are the province of religion. The correct answers are revealed via prophets and/or holy books. The answers are already clear to those with the eye of faith. It is a divine being that tells us, directly or indirectly, the difference between good and evil. There’s no need for experimental investigations here.

A second response is that the main field to study these questions is philosophy. It is by abstract reason that we can determine the difference between good and evil.

But Sam Harris, instead, primarily advocates the use of the scientific method. Science enters the equation because it is increasingly able to identify:

  • Neural correlates (or other physical or social underpinnings) of sentient well-being
  • Cause-and-effect mechanisms whereby particular actions typically bring about particular changes in these neural correlates.

With the help of steadily improving scientific understanding, we can compare different actions based on their likely effects on sentient well-being. Actions which are likely to magnify sentient well-being are good, and those which are likely to diminish it are evil. That’s how we can evaluate, for example, the Taliban’s views on girls’ education.
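To make that comparative logic concrete, here is a deliberately toy sketch in Python. This is my own illustration rather than anything from Harris’s book, and every number and action name in it is an invented placeholder; the point is only the shape of the calculation – score each candidate action by its probability-weighted effect on sentient well-being, then prefer the higher-scoring action.

```python
# Toy illustration of comparing actions by expected well-being impact.
# All probabilities and well-being deltas below are invented placeholders.

actions = {
    # action: list of (probability, change in sentient well-being) outcomes
    "educate girls": [(0.9, +10.0), (0.1, -1.0)],
    "prohibit education of girls": [(0.8, -8.0), (0.2, -2.0)],
}

def expected_wellbeing_change(outcomes):
    return sum(p * delta for p, delta in outcomes)

# Rank actions from best to worst by expected effect on well-being
for action in sorted(actions, key=lambda a: expected_wellbeing_change(actions[a]), reverse=True):
    print(f"{action}: {expected_wellbeing_change(actions[action]):+.1f}")
```

All the real difficulty, of course, lies in obtaining those probabilities and well-being deltas – which is precisely where Harris argues that steadily improving science has a role to play.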

As Harris makes clear, this is far from being an abstract, other-worldly discussion. Cultures are clashing all the time, with lots of dramatic consequences for human well-being. Seeing these clashes, are we to be moral relativists (saying “different cultures are best for different peoples, and there’s no way to objectively compare them”) or are we to be moral realists (saying “some cultures promote significantly more human flourishing than others, and are to be objectively preferred as a result”)? And if we are to be moral realists, do we resolve our moral arguments by deference to religious tradition, or by open-minded investigation of real-world connections?

In the light of these questions, here are some arguments from Harris’s book that deserve thought:

  • There’s a useful comparison between the science of human values (the project espoused by Harris), and a science of diets (what we should eat, in order to enjoy good health).  In both cases, we’re currently far from having all the facts.  And in both cases, there are frequently several right answers.  But not all diets are equally good.  Similarly, not all cultures are equally good.  And what makes one diet better than another will be determined by facts about the physical world – such as the likely effects (direct and indirect) of different kinds of fats and proteins and sugars and vitamins on our bodies and minds.  While people still legitimately disagree about diets, that’s not a reason to say that science can never answer such questions.  Likewise, present-day disagreements about specific causes of happiness, mental flourishing, and general sentient well-being, do not mean these causes fail to exist, or that we can never know them.
  • Likewise with the science of economics.  We’re still far from having a complete understanding of how different monetary and financial policies impact the long-term health of the economy.  But that doesn’t mean we should throw up our hands and stop searching for insight about likely cause and effect.  The discipline of economics, imperfect though it is, survives in an as-yet-incomplete state.  The same goes for political science too.  And, likewise, for the science of the moral landscape.
  • Attempts to reserve some special area of “moral insight” for religion are indefensible.  As Harris says, “How is it that most Jews, Christians, and Muslims are opposed to slavery? You don’t get this moral insight from scripture, because the God of Abraham expects us to keep slaves. Consequently, even religious fundamentalists draw many of their moral positions from a wider conversation about human values that is not, in principle, religious.” That’s the conversation we need to progress.

PS I’ve written more about cognitive biases and cognitive dissonance – and how we can transcend these mistakes – in my blogpost “Our own entrenched enemies of reason”.

4 February 2013

Responding to the call for a new Humanity+ manifesto

Filed under: BHAG, futurist, Humanity Plus, leadership, risks — David Wood @ 7:37 am

I’ve been pondering the call, on Transhumanity.net, to upgrade the Transhumanist Declaration.

This endeavour needs the input of many minds to be successful. Below, please find a copy of a submission from me, to add into the mix. I’ll welcome feedback!

Humanity is on the brink of a momentous leap forwards in evolution. If we are wise and strong, we can – and should – make that leap.

This evolutionary transformation takes advantage of rapidly improving technology – technology that arises from positive virtuous cycles and unprecedented interdisciplinary convergence. This technology will grant us awesome powers: the power to capture ample energy from the Sun, the atom, and beyond; the power to synthesise new materials to rejuvenate our environment and fuel our societies; the power to realise an unparalleled abundance of health, security, vigour, vitality, creativity, knowledge, and experience; the power to consciously, thoughtfully, proactively remake Humanity.

Through imminently available technology, our lives can be radically enhanced, expanded, and extended. We can be the generation that banishes disease, destitution, decay, and death. Our societies can become marvels of autonomy and inclusion, featuring splendid variety and harmony. We can move far beyond the earth, spreading ever higher consciousness in both inner and outer space. We can transcend our original biological nature, and become as if divine; we’ll be as far ahead of current human capabilities as current humans exceed the prowess of our ape forebears.

But technology is a two-edged sword. Alongside the potential for transcendent improvement lies the potential for existential destruction. We face fearsome perils of environmental catastrophe, unstoppable new plagues and pathogens, rampant unemployment and alienation, the collapse of world financial markets, pervasive systems of unresponsive computers and moronically intelligent robots that act in frustration of human desires, horrific new weaponry that could easily fall into the wrong hands and precipitate Armageddon, and intensive mechanisms for draconian surveillance and thought control.

Continuing the status quo is not an option. Any quest for sustainability of current lifestyles is a delusion. We cannot stay still, and we cannot retreat. The only way to survive is radical enhancement – moving from Humanity to Humanity+.

We’ll need great wisdom and strength to successfully steer the acceleration of converging technology for a positive rather than a negative outcome. We’ll need to take full advantage of the best of current Humanity, to successfully make the leap to Humanity+.

Grand battles of ideas lie ahead. In all these grand battles, smart technology can be our powerful ally – technology that can unlock and enhance our human capacities for insight, innovation, compassion, kindness, and solidarity.

We’ll need to transcend worldviews that insist on viewing humans as inherently diminished, incapable, flawed, and mortal. We’ll need to help individuals and societies rise above cognitive biases and engrained mistakes in reasoning. And we’ll need to accelerate a reformation of the political and economic environment, so that the outcomes that are rationally best are pursued, instead of those which are expedient and profitable for the people who currently possess the most power and influence.

As more and more people come to appreciate the tremendous attractiveness and the credibility of the Humanity+ future, they’ll collectively commit more of their energy, skills, and resources in support of realising that future. But the outcome is still far from clear.

Time is short, risks are high, and there is much to do. We need to open minds, raise awareness, transform the public mood, overturn prejudices, establish rights, build alliances, resist over-simplification, avoid the temptations of snake oil purveyors, dispel distractions, weigh up the best advice available, take hard decisions, and accelerate specific research and development. If we can navigate these slippery paths, with wisdom and strength, we will indeed witness the profound, glorious emergence of Humanity+.

22 December 2012

Symbian retrospective: hits and misses

Filed under: More Than Smartphones, Nokia, Psion, retrospection, Symbian, Symbian Story — David Wood @ 12:19 pm

As another calendar year draws to a close, it’s timely to reflect on recent “hits” and “misses” – what went well, and what went less well.

In my case, I’m in the midst of a much longer reflection process, surveying not just the past calendar year, but the entire history (and pre-history) of Symbian – the company that played a significant role in kick-starting the smartphone phenomenon, well before anyone had ever heard of “iPhone” or “Android”. I’m channeling my thoughts into a new book that I’m in the midst of writing, “More than smartphones”. The working subtitle is “Learning from Symbian…”

I’ve got no shortage of source material to draw on – including notes in my electronic diary that go all the way back to January 1992. As I note in my current draft of the introductory chapter,

My analysis draws on an extensive set of notes I’ve taken throughout two decades of leadership positions in and around Symbian – including many notes written in the various Psion PDA organisers that have been my constant electronic companions over these years. These Psion devices have been close to my heart, in more than one sense.

Indeed, the story of Symbian is deeply linked with that of Psion, its original parent. Psion and Symbian were both headquartered in London and shared many of the same personnel…

The PDAs that Psion brought to market in the 1980s and 1990s were the mobile game-changers of their day, generating (albeit on a smaller scale) the same kind of industry buzz as would later manifest around new smartphone releases. Psion PDAs were also the precursors for much of the functionality that subsequently re-emerged in smartphones, satellite navigation products, and other smart mobile devices.

My own Psion electronic diary possibly ranks among the longest continuously maintained personal electronic agendas in the world. The oldest entry in it is at 2.30pm on Friday 31st January, 1992. That entry reads “Swedes+Danes Frampton St”. Therein lies a tale.

At that time, Psion’s commercial departments were located in a building in Frampton Street, in central London, roughly midway between the Edgware Road and Maida Vale tube stations. Psion’s technical teams were located in premises in Harcourt Street, about 15 minutes’ walk away. In 1992, the Psion Series 3a PDA was in an early stage of development, and I was trialling its new Agenda application – an application whose UI and rich set of views were being built by a team under my direction. In parallel, discussions were proceeding with representatives from several overseas distributors and partners, about the process of creating versions of Psion PDAs for different languages: German, French, Italian, Spanish… and Swedish and Danish…

As the person who assembled and integrated all the files for the different software versions, I met the leads of the teams doing the various translations. That day, 31st January 1992, more than 20 years ago, saw some of my first meetings with professionals from the Nordic countries.

I recall that we discussed features such as keyboards that would cater for the additional characters of the Danish and Swedish alphabets, like ‘å’ and ‘ø’. I had no inkling in 1992 that professionals from Denmark, Sweden, and Finland (including employees of mobile phone juggernauts Ericsson and Nokia) would come to have such a far-reaching influence on the evolution of the software which was at that time being designed for the Series 3a. Nor could I foresee the subsequent 20-year evolution of my electronic agenda file:

  • Through numerous pieces of Series 3a hardware
  • Via the Series 3c successor to the Series 3a, with its incrementally improved hardware and software systems
  • Via a one-time migration process to a new data format, for the 32-bit Series 5, which could cope with much larger applications, and with much larger data files (the Series 3 family used a 16-bit architecture)
  • Into the Series 5mx successor of the Series 5
  • Through numerous pieces of Series 5mx hardware – all of which give (in their “About” screen) 1999 as the year of their creation; when one piece of hardware ceases to work, because, say, of problems with the screen display or the hinge mechanism, I transfer the data onto another in my possession…

Why 1999 is the end of this particular run of changes is a fascinating tale in its own right. It’s just one of many fascinating tales that surround the changing fortunes of the players in the Symbian story…

Step forwards from chapter one to the penultimate chapter, “Symbian retrospective”. This is where I’d welcome some extra input from readers of this blog, to complement and refine my own thinking.

This is the first of two retrospective chapters that draw conclusions from the episodes explored in preceding pages. In this chapter, I look at the following questions:

  • Out of all the choices over the years made by the players at the heart of the Symbian world, which ones were the most significant?
  • Of these choices, which were the greatest hits, and which the greatest misses?
  • With the advantage of hindsight, what are the different options that could credibly have been pursued which would have had the greatest impact on Symbian’s success or failure?

So far, my preliminary outline for that chapter lists a total of twenty hits and misses. Some examples of the hits:

  • Create Symbian with a commercial basis (not a “customers’ cooperative”)
  • Support from multiple key long-term investors (especially Nokia)
  • Enable significant differentiation (including network operator customisation)
  • Focus on performance and stability

And some examples of the misses:

  • Failure to appreciate the importance of the mobile web browser
  • Tolerating Symbian platform fragmentation
  • Failure to provide a CDMA solution
  • Failure to merge Nokia S60 and Symbian

My question for readers of this blogpost is: What would be in your list (say, 1-3 items) of the top hits and misses of decisions made by Symbian?

Footnote: Please excuse some delays in your comments appearing. WordPress may hold them in a queue awaiting my review and approval. But I’m in a part of the world with great natural beauty and solitude, where the tour guides request that we all leave our wireless communication devices behind on the ship when we land for the daily excursions. Normally I would have balked at that very idea, but there are times and places when multi-tasking has to stop!

20 December 2012

An absorbing, challenging vision of near-future struggles

Technology can cause carnage, and in the wake of the carnage, outrage.

Take the sickening example of the shooting dead of 20 young children and six adults at Sandy Hook Elementary School in Newtown, Connecticut. After that fearful carnage, it’s no surprise that there are insistent calls to restrict the availability of powerful automatic guns.

There are similar examples of carnage and outrage in the new science fiction novel “Nexus: mankind gets an upgrade”, by the noted futurist and writer Ramez Naam.

I met Ramez at the WorldFuture 2012 event in Toronto earlier this year, where he gave a presentation on “Can Innovation Save the Planet?” which I rated as one of the very best sessions in the midst of a very good conference. I’ve been familiar with the high calibre of his thinking for some time, so when I heard that his new book Nexus was available for download to my Kindle – conveniently just ahead of me taking a twelve-hour flight – I jumped at the chance to purchase a copy. It turned out to be a great impulse purchase decision. I finished the book just as the airplane wheels touched down.

The type of technology that is linked to carnage and outrage in Nexus can be guessed from the image on the front cover of the book – smart drugs. Of course, drugs, like guns, are already the source of huge public debate in terms of whether to restrict access. Events described in Nexus make it clear why certain drugs become even more controversial, a few short decades ahead, in this fictional but all-too-credible vision of the near future.

Back in the real world, public interest in smart drugs is already accelerating:

  • I hear more and more discussions in which people talk about taking nootropics of one sort or another – to help them “pull an all-nighter”, or to be especially sharp and mentally focused for an important interview. These comments often get followed up by reflections on whether such drugs might convey an unfair advantage.
  • The 2011 film Limitless – which I reviewed in passing here – helped to raise greater public awareness of the potential of this technology.
  • Audience attendance (and the subsequent online debate) at the recent London Futurist event “Hacking our wetware, with Andrew Vladimirov”, convinced me that public appetite for information on smart drugs is about to greatly intensify.

And as discussion of the technology of smart drugs increases, so (quite rightly) does discussion of the potential downsides and drawbacks of that technology.

Nexus is likely to ratchet this interest even higher. The technology in the novel doesn’t just add a few points of IQ, on a transitory basis, to the people who happen to take it. It goes much further than that. It has the potential to radically upgrade humans – with as big a jump in evolution (in the course of a few decades) as the transition between apes and humans. And not everyone likes that potential, for reasons that the book gradually makes credible, through sympathetic portrayals of various kinds of carnage.

Nexus puts the ideas of transhumanism and posthumanism clearly on the map. And lots more too, which I shouldn’t say much about, to avoid giving away the plot and spoiling the enjoyment of new readers.

But I will say this:

  • My own background as a software engineer (a profession I share with Ramez Naam) made me especially attuned to the descriptions of the merging of computing science ideas with those of smart drugs; other software engineers are likely to enjoy these speculations too
  • My strong interest in the battle of ideas about progress made me especially interested in the inner turmoil (and changes of mind) of various key characters, as they weighed up the upsides and downsides of making new technology more widely available
  • My sympathy for the necessity of an inner path to enlightenment, to happen in parallel with increasingly smart deployment of increasingly powerful technology, meant that I was intrigued by some of the scenes in the book involving meditative practices
  • My status as an aspiring author myself – I’m now about one third of the way through the book I’m writing – meant that I took inspiration from seeing how a good author can integrate important ideas about technology, philosophy, societal conflict, and mental enlightenment, in a cracking good read.

Ramez is to be congratulated on writing a book that should have wide appeal, and which will raise attention to some very important questions – ahead of the time when rapid improvements of technology might mean that we have missed our small window of opportunity to steer these developments in ways that augment, rather than diminish, our collective humanity.

Anyone who thinks of themselves as a futurist should do themselves a favour and read this book, in order to participate more fully in the discussions which it is bound to catalyse.

Footnote: There’s a lot of strong language in the book, and “scenes of an adult nature”. Be warned. Some of the action scenes struck me as implausible – but hey, that’s the same for James Bond and Jason Bourne, so that’s no showstopper. Which prompts the question – could Nexus be turned into a film? I hope so!

2 December 2012

Let It Be at the Prince of Wales Theatre – Beatles stream of consciousness

Filed under: fun, healthcare, music, theatre — David Wood @ 11:03 am

“For our last number I’d like to ask your help. Would the people in the cheaper seats clap your hands? And the rest of you, if you’ll just rattle your jewelry”

These were the words used by John Lennon, on stage for the Royal Variety Performance at the Prince of Wales theatre in central London on 4th November 1963, to introduce the last number of the set played by the Beatles. The packed audience included the British royal family. Black and white archive film of the set exists:

That moment was part of a period of a few months when the phenomenon of “Beatlemania” burst into the public consciousness. As told by Beatles historian Bruce Spizer,

By September 1963, The Beatles were gaining coverage in the British press and were receiving tremendous radio and television exposure. But their big breakthrough was a widely-watched and well-publicized television appearance on “Val Parnell’s Sunday Night at the London Palladium”, which was televised throughout the U.K. during prime time Sunday evening and was the British equivalent of “The Ed Sullivan Show”. The Beatles headlined the Oct. 13, 1963, Palladium show, which was seen by more than 15 million people. The bedlam caused by the group both inside and outside the theater caught the attention of British news editors, who elevated The Beatles from a successful entertainment act to a national news phenomenon. The Daily Mirror described the hysteria as “Beatlemania!” The term stuck.

The Beatles’ triumphant Palladium appearance was quickly followed by the Oct. 31 airport reception witnessed by Sullivan and their playing before British high society at the Royal Command Performance, also known as the Royal Variety Show. Their presence on the Nov. 4, 1963, show drew more attention than the arrival of the Royal Family. The Beatles, who were seventh on the bill of 19 acts, impressed the upscale crowd with “She Loves You”, “Till There Was You”, “From Me To You” and “Twist and Shout”. Prior to ripping into a rousing rendition of their closing rocker, Lennon said, “For our last number I’d like to ask your help. Would the people in the cheaper seats clap your hands? And the rest of you, if you’ll just rattle your jewelry.” While [Beatles manager Brian] Epstein viewed John’s remarks as being a bit risque, he was relieved that the crowd seemed charmed by the Beatles’ cheeky humor. Before the show, John had joked to Brian that he was going to ask the Royals to rattle their “fookin’ jewelry.”

Nearly fifty years later, the show “Let It Be”, playing at the very same Prince of Wales theatre, re-created a great deal of the same music, musicianship, and mannerisms of the original act. Including the jewelry quip.

I had the great pleasure of viewing the show last night – and it was, indeed, a great pleasure.

There’s no plot. It’s simply a group of four musicians who look and sound remarkably similar to the original Beatles, playing a series of sets of fabulous music, interspersed (allowing the band a chance to change clothing – and wigs) with archive news footage, mock advertisements conveying a wistful sense of the 1960s, and audio excerpts of retrospective interviews by the Beatles.

The show progresses through a series of segments (each with its own clothing and hairstyles):

  • the 1963 Royal Variety Show era,
  • a set from the 1965 Shea Stadium concert – where the Beatles had played to an audience of more than 55,000
  • a Sergeant Pepper segment
  • a flower power segment featuring All You Need is Love, Magical Mystery Tour, and more
  • a quieter section, with the group members seated for evocative melodies such as Norwegian Wood and Blackbird
  • an Abbey Road segment, culminating in a powerful rendition of The End
  • a final encore – including (of course) Let It Be, as well as a foretaste of forthcoming solo careers: Give Peace A Chance.

I offer a few thoughts from my stream of consciousness during the performance:

  • On either side of the stage, large screens showed images to frame the main actions. The young women who were shouting and screaming with such hysteria must in many cases be grandmothers by now – I wonder if they know their images are still delighting London audiences, nearly fifty years after their rush of blood was captured on camera
  • The vibrant twanging of Get Back mentally transported me back in time to April 1969, when I remember being enthralled, as a very naive ten-year-old, by that song playing on Top of the Pops: “Sweet Loretta Martin thought she was a woman, But she was another man…”
  • The vocals to Lucy in the Sky with Diamonds and A Day in the Life were, if anything, even more trippy than in the original
  • Actually the audience seemed bemused and unsure about A Day in the Life, with many of them showing blank faces as the cacophony grew – I guess this song is nothing like as well known nowadays. And the clincher: half the audience started applauding too soon at the end of this song, before that final apocalyptic multi-piano E Major chord rang out – whoops
  • Perhaps another sign of the differentially fading memories of the Beatles music – the audience was happy to rise to its feet to sway along to Twist and Shout in the opening section, but when a similar request was made to stand up during the Sgt Pepper Reprise, everyone stayed stuck in their seats
  • A nice touch of fidelity in the Abbey Road segment – the “Paul McCartney” character was barefoot on stage – as on the Abbey Road album cover photo
  • For sheer musicianship, the guitar crescendo at the end of While My Guitar Gently Weeps was outstanding; that has always been one of my favourite Beatles tracks – particularly in its remastered version on the Love album remix – but it seemed particularly dramatic on stage this evening.

With such a rich music portfolio to choose from, inevitably many favourites have to be excluded from the two-hour show. Personally I would have missed out one or two of the tracks chosen, in order to find room for glorious stomping classics such as Lady Madonna, Hello Goodbye, I Am the Walrus, or Back In the USSR. For example, I’ve probably heard Hey Jude enough times already in my life, but its iconic status presumably meant it needed to be included.

Is this the show with the best set of music ever? Seeing that the competition includes Mamma Mia (with its feast of Abba hits), West Side Story (with its feast of Bernstein), and Amadeus (with its feast of Mozart), the answer is perhaps not – but it was still a tremendous occasion, providing a welcome break from thoughts about futurism, existential risk, free markets, and mobile phone technology!

Footnote: But I could not forget about mobile phone technology altogether that evening. On the way home, my companion found that her London Travel Card was being systematically rejected by tube turnstiles – again. That’s despite having bought the ticket only a few hours earlier. It’s by no means the first occurrence for her. “Is it OK to carry my travel card here, right next to my mobile phone, in this small section of my handbag?” she asked. “That is exactly the problem”, I answered – and there seems to be plenty of knowledge of this problem online. And the Beatles music faded out of my mind, to be replaced by thoughts on the health implications of proximity of mobile phones to the human body.

2 November 2012

The future of human enhancement

Is it ethical to put money and resources into trying to develop technological enhancements for human capabilities, when there are so many alternative well-tested mechanisms available to address pressing problems such as social injustice, poverty, poor sanitation, and endemic disease? Is that a failure of priority? Why make a strenuous effort in the hope of allowing an elite few individuals to become “better than well”, courtesy of new technology, when so many people are currently so “less than well”?

These were questions raised by Professor Anne Kerr at a public debate earlier this week at the London School of Economics: The Ethics of Human Enhancement.

The event was described as follows on the LSE website:

This dialogue will consider how issues related to human enhancement fit into the bigger picture of humanity’s future, including the risks and opportunities that will be created by future technological advances. It will question the individualistic logic of human enhancement and consider the social conditions and consequences of enhancement technologies, both real and imagined.

From the stage, Professor Kerr made a number of criticisms of “individualistic logic” (to use the same phrase as in the description of the event). Any human enhancements provided by technology, she suggested, would likely only benefit a minority of individuals, potentially making existing social inequalities even worse than at present.

She had a lot of worries about technology amplifying existing human flaws:

  • Imagine what might happen if various clever people could take some pill to make themselves even cleverer. It’s well known that clever people often make poor decisions. Their cleverness allows them to construct beguiling sophistry to justify the actions they already want to take. More cleverness could mean even more beguiling sophistry.
  • Or imagine if rapacious bankers could take drugs to boost their workplace stamina and self-serving brainpower – how much more effective they would become at siphoning off public money to their own pockets!
  • Might these risks be addressed by public policy makers, in a way that would allow the benefits of new technology without falling foul of the potential downsides? Again, Professor Kerr was doubtful. In the real world, she said, policy makers cannot operate at that level. They are constrained by shorter-term thinking.

For such reasons, Professor Kerr was opposed to these kinds of technology-driven human enhancements.

When the time for audience Q&A arrived, I felt bound to ask from the floor:

Professor Kerr, would you be in favour of the following examples of human enhancement, assuming they worked?

  1. An enhancement that made bankers more socially attuned, with more empathy, and more likely to use their personal wealth in support of philanthropic projects?
  2. An enhancement that made policy makers less parochial, less politically driven, and more able to consider longer-term implications in an objective manner?
  3. And an enhancement that made clever people less likely to be blind to their own personal cognitive biases, and more likely to genuinely consider counters to their views?

In short, would you support enhancements that would make people wiser as well as smarter, and kinder as well as stronger?

The answer came quickly:

No. They would not work. And there are other means of achieving the same effects, including progress of democratisation and education.

I countered: These other methods don’t seem to be working well enough. If I had thought more quickly, I would have raised examples such as society’s collective failure to address the risk of runaway climate change.

Groundwork for this discussion had already been well laid by the other main speaker at the event, Professor Nick Bostrom. You can hear what Professor Bostrom had to say – as well as the full content of the debate – in an audio recording of the event that is available here.

(Small print: I’ve not yet taken the time to review the contents of this recording. My description in this blogpost of some of the verbal exchanges inevitably paraphrases and extrapolates what was actually said. I apologise in advance for any mis-representation, but I believe my summary to be faithful to the spirit of the discussion, if not to the actual words used.)

Professor Bostrom started the debate by mentioning that the question of human enhancement is a big subject. It can be approached from a shorter-term policy perspective: what rules should governments set, to constrain the development and application of technological enhancements, such as genetic engineering, neuro-engineering, smart drugs, synthetic biology, nanotechnology, and artificial general intelligence? It can also be approached from the angle of envisioning larger human potential, that would enable the best possible future for human civilisation. Sadly, much of the discussion at the LSE got bogged down in the shorter-term question, and lost sight of the grander accomplishments that human enhancements could bring.

Professor Bostrom had an explanation for this lack of sustained interest in these larger possibilities: the technologies for human enhancement that are currently available do not work that well:

  • Some drugs give cyclists or sprinters an incremental advantage over their competitors, but the people who take these drugs still need to train exceptionally hard, to reach the pinnacle of their performance
  • Other drugs seem to allow students to concentrate better over periods of time, but their effects aren’t particularly outstanding, and it’s possible that methods such as good diet, adequate rest, and meditation, have results that are at least as significant
  • Genetic selection can reduce the risk of implanted embryos developing various diseases that have strong genetic links, but so far, there is no clear evidence that genetic selection can result in babies with abilities higher than the general human range.

This lack of evidence of strong tangible results is one reason why Professor Kerr was able to reply so quickly to my suggestion about the three kinds of technological enhancements, saying these enhancements would not work.

However, I would still like to press the question: what if they did work? Would we want to encourage them in that case?

A recent article in the Philosophy Now journal takes the argument one step further. The article was co-authored by Professors Julian Savulescu and Ingmar Persson, and draws material from their book “Unfit for the Future: The Need for Moral Enhancement”.

To quote from the Philosophy Now article:

For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.

But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?…

Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species.

We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities…

In short, the argument of Professors Savulescu and Persson is not just that we should allow the development of technology that can enhance human reasoning and moral awareness, but that we must strongly encourage it. Failure to do so would be to commit a grave error of omission.

These arguments about moral imperative – what technologies should we allow to be developed, or indeed encourage to be developed – are in turn strongly influenced by our beliefs about what technologies are possible. It’s clear to me that many people in positions of authority in society – including academics as well as politicians – are woefully unaware of realistic technology possibilities. People are familiar with various ideas as a result of science fiction novels and movies, but it’s a different matter to distinguish between “this is an interesting work of fiction” and “this is a credible future that might arise within the next generation”.

What’s more, when it comes to people forecasting the likely progress of technological possibilities, I see a lot of evidence in favour of the observation made by Roy Amara, long-time president of the Institute for the Future:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

What about the technologies mentioned by Professors Savulescu and Persson? What impact will be possible from smart drugs, genetic selection and engineering, and the use of external devices that affect the brain or the learning process? In the short term, probably less than many of us hope; in the longer term, probably more than most of us expect.

In this context, what is the “longer term”? That’s the harder question!

But the quest to address this kind of question, and then to share the answers widely, is the reason I have been keen to support the growth of the London Futurist meetup, by organising a series of discussion meetings with well-informed futurist speakers. Happily, membership has been on the up-and-up, reaching nearly 900 by the end of October.

The London Futurist event happening this weekend – on the afternoon of Saturday 3rd November – picks up the theme of enhancing our mental abilities. The title is “Hacking our wetware: smart drugs and beyond – with Andrew Vladimirov”:

What are the most promising methods to enhance human mental and intellectual abilities significantly beyond the so-called physiological norm? Which specific brain mechanisms should be targeted, and how?  Which aspects of wetware hacking are likely to grow in prominence in the not-too-distant future?

By reviewing a variety of fascinating experimental findings, this talk will explore:

  • various pharmacological methods, taking into account fundamental differences in Eastern and Western approaches to the development and use of nootropics
  • the potential of non-invasive neuro-stimulation using CES (Cranial Electrotherapy Stimulation) and TMS (Transcranial Magnetic Stimulation)
  • data suggesting the possibility to “awaken” savant-like skills in healthy humans without paying the price of autism
  • apparent means to stimulate seemingly paranormal abilities and transcendental experiences
  • potential genetic engineering perspectives, aiming towards human cognition enhancement.

The advance number of positive RSVPs for this talk, as recorded on the London Futurist meetup site, has reached 129 at the time of writing – which is already a record.

(From my observations, I have developed the rule of thumb that the number of people who actually turn up for a meeting is something like 60%-75% of the number of positive RSVPs.)
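To spell out the arithmetic for this weekend’s event (the 60%–75% band is purely my own estimate, and 129 is the RSVP count quoted above):

```python
# Rough turnout estimate from positive RSVPs, using my 60%-75% rule of thumb
rsvps = 129
low, high = round(rsvps * 0.60), round(rsvps * 0.75)
print(f"Expected attendance: roughly {low} to {high} people")
# -> Expected attendance: roughly 77 to 97 people
```

So a turnout somewhere between the high seventies and the high nineties would be consistent with that rule of thumb.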

I’ll finish by returning to the question posed at the beginning of my posting:

  • Are these technological enhancements likely to increase human inequality (by benefiting only a small number of users),
  • Or are they instead likely to drop in price and grow in availability (the same as happened, for example, with smartphones, Internet access, and many other items of technology)?

My answer – which I believe is shared by Professor Bostrom – is that things could still go either way. That’s why we need to think hard about their development and application, ahead of time. That way, we’ll become better informed to help influence the outcome.
