
10 October 2015

Technological unemployment – Why it’s different this time

On Tuesday last week I joined members of “The Big Potatoes” for a spirited discussion entitled “Automation Anxiety”. Participants became embroiled in questions such as:

  • To what extent will increasingly capable automation (robots, software, and AI) displace humans from the workforce?
  • To what extent should humans be anxious about this process?

The Big Potatoes website chose an image from the marvellously provocative Channel 4 drama series “Humans” to set the scene for the discussion:


“Closer to humans” than ever before, the fictional advertisement says, referring to humanoid robots with multiple capabilities. In the TV series, many humans become deeply distressed at the way their roles are being usurped by these new-fangled entities.

Back in the real world, many critics reject these worries. “We’ve heard it all before”, they assert. Every new wave of technological automation has caused employment disruption, yes, but it has also led to new types of employment. The new jobs created will compensate for the old ones destroyed, the critics say.

I see these critics as, most likely, profoundly mistaken. This time things are different. That’s because of the general purpose nature of ongoing improvements in the algorithms for automation. Machine learning algorithms that are developed with one set of skills in mind turn out to transfer, reasonably straightforwardly, to other sets of skills as well.

The master algorithm

That argument is spelt out in the recent book “The master algorithm” by University of Washington professor of computer science and engineering Pedro Domingos.


The subtitle of that book refers to a “quest for the ultimate learning machine”. This ultimate learning machine can be contrasted with another universal machine, namely the universal Turing machine:

  • The universal Turing machine accepts inputs and applies a given algorithm to compute corresponding outputs
  • The universal learning machine accepts a set of corresponding input and output data, and does the best possible job of inferring the algorithm that would obtain the outputs from the inputs.

For example, given sets of texts written in English, and matching texts written in French, the universal learning machine would infer an algorithm that will convert English into French. Given sets of biochemical reactions of various drugs on different cancers, the universal learning machine would infer an algorithm to suggest the best treatment for any given cancer.
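To make the contrast concrete, here’s a minimal sketch in Python (using scikit-learn as an illustrative choice – Domingos’ argument is not tied to any particular library). Rather than writing the rule that maps inputs to outputs, we hand the machine example pairs and let it infer the rule:

```python
# Minimal sketch of the "universal learning machine" idea.
# The data and the library choice (scikit-learn) are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Example input/output pairs. The hidden rule is "output 1 when both
# features are positive" - but we never write that rule anywhere.
X = [[1, 1], [1, -1], [-1, 1], [-1, -1], [2, 3], [-2, 3]]
y = [1, 0, 0, 0, 1, 0]

learner = DecisionTreeClassifier()
learner.fit(X, y)                 # infer an algorithm from the example pairs

print(learner.predict([[3, 2]]))  # [1] - the inferred rule generalises
```

Real systems such as machine translation use vastly larger datasets and more sophisticated models, but the shape of the process – pairs in, inferred algorithm out – is the same.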

As Domingos explains, there are currently five different “tribes” within the overall machine learning community. Each tribe has its separate origin, and also its own idea for the starting point of the (future) master algorithm:

  • “Symbolists” have their origin in logic and philosophy; their core algorithm is “inverse deduction”
  • “Connectionists” have their origin in neuroscience; their core algorithm is “back-propagation”
  • “Evolutionaries” have their origin in evolutionary biology; their core algorithm is “genetic programming”
  • “Bayesians” have their origin in statistics; their core algorithm is “probabilistic inference”
  • “Analogizers” have their origin in psychology; their core algorithm is “kernel machines”.

(See slide 6 of this Slideshare presentation. Indeed, take the time to view the full presentation. Better still, read Domingos’ entire book.)
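To give a flavour of just one of these tribes, here’s a minimal sketch of the Bayesians’ core idea, probabilistic inference via Bayes’ rule. The medical-test scenario and all the numbers are invented for illustration:

```python
# Probabilistic inference via Bayes' rule - the Bayesians' core idea.
# The medical-test scenario and all numbers are invented for illustration.

prior = 0.01                # P(disease) before seeing any evidence
sensitivity = 0.95          # P(test positive | disease)
false_positive_rate = 0.05  # P(test positive | no disease)

# Bayes' rule:
# P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
evidence = sensitivity * prior + false_positive_rate * (1 - prior)
posterior = sensitivity * prior / evidence

print(f"P(disease | positive test) = {posterior:.3f}")  # roughly 0.161
```

Even with a 95%-accurate test, the low prior keeps the posterior far below certainty – exactly the kind of reasoning the Bayesian tribe wants machines to perform at scale.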

What’s likely to happen over the next decade or two is that a single master algorithm will emerge that unifies all the above approaches – and, thereby, delivers great power. It will be similar to the progress made by physics as the fundamental forces of nature have gradually been unified into a single theory.

And as that unification progresses, more and more occupations will be transformed, more quickly than people generally expect. Technological unemployment will rise and rise, as software embodying the master algorithm handles tasks previously thought outside the scope of automation.

Incidentally, Domingos has set out some ambitious goals for what his book will accomplish:

The goal is to do for data science what “Chaos” [by James Gleick] did for complexity theory, or “The Selfish Gene” [by Richard Dawkins] for evolutionary game theory: introduce the essential ideas to a broader audience, in an entertaining and accessible way, and outline the field’s rich history, connections to other fields, and implications.

Now that everyone is using machine learning and big data, and they’re in the media every day, I think there’s a crying need for a book like this. Data science is too important to be left just to us experts! Everyone – citizens, consumers, managers, policymakers – should have a basic understanding of what goes on inside the magic black box that turns data into predictions.

People who comment about the likely impact of automation on employment would do particularly well to educate themselves about the ideas covered by Domingos.

Rise of the robots

There’s a second reason why “this time it’s different” as regards the impact of new waves of automation on the employment market. This factor is the accelerating pace of technological change. As more areas of industry become subject to digitisation, they become, at the same time, subject to automation.

That’s one of the arguments made by perhaps the best writer so far on technological unemployment, Martin Ford. Ford’s recent book “Rise of the Robots: Technology and the Threat of a Jobless Future” builds ably on what previous writers have said.


Here’s a sample of review comments about Ford’s book:

Lucid, comprehensive and unafraid to grapple fairly with those who dispute Ford’s basic thesis, Rise of the Robots is an indispensable contribution to a long-running argument.
Los Angeles Times

If The Second Machine Age was last year’s tech-economy title of choice, this book may be 2015’s equivalent.
Financial Times, Summer books 2015, Business, Andrew Hill

[Ford’s] a careful and thoughtful writer who relies on ample evidence, clear reasoning, and lucid economic analysis. In other words, it’s entirely possible that he’s right.
Daily Beast

Surveying all the fields now being affected by automation, Ford makes a compelling case that this is an historic disruption—a fundamental shift from most tasks being performed by humans to one where most tasks are done by machines.
Fast Company

Well-researched and disturbingly persuasive.
Financial Times

Martin Ford has thrust himself into the center of the debate over AI, big data, and the future of the economy with a shrewd look at the forces shaping our lives and work. As an entrepreneur pioneering many of the trends he uncovers, he speaks with special credibility, insight, and verve. Business people, policy makers, and professionals of all sorts should read this book right away—before the ‘bots steal their jobs. Ford gives us a roadmap to the future.
—Kenneth Cukier, Data Editor for the Economist and co-author of Big Data: A Revolution That Will Transform How We Live, Work, and Think

Ever since the Luddites, pessimists have believed that technology would destroy jobs. So far they have been wrong. Martin Ford shows with great clarity why today’s automated technology will be much more destructive of jobs than previous technological innovation. This is a book that everyone concerned with the future of work must read.
—Lord Robert Skidelsky, Emeritus Professor of Political Economy at the University of Warwick, co-author of How Much Is Enough?: Money and the Good Life and author of the three-volume biography of John Maynard Keynes

If you’re still not convinced, I recommend that you listen to this audio podcast of a recent event at London’s RSA, addressed by Ford.

I summarise the takeaway message in this picture, taken from one of my Delta Wisdom workshop presentations:

Tech unemployment curves

  • Yes, humans can retrain over time, to learn new skills, in readiness for new occupations when their former employment has been displaced by automation
  • However, the speed of improvement of the capabilities of automation will increasingly exceed that of humans
  • Coupled with the general purpose nature of these capabilities, this means that, conceivably, from some time around 2040, very few humans will be able to find paid work.

A worked example: a site carpenter

During the Big Potatoes debate on Tuesday, I pressed the participants to name an occupation that would definitely be safe from incursion by robots and automation. What jobs, if any, will robots never be able to do?

One suggestion that came back was “site carpenter”. In this thinking, unfinished buildings are too complex, and too difficult for robots to navigate. Robots that try to make their way through these buildings, to tackle carpentry tasks, will likely fall down. Or assuming they don’t fall down, how will they cope with finding out that the reality in the building often varies sharply from the official specification? These poor robots will try to perform some carpentry task, but will get stymied when items are in different places from where they’re supposed to be. Or have different tolerances. Or alternatives have been used. Etc. Such environments are too messy for robots to handle.

My answer is as follows. Yes, present-day robots often do fall down. Critics seem to find this hilarious. But this is pretty similar to the fact that young children often fall down while learning to walk, or that novice skateboarders often fall down when unfamiliar with this mode of transport. However, robots will learn fast. One example is shown in this video of the “Atlas” humanoid robot from Boston Dynamics (now part of Google):

As for robots being able to deal with uncertainty and surprises, I’m frankly struck by the naivety of this question. Of course software can deal with uncertainty. Software calculates courses of action statistically and probabilistically, all the time. When software encounters information at variance with what it previously expected, it can adjust its planned course of action. Indeed, it can take the same kinds of steps that a human would consider – forming new hypotheses, and, when needed, checking back with management for confirmation.
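As a minimal illustration of that point – a sketch of the general pattern, not of any particular robot’s software – here is how a program can revise its plan when an observation contradicts its expectations. All the names and probabilities are invented:

```python
# Sketch of the general pattern: act on current beliefs; when an observation
# contradicts expectations, update the beliefs and re-plan.
# All names and probabilities here are illustrative assumptions.

def most_likely(beliefs):
    """Pick the hypothesis with the highest probability."""
    return max(beliefs, key=beliefs.get)

beliefs = {"joist_at_specified_position": 0.9, "joist_moved": 0.1}
plan = "fit_beam_at_specified_position"

observation = "joist_not_where_expected"
if observation == "joist_not_where_expected":
    # Information at variance with expectations: shift probability mass
    # to the rival hypothesis, then form a new course of action.
    beliefs = {"joist_at_specified_position": 0.05, "joist_moved": 0.95}
    if beliefs[most_likely(beliefs)] > 0.8:
        plan = "re_measure_and_fit_beam_at_actual_position"
    else:
        plan = "check_back_with_site_manager"  # confirmation, as a human would

print(plan)  # re_measure_and_fit_beam_at_actual_position
```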

The question is a reminder to me that the software and AI community needs to do a much better job of communicating the current capabilities of its field, and the likely improvements ahead.

What does it mean to be human?

For me, the most interesting part of Tuesday’s discussion was when it turned to the following questions:

  • Should these changes be welcomed, rather than feared?
  • What will these forthcoming changes imply for our conception of what it means to be human?

To my mind, technological unemployment will force us to rethink some of the fundamentals of the “Protestant work ethic” that permeates society. That ethic has played a decisive, positive role for the last few centuries, but that doesn’t mean we should remain under its spell indefinitely.

If we can change our conceptions, and if we can manage the resulting social transition, the outcome could be extremely positive.

Some of these topics were aired at a conference in New York City on 29th September: “The World Summit on Technological Unemployment”, which was run by Jim Clark’s World Technology Network.

Robotic Steel Workers

One of the many speakers at that conference, Scott Santens, has kindly made his slides available, here. Alongside many graphs on the increasing “winner takes all” nature of modern employment (in which productivity increases but median income declines), Santens offers a different way of thinking about how humans should be spending their time:

We are not facing a future without work. We are facing a future without jobs.

There is a huge difference between the two, and we must start seeing the difference, and making the difference more clear to each other.

In his blogpost “Jobs, Work, and Universal Basic Income”, Santens continues the argument as follows:

When you hate what you do as a job, you are definitely getting paid in return for doing it. But when you love what you do as a job or as unpaid work, you’re only able to do it because of somehow earning sufficient income to enable you to do it.

Put another way, extrinsically motivated work is work done before or after an expected payment. It’s an exchange. Intrinsically motivated work is work only made possible by sufficient access to money. It’s a gift.

The difference between these two forms of work cannot be overstated…

Traditionally speaking, most of the work going on around us is only considered work, if one gets paid to do it. Are you a parent? Sorry, that’s not work. Are you in paid childcare? Congratulations, that’s work. Are you an open source programmer? Sorry, that’s not work. Are you a paid software engineer? Congratulations, that’s work…

What would enable this transformation is some variant of a “basic income guarantee” – a concept that is introduced in the slides by Santens, and also in the above-mentioned book by Martin Ford. You can hear Ford discuss this option in his RSA podcast, where he ably handles a large number of questions from the audience.

What I found particularly interesting from that podcast was a comment made by Anthony Painter, the RSA’s Director of Policy and Strategy who chaired the event:

The RSA will be advocating support for Basic Income… in response to Technological Unemployment.

(This comment comes about 2/3 of the way through the podcast.)

To be clear, I recognise that there will be many difficulties in any transition from the present economic situation to one in which a universal basic income applies. That transition is going to be highly challenging to manage. But these problems of transition are far better problems to have than the consequences of vastly increased unpaid unemployment and social alienation.

Life is being redefined

Just in case you’re still tempted to dismiss the above scenarios as some kind of irresponsible fantasy, there’s one more resource you might like to consult. It’s by Janna Q. Anderson, Professor of Communications at Elon University, and is an extended write-up of a presentation I heard her deliver at the World Future 2015 conference in San Francisco this July.

Janna Anderson keynote

You can find Anderson’s article here. It starts as follows:

The Robot Takeover is Already Here

The machines that replace us do not have to have superintelligence to execute a takeover with overwhelming impacts. They must merely extend as they have been, rapidly becoming more and more instrumental in our essential systems.

It’s the Algorithm Age. In the next few years humans in most positions in the world of work will be nearly 100 percent replaced by or partnered with smart software and robots —’black box’ invisible algorithm-driven tools. It is that which we cannot see that we should question, challenge and even fear the most. Algorithms are driving the world. We are information. Everything is code. We are becoming dependent upon and even merging with our machines. Advancing the rights of the individual in this vast, complex network is difficult and crucial.

The article is described as being a “45 minute read”. In turn, it contains numerous links, so you could spend far longer following the resulting ideas. In view of the momentous consequences of the trends being discussed, that could prove to be a good use of your time.

By way of summary, I’ll pull out a few sentences from the middle of the article:

One thing is certain: Employment, as it is currently defined, is already extremely unstable and today many of the people who live a life of abundance are not making nearly enough of an effort yet to fully share what they could with those who do not…

It’s not just education that is in need of an overhaul. A primary concern in this future is the reinvention of humans’ own perceptions of human value…

[Another] thing is certain: Life is being redefined.

Who controls the robots?

Despite the occasional certainty in this field (as just listed above, extracted from the article by Janna Anderson), there remains a great deal of uncertainty. I share with my Big Potatoes colleagues the viewpoint that technology does not determine social responses. The question of which future scenario will unfold isn’t just a question of cheer-leading (if you’re an optimist) or cowering (if you’re a pessimist). It’s a question of choice and action.

That’s a theme I’ll be addressing next Sunday, 18th October, at a lunchtime session of the 2015 Battle of Ideas. The session is entitled “Man vs machine: Who controls the robots”.


Here’s how the session is described:

From Metropolis through to recent hit film Ex Machina, concerns about intelligent robots enslaving humanity are a sci-fi staple. Yet recent headlines suggest the reality is catching up with the cultural imagination. The World Economic Forum in Davos earlier this year hosted a serious debate around the Campaign to Stop Killer Robots, organised by the NGO Human Rights Watch to oppose the rise of drones and other examples of lethal autonomous warfare. Moreover, those expressing the most vocal concerns around the march of the robots can hardly be dismissed as Luddites: the Elon-Musk funded and MIT-backed Future of Life Institute sparked significant debate on artificial intelligence (AI) by publishing an open letter signed by many of the world’s leading technologists and calling for robust guidelines on AI research to ‘avoid potential pitfalls’. Stephen Hawking, one of the signatories, has even warned that advancing robotics could ‘spell the end of the human race’.

On the other hand, few technophiles doubt the enormous potential benefits of intelligent robotics: from robot nurses capable of tending to the elderly and sick through to the labour-saving benefits of smart machines performing complex and repetitive tasks. Indeed, radical ‘transhumanists’ openly welcome the possibility of technological singularity, where AI will become so advanced that it can far exceed the limitations of human intelligence and imagination. Yet, despite regular (and invariably overstated) claims that a computer has managed to pass the Turing Test, many remain sceptical about the prospect of a significant power shift between man and machine in the near future…

Why has this aspect of robotic development seemingly caught the imagination of even experts in the field, when even the most remarkable developments still remain relatively modest? Are these concerns about the rise of the robots simply a high-tech twist on Frankenstein’s monster, or do recent breakthroughs in artificial intelligence pose new ethical questions? Is the question more about who builds robots and why, rather than what they can actually do? Does the debate reflect the sheer ambition of technologists in creating smart machines or a deeper philosophical crisis in what it means to be human?

As you can imagine, I’ll be taking serious issue with the above claim, from the session description, that progress with robots will “remain relatively modest”. I’ll also be arguing for a strong focus on questions of control.

It’s not just a question of whether it’s humans or robots that end up in control of the planet. There’s a critical preliminary question as to which groupings and systems of humans end up controlling the evolution of robots, software, and automation. Should we leave this control to market mechanisms, aided by investment from the military? Or should we exert a more general human control of this process?

In line with my recent essay “Four political futures: which will you choose?”, I’ll be arguing for a technoprogressive approach to control, rather than a technolibertarian one.

Four futures

I wait with interest to find out how much this viewpoint will be shared by the other speakers at this session.

30 September 2013

Questions about Hangouts on Air

Filed under: collaboration, Google, Hangout On Air, intelligence — David Wood @ 11:05 pm

I’m still learning about how to get the best results from Google Hangouts On Air – events that are broadcast live over the Internet.

On Sunday, I hosted a Hangout On Air which ran pretty well. However, several features of the experience were disappointing.

Here, I’m setting aside questions about what the panellists said. It was a fascinating discussion, but in this blogpost, I want to ask some questions, instead, about the technology involved in creating and broadcasting the Hangout On Air. That was the disappointing part.

If anyone reading this can answer my questions, I’ll be most grateful.

If you take a quick look at the beginning of the YouTube video of the broadcast, you’ll immediately see the first problem I experienced:

The problem was that the video uplink from my own laptop didn’t get included in the event. Instead of showing what I thought I was contributing, the event just displayed my G+ avatar (a static picture of my face). That was in contrast to the situation for the other four participants.

When I looked at the Hangout On Air window on my laptop as I was hosting the call, it showed me a stream of images recorded by my webcam. It also showed, at other times, slides which I was briefly presenting. That’s what I saw, but no-one else saw it. None of these displays made it into the broadcast version.

Happily, the audio feed from my laptop did reach the broadcast version. But not the video.

As it happens, I think that particular problem was “just one of those things”, which happen rarely, and in circumstances that are difficult to reproduce. I doubt this problem will recur in this way, the next time I do such an event. I believe that the software system on my laptop simply got itself into a muddle. I saw other evidence for the software being in difficulty:

  • As the event was taking place, I got notifications that people had added me to their G+ circles. But when I clicked on these notifications, to consider reciprocally adding these people into my own circles, I got an error message, saying something like “Cannot retrieve circle status info at this time”
  • After the event had finished, I tried to reboot my laptop. The shutdown hung, twice. First, it hung with a most unusual message, “Waiting for explorer.exe – playing logoff sound”. Second, after I accepted the suggestion from the shutdown dialog to close down that app regardless, the laptop hung indefinitely in the final “shutting down” display. In the end, I pressed the hardware reset button.

That muddle shouldn’t have arisen, especially as I had taken the precaution of rebooting my laptop some 30 minutes before the event was due to start. But it did. What made things worse is that I only became aware of the issue once the Hangout had already started its broadcast phase.

At that time, the other panellists told me they couldn’t see any live video from my laptop. I tried various quick fixes (e.g. switching my webcam off and on), but to no avail. I also wondered whether I was suffering from a local bandwidth restriction, but I had reset my broadband router 30 minutes before the call started, and I was the only person in my house at that time.

“Exit the hangout and re-enter it” was the next suggestion offered to me. Maybe that would fix things.

But this is where I see a deeper issue with the way Hangouts On Air presently work.

From my experience (though I’ll be delighted if people can tell me otherwise), when the person who started the Hangout On Air exits the event, the whole event shuts down. The other panellists can exit and rejoin without terminating the event; not so for the host.

By the time I found out about the video uplink problem, I had already published the URL where the YouTube broadcast of the Hangout could be watched. After starting the Hangout On Air (but before discovering the problem with my video feed), I had copied this URL to quite a few different places on social media – Meetup.com, Facebook, etc. I knew that people were already watching the event. If I exited the Hangout, to see whether that would get the video uplink working again, we would have had to start a new Hangout, which would have had a different YouTube URL. I would then have had to manually update all these social networking pages.

I can imagine two possible solutions to this – but I don’t think either is available yet, right?

  1. There may be a mechanism for the host to leave the Hangout On Air, without that Hangout terminating
  2. There may be a mechanism for something like a URL redirector to work, even for a second Hangout instance, which replaces a previous instance. The same URL would work for two different Hangouts (a minimal sketch of this idea follows below).
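As a sketch of what option 2 might look like behind the scenes – purely hypothetical, since as far as I know Google offers no such feature – a stable URL under the host’s control could simply redirect to whichever YouTube URL the current Hangout instance happens to have. Using only Python’s standard library:

```python
# Hypothetical sketch of option 2: a stable URL, under the host's control,
# that redirects to the current Hangout instance's YouTube URL.
# This is not a Google feature; it uses only Python's standard library.
from http.server import BaseHTTPRequestHandler, HTTPServer

# The host repoints this at the new YouTube URL if the Hangout is restarted.
CURRENT_TARGET = "https://www.youtube.com/watch?v=PLACEHOLDER_FIRST_INSTANCE"

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)                      # temporary redirect
        self.send_header("Location", CURRENT_TARGET)
        self.end_headers()

if __name__ == "__main__":
    # Every link already shared on social media points here, and so keeps
    # working even if the underlying Hangout has to be restarted.
    HTTPServer(("", 8080), Redirector).serve_forever()
```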

Incidentally, in terms of URLs for the Hangout, note that there are at least three different such URLs:

  1. The URL of the “inside” of the Hangout, which the host can share with panellists to allow them to join it
  2. The URL of the Google+ window where the Hangout broadcast runs
  3. The URL of the YouTube window where the Hangout broadcast runs.

As far as I know, all three URLs change when a Hangout is terminated and restarted. What’s more, #1 and #3 are created when the Hangout starts, even before it switches into Broadcast mode, whereas #2 is only available when the host presses the “Start broadcasting” button.

In short, it’s a pretty complicated state of affairs. I presume that Google are hard at work to simplify matters…

To look on the positive side, one outcome that I feared (as I mentioned previously) didn’t come to pass. That outcome was my laptop over-heating. Instead, according to the CPU temperature monitor widget that I run on my laptop, the temperature remained comfortable throughout (reaching the 70s Centigrade, but staying well short of the 100 degree value which triggers an instant shutdown). I imagine that, because no video uplink was taking place, there was no strong CPU load on my laptop. I’ll have to wait to see what happens next time.

After all, over-heating is another example of something that might cause a Hangout host to want to temporarily exit the Hangout, without bringing the whole event to a premature end. There are surely other examples as well.

27 September 2013

Technology for improved collaborative intelligence

Filed under: collaboration, Hangout On Air, intelligence, Symbian — David Wood @ 1:02 pm

Interested in experiences in using Google Hangout On Air, as a tool to improve collaborative intelligence? Read on.

Google’s PageRank algorithm. The Wikipedia editing process. Ranking of reviewers on Amazon.com. These are all examples of technology helping to elevate useful information above the cacophony of background noise.

To be clear, in such examples, insight doesn’t just come from technology. It comes from a combination of good tools plus good human judgement – aided by processes that typically evolve over several iterations.
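As a reminder of how the first of those examples works, here’s a toy sketch of the core idea behind PageRank – a three-page web, invented for illustration, not Google’s production algorithm:

```python
# Toy sketch of PageRank's core idea: pages endorsed by other well-regarded
# pages rise above the noise. The three-page web is invented for illustration.
links = {"A": ["B"], "B": ["A", "C"], "C": ["A"]}    # who links to whom
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}
damping = 0.85

for _ in range(50):                                  # power iteration
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

print(max(rank, key=rank.get))  # "A" - the most-endorsed page wins
```

The algorithm alone doesn’t produce insight, of course: it amplifies the distributed human judgement already embodied in the links.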

For London Futurists, I’m keen to take advantage of technology to accelerate the analysis of radical scenarios for the next 3-40 years. One issue is that the general field of futurism has its own fair share of background noise:

  • Articles that are full of hype or sensationalism
  • Articles motivated by commercial concerns, with questionable factual accuracy
  • Articles intended for entertainment purposes, but which end up overly influencing what people think.

Lots of people like to step on the gas when talking about the future, but that doesn’t mean they know what they’re talking about.

I’ve generally been pleased with the quality of discussion in London Futurists real-life meetings, held (for example) in Birkbeck College, Central London. The speaker contributions in these meetings are important, but the audience members collectively raise a lot of good points too. I do my best to ‘referee’ the discussions, so that a range of opinions has a chance to be aired. But there have been three main limitations with these meetups:

  1. Meetings often come to an end well before we’ve got to the bottom of some of the key lines of discussion
  2. The insights from individual meetings can sometimes fail to be taken forward into subsequent meetings – where the audience members are different
  3. Attendance is limited to people who live near to London, and who have no other commitments when the meetup is taking place.

These limitations won’t disappear overnight, but I have plans to address them in stages.

I’ve explained some of my plans in the following video, which is also available at http://londonfuturists.com/2013/08/30/introducing-london-futurists-academy/.

As the video says, I want to be able to take advantage of the same kind of positive feedback cycles that have accelerated the progress of technology, in order to accelerate in a similar way the generation of reliable insight about the future.

As a practical step, I’m increasingly experimenting with Google Hangouts, as a way to:

  • Involve a wider audience in our discussions
  • Preserve an online record of the discussions
  • Find out, in real-time, which questions the audience collectively believes should be injected into a conversation.

In case it helps others who are also considering the usage of Google Hangouts, here’s what I’ve found out so far.

A Hangout is a multi-person video conference call. Participants have to log in via one of their Google accounts. They also have to download an app, inside Google Plus, before they can take part in the Hangout. Google Plus will prompt them to download the app.

The Hangout system comes with its own set of plug-in apps. For example, participants can share their screens, which is a handy way of showing some PowerPoint slides that back up a point you are making.

By default, the maximum number of attendees is 10. However, if the person who starts the Hangout has a corporate account with Google (as I have, for my company Delta Wisdom), that number can increase to 15.

For London Futurists meetings, instead of a standard “Hangout”, I’m using “Hangouts On Air” (sometimes abbreviated as ‘HOA’). These are started from within their own section of the Google Plus page:

  • The person starting the call (the “moderator”) creates the session in a “pre-broadcast” state, in which he/she can invite a number of participants
  • At this stage, the URL where the Hangout can be viewed on YouTube is generated; this vital piece of information can be published on social networking sites
  • The moderator can also take some other pre-broadcast steps, such as enabling the “Questions” app (further mentioned below)
  • When everyone is ready, the moderator presses the big red “Start broadcast” button
  • A wide audience is now able to watch the panellists’ discussion via the YouTube URL, or on the Google Plus page of the moderator.

For example, there will be a London Futurists HOA this Sunday, starting 7pm UK time. There will be four panellists, plus me. The subject is “Projects to accelerate radical healthy longevity”. The details are here. The event will be visible on my own Google Plus page, https://plus.google.com/104281987519632639471/posts. Note that viewers don’t need to be included in any of the Circles of the moderator.

As the HOA proceeds, viewers typically see the current speaker at the top of the screen, along with the other panellists in smaller windows below. The moderator has the option to temporarily “lock” one of the participants into the top area, so that their screen has prominence at that time, even though other panellists might be speaking.

It’s good practice for panellists to mute their microphones when they’re not speaking. That kind of thing is useful for the panellists to rehearse with the moderator before the call itself (perhaps in a brief preview call several days earlier), in order to debug connectivity issues, the installation of apps, camera positioning, lighting, and so forth. Incidentally, it’s best if there’s a source of lighting in front of the speaker, rather than behind.

How does the audience get to interact with the panellists in real-time? Here’s where things become interesting.

First, anyone watching via YouTube can place text comments under the YouTube window. These comments are visible to the panellists:

  • Either by keeping an eye on the same YouTube window
  • Or, simpler, within the “Comment Tracker” tab of the “Hangout Toolbox” app that is available inside the Hangout window.

However, people viewing the HOA via Google Plus have a different option. Provided the moderator has enabled this feature before the start of the broadcast, viewers will see a big button inviting them to ask a question, in a text box. They will also be able to view the questions that other viewers have submitted, and to give a ‘+1’ thumbs up endorsement.

In real-time, the panellists can see this list of questions appear on their screens, inside the Hangout window, along with an indication of how many ‘+1′ votes they have received. Ideally, this will help the moderator to pick the best question for the panel to address next. It’s a small step in the direction of greater collaborative intelligence.
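The underlying mechanism is simple enough to sketch in a few lines (the questions and vote counts below are invented; this is not Google’s implementation):

```python
# Sketch of the vote-ranking idea behind the Questions feature.
# The questions and vote counts are invented; not Google's implementation.
questions = [
    {"text": "How soon could a basic income be affordable?", "plus_ones": 12},
    {"text": "Which jobs will be automated first?", "plus_ones": 7},
    {"text": "What about retraining programmes?", "plus_ones": 3},
]

# Surface the questions the audience collectively rates most highly.
ranked = sorted(questions, key=lambda q: q["plus_ones"], reverse=True)
for q in ranked:
    print(f'{q["plus_ones"]:>3}  {q["text"]}')
```

The interesting part isn’t the sorting, of course, but the fact that the ranking aggregates the judgement of the whole audience in real time.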

At the time of writing, I don’t think there’s an option for viewers to downvote each other’s questions. However, there is an option to declare that a question is spam. I expect the Google team behind HOA will be making further enhancements before long.

This Questions app is itself an example of how the Google HOA technology is improving. The last time I ran a HOA for London Futurists, the Questions app wasn’t available, so we just used the YouTube comments mechanism. One of the panellists for that call, David Orban, suggested I should look into another tool, called Google Moderator, for use on a subsequent occasion. I took a look, and liked what I saw, and my initial announcement of my next HOA (the one happening on Sunday) mentioned that I would be using Google Moderator. However, as I said, technology moves on quickly. Giulio Prisco drew my attention to the recently announced Questions feature of the HOA itself – a feature that had previously been in restricted test usage, but which is now available to all users of HOA. So we’ll be using that instead of Google Moderator (which is a rather old tool, without any direct connection into the Hangout app).

The overall HOA system is still new, and it’s not without its issues. First, panellists have a lot of different places they might need to look as the call progresses:

  • The “YouTube comment tracker” screen and the “Questions” screen are mutually exclusive: panellists can only have one of them visible at a time
  • Both of these screens are in turn mutually exclusive with the text chat window which the panellists can use to chat amongst themselves (for example, to coordinate who will be speaking next) while another panellist is speaking.

Second – and this is what currently makes me most apprehensive – the system seems to put a lot of load on my laptop, whenever I am the moderator of a HOA. I’ve actually seen something similar whenever my laptop is generating video for any long call. The laptop gets hotter and hotter as time progresses, and might even cut out altogether – as happened one hour into the last London Futurists HOA (see the end of this video).

Unfortunately, when the moderator’s PC loses connection to the HOA, the HOA itself seems to shut down (after a short delay, to allow quick reconnections). If this happens again on Sunday, we’ll restart the HOA as soon as possible. The “part two” will be visible on the same Google Plus page, but the corresponding YouTube video will have its own, brand new URL.

Since the last occurrence of my laptop overheating during a video call, I’ve had a new motherboard installed, plus a new hard disk (as the old one was giving some diagnostic errors), and had all the dust cleaned out of my system. I’m keeping my fingers crossed for this Sunday. Technology brings its challenges as well as many opportunities…

Footnote: This threat of over-heating reminds me of a talk I gave on several occasions as long ago as 2006, while at Symbian, about “Horsemen of the apocalypse”, including fire. Here’s a brief extract:

Standing in opposition to the potential for swift continuing increase in mobile technology, however, we face a series of major challenges. I call them “horsemen of the apocalypse”.  They include fire, flood, plague, and warfare.

“Fire” is the challenge of coping with the heat generated by batteries running ever faster. Alas, batteries don’t follow Moore’s Law. As users demand more work from their smartphones, their battery lifetimes will tend to plummet. The solution involves close inter-working of new hardware technology (including multi-core processors) and highly sophisticated low-level software. Together, this can reduce the voltage required by the hardware, and the device can avoid catching fire as it performs its incredible calculations…

12 March 2013

The coming revolution in mental enhancement

Filed under: entrepreneurs, futurist, intelligence, neuroengineering, nootropics, risks, UKH+ — David Wood @ 2:50 pm

Here’s a near-future scenario: Within five years, 10% of people in the developed world will be regularly taking smart drugs that noticeably enhance their mental performance.

It turns out there may be a surprising reason for this scenario to fail to come to pass. I’ll get to that shortly. But first, let’s review why the above scenario would be a desirable one.

As so often, Nick Bostrom presents the case well. Nick is Professor at the Faculty of Philosophy & Oxford Martin School, Director of the Future of Humanity Institute, and Director of the Programme on the Impacts of Future Technology, all at the University of Oxford. He wrote in 2008:

Those who seek the advancement of human knowledge should [consider] kinds of indirect contribution…

No contribution would be more generally applicable than one that improves the performance of the human brain.

Much more effort ought to be devoted to the development of techniques for cognitive enhancement, be they drugs to improve concentration, mental energy, and memory, or nutritional enrichments of infant formula to optimize brain development.

Society invests vast resources in education in an attempt to improve students’ cognitive abilities. Why does it spend so little on studying the biology of maximizing the performance of the human nervous system?

Imagine a researcher invented an inexpensive drug which was completely safe and which improved all‐round cognitive performance by just 1%. The gain would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited from the drug the inventor would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists. Each year the invention would amount to an indirect contribution equal to 100,000 times what the average scientist contributes. Even an Einstein or a Darwin at the peak of their powers could not make such a great impact.

Meanwhile others too could benefit from being able to think better, including engineers, school children, accountants, and politicians.

This example illustrates the enormous potential of improving human cognition by even a tiny amount…

The first objection to the above scenario is that it is technically infeasible. People imply that no such drug could possibly exist. Any apparent evidence offered to the contrary is inevitably suspect. Questions can be raised over the anecdotes shared in the Longecity thread “Ten months of research condensed – A total newbies guide to nootropics” or in the recent Unfinished Man review “Nootropics – The Facts About ‘Smart Drugs'”. After all, the reasoning goes, the brain is too complex. So these anecdotes are likely to involve delusion – whether it is self-delusion (people not being aware of placebo effects and similar) or delusion from snake oil purveyors who have few scruples in trying to sell products.

A related objection is that the side-effects of such drugs are unknown or difficult to assess. Yes, there are substances (take alcohol as an example) which can aid our creativity, but with all kinds of side-effects. The whole field is too dangerous – or so it is said.

These objections may have carried weight some years ago, but increasingly they have less force. Other complex aspects of human functionality can be improved by targeted drugs; why not also the brain? Yes, people vary in how they respond to specific drug combinations, but that’s something that can be taken into account. Indeed, more data is being collected all the time.

Evidence of progress in the study of these smart drugs is one thing I expect to feature in an event taking place in central London this Wednesday (13th March).

The event, “The Miracle Pill: What do brain boosting drugs mean for the future?”, is being hosted by Nesta as part of the Policy Exchange “Next big thing” series.

Here’s an extract from the event website:

If you could take a drug to boost your brain-power, would you?

Drugs to enhance human performance are nothing new. Long-haul lorry drivers and aircraft pilots are known to pop amphetamines to stay alert, and university students down caffeine tablets to ward off drowsiness during all-nighters. But these stimulants work by revving up the entire nervous system and the effect is only temporary.

Arguments over smart drugs are raging. If a drug can improve an individual’s performance, and they do not experience side-effects, some argue, it cannot be such a bad thing.

But where will it all stop? Ambitious parents may start giving mind-enhancing pills to their children. People go to all sorts of lengths to gain an educational advantage and eventually success might be dependent on access to these mind-improving drugs…

This event will ask:

  • What are the limits to performance enhancement drugs, both scientifically and ethically? And who decides?
  • Is there a role for such pills in developing countries, where an extra mental boost might make a distinct difference?
  • Does there need to be a global agreement to monitor the development of these pills?
  • Should policymakers give drug companies carte blanche to develop these products or is a stricter regulatory regime required?

The event will be chaired by Louise Marston, Head of Innovation and Economic Growth, Nesta. The list of panellists is impressive:

  • Dr Bennett Foddy, Deputy Director and Senior Research Fellow, Institute for Science and Ethics, Oxford Martin School, University of Oxford
  • Dr Anders Sandberg, James Martin Fellow, Future of Humanity Institute, Oxford Martin School, University of Oxford
  • Dr Hilary Leevers, Head of Education & Learning, the Wellcome Trust
  • Dame Sally Davies, Chief Medical Officer for England.

Under-currents of mistrust

From my own experience in discussing smart drugs that could enhance mental performance, I’m aware that objections to their use often run more deeply than the technical questions covered above. There are often under-currents of mistrust:

  • Reliance on smart drugs is viewed as irresponsible, self-indulgent, or as cheating
  • There’s an association with the irresponsible advocacy of so-called “recreational” mind-altering drugs
  • Surely, it is said, there are more reliable and more honourable ways of enhancing our mental powers
  • Besides, what is the point of simply being able to think faster?

I strongly reject the implication of irresponsibility or self-indulgence. Increased mental capability can be applied to all sorts of important questions, resulting in scientific progress, technological breakthrough, more elegant product development, and social benefit. The argument I quoted earlier, from Nick Bostrom, applies here.

I also strongly reject the “either/or” implication, when people advocate the pursuit of more traditional methods of mental enhancement instead of reliance on modern technology. Why can’t we do both? When considering our physical health, we pay attention to traditional concerns, such as diet and rest, as well as to the latest medical findings. It should be the same for our mental well-being.

No, the real question is: does it work? And once it becomes clearer that certain combinations of smart drugs can make a significant difference to our mental prowess, with little risk of unwelcome side effects, the other objections to their use will quickly fade away.

It will be similar to the rapid change in attitudes towards IVF (“test tube babies”). I remember a time when all sorts of moral and theological hand-wringing took place over the possibility of in-vitro fertilisation. This hubristic technology, it was said, might create soul-less monstrosities; only wickedly selfish people would ever consider utilising the treatment. That view was held by numerous devout observers – but quickly faded away, in the light of people’s real-world experience with the resulting babies.

Timescales

This brings us back to the question: how quickly can we expect progress with smart drugs? It’s the 64 million dollar question. Actually it might be a 640 million dollar question. Possibly even more. The entrepreneurs and companies who succeed in developing and marketing good products in the field of mental enhancement stand to tap into very sizeable revenue streams. Pfizer, the developer of Viagra, earned revenues of $509 million in 2008 alone, from that particular enhancement drug. The developers of a Viagra for the mind could reasonably imagine similar revenues.

The barriers here are regulatory as well as technical. But with a rising public interest in the possibility of significant mental enhancement, the mood could swing quickly, enabling much more vigorous investment by highly proficient companies.

The biophysical approach

But there’s one more complication.

Actually this is a positive complication rather than a negative one.

Critics who suggest that there are better approaches to enhancing mental powers than smart drugs might turn out to be right in a way they didn’t expect. The candidate for a better approach is to use non-invasive electrical and magnetic stimulation of the brain, targeted at specific functional areas.

A variety of “helmets” are already available, or have been announced as being under development.

The start-up website Flow State Engaged raises and answers a few questions on this topic, as follows:

Q: What is tDCS?

A: Transcranial direct-current stimulation (tDCS) is one of the coolest health/self improvement technologies available today. tDCS is a form of neurostimulation which uses a constant, low current delivered directly to the brain via small electrodes to affect brain function.

Q: Is this for real?

A: The US Army and DARPA both currently use tDCS devices to train snipers and drone pilots, and have recorded 2.5x increases in learning rates. This incredible phenomenon of increased learning has been documented by multiple clinical studies as well.

Q: You want one?

A: Today if you want a tDCS machine it’s nearly impossible to find one for less than $600, and you need a prescription to order one. We wanted a simpler cheaper option. So we made our own kit, for ourselves and for all you body hackers out there…

Someone who has made a close personal study of the whole field of nootropics and biophysical approaches (including tDCS) is London-based researcher Andrew Vladimirov.

Back in November, Andrew gave a talk to the London Futurists on “Hacking our wetware: smart drugs and beyond”. It was a well-attended talk that stirred up lots of questions, both in the meeting itself, and subsequently online.

The good news is that Andrew is returning to London Futurists on Saturday 23rd March, where his talk this time will focus on biophysical approaches to “hacking our wetware”.

You can find more details of this meeting here – including how to register to attend.

Introducing the smart-hat

In advance of the meeting, Andrew has shared an alternative vision of the ways in which many people in the not-so-distant future will pursue mental enhancement.

He calls this vision “Towards digital nootropics”:

You are tired, anxious and stressed, and perhaps suffer from a mild headache. Instead of reaching for a pack from Boots the local pharmacists, you put on a fashionable “smarthat” (a neat variation of an “electrocap” with a comfortable 10-20 scheme placement for both small electrodes and solenoids) or, perhaps, its lighter version – a “smart bandana”.

Your phone detects it and a secure wireless connection is instantly established. A Neurostimulator app opens. You select “remove anxiety”, “anti-headache” and “basic relaxation” options, press the button and continue with your business. In 10-15 minutes all these problems are gone.

However, there is still much to do, and an important meeting is looming. So, you go to the “enhance” menu of the Neurostimulator and browse through the long list of options which include “thinking flexibility”, “increase calculus skills”, “creative imagination”, “lateral brainstorm”, “strategic genius”, “great write-up”, “silver tongue” and “cram before exam” amongst many others. There is even a separate night menu with functionality such as “increase memory consolidation while asleep”. You select the most appropriate options, press the button and carry on the meeting preparations.

There are still 15 minutes to go, which is more than enough for the desired effects to kick in. If necessary, they can be monitored and adjusted via the separate neurofeedback menu, as the smarthat also provides limited EEG measurement capabilities. You may use a tablet or a laptop instead of the phone for that.

A new profession: neuroanalyst

Entrepreneurs reading this article may already have noticed the very interesting business-development opportunities this whole field offers. These same entrepreneurs may pay further attention to the next stage of Andrew Vladimirov’s “Towards digital nootropics” vision of the not-so-distant future:

Your neighbour Jane is a trained neuroanalyst, an increasingly popular trade that combines depth psychology and a variety of advanced non-invasive neurostimulation means. Her machinery is more powerful and sophisticated than your average smartphone Neurostim.

While you lie on her couch with the mindhelmet on, she can induce highly detailed memory recall, including memories of early childhood to go through with her as therapist. With a flick of a switch, she can also awaken dormant mental abilities and skills you’ve never imagined. For instance, you can become a savant for the time it takes to solve some particularly hard problem, and flip back to your normal state as you leave Jane’s office.

Since she is licensed, some ethical modulation options are also at her disposal. For instance, if Jane suspects that you are lying and deceiving her, the mindhelmet can be used to reduce your ability to lie – and you won’t even notice it.

Sounds like science fiction? The bulk of the necessary technologies is already there, and with enough effort the vision described could be realised in five years or so.

If you live in the vicinity of London, you’ll have the opportunity to question Andrew on aspects of this vision at the London Futurists meetup.

Smart drugs or smart hats?

Will we one day talk as casually about our smarthats as we currently do about our smartphones? Or will there be more focus, instead, on smart drugs?

Personally I expect we’ll be doing both. It’s not necessarily an either/or choice.

And there will probably be even more dramatic ways to enhance our mental powers – ways that we can scarcely conceive of today.

10 February 2013

Fixing bugs in minds and bugs in societies

Suppose we notice what appears to be bugs in our thinking processes. Should we try to fix these bugs?

Or how about bugs in the way society works? Should we try to fix these bugs too?

As examples of bugs of the first kind, I return to a book I reviewed some time ago, “Kluge: The Haphazard Construction of the Human Mind”. I entitled my review “The human mind as a flawed creation of nature”, and I still stand by that description. In that review, I pulled out the following quote from near the end of the book:

In this book, we’ve discussed several bugs in our cognitive makeup: confirmation bias, mental contamination, anchoring, framing, inadequate self-control, the ruminative cycle, the focussing illusion, motivated reasoning, and false memory, not to mention absent-mindedness, an ambiguous linguistic system, and vulnerability to mental disorders. Our memory, contextually driven as it is, is ill suited to many of the demands of modern life, and our self-control mechanisms are almost hopelessly split. Our ancestral mechanisms were shaped in a different world, and our more modern deliberative mechanisms can’t shake the influence of that past. In every domain we have considered, from memory to belief, choice, language, and pleasure, we have seen that a mind built largely through the progressive overlay of technologies is far from perfect…

These bugs in our mental makeup are far from being harmless quirks or curiosities. They can lead us:

  • to overly trust people who have visual trappings of authority,
  • to fail to make adequate provision for our own futures,
  • to keep throwing money into bad investments,
  • and to jump to all kinds of dangerous premature conclusions.

But should we try to fix these bugs?

The field where the term ‘bug’ was first used in this sense of a mistake, software engineering, provides many cautionary tales of bug fixing going wrong:

  • Sometimes what appears to be a ‘bug’ in a piece of software turns out to be a useful ‘feature’, with a good purpose after all
  • Sometimes a fix introduces unexpected side-effects, which are worse than the bug which was fixed.

I shared an example of the second kind in the “Managing defects” chapter of the book I wrote in 2004-5, “Symbian for software leaders: principles of successful smartphone development projects”:

An embarrassing moment with defects

The first million-selling product that I helped to build was the Psion Series 3a handheld computer. This was designed as a distinct evolutionary step-up from its predecessor, the original Series 3 (often called the “Psion 3 classic” in retrospect)…

At last the day came (several weeks late, as it happened) to ship the software to Japan, where it would be flashed into large numbers of chips ready to assemble into production Series 3a devices. It was ROM version 3.20. No sooner was it sent than panic set in among the development team. Two of us had independently noticed a new defect in the agenda application. If a user set an alarm on a repeating entry, and then adjusted the time of this entry, in some circumstances the alarm would fail to ring. We reasoned that this was a really bad defect – after all, two of us had independently found it.

The engineer who had written the engine for the application – the part dealing with all data manipulation algorithms, including calculating alarm times – studied his code, and came up with a fix. We were hesitant, since it was complex code. So we performed a mass code review: lots of the best brains in the team talked through the details of the fix. After twenty four hours, we decided the fix was good. So we recalled 3.20, and released 3.21 in its place. To our relief, no chips were lost in the process: the flashing had not yet started.

Following standard practice, we upgraded the prototype devices of everyone in the development team, to run 3.21. As we waited for the chips to return, we kept using our devices – continuing (in the jargon of the team) to “eat our own dog food”. Strangely, there were a few new puzzling problems with alarms on entries. Actually, it soon became clear these problems were a lot worse than the problem that had just been fixed. As we diagnosed these new problems, a sinking feeling grew. Despite our intense care (but probably because of the intense pressure) we had failed to fully consider all the routes through the agenda engine code; the change made for 3.21 was actually a regression on previous behaviour.

Once again, we made a phone call to Japan. This time, we were too late to prevent some tens of thousands of wasted chips. We put the agenda engine code back to its previous state, and decided that was good enough! (Because of some other minor changes, the shipping version number was incremented to 3.22.) We decided to live with this one defect, in order not to hold up production any longer.

We were expecting to hear more news about this particular defect from the Psion technical support teams, but the call never came. This defect never featured on the list of defects reported by end users. In retrospect, we had been misled by the fact that two of us had independently found this defect during the final test phase: this distorted our priority call…

That was an expensive mistake, which seared a cautionary attitude into my own brain, regarding the dangers of last-minute changes to complex software. All seasoned software engineers have similar tales they can tell, from their own experience.
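
To make the failure mode concrete, here is a minimal sketch in Python. It is emphatically not the real agenda engine code (which is not reproduced here); the data model, the cache-with-sentinel scheme, and every name below are invented purely to illustrate how a narrowly-targeted fix can regress behaviour along routes its reviewers never re-walked:

    from dataclasses import dataclass
    from typing import Optional

    SUPPRESSED = -1  # hypothetical sentinel cached when the user dismisses an alarm

    @dataclass
    class Entry:
        start: int                   # start time, in minutes since midnight
        repeating: bool
        alarm_offset: Optional[int]  # minutes before start; None = no alarm set

    def compute_alarm(entry: Entry) -> Optional[int]:
        """Recompute the alarm time from first principles."""
        if entry.alarm_offset is None:
            return None
        return entry.start - entry.alarm_offset

    def next_alarm_v320(entry: Entry, cached: Optional[int]) -> Optional[int]:
        # "v3.20" behaviour: trust the cache. Dismissals (the sentinel) are
        # respected, but a repeating entry that is moved after the cache was
        # filled keeps its stale alarm time - the shipped defect.
        if cached == SUPPRESSED:
            return None
        if entry.repeating and cached is not None:
            return cached
        return compute_alarm(entry)

    def next_alarm_v321(entry: Entry, cached: Optional[int]) -> Optional[int]:
        # "v3.21" fix: ignore the cache and always recompute. This cures the
        # stale time, but silently discards the dismissal sentinel, so
        # already-acknowledged alarms ring again - a regression worse than
        # the bug it replaced.
        return compute_alarm(entry)

    if __name__ == "__main__":
        entry = Entry(start=9 * 60, repeating=True, alarm_offset=15)
        cached = compute_alarm(entry)              # cache filled: alarm at 8:45
        entry.start = 10 * 60                      # user moves entry to 10:00
        print(next_alarm_v320(entry, cached))      # 525: stale - the original bug
        print(next_alarm_v321(entry, SUPPRESSED))  # 585: dismissed alarm resurrected

The point of the toy is that each version is locally defensible; only the interaction between the cache’s two meanings exposes the regression – exactly the kind of hidden route that a code review conducted under intense pressure can miss.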

If attempts to fix defects in software are often counter-productive, how much more dangerous are attempts to fix defects in our thinking processes – or defects in how our societies operate! At least in the first case, we generally still have access to the source code, and to the design intention of the original software authors. For the other examples, the long evolutionary history that led to particular designs is something at which we can only guess. It would be like trying to fix a software bug that somehow results from the combination of many millions of lines of source code, written decades ago by people who left no documentation and who are not available for consultation.

What I’ve just stated is a version of an argument that conservative-minded thinkers often give against attempts to conduct “social engineering” or “improve on nature”. Tinkering with ages-old thinking processes – or with structures within societies – carries the risk that we fail to appreciate many hidden connections. Therefore (the argument runs) we should desist from any such experimentation.

Versions of this argument appeared, from two different commentators, in responses to my previous blogpost. One put it like this:

The trouble is that ‘cognitive biases and engrained mistakes’ may appear dysfunctional but they are, in fact, evolutionarily successful adaptations of humanity to its highly complex environment. These, including prejudice, provide highly effective means for the resolution of really existing problems in human capacity…

Rational policies to deal with human and social complexity have almost invariably been proved to be inhumane and brutal, fine for the theoretician in the British Library, but dreadful in the field.

Another continued the theme:

I have much sympathy for [the] point about “cognitive biases and engrained mistakes”. The belief that one has identified cognitive bias in another or has liberated oneself from such can be a “Fatal Conceit,” to borrow a phrase from Hayek, and has indeed not infrequently given rise to inhumane treatment even of whole populations. One of my favourite sayings is David Hume’s “the rules of morality are not conclusions of our reason,” which is at the heart of Hayek’s Fatal Conceit argument.

But the conclusion I draw is different. I don’t conclude, “Never try to fix bugs”. After all, the very next sentence from my chapter on “Managing defects” stated, “We eventually produced a proper fix several months later”. Indeed, many bugs do demand urgent fixes. Instead, my conclusion is that bug fixing in complex systems needs a great deal of careful thought, including cautious experimentation, data analysis, and peer review.

The analogy can be taken one more step. Suppose that a software engineer has a bad track record in his or her defect fixes. Despite claiming, each time, to be exercising care and attention, the results speak differently: the fixes usually make things worse. Suppose, further, that this software engineer comes from a particular company, and that fixes from that company have the same poor track record. (To make this more vivid, the name of this company might be “Technocratic solutions” or “Socialista” or “Utopia software”. You can probably see where this argument is going…) That would be a reason for especial discomfort if someone new from that company is submitting code changes in attempts to fix a given bug.

Well, something similar happens in the field of social change. History has shown, in many cases, that attempts at mental engineering and social engineering were counter-productive. For that reason, many conservatives support various “precautionary principles”. They are especially fearful of any social changes proposed by people they can tar with labels such as “technocratic” or “socialist” or “utopian”.

These precautionary principles presuppose that the ‘cure’ will be worse than the ‘disease’. However, I personally have greater confidence in the fast improving power of new fields of science, including the fields that study our mind and brain. These improvements are placing ever greater understanding in our hands – and hence, ever greater power to fix bugs without introducing nasty side-effects.

For these reasons, I do look forward (as I said in my previous posting) to these improvements

helping individuals and societies rise above cognitive biases and engrained mistakes in reasoning… and accelerating a reformation of the political and economic environment, so that the outcomes that are rationally best are pursued, instead of those which are expedient and profitable for the people who currently possess the most power and influence.

Finally, let me offer some thoughts on the observation that “the rules of morality are not conclusions of our reason”. That observation is vividly supported by the disturbing “moral dumbfounding” examples discussed by Jonathan Haidt in his excellent book “The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” (which I briefly reviewed here). But does that observation mean that we should stop trying to reason with people about moral choices?

MoralLandscape

Here, I’ll adapt comments from my review of “The Moral Landscape: How Science Can Determine Human Values”, by Sam Harris.

That book considers how we might go about finding answers to big questions such as “how should I live?” and “what makes some ways of life more moral than others?”  As some specific examples, how should we respond to:

  • The Taliban’s insistence that the education of girls is an abomination?
  • The stance by Jehovah’s Witnesses against blood transfusion?
  • The prohibition by the Catholic Church of the use of condoms?
  • The legalisation of same-sex relationships?
  • The use of embryonic stem cells in the search for cures of diseases such as Alzheimer’s and Parkinson’s?
  • A would-be Islamist suicide bomber who is convinced that his intended actions will propel him into a paradise of abundant mental well-being?

One response is that such questions are the province of religion. The correct answers are revealed via prophets and/or holy books.  The answers are already clear, to those with the eye of faith. It is a divine being that tells us, directly or indirectly, the difference between good and evil. There’s no need for experimental investigations here.

A second response is that the main field to study these questions is that of philosophy. It is by abstract reason that we can determine the difference between good and evil.

But Sam Harris, instead, primarily advocates the use of the scientific method. Science enters the equation because it is increasingly able to identify:

  • Neural correlates (or other physical or social underpinnings) of sentient well-being
  • Cause-and-effect mechanisms whereby particular actions typically bring about particular changes in these neural correlates.

With the help of steadily improving scientific understanding, we can compare different actions based on their likely effects on sentient well-being. Actions which are likely to magnify sentient well-being are good, and those which are likely to diminish it are evil. That’s how we can evaluate, for example, the Taliban’s views on girls’ education.
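
Harris offers no algorithm, but the comparison he describes has a simple consequentialist skeleton, which the following toy Python sketch makes explicit. The policies, outcome lists, and numbers are pure placeholders – stand-ins for the empirical measurements of well-being that the passage imagines science one day supplying:

    from typing import List, Tuple

    Outcome = Tuple[float, float]  # (probability, change in sentient well-being)

    def expected_wellbeing(outcomes: List[Outcome]) -> float:
        """Expected change in well-being over a distribution of outcomes."""
        assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
        return sum(p * delta for p, delta in outcomes)

    # Invented placeholder numbers - illustrative, not empirical claims.
    policies = {
        "educate girls":      [(0.9, +5.0), (0.1, +1.0)],
        "prohibit schooling": [(0.8, -4.0), (0.2, -1.0)],
    }

    for name, outcomes in policies.items():
        print(f"{name}: {expected_wellbeing(outcomes):+.2f}")
    print("preferred:", max(policies, key=lambda n: expected_wellbeing(policies[n])))

The hard part, of course, is everything the placeholders hide: how to measure well-being in practice, and whose well-being to count.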

As Harris makes clear, this is far from being an abstract, other-worldly discussion. Cultures are clashing all the time, with lots of dramatic consequences for human well-being. Seeing these clashes, are we to be moral relativists (saying “different cultures are best for different peoples, and there’s no way to objectively compare them”) or are we to be moral realists (saying “some cultures promote significantly more human flourishing than others, and are to be objectively preferred as a result”)? And if we are to be moral realists, do we resolve our moral arguments by deference to religious tradition, or by open-minded investigation of real-world connections?

In the light of these questions, here are some arguments from Harris’s book that deserve thought:

  • There’s a useful comparison between the science of human values (the project espoused by Harris), and a science of diets (what we should eat, in order to enjoy good health).  In both cases, we’re currently far from having all the facts.  And in both cases, there are frequently several right answers.  But not all diets are equally good.  Similarly, not all cultures are equally good.  And what makes one diet better than another will be determined by facts about the physical world – such as the likely effects (direct and indirect) of different kinds of fats and proteins and sugars and vitamins on our bodies and minds.  While people still legitimately disagree about diets, that’s not a reason to say that science can never answer such questions.  Likewise, present-day disagreements about specific causes of happiness, mental flourishing, and general sentient well-being, do not mean these causes fail to exist, or that we can never know them.
  • Likewise with the science of economics.  We’re still far from having a complete understanding of how different monetary and financial policies impact the long-term health of the economy.  But that doesn’t mean we should throw up our hands and stop searching for insight about likely cause and effect.  The discipline of economics, imperfect though it is, remains valuable even in its as-yet-incomplete state.  The same goes for political science too.  And, likewise, for the science of the moral landscape.
  • Attempts to reserve some special area of “moral insight” for religion are indefensible.  As Harris says, “How is it that most Jews, Christians, and Muslims are opposed to slavery? You don’t get this moral insight from scripture, because the God of Abraham expects us to keep slaves. Consequently, even religious fundamentalists draw many of their moral positions from a wider conversation about human values that is not, in principle, religious.” That’s the conversation we need to progress.

PS I’ve written more about cognitive biases and cognitive dissonance – and how we can transcend these mistakes – in my blogpost “Our own entrenched enemies of reason”.

26 March 2012

Short-cuts to sharper thinking?

Filed under: bias, futurist, intelligence, nootropics — David Wood @ 11:15 pm

What are the best methods to get our minds working well? Are there ways to significantly improve our powers of concentration, memory, analysis, and insight?

Some methods for cognitive enhancement are well known:

  • Get plenty of sleep
  • Avoid distracting environments
  • Practice concentration, to build up mental stamina
  • Augment our physical memories with external memories, whether in physical or electronic format, that we can consult again afterwards
  • Beware the sway of emotion – “when your heart’s on fire, smoke gets in your eyes”
  • Learn about cognitive fallacies and biases – and how to avoid them
  • Share our thinking with trusted friends and colleagues, who can provide constructive criticism
  • Listen to music which has the power both to soothe the mind and to stimulate it
  • Practice selected yoga techniques, which can provide a surge of mental energy
  • Get in touch with our “inner why”, which rekindles our motivation and focus.

Then there are lots of ideas about foods and drinks to partake of, or to avoid. Caffeine provides at least a transient boost to concentration. Alcohol encourages creativity but weakens accurate discernment. Sugar can provide a short-term buzz, though (perhaps) at the cost of longer-term sluggishness. Claims have been made for ginseng, ginkgo biloba, ginger, dark chocolate, Red Bull, and many other foods and supplements.

But potentially the most dramatic effects could result from new compounds – compounds that are being specially engineered in the light of recent findings about the operation of the brain. The phrase “smart drugs” refers to compounds of this kind, ones that could dramatically boost our mental powers.

Think of the character Eddie in the film Limitless, and of the mental superpowers he acquired from NZT, a designer pharmaceutical.

If a real-world version of NZT were offered to you, would you take it?

(Note: NZT has its own real-world website – which is a leftover part of a sophisticated marketing campaign for Limitless.)

I foresee four kinds of answer:

  1. No such drug could be created. This is just fiction.
  2. If such a drug existed, there would be risks of horrible side-effects (as indeed – spoiler alert! – happened in Limitless). It would be foolish to experiment.
  3. If such a drug existed, it would be immoral and/or inappropriate to take it. It’s unfair to short-circuit the effort required to actually make ourselves mentally sharper.
  4. Sure, bring it to me! – especially for mission-critical situations like major exams, job interviews, client bid preparation, project delivery deadlines, and for those social occasions when it’s particularly important to make a good impression.

My own answer: even though nothing as remarkable as NZT exists today, drugs with notable mental effects are going to become increasingly available over the next decade or so.  As well as being more widely available, the quality and reliability will increase too.

So we’re likely to be hearing more and more of the phrases “cognitive enhancers”, “smart drugs”, and “nootropics”.  We’re all going to have to come to terms with weighing up the pros and cons of taking these enhancers.  And we’ll probably need to appreciate many variations and special cases.

Yes, there will be risks of side effects.  But it’s the same with other drugs and dietary supplements.  We need to collect and sift evidence, as it is most likely to apply to us.

For example: on the advice of my doctors, I take a small dose of aspirin every evening, and a statin.  These drugs are known to have side-effects in some cases.  So my GP ensured that I had a blood test after I’d been taking the statin for a while, to check there were no signs of the most prevalent side-effect.  In due course, genomic sequencing might identify which people are more susceptible to particular side-effects.

Similarly with nootropics: the best effects are likely to arise from tailoring doses to the special circumstances of individual people, and from monitoring for unusual side-effects.
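
As a gesture towards what that tailoring and monitoring might look like in practice, here is a minimal n-of-1 analysis sketch in Python. The daily log is invented; a serious self-experiment would also randomise and blind the doses, and would track side-effect measures alongside the benefit measure:

    from statistics import mean

    # Hypothetical daily log: (dose in mg, score on some fixed cognitive test).
    log = [
        (0, 71), (100, 78), (0, 69), (100, 74),
        (0, 73), (100, 80), (0, 70), (100, 77),
    ]

    on = [score for dose, score in log if dose > 0]
    off = [score for dose, score in log if dose == 0]

    print(f"mean score on dose:  {mean(on):.1f}")
    print(f"mean score off dose: {mean(off):.1f}")
    print(f"estimated effect:    {mean(on) - mean(off):+.1f} points")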

There’s already lots of information online about various nootropics.  For example, see this Nootropics FAQ.  That’s a lot to take in!

Personally, for the next few years, I expect to continue to focus my own cognitive enhancement project on the methods I listed at the start of this article.  But I want to keep myself closely informed about developments in nootropics.  If the evidence of substantive beneficial effect becomes clearer, I’ll be ready to take full advantage.

Hmm, the likelihood is that I’m going to need to become smarter, in order to figure out when it’s wise to try to make myself smarter again by taking one or more nootropics.  But that first-stage mental enhancement can happen by immersing myself in a bunch of other smart people…

That’s one reason I’m looking forward to the London Futurist Meetup on the subject of nootropics that is taking place this Thursday (29th March), from 7pm, in the Lord Wargrave pub at 42 Brendon Street, London W1H 5HE.  It’s going to be a semi-informal discussion, with attendees being encouraged to talk about their own experiences, expectations, hopes, and fears about nootropics.  Hopefully, the outcome will be improved collective wisdom!

17 April 2011

Towards inner humanity+

Filed under: challenge, films, Humanity Plus, intelligence, vision — David Wood @ 11:06 am

There’s a great scene near the beginning of the film “Limitless”.  The central character, Eddie (played by Bradley Cooper), has just been confronted by his neighbour, Valerie. It’s made clear to the viewers that Valerie is generally nasty and hostile to Eddie. Worse, Eddie owes money to Valerie, and his payment is overdue. It seems that a fruitless verbal confrontation looms. Or perhaps Eddie will try to quickly evade her.

But this time it’s different.  Eddie’s brain has been switched into a super-fast enhanced mode (which is the main theme of the film).  Does he take the opportunity to weaken Valerie with fast verbal gymnastics and put-downs?

Instead, he uses his new-found rocket-paced analytic abilities to a much better purpose.  Picking up the tiniest of clues, he realises that Valerie’s foul mood is caused by something unconnected with Eddie himself: Valerie is having a particular problem with her legal studies.  Gathering memories out of the depths of his brain from long-past discussions with former student friends, Eddie is able to suggest ideas to Valerie that rouse her interest and defuse her hostility.  Soon, she’s more receptive.  The two sit down together, and Eddie guides her in the swift completion of a brilliant essay for the tricky homework assignment that has been preying on Valerie’s nerves.

Anyone who watches Limitless is bound to wonder: can technology – such as a smart drug – really have that kind of radical transformative effect on human ability?

Humanity+ is the name of the worldview that says, not only is that kind of technology feasible (within the lifetimes of many people now alive), but it is desirable.  If you watch Limitless right through to the end, you’ll find plenty in the film that offers broad support to the Humanity+ mindset.  That’s a pleasant change from the usual Hollywood conviction that technology-induced human enhancement typically ends up in dysfunction and loss of important human characteristics.

But the question remains: if we become smarter, does it mean we would be better people?  Or would we tend to use accelerated mental faculties to advance our own self-centred personal agendas?

A similar question was raised by an audience member at the “Post Transcendent Man” event at Birkbeck in London last weekend.  Is it appropriate to consider intellectual enhancement without also considering moral enhancement?  Or is it like giving a five-year-old the keys to a sports car?  Or like handing a bunch of Mujahideen terrorists the instructions to create advanced nuclear weaponry?

Take another example of accelerating technology: the Internet.  This can be used to spy and to hassle, as well as to educate and uplift.  Consider the chilling examples mentioned in the recent Telegraph article “The toxic rise of internet bullies“:

At first glance, Natasha MacBryde’s Facebook page is nothing unusual. A pretty, slightly self-conscious blonde teenager gazes out, posed in the act of taking her own picture. But unlike other pages, this has been set up in commemoration, following her death under a train earlier this month. Now though it has had to be moderated after it was hijacked by commenters who mocked both Natasha and the manner of her death heartlessly.

“Natasha wasn’t bullied, she was just a whore,” said one, while another added: “I caught the train to heaven LOL [laugh out loud].” Others clicked on the “like” symbol, safe in their anonymity, to indicate that they agreed. The messages were removed after a matter of hours, but Natasha’s grieving father Andrew revealed that Natasha’s brother had also discovered a macabre video – entitled “Tasha The Tank Engine” on YouTube (it has since been removed). “I simply cannot understand how or why these people get any enjoyment or satisfaction from making such disgraceful comments,” he said.

He is far from alone. Following the vicious sexual assault on NBC reporter Lara Logan in Cairo last week, online debate on America’s NPR website became so ugly that moderator Mark Memmott was forced to remove scores of comments and reiterate the organisation’s stance on offensive message-posting…

It’s not just anonymous comments that cause concern.  As Richard Adhikari notes in his article “The Internet’s Destruction of Critical Thinking“,

Prior to the dawn of the Internet Age, anyone who wanted to keep up with current events could pretty much count on being exposed to a diversity of subjects and viewpoints. News consumers were passive recipients of content delivered by print reporters or TV anchors, and choices were few. Now, it’s alarmingly easy to avoid any troublesome information that might provoke one to really think… few people do more than skim the surface — and as they do with newspapers, most people tend to read only what interests them. Add to that the democratization of the power to publish, where anyone with access to the Web can put up a blog on any topic whatsoever, and you have a veritable Tower of Babel…

Of course, the more powerful the technology, the bigger the risks if it is used in pursuit of our lower tendencies.  For a particularly extreme example, review the plot of the 1956 science fiction film “Forbidden planet”, as covered here.  As Roko Mijic has explained:

Here are two ways in which the amplification of human intelligence could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

For all these reasons, it’s my strong conviction that any quest for what might be called “outer Humanity+” must be accompanied (and, indeed, preceded) by a quest for “inner Humanity+”.  Both these quests consider the ways in which accelerating technology can enhance human capabilities.  However, the differences are summed up in the following comparison:

Outer Humanity+

  • Seeks greater strength
  • Seeks greater speed
  • Seeks to transcend limits
  • Seeks life extension
  • Seeks individual progress
  • Seeks more experiences
  • Seeks greater intelligence
  • Generally optimistic about technology
  • Generally hostile to goals and practice of religion and meditation

Inner Humanity+

  • Seeks greater kindness
  • Seeks deeper insight
  • Seeks self-mastery
  • Seeks life expansion
  • Seeks cooperation
  • Seeks more fulfilment
  • Seeks greater wisdom
  • Has major concerns about technology
  • Has some sympathy to goals and practice of religion and meditation

Back to Eddie in Limitless.  It’s my hunch he was basically a nice guy to start with – except that he was ineffectual.  Once his brainpower was enhanced, he could be a more effectual nice guy.  His brain provided rapid insight on the problems and issues being faced by his neighbour – and proposed effective solutions.  In this example, greater strength led to a more effective kindness.  But if real-life technology delivers real-life intellect enhancement any time soon, all bets are off regarding whether it will result in greater kindness or greater unkindness.  In other words, all bets are off as to whether we’ll create a heaven-like state, or hell on earth.  For this reason, the quest to achieve Inner Humanity+ must overtake the quest to achieve Outer Humanity+.

31 December 2010

Welcome 2011 – what will the future hold?

Filed under: aging, futurist, Humanity Plus, intelligence, rejuveneering — David Wood @ 6:42 pm

As 2010 turns into 2011, let me offer some predictions about topics that will increasingly be on people’s minds, as 2011 advances.

(Spoiler: these are all topics that will feature as speaker presentations at the Humanity+ UK 2011 conference that I’m organising in London’s Conway Hall on 29th January.  At time of writing, I’m still waiting to confirm possibly one or two more speakers for this event, but registration is already open.)

Apologies for omitting many other key emerging tech-related trends from this list.  If there’s something you care strongly about – and if you live within striking distance of London – you’ll be more than welcome to join the discussion on 29th January!

19 September 2010

Our own entrenched enemies of reason

Filed under: books, deception, evolution, intelligence, irrationality, psychology — David Wood @ 3:39 pm

I’m a pretty normal, observant guy.  If there was something as large as an elephant in that room, then I would have seen it – sure as eggs are eggs.  I don’t miss something as large as that.  So someone who says, afterwards, that there was an elephant there, must have some kind of screw loose, or some kind of twisted ulterior motivation.  Gosh, what kind of person are they?

Here’s another version of the same, faulty, line of reasoning:

I’m a pretty good police detective.  Over the years, I’ve developed the knack of knowing when people are telling the truth.  That’s what my experience has taught me.  I know when a confession is for real.  I don’t get things like that wrong.  So someone who says, afterwards, that the confession was forced, or that the criminal should get off on a technicality, must have some kind of screw loose, or some kind of twisted ulterior motivation.  Gosh, what kind of person are they?

And another:

I’m basically a moral person.  I don’t knowingly cause serious harm to my fellow human beings.  I don’t get things as badly wrong as that.  I’m not that kind of person.  So if undeniable evidence subsequently emerges that I really did seriously harm a group of people, well, these people must have deserved it.  They were part of a bad crowd.  I was actually doing society a favour.  Gosh, don’t you know, I’m one of the good guys.

Finally, consider this one:

I’m basically a savvy, intelligent person.  I don’t make major errors in reasoning.  If I take the time to investigate a religion and believe in it, I must be right.  All that investment of time and belief can’t have been wrong.  Perish the thought.  If that religion makes a prophecy – such as the end of the world on a certain date – then I must be right to believe it.  If the world subsequently appears not to have ended on that date, then it must have been our faith, and our actions, that saved the world after all.  Or maybe the world ended in an invisible, but more important way.  The kingdom of heaven has been established within. Either way, how right we were!

It can sometimes be fun to observe the self-delusions of the over-confident.  Psychologists talk about “cognitive dissonance”, when someone’s deeply held beliefs appear to be contradicted by straightforward evidence.  That person is forced to hold two incompatible viewpoints in mind at the same time: I deeply believe X, but I seem to observe not-X.  Most people are troubled by this kind of dissonance.  It’s psychologically uncomfortable.  And because it can be hard for them to give up their underlying self-belief that “If I deeply believe X, I must have good reasons to do so”, it can lead them through outlandish hoops and illogical leaps to deny the straightforward evidence.  For them, rather than “seeing is believing”, the saying becomes inverted: “believing is seeing”.

As I said, it can be fun to see the daft things people have done, to resolve their cognitive dissonance in favour of maintaining their own belief in their own essential soundness, morality, judgement, and/or reasoning.  It can be especial fun to observe the mental gymnastics of people with fundamentalist religious and/or political faith, who refuse to accept plain facts that contradict their certainty.  The same goes for believers in alien abduction, for fan boys of particular mobile operating systems, and for lots more besides.

But this can also be a deadly serious topic:

  • It can result in wrongful imprisonments, with the prosecutors unwilling to face up to the idea that their over-confidence was misplaced.  As a result, people spend many years of their life unjustly incarcerated.
  • It can result in families being shattered under the pressures of false “repressed memories” of childhood abuse, seemingly “recovered” by hypnotists and subsequently passionately believed by the apparent victims.
  • It can split up previously happy couples, who end up being besotted, not with each other, but with dreadful ideas about each other (even though “there’s always two sides to a story”).
  • Perhaps worst of all, it can result in generations-long feuds and wars – such as the disastrous entrenched enmity of the Middle East – with each side staunchly holding onto the view “we’re the good guys, and anything we did to these other guys was justified”.

Above, I’ve retold some of the thoughts that occurred to me as I recently listened to the book “Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts”, by veteran social psychologists Carol Tavris and Elliot Aronson.  (See here for this book’s website.)  At first, I found the book to be a very pleasant intellectual voyage.  It described, time and again, experimental research that should undermine anyone’s over-confidence about their abilities to observe, remember, and reason.  (I’ll come back to that research in a moment).  It reviewed real-life examples of cognitive dissonance – both personal examples and well-known historical examples.  So far, so good.  But later chapters made me more and more serious – and, frankly, more and more angry – as they explored horrific examples of miscarriages of justice (the miscarriage being subsequently demonstrated by the likes of DNA evidence), family breakups, and escalating conflicts and internecine violence.  All of this stemmed from faulty reasoning, brought on by self-justification (I’m not the kind of person who could make that kind of mistake) and by over-confidence in our own thinking skills.

Some of the same ground is covered in another recent book, “The invisible gorilla – and other ways our intuition deceives us”, by Christopher Chabris and Daniel Simons.  (See here for the website accompanying this book.)  The gorilla in the title refers to the celebrated experiment where viewers are asked to concentrate on one set of activity – counting the number of passes made by a group of basketball players – and often totally fail to notice someone in a gorilla suit wandering through the crowd of players.  Gorilla?  What gorilla?  Don’t be stupid!  If there had been a gorilla there, I would have seen it, sure as eggs are eggs.

Chapter by chapter, “The invisible gorilla” reviews evidence that we tend to be over-confident in our own abilities to observe, remember, and reason.  The chapters cover:

  • Our bias to think we would surely observe anything large and important that happened
  • Our bias to think our memories are reliable
  • Our bias to think that people who express themselves confidently are more likely to be trustworthy
  • Our bias to think that we would give equal weight to evidence that contradicts our beliefs, as to evidence that supports our beliefs (the reality is that we search high and low for confirming evidence, and quickly jump to reasons to justify ignoring disconfirming evidence)
  • Our bias to think that correlation implies causation: that if event A is often followed by event B, then A will be the cause of B
  • Our bias to think there are quick fixes that will allow significant improvements in our thinking power – such as playing classical music to babies (an effect that has been systematically discredited)
  • Our bias to think we can do many things simultaneously (“multi-task”) without any individual task being affected detrimentally.

These biases were probably all useful to Homo sapiens at an early phase of our evolutionary history.  But in the complex society of the present day, these biases do us more harm than good.

Added together, the two books provide sobering material about our cognitive biases, and about the damage that all too often follows from us being unaware of these biases.

“Mistakes were made (but not by me)” adds the further insight that we tend to descend gradually into a state of gross over-confidence.  The book frequently refers to the metaphor of a pyramid.  Before we make a strong commitment, we are often open-minded.  We could go in several different directions.  But once we start down any of the faces of the pyramid, it becomes harder and harder to retract – and we move further away from people who, initially, were in the very same undecided state as us.  The more we follow a course of action, the greater our commitment to defend all the time and energy we’ve committed down that path.  I can’t have taken a wrong decision, because if I had, I would have wasted all that time and energy, and that’s not the kind of person I am. So we invest even more time and energy, walking yet further down that pyramid of over-confidence, in order to maintain our own self-image.

At root, what’s going wrong here is what psychologists call self-justification.  Once upon a time, the word pride would have been used.  We can’t bear to realise that our own self-image is at fault, so we continue to take actions – often harmful actions – in support of our self-image.

The final chapters of both books offer hope.  They give examples of people who are able to break out of this spiral of self-justification.  It isn’t easy.

An important conclusion is that we should put greater focus on educating people about cognitive biases.  Knowing about a cognitive bias doesn’t make us immune to it, but it does help – especially when we are still only a few steps down the face of the pyramid.  As stated in the conclusion of “The invisible gorilla”:

One of our messages in this book is indeed negative: Be wary of your intuitions, especially intuitions about how your own mind works.  Our mental systems for rapid cognition excel at solving the problems they evolved to solve, but our cultures, societies, and technologies today are much more complex than those of our ancestors.  In many cases, intuition is poorly adapted to solving problems in the modern world.  Think twice before you decide to trust intuition over rational analysis, especially in important matters, and watch out for people who tell you intuition can be a panacea for decision-making ills…

But we also have an affirmative message to leave you with.  You can make better decisions, and maybe even get a better life, if you do your best to look for the invisible gorillas in the world around you…  There may be important things right in front of you that you aren’t noticing due to the illusion of attention.  Now that you know about this illusion, you’ll be less apt to assume you’re seeing everything there is to see.  You may think you remember some things much better than you really do, because of the illusion of memory.  Now that you understand this illusion, you’ll trust your own memories, and those of others, a bit less, and you’ll try to corroborate your memory in important situations.  You’ll recognise that the confidence people express often reflects their personalities rather than their knowledge, memory, or abilities…  You’ll be skeptical of claims that simple tricks can unleash the untapped potential in your mind, but you’ll be aware that you can develop phenomenal levels of expertise if you study and practice the right way.

Similarly, we should also take more care to widely explain the benefits of the scientific approach, which searches for disconfirming evidence as much as it searches for confirming evidence.

That’s the pro-reason approach to encouraging better reasoning.  But reason, by itself, often isn’t enough.  If we are going to face up to the fact that we’ve made grave errors of judgement, which have caused pain, injustice, and sometimes even death and destruction, we frequently need powerful emotional support.  To enable us to admit to ourselves that we’ve made major mistakes, it greatly helps if we can find another image of ourselves, which sees us as making better contributions in the future.  That’s the pro-hope approach to encouraging better reasoning.  The two books have examples of each approach.  Both books are well worth reading.  At the very least, you may get some new insight as to why discussions on Internet forums often descend into people seemingly talking past each other, or why formerly friendly colleagues can get stuck in an unhelpful rut of deeply disliking each other.

18 February 2010

Coping without my second brain

Filed under: intelligence, Psion — David Wood @ 6:06 pm

Every so often, my current Psion Series 5mx PDA develops a fault in its screen display.  Due to repeated stress on the cable joining the screen to the main body of the device, the connectors in the cable fail.

When that happens, all I can see on the screen is a series of horizontal lines, looking a bit like an extract of a bar code:

I find that, with my pattern of using the Psion device, this problem arises roughly once every 6-12 months.  It’s because I open and shut the device numerous times during most waking hours – in order to access the applications on the device which help me to manage my life: Agenda, Contacts, To-do, Alarms, numerous documents and spreadsheets, and so on.  The heavy usage magnifies the stress on the cable.

I can manage my life with these applications provided the screen is working.

When the screen cable fault occurs, I can sometimes mitigate the problem by viewing the screen at a half-open angle.  I presume that, with less stress on the cable, the connectors are able to work properly again.  However, using the device in a propped partially-open state is hardly an ideal ergonomic experience.

Because I know this fault will eventually afflict all the S5mx devices I use, I keep a backup device – bought from eBay.  Alas, my current device developed this problem when I opened it last Saturday, as I sat down in the airplane to fly from Heathrow to Barcelona, for this week’s Mobile World Congress event.  My backup device is still at home in London.  Worse, the usual remediation step did not work in this case: the screen was unviewable even when partially open.

Hmm – I thought to myself – maybe this will be a chance to see how well I can function without the device I often think of as my second brain.

The answer: it has been hard!  Details of my hotel, as well as other logistics matters and appointment details, are stored inside the S5mx.

To restore at least an element of personal productivity, I copied a few key files from the Psion to my laptop, and started up the PC emulator of this device.  It took me a while to remember how to configure the emulator (but I found the details via Google – part of my third brain).  My heart started to beat normally again, as my Agenda showed up on my laptop screen:

By means of this PC emulator, I was able to find out where I should be at various times, and so on.

On the other hand, my laptop is significantly less convenient than the pocket-occupying, instant-on Psion device.  Time and again over the last few days, I’ve scribbled notes on pieces of paper, and been slow to identify times in my schedule when I would be able to slot in new meetings.  It’s been a strain.

I feel a little bit like the character Manfred who has his personal glasses stolen (by “Spring-Heeled Jack”) at the start of Chapter 3 of Charlie Stross’s magnificent book Accelerando:

Spring-Heeled Jack runs blind, blue fumes crackling from his heels. His right hand, outstretched for balance, clutches a mark’s stolen memories. The victim is sitting on the hard stones of the pavement behind him. Maybe he’s wondering what’s happened; maybe he looks after the fleeing youth. But the tourist crowds block the view effectively, and in any case, he has no hope of catching the mugger. Hit-and-run amnesia is what the polis call it, but to Spring-Heeled Jack it’s just more loot to buy fuel for his Russian army-surplus motorized combat boots.

* * *

The victim sits on the cobblestones clutching his aching temples. What happened? he wonders. The universe is a brightly colored blur of fast-moving shapes augmented by deafening noises. His ear-mounted cameras are rebooting repeatedly: They panic every eight hundred milliseconds, whenever they realize that they’re alone on his personal area network without the comforting support of a hub to tell them where to send his incoming sensory feed. Two of his mobile phones are bickering moronically, disputing ownership of his grid bandwidth, and his memory … is missing.

A tall blond clutching an electric chainsaw sheathed in pink bubble wrap leans over him curiously: “you all right?” she asks.

“I –” He shakes his head, which hurts. “Who am I?” His medical monitor is alarmed because his blood pressure has fallen: His pulse is racing, his serum cortisol titer is up, and a host of other biometrics suggest that he’s going into shock.

“I think you need an ambulance,” the woman announces. She mutters at her lapel, “Phone, call an ambulance.” She waves a finger vaguely at him as if to reify a geolink, then wanders off, chain-saw clutched under one arm. Typical southern émigré behavior in the Athens of the North, too embarrassed to get involved. The man shakes his head again, eyes closed, as a flock of girls on powered blades skid around him in elaborate loops. A siren begins to warble, over the bridge to the north.

Who am I? he wonders. “I’m Manfred,” he says with a sense of stunned wonder. He looks up at the bronze statue of a man on a horse that looms above the crowds on this busy street corner. Someone has plastered a Hello Cthulhu! holo on the plaque that names its rider: Languid fluffy pink tentacles wave at him in an attack of kawaii. “I’m Manfred – Manfred. My memory. What’s happened to my memory?” Elderly Malaysian tourists point at him from the open top deck of a passing bus. He burns with a sense of horrified urgency. I was going somewhere, he recalls. What was I doing? It was amazingly important, he thinks, but he can’t remember what exactly it was. He was going to see someone about – it’s on the tip of his tongue –

When I reach home again this evening, I’ll copy all my data files to my backup second brain, and (all being well) I’ll be back to my usual level of personal organisation and effectiveness.
