dw2

27 September 2013

Technology for improved collaborative intelligence

Filed under: collaboration, Hangout On Air, intelligence, Symbian — David Wood @ 1:02 pm

Interested in experiences in using Google Hangout On Air, as a tool to improve collaborative intelligence? Read on.

Google’s Page Rank algorithm. The Wikipedia editing process. Ranking of reviewers on Amazon.com. These are all examples of technology helping to elevate useful information above the cacophony of background noise.

To be clear, in such examples, insight doesn’t just come from technology. It comes from a combination of good tools plus good human judgement – aided by processes that typically evolve over several iterations.
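To make the first of those examples concrete, here is a minimal sketch (in Python, purely for illustration) of the idea behind a PageRank-style calculation: a page is ranked highly if other highly ranked pages link to it. The toy data and parameter choices below are my own invention; Google's production algorithm is, of course, far more elaborate.

```python
# Minimal PageRank sketch (illustrative only): rank pages by the stationary
# distribution of a "random surfer" who follows links with probability d
# and jumps to a random page with probability 1 - d.
def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - d) / n for p in pages}
        for p, outgoing in links.items():
            if not outgoing:                 # dangling page: spread evenly
                for q in pages:
                    new_rank[q] += d * rank[p] / n
            else:
                share = d * rank[p] / len(outgoing)
                for q in outgoing:
                    new_rank[q] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))
```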

For London Futurists, I’m keen to take advantage of technology to accelerate the analysis of radical scenarios for the next 3-40 years. One issue is that the general field of futurism has its own fair share of background noise:

  • Articles that are full of hype or sensationalism
  • Articles motivated by commercial concerns, with questionable factual accuracy
  • Articles intended for entertainment purposes, but which end up overly influencing what people think.

Lots of people like to ramp up the gas while talking about the future, but that doesn’t mean they know what they’re talking about.

I’ve generally been pleased with the quality of discussion in London Futurists real-life meetings, held (for example) in Birkbeck College, Central London. The speaker contributions in these meetings are important, but the audience members collectively raise a lot of good points too. I do my best to ‘referee’ the discussions, so that a range of opinions has a chance to be aired. But there have been three main limitations with these meetups:

  1. Meetings often come to an end well before we’ve got to the bottom of some of the key lines of discussion
  2. The insights from individual meetings can sometimes fail to be taken forward into subsequent meetings – where the audience members are different
  3. Attendance is limited to people who live near to London, and who have no other commitments when the meetup is taking place.

These limitations won’t disappear overnight, but I have plans to address them in stages.

I’ve explained some of my plans in the following video, which is also available at http://londonfuturists.com/2013/08/30/introducing-london-futurists-academy/.

As the video says, I want to be able to take advantage of the same kind of positive feedback cycles that have accelerated the progress of technology, in order to accelerate in a similar way the generation of reliable insight about the future.

As a practical step, I’m increasingly experimenting with Google Hangouts, as a way to:

  • Involve a wider audience in our discussions
  • Preserve an online record of the discussions
  • Find out, in real-time, which questions the audience collectively believes should be injected into a conversation.

In case it helps others who are also considering the usage of Google Hangouts, here’s what I’ve found out so far.

A Hangout is a multi-person video conference call. Participants have to log in via one of their Google accounts. They also have to download an app, inside Google Plus, before they can take part in the Hangout. Google Plus will prompt them to download the app.

The Hangout system comes with its own set of plug-in apps. For example, participants can share their screens, which is a handy way of showing some PowerPoint slides that back up a point you are making.

By default, the maximum number of attendees is 10. However, if the person who starts the Hangout has a corporate account with Google (as I have, for my company Delta Wisdom), that number can increase to 15.

For London Futurists meetings, instead of a standard “Hangout”, I’m using “Hangouts On Air” (sometimes abbreviated as ‘HOA’). These are started from within their own section of the Google Plus page:

  • The person starting the call (the “moderator”) creates the session in a “pre-broadcast” state, in which he/she can invite a number of participants
  • At this stage, the URL is generated, for where the Hangout can be viewed on YouTube; this vital piece of information can be published on social networking sites
  • The moderator can also take some other pre-broadcast steps, such as enabling the “Questions” app (further mentioned below)
  • When everyone is ready, the moderator presses the big red “Start broadcast” button
  • A wide audience is now able to watch the panellists’ discussion via the YouTube URL, or on the Google Plus page of the moderator.

For example, there will be a London Futurists HOA this Sunday, starting 7pm UK time. There will be four panellists, plus me. The subject is “Projects to accelerate radical healthy longevity”. The details are here. The event will be visible on my own Google Plus page, https://plus.google.com/104281987519632639471/posts. Note that viewers don’t need to be included in any of the Circles of the moderator.

As the HOA proceeds, viewers typically see the current speaker at the top of the screen, along with the other panellists in smaller windows below. The moderator has the option to temporarily “lock” one of the participants into the top area, so that their screen has prominence at that time, even though other panellists might be speaking.

It’s good practice for panellists to mute their microphones when they’re not speaking. That kind of thing is useful for the panellists to rehearse with the moderator before the call itself (perhaps in a brief preview call several days earlier), in order to debug connectivity issues, the installation of apps, camera positioning, lighting, and so forth. Incidentally, it’s best if there’s a source of lighting in front of the speaker, rather than behind.

How does the audience get to interact with the panellists in real-time? Here’s where things become interesting.

First, anyone watching via YouTube can place text comments under the YouTube window. These comments are visible to the panellists:

  • Either by keeping an eye on the same YouTube window
  • Or, simpler, within the “Comment Tracker” tab of the “Hangout Toolbox” app that is available inside the Hangout window.
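For anyone who prefers to pull those YouTube comments programmatically rather than via the Comment Tracker, here is a rough sketch using the present-day YouTube Data API v3 and the google-api-python-client library. Both the API version and the placeholder key and video ID are assumptions added for illustration; they are not part of the Hangout Toolbox itself.

```python
# Sketch: fetch the text comments posted under a YouTube video.
# Requires an API key for the YouTube Data API v3 (an assumption here).
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"          # placeholder
VIDEO_ID = "VIDEO_ID_OF_THE_HOA"  # placeholder

def fetch_comments(api_key, video_id, max_results=50):
    youtube = build("youtube", "v3", developerKey=api_key)
    response = youtube.commentThreads().list(
        part="snippet",
        videoId=video_id,
        order="time",
        maxResults=max_results,
    ).execute()
    for item in response.get("items", []):
        snippet = item["snippet"]["topLevelComment"]["snippet"]
        yield snippet["authorDisplayName"], snippet["textDisplay"]

if __name__ == "__main__":
    for author, text in fetch_comments(API_KEY, VIDEO_ID):
        print(f"{author}: {text}")
```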

However, people viewing the HOA via Google Plus have a different option. Provided the moderator has enabled this feature before the start of the broadcast, viewers will see a big button inviting them to ask a question, in a text box. They will also be able to view the questions that other viewers have submitted, and to give a ‘+1′ thumbs up endorsement.

In real-time, the panellists can see this list of questions appear on their screens, inside the Hangout window, along with an indication of how many ‘+1′ votes they have received. Ideally, this will help the moderator to pick the best question for the panel to address next. It’s a small step in the direction of greater collaborative intelligence.
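In spirit, the ranking performed by the Questions feature is very simple, as the following hypothetical sketch shows. The data structure and field names are my own invention for illustration; this is not Google's implementation.

```python
# Hypothetical sketch: rank audience questions by '+1' votes, ignoring
# anything flagged as spam, so the moderator sees the most-endorsed
# questions first. All names and fields here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Question:
    author: str
    text: str
    votes: int = 0
    flagged_as_spam: bool = False

def top_questions(questions, limit=5):
    """Return the most-endorsed questions, dropping those flagged as spam."""
    visible = [q for q in questions if not q.flagged_as_spam]
    return sorted(visible, key=lambda q: q.votes, reverse=True)[:limit]

if __name__ == "__main__":
    submitted = [
        Question("viewer1", "Which longevity project most urgently needs funding?", votes=7),
        Question("viewer2", "How soon could rejuvenation therapies reach clinics?", votes=12),
        Question("viewer3", "Buy my product!", votes=0, flagged_as_spam=True),
    ]
    for q in top_questions(submitted):
        print(f"[+{q.votes}] {q.text}")
```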

At the time of writing, I don’t think there’s an option for viewers to downvote each other’s questions. However, there is an option to declare that a question is spam. I expect the Google team behind HOA will be making further enhancements before long.

This Questions app is itself an example of how the Google HOA technology is improving. The last time I ran a HOA for London Futurists, the Questions app wasn’t available, so we just used the YouTube comments mechanism. One of the panellists for that call, David Orban, suggested I should look into another tool, called Google Moderator, for use on a subsequent occasion. I took a look, and liked what I saw, and my initial announcement of my next HOA (the one happening on Sunday) mentioned that I would be using Google Moderator. However, as I said, technology moves on quickly. Giulio Prisco drew my attention to the recently announced Questions feature of the HOA itself – a feature that had previously been in restricted test usage, but which is now available for all users of HOA. So we’ll be using that instead of Google Moderator (which is a rather old tool, without any direct connection into the Hangout app).

The overall HOA system is still new, and it’s not without its issues. First, panellists have a lot of different places they might need to look as the call progresses:

  • The “YouTube comment tracker” screen is mutually exclusive from the “Questions” screen: panellists can only have one of these visible to them at a time
  • These screens are in turn mutually exclusive from a text chat window which the panellists can use to chat amongst themselves (for example, to coordinate who will be speaking next) while one of the other panellists is speaking.

Second – and this is what currently makes me most apprehensive – the system seems to put a lot of load on my laptop, whenever I am the moderator of a HOA. I’ve actually seen something similar whenever my laptop is generating video for any long call. The laptop gets hotter and hotter as time progresses, and might even cut out altogether – as happened one hour into the last London Futurists HOA (see the end of this video).

Unfortunately, when the moderator’s PC loses connection to the HOA, the HOA itself seems to shut down (after a short delay, to allow quick reconnections). If this happens again on Sunday, we’ll restart the HOA as soon as possible. The “part two” will be visible on the same Google Plus page, but the corresponding YouTube video will have its own, brand new URL.

Since the last occurrence of my laptop overheating during a video call, I’ve had a new motherboard installed, plus a new hard disk (as the old one was giving some diagnostic errors), and had all the dust cleaned out of my system. I’m keeping my fingers crossed for this Sunday. Technology brings its challenges as well as many opportunities…

Footnote: This threat of over-heating reminds me of a talk I gave on several occasions as long ago as 2006, while at Symbian, about “Horsemen of the apocalypse”, including fire. Here’s a brief extract:

Standing in opposition to the potential for swift continuing increase in mobile technology, however, we face a series of major challenges. I call them “horsemen of the apocalypse”.  They include fire, flood, plague, and warfare.

“Fire” is the challenge of coping with the heat generated by batteries running ever faster. Alas, batteries don’t follow Moore’s Law. As users demand more work from their smartphones, their battery lifetimes will tend to plummet. The solution involves close inter-working of new hardware technology (including multi-core processors) and highly sophisticated low-level software. Together, this can reduce the voltage required by the hardware, and the device can avoid catching fire as it performs its incredible calculations…
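As a rough illustration of the voltage point in that extract: dynamic power in CMOS logic scales approximately with the square of the supply voltage (P ≈ C·V²·f), which is why lowering the voltage, and spreading work across multiple cores running at modest clock speeds, pays such a large dividend in heat. The figures in this sketch are made up purely for illustration.

```python
# Illustrative only: dynamic power scales roughly as P ~ C * V^2 * f,
# so a modest reduction in supply voltage gives a disproportionate
# reduction in heat. The numbers below are invented for the example.
def dynamic_power(capacitance, voltage, frequency):
    return capacitance * voltage ** 2 * frequency

baseline = dynamic_power(1.0, 1.2, 2.0e9)   # hypothetical 1.2 V part
scaled   = dynamic_power(1.0, 0.9, 2.0e9)   # same clock, lower voltage
print(f"Lowering 1.2 V to 0.9 V cuts dynamic power to {scaled / baseline:.0%} of baseline")
```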

26 September 2013

Risk blindness and the forthcoming energy crash

Filed under: books, carbon, chaos, climate change, Economics, irrationality, politics, risks, solar energy — David Wood @ 11:28 am

‘Logical’ is the last thing human thinking, individual and collective, is. Too compelling an argument can even drive people with a particularly well-insulated belief system deeper into denial.

The Energy of Nations: Risk Blindness and the Road to Renaissance, by Jeremy Leggett, is full of vividly quotable aphorisms – such as the one I’ve just cited. I see Jeremy as one of the world’s leading thinkers on solar energy, oil depletion, climate change, and the dysfunctional ways in which investment all-too-frequently works. The Observer has described him as “Britain’s most respected green energy boss”. A glance at his CV shows an impressive range of accomplishments:

Jeremy Leggett is founder and chairman of Solarcentury, the UK’s fastest growing renewable energy company since 2000, and founder and chairman of SolarAid, an African solar lighting charity set up with 5% of Solarcentury’s annual profits and itself parent to a social venture, SunnyMoney, that is the top-selling retailer of solar lights in Africa.

Jeremy has been a CNN Principal Voice, and an Entrepreneur of the Year at the New Energy Awards. He was the first Hillary Laureate for International Leadership on Climate Change, chairs the financial-sector think-tank Carbon Tracker and is a consultant on systemic risk to large corporations. He writes and blogs on occasion for the Guardian and the Financial Times, lectures on short courses in business and society at the universities of Cambridge and St Gallen, and is an Associate Fellow at Oxford University’s Environmental Change Institute.

On his own website, The triple crunch log, Jeremy has the following to say about himself:

This log covers the energy-, climate-, and financial crises, and issues pertinent to society’s response to this “triple crunch”…

Let me explain why I am worried about oil depletion, climate change, and dysfunctional investment.

I researched earth history for 14 years, and so know a bit about what makes up the climate system. I researched oil source rocks for several of those years, funded by BP and Shell among others, and I explored for oil and gas in the Middle East and Asia, so I have a background in the issues relevant to peak oil. And more recently I have been a clean-energy entrepreneur and investor for more than a decade, as founder of a solar energy company and founding director of a Swiss venture capital fund, so I have seen how the capital markets operate close to. That experience is the basis for my concerns…

Many of the critics who comment on my blogs urge readers to discount everything I say because I am trying to sell solar energy, and so therefore must be in it for the money, hyping concerns about climate change and peak oil in the cause of self enrichment. (As you would). They have it completely the wrong way round.

I left a lucrative career consulting for the oil industry, and teaching its technicians, because I was concerned about global warming and wanted to act on that concern. I joined Greenpeace (1989), on a fraction of my former income, to campaign for clean energy. I left Greenpeace (1997) to set up a non-profit organisation campaigning for clean energy. I turned it into a for-profit company (1999) because I came to the view that was the best possible way I could campaign for clean energy – by creating a commercial success that could show the way. The company I set up gives 5% of its operating profit to a charity that also campaigns for clean energy, SolarAid. All that said, I hope Solarcentury makes a lot of money. It won’t have succeeded in its mission if it doesn’t. I’m hoping fewer people will still want to discount my arguments, knowing the history.

Today marks the UK availability of his book, The Energy of Nations. Heeding its own advice, quoted above, that there are drawbacks to presenting arguments in an overly rational or compelling format, the book proceeds down a parallel course. A large part of the book reads more like a novel than a textbook, with numerous fascinating episodes retold from Jeremy’s diaries.

The cast of characters that have walk-on parts in these episodes include prime ministers, oil industry titans, leading bankers, journalists, civil servants, analysts, and many others. Heroes and villains appear and re-appear, sometimes grown wiser with the passage of years, but sometimes remaining as recalcitrant, sinister (yes), and slippery (yes again) as ever.

A core theme of the book is risk blindness. Powerful vested interests in society have their own reasons to persuade public opinion that there’s nothing to worry about – that everything is under control. Resources at the disposal of these interests (“the incumbency”) inflict a perverse blindness on society, as regards the risks of the status quo. Speaking against the motion at a debate, This House Believes Peak Oil Is No Longer a Concern, in London’s Queen Elizabeth II Congress Centre in March 2009, in the aftermath of the global financial crisis brought on by hugely unwarranted over-confidence among bankers, Jeremy left a trenchant analogy hanging in the mind of the audience:

I explain that those of us who worry about peak oil fear that the oil industry has lapsed into a culture of over-exuberance about both the remaining oil reserves and prospects of resources yet to be turned into reserves, and about the industry’s ability to deliver capacity to the market even if enough resources exist.

Our main argument is that new capacity flows coming onstream from discoveries made by the oil industry over the past decade don’t compensate for depletion. Hence projections of demand cannot be met a few years hence. This problem will be compounded by other issues, including the accelerating depletion of the many old oilfields that prop up much of global oil production today, the probable exaggeration by OPEC countries of their reserves, and the failure of the ‘price-mechanism’ assumption that higher prices will lead to increased exploration and expanding discoveries…

In conclusion, this debate is all about the risk of a mighty global industry having its asset assessment systemically overstated, due to an endemic culture of over-optimism, with potentially ruinous economic implications.

I pause to let that sentence hang in the air for a second or two.

Now that couldn’t possibly happen, could it?

This none too subtle allusion to the disaster playing out in the financial sector elicits a polite laugh from the audience…

Nowadays, people frequently say that the onset of shale oil and gas should dissolve fears about impending reductions in the availability of oil. Jeremy sees this view as profoundly misguided. Shale is likely to fall far, far short of the expectations that have been heaped on it:

For many, the explosive growth of shale gas production in the USA – now extending into oil from shale, or ‘tight oil’ as it is properly known – is a revolution, a game-changer, and it even heralds a ‘new era of fossil fuels’. For a minority, it shows all the signs of being the next bubble in the markets.

In the incumbency’s widely held view, the US shale gas phenomenon can be exported, opening the way to cheap gas in multiple countries. For others, even if there is no bubble, the phenomenon is not particularly exportable, for a range of environmental, economic and political reasons

This risk too entails shock potential. Take a country like the UK. Its Treasury wishes actively to suppress renewables, so as to ensure that investors won’t be deterred from bankrolling the conversion of the UK into a ‘gas hub’. Picture the scene if most of the national energy eggs are put in that basket, infrastructure is capitalised, and then supplies of cheap gas fall far short of requirement, or even fail to materialise.

As the book makes clear, our collective risk blindness prevents society as a whole from reaching a candid appraisal of no fewer than five major risks facing us over the next few years: oil shock, climate shock, a further crash in the global financial system, the bursting of a carbon bubble in the capital markets, and the crash of the shale gas boom. The emphasis on human irrationality gels with a lot of my own prior reading – as I’ve covered e.g. in Our own entrenched enemies of reason, Animal spirits – a richer understanding of economics, and Influencer – the power to change anything, as well as in my most recent posting When faith gets in the way of progress.

The book concludes with a prediction that society is very likely to encounter, by as early as 2015, either a dramatic oil shock (the widespread realisation that the era of cheap oil is behind us, and that the oil industry has misled us as badly as did the sellers of financial hocus pocus), or a renewed financial crisis, which would then precipitate (but perhaps more slowly) the same oil shock. To that extent, the book is deeply pessimistic.

But there is plenty of optimism in the book too. The author believes – as do I – that provided suitable preparatory steps are taken (as soon as possible), society ought to be able to rebound from the forthcoming crash. He spends time explaining “five premises for the Road to Renaissance”:

  1. The readiness of clean energy for explosive growth
  2. The intrinsic pro-social attributes of clean energy
  3. The increasing evidence of people power in the world
  4. The pro-social tendencies in the human mind
  5. The power of context that leaders will be operating in after the oil crash.

But alongside his optimism, he issues a sharp warning:

I do not pretend that things won’t get much worse before they get better. There will be rioting. There will be food kitchens. There will be blood. There already have been, after the financial crash of 2008. But the next time round will be much worse. In the chaos, we could lose our way like the Maya did.

In summary, it’s a profoundly important book. I found it to be a real pleasure to read, even though the topic is nerve-racking. I burst out laughing in a number of places, and then reflected that it was nervous laughter.

The book is full of material that will probably make you want to underline it or tweet an extract online. The momentum builds up to a dramatic conclusion. Anyone concerned about the future should make time to read it.

Not everyone will agree with everything it contains, but it is clearly an honest and heartfelt contribution to vital debates. The book has already been receiving some terrific reviews from an interesting variety of people. You can see those, a summary, Chapter One, and links for buying the book here.

Finally, it’s a book that is designed to provoke discussion. I’m delighted that the author has agreed to speak at a London Futurists event on Saturday 5th October. Please click here for more details and to RSVP. This is a first class topic addressed by a first class speaker, which deserves a first class audience to match!

17 September 2013

When faith gets in the way of progress

Is it good that we grow old, weak, disease-prone, and eventually succumb, dead, to the ravages of aging?

The rise and fall of our health and vigour is depicted in this sketch from leading biogerontology researcher Alex Zhavoronkov:

[Image: Aging Decline]

This diagram is taken from the presentation Alex made at a London Futurists event on 31st August. Alex used the same slide in his presentation, several days later, to the SENS6 conference “Reimagine aging” at Queens’ College, Cambridge.

My impression from the attendees at SENS6 that I met, over the four days I spent at the conference, is that the vast majority of them would give a resounding ‘No’ as the answer to the question,

Is it good that we grow old, weak, disease-prone, and eventually succumb, dead, to the ravages of aging?

What’s more, they shared a commitment that action should be taken to change this state of affairs. In various ways, they described themselves as “fighters against aging”, “healthy longevity activists”, and as “campaigners for negligible senescence”. They share an interest in the declaration made on the page on the SENS Research Foundation website describing the conference:

The purpose of the SENS conference series, like all the SENS initiatives, is to expedite the development of truly effective therapies to postpone and treat human aging by tackling it as an engineering problem: not seeking elusive and probably illusory magic bullets, but instead enumerating the accumulating molecular and cellular changes that eventually kill us and identifying ways to repair – to reverse – those changes, rather than merely to slow down their further accumulation.

This broadly defined regenerative medicine – which includes the repair of living cells and extracellular material in situ – applied to damage of aging, is what we refer to as rejuvenation biotechnologies.

This “interventionist” approach, if successful, would lead to a line, on the chart of performance against age, similar to that shown in the bright green colour: we would retain our youthful vigour indefinitely. Mechanisms supporting this outcome were explored in considerable technical detail in the SENS6 presentations. The SENS6 audience collectively posed some probing questions to the individual presenters, but the overall direction was agreed. Rejuvenation biotechnologies ought to be developed, as soon as possible.

But not everyone sees things like this. SENS6 attendees agreed on that point too. Over informal discussions throughout the event, people time and again shared anecdotes about their personal acquaintances being opposed to the goals of SENS. You can easily see the same kind of negative reactions, in the online comments pages of newspapers, whenever a newspaper reports some promising news about potential techniques to overcome aging.

For example, the Daily Mail in the UK recently published a well-researched article, “Do lobsters hold the key to eternal life? Forget gastronomic indulgence, the crustacean can defy the aging process”. The article starts as follows:

They are usually associated with a life of gastronomic indulgence and heart-stopping excess. But away from the dinner table, lobsters may actually hold the secret to a long, healthy — and possibly even eternal — life.

For this crustacean is one of a handful of bizarre animals that appear to defy the normal aging process.

While the passing years bring arthritis, muscle loss, memory problems and illness to humans, lobsters seem to be immune to the ravages of time. They can be injured, of course. They can pick up diseases. They can be caught and thrown into a pot, then smothered in béchamel sauce.

But rather than getting weaker and more vulnerable over the years, they become stronger and more fertile each time they shed their shells.

The typical lobster weighs 1 to 2 lb. But in 2009, a Maine fisherman landed a colossus of 20 lb, which was estimated to be 140 years old. And that isn’t even the oldest lobster found so far. According to Guinness World Records, a 44 lb leviathan was caught in 1977, with claws powerful enough to snap a man’s arm.

The species belongs to an elite group that appears to be ‘biologically immortal’. Away from predators, injury or disease, these astonishing creatures’ cells don’t deteriorate with age…

For healthy longevity activists, there was lots of good news in the article. This information, however, was too much for some readers to contemplate. Some of the online comments make for fascinating (but depressing) reading. Here are four examples, quoted directly from the comments:

  1. How would humankind cope with tens of millions of extremely old and incredibly crabby people?
  2. People have to die and they’re not dying quickly enough. Soon the earth will run out of water and food for the ever increasing masses.
  3. These “researchers” should watch Death Becomes Her
  4. The only guarantee of eternal life is to read your Bibles. Though even if you don’t, eternal life of another kind exists, though it’s not particularly appealing: “And the smoke of their torment ascendeth up for ever and ever” (Rev 14:11).

To be clear, the goal of projects such as those under the SENS umbrella is to extend healthy lifespans (sometimes known as “healthspans”) rather than simply extending lifespans themselves. Rejuvenation technologies are envisioned to undo tendencies towards unwelcome decrepitude, crabbiness, and so on.

As for the reference to the 1992 Hollywood film “Death Becomes Her” featuring Meryl Streep and Goldie Hawn in a frightful “living dead” immortality, I’ll get back to that later.

The question of potential over-population has a bit more substance. However, the worry isn’t so much the number of people on the earth, but the rate at which everyone is consuming and polluting. With potential forthcoming improvements in harnessing solar energy, we’ll have more than enough energy available to look after a planet with 10 billion people. Arguably the planet could sustain at least 100 billion people. (That argument is made, in a well-balanced way, by Ramez Naam in his recent book “The infinite resource” – a book I thoroughly recommend. I’ve also covered this question from time to time in earlier blogposts – see e.g. “Achieving a 130-fold improvement in 40 years”.)

However, I believe that there are deeper roots to the opposition that many people have to the idea of extending healthy lifespans. They may offer intellectual rationalisations for their opposition (e.g. “How would humankind cope with tens of millions of extremely old and incredibly crabby people?”) but these rationalisations are not the drivers for the position they hold.

Instead, their opposition to extending healthy lifespans comes from what we can call faith.

This thought crystallised in my mind as I reflected on the very last presentation from SENS6. The speaker was Thomas Pyszczynski of the University of Colorado, and his topic was “Understanding the paradox of opposition to long-term extension of the human lifespan: fear of death, cultural worldviews, and the illusion of objectivity”.

The presentation title was long, but the content was clear and vivid. The speaker outlined some conclusions from decades of research he had conducted into “Terror Management Theory (TMT)”. I’ve since discovered that the subject of “Terror Management Theory” has its own article in Wikipedia:

Terror management theory (TMT), in social psychology, proposes a basic psychological conflict that results from having a desire to live but realizing that death is inevitable. This conflict produces terror, and is believed to be unique to humans. Moreover, the solution to the conflict is also generally unique to humans: culture. According to TMT, cultures are symbolic systems that act to provide life with meaning and value. If life is thought meaningful, death is less terrifying. Cultural values therefore serve to manage the terror of death by providing life with meaning…

Here’s the “paradox” to which Pyszczynski referred: people oppose the idea that we could have longer healthy lives, because of the operation of a set of cultural and philosophical ideas, which were themselves an adaptive response to the underlying fact that we deeply desire indefinitely long healthy lives. So the opposition is self-contradictory, but the people involved don’t see it like that.

For all of history up until the present age, the idea of having an indefinitely long healthy life was at stark variance to everything else that we saw around ourselves. Death seemed inevitable. In order to avoid collapsing into terror, we needed to develop rationalisations and techniques that prevented us from thinking seriously about our own finitude and mortality. That’s where key aspects of our culture arose. These aspects of our culture became deeply rooted.

Our culture operates, in many cases, below the level of conscious awareness. We find ourselves being driven by various underlying beliefs, without being aware of the set of causes and effects. However, we find comfort in these beliefs. This faith (belief in the absence of sufficient reason) helps to keep us mentally sane, and keeps society functional, even as it prepares us, as individuals, to grow infirm and die.

If new ideas challenge this faith, we find ourselves compelled to lash out against them, even without taking the time to analyse them. Our motivation, here, is to preserve our core culture and faith, since that’s what provides the foundation of meaning in our lives. We fight the new ideas, even if these new ideas would be a better solution to our underlying desire to live an indefinitely long, healthy life. The new ideas leave us with a feeling of alienation, even though we don’t see the actual connections between the ideas. Our faith causes us to lose our rationality.

Incidentally, similar factors apply, of course, when other things that have profound importance to us are challenged. For example, when we think we may lose a cherished romantic partner, we can all too easily become crazy. When your heart’s on fire, smoke gets in your eyes.

It turns out that Aubrey de Grey, the chief science officer of SENS, has already written on this same topic. In chapter two of his 2007 book “Ending aging”, he notes the following:

There is a very simple reason why so many people defend aging so strongly – a reason that is now invalid, but until quite recently was entirely reasonable. Until recently, no one has had any coherent idea how to defeat aging, so it has been effectively inevitable. And when one is faced with a fate that is as ghastly as aging and about which one can do absolutely nothing, either for oneself or even for others, it makes perfect psychological sense to put it out of one’s mind – to make one’s peace with it, you might say – rather than to spend one’s miserably short life preoccupied by it. The fact that, in order to sustain this state of mind, one has to abandon all semblance of rationality on the subject – and, inevitably, to engage in embarrassingly unreasonable conversational tactics to shore up that irrationality – is a small price to pay….

Aubrey continues this theme at the start of chapter three:

We’ve recently reached the point where we can engage in the rational design of therapies to defeat aging: most of the rest of this book is an account of my favoured approach to that design. But in order to ensure that you can read that account with an open mind, I need to dispose beforehand of a particularly insidious aspect of the pro-aging trance: the fact that most people already know, in their heart of hearts, that there is a possibility that aging will eventually be defeated.

Why is this a problem? Indeed, at first sight you might think that it would make my job easier, since surely it means that the pro-aging trance is not particularly deep. Unfortunately, however, self-sustained delusions don’t work like that. Just as it’s rational to be irrational about the desirability of aging in order to make your peace with it, it’s also rational to be irrational about the feasibility of defeating aging while the chance of defeating it any time soon remains low. If you think there’s even a 1 percent chance of defeating aging within your lifetime (or within the lifetime of someone you love), that sliver of hope will prey on your mind and keep your pro-aging trance uncomfortably fragile, however hard you’ve worked to convince yourself that aging is actually not such a bad thing after all. If you’re completely convinced that aging is immutable, by contrast, you can sleep more soundly.

Another speaker from the final session of SENS6, Mair Underwood of the University of Queensland, provided some timely advice to the SENS6 community that dovetails well with the discussion above. Underwood’s presentation was entitled “What reassurances do the community need regarding life extension? Evidence from studies of community attitudes and an analysis of film portrayals”. The presentation pointed out the many ways in which popular films (such as “Death Becomes Her”, mentioned above) portray would-be life extensionists in a bad light. These people, the films imply, are emotionally immature, selfish, frustrated, obstructive, and generally unattractive. This is the pro-death culture at work.

To counteract these impressions, and to help free the broader community from its faith that aging and death are actually good things, Underwood gave the following advice:

  1. Assure that life extension science, and the distribution of life extension technologies, are ethical and regulated, and seen to be so
  2. Assuage community concerns about life extension as unnatural or playing god
  3. Assure that life extension would involve an extension of healthy lifespan
  4. Assure that life extension does not mean a loss of fertility
  5. Assure the community that life extension will not exacerbate social divides, and that those with extended lives will not be a burden on society
  6. Create a new cultural framework for understanding life extension.

This advice is all good, but I suspect that the next few years may see a growing “battle of faiths”, as representatives of the old culture fight harder in opposition to the emerging evidence that we are on the point of possessing the technological means to extend human healthspans very significantly. This is a battle that may need more tools, to influence the outcome, than mere hard-honed rationality. At the very least, we’ll need to keep in mind how culture works, and the ways in which faith draws strength.

Follow ups: Several forthcoming London Futurists meetups address topics that are directly relevant to the above line of thinking:

  • Futurism, Spirituality, and Faith, in Birkbeck College on Saturday 21st September, discusses ways in which committed technoprogressives can best interact with faith-based movements, without these interactions leading to fruitless irrationality and loss of direction
  • Projects to accelerate radical healthy longevity, a Google Hangout On Air (HOA) on Sunday 29th September, features a panel discussion on the question, “What are the most important ongoing projects to accelerate radical healthy longevity?”
  • Futurists discuss The Transhumanist Wager, with Zoltan Istvan, another Google HOA, on Sunday 20th September, reviews a recently published novel about a possible near-future scenario of a growing battle between the old human culture and an emerging new culture that favours indefinitely long healthspans.
  • Finally, if you’re interested in the question of whether solar energy will be able, as I implied above, to address pending shortages in global energy supplies, even as human population continues to increase, you should make it a priority to attend the London Futurists event on Saturday 5th October, The Energy of Nations, with Jeremy Leggett. The speaker on this occasion is one of the world’s foremost authorities on solar energy, oil depletion, climate change, and dysfunctional investment. The topic of the best energy systems for the decades ahead is, alas, another one in which faith tends to subvert reason, and in which we need to be smart to prevent our thinking being hijacked by adverse factors.

For more information about the evolution of London Futurists, you can take a peek at a new website which is in the process of being implemented, at http://londonfuturists.com/.

19 August 2013

Longevity and the looming financial meltdown

Filed under: aging, books, challenge, converged medicine, Economics, futurist, healthcare, rejuveneering, SENS — David Wood @ 2:12 pm

What kind of transformational infrastructure investment projects should governments prioritise?

In the UK, the government seems committed to spending a whopping £42 billion between now and 2032 on a lengthy infrastructure project, namely the “HS2” High Speed rail link, which could see trains travelling between London, Birmingham, and six other cities at up to 250 miles per hour. The scheme has many critics. As Nigel Morris notes in The Independent,

In an analysis published today (Monday), the IEA (Institute of Economic Affairs) says the scheme’s cost has been vastly underestimated and had failed to take into account changes to routes and extra tunnelling because of local opposition.

Richard Wellings, its author, said: “The evidence is now overwhelming that this will be unbelievably costly to the taxpayer while delivering incredibly poor value for money.”

Supporters of this investment claim that the improved infrastructure will be a boon for business in the UK. Multi-year infrastructure improvement projects are something that the private sector tends not to attempt. Unless there’s coordination from government, this kind of project will not happen.

The BBC news website (here and here) helpfully listed ten alternative infrastructure improvement projects that might be better recipients of portions of the £42B earmarked for HS2. Suggestions include:

  • A new road motorway for the east of Britain
  • A bridge to the Isle of Wight
  • A new Channel tunnel, directly accessible to car drivers
  • Tram systems for Liverpool and Leeds
  • A tunnel between Great Britain and Ireland
  • Aerial cycle highways for London

If it were my decision, I would reallocate a large chunk of this funding to a different kind of multi-year infrastructure improvement project. This is in the area of health rather than the area of transport. The idea is to significantly promote research and deployment of treatments in preventive and regenerative medicine.

Ageless CoverThe argument for this kind of sustained investment is laid out in the book The Ageless Generation: How Advances in Biomedicine Will Transform the Global Economy, by Alex Zhavoronkov, which I’ve just finished reading. It’s a compelling analysis.

Alex will be sharing his views at a forthcoming meeting of the London Futurists, on Saturday 31st August. There are more details of this meeting here. (Note that a number of copies of the speaker’s book will be available free of charge to attendees of this meeting.)

The book contains many eye-opening pointers to peer-reviewed research. This covers the accelerating pace of medical breakthroughs, in areas such as bioartificial organs, stem cell therapies, repairing damaged tissues, fortifying the immune system, and autophagy. The research also covers financial and economic matters.

For example, here’s a snippet from the 2009 report “The Burden of Chronic Disease” (PDF) – which is written from a US point of view, though the implications apply for other countries too:

Our current economic reality reminds us that now more than ever, we need to invest in the backbone of our economy: the American workforce. Without question, the single biggest force threatening U.S. workforce productivity, as well as health care affordability and quality of life, is the rise in chronic conditions…

Further into that report, data is quoted from the Milken Institute report “The Economic Burden of Chronic Disease” (PDF)

By our calculations, the most common chronic diseases are costing the economy more than $1 trillion annually—and that figure threatens to reach $6 trillion by the middle of the century.

The costs include lost productivity, as well as absenteeism:

The potential savings on treatment represents just the tip of the proverbial iceberg. Chronically ill workers take sick days, reducing the supply of labor—and, in the process, the GDP. When they do show up for work to avoid losing wages, they perform far below par—a circumstance known as “presenteeism,” in contrast to absenteeism. Output loss (indirect impacts) due to presenteeism (lower productivity) is immense—several times greater than losses associated with absenteeism. Last (but hardly a footnote), avoidable illness diverts the productive capacity of caregivers, adding to the reduction in labor supply for other uses. Combined, the indirect impacts of these diseases totaled just over $1 trillion in 2003…

In his book, Alex builds on this analysis, focussing on the looming costs to healthcare systems and pensions systems of ever greater portions of our population being elderly and infirm, and becoming increasingly vulnerable to chronic illnesses. Countries face bankruptcy on account of the increased costs. At the very least, we must expect radical changes in the provision of social welfare. The pensionable age is likely to rocket upwards. Families are likely to discover that the provisions they have made for their old age and retirement are woefully inadequate.

The situation is bleak, but solutions are at hand, through a wave of biomedical innovation which could make our recent wave of IT innovation look paltry in comparison. However, despite their promise, these biomedical solutions are arriving too slowly. The healthcare and pharmaceutical industries are bringing us some progress, but they are constrained by their own existing dynamics.

As Alex writes,

The revolution in information technology has irreversibly changed our lives over the past two decades. However, advances in biomedicine stand poised to eclipse the social and economic effects of IT in the near future.

Biomedical innovations typically reach the mass market in much slower fashion than those from information technology. They follow a paradigm where neither demand, in the form of the consumer, nor supply, in the form of the innovator, can significantly accelerate the process. Nevertheless, many of the advances made over the past three decades are already propagating into mainstream clinical practice and converging with other technologies extending our life spans.

However, in the near-term, unless the governments of the debt-laden developed countries make proactive policy changes, there is a possibility of lengthy economic decline and even collapse.

Biomedical advances are not all the same. The current paradigm in biomedical research, clinical regulation and healthcare has created a spur of costly procedures that provide marginal increases late in life extending the “last mile”, with the vast percentage of the lifetime healthcare costs being spent in the last few years of patient’s life, increasing the burden on the economy and society.

There is an urgent need to proactively adjust healthcare, social security, research and regulatory policies:

  • To ameliorate the negative near-term effects
  • To accelerate the mass adoption of technologies contributing positively to the economy.

Now that’s a project well worth spending billions on. It’s a vision of expanded healthspans rather than just of expanded lifespans. It’s a vision of people continuing to be happily productive members of society well into their 80s and 90s and beyond, learning new skills, continuing to expand their horizons, whilst sharing their wisdom and experience with younger generations.

It’s a great vision for the individuals involved (and their families), but also a great vision for the well-being of society as a whole. However, without concerted action, it’s unlikely to become reality.

Footnote 1: To connect the end of this line of reasoning back to its start: If the whole workforce remains healthy, in body, mind, and spirit, for many years more than before, there will be plenty of extra resources and skills available to address problems in other fields, such as inadequate transport infrastructure. My own preferred approach to that particular problem is improved teleconferencing, virtual presence, avatar representation, and other solutions based on transporting bits rather than transporting atoms, though there’s surely scope for improved physical transport too. Driverless vehicles have a lot of promise.

Footnote 2: The Lifestar Institute produced a well-paced 5 minute video, “Can we afford not to try?” covering many of the topics I’ve mentioned above. View it at the Lifestar Institute site, or, for convenience, embedded below.

Footnote 3: The Lifestar Institute video was shown publicly for the first time at the SENS4 conference in Cambridge in September 2009. I was in the audience that day and vividly remember the impact the video made on me. The SENS Foundation is running the next in their series of biennial conferences (“SENS6”) this September, from the 3rd to the 7th. The theme is “Reimagine aging”. I’m greatly looking forward to it!

3 July 2013

Preparing for driverless vehicles

Filed under: driverless vehicles, futurist, Humanity Plus, robots, safety, sensors, vision, Volvo — David Wood @ 10:56 am

It’s not just Google that is working on autonomous, self-driving cars. Take a look at this recent Autoblog video showing technology under development by Swedish manufacturer Volvo:

This represents another key step in the incorporation of smart wireless technology into motor vehicles.

Smart wireless technology already has the potential to reduce the number of lives lost in road accidents. A memo last month from the EU commission describes the potential effect of full adoption of the 112 eCall system inside cars:

The 112 eCall automatically dials Europe’s single emergency number 112 in the event of a serious accident and communicates the vehicle’s location to the emergency services. This call to 112, made either automatically by means of the activation of in-vehicle sensors or manually, carries a standardised set of data (containing notably the type and the location of the vehicle) and establishes an audio channel between the vehicle and the most appropriate emergency call centre via public mobile networks.

Using a built-in acceleration sensor, the system detects when a crash has occurred, and how serious it is likely to be. For example, it can detect whether the car has rolled over onto its roof. Then it transmits the information via a built-in wireless SIM. As the EU commission memo explains:

  • In 2012 around 28,000 people were killed and more than 1.5 million injured in 1.1 million traffic accidents on EU roads.
  • Only around 0.7% of vehicles are currently equipped with private eCall systems in the EU, with numbers barely rising. These proprietary systems do not offer EU-wide interoperability or continuity.
  • In addition to the tragedy of loss of life and injury, this also carries an economic burden of around EUR 130 billion in costs to society every year.
  • 112 eCall can speed up emergency response times by 40% in urban areas and 50% in the countryside. Fully deployed, it can save up to 2500 lives a year and alleviate severity of road injuries. In addition, thanks to improved accident management, it is expected to reduce congestion costs caused by traffic accidents.

That’s roughly 9% fewer fatalities (up to 2,500 lives saved, out of around 28,000 killed in 2012), as a result of emergency assistance being contacted more quickly.
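To illustrate the general shape of such a system (and only that), here is a simplified sketch of automatic crash detection and message assembly. The threshold, field names, and payload layout are all assumptions added for illustration; the real eCall “minimum set of data” is defined by European standards, and the call itself is placed over the 112 voice channel rather than as a JSON payload.

```python
# Illustrative sketch only, in the spirit of the 112 eCall description above.
# Threshold values and the message format are invented for this example.
import json, math, time

CRASH_THRESHOLD_G = 20.0   # assumed deceleration threshold, in g

def magnitude_g(ax, ay, az):
    """Combined acceleration magnitude from a three-axis sensor, in g."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_crash(ax, ay, az, rolled_over=False):
    """Flag a crash on rollover, or when deceleration exceeds the threshold."""
    return rolled_over or magnitude_g(ax, ay, az) >= CRASH_THRESHOLD_G

def build_emergency_message(lat, lon, vehicle_type, rolled_over):
    """Assemble a simplified, hypothetical payload for the emergency call centre."""
    return json.dumps({
        "timestamp": int(time.time()),
        "location": {"lat": lat, "lon": lon},
        "vehicle_type": vehicle_type,
        "rolled_over": rolled_over,
        "trigger": "automatic",
    })

if __name__ == "__main__":
    if detect_crash(ax=5.0, ay=28.0, az=1.0, rolled_over=False):
        print(build_emergency_message(51.5074, -0.1278, "passenger car", False))
```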

But what if the number of accidents could themselves be significantly reduced? Here it’s important to know the predominant factors behind road accidents. A landmark investigation of 700,000 road accidents in the UK over 2005-2009 produced some surprising statistics. As reported by David Williams in the Daily Telegraph,

Vehicle defects are a factor in only 2.8 per cent of fatals, with tyres mostly to blame (1.5 per cent) followed by dodgy brakes (0.7 per cent).

The overriding message? It’s not your car or the “road conditions” that are most likely to kill you. It’s your own driving.

In more detail:

The biggest cause of road accidents in the UK today? The statistics are quite clear on this and it’s “driver error or reaction”. It’s listed by police as a factor in more than 65 per cent of fatal crashes and the heading covers a multitude of driving sins, many of which you’re probably on first-name terms with. Topping the charge sheet is failing to look properly (the Smidsy factor – “Sorry mate, I didn’t see you”, relevant in 20.5 per cent of fatals involving driver error), followed by “loss of control” (34 per cent) which, says Greig, often means leaving yourself with “nowhere to go” after entering a bend or other situation too quickly. Other errors include “poor turn or manoeuvre” (12 per cent) and “failed to judge other person’s path or speed” (11.6 per cent).

Second biggest cause of fatal accidents, to blame for 31 per cent, is the “injudicious action”, an umbrella term for “travelled too fast for the conditions” (15.9 per cent of those labelled injudicious), “exceeded speed limit” (13.9 per cent) or “disobeyed give-way or stop sign” (2.1 per cent).

Third culprit in the daily gamble on who lives and who dies is “behaviour or inexperience” (28 per cent), which covers faults such as “careless, reckless or in a hurry” (17 per cent), “aggressive driving” (8.3 per cent) and “learner/inexperienced” (5.3 per cent).

The fourth main category is “impairment or distraction” (to blame for 19.6 per cent of fatal accidents) covering “alcohol” (a factor in 9.6 per cent of fatal accidents) and “distraction in vehicle” (2.6 per cent).

(The numbers add up to more than 100% because accidents are often attributed to more than one factor.)

These statistics give strength to the remark by Eric Schmidt, Executive Chairman of Google:

Your car should drive itself. It’s amazing to me that we let humans drive cars. It’s a bug that cars were invented before computers.

This suggestion commonly gives rise to three objections:

  1. The technology will never become good enough
  2. Even if the raw technology inside cars becomes better and better, there will need to be lots of changes in roadways, which will take a very long time to achieve
  3. Even if the technology did become good enough, legal systems will never catch up. Who’s going to accept liability for crashes caused by bugs in software?

The first objection is heard less often these days. As noted in a 2011 New York Times interview with Erik Brynjolfsson and Andrew P. McAfee of the M.I.T. Center for Digital Business, authors of the book Race Against the Machine,

In 2004, two leading economists, Frank Levy and Richard J. Murnane, published “The New Division of Labor,” which analyzed the capabilities of computers and human workers. Truck driving was cited as an example of the kind of work computers could not handle, recognizing and reacting to moving objects in real time.

But last fall, Google announced that its robot-driven cars had logged thousands of miles on American roads with only an occasional assist from human back-seat drivers. The Google cars are but one sign of the times.

The third objection will surely fall away soon too. There are already mechanisms whereby some degree of liability can be accepted by car manufacturers, in cases where software defects (for example, in braking and accelerating systems) contribute to accidents. Some examples are covered in the CNN Money review “Toyota to pay $1.1 billion in recall case”.

Another reason the third objection will fall away is because the costs of not changing – that is, of sticking with human drivers – may be much larger than the costs of adopting driverless vehicles. So long as we continue to allow humans to drive cars, there will continue to be driver-induced accidents, with all the physical and social trauma that ensues.

That still leaves the second objection: the other changes in the environment that will need to take place, before driverless vehicles can be adopted more widely. And what other changes will take place, possibly unexpectedly, once driverless cars are indeed adopted?

That’s one of the topics that will be covered in this Saturday’s London Futurists event: The future of transport: Preparing for driverless vehicles? With Nathan Koren.

As explained by the speaker at the event, Nathan Koren,

The robots have arrived. Driverless transport pods are now in operation at Heathrow Terminal 5 and several other locations around the world. Driver-assist technologies are becoming commonplace. Many believe that fully driverless cars will be commercially available before the decade is out. But what will the broader impact of driverless transport be?

Automobiles were once called “horseless carriages,” as though the lack of a horse was their most important feature. In reality, they changed the way we work, live, and play; changed the way we design cities; and altered the global economy, political landscape, and climate.

It will be the same with driverless vehicles: we can expect their impact to go far beyond simply being able to take our hands off the wheel.

This presentation and discussion goes into depth about how automated transport will affect our lives and reshape the world’s cities.

Nathan is a London-based, American-born architect, transport planner, and entrepreneur. He is widely recognised as a leading authority on Automated Transit Networks, and designed what is scheduled to become the world’s first urban-scale system, in Amritsar, India. He works as a Transport Technology & Planning Consultant for Capita Symonds, and recently founded Podaris, a cloud-based platform for the collaborative design of Automated Transit Networks. Nathan holds an Architecture degree from Arizona State University, and an MBA from the University of Oxford.

I hope to see some readers of this blog, who are based in or near London, at the meeting this Saturday. It’s an important topic!

For additional background inspiration, I recommend the three short videos in the article “The future of travel: Transportation confronts its ‘Kodak moment’”. (Thanks to Nathan for drawing this article to my attention.)

Speakers in these videos talk about the industries that are liable to radical disruption (and perhaps irrelevance) due to the rise of collision-proof driverless vehicles. The airbag industry is one; car collision insurance might be another. I’m sure you can think of more.

13 June 2013

Previewing Global Future 2045

Filed under: futurist, GF2045, robots — David Wood @ 4:32 am

The website for this weekend’s Global Future 2045 international congress has the following bold headline:

Towards a new strategy for human evolution


By many measures, the event is poised to be a breakthrough gathering: check the list of eminent speakers and the provocative list of topics to be addressed.

The congress is scheduled to start at 9am on Saturday morning. However, I’ve been chatting with some of the attendees, and we’ve agreed we’ll meet the previous evening, to help kick-start the conversation.

The venue we’ve agreed is Connolly’s Pub and Restaurant. Note that there are several different buildings: we’ll be in the one at 121 W 45th St, from 6.30pm onwards.

Anyone who is in New York to attend the congress is welcome to join us. To find us inside the building:

  • Look for a table with a futurist book on it (“Abundance” by Peter Diamandis)
  • Alternatively, ring my temporary US mobile number, 1 347-562-3920, or that of Chris Smedley, 1 773-432-5712.

There’s no fixed agenda. However, here are several topics that people might want to discuss:

  1. GF2045 foresees the potential future merger of humans and robots (“avatars”). How credible is this vision?
  2. What hard questions are people inclined to put to some of the speakers at the event?
  3. Some speakers at the conference believe that mind is deeply linked to quantum effects or other irreducible processes. Will progress with technology and/or philosophy ever resolve these questions?
  4. Speakers at GF2045 include religious and spiritual leaders. Was that a good decision?
  5. What should we and can we do, as interested global citizens, to help support the positive goals of the GF2045 project?
  6. GF2045 took place in Moscow in 2012 and in New York in 2013. Where should it be held in 2014?

I’m open to other suggestions!


Footnote:

I’ll also be involved in a couple of post-GF2045 review meetings:

If you’d like to attend either of these reviews, please click on the corresponding link above and register.

18 May 2013

Breakthroughs with M2M: moving beyond the false starts

Filed under: collaboration, Connectivity, Internet of Things, leadership, M2M, standards — David Wood @ 10:06 am

Forecasts of machine-to-machine wireless connectivity envision 50 billion, or even one trillion, wirelessly connected devices, at various times over the next 5-10 years. However, these forecasts date back several years, and there’s a perception in some quarters that all is not well in the M2M world.

These were the words that I used to set the scene for a round-table panel discussion at the beginning of this month, at the Harvey Nash offices in high-rise Heron Tower in the City of London. Participants included senior managers from Accenture Mobility, Atholl Consulting, Beecham Research, Eseye, Interskan, Machina Research, Neul, Oracle, Samsung, Telefonica Digital, U-Blox, Vodafone, and Wyless – all attending in a personal capacity. I had the privilege to chair the discussion.

My goal for the discussion was that participants would leave the meeting with clearer ideas and insights about:

  • Obstacles hindering wider adoption of M2M connectivity
  • Potential solutions to these obstacles.

The gathering was organised by Ian Gale, Senior Telecoms Consultant of Harvey Nash. The idea for the event arose in part from reflections from a previous industry round-table that I had also chaired, organised by Cambridge Wireless and Accenture. My online notes on that meeting – about the possible future of the Mobile World Congress (MWC) – included the following thoughts about M2M:

MWC showed a lot of promise for machine-to-machine (M2M) communications and for connected devices (devices that contain communications functionality but which are not phones). But more remains to be done, for this promise to reach its potential.

The GSMA Connected City gathered together a large number of individual demos, but the demos were mainly separated from each other, without there being a clear overall architecture incorporating them all.

Connected car was perhaps the field showing the greatest progress, but even there, practical questions remain – for example, should the car rely on its own connectivity, or instead rely on connectivity of smartphones brought into the car?

For MWC to retain its relevance, it needs to bring M2M and connected devices further to the forefront…

The opening statements from around the table at Harvey Nash expressed similar views about M2M not yet living up to its expected potential. Several of the participants had written reports and/or proposals about machine-to-machine connectivity as long ago as 10-12 years. It was now time, one panellist suggested, to “move beyond the false starts”.

Not one, but many opportunities

An emerging theme in the discussion was that it distorts perceptions to talk about a single, unified M2M opportunity. Headline figures for envisioned near-future numbers of “connected devices” add to the confusion, since:

  • Devices can actually connect in many different ways
  • The typical data flow can vary widely, between different industries, and different settings
  • Differences in data flow mean that the applicable standards and regulations also vary widely
  • The appropriate business models vary widely too.

A sharp focus on particular industry opportunities is more likely to bring tangible results than a broad-brush approach to the entire potential space of however many billion devices might become wirelessly connected in the next 3-5 years. One panellist remarked:

Let’s not try to boil the ocean.

And as another participant put it:

A desire for big volume numbers is understandable, but isn’t helpful.

Instead, it would be more helpful to identify different metrics for different M2M opportunities. For example, these metrics would in some cases track credible cost-savings, if various M2M solutions were to be put in place.

Compelling use-cases

To progress the discussion, I asked panellists for their suggestions on compelling use-cases for M2M connectivity. Two of the most interesting answers also happened to be potentially problematic:

  • There are many opportunities in healthcare, if people’s physiological and medical data can be automatically communicated to monitoring software; savings include freeing up hospital beds, if patients can be reliably monitored in their own homes, as well as proactively detecting early warning signs of impending health issues
  • There are also many opportunities in automotive, with electronic systems inside modern cars generating huge amounts of data about performance, which can be monitored to identify latent problems, and to improve the algorithms that run inside on-board processors.

However, the fields of healthcare and automotive are, understandably, both heavily regulated. As appropriate for life-and-death issues, these industries are risk-averse, so progress is slow. These fields are keener to adopt technology systems that have already been well-proven, rather than carrying out bleeding-edge experimentation on their own. Happily, there are other fields which have a lighter regulatory touch:

  • Several electronics companies have plans to wirelessly connect all their consumer devices – such as cameras, TVs, printers, fridges, and dishwashers – so that users can be alerted when preventive maintenance should be scheduled, or when applicable software upgrades are available; a related example is that a printer could automatically order a new ink cartridge when ink levels are running low
  • Dustbins can be equipped with sensors that notify collection companies when they are full enough to warrant a visit to empty them, avoiding unnecessary travel costs (see the sketch after this list)
  • Sensors attached to roadway lighting systems can detect approaching vehicles and pedestrians, and can limit the amount of time lights are switched on to the time when there is a person or vehicle in the vicinity
  • Gas pipeline companies can install numerous sensors to monitor flow and any potential leakage
  • Tracking devices can be added to items of equipment to prevent them becoming lost inside busy buildings (such as hospitals).
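
To make the dustbin example above a little more concrete, here is a minimal, hypothetical sketch of the device-side reporting, written in Python against the Eclipse Paho MQTT client. The broker address, topic name and threshold are invented for illustration; a real deployment would of course also need provisioning, security and power management.

```python
# Minimal sketch (illustrative only): a bin fill-level sensor reporting over MQTT.
# Assumes the paho-mqtt library, a reachable broker at broker.example.com (hypothetical),
# and an ultrasonic sensor whose reading is faked here by read_fill_level().

import json
import random
import time

import paho.mqtt.publish as publish

BROKER_HOST = "broker.example.com"        # hypothetical broker address
TOPIC = "city/bins/bin-0042/telemetry"    # hypothetical topic name
REPORT_THRESHOLD = 0.8                    # only report when the bin is ~80% full


def read_fill_level() -> float:
    """Stand-in for a real sensor reading (0.0 = empty, 1.0 = full)."""
    return random.random()


def main() -> None:
    while True:
        level = read_fill_level()
        if level >= REPORT_THRESHOLD:
            payload = json.dumps({
                "bin_id": "bin-0042",
                "fill_level": round(level, 2),
                "timestamp": int(time.time()),
            })
            # One small message, sent only when a collection is actually worthwhile
            publish.single(TOPIC, payload, hostname=BROKER_HOST, qos=1)
        time.sleep(3600)  # check once an hour to conserve battery


if __name__ == "__main__":
    main()
```

The point of the sketch is how little data needs to flow: one small message, sent only when action is needed. That echoes the earlier observation that data-flow patterns – and hence the applicable standards and business models – vary widely between M2M applications.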

Obstacles

It was time to ask the first big question:

What are the obstacles that stand in the way of the realisation of the grander M2M visions?

That question prompted a raft of interesting observations from panellists. Several of the points raised can be illustrated by a comparison with the task of selling smartphones into organisations for use by employees:

  • These devices only add business value if several different parts of the “value chain” are in good working order – not only the device itself, but also the mobile network, the business-specific applications, and connectivity for the mobile devices into the back-end data systems used by business processes in the company
  • All the different parts of the value chain need to be able to make money out of their role in this new transaction
  • To avoid being locked into products from only one supplier, the organisation will wish to see evidence of interoperability with products from different suppliers – in other words, a certain degree of standardisation is needed.

At the same time, there are issues with hardware and network performance:

  • Devices might need to be able to operate with minimal maintenance for several years, and with long-lived batteries
  • Systems need to be immune from tampering or hacking.

Companies and organisations generally need assurance, before making the investments required to adopt M2M technology, that:

  • They have a clear idea of likely ongoing costs – they don’t want to be surprised by needs for additional expenditure, system upgrades, process transformation, repeated re-training of employees, etc
  • They have a clear idea of at least minimal financial benefits arising to them.

Especially in a time of uncertain financial climate, companies are reluctant to invest money now with the promise of potential savings being realised at some future date. This results in long, slow sales cycles, in which several layers of management need to be convinced that an investment proposal makes sense. For these reasons, panellists listed the following set of obstacles facing M2M adoption:

  • The end-to-end technology story is often too complicated – resulting in what one panellist called “a disconnected value chain”
  • Lack of clarity over business model; price points often seem unattractive
  • Shortage of unambiguous examples of “quick wins” that can drum up more confidence in solutions
  • Lack of agreed standards – made worse by the fact that standardisation processes seem to move so slowly
  • Conflicts of interest among the different kinds of company involved in the extended value chain
  • Apprehension about potential breaches of security or privacy
  • The existing standards are often unsuitable for M2M use cases, having been developed, instead, for voice calls and video connectivity.

Solutions

My next question turned the discussion to a more positive direction:

Based on your understanding of the obstacles, what initiatives would you recommend, over the next 18-24 months, to accelerate the development of one or more M2M solutions?

In light of the earlier observation that M2M brings “not one, but many opportunities”, it’s no surprise that panellists had divergent views on how to proceed and how to prioritise the opportunities. But there were some common thoughts:

  1. We should expect it to take a long time for complete solutions to be established, but we should be able to plan step-by-step improvements
  2. Better “evangelisation” is needed – perhaps a new term to replace “M2M”
  3. There is merit in pooling information and examples that can help people who are writing business cases for adopting M2M solutions in their organisations
  4. There is particular merit in simplifying the M2M value chain and in accelerating the definition and adoption of fit-for-purpose standards
  5. Formal standardisation review processes are obliged to accommodate the conflicting needs of a large number of different perspectives, but de facto standards can sometimes be established much more quickly, by mechanisms that are more pragmatic and more focused.

To expand on some of these points:

  • One way to see incremental improvements is by finding new business models that work with existing M2M technologies. Another approach is to change the technology, but without disrupting the existing value chains. The more changes that are attempted at the same time, the harder it is to execute everything successfully
  • Rather than expecting large enterprises to lead changes, a lesson can be learned from what has happened with smartphones over the last few years via “consumer-led IT”: new devices appealed to individuals as consumers, and were then taken into the workplace to be inserted into business processes. One way for M2M solutions to reach the point where enterprises are forced to take them seriously is for consumers to adopt them first for non-work purposes
  • One key to consumer and developer experimentation is to make it easier for small groups of people to create their own M2M solutions. For example, an expansion in the reach of Embedded Java could enable wider experimentation. The Arduino open-source electronics prototyping platform can play a role here too, as can the Raspberry Pi (a minimal Raspberry Pi sketch follows the quotation below)
  • Weightless.org is an emerging standard in which several of the panellists expressed considerable interest. To quote from the Weightless website:

White space spectrum provides the scope to realise tens of billions of connected devices worldwide overcoming the traditional problems associated with current wireless standards – capacity, cost, power consumption and coverage. The forecasted demand for this connectivity simply cannot be accommodated through existing technologies and this is stifling the potential offered by the machine to machine (M2M) market. In order to reach this potential a new standard is required – and that standard is called Weightless.
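
To illustrate the Raspberry Pi point from the list above, here is a minimal, hypothetical sketch of the kind of experiment a small group could put together in an afternoon: polling a motion sensor and logging events locally. The GPIO pin number, polling interval and log path are assumptions for illustration; pushing the same events to a cloud service would then be a small additional step.

```python
# Minimal sketch (illustrative only): polling a motion (PIR) sensor on a Raspberry Pi
# and logging events locally. Uses the RPi.GPIO library that ships with Raspberry Pi OS.
# Pin number, poll interval and log path are invented for illustration.

import time
from datetime import datetime

import RPi.GPIO as GPIO

PIR_PIN = 17                      # hypothetical GPIO pin wired to the sensor output
POLL_SECONDS = 0.5
LOG_PATH = "/home/pi/motion.log"  # hypothetical log location


def main() -> None:
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(PIR_PIN, GPIO.IN)
    try:
        while True:
            if GPIO.input(PIR_PIN):  # the pin reads high when motion is detected
                with open(LOG_PATH, "a") as log:
                    log.write(f"{datetime.now().isoformat()} motion detected\n")
                time.sleep(5)  # simple debounce: ignore further triggers briefly
            time.sleep(POLL_SECONDS)
    finally:
        GPIO.cleanup()  # release the pin on exit


if __name__ == "__main__":
    main()
```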

Grounds for optimism

As the discussion continued, panellists took the opportunity to highlight areas where they, individually, saw prospects for more rapid progress with M2M solutions:

  • The financial transactions industry is one in which margins are still high; these margins should mean that there is greater possibility for creative experimentation with the adoption of new M2M business models, in areas such as reliable automated authentication for mobile payments
  • The unsustainability of current transport systems, and pressures for greater adoption of new cars with hybrid or purely electric power systems, both provide opportunities to include M2M technology in so-called “intelligent systems”
  • Rapid progress in the adoption of so-called “smart city” technology by cities such as Singapore might provide showcase examples to spur adoption elsewhere in the world, and in new industry areas
  • Progress by weightless.org, which addresses particular M2M use cases, might also serve as a catalyst and inspiration for faster progress in other standards processes.

Some take-aways

To wind up the formal part of our discussion, I asked panellists if they could share any new thoughts that had occurred to them in the course of the preceding 120 minutes of round-table discussion. Here’s some of what I heard:

  • It’s like the early days of the Internet, in which no-one had a really good idea of what would happen next, but where there are clearly plenty of big opportunities ahead
  • There is no “one correct answer”
  • Systems like Arduino will allow young developers to flex their muscles and, no doubt, make lots of mistakes; but a combination of youthful vigour and industry experience (such as that represented by the many “grey hairs” around the table) provides good reason for hope
  • We need a better message to evangelise with; “50 billion connected devices” isn’t sufficient
  • Progress will result from people carefully assessing the opportunities and then being bold
  • Progress in this space will involve some “David” entities taking the courage to square up to some of the “Goliaths” who currently have vested interests in the existing technology systems
  • Speeding up time-to-market will require companies to take charge of the entire value chain
  • Enabling consumerisation is key
  • We have a powerful obligation to make the whole solution stack simpler; that was already clear before today, but the discussion has amply reinforced this conclusion.

Next steps

A number of forthcoming open industry events are continuing the public discussion of M2M opportunities.

M2M World

With thanks to…

I’d like to close by expressing my thanks to the hosts of the event, Harvey Nash, and to the panellists who took the time to attend the meeting and freely share their views:

21 March 2013

The burning need for better supra-national governance

International organisations have a bad reputation these days. The United Nations is widely seen as ineffective. There’s a retreat towards “localism”: within Britain, the EU is unpopular; within Scotland, Britain is unpopular. And any talk of “giving up sovereignty” is deeply unpopular.

However, lack of effective international organisations and supra-national governance is arguably the root cause of many of the biggest crises facing humanity in the early 21st century.

That was the thesis which Ian Goldin, Oxford University Professor of Globalisation and Development, very ably shared yesterday evening in the Hong Kong Theatre at the London School of Economics. He was quietly spoken, but his points hit home strongly. I was persuaded.

The lecture was entitled Divided Nations: Why global governance is failing and what we can do about it. It coincided with the launch of a book with the same name. For more details of the book, see this blogpost on the website of the Oxford Martin School, where Ian Goldin holds the role of Director.

It’s my perception that many technology enthusiasts, futurists, and singularitarians have a blind spot when it comes to the topic of the dysfunction of current international organisations. They tend to assume that technological improvements will automatically resolve the crises and risks facing society. Governments and regulators should ideally leave things well alone – so the plea goes.

My own view is that smarter coordination and regulation is definitely needed – even though it will be hard to set that up. Professor Goldin’s lecture amply reinforced that view.

On the train home from the lecture, I downloaded the book onto my Kindle. I recommend it to anyone who is serious about the future of humanity. Drawing upon the assembled insights and wisdom of the remarkable set of scholars at the Oxford Martin School, in addition to his own extensive experience on the international scene, Professor Goldin has crystallised state-of-the-art knowledge regarding the pressing urgency, and options, for better supra-national governance.

In the remainder of this blogpost, I share some of the stream-of-consciousness notes that I typed while listening to the lecture. Hopefully this will give a flavour of the hugely important topics covered. I apologise in advance for any errors introduced in transcription. Please see the book itself for an authoritative voice. See also the live tweet stream for the meeting, with the hash-tag #LSEGoldin.

What keeps Oxford Martin scholars awake at night

The fear that no one is listening. The international governance system is in total gridlock. There are failures on several levels:

  • Failure of governments to lift themselves to a higher level, instead of being pre-occupied by local, parochial interests
  • Failure of electorates to demand more from their governments
  • Failure of governments to give clearer direction to the international institutions.

Progress with international connectivity

80 countries became democratic in the 1990s. Only one country in the world today remains disconnected – North Korea.

Over the last few decades, the total global population has increased, but the numbers in absolute poverty have decreased. This has never happened before in history.

So there are many good aspects to the increase in the economy and inter-connectivity.

However, economists failed to think sufficiently far ahead.

What economists should have thought about: the global commons

What was rational for the individuals and for national governments was not rational for the whole world.

Similar problems exist in several other fields: antibiotic resistance, global warming, the markets. He’ll get to these shortly.

The tragedy of the commons is that, when everyone does what is rational for them, everyone nevertheless ends up suffering. The common resource is not managed.

The pursuit of profits is a good thing – it has worked much better than central planning. But the result is irrationality in aggregate.

The market alone cannot provide a response to resource allocation. Individual governments cannot provide a solution either. A globally coordinated approach is needed.

Example of several countries drawing water from the Aral Sea – which is now arid.

That’s what happens when nations do the right thing for themselves.

The special case of Finance

Finance is by far the most sophisticated of the resource management systems:

  • The best graduates go into the treasury, the federal reserve, etc
  • They are best endowed – the elite organisation
  • These people know each other – they play golf together.

If even the financial bodies can’t understand their own system, this has bleak implications for other systems.

The growth of the financial markets had two underbellies:

  1. Growing inequality
  2. Growing potential for systemic risk

The growing inequality has actually led to lobbying that exacerbates inequality even more.

The result was a “Race to the bottom”, with governments being persuaded to get out of the regulation of things that actually did need to be regulated.

Speaking after the crisis, Hank Paulson, US Treasury Secretary and former CEO of Goldman Sachs, in effect said “we just did not understand what was happening” – even with all the high-calibre people and advice available to him. That’s a shocking indictment.

The need for regulation

Globalisation requires regulation, not just at the individual national level, but at an international level.

Global organisations are weaker now than in the 1990s.

Nations are becoming more parochial – the examples of UK (thinking of leaving EU) and Scotland (thinking of leaving UK) are mirrored elsewhere too.

Yes, integration brings issues that are hard to control, but the response to withdraw from integration is terribly misguided.

We cannot put back the walls. Trying to withdraw into local politics is dreadfully misguided.

Five examples

His book has five examples as illustrations of his general theme (and that’s without talking in this book about poverty, or nuclear threats):

  1. Finance
  2. Pandemics
  3. Migration
  4. Climate change
  5. Cyber-security

Many of these problems arise from the success of globalisation – the extraordinary rise in incomes worldwide in the last 25 years.

Pandemics require supra-national attention, because of increased connectivity:

  • The rapid spread of swine flu was correlated tightly with aircraft travel.
  • It will just take 2 days for a new infectious disease to travel all the way round the world.

The idea that you can isolate yourself from the world is a myth. There’s little point having a quarantine regime in place in Oxford if a disease is allowed to flourish in London. The same applies between countries, too.

Technology developments exacerbate the problem. DNA analysis is a good thing, but the capacity to synthesise diseases has terrible consequences:

  • There’s a growing power for even a very small number of individuals to cause global chaos, e.g. via pathogens
  • Think of something like Waco Texas – people who are fanatical Armageddonists – but with greater technical skills.

Cyber-security issues arise from the incredible growth in network connectivity. Jonathan Zittrain talks about “The end of the Internet”:

  • The Internet is not governed by governments
  • Problems prosecuting people, even when we know who they are and where they are (but in a different jurisdiction)
  • Individuals and small groups could destabilise the whole Internet.

Migration is another “orphan issue”. No international organisation has the authority to deal with it:

  • Control over immigration is, in effect, an anarchic, bullying system
  • We have very bad data on migration (even in the UK).

The existing global institutions

The global institutions that we have were a response to post-WW2 threats.

For a while, these institutions did well. The World Bank = Bank for reconstruction. It did lead a lot of reconstruction.

But over time, we became complacent. The institutions became out-dated and lost their vitality.

The recent financial crisis shows that the tables have been turned round: incredible scene of EU taking its begging bowl to China.

The tragedy is that the lessons well-known inside the existing institutions have not been learned. There are lessons about the required sequencing of reforms, etc. But with the loss of vitality of these institutions, the knowledge is being lost.

The EU has very little bandwidth for managing global affairs. Same as US. Same as Japan. They’re all preoccupied by local issues.

The influence of the old G7 is in decline. The new powers are not yet ready to take over the responsibility: China, Russia, India, Indonesia, Brazil, South Africa…

  • The new powers don’t actually want this responsibility (different reasons for different countries)
  • China, the most important of the new powers, has other priorities – managing their own poverty issues at home.

The result is that no radical reform of the international institutions happens:

  • No organisations are killed off
  • No new ones created
  • No new operating principles are agreed.

Therefore the institutions remain ineffective. Look at the lack of meaningful progress towards solving the problems of climate change.

He has been on two Bretton Woods reform commissions, along with “lots of wonderfully smart, well-meaning people”. Four prime ministers were involved, including Gordon Brown. Kofi Annan received the report with good intentions. But no actual reform of UN took place. Governments actually want these institutions to remain weak. They don’t want to give up their power.

It’s similar to the way that the UK is unwilling to give up power to Brussels.

Sleep-walking

The financial crisis shows what happens when global systems aren’t managed:

  • Downwards spiral
  • Very hard to pull it out afterwards.

We are sleep-walking into global crises. The financial crisis is just a foretaste of what is to come. However, this need not be the case.

A positive note

He’ll finish the lecture by trying to be cheerful.

Action on global issues requires collective action by both citizens and leaders who are not afraid to relinquish power.

The good news:

  • Citizens are more connected than ever before
  • Ideologies that have divided people in the past are reducing in power
  • We can take advantage of the amplification of damage to reputation that can happen on the Internet
  • People can be rapidly mobilised to overturn bad legislation.

Encouraging example of SOPA debate in US about aspects of control of the Internet:

  • 80 million people went online to show their views, in just two days
  • Senate changed their intent within six hours.

Some good examples where international coordination works

  • International plane travel coordination (air traffic control) is example that works very well – it’s a robust system
  • Another good example: the international postal system.

What distinguishes the successes from the failures:

  • In the Air Traffic Control case, no one has a different interest
  • But in other cases, there are lots of vested interests – neutering the effectiveness of e.g. the international response to the Syrian crisis
  • Another troubling failure example is what happened in Iraq – it was a travesty of what the international system wanted and needed.

Government leaders are afraid that electorates aren’t ready to take a truly international perspective. To be internationalist in political circles is increasingly unfashionable. So we need to change public opinion first.

Like-minded citizens need to cooperate, building a growing circle of legitimacy. Don’t wait for the global system to play catch-up.

In the meantime, true political leaders should find some incremental steps, and should avoid excuse of global inaction.

Sadly, political leaders are often tied up addressing short-term crises, but these short-term crises are due to no-one satisfactorily addressing the longer-term issues. With inaction on the international issues, the short-term crises will actually get worse.

Avoiding the perfect storm

The scenario we face for the next 15-20 years is “perfect storm with no captain”.

He calls for a “Manhattan project” for supra-national governance. His book is a contribution to initiating such a project.

He supports the subsidiarity principle: decisions should be taken at the most local level possible. Due to hyper-globalisation, there are fewer and fewer things that it makes sense to control at the national level.

Loss of national sovereignty is inevitable. We can have better sovereignty at the global level – and we can influence how that works.

The calibre of leaders

Example of leader who consistently took a global perspective: Nelson Mandela. “Unfortunately we don’t have many Mandelas around.”

Do leaders owe their power bases with electorates to being parochial? The prevailing wisdom is that national leaders have to shy away from taking a global perspective. But the electorate actually have more wisdom. They know the financial crisis wasn’t just due to bankers in Canary Wharf having overly large bonuses. They know the problems are globally systemic in nature, and need global approaches to fix them.


18 March 2013

The future of the Mobile World Congress

Filed under: Accenture, Cambridge, Connectivity, innovation, Internet of Things, M2M, MWC — David Wood @ 3:37 am

How should the Mobile World Congress evolve? What does the future hold for this event?

MWC (the Mobile World Congress) currently has good claims to be the world’s leading show for the mobile industry. From 25-28 February, 72 thousand attendees from over 200 countries made their way around eight huge halls where over 1,700 companies were showcasing their products or services. The Barcelona exhibition halls were heaving and jostling.

Tony Poulos, Market Strategist for TM Forum, caught much of the mood of the event in his review article, “Billions in big business as Barcelona beats blues”. Here’s an excerpt:

In one place for four days each year you can see, meet and hear almost every key player in the GSM mobile world. And there lies its secret. The glitz, the ritzy exhibits, the partially clad promo girls, the gimmicks, the giveaways are all inconsequential when you get down to the business of doing business. No longer do people turn up at events like MWC just to attend the conference sessions, walk the stands or attend the parties, they all come here to network in person and do business.

For suppliers, all their customers and prospects are in one place for one week. No need to send sales teams around the globe to meet with them, they come to you. And not just the managers and directors, there are more telco C-levels in Barcelona for MWC than are left behind in the office. For suppliers and operators alike, if you are not seen at MWC you are either out of business or out of a job.

Forget virtual social networking, this is good old-fashioned, physical networking at its best. Most meetings are arranged ahead of time and stands are changing slowly from gaudy temples pulling in passers-by to sophisticated business environments complete with comfortable meeting rooms, lounges, bars, espresso machines and delicacies including Swiss chocolates, Portuguese egg tarts, French pastries and wines from every corner of the globe…

But at least some of the 72,000 MWC attendees found the experience underwhelming. Kevin Coleman, CEO of Alliantus, offered a damning assessment at the end of the show:

I am wondering if I am the boy who shouts – “but the emperor is wearing no clothes” – or the masked magician about to reveal the secrets of the magic trick.

Here it is. “Most of you at Mobile World Congress have wasted your money.”

Yes, I have just returned from the MWC where I have seen this insanity with my own eyes…

That’s quite a discrepancy in opinion. Billions in business, or Insanity?

Or to rephrase the question in terms suggested by my Accenture colleague Rhian Pamphilon, Fiesta or Siesta?

To explore that question, Accenture sponsored a Cambridge Wireless event on Tuesday last week at the Møller Centre at Churchill College in Cambridge. The idea was to bring together a panel of mobile industry experts who would be prepared to share forthright but informed opinions on the highlights and lowlights of this year’s MWC.

Panellists

The event was entitled “Mobile World Congress: Fiesta or Siesta?!”. The panellists who kindly agreed to take part were:

  • Paul Ceely, Head of Network Strategy at EE
  • Raj Gawera, VP Marketing at Samsung Cambridge Mobile Solutions
  • Dr Tony Milbourn, VP Strategy at u-blox AG
  • Geoff Stead, Senior Director, Mobile Learning at Qualcomm
  • Professor William Webb, CTO at Neul
  • Dr. Richard Windsor, Founder of Radio Free Mobile.

The meeting was structured around three questions:

  1. The announcements at MWC that people judged to be the most significant – the news stories with the greatest implications
  2. The announcements at MWC that people judged to be the most underwhelming – the news stories with the least real content
  3. The announcements people might have expected at MWC but which failed to materialise – speaking volumes by their silence.

In short, what were the candidates for what we termed the Fiesta, the Siesta, and the Niesta of the event? Which trends should be picked out as the most exciting, the most snooze-worthy, and as sleeping giants liable to burst forth into new spurts of activity? And along the way, what future could we discern, not just for individual mobile trends, but for the MWC event itself?

I had the pleasure to chair the discussion. All panellists were speaking on their own behalf, rather than necessarily representing the corporate viewpoints of their companies. That helped to encourage a candid exchange of views. The meeting also found time to hear suggestions from the audience – which numbered around 100 members of the extended Cambridge Wireless community. Finally, there was a lively networking period, in which many of the audience good-humouredly button-holed me with additional views.

We were far from reaching any unanimous conclusion. Items that were picked as “Fiesta” by one panellist sometimes featured instead on the “Siesta” list of another. But I list below some key perceptions that commanded reasonable assent on the evening.

Machine to machine, connected devices, and wearable computers

MWC showed a lot of promise for machine-to-machine (M2M) communications and for connected devices (devices that contain communications functionality but which are not phones). But more remains to be done, for this promise to reach its potential.

The GSMA Connected City gathered together a large number of individual demos, but the demos were mainly separated from each other, without there being a clear overall architecture incorporating them all.

Connected car was perhaps the field showing the greatest progress, but even there, practical questions remain – for example, should the car rely on its own connectivity, or instead rely on connectivity of smartphones brought into the car?

For MWC to retain its relevance, it needs to bring M2M and connected devices further to the forefront.

Quite likely, wearable computers will be showing greater prominence by this time next year – whether via head-mounted displays (such as Google Glass) or via the smart watches allegedly under development at several leading companies.

NFC – Near Field Communications

No one spoke up with any special excitement about NFC. Words used about it were “boring” and “complicated”.

Handset evolution

The trend towards larger screen sizes was evident. This seems to be driven by the industry as much as by users, since larger screens encourage greater amounts of data usage.

On the other hand, flexible screens, which have long been anticipated, and which might prompt significant innovation in device form factors, showed little presence at the show. This is an area to watch closely.

Perhaps the most innovative device on show was the dual display Yota Phone – with a standard LCD on one side, and an eInk display on the other. As can be seen in this video from Ben Wood of CCS Insight, the eInk display remains active even if the device is switched off or runs out of battery.

Two other devices received special mention:

  • The Nokia Lumia 520, because of its low pricepoint
  • The Lenovo K900, because of what it showed about the capability of Intel’s mobile architecture.

Mobile operating systems

Panellists had dim views on some of the Android devices they saw. Some of these devices showed very little differentiation from each other. Indeed, some “formerly innovative” handset manufacturers seem to have lost their direction altogether.

Views were mixed on the likely impact of Mozilla’s Firefox OS. Is the user experience going to be sufficiently compelling for phones based on this OS to gain significant market traction? It seems too early to tell.

Panellists were more open to the idea that the marketplace could tolerate a considerable number of different mobile operating systems. Gone are the days when CEOs of network operators would call for the industry to agree on just three platforms. The vast numbers of smartphones expected over the next few years (with one billion likely to be sold in 2013) mean there is room for quite a few second-tier platforms behind the market leaders iOS and Android.

Semiconductor suppliers

If the mobile operating system space has two strong leaders, the choice of leading semiconductor supplier is even more limited. One company stands far out from the crowd: Qualcomm. In neither case is the rest of the industry happy with the small number of leading choices available.

For this reason, the recently introduced Tegra 4i processor from Nvidia was seen as potentially highly significant. This incorporates an LTE modem.

Centre of gravity of innovation

In past years, Europe could hold its head high as being at the vanguard of mobile innovation. Recent years have seen more innovation from America, e.g. from Silicon Valley. MWC this year also saw a lot of innovation from the Far East – especially Korea and China. Some audience members suggested they would be more interested in attending an MWC located in the Far East than in Barcelona.

Could the decline in Europe’s position be linked to regulatory framework issues? It had been striking to listen to the pleas during keynotes from CEOs of European network operators, requesting more understanding from governments and regulators. Perhaps some consolidation needs to take place, to address the fragmentation among different network operators. This view was supported by the observation that a lot of the attempted differentiation between different operators – for example, in the vertical industry solutions they offer – fails to achieve any meaningful distinctions.

State of maturity of the industry

In one way, the lack of tremendous excitement at MWC this year indicates the status of the mobile industry as being relatively mature. This is in line with the observation that there were “a lot of suits” at the event. Arguably, the industry is ripe for another round of major disruption – similar to that triggered by Apple’s introduction of the iPhone.

Unsurprisingly, given the setting of the Fiesta or Siesta meeting, many in the audience hold the view that “the next big mobile innovation” could well involve companies with strong footholds in Cambridge.


Footnote: Everything will be connected

Some of the same themes from the Fiesta or Siesta discussion will doubtless re-appear in “The 5th Future of Wireless International Conference” being run by Cambridge Wireless at the same venue, the Møller Centre, on 1st and 2nd of July this year. Registration is already open. To quote from the event website:

Everything Will Be Connected (Did you really say 50 billion devices?)

Staggeringly, just 30 years since the launch of digital cellular, over 6 billion people now have a mobile phone. Yet we may be on the threshold of a far bigger global shift in humanity’s use and application of wireless and communications. It’s now possible to connect large numbers of physical objects to the Internet and Cloud and give each of them an online digital representation. What really happens when every ‘thing’ is connected to the Cloud and by implication to everything else; when computers know where everything is and can enhance our perception and understanding of our surroundings? How will we interact with this augmented physical world in the future, and what impact will this have on services, infrastructure and devices? More profoundly, how might this change our society, business and personal lives?

In 2013, The Future of Wireless International Conference explores strategic questions about this “Internet of Things”. How transformational could it be and how do we distinguish reality from hyperbole? What about the societal, business and technical challenges involved in moving to a future world where everyday objects are connected and autonomous? What are the benefits and pitfalls – will this be utopia or dystopia? What is the likely impact on your business and what new opportunities will this create? Is your business strategy correct, are you too early, or do you risk being too late? Will this change your business, your life? – almost certainly. Come to hear informed analysis, gain insight, and establish new business connections at this un-missable event.

The agenda for this conference is already well-developed – with a large number of highlights all the way through. I’ll restrict myself to mentioning just two of them. The opening session is described as an executive briefing “What is the Internet Of Things and Why Should I Care?”, and features a keynote “A Vision of the Connected World” by Prof Christopher M. Bishop, FREng, FRSE, Distinguished Scientist, Microsoft Research. The closing session is a debate on the motion “This house believes that mobile network operators will not be winners in the Internet of Things”, between

12 March 2013

The coming revolution in mental enhancement

Filed under: entrepreneurs, futurist, intelligence, neuroengineering, nootropics, risks, UKH+ — David Wood @ 2:50 pm

Here’s a near-future scenario: Within five years, 10% of people in the developed world will be regularly taking smart drugs that noticeably enhance their mental performance.

It turns out there may be a surprising reason for this scenario to fail to come to pass. I’ll get to that shortly. But first, let’s review why the above scenario would be a desirable one.

As so often, Nick Bostrom presents the case well. Nick is Professor at the Faculty of Philosophy & Oxford Martin School, Director at the Future of Humanity Institute, and Director of the Programme on the Impacts of Future Technology, all at the University of Oxford. He wrote in 2008,

Those who seek the advancement of human knowledge should [consider] kinds of indirect contribution…

No contribution would be more generally applicable than one that improves the performance of the human brain.

Much more effort ought to be devoted to the development of techniques for cognitive enhancement, be they drugs to improve concentration, mental energy, and memory, or nutritional enrichments of infant formula to optimize brain development.

Society invests vast resources in education in an attempt to improve students’ cognitive abilities. Why does it spend so little on studying the biology of maximizing the performance of the human nervous system?

Imagine a researcher invented an inexpensive drug which was completely safe and which improved all‐round cognitive performance by just 1%. The gain would hardly be noticeable in a single individual. But if the 10 million scientists in the world all benefited from the drug the inventor would increase the rate of scientific progress by roughly the same amount as adding 100,000 new scientists. Each year the invention would amount to an indirect contribution equal to 100,000 times what the average scientist contributes. Even an Einstein or a Darwin at the peak of their powers could not make such a great impact.

Meanwhile others too could benefit from being able to think better, including engineers, school children, accountants, and politicians.

This example illustrates the enormous potential of improving human cognition by even a tiny amount…
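
As a back-of-envelope restatement of the arithmetic in that passage (the figures below simply repeat Bostrom’s own illustrative assumptions; they are not new data):

```python
# Back-of-envelope restatement of Bostrom's illustrative arithmetic:
# a 1% all-round cognitive gain across roughly 10 million scientists is,
# to a first approximation, equivalent to adding 1% more scientists.

num_scientists = 10_000_000   # Bostrom's rough figure for working scientists
cognitive_gain = 0.01         # the hypothetical 1% improvement

equivalent_new_scientists = num_scientists * cognitive_gain
print(f"Roughly equivalent to adding {equivalent_new_scientists:,.0f} scientists")
# -> Roughly equivalent to adding 100,000 scientists
```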

The first objection to the above scenario is that it is technically infeasible. People imply that no such drug could possibly exist. Any apparent evidence offered to the contrary is inevitably suspect. Questions can be raised over the anecdotes shared in the Longecity thread “Ten months of research condensed – A total newbies guide to nootropics” or in the recent Unfinished Man review “Nootropics – The Facts About ‘Smart Drugs’”. After all, the reasoning goes, the brain is too complex. So these anecdotes are likely to involve delusion – whether it is self-delusion (people not being aware of placebo effects and similar) or delusion from snake oil purveyors who have few scruples in trying to sell products.

A related objection is that the side-effects of such drugs are unknown or difficult to assess. Yes, there are substances (take alcohol as an example) which can aid our creativity, but with all kinds of side-effects. The whole field is too dangerous – or so it is said.

These objections may have carried weight some years ago, but increasingly they have less force. Other complex aspects of human functionality can be improved by targeted drugs; why not also the brain? Yes, people vary in how they respond to specific drug combinations, but that’s something that can be taken into account. Indeed, more data is being collected all the time.

Evidence of progress in the study of these smart drugs is one thing I expect to feature in an event taking place in central London this Wednesday (13th March).

The event, The Miracle Pill: What do brain boosting drugs mean for the future? is being hosted by Nesta as part of the Policy Exchange “Next big thing” series.

Here’s an extract from the event website:

If you could take a drug to boost your brain-power, would you?

Drugs to enhance human performance are nothing new. Long-haul lorry drivers and aircraft pilots are known to pop amphetamines to stay alert, and university students down caffeine tablets to ward off drowsiness during all-nighters. But these stimulants work by revving up the entire nervous system and the effect is only temporary.

Arguments over smart drugs are raging. If a drug can improve an individual’s performance, and they do not experience side-effects, some argue, it cannot be such a bad thing.

But where will it all stop? Ambitious parents may start giving mind-enhancing pills to their children. People go to all sorts of lengths to gain an educational advantage and eventually success might be dependent on access to these mind-improving drugs…

This event will ask:

  • What are the limits to performance enhancement drugs, both scientifically and ethically? And who decides?
  • Is there a role for such pills in developing countries, where an extra mental boost might make a distinct difference to those in developing countries?
  • Does there need to be a global agreement to monitor the development of these pills?
  • Should policymakers give drug companies carte blanche to develop these products or is a stricter regulatory regime required?

The event will be chaired by Louise Marston, Head of Innovation and Economic Growth, Nesta. The list of panellists is impressive:

  • Dr Bennett Foddy, Deputy Director and Senior Research Fellow, Institute for Science and Ethics, Oxford Martin School, University of Oxford
  • Dr Anders Sandberg, James Martin Fellow, Future of Humanity Institute, Oxford Martin School, University of Oxford
  • Dr Hilary Leevers, Head of Education & Learning, the Wellcome Trust
  • Dame Sally Davies, Chief Medical Officer for England.

Under-currents of mistrust

From my own experience in discussing smart drugs that could enhance mental performance, I’m aware that objections to their use often run more deeply than the technical questions covered above. There are often under-currents of mistrust:

  • Reliance on smart drugs is viewed as irresponsible, self-indulgent, or as cheating
  • There’s an association with the irresponsible advocacy of so-called “recreational” mind-altering drugs
  • Surely, it is said, there are more reliable and more honourable ways of enhancing our mental powers
  • Besides, what is the point of simply being able to think faster?

I strongly reject the implication of irresponsibility or self-indulgence. Increased mental capability can be applied to all sorts of important questions, resulting in scientific progress, technological breakthrough, more elegant product development, and social benefit. The argument I quoted earlier, from Nick Bostrom, applies here.

I also strongly reject the “either/or” implication, when people advocate the pursuit of more traditional methods of mental enhancement instead of reliance on modern technology. Why cannot we do both? When considering our physical health, we pay attention to traditional concerns, such as diet and rest, as well as to the latest medical findings. It should be the same for our mental well-being.

No, the real question is: does it work? And once it becomes clearer that certain combinations of smart drugs can make a significant difference to our mental prowess, with little risk of unwelcome side effects, the other objections to their use will quickly fade away.

It will be similar to the rapid change in attitudes towards IVF (“test tube babies”). I remember a time when all sorts of moral and theological hand-wringing took place over the possibility of in-vitro fertilisation. This hubristic technology, it was said, might create soul-less monstrosities; only wickedly selfish people would ever consider utilising the treatment. That view was held by numerous devout observers – but quickly faded away, in the light of people’s real-world experience with the resulting babies.

Timescales

This brings us back to the question: how quickly can we expect progress with smart drugs? It’s the 64 million dollar question. Actually it might be a 640 million dollar question. Possibly even more. The entrepreneurs and companies who succeed in developing and marketing good products in the field of mental enhancement stand to tap into very sizeable revenue streams. Pfizer, the developer of Viagra, earned revenues of $509 million in 2008 alone, from that particular enhancement drug. The developers of a Viagra for the mind could reasonably imagine similar revenues.

The barriers here are regulatory as well as technical. But with a rising public interest in the possibility of significant mental enhancement, the mood could swing quickly, enabling much more vigorous investment by highly proficient companies.

The biophysical approach

But there’s one more complication.

Actually this is a positive complication rather than a negative one.

Critics who suggest that there are better approaches to enhancing mental powers than smart drugs might turn out to be right in a way they didn’t expect. The candidate for a better approach is to use non-invasive electrical and magnetic stimulation of the brain, targeted at specific functional areas.

A variety of “helmets” are already available, or have been announced as being under development.

The start-up website Flow State Engaged raises and answers a few questions on this topic, as follows:

Q: What is tDCS?

A: Transcranial direct-current stimulation (tDCS) is one of the coolest health/self improvement technologies available today. tDCS is a form of neurostimulation which uses a constant, low current delivered directly to the brain via small electrodes to affect brain function.

Q: Is this for real?

A: The US Army and DARPA both currently use tDCS devices to train snipers and drone pilots, and have recorded 2.5x increases in learning rates. This incredible phenomenon of increased learning has been documented by multiple clinical studies as well.

Q: You want one?

A: Today if you want a tDCS machine it’s nearly impossible to find one for less than $600, and you need a prescription to order one. We wanted a simpler cheaper option. So we made our own kit, for ourselves and for all you body hackers out there…

Someone who has made a close personal study of the whole field of nootropics and biophysical approaches (including tDCS) is London-based researcher Andrew Vladimirov.

Back in November, Andrew gave a talk to the London Futurists on “Hacking our wetware: smart drugs and beyond”. It was a well-attended talk that stirred up lots of questions, both in the meeting itself, and subsequently online.

The good news is that Andrew is returning to London Futurists on Saturday 23rd March, where his talk this time will focus on biophysical approaches to “hacking our wetware”.

You can find more details of this meeting here – including how to register to attend.

Introducing the smart-hat

In advance of the meeting, Andrew has shared an alternative vision of the ways in which many people in the not-so-distant future will pursue mental enhancement.

He calls this vision “Towards digital nootropics”:

You are tired, anxious and stressed, and perhaps suffer from a mild headache. Instead of reaching for a pack from Boots the local pharmacists, you put on a fashionable “smarthat” (a neat variation of an “electrocap” with a comfortable 10-20 scheme placement for both small electrodes and solenoids) or, perhaps, its lighter version – a “smart bandana”.

Your phone detects it and a secure wireless connection is instantly established. A Neurostimulator app opens. You select “remove anxiety”, “anti-headache” and “basic relaxation” options, press the button and continue with your business. In 10-15 minutes all these problems are gone.

However, there is still much to do, and an important meeting is looming. So, you go to the “enhance” menu of the Neurostimulator and browse through the long list of options which include “thinking flexibility”, “increase calculus skills”, “creative imagination”, “lateral brainstorm”, “strategic genius”, “great write-up”, “silver tongue” and “cram before exam” amongst many others. There is even a separate night menu with functionality such as “increase memory consolidation while asleep”. You select the most appropriate options, press the button and carry on the meeting preparations.

There are still 15 minutes to go, which is more than enough for the desired effects to kick in. If necessary, they can be monitored and adjusted via the separate neurofeedback menu, as the smarthat also provides limited EEG measurement capabilities. You may use a tablet or a laptop instead of the phone for that.

A new profession: neuroanalyst

Entrepreneurs reading this article may already have noticed the very interesting business-development opportunities this whole field offers. These same entrepreneurs may pay further attention to the next stage of Andrew Vladimirov’s “Towards digital nootropics” vision of the not-so-distant future:

Your neighbour Jane is a trained neuroanalyst, an increasingly popular trade that combines depth psychology and a variety of advanced non-invasive neurostimulation means. Her machinery is more powerful and sophisticated than your average smartphone Neurostim.

While you lie on her couch with the mindhelmet on, she can induce highly detailed memory recall, including memories of early childhood to go through as a therapist. With a flick of a switch, she can also awaken dormant mental abilities and skills you’ve never imagined. For instance, you can become a savant for the time it takes to solve some particularly hard problem and flip back to your normal state as you leave Jane’s office.

Since she is licensed, some ethical modulation options are also at her disposal. For instance, if Jane suspects that you are lying and deceiving her, the mindhelmet can be used to reduce your ability to lie – and you won’t even notice it.

Sounds like science fiction? The bulk of necessary technologies is already there, and with enough effort the vision described can be realised in five years or so.

If you live in the vicinity of London, you’ll have the opportunity to question Andrew on aspects of this vision at the London Futurists meetup.

Smart drugs or smart hats?

Will we one day talk as casually about our smarthats as we currently do about our smartphones? Or will there be more focus, instead, on smart drugs?

Personally I expect we’ll be doing both. It’s not necessarily an either/or choice.

And there will probably be even more dramatic ways to enhance our mental powers that we can currently scarcely conceive of.
