
7 August 2015

Brave new world – bold new adaptation

Filed under: futurist, happiness, irrationality, theatre — David Wood @ 9:05 am

Q: What do the following cities have in common: Northampton, Edinburgh, Oxford, Nottingham, Cheltenham, Wolverhampton, Darlington, Blackpool, and Bradford?

A: They’re the locations which have theatres featuring in the forthcoming tour of a bold new production of Aldous Huxley’s Brave New World.


“Brave New World” is a phrase that frequently enters discussions about the future. Even people who have never read Huxley’s book – or people who have long forgotten the precise contents – recognise the phrase as a warning about the future misuse of technology. In Brave New World, people lead lives that are… comfortable, even blissful, but which lack authentic emotional experience. As a result, technology leads to a curtailment of human potential. Overall, humanity is diminished in that Brave New World, despite the gadgetry and convenience of that future society.

The version of Brave New World that’s about to go on tour has a script by Dawn King, is directed by James Dacre, features original music from These New Puritans, and is produced by Touring Consortium Theatre Company. The cast includes Sophie Ward, Abigail McKern, William Postlethwaite, Gruffudd Glyn, Olivia Morgan and Scott Karim.

I found out about this forthcoming tour a couple of months ago, when I was asked to come to speak, as a futurist, to representatives of the different theatres which would be hosting the play. Could I provide some perspective on the play, and why it is particularly relevant today?

I took the chance to read the script, and was struck by its depth. There are many layers to it. And despite Huxley having written the novel as long ago as 1931, it has many highly contemporary themes. So I was happy to become involved.

The team at Touring Consortium Theatre Company filmed what I had to say. Here are some short extracts:

Are we nearer to a Brave New World than we think? The pace of change is accelerating.

Factors that will shape the next 10-20 years.

Technologies from Brave New World that are almost within our grasp.

Some of the social changes we’ve seen that are eerily close to what Aldous Huxley predicted in 1931.

What questions does Brave New World pose for today’s society?

Note: for the touring schedule – from 4 Sept to 5 Dec 2015 – see this listing.

To read more about Brave New World from a transhumanist perspective, see this website by philosopher David Pearce.

30 June 2015

Securing software updates

Software frequently goes wrong. That’s a fact of life whose importance is growing – becoming, so to speak, a larger fact of life. There are three reasons for this:

  1. Complex software is spreading more widely into items where, previously, it was present (if at all) only in simpler form. This includes clothing (“wearable computing”), healthcare accessories, “smart home” consumer goods, automobiles (“connected vehicles”), and numerous “Internet of Things” sensors and actuators. More software means a greater likelihood of software error – and a greater likelihood of being hacked (compromised).
  2. Software in these items is increasingly networked together, so that defects in one piece of software can have effects that ricochet unexpectedly. For example, a hacked thermostat can end up reporting industrial secrets to eavesdroppers on the other side of the planet.
  3. By design, modern-day software is frequently open – meaning that its functionality can be configured and extended by other pieces of software that plug into it. Openness provides the possibility for positive innovation, in the way that apps enhance smartphones, or new themes enhance a webpage design. But that same openness enables negative innovation, in which plug-ins subvert the core product. This type of problem arises due to flaws in the set of permissions that expose software functionality from one module to another. (A minimal sketch of such a permission check appears after this list.)
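
To make the permissions point concrete, here is a minimal sketch of a plug-in host that exposes a function to a plug-in only if that permission has been explicitly granted. It is purely illustrative: all of the class, permission, and plug-in names are invented, and no real product’s permission model is being described.

```python
# Minimal, hypothetical sketch of permission-gated plug-in access.
# All names here are invented for illustration; no real product's API is implied.

class PluginHost:
    def __init__(self):
        # Functions the host is willing to expose, keyed by permission name.
        self._capabilities = {
            "read_contacts": lambda: ["alice", "bob"],
            "send_network": lambda payload: f"sent {len(payload)} bytes",
        }
        self._granted = {}  # plugin name -> set of granted permissions

    def install(self, plugin_name, requested_permissions):
        # A careless host would grant everything requested.
        # A safer host grants only permissions it recognises and policy allows.
        allowed = set(requested_permissions) & set(self._capabilities)
        self._granted[plugin_name] = allowed

    def call(self, plugin_name, permission, *args):
        # The crucial check: refuse calls outside the granted set.
        if permission not in self._granted.get(plugin_name, set()):
            raise PermissionError(f"{plugin_name} lacks permission '{permission}'")
        return self._capabilities[permission](*args)


host = PluginHost()
host.install("weather_widget", ["send_network"])
print(host.call("weather_widget", "send_network", b"forecast query"))
try:
    host.call("weather_widget", "read_contacts")
except PermissionError as exc:
    print("blocked:", exc)
```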

All three of these factors – the intrinsic defects in software, defects in its network connectivity, and defects in permission systems – can be exploited by writers of malware. Worryingly, there’s a mushrooming cybercrime industry that creates, modifies, and deploys increasingly sophisticated malware. There can be rich pickings in this industry. The denizens of Cybercrime Inc. can turn the principles of software and automation to their advantage, resulting in mass-scale deployment of their latest schemes for deception, intrusion, subterfuge, and extortion.

I recently raised these issues in my article “Eating the world: the growing importance of software security”. In that article, I predicted an imminent sea-change in the attitude which users tend to display towards the possibility of software security vulnerabilities. The attitude will change from complacency into purposeful alarm. Companies which are slow to respond to this change in attitude will find their products discarded by users – regardless of how many “cool” features they contain. Security is going to trump functionality, in a way it hasn’t done previously.

One company that has long been aware of this trend is Redbend Software. They’ve been thinking hard for more than a dozen years about the dynamics of OTA (over the air, i.e. wireless) software updates. Software updates are as much of a fact of life as software bugs – in fact, more so. Updates deliver fixes to bugs in previous versions; they also roll out new functionality. A good architecture for efficient, robust, secure software updates is, therefore, a key differentiator:

  • The efficiency of an update means that it happens quickly, with minimal data costs, and minimal time inconvenience to users
  • The robustness of an update means that, even if the update were to be interrupted partway through, the device will remain in a usable state
  • The security of an update means that it will reliably deliver software that is valid and authentic, rather than some “Trojan horse” malware masquerading as a bona fide update. (A toy sketch of the robustness and security checks appears below.)
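
As a toy illustration of the robustness and security properties, the sketch below refuses any package whose digest does not match an authenticated manifest, and stages the new image before switching over, so that an interrupted update never leaves a half-written file in place. It is a simplified sketch, not a description of Redbend’s implementation: the file layout is invented, and it uses an HMAC with a pre-shared key purely to keep the example self-contained, whereas a production updater would normally verify an asymmetric signature chained to a trusted root.

```python
# Toy updater sketch: verify before apply, and apply atomically.
# Not any vendor's real implementation; key handling and file layout are invented.

import hashlib
import hmac
import json
import os
import tempfile

VERIFY_KEY = b"pre-shared-demo-key"  # stand-in for a real verification key

def verify_package(package: bytes, manifest_json: bytes, signature: bytes) -> bool:
    """Check that the manifest is authentic and that it matches the package."""
    # 1. Authenticity: the manifest must carry a valid MAC from the update server.
    expected = hmac.new(VERIFY_KEY, manifest_json, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False
    # 2. Integrity: the package digest must equal the digest named in the manifest.
    manifest = json.loads(manifest_json)
    return hashlib.sha256(package).hexdigest() == manifest["sha256"]

def apply_update(package: bytes, target_path: str) -> None:
    """Write to a temporary file, then rename it into place, so an interrupted
    update never leaves a half-written image at the target path."""
    directory = os.path.dirname(os.path.abspath(target_path))
    fd, staging_path = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "wb") as staging:
        staging.write(package)
        staging.flush()
        os.fsync(staging.fileno())
    os.replace(staging_path, target_path)  # rename into place; no partial file visible

if __name__ == "__main__":
    package = b"new firmware image"
    manifest = json.dumps({"sha256": hashlib.sha256(package).hexdigest()}).encode()
    signature = hmac.new(VERIFY_KEY, manifest, hashlib.sha256).digest()
    if verify_package(package, manifest, signature):
        apply_update(package, "firmware.bin")
        print("update applied")
    else:
        print("update rejected")
```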

According to my email archives, my first meeting with representatives of Redbend was as long ago as December 2002. At that time, I was Executive VP at Symbian with responsibility for Partnering. Since Redbend was one of the new “Platinum Partners” of Symbian, I took the time to learn more about their capabilities.

One person I met in these initial meetings was Gil Cordova, Director of Strategic Marketing at Redbend. Gil wrote to me afterwards, confirming our common view as to what lay ahead in the future:

Redbend deals with an enabling technology and solution for OTA updating of mobile devices.

Our solution enables device manufacturers and operators to update any part of the device software including OS, middleware systems and applications.

The solution is based on our patented technology for creating delta-updates which minimize the update package size ensuring it can be cost-effectively sent and stored on the device with little bandwidth and memory consumption. In addition we enable the update to occur within the device memory constraints ensuring no cost-prohibitive memory needs to be added…

OTA updates can help answer the needs of remote software repair and fixing to the device software, as well as streamline logistics when deploying devices…
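
Gil’s email does not spell out how the delta updates work, and Redbend’s technology is patented and proprietary; but the general idea of a delta update can be illustrated with a deliberately simple sketch that transmits only the blocks of an image that changed between versions, rather than the whole image. The block size and image contents below are invented for readability.

```python
# Toy block-level delta: send only the blocks that changed between versions.
# A rough illustration of the general idea, not Redbend's patented scheme.

BLOCK = 4  # tiny block size so the example output is readable

def make_delta(old: bytes, new: bytes, block: int = BLOCK):
    """Return the new image length plus (offset, data) pairs for blocks that differ."""
    changed = []
    for offset in range(0, len(new), block):
        new_block = new[offset:offset + block]
        if old[offset:offset + block] != new_block:
            changed.append((offset, new_block))
    return {"length": len(new), "changed": changed}

def apply_delta(old: bytes, delta):
    """Rebuild the new image from the old image plus the delta."""
    image = bytearray(old[:delta["length"]])
    image.extend(b"\x00" * (delta["length"] - len(image)))
    for offset, data in delta["changed"]:
        image[offset:offset + len(data)] = data
    return bytes(image)

old_image = b"firmware v1.0 ... lots of unchanged code ..."
new_image = b"firmware v1.1 ... lots of unchanged code ..."
delta = make_delta(old_image, new_image)
assert apply_delta(old_image, delta) == new_image
changed_bytes = sum(len(data) for _, data in delta["changed"])
print(f"full image: {len(new_image)} bytes; changed blocks in delta: {changed_bytes} bytes")
```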

At that time, some dozen years ago, the idea that mobile phones would have more and more software in them was still relatively new – and was far from being widely accepted as a good thing. But Redbend and Symbian foresaw the consequences, as in the final paragraph of Gil’s email to me:

All the above points to the fact that if software is a new paradigm in the industry then OTA updating is a very crucial and strategic issue that must be taken into account.

OTA has, indeed, been an important issue since that time. But it’s my view that the full significance is only now becoming apparent. As security is poised to “eat the world”, efficient and reliable OTA capabilities will grow yet further in importance. It will be something that more and more companies will need to include at the heart of their own product offerings. The world will insist on it.

A few days ago, I took a closer look at recent news from Redbend – in particular at their architecture for cybersecurity. I saw a great deal that I liked:

Secure Car

  • Domain isolation – to provide a strict separation between different subsystems (e.g. parts of the overall software system on a car), with the subsystems potentially running different operating systems
  • Type-1 hypervisor – to isolate different subsystems from hardware resources, except when such access is explicitly designed
  • Driver virtualization – to allow additional peripherals (such as Wi-Fi, cameras, Bluetooth, and GPS) to be added quickly into an existing device with the same secure architecture
  • Software update systems – to enable separate remote software management for the head (dashboard) unit, telematics (black-box) unit, and numerous ECUs (electronic control units) – with a 100% success record in deploying updates on more than one million vehicles
  • State-of-the-art FIPS (Federal Information Processing Standards) encryption – applied to the entirety of the update process
  • Intrusion Detection and Prevention systems – to identify and report any malicious or erroneous network activity, and to handle the risks arising before the car or any of its components suffers any ill-effect. (One simple flavour of such detection is sketched after this list.)
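
As a toy illustration of what intrusion detection on an in-vehicle network can involve, the sketch below learns how often each message identifier normally appears in a time window, and flags identifiers that suddenly appear far more often, as a flooded bus might during an attack. This is one simple flavour of anomaly detection, invented here for illustration; it is not a description of Redbend’s Intrusion Detection and Prevention systems, and the thresholds, identifiers, and traffic are made up.

```python
# Toy rate-based intrusion detection for a bus of (message_id, payload) frames.
# Thresholds, message format, and traffic below are invented for illustration.

from collections import Counter

class RateAnomalyDetector:
    def __init__(self, factor: float = 3.0):
        self.factor = factor       # how many times the learned rate counts as suspicious
        self.baseline = Counter()  # message_id -> frames seen per learning window

    def learn(self, window_of_frames):
        """Record normal per-window frame counts for each message id."""
        self.baseline = Counter(message_id for message_id, _ in window_of_frames)

    def check(self, window_of_frames):
        """Return the message ids whose rate exceeds factor x the learned rate."""
        observed = Counter(message_id for message_id, _ in window_of_frames)
        alerts = []
        for message_id, count in observed.items():
            learned = self.baseline.get(message_id, 0)
            if count > max(1, learned) * self.factor:
                alerts.append((message_id, learned, count))
        return alerts

detector = RateAnomalyDetector()
normal = [(0x1A0, b"speed"), (0x1A0, b"speed"), (0x2B0, b"brake")]
detector.learn(normal)

# A flood of brake-control frames, as a compromised unit might generate.
suspicious = normal + [(0x2B0, b"brake")] * 20
for message_id, learned, count in detector.check(suspicious):
    print(f"id 0x{message_id:X}: learned {learned}/window, now {count}/window")
```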

I know from my own background in designing software systems that this kind of all-points-considered security cannot be tacked onto an existing system. Provision for it needs to be designed in from the beginning. That’s where Redbend’s long heritage in this space shows its value.

The full benefit of taking an architectural approach to secure software updates – as opposed to trying to fashion security on top of fundamentally insecure components – is that the same architecture is capable of re-use in different domains. It’s therefore no surprise that Redbend software management solutions are available, not only for connected cars, but also for wearable computers, connected homes, and machine-to-machine (M2M) devices.

Of course, despite all these precautions, I expect the security arms race to continue. Software will continue to have bugs, and the cybercrime industry will continue to find ingenious ways to exploit these bugs. The weakest part of any security system, indeed, is frequently the humans involved, who can fall victim to social engineering. In turn, providers of security software are seeking to improve the usability of their systems, to reduce both the likelihood and the impact of human operator error.

This race probably has many laps to run, with new surprises ahead on each lap. To keep ahead, we need allies and partners who constantly look ahead, straining to discern the forthcoming new battlegrounds, and to prepare new defences in sufficient time. But we also need to avail ourselves of the best present tools, so that our businesses have the best chance of avoiding being eaten in the meantime. Figuring out which security tools really are best in class is fast becoming a vital core competency for people in ever-growing numbers of industries.

Footnote: I was inspired to write this post after discussions with some industry colleagues at Redbend, as part of their Life Over The Air program. The views and opinions expressed in this post are my own and don’t necessarily represent Redbend’s positions, strategies or opinions.

11 June 2015

Eating the world – the growing importance of software security

Security is eating the world

In August 2011, Marc Andreessen famously remarked that “software is eating the world”. Writing in the Wall Street Journal, Andreessen set out his view that society was “in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy”.

With his background as pioneering web software architect at Netscape, and with a string of successful investments under his belt at venture capital firm Andreessen Horowitz, Andreessen was well placed to comment on the potency of software. As he observed,

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defence. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures.

He then made the following prediction:

Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

Industries to be impacted in this way, Andreessen suggested, would include entertainment, communications, recruitment, automotive, retail, energy, agriculture, finance, healthcare, education, and defence.

In the four years since the phrase was coined, “software is eating the world” has shown every sign of being a profound truth. In more and more sectors of industry, companies that lack deep expertise in software have found themselves increasingly by-passed by competitors. Software skills are no longer a “nice-to-have” optional extra. They’re core to numerous aspects of product development.

But it’s time to propose a variant of the original phrase. A new set of deep skills is going to prove indispensable for ever larger numbers of industries. This time, the skills are in security. Before long, security will be eating the world. Companies whose software systems fall short on security will be driven out of business.

Dancing pigs

My claim about the growing importance of security may appear to fly in the face of a general principle of user behaviour. This principle was described by renowned security writer Bruce Schneier in his 2000 book “Secrets and Lies”:

If J. Random Websurfer clicks on a button that promises dancing pigs on his computer monitor, and instead gets a hortatory message describing the potential dangers of the applet — he’s going to choose dancing pigs over computer security any day. If the computer prompts him with a warning screen like: “The applet DANCING PIGS could contain malicious code that might do permanent damage to your computer, steal your life’s savings, and impair your ability to have children,” he’ll click OK without even reading it. Thirty seconds later he won’t even remember that the warning screen even existed.

In other words, despite whatever users may say about the importance of security when directly asked about that question (“yes, of course I take security seriously”), in practice they put a higher priority on watching animated graphics (of dancing pigs, cute kittens, celebrity wardrobe malfunctions, or whatever), and readily accept security risks in pursuit of that goal.

A review paper (PDF) published in 2009 by Cormac Herley of Microsoft Research shared findings that supported this view. Herley reports that, for example, users still typically choose the weakest passwords they can get away with, rather than making greater efforts to keep their passwords unguessable. Users also frequently ignore the advice against re-using the same passwords on different sites (so that, if there’s a security problem with any one of these sites, the user’s data on all other sites becomes vulnerable too).

Herley comments:

There are several ways of viewing this. A traditional view is that users are hopelessly lazy: in the face of dire descriptions of the threat landscape and repeated warnings, they do the minimum possible…

But by the end of his review, he offers a more sympathetic assessment:

“Given a choice between dancing pigs and security, users will pick dancing pigs every time.” While amusing, this is unfair: users are never offered security, either on its own or as an alternative to anything else. They are offered long, complex and growing sets of advice, mandates, policy updates and tips… We have shown that much of this advice does nothing to make users more secure, and some of it is harmful in its own right. Security is not something users are offered and turn down. What they are offered and do turn down is crushingly complex security advice that promises little and delivers less.

Herley’s paper concludes:

How can we help users avoid harm? This begins with a clear understanding of the actual harms they face, and a realistic understanding of their constraints. Without these we are proceeding blindly.

Exponential change

What are the “actual harms” that users face, as a result of insecure software systems or poor personal security habits?

We live in a time of rapid technology change. As software eats the world, it leaves more and more aspects of the world vulnerable to problems in the software – and vulnerable to problems in how that software is used, deployed, and updated.

As a result, the potential harm to users from poor security is constantly increasing. Users are vulnerable in new ways that they had never considered before.

Hacking embedded medical devices

For example, consider one possible unexpected side-effect of being fitted with one of the marvels of modern technology, an implantable heart pacemaker. At the Breakpoint conference in October 2012, security researcher Barnaby Jack of IOActive gave a devastating demonstration of how easily an outsider could interfere with the wireless system used to recalibrate a pacemaker. The result is summed up in this Computerworld headline, “Pacemaker hack can deliver deadly 830-volt jolt”:

The flaw lies with the programming of the wireless transmitters used to give instructions to pacemakers and implantable cardioverter-defibrillators (ICDs), which detect irregular heart contractions and deliver an electric shock to avert a heart attack.

A successful attack using the flaw “could definitely result in fatalities,” said Jack…

In a video demonstration, Jack showed how he could remotely cause a pacemaker to suddenly deliver an 830-volt shock, which could be heard with a crisp audible pop.

Hacking vehicle control systems

Consider also the predicament that many car owners in Austin, Texas experienced, as a result of the actions of a disgruntled former employee of used car retail firm Texas Auto Center. As Wired reported,

More than 100 drivers in Austin, Texas found their cars disabled or the horns honking out of control, after an intruder ran amok in a web-based vehicle-immobilization system normally used to get the attention of consumers delinquent in their auto payments.

Police with Austin’s High Tech Crime Unit on Wednesday arrested 20-year-old Omar Ramos-Lopez, a former Texas Auto Center employee who was laid off last month, and allegedly sought revenge by bricking the cars sold from the dealership’s four Austin-area lots.

Texas Auto Center had included some innovative new technology in the cars they sold:

The dealership used a system called Webtech Plus as an alternative to repossessing vehicles that haven’t been paid for. Operated by Cleveland-based Pay Technologies, the system lets car dealers install a small black box under vehicle dashboards that responds to commands issued through a central website, and relayed over a wireless pager network. The dealer can disable a car’s ignition system, or trigger the horn to begin honking, as a reminder that a payment is due.

The beauty of the system is that it allows a greater number of customers to purchase cars, even when their credit history looks poor. Rather than extensive up-front tests of the credit-worthiness of a potential purchaser, the system takes advantage of the ability to immobilise a car if repayments should cease. However, as Wired reports,

Texas Auto Center began fielding complaints from baffled customers the last week in February, many of whom wound up missing work, calling tow trucks or disconnecting their batteries to stop the honking. The troubles stopped five days later, when Texas Auto Center reset the Webtech Plus passwords for all its employee accounts… Then police obtained access logs from Pay Technologies, and traced the saboteur’s IP address to Ramos-Lopez’s AT&T internet service, according to a police affidavit filed in the case.

Omar Ramos-Lopez had lost his position at Texas Auto Center the previous month. Following good security practice, his own account on the Webtech Plus system had been disabled. However, it seems he gained access by using an account assigned to a different employee.

At first, the intruder targeted vehicles by searching on the names of specific customers. Then he discovered he could pull up a database of all 1,100 Auto Center customers whose cars were equipped with the device. He started going down the list in alphabetical order, vandalizing the records, disabling the cars and setting off the horns.

His manager ruefully remarked, “Omar was pretty good with computers”.

Hacking thermostats and lightbulbs

Finally, consider a surprise side-effect of attaching a new thermostat to a building. Modern thermostats exchange data with increasingly sophisticated systems that control heating, ventilation, and air conditioning. In turn, these systems can connect into corporate networks, which contain email archives and other confidential documents.

The US Chamber of Commerce, headquartered in Washington DC, discovered in 2011 that a thermostat in a townhouse it used was surreptitiously communicating with an Internet address somewhere in China. All the careful precautions of the Chamber’s IT department – including supervision of the computers and memory sticks used by employees – to guard against the possibility of such data seepage were undone by this unexpected security vulnerability in what seemed to be an ordinary household object. Information that leaked from the Chamber potentially included sensitive information about US policy for trade with China, as well as other key IP (Intellectual Property).

It’s not only thermostats that have much greater network connectivity these days. Toasters, washing machines, and even energy-efficient lightbulbs contain surprising amounts of software, as part of the implementation of the vision of “smart homes”. And in each case, this connectivity opens the potential for various forms of espionage and/or extortion. Former CIA Director David Petraeus openly rejoiced in that possibility, in remarks noted in a March 2012 Wired article “We’ll spy on you through your dishwasher”:

Items of interest will be located, identified, monitored, and remotely controlled through technologies such as RFID, sensor networks, tiny embedded servers, and energy harvesters — all connected to the next-generation internet using abundant, low-cost, and high-power computing…

Transformational is an overused word, but I do believe it properly applies to these technologies, particularly to their effect on clandestine tradecraft.

To summarise: smart healthcare, smart cars, and smart homes, all bring new vulnerabilities as well as new benefits. The same is true for other fields of exponentially improving technology, such as 3D printing, unmanned aerial vehicles (“drones”), smart toys, and household robots.

The rise of robots

Sadly, malfunctioning robots have already been involved in a number of tragic fatalities. In May 2009, an Oerlikon MK5 anti-aircraft system was part of the equipment used by 5,000 South African troops in a large-scale military training exercise. On that morning, the controlling software suffered what a subsequent enquiry would call a “glitch”. Writing in the Daily Mail, Gavin Knight recounted what happened:

The MK5 anti-aircraft system, with two huge 35mm cannons, is essentially a vast robotic weapon, controlled by a computer.

While it’s one thing when your laptop freezes up, it’s quite another when it is controlling an auto-loading magazine containing 500 high-explosive rounds…

“There was nowhere to hide,” one witness stated in a report. “The rogue gun began firing wildly, spraying high explosive shells at a rate of 550 a minute, swinging around through 360 degrees like a high-pressure hose.”

By the time the robot has emptied its magazine, nine soldiers lie dead. Another 14 are seriously injured.

Deaths due to accidents involving robots have also occurred throughout the United States. A New York Times article in June 2014 gives the figure of “at least 33 workplace deaths and injuries in the United States in the last 30 years.” For example, in a car factory in December 2001,

An employee was cleaning at the end of his shift and entered a robot’s unlocked cage. The robot grabbed his neck and pinned the employee under a wheel rim. He was asphyxiated.

And in an aluminium factory in February 1996,

Three workers were watching a robot pour molten aluminium when the pouring unexpectedly stopped. One of them left to flip a switch to start the pouring again. The other two were still standing near the pouring operation, and when the robot restarted, its 150-pound ladle pinned one of them against the wall. He was killed.

To be clear, in none of these cases is there any suggestion of foul play. But to the extent that robots can be remotely controlled, the possibility arises for industrial vandalism.

Indeed, one of the most infamous cases of industrial vandalism (if that is the right description in this case) is the way in which the Stuxnet computer worm targeted the operation of fast-spinning centrifuges inside the Iranian programme to enrich uranium. Stuxnet took advantage of at least four so-called “zero-day security vulnerabilities” in Microsoft Windows software – vulnerabilities that Microsoft did not know about, and for which no patches were available. When the worm found itself installed on computers controlling particular programmable logic controllers (PLCs), it began a complex sequence of monitoring and altering the performance of the equipment attached to those PLCs. The end result was that the centrifuges tore themselves apart, reportedly setting back the Iranian nuclear programme by a number of years.

Chillingly, what Stuxnet did to centrifuges, variants of the same kind of software could do to other industrial infrastructure – including energy and communication grids.

Therefore, whereas there is much to celebrate about the growing connectivity of “the Internet of Things”, there is also much to fear about it.

The scariest book

Many of the examples I’ve briefly covered above – the hacking of embedded medical devices, vehicle control systems, and thermostats and lightbulbs – as well as the upsides and downsides of “the rise of robots” – are covered in greater detail in a book I recently finished reading. The book is “Future Crimes”, by former LAPD police officer Marc Goodman. Goodman has spent the last twenty years working on cyber security risks with organisations such as Interpol, NATO, and the United Nations.

The full title of Goodman’s book is worth savouring: “Future Crimes: Everything is connected, everything is vulnerable, and what we can do about it.” Nikola Danaylov, host of the Singularity 1on1 podcast, recently described Future Crimes as “the scariest book I have ever read in my life”. That’s a sentiment I fully understand. The book has a panoply of “Oh my god” moments.

What the book covers is not only the exponentially growing set of vulnerabilities that our exponentially connected technology brings in its wake, but also the large set of people who may well be motivated to exploit these vulnerabilities. This includes home and overseas government departments, industrial competitors, disgruntled former employees, angry former friends and spouses, ideology-fuelled terrorists, suicidal depressives, and a large subset of big business known as “Crime Inc”. Criminals have regularly been among the very first to adopt new technology – and it will be the same with the exploitation of new generations of security vulnerabilities.

There’s much in Future Crimes that is genuinely frightening. It’s not alone in the valuable task of raising public awareness of increasing security vulnerabilities. I also recommend Kim Zetter’s fine investigative work “Countdown To Zero Day: Stuxnet and the launch of the world’s first digital weapon”. Some of the same examples appear in both books, providing added perspective. In both cases the message is clear – the threats to cybersecurity are likely to mushroom.

On the positive front, technology can devise countermeasures as well as malware. There has long been an arms race between software virus writers and software antivirus writers. This arms race is now expanding into many new areas.

If the race is lost, it means that security will eat the world in a bad way: the horror stories that are told throughout both Future Crimes and Countdown To Zero Day will magnify in both number and scope. In that future scenario, people will look back fondly on the present day as a kind of innocent paradise, in which computers and computer-based systems generally worked reliably (despite occasional glitches). Safe, clean computer technology might become as rare as bottled oxygen in an environment where smog and pollution dominate – something that is only available in small quantities, to the rich and powerful.

If the race is won, there will still be losers. I’m not just referring to Crime Inc, and other would-be exploiters of security vulnerabilities, whose ambitions will be thwarted. I’m referring to all the companies whose software will fall short of the security standards of the new market leaders. These are companies who pay lip service to the importance of robust, secure software, but whose products in practice disappoint customers. By that time, indeed, customers will long have moved on from preferring dancing pigs to good security. The prevalence of bad news stories – in their daily social media traffic – will transform their appreciation of the steps they need to take to remain as safe as possible. Their priorities will have changed. They’ll be eagerly scouring reports as to which companies have world-class software security, and which companies, on the other hand, have products that should be avoided. Companies in the former camp will eat those in the latter camp.

Complications with software updates

As I mentioned above, there can be security vulnerabilities, not only intrinsic in a given piece of software, but also in how that software is used, deployed, and updated. I’ll finish this article by digging more deeply into the question of software updates. These updates have a particularly important role in the arms race between security vulnerabilities and security improvements.

Software updates are a key part of modern technological life. These updates deliver new functionality to users – such as a new version of a favourite app, or an improved user interface for an operating system. They also deliver security fixes, along with other bug fixes. In principle, as soon as possible after a major security vulnerability has been identified and analysed, the vendor will make available a fix to that programming error.

However, many users dislike updates. On the one hand, they like receiving improved functionality. On the other hand, they fear that:

  • The upgrade will be time-consuming, locking them out of their computer systems at a time when they need to press on with urgent work
  • The upgrade will itself introduce new bugs, and break familiar patterns of how they use the software
  • Some of their applications will stop working, or will work in strange ways, after the upgrade.

The principle of “once bitten, twice shy” applies here. One bad experience with a software upgrade – such as favourite add-on applications getting lost in the process – may prejudice users against accepting any new upgrades.

My own laptop recently popped up an invitation for me to reserve a free upgrade from its current operating system – Windows 7.1 – to the forthcoming Windows 10. I confess that I have yet to click the “yes, please reserve this upgrade” button. I fear, indeed, that some of the legacy software on my laptop (including apps that are more than ten years old, and whose vendors no longer exist) will become dysfunctional.

The Android operating system for smartphones faces a similar problem. New versions of the operating system, which include fixes to known security problems, often fail to make their way onto users’ Android phones. In some cases, this is because the phones are running a reconfigured version of Android, which includes modifications introduced by a phone manufacturer and/or network operator. Any update has to wait until similar reconfigurations have been applied to the new version of the operating system – and that can take a long time, due to reluctance on the part of the phone manufacturer or network operator. In other cases, it’s simply because users decline to accept an Android upgrade when it is offered to them. Once bitten, twice shy.

Accordingly, there’s competitive advantage available to any company that makes software upgrades as smooth and reliable as possible. This will become even more significant as users grow in their awareness of the need to have security vulnerabilities in their computer systems fixed speedily.

But there’s a very awkward problem lurking around the upgrade process. Computer systems can sometimes be tricked into installing malicious software, whilst thinking it is a positive upgrade. In other words, the upgrade process can itself be hacked. For example, at the Black Hat conference in July 2009, IOActive security researcher Mike Davis demonstrated a nasty vulnerability in the software update mechanism in the smart electricity meters that were to be installed in homes throughout the Pacific Northwest of the United States.

For a riveting behind-the-scenes account of this particular research, see the book Countdown To Zero Day. In brief, Davis found a way to persuade a smart meter that it was being offered a software upgrade by a neighbouring, trusted smart meter, whereas it was in fact receiving software from an external source. This subterfuge was accomplished by extracting the same network encryption key that was hard-wired into every smart meter in the collection, and then presenting that encryption key as apparent (but bogus) evidence that the communication could be trusted. Once the meter had installed the upgrade, the new software could stop the meter from responding to any further upgrades. It could also switch off any electricity supply to the home. As a result, the electricity supplier would be obliged to send engineers to visit every single house that had been affected by the malware. In the simulated demo shown by Davis, this was as many as 20,000 separate houses within just a 24-hour period.
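
The detail worth dwelling on is the shared, hard-wired key. When every meter in a fleet authenticates its peers with the same symmetric key, anyone who extracts that key from a single device can impersonate a “trusted neighbour” to all of them. The sketch below is a deliberately simplified illustration of that failure mode; it is not Davis’s actual exploit, and the key, message formats, and checks are invented.

```python
# Why one hard-wired symmetric key across a fleet is fragile:
# the same key that verifies a peer's message also suffices to forge one.
# Simplified illustration only; not the real smart-meter protocol.

import hashlib
import hmac

FLEET_KEY = b"identical-key-baked-into-every-meter"  # invented for illustration

def tag(message: bytes, key: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def meter_accepts(message: bytes, mac: bytes) -> bool:
    """A meter trusts any message whose MAC verifies under the fleet-wide key."""
    return hmac.compare_digest(tag(message, FLEET_KEY), mac)

# Legitimate neighbour-to-neighbour update announcement.
genuine = b"update v2.1 from meter 0042"
assert meter_accepts(genuine, tag(genuine, FLEET_KEY))

# An attacker who pulled the key out of a single meter can now speak for the fleet.
extracted_key = FLEET_KEY  # obtained by physically examining one purchased device
forged = b"update EVIL from meter 0042"
print("forged update accepted:", meter_accepts(forged, tag(forged, extracted_key)))
# With per-device asymmetric signatures (private keys never shared), extracting
# one device's key would not let the attacker sign as the update authority.
```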

Uncharitably, we might think to ourselves that an electricity supplier is probably the kind of company to make mistakes with its software upgrade mechanism. As Mike Davis put it, “the guys that built this meter had a short-term view of how it would work”. We would expect, in contrast, that a company whose core business was software (and which had been one of the world’s leading software companies for several decades) would have no such glitches in its system for software upgrades.

Unexpectedly, one of the exploits utilised by the Stuxnet team was a weakness in part of the Microsoft Update system – a part that had remained unchanged for many years. The exploit was actually used by a piece of malware known as Flame, which shared many characteristics with Stuxnet. Mikko Hyppönen, Chief Research Officer of Finnish antivirus firm F-Secure, reported the shocking news as follows in a corporate blogpost tellingly entitled “Microsoft Update and The Nightmare Scenario”:

About 900 million Windows computers get their updates from Microsoft Update. In addition to the DNS root servers, this update system has always been considered one of the weak points of the net. Antivirus people have nightmares about a variant of malware spoofing the update mechanism and replicating via it.

Turns out, it looks like this has now been done. And not by just any malware, but by Flame…

Flame has a module which appears to attempt to do a man-in-the-middle attack on the Microsoft Update or Windows Server Update Services system. If successful, the attack drops a file called WUSETUPV.EXE to the target computer.

This file is signed by Microsoft with a certificate that is chained up to Microsoft root.

Except it isn’t signed really by Microsoft.

Turns out the attackers figured out a way to misuse a mechanism that Microsoft uses to create Terminal Services activation licenses for enterprise customers. Surprisingly, these keys could be used to also sign binaries…

Having a Microsoft code signing certificate is the Holy Grail of malware writers. This has now happened.

Hyppönen’s article ends with some “good news in the bad news” which nevertheless sounds a strong alarm about similar things going wrong (with worse consequences) in the future:

I guess the good news is that this wasn’t done by cyber criminals interested in financial benefit. They could have infected millions of computers. Instead, this technique has been used in targeted attacks, most likely launched by a Western intelligence agency.

How not to be eaten

Despite the threats that I’ve covered above, I’m optimistic that software security and software updates can be significantly improved in the months and years ahead – there’s plenty of scope for improvement.

One reason for this optimism is that I know that smart people have been thinking hard about these topics for many years. Good solutions are already available, ready for wider deployment, in response to stronger market readiness for such solutions.

But it will take more than technology to win this arms race. It will take political resolve. For too long, software companies have been able to ship software that has woefully substandard security. For too long, companies have prioritised dancing pigs over rock-hard security. They’ve written into their software licences that they accept no liability for problems arising from bugs in their software. They’ve followed, sometimes passionately, and sometimes half-heartedly, the motto from Facebook’s Mark Zuckerberg that software developers should “move fast and break things”.

That kind of behaviour may have been appropriate in the infancy of software. No longer.

Move fast and break things

21 May 2015

Anticipating 2040: The triple A, triple h+ vision

Abundance Access Action

The following vision arises from discussions with colleagues in the Transhumanist Party.

Abundance

Abundance – sustainable abundance – is just around the corner – provided we humans collectively get our act together.

We have within our grasp a sustainable abundance of renewable energy, material goods, health, longevity, intelligence, creativity, freedom, and positive experience.

This can be attained within one human generation, by wisely accelerating the green technology revolution – including stem cell therapies, 3D printing, prosthetics, robotics, nanotechnology, genetic engineering, synthetic biology, neuro-enhancement, artificial intelligence, and supercomputing.

Access

The rich fruits of technology – abundance – can and should be provided for all, not just for those who manage to rise to the top of the present-day social struggle.

A bold reorganisation of society can and should take place in parallel with the green technology revolution – so that everyone can freely access the education, healthcare, and everything else needed to flourish as a full member of society.

Action

To channel the energies of industry, business, finance, universities, and the media, for a richly positive outcome within the next generation, swift action is needed:

  • Widespread education on the opportunities – and risks – of new technology
  • Regulations and checks to counter short-termist action by incumbent vested interests
  • The celebration and enablement of proactive innovation for the common good
  • The promotion of scientific, rational, evidence-based methods for taking decisions, rather than ideologies
  • Transformation of our democracy so that governance benefits from the wisdom of all of society, and serves the genuine needs of everyone, rather than perpetuating the existing establishment.

Transhumanism 2040

Within one generation – 25 years, that is, by 2040 – human society can and should be radically transformed.

This next step of conscious evolution is called transhumanism. Transhumanists see, and welcome, the opportunity to intelligently redesign humanity, drawing wisely on the best resources of existing humanity.

The transhumanist party is the party of abundance, access, and action. It is the party with a programme to transcend (overcome) our ingrained human limitations – limitations of animal biology, primate psychology, antiquated philosophy, and 20th century social structures.

Transhumanism 2020

As education spreads about the potential for a transhumanist future of abundance, access, and action – and as tangible transhumanist projects are seen to be having an increasingly positive political impact – more and more people will start to identify themselves as transhumanists.

This growing movement will have consequences around the world. For example, in the general election in 2020 in the UK, there may well be, in every constituency, either a candidate from the Transhumanist Party, or a candidate from one of the other parties who openly and proudly identifies as a transhumanist.

The political landscape will never be the same again.

Call to action

To offer support to the Transhumanist Party in the UK (regardless of where you are based in the world), you can join the party by clicking the following PayPal button:

Join now

Membership costs £25 per annum. Members will be invited to participate in internal party discussions of our roadmap.

For information about the Transhumanist Party in other parts of the world, see http://transhumanistpartyglobal.org/.

For a worldwide transhumanist network without an overt political angle, consider joining Humanity+.

To discuss the politics of the future, without any exclusive link to the Transhumanist Party, consider participating in one of the Transpolitica projects – for example, the project to publish the book “Politics 2.0”.

Anticipating the Transhumanist Party roadmap to 2040

Footnote: Look out for more news of a conference to be held in London during Autumn (*), entitled “Anticipating 2040: The Transhumanist Party roadmap”, featuring speakers, debates, open plenaries, and closed party sessions.

If anyone would like to speak at this event, please get in touch.

Anticipating 2040
(*) Possible date is 3-4 October 2015, though planning is presently at a preliminary stage.

 

10 May 2015

When the future of smartphones was in doubt

It’s hard to believe it now. But ten years ago, the future of smartphones was in doubt.

At that time, I wrote these words:

Smartphones in 2005 are roughly where the Internet was in 1995. In 1995, there were, worldwide, around 20-40 million users of the Internet. That’s broadly the same number of users of smartphones there are in the world today. In 1995, people were debating the real value of Internet usage. Was it simply an indulgent plaything for highly technical users, or would it have lasting wider attraction? In 2005, there’s a similar debate about smartphones. Will smartphones remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

That was the opening paragraph in an essay which the Internet site Archive.org has preserved. The original location for the essay, the Symbian corporate website, has long since been retired, having been absorbed into Nokia’s infrastructure in 2009 (and, perhaps, absorbed in turn into Microsoft’s in 2014).

Symbian Way Back

The entire essay can be found here, warts and all. That essay was the first in a monthly series known as “David Wood Insight” which extended from September 2005 to September 2006. (The entire set still exists on Archive.org – and, for convenience, I’ve made a copy here.)

Ten years later, it seems to me that wearable computers in 2015 are roughly where smartphones were in 2005 (and where the Internet was in 1995). There’s considerable scepticism about their future. Will they remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

Some commentators look at today’s wearable devices, such as Google Glass and Apple Watch, and express disappointment. There are many ways these devices can be criticised. They lack style. They lack “must have” functionality. Their usability leaves a lot to be desired. Battery life is too short. And so on.

But, like smartphones before them – and like the world-wide web ten years earlier – they’re going to get much, much better as time passes. Positive feedback cycles will ensure that happens.

I share the view of Augmented Reality analyst Ori Inbar, who wrote the following a few months ago in an updated version of his “Smart Glasses Market Report”:

When contemplating the evolution of technology in the context of the evolution of humanity, augmented reality (AR) is inevitable.

Consider the innovation cycles of computing from mainframes, to personal computers, to mobile computing, to wearables: It was driven by our need for computers to get smaller, better, and cheaper. Wearables are exactly that – mini computers on track to shrink and disappear on our bodies. In addition, there is a fundamental human desire for larger and sharper displays – we want to see and feel the world at a deeper level. These two trends will be resolved with Augmented Reality; AR extends our natural senses and will become humans’ primary interface for interaction with the world.

If the adoption curve of mobile phones is to repeat itself with glasses – within 10 years, over 1 billion humans will be “wearing.”

The report is packed with insight – I fully recommend it. For example, here’s Ori’s depiction of four waves of adoption of smart glasses:

Smart Glasses Adoption

(For more info about Augmented Reality and smart glasses, readers may be interested in the forthcoming Augmented World Expo, held 8-10 June at the Santa Clara Convention Centre in Silicon Valley.)

What about ten more years into the future?

All being well, here’s what I might be writing some time around 2025, foreseeing the growing adoption of yet another wave of computers.

If 1995-2005 saw the growth of desktop and laptop computers and the world wide web, 2005-2015 saw the growing ubiquity of smartphones, and 2015-2025 will see the triumph of wearable computers and augmented reality, then 2025-2035 is likely to see the increasingly widespread usage of nanobots (nano-computers) that operate inside our bodies.

The focus of computer innovation and usage will move from portables to mobiles to wearables to insideables.

And the killer app of these embedded nanobots will be internal human enhancement:

  • Biological rejuvenation
  • Body and brain repair
  • Body and brain augmentation.

By 2025, these applications will likely be in an early, rudimentary state. They’ll be buggy, irritating, and probably expensive. With some justification, critics will be asking: Will nanobots remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

28 April 2015

Why just small fries? Why no big potatoes?

Filed under: innovation, politics, Transpolitica, vision — David Wood @ 3:12 pm

Last night I joined a gathering known as “Big Potatoes”, for informal discussion over dinner at the De Santis restaurant in London’s Old Street.

The potatoes in question weren’t on the menu. They were the potential big innovations that politicians ought to be contemplating.

The Big Potatoes group has a tag-line: “The London Manifesto for Innovation”.

As their website states,

The London Manifesto for Innovation is a contribution to improving the climate for innovation globally.

The group first formed in the run-up to the previous UK general election (2010). I blogged about them at that time, here, when I listed the principles from their manifesto:

  • We should “think big” about the potential of innovation, since there’s a great deal that innovation can accomplish;
  • Rather than “small is beautiful” we should keep in mind the slogan “scale is beautiful”;
  • We should seek more than just a continuation of the “post-war legacy of innovation” – that’s only the start;
  • Breakthrough innovations are driven by new technology – so we should prioritise the enablement of new technology;
  • Innovation is hard work and an uphill struggle – so we need to give it our full support;
  • Innovation arises from pure scientific research as well as from applied research – both are needed;
  • Rather than seeking to avoid risk or even to manage risk, we have to be ready to confront risk;
  • Great innovation needs great leaders of innovation, to make it happen;
  • Instead of trusting regulations, we should be ready to trust people;
  • Markets, sticks, carrots and nudges are no substitute for what innovation itself can accomplish.

That was 2010. What has caused the group to re-form now, in 2015, is the question:

Why is so much of the campaigning for the 2015 election preoccupied with small fries, when it could – and should – be concentrating on big potatoes?

Last night’s gathering was facilitated by three of the writers of the 2010 Big Potatoes manifesto: Nico Macdonald, James Woudhuysen, and Martyn Perks. The Chatham House Rule that was in place prevents me from quoting the participants directly. But the discussion stirred up plenty of thoughts in my own mind, which I’ll share now.

The biggest potato

I share the view expressed by renowned physicist Freeman Dyson, in the book “Infinite in all directions” from his 1985 Gifford lectures:

Technology is… the mother of civilizations, of arts, and of sciences

Technology has given rise to enormous progress in civilization, arts and sciences over recent centuries. New technology is poised to have even bigger impacts on civilization in the next 10-20 years. So why aren’t politicians paying more attention to it?

MIT professor Andrew McAfee takes up the same theme, in an article published in October last year:

History teaches us that nothing changes the world like technology

McAfee spells out a “before” and “after” analysis. Here’s the “before”:

For thousands of years, until the middle of the 18th century, there were only glacial rates of population growth, economic expansion, and social development.

And the “after”:

Then an industrial revolution happened, centred around James Watt’s improved steam engine, and humanity’s trajectory bent sharply and permanently upward

One further quote from McAfee’s article rams home the conclusion:

Great wars and empires, despots and democrats, the insights of science and the revelations of religion – none of them transformed lives and civilizations as much as a few practical inventions

Inventions ahead

In principle, many of the grave challenges facing society over the next ten years could be solved by “a few practical inventions”:

  • Students complain, with some justification, about the costs of attending university. But technology can enable better MOOCs – Massive Open Online Courses – that can deliver high quality lectures, removing significant parts of the ongoing costs of running universities; free access to such courses can do a lot to help everyone re-skill, as new occupational challenges arise
  • With one million people losing their lives to traffic accidents worldwide every year, mainly caused by human driver error, we should welcome the accelerated introduction of self-driving cars
  • Medical costs could be reduced by greater application of the principles of preventive maintenance (“a stitch in time saves nine”), particularly through rejuvenation biotechnology and healthier diets
  • A sustained green tech new deal should push society away from dependency on fuels that emit dangerous amounts of greenhouse gases, resulting in lifestyles that are positive for the environment as well as positive for humanity
  • The growing costs of governmental bureaucracy itself could be reduced by whole-heartedly embracing improved information technology and lean automation.

Society has already seen remarkable changes in the last 10-20 years as a result of rapid progress in fields such as electronics, computers, digitisation, and automation. In each case, the description “revolution” is appropriate. But even these revolutions pale in significance to the changes that will, potentially, arise in the next 10-20 years from extraordinary developments in healthcare, brain sciences, atomically precise manufacturing, 3D printing, distributed production of renewable energy, artificial intelligence, and improved knowledge management.

Indeed, the next 10-20 years look set to witness four profound convergences:

  • Between artificial intelligence and human intelligence – with next generation systems increasingly embodying so-called “deep learning”, “hybrid intelligence”, and even “artificial emotional intelligence”
  • Between machine and human – with smart technology evolving from “mobile” to “wearable” and then to “insideable”, and with the emergence of exoskeletons and other cyborg technology
  • Between software and biology – with programming moving from silicon (semiconductor) to carbon (DNA and beyond), with the expansion of synthetic biology, and with the application of genetic engineering
  • Between virtual and physical – with the prevalence of augmented reality vision systems, augmented reality education via new MOOCs (massive open online courses), cryptocurrencies that remove the need for centralised audit authorities, and lots more.

To take just one example: Wired UK has just reported a claim by Brad Perkins, chief medical officer at Human Longevity Inc., that

A “supercharged” approach to human genome research could see as many health breakthroughs made in the next decade as in the previous century

The “supercharging” involves taking advantage of four converging trends:

“I don’t have a pill” to boost human lifespan, Perkins admitted on stage at WIRED Health 2015. But he has perhaps the next best thing — data, and the means to make sense of it. Based in San Diego, Human Longevity is fixed on using genome data and analytics to develop new ways to fight age-related diseases.

Perkins says the opportunity for humanity — and Human Longevity — is the result of the convergence of four trends: the reduction in the cost of genome sequencing (from $100m per genome in 2000, to just over $1,000 in 2014); the vast improvement in computational power; the development of large-scale machine learning techniques; and the wider movement of health care systems towards ‘value-based’ models. Together these trends are making it easier than ever to analyse human genomes at scale.

Small fries

Whilst entrepreneurs and technologists are foreseeing comprehensive solutions to age-related diseases – as well as the rise of smart automation that could free almost every member of society from the need to toil in employment they dislike – what are politicians obsessing about?

Instead of the opportunities of tomorrow, politicians are caught up in the challenges of yesteryear and today. Like a short-sighted business management team obsessed by the next few quarterly financial results but losing sight of the longer term, these politicians are putting all their effort into policies for incremental changes to present-day metrics – metrics such as tax thresholds, the gross domestic product, policing levels, the degree of privatisation in the health service, and the rate of flow of migrants from Eastern Europe into the United Kingdom.

It’s like the restricted vision which car manufacturing pioneer Henry Ford is said to have complained about:

If I had asked people what they wanted, they would have said faster horses.

This is light years away from leadership. It’s no wonder that electors are deeply dissatisfied.

The role of politics

To be clear, I’m not asking for politicians to dictate to entrepreneurs and technologists which products they should be creating. That’s not the role of politicians.

However, politicians should be ensuring that the broad social environment provides as much support as possible to:

  • The speedy, reliable development of those technologies which have the potential to improve our lives so fully
  • The distribution of the benefits of these technologies to all members of society, in a way that preserves social cohesion without infringing individual liberties
  • The monitoring of these technologies for risks of accidents and other disastrous unintended consequences.

In this way, politicians help to address the human angle to technology. It’s as stated by management guru Peter Drucker in his 1986 book “Technology, Management, and Society”:

We are becoming aware that the major questions regarding technology are not technical but human questions.

Indeed, as the Transpolitica manifesto emphasises:

The speed and direction of technological adoption can be strongly influenced by social and psychological factors, by legislation, by subsidies, and by the provision or restriction of public funding.

Political action can impact all these factors, either for better or for worse.

The manifesto goes on to set out its objectives:

Transpolitica wishes to engage with politicians of all parties to increase the likelihood of an attractive, equitable, sustainable, progressive future. The policies we recommend are designed:

  • To elevate the thinking of politicians and other leaders, away from being dominated by the raucous issues of the present, to addressing the larger possibilities of the near future
  • To draw attention to technological opportunities, map out attractive roads ahead, and address the obstacles which are preventing us from fulfilling our cosmic potential.

Specific big potatoes that are missing from the discussion

If our political leaders truly were attuned to the possibilities of disruptive technological change, here’s a selection of the topics I believe would find much greater prominence in political discussion:

  1. How to accelerate lower-cost, high-quality, continuous access to educational material, such as MOOCs, that will prepare people for the radically different future that lies ahead
  2. How to accelerate the development of personal genome healthcare, stem cell therapies, rejuvenation biotech, and other regenerative medicine, in order to enable much healthier people with much lower ongoing healthcare costs
  3. How to ensure that a green tech new deal succeeds, rather than continuing to fall short of expectations (as it has been doing for the last 5-6 years)
  4. How to identify and accelerate the new industries in which the UK can play a leading role over the next 5-10 years
  5. How to construct a new social contract – perhaps involving universal basic income – in order to cope with the increased technological unemployment which is likely to arise from improved automation
  6. How society should be intelligently assessing any new existential risks that emerging technologies may unintentionally trigger
  7. How to transition the network of bodies that operate international governance to a new status that is fit for the growing challenges of the coming decades (rather than perpetuating the inertia from the times of their foundations)
  8. How technology can involve more people – and more wisdom and insight from more people – in the collective decision-making that passes for political processes
  9. How to create new goals for society that embody a much better understanding of human happiness, human potential, and human flourishing, rather than the narrow economic criteria that currently dominate decisions
  10. How to prepare everyone for the next leaps forward in human consciousness which will be enabled by explorations of both inner and outer space.

Why small fries?

But the biggest question of all isn’t anything I’ve just listed. It’s this:

  • Why are politicians still stuck in present-day small fries, rather than focusing on the big potatoes?

I’ll be interested in answers to that question from readers. In the meantime, here are my own initial thoughts:

  • The power of inertia – politicians, like the rest of us, tend to keep doing what they’re used to doing
  • Too few politicians have any deep personal insight (from their professional background) into the promise (and perils) of disruptive technology
  • The lack of a specific vision for how to make progress on these Big Potato questions
  • The lack of clamour from the electorate as a whole for answers on these Big Potato questions.

If this is true, we must expect it will take some time for public pressure to grow, leading politicians in due course to pay attention to these topics.

It will be like the growth in capability of any given exponential technology. At first, development takes a long time. It seems as if nothing much is changing. But finally, tipping points are reached. At that stage, it becomes imperative to act quickly. And at that stage, politicians (and their advisors) will be looking around urgently for ready-made solutions they can adapt from think tanks. So we should be ready.
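To illustrate that dynamic with a deliberately simplified numerical sketch (the starting level and doubling period below are arbitrary assumptions of mine, not forecasts):

```python
# Why exponential progress looks like "nothing much is changing" for years,
# and then crosses a tipping point abruptly. All numbers are illustrative.
capability = 0.01      # starting level, as a percentage of the tipping point
threshold = 100.0      # the tipping point
doubling_period = 2    # years per doubling

year = 0
while capability < threshold:
    year += doubling_period
    capability *= 2
    print(f"Year {year:2d}: {capability:7.2f}% of the threshold")
# The percentages stay tiny for most of the run; the final two or three
# doublings deliver almost all of the visible change.
```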

11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” tracks down and shoots the AGI researcher played by actor Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie, it doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines over the period 1978 to 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000 word essay Industrial Society and Its Future.

Kaczinki’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article “As technology swamps our lives, the next Unabombers are waiting for their moment”:

In 2011 a new Mexican group called the Individualists Tending toward the Wild were founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. In 2011, they detonated a bomb at a prominent nano-technology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposition to vaccinations or blood transfusions, or by denying female education).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies to the drawbacks of religion. And it applies, too, to the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140-character tweet, we sometimes have to trim the corners of proper spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic licence may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

cropped-cover-2That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novellist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts that are upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space risks stirring up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Criticisms

Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack expertise in AGI themselves. They may be experts in black hole physics (Hawking), or in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insights into the likely course of development of AGI. Therefore we shouldn’t pay particular attention to what they say.

A second criticism is that it’s premature to worry about the advent of AGI. AGI is still situated far into the future. In this view, as stated by Demis Hassabis, founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans. AGI will be a tool for humans, under human control, rather than having its own autonomy. This view is represented in this tweet from science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means decisive. They’re no reason to terminate the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all these of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people should read that book. It manages to bring a great deal of serious arguments to the table, in the course of entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

Elon Musk, one of the people who have raised the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. The same is true of Stephen Hawking and Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider as just one example Stuart Russell, who is a computer-science professor at the University of California, Berkeley, and co-author of the 1152-page best-selling textbook “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry.

Apple co-founder Steve Wozniak put his worries as follows, in an interview with the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede he has shown remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the criticism about timescales. The first is to point out that Demis Hassabis himself sees no reason for complacency, even though he thinks AGI may be “many decades” away from becoming a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

  • N <= 5: No way
  • 5 < N <= 10: Small possibility
  • 10 < N <= 20: > 50%.

In other words, in his view, there’s a greater than 50% chance that artificial general human-level intelligence will be solved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under the Chatham House Rule). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked for the dates when they were 90% sure human-level AGI would be achieved, 50% sure, and 10% sure. The average answers were:

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.
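Purely as an illustrative exercise – the straight-line interpolation here is my own crude simplification, not anything taken from Bostrom’s analysis – those three data points can be joined up to give a rough feel for the surveyed probability of human-level AGI by any given year:

```python
import numpy as np

# The three survey data points quoted above: (year, cumulative probability).
years = [2022, 2040, 2075]
probabilities = [0.10, 0.50, 0.90]

def rough_probability_by(year):
    """Crude straight-line interpolation between the survey's anchor points."""
    return float(np.interp(year, years, probabilities))

print(f"By 2025: ~{rough_probability_by(2025):.0%}")   # roughly 17%
print(f"By 2030: ~{rough_probability_by(2030):.0%}")   # roughly 28%
```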

If we respect what this survey says, there’s at least a 10% chance of breakthrough developments within the next ten years. Therefore it’s no real surprise that Hassabis says

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

Third, I’ll give my own reasons for why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” that galvanise society into supporting AGI research much more fully (especially once AGI starts producing big benefits in healthcare diagnosis).

Answering the criticisms: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

For example, physics professor and science populariser Michio Kaku speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “Control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside of those we explicitly anticipated; the response of the software in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere of roughly 40,000 kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.
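As a rough check on that figure, here’s a minimal sketch of the westward distance implied by the geography – the coordinates are approximate modern values I’ve assumed for the Canary Islands and Japan, and the calculation simply follows a line of constant latitude rather than any realistic sailing route:

```python
import math

# Approximate positions (degrees): the Canary Islands and Japan.
canaries_lat, canaries_lon = 28.0, -16.0
japan_lon = 140.0
earth_circumference_km = 40_075   # equatorial circumference

# Longitude to be covered sailing westwards from the Canaries to Japan.
eastward_degrees = japan_lon - canaries_lon   # ~156 degrees going east
westward_degrees = 360 - eastward_degrees     # ~204 degrees going west

# A parallel of latitude shrinks with cos(latitude).
parallel_km = earth_circumference_km * math.cos(math.radians(canaries_lat))
westward_km = parallel_km * westward_degrees / 360
print(f"Roughly {westward_km:,.0f} km of ocean")
# ~20,000 km with these rough inputs - the same ballpark as the 19,000 km
# figure above, and several times Columbus's estimate of 4,440 km.
```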

Most of the countries where Columbus pitched the idea of his westward journey turned him down – believing instead the figures for the larger circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian Ocean around the tip of Africa), the Spanish king and queen agreed to support his adventure. Fortunately for Columbus, a large continent existed en route to Asia, allowing him landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe. Transmission of European diseases compounded the misery.

It may be the same with AGI. Rational observers may have ample justification for thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that the Cambridge University CSER (Centre for the Study of Existential Risk) have four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!

11 March 2015

My vision for Humanity+, 2015-2017

Filed under: Humanity Plus, vision — Tags: , , — David Wood @ 1:40 pm

The most important task for the worldwide Humanity+ organisation, over the next three years, is to dramatically raise the calibre of public discussion about transhumanism and radical futurism.

As an indication of the status quo of the public discussion about transhumanism, type the words “Transhumanists are” into a Google search bar. Google charmingly suggests the following auto-completions:

  • Transhumanists are stupid
  • Transhumanists are evil
  • Transhumanists are crazy.


These sentiments are at stark variance with what I believe to be the case: transhumanists have an insight that deserves much wider support – an insight that, if acted on, will lead to vast improvements in the quality of life of people all over the planet.

That insight – known as the “central meme of transhumanism” – is that we can and should improve the human condition through technology. Rather than continuing to be diminished by limitations inherited from our evolutionary heritage – limitations in our physiology, our psychology, our philosophy, and our social structures – we can and should take conscious control of the next stage of human evolution. We can and should move from a long phase of Darwinian natural selection to a phase of accelerated intelligent design.

Transhumanists boldly assert, in the FAQ maintained on the Humanity+ website, that

Transhumanism is a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.

Transhumanism is the viewpoint that human society should embrace, wisely, thoughtfully, and compassionately, the radical transformational potential of technology. Recent and forthcoming breakthroughs in technology fields such as nanotechnology, synthetic biology, renewable energy, regenerative medicine, brain sciences, big data analytics, robotics, and artificial intelligence can:

  • Enable humans to transcend (overcome) many of the deeply debilitating, oppressive, and hazardous aspects of our lives
  • Allow everyone a much wider range of personal autonomy, choice, experience, and fulfilment
  • Facilitate dramatically improved international relations, social harmony, and a sustainable new cooperation with nature and the environment.

Different opinions

But as I said, most people see things differently. They doubt that technology will change human nature any time soon. Or, inasmuch as technology might change core aspects of human existence, they fear these changes will be for the worse. Or, if they think technology is likely to improve human experience, they see no need for any “ism” – any philosophy or movement – that promotes such an outcome; instead, they think it will be sufficient to leave technologists and entrepreneurs to get on with the task, unencumbered by philosophical baggage.

I’m very happy to enter discussion on all these points with informed critics of transhumanism – with people who are open to constructive dialogue. That’s a dialogue I wish to promote. That dialogue is, as I see things, a core part of the mission of the Humanity+ organisation.

All too often, however, critics of transhumanism (including the people noticed by Google as thinking that transhumanists are “stupid”, “evil”, and “crazy”) have only a hazy understanding of transhumanism. Worse, all too often the same people have only a hazy idea of the radical transformative potential of accelerating technology. To the extent that these people (who probably form the vast majority of the population) are futurists at all, they are “slow-paced” futurists rather than fast-paced futurists (to use a couple of terms I’ve written about previously). They’re largely oblivious to the far-reaching nature of changes that may take place in the next few decades.

To an extent, we transhumanists and other radical futurists share part of the blame for this situation. In our discussions of the positive transformational potential of technology, we’ve sometimes been collectively guilty of:

  • Presenting these technological developments as more-or-less inevitable, and as happening according to an inviolable timescale (linked over-closely to Moore’s Law)
  • Emphasising only the positive implications of these changes, and giving scant attention to potential negative implications
  • Taking it for granted that these positive benefits will become accessible to everyone, regardless of income, without there being any risk of them primarily benefiting the people who are already powerful and rich.

In other words, our collective advocacy of transhumanism has sometimes suffered from science fiction hype, wishful thinking, and political naivety. The popular negative appraisal of transhumanism stems, in part, from a reaction against these missteps.

A better dialogue

That’s what I believe the Humanity+ organisation can fix. Humanity+ can lead the way in encouraging a wiser, more credible, and more compelling assessment of transhumanism and radical futurism. This will involve multi-dimensional communications – short form and long form, written and video, intellectual and artistic, prose and poetry, serious and humorous, scientific and literary, real-time and recorded, face-to-face and online. As this library of material grows, it will be less and less possible for critics to radically misrepresent the intent and vision of transhumanists. Neutral observers will quickly call them out: you say such-and-such, but the clear evidence is that transhumanists have a much better understanding than that.

As time progresses, more and more people will understand the central messages of transhumanism. They’ll identify with these messages, viewing them as sensible, reasonable, and praiseworthy. And they’ll put more pressure on leaders of all sectors of society to prioritise changes which will accelerate the attainment of the positive evolution of humanity.

Practical steps

The outgoing board of directors of Humanity+ have already sketched a high-level strategic plan which will, in effect, put the organisation in much better shape to carry out the role I’ve described above. I was part of the team that drew up that plan, and I’m now asking the set of Full Members of the organisation to choose me as one of their preferred candidates for the four elected vacancies on the board.

The strategic plan can be described in terms of five components: stability, speed, scale, vision, and engagement:

  • Stability: Recent changes in the constitution of Humanity+ have been designed to ensure greater stability in the format and membership of the board of directors. Rather than elections being held on an annual basis, the board now operates with a three-year cycle. For each three-year period, five of the directors are appointed to their roles by the outgoing board, and four more are elected by a vote by all Full Members. This hybrid structure seems to me to provide a strong basis for the other changes which I will describe next
  • Speed: For the last few years, Humanity+ has shown some aspects of being a bureaucratic organisation, held back from its true potential by a mix of inertia and unclear (diffuse) vision. By adopting modern principles of lean organisations and exponential organisations – learning from principles of successful business startups – the organisation can, and should, move more quickly. I offer my own experience in getting things done quickly – experience which I have honed over 25 years in the mobile computing and smartphone industry
  • Scale: To have a bigger impact, Humanity+ needs to be able to make better use of its wide network of potential supporters. In part, this involves hiring a Development Director, to improve the financial footing of the organisation. In part, this involves revitalising our structure of chapters, affiliates, and volunteer effort. Finally, this also involves modernising our use of information technology. I expect each of the new board members to play important roles in improving these structures
  • Vision: Perhaps the single most important energiser of action is to have a clear, inspiring, stretch goal – a so-called “massively transformational purpose”. My own personal vision is “transhumanism for all” – something I have spelt out in more detail in my online declaration of interest in being elected to continue my role on the board. In terms of a vision for Humanity+, I offer “dramatically raise the calibre of public discussion about transhumanism and radical futurism” (though I’m open to re-wording). That is, I offer the vision that I’ve described in the opening part of this article
  • Engagement: The public discussion about transhumanism has recently been heating up. Transhumanist ideas are appearing more and more often in popular magazines, including Time, Newsweek, and Bloomberg Markets (as I covered in a recent blogpost). Significant credit is due here to the high-energy work of the recently formed Transhumanist Party, led by Zoltan Istvan. The headline in a recent article in The Leftist Review put it as follows: “The age of transhumanist politics has begun”. As that article goes on to say, “transhumanist politics has momentous growth potential but with uncertain outcomes. The coming years will probably see a dialogue between humanism and transhumanism in — and about — most crucial fields of human endeavor, with strong political implications”. Humanity+ cannot stand aside from this engagement. Over the next few years, our engagement needs to continue to expand – not just in the worlds of science and technology, but also in the worlds of art, economics, and (last but not least) politics. One reason I recently founded the Transpolitica think-tank was to accelerate exactly that kind of dialogue. I’ll be delighted to position Humanity+ as being at the heart of that dialogue, rather than standing at the periphery.

A resilient, long-term contributor

I’ve recently passed the landmark of having organised 100 London Futurists events. As I covered in a previous blogpost, that series of meetings has extended for seven years (March 2008 to March 2015). I mention this as an example of the way I am able to work:

  • Long-term commitment
  • Regular incremental improvements
  • Success via building a collaborative team (including volunteers and regular audience members)
  • Hands-on facilitation and leadership.

That’s the kind of working discipline that I wish to continue to apply on the Humanity+ board.

The endorsements framework on LinkedIn is far from being a watertight reputation management system, but the set of endorsements that my professional colleagues have kindly provided for me surely gives at least some indication of my positive qualities.

For Humanity+ Full Members wishing to check out my personal history and philosophy in more detail, one option is to dip into my book “Smartphones and beyond: lessons from the remarkable rise and fall of Symbian”. Other options are to leaf through the eclectic set of articles on my personal blog (a couple of representative examples are “A muscular new kid on the block” and “Towards inner Humanity+”), and to view the videos on the Delta Wisdom and London Futurists channels on YouTube.

For transhumanists (old and new) who are currently not Full Members of Humanity+, you can find more details here about how to join the organisation. The election runs until midnight PST on 31st March. People who become Full Members up to 24 hours before the end of the election period will be added to the set of electors.


10 March 2015

100 not out: 7 years of London Futurists

When my mouse skimmed across the page of the London Futurists meetup site a few days ago, it briefly triggered a pop-up display that caught my eye. The display summarised my own activities within London Futurists. “Been to 100 Meetups” was the phrase that made me pause. That’s a lot of organising, I thought.

That figure of 100 doesn’t quite tell the full story. The events that I’ve organised under the London Futurists umbrella, roughly once or twice a month, are part of a longer series that goes all the way back to the 15th of March 2008. In those days, I used the UK Humanity+ group in Facebook to publicise these events (along with some postings in blogs such as Extrobritannia). I discovered the marvels of Meetup in 2009, and adopted the name “London Futurists” from that time.

Browsing the history of these events in Facebook’s archive, over the seven years from March 2008 to the present day, I see there have been periods of relative activity and periods of relative quiet:

  • 10 events in 2008, 13 in 2009, and 11 in 2010
  • a period of relative quiet, 2011-2012, when more of my personal focus was pre-occupied by projects at my then employer, Accenture
  • 21 events in 2013, and another 21 in 2014
  • 6 events already in 2015.

This long series of events has evolved as time has progressed:

  • Initially they were free to attend, but for the last few years, I’ve charged a £5 entrance fee, to cover the room hire costs
  • We’ve added occasional Hangout-on-Air video events, to complement the in-real-life meetups
  • More recently, we’ve videoed the events, and make the recordings available afterwards.

For example, here’s the video of our most recent event: The winning of the carbon war, featuring speaker Jeremy Leggett. (Note: turn down your volume before listening, as the audio isn’t great on this occasion.)

Another important change over the years is that the set of regular and occasional attendees has grown into a fine, well-informed audience, who reliably ask speakers a probing and illuminating set of questions. If I think about the factors that make these meetups successful, the audience deserves significant credit.

But rather than looking backwards, I prefer to look forwards. As was said of me in a recent profile article in E&T, “David Wood: why the future matters”,

Wood’s contribution to the phenomenon of smart, connected mobile devices has earned him plenty of recognition… While others with a similar track record might consider their mid-50s to be the time to start growing wine or spending afternoons on the golf course, Wood thinks his “next 25 years will take that same vision and give it a twist. I now look more broadly at how technology can help all of us to become smarter and more mobile”.

Thankfully, mainstream media have recently been carrying more and more articles about radical futurist topics that would, until a short while ago, have been regarded as fringe and irresponsible. These are topics that have regularly been addressed during London Futurists events over the last seven years. To take just one example, consider the idea that technology may soon provide the ability to radically extend healthy human lifespan – perhaps indefinitely:

  • The cover of Time for February 12th displayed a baby, with the accompanying text: This baby could live to be 142 years old. Despatches from the frontiers of longevity
  • The cover of Newsweek on March 5th proclaimed the message Never say die: billionaires, science, and immortality
  • The cover for Bloomberg Markets for April will bear the headline Google wants you to live forever

It’s worth reiterating the quote which starts the Bloomberg Markets article – a quote from Bill Maris, the president and managing director of Google Ventures:

If you ask me today, is it possible to live to be 500? The answer is yes.

Alongside articles on particular transhumanist and radical futurist themes – such as healthy life-extension, superhuman artificial intelligence, and enhanced mental well-being – there has been a recent flurry of general assessments of the growing importance of the transhumanist philosophy. For example, note the article “The age of transhumanist politics has begun” from The Leftist Review a few days ago. Here’s a brief extract:

According to political scientist and sociologist Roland Benedikter, research scholar at the University of California at Santa Barbara, “transhumanist” politics has momentous growth potential but with uncertain outcomes. The coming years will probably see a dialogue between humanism and transhumanism in — and about — most crucial fields of human endeavor, with strong political implications that will challenge, and could change the traditional concepts, identities and strategies of Left and Right.

The age of transhumanist politics may well have begun, but it has a long way to run. And as Benedikter sagely comments, although there is momentous growth potential, the outcome remains uncertain. That’s why the next item in the London Futurists series – the one which will be the 101st meetup in that series – is on the theme “Anticipating tomorrow’s politics”. You can find more details here:

This London Futurists event marks two developments in the political landscape:

  • The launch of the book “Anticipating tomorrow’s politics”
  • The launch of the Transhumanist Party in the UK.

The speakers at this event, Amon Twyman and David Wood, will be addressing the following questions:

  • How should politics change, so that the positive potential of technology can be safely harnessed to most fully improve human society?
  • What are the topics that politicians generally tend to ignore, but which deserve much more attention?
  • How should futurists and transhumanists regard the political process?
  • Which emerging political movements are most likely to catalyse these needed changes?

All being well, a video of that event will be posted online shortly afterwards, for those unable to attend in person. But for those who attend, there will be plenty of opportunity to contribute to the real-time discussion.

Footnote: The UK Humanity+ events were themselves preceded by a series organised by “Estropico”, stretching back at least as far as 2003. (A fuller history of transhumanism in the UK is being assembled as part of the background briefing material for the Transhumanist Party.)

15 February 2015

Ten years of quantified self

Filed under: books, healthcare — Tags: , , , , , , , — David Wood @ 12:02 am

Ten years. Actually 539 weeks. I’ve been recording my weight every morning since 23 October 2004, and adding a new data point to my chart every weekend.

[Chart: 10 years of quantified self – weekly weight measurements]

I’ve been recording my weight ever since I read that people who monitor their weight on a regular basis are more likely to avoid it ballooning upwards. There’s an instant feedback which allows me to seek adjustments in my personal health regime. With ten years of experience under my (varyingly-sized) belt, I’m strongly inclined to continue the experiment.

The above chart started life on my Psion Series 5mx PDA. Week after week, I added data, and watched as the chart expanded. Eventually, the graph hit the limits of what could be displayed on a single screen on the S5mx (width = 480 pixels), so I had to split the chart into two. And then three. Finally, after a number of hardware failures in my stock of S5mx devices, I transferred the data into an Excel spreadsheet on my laptop several months ago. Among other advantages, it once again lets me see the entire picture.
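For anyone wanting to keep a similar long-running chart today, here’s a minimal sketch using pandas and matplotlib – the file name and column layout are assumptions of mine, not the format of my original Psion or Excel data:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed format: one row per weekly weigh-in, e.g. "2004-10-23,96.8"
weights = pd.read_csv("weight_log.csv", names=["date", "kg"], parse_dates=["date"])

plt.figure(figsize=(10, 4))
plt.plot(weights["date"], weights["kg"], linewidth=1)
plt.xlabel("Date")
plt.ylabel("Weight (kg)")
plt.title("Weekly weight, 2004 onwards")
plt.tight_layout()
plt.savefig("weight_chart.png")   # a whole decade fits comfortably on one chart
```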

This morning, 14th Feb 2015, I saw the scales dip down to a point I had last reached in September 2006. This result seems to confirm the effectiveness of my latest dietary regime – which I’ve been following since July. Over these seven months, I’ve shrunk from a decidedly unhealthy (and unsightly) 97 kg down to 81 kg.

In terms of the BMI metric (Body Mass Index), that’s a reduction from 31.2 – officially “obese” – down to 26.4, which is still “marginally overweight”, since, for men, the top end of the BMI scale for a “healthy weight for adults” is 24.9. With my height, that would mean a weight of 77 kg. So there’s still a small journey for me to travel. But I’m happy to celebrate this incremental improvement!
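For reference, here’s the arithmetic behind those figures as a minimal sketch – BMI is weight in kilograms divided by the square of height in metres, and the height below is my own back-calculation from the BMI values quoted above rather than something stated in this post:

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight divided by the square of height."""
    return weight_kg / height_m ** 2

height = 1.76   # metres - back-calculated from the weight and BMI figures above
for weight in (97, 81):
    print(f"{weight} kg -> BMI {bmi(weight, height):.1f}")

# Heaviest weight still inside the "healthy" range (BMI 24.9) at this height:
print(f"Target weight: {24.9 * height ** 2:.0f} kg")   # roughly 77 kg
```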

The NHS page on BMI issues this sobering advice:

BMI of 30 or more: a BMI above 30 is classified as obese. Being obese puts you at a raised risk of health problems such as heart disease, stroke and type 2 diabetes. Losing weight will bring significant health improvements…

BMI score of 25 or more: your BMI is above the ideal range and this score means you may be overweight. This means that you’re heavier than is healthy for someone of your height. Excess weight can put you at increased risk of heart disease, stroke and type 2 diabetes. It’s time to take action…

As the full chart of my weight over the last ten years shows, I’ve had three major attempts at “action” to achieve a healthier body mass.

The first: For a while in 2004 and 2005, I restricted myself to two Herbalife meal preparations a day – even when I was travelling.

Later, in 2011, I ran across the book by Gary Taubes, “Why We Get Fat: And What to Do About It”, which made a great deal of sense to me. Taubes emphasises that some kinds of calories are more damaging to health than others. Specifically, carbohydrates such as wheat change the body’s metabolism, making it retain more weight. I also read “Wheat Belly” by William Davis. Here’s an excerpt from the description of that book:

Renowned cardiologist William Davis explains how eliminating wheat from our diets can prevent fat storage, shrink unsightly bulges and reverse myriad health problems.

Every day we eat food products made of wheat. As a result millions of people experience some form of adverse health effect, ranging from minor rashes and high blood sugar to the unattractive stomach bulges that preventative cardiologist William Davis calls ‘wheat bellies’. According to Davis, that fat has nothing to do with gluttony, sloth or too much butter: it’s down to the whole grain food products so many people eat for breakfast, lunch and dinner.

After witnessing over 2,000 patients regain their health after giving up wheat, Davis reached the disturbing conclusion that wheat is the single largest contributor to the nationwide obesity epidemic – and its elimination is key to dramatic weight loss and optimal health.

In Wheat Belly, Davis exposes the harmful effects of what is actually a product of genetic tinkering being sold to the public as ‘wheat’ and provides readers with a user-friendly, step-by-step plan to navigate a new, wheat-free lifestyle. Benefits include: substantial weight loss, correction of cholesterol abnormalities, relief from arthritis, mood benefits and prevention of heart disease.

As a result, I cut back on carbohydrates – and was pleased to see my weight plummet once again. For a while – until I re-acquired many of my former carb-enjoying habits, whoops.

That takes me to regime number three. This time, I’ve followed the more recent trend known as “5+2”. According to this idea, people can eat normally for, say, five days in the week, and then eat a much-reduced number of calories on the other two days (known as “fasting days”). My initial worry about this approach was that I wasn’t sure I’d eat sensible foods on the two low-calorie days.

That’s when I ran across the meal preparations of the LighterLife company. These include soups, shakes, savoury meals, porridge, and bars. Each of these meals is just 150-200 calories. LighterLife suggest that people eat, on their low-calorie days, four of these meals. These preparations include sufficient proteins, fibre, and 100% of the recommended daily intake of key vitamins and minerals.

To be clear, I am not a medical doctor, and I urge anyone who is considering adopting a diet to obtain their own medical advice. I also recognise that different people have different metabolisms, so a diet that works for one person won’t necessarily work for someone else. However, I can share my own personal experience, in case it inspires others to do their own research:

  • Instead of 5+2, I generally follow 3+4. That is, I have four low-calorie days each week, along with three other days in which I tend to indulge myself (except that, on these other days, I still try to avoid consuming too many carbs, such as wheat, bread, rice, and potatoes)
  • On the low-calorie days, I generally eat around 11.30am, 2.30pm, 5.30pm, and 8.30pm
  • If I’m working at home, I’ll include soups, a savoury meal, and shakes; if I’m away from home, I’ll eat three (or four) different bars, that I pack into my back-pack at the beginning of the day
  • On the low-calorie days, it’s important to drink as well as to eat, but I avoid any drinks with calories in them. In practice, I find drinks of herbal teas to be very effective at dulling any sense of hunger I’m experiencing
  • In addition to eating less, I continue to do a lot of walking (e.g. between Waterloo Station and meeting locations in Central London), as well as other forms of exercise (such as sessions on the golf driving range or golf course).

Note: I know that BMI is far from being a complete representation of personal healthiness. However, I view it as a good starting point.

To round off my recommendations for diet-related books that I have particularly enjoyed reading, I’ll add “Mindless eating” by Brian Wansink to the two I mentioned earlier. I listened to the Audible version of that book. It’s hilarious, but thought-provoking, and the research it describes seems very well founded:

Every day, we each make around 200 decisions about eating. But studies have shown that 90% of these decisions are made without any conscious choice. Dr Brian Wansink lays bare the facts about our true eating habits to show that awareness of our patterns can allow us to lose weight effectively and without serious changes to our lives. Dr Wansink’s revelations include:

  • Food mistakes we all make in restaurants, supermarkets and at home
  • How we are manipulated by brand, appearance and parental habits more than price and our choices
  • Our emotional relationship with food and how we can overcome it to revitalise our diets.

Forget calorie counting and starving yourself and learn the truth about why we overeat in this fascinating, innovative guide.


I’ll finish by thanking my friends, family, and colleagues for their gentle and thoughtful encouragement, over the years, for me to keep an eye on my body mass, and on the general goodness of what I eat. “Health is the first wealth”.
