dw2

19 March 2020

Improving online events, for the sake of a better discussion of what truly matters

In a time of travel restrictions and operating from home, we’re all on a learning curve. There’s much for us to find out about alternatives to meeting in our usual physical locations.

London Futurists have been meeting in various physical locations for twelve years. We’ve also held a number of online gatherings over that time, using tools such as Google Hangouts on Air. But now the balance needs to shift. Given the growing Covid-19 lockdown, all London Futurists physical meetings are cancelled for the time being. While the lockdown continues, the group’s activities will be 100% online.

But what does this mean in practice?

I’d like to share some reflections from the first of this new wave of London Futurists events. That online gathering took place on Saturday, 14th March, using the meeting platform Zoom.

Hopefully my observations can help others to improve their own online events. Hopefully, too, readers of this blog will offer answers or suggestions in response to questions I raise.

Context: our event

Our event last Saturday was recorded, and the footage subsequently edited – removing, for example, parts where speakers needed to be told their microphones were muted. Here’s a copy of the resulting video:

By prior arrangement, five panellists gave short introductory talks, each lasting around 5-10 minutes, to set the stage for group discussion. Between 50 and 60 audience participants were logged into the event throughout. Some of them spoke up during the event; a larger number participated in an online text chat discussion that proceeded in parallel (there’s a lightly edited copy of the text discussion here).

As you can see from the recording, the panellists and the other participants raised lots of important points during the discussion. I’ll get back to these shortly, in another blogpost. But first, some thoughts about the tools and the process that were used for this event.

Context: Zoom

Zoom is available at a number of different price levels:

  • The “Free” level is restricted to meetings of up to 40 minutes.
  • The “Pro” level – which costs £11.99 per month – supports longer meetings (up to 24 hours), recording of events, and other elements of admin and user management. This is what I use at the moment.
  • I’ve not yet explored the more expensive versions.

Users participating in an event can turn their cameras on or off, and can share their screen (in order, for example, to present slides). Participants can also choose at any time to see a view of the video feeds from all participants (up to 25 on each page), or a “presenter view” that focuses on the person Zoom detects as the current speaker.

Recording can take place locally, on the host’s computer (and, if enabled by the organiser, on participants’ computers). Recording can also take place on the Zoom cloud. In this case, what is recorded (by default) is the “presenter view”.

The video recording can subsequently be downloaded and edited (using any video editing software – what I use is Cyberlink PowerDirector).

Limitations and improvements

I switched some time ago from Google Hangouts-on-Air (HoA) to Zoom, when Google reorganised their related software offerings during 2019.

One feature of the HoA software that I miss in Zoom is the ability for the host to temporarily “blue box” a participant, so that their screen remains highlighted, regardless of which video feeds contain speech or other noises. Without this option, what happens – as you can see from the recording of Saturday’s event – is that the presentation view can jump to display the video from a participant who is not speaking at that moment. For five seconds or so, the display shows the participant staring blankly at the camera, generally without realising that the focus is now on them. What made Zoom shift the focus is that it detected some noise from that video feed – perhaps a cough, a laugh, a moan, a chair sliding across the floor, or some background discussion.

(Participants in the event needn’t worry, however, about their blank stares or other inadvertent activity being contained in the final video. While editing the footage, I removed all such occurrences, covering up the displays, while leaving the main audio stream in place.)

In any case, participants should mute their microphones when not speaking. That avoids unwanted noise reaching the event. However, it’s easy for people to neglect to do so. For that reason, Zoom provides the host with admin control over which mics are on or off at any time. But the host may well be distracted too… so the solution is probably for me to enrol one or two participants with admin powers for the event, and ask them to keep an eye on any mics being left unmuted at the wrong times.

Another issue is the variable quality of the microphones participants were using. If a participant turns their head while speaking – for example, to consult some notes – it can become hard to hear what they’re saying. A good solution here is a head-mounted microphone.

A related problem arises from occasional local bandwidth issues when a participant is speaking. Some or all of what they say may be obscured, slurred, or missed altogether. The broadband in my own house is a case in point. As it happens, I have an order in the queue to switch my house to a different broadband provider. But this switch is presently being delayed.

Deciding who speaks

When a topic is thought-provoking, there are generally lots of people with things to contribute to the discussion. Evidently, they can’t all talk at once. Selecting who speaks next – and deciding how long they can speak before they might need to be interrupted – is a key part of chairing successful meetings.

One guide to who should be invited to speak next, at any stage in a meeting, is the set of comments raised in the text chat window. However, in busy meetings, important points raised can become lost in the general flow of messages. Ideally, the meeting software will support a system of voting, so that other participants can indicate their choices of which questions are the most interesting. The questions that receive the most upvotes will become the next focus of the discussion.
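
To make the mechanism concrete, here is a minimal sketch of such an upvote queue, in Python. It is illustrative only – the class and method names are invented, and a real platform would integrate something like this with its chat and identity systems:

```python
import itertools

class QuestionQueue:
    """Questions with the most upvotes surface first; ties go to the earliest asker."""

    def __init__(self):
        self._arrivals = itertools.count()
        self._questions = {}  # question id -> [votes, arrival order, text]

    def ask(self, qid, text):
        self._questions[qid] = [0, next(self._arrivals), text]

    def upvote(self, qid):
        self._questions[qid][0] += 1

    def next_question(self):
        # Most votes wins; earlier arrival breaks ties.
        qid = max(self._questions,
                  key=lambda q: (self._questions[q][0], -self._questions[q][1]))
        return self._questions.pop(qid)[2]

queue = QuestionQueue()
queue.ask("q1", "Should hosts delegate the management of muted microphones?")
queue.ask("q2", "Which platforms integrate question voting?")
queue.upvote("q2"); queue.upvote("q2"); queue.upvote("q1")
print(queue.next_question())  # the question with two upvotes surfaces first
```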

London Futurists have used such software in the past, including Glisser and Slido, at our physical gatherings. For online events, ideally the question voting mechanism will be neatly integrated with the underlying platform.

I recently took part in one online event (organised by the Swiss futurist Gerd Leonhard) where the basic platform was Zoom and where there was a “Q&A” voting system for questions from the audience. However, I don’t see such a voting system in the Zoom interface that I use.

Added on 20th March

Apparently there’s a Webinar add-on for Zoom that provides better control of meetings, including the Q&A voting system. The additional cost of this add-on starts from £320 per annum. I’ll be looking into this further. See this feature comparison page.

Thanks to Joe Kay for drawing this to my attention!

Summarising key points

The video recording of our meeting on Saturday lasts nearly 100 minutes. To my mind, the discussion remained interesting throughout. However, inevitably, many potential viewers will hesitate before committing 100 minutes of their time to watching the entirety of that recording. Even if they watch the playback at an accelerated speed, they will probably still prefer access to some kind of edited highlights.

Creating edited highlights of recordings of London Futurists events has long been a “wish list” item for me. I can appreciate that there’s a particular skill to identifying which parts should be selected for inclusion in any such summary. I’ll welcome suggestions on how to do this!
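
In the meantime, one low-tech possibility: once the highlight-worthy segments have been identified (still the hard, human part), assembling them is mechanical. Here is a rough sketch in Python that drives ffmpeg – assuming ffmpeg is installed, and with placeholder filenames and timestamps:

```python
import subprocess

# Hand-picked highlight segments (start, end) - placeholder timestamps, not real ones.
SEGMENTS = [("00:03:10", "00:05:40"), ("00:27:05", "00:29:30"), ("01:12:00", "01:14:20")]

def make_highlights(source, output):
    clips = []
    for i, (start, end) in enumerate(SEGMENTS):
        clip = f"clip{i}.mp4"
        # Cut each segment; "-c copy" avoids re-encoding, so cuts land on keyframes.
        subprocess.run(["ffmpeg", "-y", "-i", source, "-ss", start, "-to", end,
                        "-c", "copy", clip], check=True)
        clips.append(clip)
    # Join the clips with ffmpeg's concat demuxer.
    with open("clips.txt", "w") as f:
        f.writelines(f"file '{c}'\n" for c in clips)
    subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                    "-i", "clips.txt", "-c", "copy", output], check=True)

make_highlights("saturday_event.mp4", "highlights.mp4")
```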

Learning together

More than ever, what will determine our success or failure in coming to terms with the growing Covid-19 crisis is the extent to which positive collaboration and a proactive technoprogressive mindset can pull ahead of humanity’s more destructive characteristics.

That “race” was depicted on the cover of the ebook of essays published by London Futurists in June 2014, “Anticipating 2025”. Can we take advantage of our growing interconnectivity to spread, not dangerous pathogens or destructive “fake news”, but good insights about building a better future?

That was a theme that emerged time and again during our online event last Saturday.

I’ll draw this blogpost towards a close by sharing some excerpts from the opening chapter of Anticipating 2025.

Four overlapping trajectories

The time period up to 2025 can be considered as a race involving four overlapping trajectories: technology, crisis, collaboration, and mindset.

The first trajectory is the improvement of technology, with lots of very positive potential. The second, however, has lots of very negative potential: it is the growth in likelihood of societal crisis:

  • Stresses and strains in the environment, with increased climate chaos, and resulting disputes over responsibility and corrective action
  • Stresses and strains in the financial system, which share with the environment the characteristics of being highly complex, incompletely understood, weakly regulated, and subject to potential tipping points for fast-accelerating changes
  • Increasing alienation, from people who feel unable to share in the magnitude of the riches flaunted by the technologically fortunate; this factor is increased by the threats from technological unemployment and the fact that, whilst the mean household income continues to rise, the median household income is falling
  • Risks from what used to be called “weapons of mass destruction” – chemical, biological, or even nuclear weapons, along with cyber-weapons that could paralyse our electronics infrastructure; there are plenty of “angry young men” (and even angry middle-aged men) who seem ready to plunge what they see as a corrupt world into an apocalyptic judgement.

What will determine the outcome of this race, between technological improvement and growing risk of crises? It may be a third trajectory: the extent to which people around the world are able to collaborate, rather than compete. Will our tendencies to empathise, and to build a richer social whole, triumph over our equally deep tendencies to identify more closely with “people like us” and to seek the well-being of our “in-group” ahead of that of other groups?

In principle, we probably already have sufficient knowledge, spread around the world, to solve all the crises facing us, in a smooth manner that does not require any significant sacrifices. However, that knowledge is, as I said, spread – it does not cohere in just a single place. If only we knew what we knew. Nor does that knowledge hold universal assent – far from it. It is mocked and distorted and undermined by people who have vested interests in alternative explanations – with the vested interests varying among economic, political, ideological, and sometimes sheer human cussedness. In the absence of improved practical methods for collaboration, our innate tendencies to short-term expedience and point-scoring may rule the day – especially when compounded by an economic system that emphasises competition and “keeping up with the Joneses”.

Collaborative technologies such as Wikipedia and open-source software point the way to what should be possible. But they are unlikely to be sufficient, by themselves, to heal the divisions that tend to fragment human endeavours. This is where the fourth, and final, trajectory becomes increasingly important – the transformation of the philosophies and value systems that guide our actions.

If users are resolutely suspicious of technologies that would disturb key familiar aspects of “life as we know it”, engineers will face an uphill battle to secure sufficient funding to bring these technologies to the market – even if society would eventually end up significantly improved as a result.

Politicians generally take actions that reflect the views of the electorate, as expressed through public media, opinion polls, and (occasionally) the ballot box. However, the electorate is subject to all manner of cognitive bias, prejudice, and continuing reliance on rules of thumb which made sense in previous times but which have been rendered suspect by changing circumstances. These viewpoints include:

  • Honest people should put in forty hours of work in meaningful employment each week
  • People should be rewarded for their workplace toil by being able to retire around the age of 65
  • Except for relatively peripheral matters, “natural methods” are generally the best ones
  • Attempts to redesign human nature – or otherwise to “play God” – will likely cause disaster
  • It’s a pointless delusion to think that the course of personal decay and death can be averted.

In some cases, long-entrenched viewpoints can be overturned by a demonstration that a new technology produces admirable results – as in the case of IVF (in-vitro fertilisation). But in other cases, minds need to be changed even before a full demonstration can become possible.

It’s for this reason that I see the discipline of “culture engineering” as being equally important as “technology engineering”. The ‘culture’ here refers to cultures of humans, not cells. The ‘engineering’ means developing and applying a set of skills – skills to change the set of prevailing ideas concerning the desirability of particular technological enhancements. Both technology engineering and culture engineering are deeply hard skills; both need a great deal of attention.

A core part of “culture engineering” fits under the name “marketing”. Some technologists bristle at the concept of marketing. They particularly dislike the notion that marketing can help inferior technology to triumph over superior technology. But in this context, what do “inferior” and “superior” mean? These judgements are relative to how well technology is meeting the dominant desires of people in the marketplace.

Marketing means selecting, understanding, inspiring, and meeting key needs of what can be called “influence targets” – namely, a set of “tipping point” consumers, developers, and partners. Specifically, marketing includes:

  • Forming a roadmap of deliverables, that build, step-by-step, to delivering something of great benefit to the influence targets, but which also provide, each step of the way, something with sufficient value to maintain their active interest
  • Astutely highlighting the ways in which present (and forthcoming) products will, indeed, provide value to the influence targets
  • Avoiding any actions which, despite the other good things that are happening, alienate the influence targets; and in the event any such alienation emerges, taking swift and decisive action to address it.

Culture engineering involves politics as well as marketing. Politics means building alliances that can collectively apply power to bring about changes in regulations, standards, subsidies, grants, and taxation. Choosing the right partners, and carefully managing relationships with them, can make a big difference to the effectiveness of political campaigns. To many technologists, “politics” is as dirty a word as “marketing”. But once again, mastery of the relevant skillset can make a huge difference to the adoption of technologies.

The final component of culture engineering is philosophy – sets of arguments about fundamentals and values. For example, will human flourishing happen more fully under simpler lifestyles, or by more fully embracing the radical possibilities of technology? Should people look to age-old religious traditions to guide their behaviour, or instead seek a modern, rational, scientific basis for morality? And how should the freedoms of individuals to experiment with potentially dangerous new kinds of lifestyle be balanced against the needs of society as a whole?

“Philosophy” is (you guessed it) yet another dirty word, in the minds of many technologists. To these technologists, philosophical arguments are wastes of time. Yet again, I will disagree. Unless we become good at philosophy – just as we need to become good at both politics and marketing – we will fail to rescue the prevailing culture from its unhelpful mix of hostility and apathy towards the truly remarkable potential to use technology to positively transcend human nature. And unless that change in mindset happens, the prospects are uncertain for the development and adoption of the remarkable technologies of abundance mentioned earlier.

[End of extract from Anticipating 2025.]

How well have we done?

On the one hand, the contents of the 2014 London Futurists book “Anticipating 2025” are prescient. These chapters highlight many issues and opportunities that have grown in importance in the intervening six years.

On the other hand, I was brought down to earth by an email reply I received last week to the latest London Futurists newsletter:

I’m wondering where the Futurism is in this reaction.

Maybe the group is more aptly Reactionism.

I wanted to splutter out an answer: the group (London Futurists) has done a great deal of forward thinking over the years. We have looked at numerous trends and systems, and considered possible scenarios arising from extrapolations and overlaps. We have worked hard to clarify, for these scenarios, the extent to which they are credible and desirable, and ways in which the outcomes can be influenced.

But on reflection, a more sober thought emerged. Yes, we futurists have been trying to alert the rest of society to our collective lack of preparedness for major risks and major opportunities ahead. We have discussed the insufficient resilience of modern social systems – their fragility and lack of sustainability.

But have our messages been heard?

The answer is: not really. That’s why Covid-19 is causing such a dislocation.

It’s tempting to complain that the population as a whole should have been listening to futurists. However, we can also ask, how should we futurists change the way we talk about our insights, so that people pay us more attention?

After all, there are many worse crises potentially just around the corner. Covid-19 is by no means the most dangerous new pathogen that could strike humanity. And there are many other types of risk to consider, including malware spreading out of control, the destruction of our electronics infrastructure by something similar to the 1859 Carrington Event, an acceleration of chaotic changes in weather and climate, and devastating wars triggered by weapons systems overseen by AI software whose inner logic no-one understands.

It’s not just a new mindset that humanity needs. It’s a better way to have discussions about fundamentals – discussions about what truly matters.

Footnote: with thanks

Special thanks are due to the people who boldly stepped forward at short notice as panellists for last Saturday’s event, and to everyone else who contributed to that discussion. I’m sorry there was no time to give sufficient attention to many of the key points raised. As I said at the end of the recording, this is a kind of cliffhanger.

29 August 2014

Can technology bring us peace?

The summer months of 2014 have brought us a sickening surfeit of awful news. Our newsfeeds have been full of conflict, casualties, and brutalities in Iraq, Syria, Ukraine, Gaza, and so on. For example, just a couple of days ago, my browser screamed at me, “Let’s be clear about this: Russia is invading Ukraine right now”. And my TV has just informed me that the UK’s terror threat level is being raised from “substantial” to “severe”:

The announcement comes amid increasing concern about hundreds of UK nationals who are believed by security services to have travelled to fight in Iraq and Syria.

These real-world conflicts have been giving rise to online mirror conflicts among many of the people that I tend to respect. These online controversies play out as heated disputes about the rights and wrongs of the various participants in the real-world battles. Arguments ding-dong ferociously: What is the real reason the MH17 plane was shot down? How disproportionate is the response by Israel to provocations from Hamas? How much is Islamic belief to blame for the barbarism of the self-proclaimed Islamic State? Or is the US to blame, on account of its ill-advised meddling in far-off lands? And how fair is it to compare Putin to Hitler?

But at a recent informal pub gathering of London Futurists, one of the long-time participants in these meetups, Andrius Kasparavicius, asked a hard question. Shouldn’t those of us who believe in the transformational potential of new technology – those of us who dare to call ourselves technoprogressives, transhumanists, or social futurists – have a better answer to these conflict flashpoints? Rather than falling back into twentieth-century diatribes against familiar bête noire villains, isn’t it worth striving to find a 21st-century viewpoint that transcends such rivalries? We talk a lot about innovation: can’t we be innovative about solving these global flashpoints?

A similar thought gnawed at me a few weeks later, during a family visit to Inverness. A local production of West Side Story was playing at the Eden Court theatre. Bernstein’s music was exhilarating. Sondheim’s lyrics were witty and provocative. The cast shimmied and slunk around the stage. From our vantage point in the second row of seats, we could see all the emotions flit across the faces of the performers. The sudden tragic ending hit hard. And I thought to myself: These two gangs, the Jets and the Sharks, were locked into a foolish, needless struggle. They lacked an adult, future perspective. Isn’t it the same with the tragic conflicts that occupy our newsfeeds? These conflicts have their own Jets and Sharks, and, yes, a lack of an adult, future perspective. Can’t they see the better future which is within our collective grasp, if only they can cast aside their tribal perspectives?

That thought was soon trumped by another: the analogy is unfair. Some battles are worth fighting. For example, if we take no action against Islamic State, we shouldn’t be surprised if there’s an even worse spate of summary beheadings, forced conversions, women being driven into servitude roles in societies all over the Middle East, and terrorist strikes throughout the wider world.

But still… isn’t it worth considering possible technological, technoprogressive, or transhumanist approaches to peace?

  • After all, we say that technology changes everything. History is the story of the continual invention and enhancement of tools, machines, and devices of numerous sorts, which transform human experience in all fields of life.
  • Indeed, human progress has taken place by the discovery and mastery of engineering solutions – such as fire, the wheel, irrigation, sailing ships, writing, printing, the steam engine, electricity, domestic kitchen appliances, railways and automobiles, computers and the Internet, plastics, vaccinations, anaesthetic, contraception, and better hygiene.
  • What’s more, the rate of technological change is increasing, as larger numbers of engineers, scientists, designers, and entrepreneurs from around the globe participate in a rich online network exchange of ideas and information. Forthcoming technological improvements can propel human experience onto an even higher plane – with our minds and bodies both being dramatically enhanced.
  • So shouldn’t the further development of technology give us more options to achieve lasting resolution of global flashpoints?

Therefore I have arranged an online hangout discussion meeting: Global flashpoints: what do transhumanists have to say? This will be taking place at 7pm UK time this Sunday, 31st August. The corresponding YouTube video page (for people who prefer not to log into Google+ in order to view the Hangout that way) is here. I’ll be joined in this discussion by a number of thinkers from different transhumanist perspectives, based around Europe.

I’ve put a plaintive note on the meeting invite:

In our discussion, we’ll try to transcend the barbs and scapegoating that fill so much of existing online discussion about Iraq/Syria/Ukraine/Gaza/etc.

I honestly don’t know how the discussion is going to unfold. But here are some possible lines of argument:

  1. Consider the flashpoint in Ferguson, Missouri, after the shooting dead of teenager Michael Brown. That particular conflict arose, in part, because of disputes over what actually happened at the time of the shooting. But if the police in Ferguson had all been wearing and operating personal surveillance cameras, then perhaps a lot of the heat would have gone out of the issue. That would be one example of taking advantage of recent improvements in technology in order to defuse a potential conflict hotspot.
  2. Much conflict is driven by people feeling a sense of profound alienation from mainstream culture. Disaffected youths from all over Europe are leaving their families behind to travel to support fundamentalist Islamic causes in the middle east. They need a much better vision of the future, to reduce the chance that they will fall prey to these particular mind viruses. Could social futurism, technoprogressivism, and transhumanism offer that alternative vision?
  3. Rather than technology helping to create peace, there’s a major risk it will help to worsen conflicts. Powerful arsenals in the hands of malcontents are likely to have a more horrific impact nowadays – and an even worse one in the near future – than corresponding weaponry had in the past. Think also of the propaganda value of Islamic State execution videos distributed via YouTube – that kind of effect was unthinkable just a decade ago.

Of these three lines of discussion, I am most persuaded by the third one. The implications are as follows. The message that we social futurists and transhumanists should be highlighting, in response to these outrages, is, sadly, “You ain’t seen nothing yet”. There are existential risks ahead that will demand very serious collective action to solve. In that case, it’s even more imperative that the global community gets its act together, and finds a more effective way to resolve the conflicts in our midst.

At the same time, we do need to emphasise the positive vision of where the world could reach in, say, just a few decades: a world with enormous abundance, fuelled by new technologies (nanotech, solar energy, rejuvenation biotech, ubiquitous smart robots) – a world that will transcend the aspirations of all existing ideologies. If we can make the path to this future more credible, there’s good reason to hope that people all over the world will set aside their previous war-like tendencies, tribal loyalties, and dark age mythologies.

 

23 January 2014

The future of learning and the future of climate change

Filed under: climate change, collaboration, education — David Wood @ 6:52 pm

Yesterday, I spent some time at the BETT show in London’s ExCeL centre. BETT describes itself as:

the world’s leading event for learning technology for education professionals…  dedicated to showcasing the best in UK and international learning technology products, resources, and best practice… in times where modern learning environments are becoming more mobile and ‘learning anywhere’ is more of a possibility.

I liked the examples that I saw of increasing use of Google Apps in education, particularly on Chrome Books. These examples were described by teachers who had been involved in trials, at all levels of education. The teachers had plenty of heart-warming stories of human wonderment, of pupils helping each other, and of technology taking a clear second place to learning.

I was also impressed to hear some updates about the use of MOOCs – “Massive open online courses”. For example, I was encouraged by what I heard at BETT about the progress of the UK-based FutureLearn initiative.

As Wikipedia describes FutureLearn,

FutureLearn is a massive open online course (MOOC) platform founded in December 2012 as a company majority owned by the UK’s Open University. It is the first UK-led massive open online course platform, and as of October 2013 had 26 University partners and – unlike similar platforms – includes three non-university partners: the British Museum, the British Council and the British Library.

Among other things, my interest in FutureLearn was to find out if similar technology might be used, at some stage, to help raise better awareness of general futurist topics, such as the Technological Singularity, Radical Life Extension, and Existential Risks – the kind of topics that feature in the Hangout On Air series that I run. I remain keen to develop what I’ve called “London Futurists Academy”. Could a MOOC help here?

I resolved that it was time for me to gain first-hand experience of one of these systems, rather than just relying on second-hand experience from other people.


I clicked on the FutureLearn site to see which courses might be suitable for me to join. I was soon avidly reading the details of their course Climate change: challenges and solutions:

This course aims to explain the science of climate change, the risks it poses and the solutions available to reduce those risks.

The course is aimed at the level of students entering university, and seeks to provide an inter-disciplinary introduction to what is a broad field. It engages a number of experts from the University of Exeter and a number of partner organisations.

The course will set contemporary human-caused climate change within the context of past natural climate variability. Then it will take a risk communication approach, balancing the ‘bad news’ about climate change impacts on natural and human systems with the ‘good news’ about potential solutions. These solutions can help avoid the most dangerous climate changes and increase the resilience of societies and ecosystems to those climate changes that cannot be avoided.

The course lasts eight weeks, and is described as requiring about three hours of time every week. Participants take part entirely from their own laptop. There is no fee to join. The course material is delivered via a combination of videos (with attractive graphics), online documents, and quizzes and tests. Participants are also encouraged to share some of their experiences, ideas, and suggestions via the FutureLearn online social network.

For me, the timing seemed almost ideal. The London Futurists meetup last Saturday had addressed the topic of climate change. There’s an audio recording of the event here (it lasts just over two hours). The speaker, Duncan Clark, was excellent. But discussion at the event (and subsequently continued online) confirmed that there remain lots of hard questions needing further analysis.

I plan to invite other speakers on climate change topics to forthcoming London Futurists events, but in the meantime, this FutureLearn course seems like an excellent opportunity for many people to collectively deepen their knowledge of the overall subject.

I say this after having worked my way through the material for the first week of the course. I can’t say I learnt anything surprising, but the material was useful background to many of the discussions that I keep getting involved in. It was well presented and engaging. I paid careful attention, knowing there would be an online multiple choice test at the end of the week’s set of material. A couple of the questions in the test needed me to think quite carefully before answering. After I answered the final question, I was pleased to see the following screen:

[Week 1 result]

It’s fascinating to read online the comments from other participants in the course. It looks like over 1,700 people have completed the first week’s material. Some of the participants are aged in their 70s or 80s, and it’s their first experience with computer learning.

There hasn’t been much controversy in the first week’s topics. One part straightforwardly explained the reasons why the observed changes in global temperature over the last century cannot be attributed to changes in solar radiation, even though changes in solar radiation could be responsible for the “Little Ice Age” between 1550 and 1850. That part, like all the other material from the first week, seemed completely fair and objective to me. I look forward to the subsequent sections.

I said that the timing of the course was almost ideal. However, it started on the 13th of January, and FutureLearn only allow people to join the course for up to 14 days after the official start date.

That means if any readers of this blog wish to follow my example and enrol in this course too, you’ll have to do so by this Sunday, the 26th of January.

I do hope that other people join the course, so we can compare notes, as we explore pathways to improved collaborative learning.

PS for my overall thoughts on climate change, see some previous posts in this blog, such as “Six steps to climate catastrophe” and “Risk blindness and the forthcoming energy crash”.

13 January 2014

Six steps to climate catastrophe

In a widely read Rolling Stone article from July 2012, “Global Warming’s Terrifying New Math”, Bill McKibben introduced what he called

Three simple numbers that add up to global catastrophe.

The three numbers are as follows:

  1. 2 degrees Celsius – the threshold of average global temperature rise “which scientists (and recently world leaders at the G8 summit) have agreed we must not cross, for fear of triggering climate feedbacks which, once started, will be almost impossible to stop and will drive accelerated warming out of our control”
  2. 565 Gigatons – the amount of carbon dioxide that can be added into the atmosphere by mid-century with still an 80% chance of the temperature rise staying below two degrees
  3. 2,795 Gigatons – “the amount of carbon already contained in the proven coal and oil and gas reserves of the fossil-fuel companies, and the countries (think Venezuela or Kuwait) that act like fossil-fuel companies. In short, it’s the fossil fuel we’re currently planning to burn”.

As McKibben highlights,

The key point is that this new number – 2,795 – is higher than 565. Five times higher.

He has a vivid metaphor to drive his message home:

Think of two degrees Celsius as the legal drinking limit – equivalent to the 0.08 blood-alcohol level below which you might get away with driving home. The 565 gigatons is how many drinks you could have and still stay below that limit – the six beers, say, you might consume in an evening. And the 2,795 gigatons? That’s the three 12-packs the fossil-fuel industry has on the table, already opened and ready to pour.

We have five times as much oil and coal and gas on the books as climate scientists think is safe to burn. We’d have to keep 80 percent of those reserves locked away underground to avoid that fate. Before we knew those numbers, our fate had been likely. Now, barring some massive intervention, it seems certain.

He continues,

Yes, this coal and gas and oil is still technically in the soil. But it’s already economically above ground – it’s figured into share prices, companies are borrowing money against it, nations are basing their budgets on the presumed returns from their patrimony. It explains why the big fossil-fuel companies have fought so hard to prevent the regulation of carbon dioxide – those reserves are their primary asset, the holding that gives their companies their value. It’s why they’ve worked so hard these past years to figure out how to unlock the oil in Canada’s tar sands, or how to drill miles beneath the sea, or how to frack the Appalachians.
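
The arithmetic behind McKibben’s alarm is easy to verify for yourself – a quick check (here in Python):

```python
safe_budget = 565       # gigatons of CO2 for an 80% chance of staying under 2 degrees C
proven_reserves = 2795  # gigatons of CO2 in proven fossil-fuel reserves

print(proven_reserves / safe_budget)      # ~4.9: roughly "five times higher"
print(1 - safe_budget / proven_reserves)  # ~0.80: the share that must stay underground
```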

The burning question


A version of Bill McKibben’s Global Warming’s Terrifying New Math essay can be found as the foreword to the recent book “The Burning Question” co-authored by Duncan Clark and Mike Berners-Lee. The subtitle of the book has a somewhat softer message than in the McKibben essay:

We can’t burn half the world’s oil, coal, and gas. So how do we quit?

But the introduction makes it clear that constraints on our use of fossil fuel reserves will need to go deeper than “one half”:

Avoiding unacceptable risks of catastrophic climate change means burning less than half of the oil, coal, and gas in currently commercial reserves – and a much smaller fraction of all the fossil fuels under the ground…

Notoriously, climate change is a subject that is embroiled in controversy and intemperance. The New York Times carried an opinion piece, “We’re All Climate-Change Idiots” containing this assessment from Anthony Leiserowitz, director of the Yale Project on Climate Change Communication:

You almost couldn’t design a problem that is a worse fit with our underlying psychology.

However, my assessment of the book “The Burning Question” by Berners-Lee and Clark is that it is admirably objective and clear. That impression was reinforced when I saw Duncan Clark speak about the contents of the book at London’s RSA a couple of months ago. On that occasion, the meeting was constrained to less than an hour, for both presentation and audience Q&A. It was clear that the speaker had a lot more that he could have said.

I was therefore delighted when he agreed to speak on the same topic at a forthcoming London Futurists event, happening in Birkbeck College from 6.15pm to 8.30pm on Saturday 18th January. You can find more details of the London Futurists event here. Following our normal format, we’ll have a full two hours of careful examination of the overall field.

Six steps to climate catastrophe

One way to examine the risks of climate catastrophe induced by human activity is to consider the following six-step chain of cause and effect:

  1. Population – the number of people on the earth
  2. Affluence – the average wealth of people on the earth
  3. Energy intensity – the average amount of energy used to create a unit of wealth
  4. Carbon intensity – the average carbon emissions caused by each unit of energy
  5. Temperature impact – the average increase of global temperature caused by carbon emissions
  6. Global impact – the broader impact on life on earth caused by increased average temperature.

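The first four steps multiply together to give total emissions – this chain is essentially the well-known Kaya identity. A toy calculation, with deliberately made-up round numbers, shows how the factors combine (steps 5 and 6, by contrast, involve complex climate dynamics rather than simple multiplication):

```python
# Illustrative round numbers only - not real-world values.
population       = 7.0e9     # people (step 1)
affluence        = 10_000    # dollars of GDP per person per year (step 2)
energy_intensity = 5.0e6     # joules of energy per dollar of GDP (step 3)
carbon_intensity = 6.0e-11   # tonnes of CO2 per joule of energy (step 4)

emissions = population * affluence * energy_intensity * carbon_intensity
print(f"{emissions / 1e9:.0f} gigatonnes of CO2 per year")  # ~21 Gt with these numbers
```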

As Berners-Lee and Clark discuss in their book, there’s scope to debate, and/or to alter, each of these causal links. Various commentators recommend:

  • A reduction in the overall human population
  • Combatting society’s deep-seated imperatives to pursue economic growth
  • Achieving greater affluence with less energy input
  • Switching to energy sources (such as “renewables”) with reduced carbon emissions
  • Seeing (or engineering) different causes that complicate the relation between carbon emissions and temperature rises
  • Seeing (or engineering) beneficial aspects to global increases in temperature, rather than adverse ones.

What they point out, however, is that despite significant progress to reduce energy intensity and carbon intensity, the other factors seem to be increasing out of control, and dominate the overall equation. Specifically, affluence shows no signs of decreasing, especially when the aspirations of huge numbers of people in emerging economies are taken into consideration.

I see this as an argument to accelerate work on technical solutions – further work to reduce the energy intensity and carbon intensity factors. I also see it as an argument to rapidly pursue investigations of what Berners-Lee and Clark call “Plan B”, namely various forms of geoengineering. This extends beyond straightforward methods for carbon capture and storage, and includes possibilities such as

  • Trying to use the oceans to take more carbon dioxide out of the air and store it in an inert form
  • Screening some of the incoming heat from the sun, by, for example, creating more clouds, or injecting aerosols into the upper atmosphere.

But Berners-Lee and Clark remain apprehensive about one overriding factor. This is the one described earlier: the fact that so much investment is tied up in the share prices of oil companies that assume that huge amounts within the known reserves of fossil fuels will all be burnt, relatively soon. Providing better technical fixes will, they argue, be insufficient to stop the ongoing juggernaut of conversion from fossil fuels into huge cash profits for industry – a juggernaut with the side-effect of accumulated carbon emissions that increase the risk of horrendous climate consequences.

For this reason, they see the need for concerted global action to ensure that the prices being paid for the acquisition and/or consumption of fossil fuels fully take into account the downside costs to the global environment. This will be far from easy to achieve, but the book highlights some practical steps forwards.

Waking up

The first step – as so often, in order to succeed in a complex change project – is to engender a sustained sense of urgency. Politicians won’t take action unless there is strong public pressure for action. This public pressure won’t exist whilst people remain in a state of confusion, disinterest, dejection, and/or helplessness. Here’s an extract from near the end of their book:

It’s crucial that more people hear the simple facts loud and clear: that climate change presents huge risks, that our efforts to solve it so far haven’t worked, and that there’s a moral imperative to constrain unabated fossil fuel use on behalf of current and especially future generations.

It’s often assumed that the world isn’t ready for this kind of message – that it’s too negative or scary or confrontational. But reality needs facing head on – and anyhow the truth may be more interesting and inspiring than the watered down version.

I expect many readers of this blogpost to have questions in their minds – or possibly objections (rather than just questions) – regarding at least some of what’s written above. This topic deserves a 200-page book rather than just a short blogpost.

Rather than just urging people to read the book in question, I have set up the London Futurists event previously mentioned. I am anticipating robust but respectful in-depth discussion.

Beyond technology

One possible response is that the acceleration of technological solutions will deliver sufficient solutions (e.g. reducing energy intensity and carbon intensity) long before we need to worry about the climate reaching any tipping point. Solar energy may play a decisive role – possibly along with new generations of nuclear power technology.

That may turn out to be true. But my own engineering experience with developing complex technological solutions is that the timetable is rarely something that anyone can be confident about in advance. So yes, we need to accelerate the technology solutions. But equally, as an insurance policy, we need to take actions that will buy ourselves more time, in order for these technological solutions to come to full fruition. This insurance policy inevitably involves the messy worlds of politics and economics, alongside the developments that happen in the technological arena.

This last message comes across uncomfortably to people who dislike any idea of global coordinated action in politics or economics. People who believe in “small government” and “markets as free as possible” don’t like to contemplate global scale political or economic action. That is, no doubt, another reason why the analysis of global warming and climate change is such a contentious issue.

20 December 2013

Kick-starting the future – less than 24 hours to go

Filed under: Anticipating 2025, collaboration, communications, futurist — David Wood @ 10:18 am

By chance, two really interesting projects, both seeking support on the crowd-funding site Kickstarter, are coming to their conclusions in the next 24 hours.

They’re both well worth a look.


shift 2020 is a collaborative book about how technology will impact our future. The book is curated by Rudy De Waele and designed by Louise Campbell.

As Rudy explains,

The idea of shift 2020 is based upon Mobile Trends 2020, a collaborative project I launched early 2010. It’s one of the highest viewed decks on Slideshare (in the Top 50 of All Time in Technology / +320k views). Reviewing the document a couple of weeks ago, I realised the future is catching up on us much faster than many of the predictions that were made. I thought it was time to ask the original contributors for an update on their original predictions and new foresights for the year 2020.

The list of authors is extensive. I would copy out all the names here, but urge you to click on the links to see the full list.

My own set of five predictions from early 2010 that I submitted to Rudy’s earlier project Mobile Trends 2020 seems to be holding up well for fulfilment by 2020 (if not sooner). See slide 36 of the 2010 presentation:

  1. Mobiles manifesting AI – fulfilling, at last, the vision of “personal digital assistants”
  2. Powerful, easily wearable head-mounted accessories: audio, visual, and more
  3. Mobiles as gateways into vivid virtual reality – present-day AR is just the beginning
  4. Mobiles monitoring personal health – the second brains of our personal networks
  5. Mobiles as universal remote controls for life – a conductor’s baton as much as a viewing portal.


I’ve added some extra content for shift 2020, but that’s embargoed for now!

People who give financial support via Kickstarter to shift 2020 have lots of options to consider. For example, a pledge of £13 will deliver you the following:

NO FRILLS PAPERBACK (UK SHIPPING).
shift 2020 – black and white printing on cream-coloured paper with a full-colour soft cover (5×8 in / 13×20 cm) + 80 pages, specially designed for business travellers, printed by blurb.com. Shipping costs included.

Estimated delivery: Jan 2014; Ships within the UK only

And £60 will deliver this:

PERSONAL NAME IN THE BOOK (PRINT VERSION).
shift 2020 nicely designed quality Hardcover, ImageWrap Standard Landscape 10×8 (25×20 cm) +80 pages Photo Book printed by blurb.com on Premium Semi Matt Paper, including a mention of your (personal) name in the acknowledgements page.

Estimated delivery: Jan 2014
Add £5 to ship outside the UK

Whereas shift 2020 seeks funding to support book publication, PostHuman seeks funding to support a series of videos about transhumanism.

The “BIOPS” team behind this campaign have already created one first-class video:

The first video by the British Institute of Posthuman Studies (BIOPS), entitled “PostHuman: An Introduction To Transhumanism”, investigates three dominant areas of transhumanist thought: super longevity, super intelligence and super wellbeing. It covers the ideas of Aubrey de Grey, Ray Kurzweil and David Pearce.

I’ll let the BIOPS team tell their story:

Writers Marco Vega and Peter Brietbart (that’s us!) have shared a passion for philosophy since we first met at Sussex University five years ago. Over time, we became frustrated with the classical, removed armchair philosophy, and began to look for philosophically sophisticated ideas with real human impact. Transhumanism stood out as a practical, far-seeing, radical and urgent field, informed by science and guided by moral philosophy.

We soon realised that our philosophy buddies and lecturers had barely heard of it, though the ideas involved were exciting and familiar. The problem for us is that even though transhumanism is incredibly relevant, it’s practically invisible in mainstream thought.

Influenced by YouTubers like QualiaSoup, 3vid3nc3, CGPGrey, RSA Animate, TheraminTrees, Vsauce, CrashCourse and many more, we saw that complex ideas can be made accessible, entertaining and educational.

Our dream is to make this project – the culmination of five years of thought, reflection and research – a reality.

We’ve just released the first video – PostHuman: An Introduction to Transhumanism. We made it over the course of a year, in volunteered time, paid with favours and fuelled by enthusiasm. Now we need your help to keep going…

In the year 2014, we want to write, produce and release at least 6 more fully animated episodes. We’ll investigate a range of different transhumanist themes, consider their arguments in favour, highlight our greatest worries, and articulate what we perceive to be the most significant implications for humanity.

We’re worried that such critical topics and concepts are not getting the coverage they need. Our aim for the video series is to bring awareness to the most important conversation humanity needs to be having, and to do it in a way that’s accessible, balanced and educational.

In addition to animating the ideas and concepts, we also want to seek out and challenge influential transhumanist thinkers. We’ll record the interviews, and include the highlights at the end of the videos.

We’re looking to raise £65,000 to allow the production crew to make this happen.

I’m delighted that Marco Vega and Peter Brietbart of BIOPS will be among the speakers at the Anticipating 2025 event I’m holding at Birkbeck College on 22-23 March.

I wish both shift 2020 and PostHuman the best of luck with their fundraising and delivery!

30 September 2013

Questions about Hangouts on Air

Filed under: collaboration, Google, Hangout On Air, intelligence — David Wood @ 11:05 pm

I’m still learning about how to get the best results from Google Hangouts On Air – events that are broadcast live over the Internet.

On Sunday, I hosted a Hangout On Air which ran pretty well. However, several features of the experience were disappointing.

Here, I’m setting aside questions about what the panellists said. It was a fascinating discussion, but in this blogpost, I want to ask some questions, instead, about the technology involved in creating and broadcasting the Hangout On Air. That was the disappointing part.

If anyone reading this can answer my questions, I’ll be most grateful.

If you take a quick look at the beginning of the YouTube video of the broadcast, you’ll immediately see the first problem I experienced:

The problem was that the video uplink from my own laptop didn’t get included in the event. Instead of the video I thought I was contributing, the event just showed my G+ avatar (a static picture of my face). That was in contrast to the situation for the other four participants.

When I looked at the Hangout On Air window on my laptop as I was hosting the call, it showed me a stream of images recorded by my webcam. It also showed, at other times, slides which I was briefly presenting. That’s what I saw, but no-one else saw it. None of these displays made it into the broadcast version.

Happily, the audio feed from my laptop did reach the broadcast version. But not the video.

As it happens, I think that particular problem was “just one of those things”, which happen rarely, and in circumstances that are difficult to reproduce. I doubt this problem will recur in this way, the next time I do such an event. I believe that the software system on my laptop simply got itself into a muddle. I saw other evidence for the software being in difficulty:

  • As the event was taking place, I got notifications that people had added me to their G+ circles. But when I clicked on these notifications, to consider reciprocally adding these people into my own circles, I got an error message, saying something like “Cannot retrieve circle status info at this time”
  • After the event had finished, I tried to reboot my laptop. The shutdown hung, twice. First, it hung with a most unusual message, “Waiting for explorer.exe – playing logoff sound”. Second, after I accepted the suggestion from the shutdown dialog to close down that app regardless, the laptop hung indefinitely in the final “shutting down” display. In the end, I pressed the hardware reset button.

That muddle shouldn’t have arisen, especially as I had taken the precaution of rebooting my laptop some 30 minutes before the event was due to start. But it did. What made things worse was that I only became aware of this issue once the Hangout had already started its broadcast phase.

At that time, the other panellists told me they couldn’t see any live video from my laptop. I tried various quick fixes (e.g. switching my webcam off and on), but to no avail. I also wondered whether I was suffering from a local bandwidth restriction, but I had reset my broadband router 30 minutes before the call started, and I was the only person in my house at that time.

“Exit the hangout and re-enter it” was the next suggestion offered to me. Maybe that would fix things.

But this is where I see a deeper issue with the way Hangouts On Air presently work.

From my experience (though I’ll be delighted if people can tell me otherwise), when the person who started the Hangout On Air exits the event, the whole event shuts down. It’s therefore different from if any of the other panellists exits and rejoins. The other panellists can exit and rejoin without terminating the event. Not so for the host.

By the time I found out about the video uplink problem, I had already published the URL of where the YouTube of the Hangout would be broadcast. After starting the Hangout On Air (but before discovering the problem with my video feed), I had copied this URL to quite a few different places on social media – Meetup.com, Facebook, etc. I knew that people were already watching the event. If I exited the Hangout, to see if that would get the video uplink working again, we would have had to start a new Hangout, which would have had a different YouTube URL. I would have had to manually update all these social networking pages.

I can imagine two possible solutions – but I don’t think either is available yet, right? (I sketch the second idea after the list below.)

  1. There may be a mechanism for the host to leave the Hangout On Air, without that Hangout terminating
  2. There may be a mechanism for something like a URL redirector to work, even for a second Hangout instance, which replaces a previous instance. The same URL would work for two different Hangouts.
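
To illustrate that second idea: the stable, publishable URL could point at a tiny redirect service, which the host updates whenever a Hangout is restarted under a new YouTube URL. Here is a minimal sketch using only Python’s standard library – the port and URLs are placeholders, and a real service would need a secure way for the host to update the target:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Updated by the host whenever the Hangout restarts under a new YouTube URL.
CURRENT_BROADCAST_URL = "https://www.youtube.com/watch?v=PLACEHOLDER"

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Send anyone visiting the stable URL on to wherever the broadcast now lives.
        self.send_response(302)
        self.send_header("Location", CURRENT_BROADCAST_URL)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), RedirectHandler).serve_forever()
```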

Incidentally, in terms of URLs for the Hangout, note that there are at least three different such URLs:

  1. The URL of the “inside” of the Hangout, which the host can share with panellists to allow them to join it
  2. The URL of the Google+ window where the Hangout broadcast runs
  3. The URL of the YouTube window where the Hangout broadcast runs.

As far as I know, all three URLs change when a Hangout is terminated and restarted. What’s more, #1 and #3 are created when the Hangout starts, even before it switches into Broadcast mode, whereas #2 is only available when the host presses the “Start broadcasting” button.

In short, it’s a pretty complicated state of affairs. I presume that Google are hard at work to simplify matters…

To look on the positive side, one outcome that I feared (as I mentioned previously) didn’t come to pass. That outcome was my laptop over-heating. Instead, according to the CPU temperature monitor widget that I run on my laptop, the temperature remained comfortable throughout (reaching the 70s Centigrade, but staying well short of the 100 degree value which triggers an instant shutdown). I imagine that, because no video uplink was taking place, there was no strong CPU load on my laptop. I’ll have to wait to see what happens next time.

After all, over-heating is another example of something that might cause a Hangout host to want to temporarily exit the Hangout, without bringing the whole event to a premature end. There are surely other examples as well.

27 September 2013

Technology for improved collaborative intelligence

Filed under: collaboration, Hangout On Air, intelligence, Symbian — David Wood @ 1:02 pm

Interested in experiences in using Google Hangout On Air, as a tool to improve collaborative intelligence? Read on.

Google’s PageRank algorithm. The Wikipedia editing process. Ranking of reviewers on Amazon.com. These are all examples of technology helping to elevate useful information above the cacophony of background noise.

To be clear, in such examples, insight doesn’t just come from technology. It comes from a combination of good tools plus good human judgement – aided by processes that typically evolve over several iterations.
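
To give a flavour of the “good tools” half of that combination, here is a minimal power-iteration sketch of PageRank – illustrative only, and far simpler than anything Google actually runs:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / (len(outlinks) or n)
            for target in (outlinks or pages):  # dangling pages share rank with everyone
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Three pages linking to each other; the most linked-to page ends up ranked highest.
print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
```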

For London Futurists, I’m keen to take advantage of technology to accelerate the analysis of radical scenarios for the next 3-40 years. One issue is that the general field of futurism has its own fair share of background noise:

  • Articles that are full of hype or sensationalism
  • Articles motivated by commercial concerns, with questionable factual accuracy
  • Articles intended for entertainment purposes, but which end up overly influencing what people think.

Lots of people like to ramp up the gas while talking about the future, but that doesn’t mean they know what they’re talking about.

I’ve generally been pleased with the quality of discussion in London Futurists real-life meetings, held (for example) in Birkbeck College, Central London. The speaker contributions in these meetings are important, but the audience members collectively raise a lot of good points too. I do my best to ‘referee’ the discussions, so that a range of opinions can be aired. But there have been three main limitations with these meetups:

  1. Meetings often come to an end well before we’ve got to the bottom of some of the key lines of discussion
  2. The insights from individual meetings can sometimes fail to be taken forward into subsequent meetings – where the audience members are different
  3. Attendance is limited to people who live near to London, and who have no other commitments when the meetup is taking place.

These limitations won’t disappear overnight, but I have plans to address them in stages.

I’ve explained some of my plans in the following video, which is also available at http://londonfuturists.com/2013/08/30/introducing-london-futurists-academy/.

As the video says, I want to be able to take advantage of the same kind of positive feedback cycles that have accelerated the progress of technology, in order to accelerate in a similar way the generation of reliable insight about the future.

As a practical step, I’m increasingly experimenting with Google Hangouts, as a way to:

  • Involve a wider audience in our discussions
  • Preserve an online record of the discussions
  • Find out, in real-time, which questions the audience collectively believes should be injected into a conversation.

In case it helps others who are also considering the usage of Google Hangouts, here’s what I’ve found out so far.

The Hangouts are a multi-person video conference call. Participants have to log in via one of their Google accounts. They also have to download an app, inside Google Plus, before they can take part in the Hangout. Google Plus will prompt them to download the app.

The Hangout system comes with its own set of plug-in apps. For example, participants can share their screens, which is a handy way of showing some PowerPoint slides that back up a point you are making.

By default, the maximum number of attendees is 10. However, if the person who starts the Hangout has a corporate account with Google (as I have, for my company Delta Wisdom), that number can increase to 15.

For London Futurists meetings, instead of a standard “Hangout”, I’m using “Hangouts On Air” (sometimes abbreviated as ‘HOA’). These are started from within their own section of the Google Plus page:

  • The person starting the call (the “moderator”) creates the session in a “pre-broadcast” state, in which he/she can invite a number of participants
  • At this stage, the URL is generated, for where the Hangout can be viewed on YouTube; this vital piece of information can be published on social networking sites
  • The moderator can also take some other pre-broadcast steps, such as enabling the “Questions” app (further mentioned below)
  • When everyone is ready, the moderator presses the big red “Start broadcast” button
  • A wide audience is now able to watch the panellists’ discussion via the YouTube URL, or on the Google Plus page of the moderator.

For example, there will be a London Futurists HOA this Sunday, starting 7pm UK time. There will be four panellists, plus me. The subject is “Projects to accelerate radical healthy longevity”. The details are here. The event will be visible on my own Google Plus page, https://plus.google.com/104281987519632639471/posts. Note that viewers don’t need to be included in any of the Circles of the moderator.

As the HOA proceeds, viewers typically see the current speaker at the top of the screen, along with the other panellists in smaller windows below. The moderator has the option to temporarily “lock” one of the participants into the top area, so that their screen has prominence at that time, even though other panellists might be speaking.

It’s good practice for panellists to mute their microphones when they’re not speaking. That kind of thing is useful for the panellists to rehearse with the moderator before the call itself (perhaps in a brief preview call several days earlier), in order to debug connectivity issues, the installation of apps, camera positioning, lighting, and so forth. Incidentally, it’s best if there’s a source of lighting in front of the speaker, rather than behind.

How does the audience get to interact with the panellists in real-time? Here’s where things become interesting.

First, anyone watching via YouTube can place text comments under the YouTube window. These comments are visible to the panellists:

  • Either by keeping an eye on the same YouTube window
  • Or, simpler, within the “Comment Tracker” tab of the “Hangout Toolbox” app that is available inside the Hangout window.

However, people viewing the HOA via Google Plus have a different option. Provided the moderator has enabled this feature before the start of the broadcast, viewers will see a big button inviting them to ask a question, in a text box. They will also be able to view the questions that other viewers have submitted, and to give a ‘+1’ thumbs up endorsement.

In real-time, the panellists can see this list of questions appear on their screens, inside the Hangout window, along with an indication of how many ‘+1’ votes they have received. Ideally, this will help the moderator to pick the best question for the panel to address next. It’s a small step in the direction of greater collaborative intelligence.
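
The ranking logic behind such a feature is simple enough to express in a few lines. The following is my own toy model, not Google’s code: the crowd’s ‘+1’ votes produce an ordering the moderator can scan at a glance, with earlier submissions winning ties.

    # Toy model of crowd-ranked questions: highest '+1' count first,
    # earliest-submitted first as a tie-breaker.
    from dataclasses import dataclass, field
    from itertools import count

    _submission_order = count()

    @dataclass
    class Question:
        text: str
        votes: int = 0
        order: int = field(default_factory=lambda: next(_submission_order))

    def best_questions(questions):
        return sorted(questions, key=lambda q: (-q.votes, q.order))

    qs = [Question("What about battery life?", votes=3),
          Question("When will this reach consumers?", votes=7),
          Question("Which standards apply?", votes=7)]
    for q in best_questions(qs):
        print(q.votes, q.text)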

At the time of writing, I don’t think there’s an option for viewers to downvote each other’s questions. However, there is an option to declare that a question is spam. I expect the Google team behind HOA will be making further enhancements before long.

This Questions app is itself an example of how the Google HOA technology is improving. The last time I ran a HOA for London Futurists, the Questions app wasn’t available, so we just used the YouTube comments mechanism. One of the panellists for that call, David Orban, suggested I should look into another tool, called Google Moderator, for use on a subsequent occasion. I took a look, and liked what I saw, and my initial announcement of my next HOA (the one happening on Sunday) mentioned that I would be using Google Moderator. However, as I said, technology moves on quickly. Giulio Prisco drew my attention to the recently announced Questions feature of the HOA itself – a feature that had previously been in restricted test usage, but which is now available to all users of HOA. So we’ll be using that instead of Google Moderator (which is a rather old tool, without any direct connection into the Hangout app).

The overall HOA system is still new, and it’s not without its issues. First, panellists have a lot of different places they might need to look as the call progresses:

  • The “YouTube comment tracker” screen and the “Questions” screen are mutually exclusive: panellists can have only one of them visible at a time
  • Both of these screens are in turn mutually exclusive with a text chat window which the panellists can use to chat amongst themselves (for example, to coordinate who will speak next) while another panellist is speaking.

Second – and this is what currently makes me most apprehensive – the system seems to put a lot of load on my laptop, whenever I am the moderator of a HOA. I’ve actually seen something similar whenever my laptop is generating video for any long call. The laptop gets hotter and hotter as time progresses, and might even cut out altogether – as happened one hour into the last London Futurists HOA (see the end of this video).

Unfortunately, when the moderator’s PC loses connection to the HOA, the HOA itself seems to shut down (after a short delay, to allow quick reconnections). If this happens again on Sunday, we’ll restart the HOA as soon as possible. The “part two” will be visible on the same Google Plus page, but the corresponding YouTube video will have its own, brand new URL.

Since the last occurrence of my laptop overheating during a video call, I’ve had a new motherboard installed, plus a new hard disk (as the old one was giving some diagnostic errors), and had all the dust cleaned out of my system. I’m keeping my fingers crossed for this Sunday. Technology brings its challenges as well as many opportunities…

Footnote: This threat of over-heating reminds me of a talk I gave on several occasions as long ago as 2006, while at Symbian, about “Horsemen of the apocalypse”, including fire. Here’s a brief extract:

Standing in opposition to the potential for swift continuing increase in mobile technology, however, we face a series of major challenges. I call them “horsemen of the apocalypse”.  They include fire, flood, plague, and warfare.

“Fire” is the challenge of coping with the heat generated by batteries running ever faster. Alas, batteries don’t follow Moore’s Law. As users demand more work from their smartphones, their battery lifetimes will tend to plummet. The solution involves close inter-working of new hardware technology (including multi-core processors) and highly sophisticated low-level software. Together, this can reduce the voltage required by the hardware, and the device can avoid catching fire as it performs its incredible calculations…
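
For context – this gloss is mine, not part of the original talk – the reason voltage reduction is the lever of choice is that the dynamic power dissipated by CMOS circuitry grows with the square of the supply voltage:

    P_{\text{dynamic}} \;\approx\; C \, V^{2} \, f

where C is the switched capacitance, V the supply voltage, and f the clock frequency. Halving the voltage therefore cuts dynamic power to roughly a quarter, even before the frequency is touched.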

18 May 2013

Breakthroughs with M2M: moving beyond the false starts

Filed under: collaboration, Connectivity, Internet of Things, leadership, M2M, standards — David Wood @ 10:06 am

Forecasts of machine-to-machine wireless connectivity envision 50 billion, or even one trillion, wirelessly connected devices, at various times over the next 5-10 years. However, these forecasts date back several years, and there’s a perception in some quarters that all is not well in the M2M world.

These were the words that I used to set the scene for a round-table panel discussion at the beginning of this month, at the Harvey Nash offices in high-rise Heron Tower in the City of London. Participants included senior managers from Accenture Mobility, Atholl Consulting, Beecham Research, Eseye, Interskan, Machina Research, Neul, Oracle, Samsung, Telefonica Digital, U-Blox, Vodafone, and Wyless – all attending in a personal capacity. I had the privilege to chair the discussion.

My goal for the discussion was that participants would leave the meeting with clearer ideas and insights about:

  • Obstacles hindering wider adoption of M2M connectivity
  • Potential solutions to these obstacles.

The gathering was organised by Ian Gale, Senior Telecoms Consultant of Harvey Nash. The idea for the event arose in part from reflections from a previous industry round-table that I had also chaired, organised by Cambridge Wireless and Accenture. My online notes on that meeting – about the possible future of the Mobile World Congress (MWC) – included the following thoughts about M2M:

MWC showed a lot of promise for machine-to-machine (M2M) communications and for connected devices (devices that contain communications functionality but which are not phones). But more remains to be done, for this promise to reach its potential.

The GSMA Connected City gathered together a large number of individual demos, but the demos were mainly separated from each other, without there being a clear overall architecture incorporating them all.

Connected car was perhaps the field showing the greatest progress, but even there, practical questions remain – for example, should the car rely on its own connectivity, or instead rely on connectivity of smartphones brought into the car?

For MWC to retain its relevance, it needs to bring M2M and connected devices further to the forefront…

The opening statements from around the table at Harvey Nash expressed similar views about M2M not yet living up to its expected potential. Several of the participants had written reports and/or proposals about machine-to-machine connectivity as much as 10-12 years ago. It was now time, one panellist suggested, to “move beyond the false starts”.

Not one, but many opportunities

An emerging theme in the discussion was that it distorts perceptions to talk about a single, unified M2M opportunity. Headline figures for envisioned near-future numbers of “connected devices” add to the confusion, since:

  • Devices can actually connect in many different ways
  • The typical data flow can vary widely, between different industries, and different settings
  • Differences in data flow mean that the applicable standards and regulations also vary widely
  • The appropriate business models vary widely too.

A tight focus on particular industry opportunities is more likely to bring tangible results than a general broad-brush approach to the entire potential space of however many billion devices might become wirelessly connected in the next 3-5 years. One panellist remarked:

Let’s not try to boil the ocean.

And as another participant put it:

A desire for big volume numbers is understandable, but isn’t helpful.

Instead, it would be more helpful to identify different metrics for different M2M opportunities. For example, these metrics would in some cases track credible cost-savings, if various M2M solutions were to be put in place.

Compelling use-cases

To progress the discussion, I asked panellists for their suggestions on compelling use-cases for M2M connectivity. Two of the most interesting answers also happened to be potentially problematic:

  • There are many opportunities in healthcare, if people’s physiological and medical data can be automatically communicated to monitoring software; savings include freeing up hospital beds, if patients can be reliably monitored in their own homes, as well as proactively detecting early warning signs of impending health issues
  • There are also many opportunities in automotive, with electronic systems inside modern cars generating huge amounts of data about performance, which can be monitored to identify latent problems, and to improve the algorithms that run inside on-board processors.

However, the fields of healthcare and automotive are, understandably, both heavily regulated. As appropriate for life-and-death issues, these industries are risk-averse, so progress is slow. These fields are keener to adopt technology systems that have already been well-proven, rather than carrying out bleeding-edge experimentation on their own. Happily, there are other fields which have a lighter regulatory touch:

  • Several electronics companies have plans to wirelessly connect all their consumer devices – such as cameras, TVs, printers, fridges, and dishwashers – so that users can be alerted when preventive maintenance should be scheduled, or when applicable software upgrades are available; a related example is that a printer could automatically order a new ink cartridge when ink levels are running low
  • Dustbins can be equipped with sensors that notify collection companies when they are full enough to warrant a visit to empty them, avoiding unnecessary travel costs (a minimal sketch of this logic follows the list)
  • Sensors attached to roadway lighting systems can detect approaching vehicles and pedestrians, and can limit the amount of time lights are switched on to the time when there is a person or vehicle in the vicinity
  • Gas pipeline companies can install numerous sensors to monitor flow and any potential leakage
  • Tracking devices can be added to items of equipment to prevent them becoming lost inside busy buildings (such as hospitals).
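
To make the dustbin example above concrete, here is the kind of logic such a device needs: read a fill level, compare it to a threshold, and notify only when a visit is worthwhile. Everything in this sketch – the threshold, the endpoint URL, and the simulated sensor reading – is a hypothetical placeholder, not a real service.

    # Hypothetical sketch of a 'smart dustbin': notify the collection
    # company only when the bin is worth a visit.
    import json
    import random
    import urllib.request

    FILL_THRESHOLD = 0.8  # notify when the bin is 80% full (assumed policy)
    ENDPOINT = "https://collections.example.com/api/bins"  # hypothetical URL

    def read_fill_level():
        # Stand-in for a real ultrasonic or weight sensor reading (0.0-1.0)
        return random.random()

    def notify(bin_id, level):
        payload = json.dumps({"bin": bin_id, "fill": round(level, 2)}).encode()
        req = urllib.request.Request(ENDPOINT, data=payload,
                                     headers={"Content-Type": "application/json"})
        # A deployment would need a real service behind ENDPOINT.
        urllib.request.urlopen(req, timeout=10)

    def check(bin_id):
        level = read_fill_level()
        if level >= FILL_THRESHOLD:
            notify(bin_id, level)
        return level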

Obstacles

It was time to ask the first big question:

What are the obstacles that stand in the way of the realisation of the grander M2M visions?

That question prompted a raft of interesting observations from panellists. Several of the points raised can be illustrated by a comparison with the task of selling smartphones into organisations for use by employees:

  • These devices only add business value if several different parts of the “value chain” are in good working order – not only the device itself, but also the mobile network, the business-specific applications, and connectivity for the mobile devices into the back-end data systems used by business processes in the company
  • All the different parts of the value chain need to be able to make money out of their role in this new transaction
  • To avoid being locked into products from only one supplier, the organisation will wish to see evidence of interoperability with products from different suppliers – in other words, a certain degree of standardisation is needed.

At the same time, there are issues with hardware and network performance:

  • Devices might need to be able to operate with minimal maintenance for several years, and with long-lived batteries
  • Systems need to be immune from tampering or hacking.

Companies and organisations generally need assurance, before making the investments required to adopt M2M technology, that:

  • They have a clear idea of likely ongoing costs – they don’t want to be surprised by needs for additional expenditure, system upgrades, process transformation, repeated re-training of employees, etc
  • They have a clear idea of at least the minimum financial benefits that will accrue to them.

Especially in an uncertain financial climate, companies are reluctant to invest money now on the promise of potential savings being realised at some future date. This results in long, slow sales cycles, in which several layers of management need to be convinced that an investment proposal makes sense. For these reasons, panellists listed the following set of obstacles facing M2M adoption:

  • The end-to-end technology story is often too complicated – resulting in what one panellist called “a disconnected value chain”
  • Lack of clarity over business model; price points often seem unattractive
  • Shortage of unambiguous examples of “quick wins” that can drum up more confidence in solutions
  • Lack of agreed standards – made worse by the fact that standardisation processes seem to move so slowly
  • Conflicts of interest among the different kinds of company involved in the extended value chain
  • Apprehension about potential breaches of security or privacy
  • The existing standards are often unsuitable for M2M use cases, having been developed, instead, for voice calls and video connectivity.

Solutions

My next question turned the discussion to a more positive direction:

Based on your understanding of the obstacles, what initiatives would you recommend, over the next 18-24 months, to accelerate the development of one or more M2M solutions?

In light of the earlier observation that M2M brings “not one, but many opportunities”, it’s no surprise that panellists had divergent views on how to proceed and how to prioritise the opportunities. But there were some common thoughts:

  1. We should expect it to take a long time for complete solutions to be established, but we should be able to plan step-by-step improvements
  2. Better “evangelisation” is needed – perhaps a new term to replace “M2M”
  3. There is merit in pooling information and examples that can help people who are writing business cases for adopting M2M solutions in their organisations
  4. There is particular merit in simplifying the M2M value chain and in accelerating the definition and adoption of fit-for-purpose standards
  5. Formal standardisation review processes are obliged to seek to accommodate the conflicting needs of large numbers of different perspectives, but de facto standards can sometimes be established, a lot more quickly, by mechanisms that are more pragmatic and more focused.

To expand on some of these points:

  • One way to see incremental improvements is by finding new business models that work with existing M2M technologies. Another approach is to change the technology, but without disrupting the existing value chains. The more changes that are attempted at the same time, the harder it is to execute everything successfully
  • Rather than expecting large enterprises to lead changes, a lesson can be learned from what has happened with smartphones over the last few years via “consumer-led IT”: new devices appealed to individuals as consumers, and were then taken into the workforce to be inserted into business processes. One way for M2M solutions to progress to the point where enterprises would be forced to take them more seriously is if consumers adopt them first for non-work purposes
  • One key to consumer and developer experimentation is to make it easier for small groups of people to create their own M2M solutions. For example, an expansion in the reach of Embedded Java could enable wider experimentation. The Arduino open-source electronics prototyping platform can play a role here too, as can the Raspberry Pi (see the sketch after the quotation below)
  • Weightless.org is an emerging standard in which several of the panellists expressed considerable interest. To quote from the Weightless website:

White space spectrum provides the scope to realise tens of billions of connected devices worldwide overcoming the traditional problems associated with current wireless standards – capacity, cost, power consumption and coverage. The forecasted demand for this connectivity simply cannot be accommodated through existing technologies and this is stifling the potential offered by the machine to machine (M2M) market. In order to reach this potential a new standard is required – and that standard is called Weightless.
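
As promised above, here is an indication of how little code a hobbyist-grade M2M experiment can involve on a Raspberry Pi. This assumes the commonly used RPi.GPIO library and a simple digital sensor wired, hypothetically, to GPIO pin 17:

    # Minimal Raspberry Pi sensor poll -- a sketch of hobbyist M2M
    # experimentation, assuming the RPi.GPIO library is installed.
    import time
    import RPi.GPIO as GPIO

    SENSOR_PIN = 17  # hypothetical: any free GPIO pin would do

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SENSOR_PIN, GPIO.IN)

    try:
        while True:
            if GPIO.input(SENSOR_PIN):
                print("sensor triggered")  # a real device would transmit this
            time.sleep(1.0)
    finally:
        GPIO.cleanup()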

Grounds for optimism

As the discussion continued, panellists took the opportunity to highlight areas where they, individually, saw prospects for more rapid progress with M2M solutions:

  • The financial transactions industry is one in which margins are still high; these margins should mean that there is greater possibility for creative experimentation with the adoption of new M2M business models, in areas such as reliable automated authentication for mobile payments
  • The unsustainability of current transport systems, and pressures for greater adoption of new cars with hybrid or purely electric power systems, both provide opportunities to include M2M technology in so-called “intelligent systems”
  • Rapid progress in the adoption of so-called “smart city” technology by cities such as Singapore might provide showcase examples to spur adoption elsewhere in the world, and in new industry areas
  • Progress by weightless.org, which addresses particular M2M use cases, might also serve as a catalyst and inspiration for faster progress in other standards processes.

Some take-aways

To wind up the formal part of our discussion, I asked panellists if they could share any new thoughts that had occurred to them in the course of the preceding 120 minutes of round-table discussion. Here’s some of what I heard:

  • It’s like the early days of the Internet, in which no-one had a really good idea of what would happen next, but where there are clearly plenty of big opportunities ahead
  • There is no “one correct answer”
  • Systems like Arduino will allow young developers to flex their muscles and, no doubt, make lots of mistakes; but a combination of youthful vigour and industry experience (such as represented by the many “grey hairs” around the table) provide good reason for hope
  • We need a better message to evangelise with; “50 billion connected devices” isn’t sufficient
  • Progress will result from people carefully assessing the opportunities and then being bold
  • Progress in this space will involve some “David” entities taking the courage to square up to some of the “Goliaths” who currently have vested interests in the existing technology systems
  • Speeding up time-to-market will require companies to take charge of the entire value chain
  • Enabling consumerisation is key
  • We have a powerful obligation to make the whole solution stack simpler; that was already clear before today, but the discussion has amply reinforced this conclusion.

Next steps

A number of forthcoming open industry events are continuing the public discussion of M2M opportunities, including M2M World.

With thanks to…

I’d like to close by expressing my thanks to the hosts of the event, Harvey Nash, and to the panellists who took the time to attend the meeting and freely share their views.

1 April 2012

Why good people are divided by politics and religion

Filed under: books, collaboration, evolution, motivation, passion, politics, psychology, RSA — David Wood @ 10:58 pm

I’ve lost count of the number of people who have thanked me over the years for drawing their attention to the book “The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” written by Jonathan Haidt, Professor of Social Psychology at the University of Virginia. That was a book with far-reaching scope and penetrating insight. Many of the ideas and metaphors in it have since become fundamental building blocks for other writers to use – such as the pithy metaphor of the human mind being divided like a rider on an elephant, with the job of the rider (our stream of conscious reasoning) being to serve the elephant (the other 99% of our mental processes).

This weekend, I’ve been reading Haidt’s new book, “The Righteous Mind: Why Good People Are Divided by Politics and Religion”. It’s a great sequel. Like its predecessor, it ranges across more than 2,400 years of thought, highlighting how recent research in social psychology sheds clear light on age-old questions.

Haidt’s analysis has particular relevance for two deeply contentious sets of debates that each threaten to destabilise and divide contemporary civil society:

  • The “new atheism” critique of the relevance and sanctity of religion in modern life
  • The political fissures that are coming to the fore in the 2012 US election year – fissures I see reflected in messages full of contempt and disdain in the Facebook streams of several generally sensible US-based people I know.

There’s so much in this book that it’s hard to summarise it without doing an injustice to huge chunks of fascinating material:

  • the importance of an empirical approach to understanding human morality – an approach based on observation, rather than on a priori rationality
  • moral intuitions come first, strategic reasoning comes second, to justify the intuitions we have already reached
  • there’s more to morality than concerns over harm and fairness; Haidt memorably says that “the righteous mind is like a tongue with six taste receptors”
  • the limitations of basing research findings mainly on ‘WEIRD‘ participants (people who are Western, Educated, Industrialised, Rich, and Democratic)
  • the case for how biological “group selection” helped meld humans (as opposed to natural selection just operating at the level of individual humans)
  • a metaphor that “human beings are 90 percent chimp and 10 percent bee”
  • the case that “The most powerful force ever known on this planet is human cooperation — a force for construction and destruction”
  • methods for flicking a “hive switch” inside human brains that open us up to experiences of self-transcendence (including a discussion of rave parties).

The first chapter of the book is available online – as part of a website dedicated to the book. You can also get a good flavour of some of the ideas in the book from two talks Haidt has given at TED: “Religion, evolution, and the ecstasy of self-transcendence” (watch it full screen to get the full benefits of the video effects):

and (from a few years back – note that Haidt has revised some of his thinking since the date of this talk) “The moral roots of liberals and conservatives“:

Interested to find out more? I strongly recommend that you read the book itself. You may also enjoy watching a wide-ranging hour-long interview between Haidt and Robert Wright – author of Nonzero: The Logic of Human Destiny and The Evolution of God.

Footnote: Haidt is talking at London’s Royal Society of Arts on lunchtime on Tuesday 10th April; you can register to be included on the waiting list in case more tickets become available. The same evening, he’ll be speaking at the Royal Institution; happily, the Royal Institution website says that there is still “good availability” for tickets:

Jonathan Haidt, the highly influential psychologist, is here to show us why we all find it so hard to get along. By examining where morality comes from, and why it is the defining characteristic of humans, Haidt will show why we cannot dismiss the views of others as mere stupidity or moral corruption. Our moral roots run much deeper than we realize. We are hardwired not just to be moral, but moralistic and self-righteous. From advertising to politics, morality influences all aspects of behaviour. It is the key to understanding everybody. It explains why some of us are liberals, others conservatives. It is often the difference between war and peace. It is also why we are the only species that will kill for an ideal.

Haidt argues we are always talking past each other because we are appealing to different moralities: it is not just about justice and fairness – for some people authority, sanctity or loyalty are more important. With new evidence from his own empirical research, Haidt will show it is possible to liberate us from the disputes that divide good people. We can either stick to comforting delusions about others, or learn some moral psychology. His hope is that ultimately we can cooperate with those whose morals differ from our own.

2 October 2011

Prioritising the best peer pressure

Filed under: BHAG, catalysts, collaboration, futurist, Humanity Plus — David Wood @ 9:36 am

In a world awash with conflicting influences and numerous potential interesting distractions, how best to keep “first things first“?

A big part of the answer is to ensure that the influences closest to us are influences:

  • Whose goals are aligned with our own
  • Who can give us prompt, helpful feedback when we are falling short of our own declared intentions
  • Who can provide us with independent viewpoints that enrich, complement, and challenge our current understanding.

In my own case, that’s the reason why I have been drawn to the community known as “Humanity+“:

Humanity+ is an international nonprofit membership organization which advocates the ethical use of technology to expand human capacities. We support the development of and access to new technologies that enable everyone to enjoy better minds, better bodies and better lives. In other words, we want people to be better than well.

I deeply share the goals of Humanity+, and I find some of the world’s most interesting thinkers within that community.

It’s also the reason I have sought to aid the flourishing of the Humanity+ community, particularly in the UK, by organising a series of speaker meetings in London. The speakers at these meetings are generally fascinating, but it’s the extended networking that follows (offline and online) which provides the greatest value.

My work life has been very busy in the last few months, leaving me less time to organise regular H+UK meetings. However, to keep myself grounded in a community that contains many people who can teach me a great deal – a community that can provide powerful positive peer pressure – I’ve worked with some H+UK colleagues to pull together an all-day meeting taking place on the Saturday at the end of this week (8th October).

The theme of this meeting is “Beyond Human: Rethinking the Technological Extension of the Human Condition“.  It splits into three parts:

  • Beyond human: The science and engineering
  • Beyond human: Implications and controversies
  • Beyond human: Getting involved

The event is free to attend.  There’s no need to register in advance. The meeting is taking place in lecture room B34 in the Malet Street building (the main building) of Birkbeck College.  This is located in Torrington Square (which is a pedestrian-only square), London WC1E 7HX.

Full details are on the official event website.  In this blogpost, to give a flavour of what will be covered, I’ll just list the agenda with the speakers and panellists.

09.30 – Finding the room, networking
Opening remarks
Beyond human: The science and engineering
11.40 – Audience Q&A with the panel consisting of the above four speakers
Lunch break
12.00 – People make their own arrangements for lunch (there are some suggestions on the event website)
Beyond human: Implications and controversies
14.40 – Audience Q&A with the panel consisting of the above four speakers
Extended DIY coffee break
15.00 – Also a chance for extended networking
Beyond human: Getting involved
17.25 – Audience Q&A with the panel consisting of the above four speakers
End of conference
17.45 – Hard stop – the room needs to be empty by 18.00

You can follow the links to find out more information about each speaker. You’ll see that several are eminent university professors. Several have written key articles or books on the theme of technology that significantly enhances human potential. Some complement their technology savvy with an interest in performance art.  All are distinguished and interesting futurists in their own way.

I don’t expect I’ll agree with everything that’s said, but I do expect that great personal links will be made – and strengthened – during the course of the day.  I also expect that some of the ideas shared at the conference – some of the big, hairy, audacious goals unveiled – will take on a major life of their own, travelling around the world, offline and online, catalysing very significant positive change.
