dw2

29 August 2014

Can technology bring us peace?

The summer months of 2014 have brought us a sickening surfeit of awful news. Our newsfeeds have been full of conflict, casualties, and brutalities in Iraq, Syria, Ukraine, Gaza, and so on. For example, just a couple of days ago, my browser screamed at me: “Let’s be clear about this: Russia is invading Ukraine right now.” And my TV has just informed me that the UK’s terror threat level is being raised from “substantial” to “severe”:

The announcement comes amid increasing concern about hundreds of UK nationals who are believed by security services to have travelled to fight in Iraq and Syria.

These real-world conflicts have been giving rise to online mirror conflicts among many of the people that I tend to respect. These online controversies play out heated disputes about the rights and wrongs of various participants in the real-world battles. Arguments ding-dong ferociously: What is the real reason that the MH17 plane was shot down? How disproportionate is the response by Israel to provocations from Hamas? How much is Islamic belief to blame for the barbarism of the self-proclaimed Islamic State? Or is the US to blame, on account of its ill-advised meddling in far-off lands? And how fair is it to compare Putin to Hitler?

But at a recent informal pub gathering of London Futurists, one of the long-time participants in these meetups, Andrius Kasparavicius, asked a hard question. Shouldn’t those of us who believe in the transformational potential of new technology – those of us who dare to call ourselves technoprogressives, transhumanists, or social futurists – have a better answer to these conflict flashpoints? Rather than falling back into twentieth century diatribes against familiar bête noire villains, isn’t it worth striving to find a 21st century viewpoint that transcends such rivalries? We talk a lot about innovation: can’t we be innovative about solving these global flashpoints?

A similar thought gnawed at me a few weeks later, during a family visit to Inverness. A local production of West Side Story was playing at the Eden Court theatre. Bernstein’s music was exhilarating. Sondheim’s lyrics were witty and provocative. The cast shimmied and slunk around the stage. From our vantage point in the second row of seats, we could see all the emotions flit across the faces of the performers. The sudden tragic ending hit hard. And I thought to myself: These two gangs, the Jets and the Sharks, were locked into a foolish, needless struggle. They lacked an adult, future perspective. Isn’t it the same with the tragic conflicts that occupy our newsfeeds? These conflicts have their own Jets and Sharks, and, yes, a lack of an adult, future perspective. Can’t they see the better future which is within our collective grasp, if only they can cast aside their tribal perspectives?

That thought was soon trumped by another: the analogy is unfair. Some battles are worth fighting. For example, if we take no action against Islamic State, we shouldn’t be surprised if there’s an even worse spate of summary beheadings, forced conversions, women being driven into servitude roles in societies all over the Middle East, and terrorist strikes throughout the wider world.

But still… isn’t it worth considering possible technological, technoprogressive, or transhumanist approaches to peace?

  • After all, we say that technology changes everything. History is the story of the continual invention and enhancement of tools, machines, and devices of numerous sorts, which transform human experience in all fields of life.
  • Indeed, human progress has taken place by the discovery and mastery of engineering solutions – such as fire, the wheel, irrigation, sailing ships, writing, printing, the steam engine, electricity, domestic kitchen appliances, railways and automobiles, computers and the Internet, plastics, vaccinations, anaesthetic, contraception, and better hygiene.
  • What’s more, the rate of technological change is increasing, as larger numbers of engineers, scientists, designers, and entrepreneurs from around the globe participate in a rich online network exchange of ideas and information. Forthcoming technological improvements can propel human experience onto an even higher plane – with our minds and bodies both being dramatically enhanced.
  • So shouldn’t the further development of technology give us more options to achieve lasting resolution of global flashpoints?

Therefore I have arranged an online hangout discussion meeting: Global flashpoints: what do transhumanists have to say? This will be taking place at 7pm UK time this Sunday, 31st August. The corresponding YouTube video page (for people who prefer not to log into Google+ in order to view the Hangout that way) is here. I’ll be joined in this discussion by a number of thinkers from different transhumanist perspectives, based around Europe.

I’ve put a plaintive note on the meeting invite:

In our discussion, we’ll try to transcend the barbs and scapegoating that fill so much of existing online discussion about Iraq/Syria/Ukraine/Gaza/etc.

I honestly don’t know how the discussion is going to unfold. But here are some possible lines of argument:

  1. Consider the flashpoint in Ferguson, Missouri, after the shooting dead of teenager Michael Brown. That particular conflict arose, in part, because of disputes over what actually happened at the time of the shooting. But if the police in Ferguson had all been wearing and operating personal surveillance cameras, then perhaps a lot of the heat would have gone out of the issue. That would be one example of taking advantage of recent improvements in technology in order to defuse a potential conflict hotspot.
  2. Much conflict is driven by people feeling a sense of profound alienation from mainstream culture. Disaffected youths from all over Europe are leaving their families behind to travel to support fundamentalist Islamic causes in the middle east. They need a much better vision of the future, to reduce the chance that they will fall prey to these particular mind viruses. Could social futurism, technoprogressivism, and transhumanism offer that alternative vision?
  3. Rather than technology helping to create peace, there’s a major risk it will help to worsen conflicts. Powerful arsenals in the hands of malcontents are likely to have a more horrific impact nowadays – and an even worse one in the near future – than corresponding weaponry had in the past. Think also of the propaganda value of Islamic State execution videos distributed via YouTube – that kind of effect was unthinkable just a decade ago.

Of these three lines of discussion, I am most persuaded by the third one. The implications are as follows. The message that we social futurists and transhumanists should be highlighting, in response to these outrages, is, sadly, “You ain’t seen nothing yet”. There are existential risks ahead that will demand very serious collective action to solve. In that case, it’s even more imperative that the global community gets its act together, and finds a more effective way to resolve the conflicts in our midst.

At the same time, we do need to emphasise the positive vision of where the world could reach in, say, just a few decades: a world with enormous abundance, fuelled by new technologies (nanotech, solar energy, rejuvenation biotech, ubiquitous smart robots) – a world that will transcend the aspirations of all existing ideologies. If we can make the path to this future more credible, there’s good reason to hope that people all over the world will set aside their previous war-like tendencies, tribal loyalties, and dark age mythologies.

 

23 January 2014

The future of learning and the future of climate change

Filed under: climate change, collaboration, education — David Wood @ 6:52 pm

Yesterday, I spent some time at the BETT show in London’s ExCeL centre. BETT describes itself as:

the world’s leading event for learning technology for education professionals…  dedicated to showcasing the best in UK and international learning technology products, resources, and best practice… in times where modern learning environments are becoming more mobile and ‘learning anywhere’ is more of a possibility.

I liked the examples that I saw of increasing use of Google Apps in education, particularly on Chrome Books. These examples were described by teachers who had been involved in trials, at all levels of education. The teachers had plenty of heart-warming stories of human wonderment, of pupils helping each other, and of technology taking a clear second place to learning.

I was also impressed to hear some updates about the use of MOOCs – “Massive open online courses”. For example, I was encouraged by what I heard at BETT about the progress of the UK-based FutureLearn initiative.

As Wikipedia describes FutureLearn,

FutureLearn is a massive open online course (MOOC) platform founded in December 2012 as a company majority owned by the UK’s Open University. It is the first UK-led massive open online course platform, and as of October 2013 had 26 University partners and – unlike similar platforms – includes three non-university partners: the British Museum, the British Council and the British Library.

Among other things, my interest in FutureLearn was to find out if similar technology might be used, at some stage, to help raise better awareness of general futurist topics, such as the Technological Singularity, Radical Life Extension, and Existential Risks – the kind of topics that feature in the Hangout On Air series that I run. I remain keen to develop what I’ve called “London Futurists Academy”. Could a MOOC help here?

I resolved that it was time for me to gain first-hand experience of one of these systems, rather than just relying on second-hand experience from other people.


I clicked on the FutureLearn site to see which courses might be suitable for me to join. I was soon avidly reading the details of their course Climate change: challenges and solutions:

This course aims to explain the science of climate change, the risks it poses and the solutions available to reduce those risks.

The course is aimed at the level of students entering university, and seeks to provide an inter-disciplinary introduction to what is a broad field. It engages a number of experts from the University of Exeter and a number of partner organisations.

The course will set contemporary human-caused climate change within the context of past natural climate variability. Then it will take a risk communication approach, balancing the ‘bad news’ about climate change impacts on natural and human systems with the ‘good news’ about potential solutions. These solutions can help avoid the most dangerous climate changes and increase the resilience of societies and ecosystems to those climate changes that cannot be avoided.

The course lasts eight weeks, and is described as requiring about three hours of time every week. Participants take part entirely from their own laptop. There is no fee to join. The course material is delivered via a combination of videos (with attractive graphics), online documents, and quizzes and tests. Participants are also encouraged to share some of their experiences, ideas, and suggestions via the FutureLearn online social network.

For me, the timing seemed almost ideal. The London Futurists meetup last Saturday had addressed the topic of climate change. There’s an audio recording of the event here (it lasts just over two hours). The speaker, Duncan Clark, was excellent. But discussion at the event (and subsequently continued online) confirmed that there remain lots of hard questions needing further analysis.

I plan to invite other speakers on climate change topics to forthcoming London Futurists events, but in the meantime, this FutureLearn course seems like an excellent opportunity for many people to collectively deepen their knowledge of the overall subject.

I say this after having worked my way through the material for the first week of the course. I can’t say I learnt anything surprising, but the material was useful background to many of the discussions that I keep getting involved in. It was well presented and engaging. I paid careful attention, knowing there would be an online multiple choice test at the end of the week’s set of material. A couple of the questions in the test needed me to think quite carefully before answering. After I answered the final question, I was pleased to see the following screen:

Week 1 result

It’s fascinating to read online the comments from other participants in the course. It looks like over 1,700 people have completed the first week’s material. Some of the participants are aged in their 70s or 80s, and it’s their first experience with computer learning.

There hasn’t been much controversy in the first week’s topics. One part straightforwardly explained the reasons why the observed changes in global temperature over the last century cannot be attributed to changes in solar radiation, even though changes in solar radiation could be responsible for the “Little Ice Age” between 1550 and 1850. That part, like all the other material from the first week, seemed completely fair and objective to me. I look forward to the subsequent sections.

I said that the timing of the course was almost ideal. However, it started on the 13th of January, and FutureLearn only allow people to join the course for up to 14 days after the official start date.

That means if any readers of this blog wish to follow my example and enrol in this course too, you’ll have to do so by this Sunday, the 26th of January.

I do hope that other people join the course, so we can compare notes, as we explore pathways to improved collaborative learning.

PS for my overall thoughts on climate change, see some previous posts in this blog, such as “Six steps to climate catastrophe” and “Risk blindness and the forthcoming energy crash”.

13 January 2014

Six steps to climate catastrophe

In a widely read Rolling Stone article from July 2012, “Global Warming’s Terrifying New Math”, Bill McKibben introduced what he called

Three simple numbers that add up to global catastrophe.

The three numbers are as follows:

  1. 2 degrees Celsius – the threshold of average global temperature rise “which scientists (and recently world leaders at the G8 summit) have agreed we must not cross, for fear of triggering climate feedbacks which, once started, will be almost impossible to stop and will drive accelerated warming out of our control”
  2. 565 Gigatons – the amount of carbon dioxide that can be added into the atmosphere by mid-century with still an 80% chance of the temperature rise staying below two degrees
  3. 2,795 Gigatons – “the amount of carbon already contained in the proven coal and oil and gas reserves of the fossil-fuel companies, and the countries (think Venezuela or Kuwait) that act like fossil-fuel companies. In short, it’s the fossil fuel we’re currently planning to burn”.

As McKibben highlights,

The key point is that this new number – 2,795 – is higher than 565. Five times higher.
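The arithmetic behind that claim is easy to verify. Here is a minimal sketch in Python (the variable names are mine, not McKibben’s):

```python
# McKibben's two carbon numbers, in gigatons of CO2
budget = 565     # can still be emitted with an 80% chance of staying under 2°C
reserves = 2795  # carbon embodied in proven fossil-fuel reserves

# Ratio of reserves to the remaining budget
print(f"Reserves exceed the budget by a factor of {reserves / budget:.2f}")

# Equivalently, the share of reserves that must stay in the ground
print(f"Unburnable share: {1 - budget / reserves:.0%}")
```

The ratio comes out at just under five, and the unburnable share at roughly 80% – the same figure McKibben quotes below.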

He has a vivid metaphor to drive his message home:

Think of two degrees Celsius as the legal drinking limit – equivalent to the 0.08 blood-alcohol level below which you might get away with driving home. The 565 gigatons is how many drinks you could have and still stay below that limit – the six beers, say, you might consume in an evening. And the 2,795 gigatons? That’s the three 12-packs the fossil-fuel industry has on the table, already opened and ready to pour.

We have five times as much oil and coal and gas on the books as climate scientists think is safe to burn. We’d have to keep 80 percent of those reserves locked away underground to avoid that fate. Before we knew those numbers, our fate had been likely. Now, barring some massive intervention, it seems certain.

He continues,

Yes, this coal and gas and oil is still technically in the soil. But it’s already economically above ground – it’s figured into share prices, companies are borrowing money against it, nations are basing their budgets on the presumed returns from their patrimony. It explains why the big fossil-fuel companies have fought so hard to prevent the regulation of carbon dioxide – those reserves are their primary asset, the holding that gives their companies their value. It’s why they’ve worked so hard these past years to figure out how to unlock the oil in Canada’s tar sands, or how to drill miles beneath the sea, or how to frack the Appalachians.

The burning question


A version of Bill McKibben’s Global Warming’s Terrifying New Math essay can be found as the foreword to the recent book “The Burning Question” co-authored by Duncan Clark and Mike Berners-Lee. The subtitle of the book has a somewhat softer message than in the McKibben essay:

We can’t burn half the world’s oil, coal, and gas. So how do we quit?

But the introduction makes it clear that constraints on our use of fossil fuel reserves will need to go deeper than “one half”:

Avoiding unacceptable risks of catastrophic climate change means burning less than half of the oil, coal, and gas in currently commercial reserves – and a much smaller fraction of all the fossil fuels under the ground…

Notoriously, climate change is a subject that is embroiled in controversy and intemperance. The New York Times carried an opinion piece, “We’re All Climate-Change Idiots” containing this assessment from Anthony Leiserowitz, director of the Yale Project on Climate Change Communication:

You almost couldn’t design a problem that is a worse fit with our underlying psychology.

However, my assessment of the book “The burning question” by Berners-Lee and Clark is that it is admirably objective and clear. That impression was reinforced when I saw Duncan Clark speak about the contents of the book at London’s RSA a couple of months ago. On that occasion, the meeting was constrained to less than an hour, for both presentation and audience Q&A. It was clear that the speaker had a lot more that he could have said.

I was therefore delighted when he agreed to speak on the same topic at a forthcoming London Futurists event, happening in Birkbeck College from 6.15pm to 8.30pm on Saturday 18th January. You can find more details of the London Futurists event here. Following our normal format, we’ll have a full two hours of careful examination of the overall field.

Six steps to climate catastrophe

One way to examine the risks of climate catastrophe induced by human activity is to consider the following six-step chain of cause and effect:

  1. Population – the number of people on the earth
  2. Affluence – the average wealth of people on the earth
  3. Energy intensity – the average amount of energy used to create a unit of wealth
  4. Carbon intensity – the average carbon emissions caused by each unit of energy
  5. Temperature impact – the average increase of global temperature caused by carbon emissions
  6. Global impact – the broader impact on life on earth caused by increased average temperature.
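The first four links of this chain mirror the well-known Kaya identity, in which annual emissions are the product of population, affluence, energy intensity, and carbon intensity. Here is a minimal sketch, using illustrative round numbers of roughly the right magnitude for the early 2010s (the specific figures are mine, for illustration only):

```python
# Kaya identity: emissions = population × affluence × energy intensity × carbon intensity
population = 7.1e9       # people on the earth
affluence = 10_000       # average GDP per person, in USD per year
energy_intensity = 7.5   # megajoules of energy per USD of GDP
carbon_intensity = 0.07  # kg of CO2 emitted per megajoule of energy

emissions_kg = population * affluence * energy_intensity * carbon_intensity
print(f"Annual CO2 emissions: {emissions_kg / 1e12:.0f} gigatonnes")
```

With these numbers the product comes out in the high thirties of gigatonnes per year, in the right ballpark for current global emissions. Each of the interventions commentators recommend amounts to shrinking one of these four factors, or to re-examining the last two links in the chain.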


As Berners-Lee and Clark discuss in their book, there’s scope to debate, and/or to alter, each of these causal links. Various commentators recommend:

  • A reduction in the overall human population
  • Combatting society’s deep-seated imperatives to pursue economic growth
  • Achieving greater affluence with less energy input
  • Switching to energy sources (such as “renewables”) with reduced carbon emissions
  • Seeing (or engineering) different causes that complicate the relation between carbon emissions and temperature rises
  • Seeing (or engineering) beneficial aspects to global increases in temperature, rather than adverse ones.

What they point out, however, is that despite significant progress to reduce energy intensity and carbon intensity, the other factors seem to be increasing out of control, and dominate the overall equation. Specifically, affluence shows no signs of decreasing, especially when the aspirations of huge numbers of people in emerging economies are taken into consideration.

I see this as an argument to accelerate work on technical solutions – further work to reduce the energy intensity and carbon intensity factors. I also see it as an argument to rapidly pursue investigations of what Berners-Lee and Clark call “Plan B”, namely various forms of geoengineering. This extends beyond straightforward methods for carbon capture and storage, and includes possibilities such as

  • Using the oceans to take more carbon dioxide out of the air and store it in an inert form
  • Screening some of the incoming heat from the sun, by, for example, creating more clouds, or injecting aerosols into the upper atmosphere.

But Berners-Lee and Clark remain apprehensive about one overriding factor. This is the one described earlier: so much investment is tied up in the share prices of fossil-fuel companies, on the assumption that huge amounts of the known reserves will be burnt, relatively soon. Providing better technical fixes will, they argue, be insufficient to halt the ongoing juggernaut of conversion from fossil fuels into huge cash profits for industry – a juggernaut with the side-effect of accumulated carbon emissions that increase the risk of horrendous climate consequences.

For this reason, they see the need for concerted global action to ensure that the prices being paid for the acquisition and/or consumption of fossil fuels fully take into account the downside costs to the global environment. This will be far from easy to achieve, but the book highlights some practical steps forwards.

Waking up

The first step – as so often, in order to succeed in a complex change project – is to engender a sustained sense of urgency. Politicians won’t take action unless there is strong public pressure for action. This public pressure won’t exist whilst people remain in a state of confusion, indifference, dejection, and/or helplessness. Here’s an extract from near the end of their book:

It’s crucial that more people hear the simple facts loud and clear: that climate change presents huge risks, that our efforts to solve it so far haven’t worked, and that there’s a moral imperative to constrain unabated fossil fuel use on behalf of current and especially future generations.

It’s often assumed that the world isn’t ready for this kind of message – that it’s too negative or scary or confrontational. But reality needs facing head on – and anyhow the truth may be more interesting and inspiring than the watered down version.

I expect many readers of this blogpost to have questions in their mind – or possibly objections (rather than just questions) – regarding at least some of what’s written above. This topic deserves a 200 page book rather than just a short blogpost.

Rather than just urging people to read the book in question, I have set up the London Futurists event previously mentioned. I am anticipating robust but respectful in-depth discussion.

Beyond technology

One possible response is that the acceleration of technological solutions will deliver sufficient solutions (e.g. reducing energy intensity and carbon intensity) long before we need to worry about the climate reaching any tipping point. Solar energy may play a decisive role – possibly along with new generations of nuclear power technology.

That may turn out to be true. But my own engineering experience with developing complex technological solutions is that the timetable is rarely something that anyone can be confident about in advance. So yes, we need to accelerate the technology solutions. But equally, as an insurance policy, we need to take actions that will buy ourselves more time, in order for these technological solutions to come to full fruition. This insurance policy inevitably involves the messy worlds of politics and economics, alongside the developments that happen in the technological arena.

This last message comes across uncomfortably to people who dislike any idea of global coordinated action in politics or economics. People who believe in “small government” and “markets as free as possible” don’t like to contemplate global scale political or economic action. That is, no doubt, another reason why the analysis of global warming and climate change is such a contentious issue.

20 December 2013

Kick-starting the future – less than 24 hours to go

Filed under: Anticipating 2025, collaboration, communications, futurist — David Wood @ 10:18 am

By chance, two really interesting projects, both seeking support on the crowd-funding site Kickstarter, are coming to their conclusions in the next 24 hours.

They’re both well worth a look.

shift2020-book-cover

shift 2020 is a collaborative book about how technology will impact our future. The book is curated by Rudy De Waele and designed by Louise Campbell.

As Rudy explains,

The idea of shift 2020 is based upon Mobile Trends 2020, a collaborative project I launched early 2010. It’s one of the highest viewed decks on Slideshare (in the Top 50 of All Time in Technology / +320k views). Reviewing the document a couple of weeks ago, I realised the future is catching up on us much faster than many of the predictions that were made. I thought it was time to ask the original contributors for an update on their original predictions and new foresights for the year 2020.

The list of authors is extensive. I would copy out all the names here, but urge you to click on the links to see the full list.

My own set of five predictions from early 2010 that I submitted to Rudy’s earlier project Mobile Trends 2020 seems to be holding up well for fulfilment by 2020 (if not sooner). See slide 36 of the 2010 presentation:

  1. Mobiles manifesting AI – fulfilling, at last, the vision of “personal digital assistants”
  2. Powerful, easily wearable head-mounted accessories: audio, visual, and more
  3. Mobiles as gateways into vivid virtual reality – present-day AR is just the beginning
  4. Mobiles monitoring personal health – the second brains of our personal networks
  5. Mobiles as universal remote controls for life – a conductor’s baton as much as a viewing portal.

5 predictions for 2010

I’ve added some extra content for shift 2020, but that’s embargoed for now!

People who give financial support via Kickstarter to shift 2020 have lots of options to consider. For example, a pledge of £13 will deliver you the following:

NO FRILLS PAPERBACK (UK SHIPPING).
shift 2020 Black and white printing on cream-coloured paper with a full-colour soft cover (5×8 in 13×20 cm) + 80 pages specially designed for business travellers, printed by blurb.com. Shipping costs included.

Estimated delivery: Jan 2014; Ships within the UK only

And £60 will deliver this:

PERSONAL NAME IN THE BOOK (PRINT VERSION).
shift 2020 nicely designed quality Hardcover, ImageWrap Standard Landscape 10×8 (25×20 cm) +80 pages Photo Book printed by blurb.com on Premium Semi Matt Paper, including a mention of your (personal) name in the acknowledgements page.

Estimated delivery: Jan 2014
Add £5 to ship outside the UK

Whereas shift 2020 seeks funding to support book publication, PostHuman seeks funding to support a series of videos about transhumanism.

The “BIOPS” team behind this campaign have already created one first-class video:

The first video by the British Institute of Posthuman Studies (BIOPS), entitled “PostHuman: An Introduction To Transhumanism”, investigates three dominant areas of transhumanist thought: super longevity, super intelligence and super wellbeing. It covers the ideas of Aubrey de Grey, Ray Kurzweil and David Pearce.

I’ll let the BIOPS team tell their story:

Writers Marco Vega and Peter Brietbart (that’s us!) have shared a passion for philosophy since we first met at Sussex University five years ago. Over time, we became frustrated with the classical, removed armchair philosophy, and began to look for philosophically sophisticated ideas with real human impact. Transhumanism stood out as a practical, far-seeing, radical and urgent field, informed by science and guided by moral philosophy.

We soon realised that our philosophy buddies and lecturers had barely heard of it, though the ideas involved were exciting and familiar. The problem for us is that even though transhumanism is incredibly relevant, it’s practically invisible in mainstream thought.

Influenced by YouTubers like QualiaSoup, 3vid3nc3, CGPGrey, RSA Animate, TheraminTrees, Vsauce, CrashCourse and many more, we saw that complex ideas can be made accessible, entertaining and educational.

Our dream is to make this project – the culmination of five years of thought, reflection and research – a reality.

We’ve just released the first video – PostHuman: An Introduction to Transhumanism. We made it over the course of a year, in volunteered time, paid with favours and fuelled by enthusiasm. Now we need your help to keep going…

In the year 2014, we want to write, produce and release at least 6 more fully animated episodes. We’ll investigate a range of different transhumanist themes, consider their arguments in favour, highlight our greatest worries, and articulate what we perceive to be the most significant implications for humanity.

We’re worried that such critical topics and concepts are not getting the coverage they need. Our aim for the video series is to bring awareness to the most important conversation humanity needs to be having, and to do it in a way that’s accessible, balanced and educational.

In addition to animating the ideas and concepts, we also want to seek out and challenge influential transhumanist thinkers. We’ll record the interviews, and include the highlights at the end of the videos.

We’re looking to raise £65,000 to allow the production crew to make this happen.

I’m delighted that Marco Vega and Peter Brietbart of BIOPS will be among the speakers at the Anticipating 2025 event I’m holding at Birkbeck College on 22-23 March.

I wish both shift 2020 and PostHuman the best of luck with their fundraising and delivery!

30 September 2013

Questions about Hangouts on Air

Filed under: collaboration, Google, Hangout On Air, intelligence — David Wood @ 11:05 pm

I’m still learning how to get the best results from Google Hangouts On Air – events that are broadcast live over the Internet.

On Sunday, I hosted a Hangout On Air which ran pretty well. However, several features of the experience were disappointing.

Here, I’m setting aside questions about what the panellists said. It was a fascinating discussion, but in this blogpost, I want to ask some questions, instead, about the technology involved in creating and broadcasting the Hangout On Air. That was the disappointing part.

If anyone reading this can answer my questions, I’ll be most grateful.

If you take a quick look at the beginning of the YouTube video of the broadcast, you’ll immediately see the first problem I experienced:

The problem was that the video uplink from my own laptop didn’t get included in the event. Instead of showing what I thought I was contributing, the broadcast just displayed my G+ avatar (a static picture of my face). That was in contrast to the situation for the other four participants.

When I looked at the Hangout On Air window on my laptop as I was hosting the call, it showed me a stream of images recorded by my webcam. It also showed, at other times, slides which I was briefly presenting. That’s what I saw, but no-one else saw it. None of these displays made it into the broadcast version.

Happily, the audio feed from my laptop did reach the broadcast version. But not the video.

As it happens, I think that particular problem was “just one of those things”, which happen rarely, and in circumstances that are difficult to reproduce. I doubt this problem will recur in this way, the next time I do such an event. I believe that the software system on my laptop simply got itself into a muddle. I saw other evidence for the software being in difficulty:

  • As the event was taking place, I got notifications that people had added me to their G+ circles. But when I clicked on these notifications, to consider reciprocally adding these people into my own circles, I got an error message, saying something like “Cannot retrieve circle status info at this time”
  • After the event had finished, I tried to reboot my laptop. The shutdown hung, twice. First, it hung with a most unusual message, “Waiting for explorer.exe – playing logoff sound”. Second, after I accepted the suggestion from the shutdown dialog to close down that app regardless, the laptop hung indefinitely in the final “shutting down” display. In the end, I pressed the hardware reset button.

That muddle shouldn’t have arisen, especially as I had taken the precaution of rebooting my laptop some 30 minutes before the event was due to start. But it did. However, what made things worse is that I only became aware of this issue once the Hangout had already started its broadcast phase.

At that time, the other panellists told me they couldn’t see any live video from my laptop. I tried various quick fixes (e.g. switching my webcam off and on), but to no avail. I also wondered whether I was suffering from a local bandwidth restriction, but I had reset my broadband router 30 minutes before the call started, and I was the only person in my house at that time.

“Exit the hangout and re-enter it” was the next suggestion offered to me. Maybe that would fix things.

But this is where I see a deeper issue with the way Hangouts On Air presently work.

From my experience (though I’ll be delighted if people can tell me otherwise), when the person who started the Hangout On Air exits the event, the whole event shuts down. The host is therefore in a different position from the other panellists, who can exit and rejoin without terminating the event.

By the time I found out about the video uplink problem, I had already published the YouTube URL where the Hangout would be broadcast. After starting the Hangout On Air (but before discovering the problem with my video feed), I had copied this URL to quite a few different places on social media – Meetup.com, Facebook, etc. I knew that people were already watching the event. If I exited the Hangout, to see if that would get the video uplink working again, we would have had to start a new Hangout, which would have had a different YouTube URL, and I would have had to manually update all those social networking pages.

I can imagine two possible solutions to this – but I don’t think either is available yet, right?

  1. There may be a mechanism for the host to leave the Hangout On Air, without that Hangout terminating
  2. There may be a mechanism for something like a URL redirector, so that the same URL would continue to work for a second Hangout instance that replaces the previous one.
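The second idea could, in principle, be approximated by publishing a stable URL that one controls, and having it redirect to whatever the current broadcast URL happens to be. Here is a minimal sketch – the file-based update mechanism and hostnames are my own assumptions, not any feature Google provides:

```python
# A stable, publishable URL that redirects to the current broadcast URL.
# If the Hangout has to be restarted, only the target file is updated;
# the published link keeps working. File-based storage is illustrative.
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path

TARGET_FILE = Path("current_broadcast_url.txt")  # hypothetical location

def current_target(path=TARGET_FILE):
    """Return the URL that the stable link should redirect to."""
    return path.read_text().strip()

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)  # temporary redirect: the target can change
        self.send_header("Location", current_target())
        self.end_headers()

# To serve: HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

Restarting the broadcast would then only require overwriting the target file, rather than re-editing every social media post.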

Incidentally, in terms of URLs for the Hangout, note that there are at least three different such URLs:

  1. The URL of the “inside” of the Hangout, which the host can share with panellists to allow them to join it
  2. The URL of the Google+ window where the Hangout broadcast runs
  3. The URL of the YouTube window where the Hangout broadcast runs.

As far as I know, all three URLs change when a Hangout is terminated and restarted. What’s more, #1 and #3 are created when the Hangout starts, even before it switches into Broadcast mode, whereas #2 is only available when the host presses the “Start broadcasting” button.

In short, it’s a pretty complicated state of affairs. I presume that Google are hard at work to simplify matters…

To look on the positive side, one outcome that I feared (as I mentioned previously) didn’t come to pass. That outcome was my laptop over-heating. Instead, according to the CPU temperature monitor widget that I run on my laptop, the temperature remained comfortable throughout (reaching the 70s Centigrade, but staying well short of the 100 degree value which triggers an instant shutdown). I imagine that, because no video uplink was taking place, there was no strong CPU load on my laptop. I’ll have to wait to see what happens next time.

After all, over-heating is another example of something that might cause a Hangout host to want to temporarily exit the Hangout, without bringing the whole event to a premature end. There are surely other examples as well.

27 September 2013

Technology for improved collaborative intelligence

Filed under: collaboration, Hangout On Air, intelligence, Symbian — David Wood @ 1:02 pm

Interested in experiences in using Google Hangout On Air, as a tool to improve collaborative intelligence? Read on.

Google’s Page Rank algorithm. The Wikipedia editing process. Ranking of reviewers on Amazon.com. These are all examples of technology helping to elevate useful information above the cacophony of background noise.

To be clear, in such examples, insight doesn’t just come from technology. It comes from a combination of good tools plus good human judgement – aided by processes that typically evolve over several iterations.
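For a flavour of how such tools combine simple mathematics with collective human behaviour, here’s a toy version of the power-iteration idea behind Google’s Page Rank algorithm – a teaching sketch on a three-page graph, not the production system:

```python
# Toy power-iteration PageRank over a tiny link graph.
# A page earns rank by being linked to by other well-ranked pages.
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(graph)
# "c" ends up ranked highest: both other pages link to it
```

The point is that a simple, transparent rule – applied iteratively over everyone’s linking decisions – surfaces the pages the community collectively finds useful.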

For London Futurists, I’m keen to take advantage of technology to accelerate the analysis of radical scenarios for the next 3-40 years. One issue is that the general field of futurism has its own fair share of background noise:

  • Articles that are full of hype or sensationalism
  • Articles motivated by commercial concerns, with questionable factual accuracy
  • Articles intended for entertainment purposes, but which end up overly influencing what people think.

Lots of people like to ramp up the gas while talking about the future, but that doesn’t mean they know what they’re talking about.

I’ve generally been pleased with the quality of discussion in London Futurists real-life meetings, held (for example) in Birkbeck College, Central London. The speaker contributions in these meetings are important, but the audience members collectively raise a lot of good points too. I do my best to ‘referee’ the discussions, so that a range of opinions has a chance to be aired. But there have been three main limitations with these meetups:

  1. Meetings often come to an end well before we’ve got to the bottom of some of the key lines of discussion
  2. The insights from individual meetings can sometimes fail to be taken forward into subsequent meetings – where the audience members are different
  3. Attendance is limited to people who live near to London, and who have no other commitments when the meetup is taking place.

These limitations won’t disappear overnight, but I have plans to address them in stages.

I’ve explained some of my plans in the following video, which is also available at http://londonfuturists.com/2013/08/30/introducing-london-futurists-academy/.

As the video says, I want to be able to take advantage of the same kind of positive feedback cycles that have accelerated the progress of technology, in order to accelerate in a similar way the generation of reliable insight about the future.

As a practical step, I’m increasingly experimenting with Google Hangouts, as a way to:

  • Involve a wider audience in our discussions
  • Preserve an online record of the discussions
  • Find out, in real-time, which questions the audience collectively believes should be injected into a conversation.

In case it helps others who are also considering the usage of Google Hangouts, here’s what I’ve found out so far.

The Hangouts are a multi-person video conference call. Participants have to log in via one of their Google accounts. They also have to download an app, inside Google Plus, before they can take part in the Hangout. Google Plus will prompt them to download the app.

The Hangout system comes with its own set of plug-in apps. For example, participants can share their screens, which is a handy way of showing some PowerPoint slides that back up a point you are making.

By default, the maximum number of attendees is 10. However, if the person who starts the Hangout has a corporate account with Google (as I have, for my company Delta Wisdom), that number can increase to 15.

For London Futurists meetings, instead of a standard “Hangout”, I’m using “Hangouts On Air” (sometimes abbreviated as ‘HOA’). These are started from within their own section of the Google Plus page:

  • The person starting the call (the “moderator”) creates the session in a “pre-broadcast” state, in which he/she can invite a number of participants
  • At this stage, the YouTube URL where the Hangout can be viewed is generated; this vital piece of information can be published on social networking sites
  • The moderator can also take some other pre-broadcast steps, such as enabling the “Questions” app (further mentioned below)
  • When everyone is ready, the moderator presses the big red “Start broadcast” button
  • A wide audience is now able to watch the panellists’ discussion via the YouTube URL, or on the Google Plus page of the moderator.

For example, there will be a London Futurists HOA this Sunday, starting 7pm UK time. There will be four panellists, plus me. The subject is “Projects to accelerate radical healthy longevity”. The details are here. The event will be visible on my own Google Plus page, https://plus.google.com/104281987519632639471/posts. Note that viewers don’t need to be included in any of the Circles of the moderator.

As the HOA proceeds, viewers typically see the current speaker at the top of the screen, along with the other panellists in smaller windows below. The moderator has the option to temporarily “lock” one of the participants into the top area, so that their screen has prominence at that time, even though other panellists might be speaking.

It’s good practice for panellists to mute their microphones when they’re not speaking. That kind of thing is useful for the panellists to rehearse with the moderator before the call itself (perhaps in a brief preview call several days earlier), in order to debug connectivity issues, the installation of apps, camera positioning, lighting, and so forth. Incidentally, it’s best if there’s a source of lighting in front of the speaker, rather than behind.

How does the audience get to interact with the panellists in real-time? Here’s where things become interesting.

First, anyone watching via YouTube can place text comments under the YouTube window. These comments are visible to the panellists:

  • Either by keeping an eye on the same YouTube window
  • Or, simpler, within the “Comment Tracker” tab of the “Hangout Toolbox” app that is available inside the Hangout window.

However, people viewing the HOA via Google Plus have a different option. Provided the moderator has enabled this feature before the start of the broadcast, viewers will see a big button inviting them to ask a question, in a text box. They will also be able to view the questions that other viewers have submitted, and to give a ‘+1’ thumbs up endorsement.

In real-time, the panellists can see this list of questions appear on their screens, inside the Hangout window, along with an indication of how many ‘+1’ votes they have received. Ideally, this will help the moderator to pick the best question for the panel to address next. It’s a small step in the direction of greater collaborative intelligence.
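The ‘+1’ mechanism is, in effect, a crowd-ranked question queue. A minimal sketch of the idea (the data model below is my own invention for illustration, not the actual HOA Questions app):

```python
# Viewers submit questions and upvote each other's; the moderator
# pulls out the current front-runner for the panel to address next.
from collections import defaultdict

class QuestionQueue:
    def __init__(self):
        self.votes = defaultdict(int)

    def ask(self, question):
        self.votes[question] += 0  # register the question with zero votes

    def plus_one(self, question):
        self.votes[question] += 1

    def best(self):
        """The question with the most '+1' endorsements, if any."""
        return max(self.votes, key=self.votes.get) if self.votes else None
```

Even a structure this simple shifts the choice of “what to discuss next” from one moderator’s guess to an aggregate of audience judgement.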

At time of writing, I don’t think there’s an option for viewers to downvote each other’s questions. However, there is an option to declare that a question is spam. I expect the Google team behind HOA will be making further enhancements before long.

This Questions app is itself an example of how the Google HOA technology is improving. The last time I ran a HOA for London Futurists, the Questions app wasn’t available, so we just used the YouTube comments mechanism. One of the panellists for that call, David Orban, suggested I should look into another tool, called Google Moderator, for use on a subsequent occasion. I took a look, and liked what I saw, and my initial announcement of my next HOA (the one happening on Sunday) mentioned that I would be using Google Moderator. However, as I said, technology moves on quickly. Giulio Prisco drew my attention to the recently announced Questions feature of the HOA itself – a feature that had previously been in restricted test usage, but which is now available to all users of HOA. So we’ll be using that instead of Google Moderator (which is a rather old tool, without any direct connection into the Hangout app).

The overall HOA system is still new, and it’s not without its issues. For example, panellists have a lot of different places they might need to look, as the call progresses:

  • The “YouTube comment tracker” screen and the “Questions” screen are mutually exclusive: panellists can only have one of them visible at a time
  • Both of these screens are in turn mutually exclusive with a text chat window which the panellists can use to chat amongst themselves (for example, to coordinate who will be speaking next) while one of the other panellists is speaking.

A second issue – and this is what currently makes me most apprehensive – is that the system seems to put a lot of load on my laptop whenever I am the moderator of a HOA. I’ve actually seen something similar whenever my laptop is generating video for any long call. The laptop gets hotter and hotter as time progresses, and might even cut out altogether – as happened one hour into the last London Futurists HOA (see the end of this video).

Unfortunately, when the moderator’s PC loses connection to the HOA, the HOA itself seems to shut down (after a short delay, to allow quick reconnections). If this happens again on Sunday, we’ll restart the HOA as soon as possible. The “part two” will be visible on the same Google Plus page, but the corresponding YouTube video will have its own, brand new URL.

Since the last occurrence of my laptop overheating during a video call, I’ve had a new motherboard installed, plus a new hard disk (as the old one was giving some diagnostic errors), and had all the dust cleaned out of my system. I’m keeping my fingers crossed for this Sunday. Technology brings its challenges as well as many opportunities…

Footnote: This threat of over-heating reminds me of a talk I gave on several occasions as long ago as 2006, while at Symbian, about “Horsemen of the apocalypse”, including fire. Here’s a brief extract:

Standing in opposition to the potential for swift continuing increase in mobile technology, however, we face a series of major challenges. I call them “horsemen of the apocalypse”.  They include fire, flood, plague, and warfare.

“Fire” is the challenge of coping with the heat generated by batteries running ever faster. Alas, batteries don’t follow Moore’s Law. As users demand more work from their smartphones, their battery lifetimes will tend to plummet. The solution involves close inter-working of new hardware technology (including multi-core processors) and highly sophisticated low-level software. Together, this can reduce the voltage required by the hardware, and the device can avoid catching fire as it performs its incredible calculations…

18 May 2013

Breakthroughs with M2M: moving beyond the false starts

Filed under: collaboration, Connectivity, Internet of Things, leadership, M2M, standards — David Wood @ 10:06 am

Forecasts of machine-to-machine wireless connectivity envision 50 billion, or even one trillion, wirelessly connected devices, at various times over the next 5-10 years. However, these forecasts date back several years, and there’s a perception in some quarters that all is not well in the M2M world.

These were the words that I used to set the scene for a round-table panel discussion at the beginning of this month, at the Harvey Nash offices in high-rise Heron Tower in the City of London. Participants included senior managers from Accenture Mobility, Atholl Consulting, Beecham Research, Eseye, Interskan, Machina Research, Neul, Oracle, Samsung, Telefonica Digital, U-Blox, Vodafone, and Wyless – all attending in a personal capacity. I had the privilege to chair the discussion.

My goal for the discussion was that participants would leave the meeting with clearer ideas and insights about:

  • Obstacles hindering wider adoption of M2M connectivity
  • Potential solutions to these obstacles.

The gathering was organised by Ian Gale, Senior Telecoms Consultant of Harvey Nash. The idea for the event arose in part from reflections from a previous industry round-table that I had also chaired, organised by Cambridge Wireless and Accenture. My online notes on that meeting – about the possible future of the Mobile World Congress (MWC) – included the following thoughts about M2M:

MWC showed a lot of promise for machine-to-machine (M2M) communications and for connected devices (devices that contain communications functionality but which are not phones). But more remains to be done, for this promise to reach its potential.

The GSMA Connected City gathered together a large number of individual demos, but the demos were mainly separated from each other, without there being a clear overall architecture incorporating them all.

Connected car was perhaps the field showing the greatest progress, but even there, practical questions remain – for example, should the car rely on its own connectivity, or instead rely on connectivity of smartphones brought into the car?

For MWC to retain its relevance, it needs to bring M2M and connected devices further to the forefront…

The opening statements from around the table at Harvey Nash expressed similar views about M2M not yet living up to its expected potential. Several of the participants had written reports and/or proposals about machine-to-machine connectivity as much as 10-12 years earlier. It was now time, one panellist suggested, to “move beyond the false starts”.

Not one, but many opportunities

An emerging theme in the discussion was that it distorts perceptions to talk about a single, unified M2M opportunity. Headline figures for envisioned near-future numbers of “connected devices” add to the confusion, since:

  • Devices can actually connect in many different ways
  • The typical data flow can vary widely, between different industries, and different settings
  • Differences in data flow mean that the applicable standards and regulations also vary widely
  • The appropriate business models vary widely too.

A sharp focus on specific industry opportunities is more likely to bring tangible results than a general broad-brush approach to the entire potential space of however many billion devices might become wirelessly connected in the next 3-5 years. One panellist remarked:

Let’s not try to boil the ocean.

And as another participant put it:

A desire for big volume numbers is understandable, but isn’t helpful.

Instead, it would be more helpful to identify different metrics for different M2M opportunities. For example, these metrics would in some cases track credible cost-savings, if various M2M solutions were to be put in place.

Compelling use-cases

To progress the discussion, I asked panellists for their suggestions on compelling use-cases for M2M connectivity. Two of the most interesting answers also happened to be potentially problematic answers:

  • There are many opportunities in healthcare, if people’s physiological and medical data can be automatically communicated to monitoring software; savings include freeing up hospital beds, if patients can be reliably monitored in their own homes, as well as proactively detecting early warning signs of impending health issues
  • There are also many opportunities in automotive, with electronic systems inside modern cars generating huge amounts of data about performance, which can be monitored to identify latent problems, and to improve the algorithms that run inside on-board processors.

However, the fields of healthcare and automotive are, understandably, both heavily regulated. As befits life-and-death issues, these industries are risk-averse, so progress is slow. They are keener to adopt technology systems that have already been well-proven than to carry out bleeding-edge experimentation of their own. Happily, there are other fields which have a lighter regulatory touch:

  • Several electronics companies have plans to wirelessly connect all their consumer devices – such as cameras, TVs, printers, fridges, and dishwashers – so that users can be alerted when preventive maintenance should be scheduled, or when applicable software upgrades are available; a related example is that a printer could automatically order a new ink cartridge when ink levels are running low
  • Dustbins can be equipped with sensors that notify collection companies when they are full enough to warrant a visit to empty them, avoiding unnecessary travel costs
  • Sensors attached to roadway lighting systems can detect approaching vehicles and pedestrians, and can limit the amount of time lights are switched on to the time when there is a person or vehicle in the vicinity
  • Gas pipeline companies can install numerous sensors to monitor flow and any potential leakage
  • Tracking devices can be added to items of equipment to prevent them becoming lost inside busy buildings (such as hospitals).
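Several of these use-cases share one simple pattern: compare a sensor reading against a threshold, and trigger a notification when it is crossed. A minimal sketch of that pattern (the device names and thresholds below are invented for illustration):

```python
# Compare each sensor reading (here: % of capacity remaining) against
# its threshold; readings at or below the threshold trigger an alert,
# e.g. "order a new ink cartridge" or "schedule a bin collection".
def check_sensors(readings, thresholds):
    """Return alert messages for sensors that have crossed their threshold."""
    alerts = []
    for device, level in readings.items():
        limit, action = thresholds[device]
        if level <= limit:
            alerts.append(f"{device}: {action} ({level}% remaining)")
    return alerts

thresholds = {
    "printer-ink": (10, "order replacement cartridge"),
    "bin-42": (15, "schedule collection"),  # 15% free capacity left
}
readings = {"printer-ink": 8, "bin-42": 60}
alerts = check_sensors(readings, thresholds)
# printer-ink triggers an alert; bin-42 still has plenty of capacity
```

The commercial complexity discussed below lies not in this logic, but in connecting, powering, securing, and billing for the millions of devices that would run it.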

Obstacles

It was time to ask the first big question:

What are the obstacles that stand in the way of the realisation of the grander M2M visions?

That question prompted a raft of interesting observations from panellists. Several of the points raised can be illustrated by a comparison with the task of selling smartphones into organisations for use by employees:

  • These devices only add business value if several different parts of the “value chain” are in good working order – not only the device itself, but also the mobile network, the business-specific applications, and connectivity for the mobile devices into the back-end data systems used by business processes in the company
  • All the different parts of the value chain need to be able to make money out of their role in this new transaction
  • To avoid being locked into products from only one supplier, the organisation will wish to see evidence of interoperability with products from different suppliers – in other words, a certain degree of standardisation is needed.

At the same time, there are issues with hardware and network performance:

  • Devices might need to be able to operate with minimal maintenance for several years, and with long-lived batteries
  • Systems need to be immune from tampering or hacking.

Companies and organisations generally need assurance, before making the investments required to adopt M2M technology, that:

  • They have a clear idea of likely ongoing costs – they don’t want to be surprised by needs for additional expenditure, system upgrades, process transformation, repeated re-training of employees, etc
  • They have a clear idea of at least minimal financial benefits arising to them.

Especially in an uncertain financial climate, companies are reluctant to invest money now on the promise of potential savings being realised at some future date. This results in long, slow sales cycles, in which several layers of management need to be convinced that an investment proposal makes sense. For these reasons, panellists listed the following set of obstacles facing M2M adoption:

  • The end-to-end technology story is often too complicated – resulting in what one panellist called “a disconnected value chain”
  • Lack of clarity over business model; price points often seem unattractive
  • Shortage of unambiguous examples of “quick wins” that can drum up more confidence in solutions
  • Lack of agreed standards – made worse by the fact that standardisation processes seem to move so slowly
  • Conflicts of interest among the different kinds of company involved in the extended value chain
  • Apprehension about potential breaches of security or privacy
  • The existing standards are often unsuitable for M2M use cases, having been developed, instead, for voice calls and video connectivity.

Solutions

My next question turned the discussion to a more positive direction:

Based on your understanding of the obstacles, what initiatives would you recommend, over the next 18-24 months, to accelerate the development of one or more M2M solutions?

In light of the earlier observation that M2M brings “not one, but many opportunities”, it’s no surprise that panellists had divergent views on how to proceed and how to prioritise the opportunities. But there were some common thoughts:

  1. We should expect it to take a long time for complete solutions to be established, but we should be able to plan step-by-step improvements
  2. Better “evangelisation” is needed – perhaps a new term to replace “M2M”
  3. There is merit in pooling information and examples that can help people who are writing business cases for adopting M2M solutions in their organisations
  4. There is particular merit in simplifying the M2M value chain and in accelerating the definition and adoption of fit-for-purpose standards
  5. Formal standardisation review processes are obliged to seek to accommodate the conflicting needs of large numbers of different perspectives, but de facto standards can sometimes be established, a lot more quickly, by mechanisms that are more pragmatic and more focused.

To expand on some of these points:

  • One way to see incremental improvements is by finding new business models that work with existing M2M technologies. Another approach is to change the technology, but without disrupting the existing value chains. The more changes that are attempted at the same time, the harder it is to execute everything successfully
  • Rather than expecting large enterprises to lead changes, a lesson can be learned from what has happened with smartphones over the last few years, via “consumer-led IT”: new devices appealed to individuals as consumers, and were then taken into the workforce to be inserted into business processes. One way for M2M solutions to progress to the point where enterprises are forced to take them more seriously is if consumers adopt them first for non-work purposes
  • One key to consumer and developer experimentation is to make it easier for small groups of people to create their own M2M solutions. For example, an expansion in the reach of Embedded Java could enable wider experimentation. The Arduino open-source electronics prototyping platform can play a role here too, as can the Raspberry Pi
  • Weightless.org is an emerging standard in which several of the panellists expressed considerable interest. To quote from the Weightless website:

White space spectrum provides the scope to realise tens of billions of connected devices worldwide overcoming the traditional problems associated with current wireless standards – capacity, cost, power consumption and coverage. The forecasted demand for this connectivity simply cannot be accommodated through existing technologies and this is stifling the potential offered by the machine to machine (M2M) market. In order to reach this potential a new standard is required – and that standard is called Weightless.

Grounds for optimism

As the discussion continued, panellists took the opportunity to highlight areas where they, individually, saw prospects for more rapid progress with M2M solutions:

  • The financial transactions industry is one in which margins are still high; these margins should mean that there is greater possibility for creative experimentation with the adoption of new M2M business models, in areas such as reliable automated authentication for mobile payments
  • The unsustainability of current transport systems, and pressures for greater adoption of new cars with hybrid or purely electric power systems, both provide opportunities to include M2M technology in so-called “intelligent systems”
  • Rapid progress in the adoption of so-called “smart city” technology by cities such as Singapore might provide showcase examples to spur adoption elsewhere in the world, and in new industry areas
  • Progress by weightless.org, which addresses particular M2M use cases, might also serve as a catalyst and inspiration for faster progress in other standards processes.

Some take-aways

To wind up the formal part of our discussion, I asked panellists if they could share any new thoughts that had occurred to them in the course of the preceding 120 minutes of round-table discussion. Here’s some of what I heard:

  • It’s like the early days of the Internet, in which no-one had a really good idea of what would happen next, but where there are clearly plenty of big opportunities ahead
  • There is no “one correct answer”
  • Systems like Arduino will allow young developers to flex their muscles and, no doubt, make lots of mistakes; but a combination of youthful vigour and industry experience (such as represented by the many “grey hairs” around the table) provide good reason for hope
  • We need a better message to evangelise with; “50 billion connected devices” isn’t sufficient
  • Progress will result from people carefully assessing the opportunities and then being bold
  • Progress in this space will involve some “David” entities taking the courage to square up to some of the “Goliaths” who currently have vested interests in the existing technology systems
  • Speeding up time-to-market will require companies to take charge of the entire value chain
  • Enabling consumerisation is key
  • We have a powerful obligation to make the whole solution stack simpler; that was already clear before today, but the discussion has amply reinforced this conclusion.

Next steps

A number of forthcoming open industry events are continuing the public discussion of M2M opportunities.

M2M World

With thanks to…

I’d like to close by expressing my thanks to the hosts of the event, Harvey Nash, and to the panellists who took the time to attend the meeting and freely share their views:

1 April 2012

Why good people are divided by politics and religion

Filed under: books, collaboration, evolution, motivation, passion, politics, psychology, RSA — David Wood @ 10:58 pm

I’ve lost count of the number of people who have thanked me over the years for drawing their attention to the book “The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” written by Jonathan Haidt, Professor of Social Psychology at the University of Virginia. That was a book with far-reaching scope and penetrating insight. Many of the ideas and metaphors in it have since become fundamental building blocks for other writers to use – such as the pithy metaphor of the human mind being divided like a rider on an elephant, with the job of the rider (our stream of conscious reasoning) being to serve the elephant (the other 99% of our mental processes).

This weekend, I’ve been reading Haidt’s new book, “The Righteous Mind: Why Good People Are Divided by Politics and Religion”. It’s a great sequel. Like its predecessor, it ranges across more than 2,400 years of thought, highlighting how recent research in social psychology sheds clear light on age-old questions.

Haidt’s analysis has particular relevance for two deeply contentious sets of debates that each threaten to destabilise and divide contemporary civil society:

  • The “new atheism” critique of the relevance and sanctity of religion in modern life
  • The political fissures that are coming to the fore in the 2012 US election year – fissures I see reflected in messages full of contempt and disdain in the Facebook streams of several generally sensible US-based people I know.

There’s so much in this book that it’s hard to summarise it without doing an injustice to huge chunks of fascinating material:

  • the importance of an empirical approach to understanding human morality – an approach based on observation, rather than on a priori rationality
  • moral intuitions come first, strategic reasoning comes second, to justify the intuitions we have already reached
  • there’s more to morality than concerns over harm and fairness; Haidt memorably says that “the righteous mind is like a tongue with six taste receptors”
  • the limitations of basing research findings mainly on “WEIRD” participants (people who are Western, Educated, Industrialised, Rich, and Democratic)
  • the case for how biological “group selection” helped meld humans (as opposed to natural selection just operating at the level of individual humans)
  • a metaphor that “human beings are 90 percent chimp and 10 percent bee”
  • the case that “The most powerful force ever known on this planet is human cooperation — a force for construction and destruction”
  • methods for flicking a “hive switch” inside human brains that open us up to experiences of self-transcendence (including a discussion of rave parties).

The first chapter of the book is available online – as part of a website dedicated to the book. You can also get a good flavour of some of the ideas in the book from two talks Haidt has given at TED: “Religion, evolution, and the ecstasy of self-transcendence” (watch it full screen to get the full benefits of the video effects):

and (from a few years back – note that Haidt has revised some of his thinking since the date of this talk) “The moral roots of liberals and conservatives”:

Interested to find out more? I strongly recommend that you read the book itself. You may also enjoy watching a wide-ranging hour-long interview between Haidt and Robert Wright – author of Nonzero: The Logic of Human Destiny and The Evolution of God.

Footnote: Haidt is talking at London’s Royal Society of Arts at lunchtime on Tuesday 10th April; you can register to be included on the waiting list in case more tickets become available. The same evening, he’ll be speaking at the Royal Institution; happily, the Royal Institution website says that there is still “good availability” for tickets:

Jonathan Haidt, the highly influential psychologist, is here to show us why we all find it so hard to get along. By examining where morality comes from, and why it is the defining characteristic of humans, Haidt will show why we cannot dismiss the views of others as mere stupidity or moral corruption. Our moral roots run much deeper than we realize. We are hardwired not just to be moral, but moralistic and self-righteous. From advertising to politics, morality influences all aspects of behaviour. It is the key to understanding everybody. It explains why some of us are liberals, others conservatives. It is often the difference between war and peace. It is also why we are the only species that will kill for an ideal.

Haidt argues we are always talking past each other because we are appealing to different moralities: it is not just about justice and fairness – for some people authority, sanctity or loyalty are more important. With new evidence from his own empirical research, Haidt will show it is possible to liberate us from the disputes that divide good people. We can either stick to comforting delusions about others, or learn some moral psychology. His hope is that ultimately we can cooperate with those whose morals differ from our own.

2 October 2011

Prioritising the best peer pressure

Filed under: BHAG, catalysts, collaboration, futurist, Humanity Plus — David Wood @ 9:36 am

In a world awash with conflicting influences and numerous potentially interesting distractions, how best to keep “first things first”?

A big part of the answer is to ensure that the influences closest to us are influences:

  • Whose goals are aligned with our own
  • Who can give us prompt, helpful feedback when we are falling short of our own declared intentions
  • Who can provide us with independent viewpoints that enrich, complement, and challenge our current understanding.

In my own case, that’s the reason why I have been drawn to the community known as “Humanity+”:

Humanity+ is an international nonprofit membership organization which advocates the ethical use of technology to expand human capacities. We support the development of and access to new technologies that enable everyone to enjoy better minds, better bodies and better lives. In other words, we want people to be better than well.

I deeply share the goals of Humanity+, and I find some of the world’s most interesting thinkers within that community.

It’s also the reason I have sought to aid the flourishing of the Humanity+ community, particularly in the UK, by organising a series of speaker meetings in London.  The speakers at these meetings are generally fascinating, but it’s the extended networking that follows (offline and online) which provides the greatest value.

My work life has been very busy in the last few months, leaving me less time to organise regular H+UK meetings.  However, to keep myself grounded in a community that contains many people who can teach me a great deal – a community that can provide powerful positive peer pressure – I’ve worked with some H+UK colleagues to pull together an all-day meeting taking place on the Saturday at the end of this week (8th October).

The theme of this meeting is “Beyond Human: Rethinking the Technological Extension of the Human Condition”.  It splits into three parts:

  • Beyond human: The science and engineering
  • Beyond human: Implications and controversies
  • Beyond human: Getting involved

The event is free to attend.  There’s no need to register in advance. The meeting is taking place in lecture room B34 in the Malet Street building (the main building) of Birkbeck College.  This is located in Torrington Square (which is a pedestrian-only square), London WC1E 7HX.

Full details are on the official event website.  In this blogpost, to give a flavour of what will be covered, I’ll just list the agenda with the speakers and panellists.

09.30 – Finding the room, networking
Opening remarks
Beyond human: The science and engineering
11.40 – Audience Q&A with the panel consisting of the above four speakers
Lunch break
12.00 – People make their own arrangements for lunch (there are some suggestions on the event website)
Beyond human: Implications and controversies
14.40 – Audience Q&A with the panel consisting of the above four speakers
Extended DIY coffee break
15.00 – Also a chance for extended networking
Beyond human: Getting involved
17.25 – Audience Q&A with the panel consisting of the above four speakers
End of conference
17.45 – Hard stop – the room needs to be empty by 18.00

You can follow the links to find out more information about each speaker. You’ll see that several are eminent university professors. Several have written key articles or books on the theme of technology that significantly enhances human potential. Some complement their technology savvy with an interest in performance art.  All are distinguished and interesting futurists in their own way.

I don’t expect I’ll agree with everything that’s said, but I do expect that great personal links will be made – and strengthened – during the course of the day.  I also expect that some of the ideas shared at the conference – some of the big, hairy, audacious goals unveiled – will take on a major life of their own, travelling around the world, offline and online, catalysing very significant positive change.

29 August 2010

Understanding humans better by understanding evolution better

Filed under: collaboration, deception, evolution, RSA, UKH+ — David Wood @ 5:54 am

Many aspects of human life that at first seem weird and hard to explain can make a lot more sense once you see them from the viewpoint of evolution.

It was Richard Dawkins’ book “The Selfish Gene” which first led me to that conclusion, whilst I was still at university.  After “The Selfish Gene”, I read “Sociobiology: the new synthesis“, by E.O. Wilson, which gave other examples.  I realised it was no longer necessary to refer to concepts such as “innate wickedness” or “original sin” to explain why people often did daft things.  Instead, people do things because (in part) of underlying behavioural patterns which tended to make their ancestors more likely to leave successful offspring.

In short, you can deepen your understanding of humans if you understand evolution.  On the whole, attempts to get humans to change their behaviour will be more likely to succeed if they are grounded in an understanding of the real factors that led humans to tend to behave as they do.

What’s more, you can understand humans better if you understand evolution better.

In a moment, I’ll come to some interesting new ideas about the role played by technology in evolution.  But first, I’ll mention two other ways in which an improved understanding of evolution sheds richer light on the human condition.

1. Evolution often results in sub-optimal solutions

In places where an intelligent (e.g. human) designer would “go back to the drawing board” and introduce a new design template, biological evolution has been constrained to keep working with the materials that are already in play.  Biological evolution lacks true foresight, and cannot do what human designers would call “re-factoring an existing design”.

I’ve written on this subject before, in my review “The human mind as a flawed creation of nature” of the book by Gary Marcus, “Kluge – the haphazard construction of the human mind” – so I won’t say much more about that particular topic right now.  But I can’t resist including a link to a fascinating video in which Richard Dawkins demonstrates the absurdly non-optimal route taken by the laryngeal nerve of the giraffe.  As Dawkins says in the video, this nerve “is a beautiful example of historical legacy, as opposed to design”.  If you haven’t seen this clip before, it’s well worth watching, and thinking about the implications.

2. Evolution can operate at multiple levels

For a full understanding of evolution, you have to realise it can operate at multiple levels:

  • At the level of individual genes
  • At the level of individual organisms
  • At the level of groups of cooperating organisms.

At each level, there are behaviours which exist because they made it more likely for an entity (at that level) to leave descendants.  For example, groups of animals tend to survive as a group, if individuals within that group are willing, from time to time, to sacrifice themselves for the sake of the group.

The notion of group selection is, however, controversial among evolutionary theorists.  Part of the merit of books such as The Selfish Gene was that it showed how altruistic behaviour could be explained, in at least some circumstances, by looking at matters from the point of view of the survival of individual genes.  If individual A sacrifices himself for the sake of individuals B and C within the same group, it may well be that B and C carry many of the same genes as individual A.  This analysis seems to deal with the major theoretical obstacle to the idea of group selection, which is as follows:

  • If individuals A1, A2, A3,… all have an instinct to sacrifice themselves for the sake of their wider group, it may well mean, other things being equal, that this group is initially more resilient than competing groups
  • However, an individual A4 who is individually selfish, within that group, will get the benefit of the success of the group, and the benefit of individual survival
  • So, over time, the group will tend to contain more individuals like the “free-rider” A4, and fewer like A1, A2, and A3
  • Therefore the group will degenerate into selfish behaviour … and this shows that the notion of “group selection” is flawed.

Nevertheless, I’ve been persuaded by writer David Sloan Wilson that the notion of group selection can still apply.  He gives an easy-to-read account of his ideas in his wide-ranging book “Evolution for Everyone: How Darwin’s Theory Can Change the Way We Think About Our Lives”.  In summary:

  • Group selection can apply, provided the group also has mechanisms to reduce free-riding behaviour by individuals
  • For example, people in the group might have strong instincts to condemn and punish people who try to take excess advantage of the generosity of others
  • So long as these mechanisms keep the prevalence of free-riding below a certain threshold, a group can reach a stable situation in which the altruism of the majority continues to benefit the group as a whole.
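Wilson’s threshold argument can be illustrated with a toy replicator-dynamics model (my own sketch, not taken from his book): altruists pay a cost, everyone shares a group benefit proportional to the fraction of altruists, and free-riders face an expected punishment from the rest of the group.  When the punishment exceeds the cost of altruism, the altruistic majority is stable; when punishment is absent, free-riders like A4 take over, exactly as in the objection above.

```python
def step(x, b=3.0, c=1.0, p=0.0):
    """One round of replicator dynamics for the altruist share x.
    Everyone gains a group benefit b*x; altruists also pay cost c;
    free-riders instead face an expected punishment p from the group."""
    w_alt = b * x - c                      # altruist payoff
    w_free = b * x - p                     # free-rider payoff
    w_bar = x * w_alt + (1 - x) * w_free   # average payoff in the group
    base = 2.0 + max(c, p)                 # offset keeping fitnesses positive
    return x * (base + w_alt) / (base + w_bar)

def final_share(p, x=0.9, rounds=200):
    """Altruist share after many rounds, starting from a 90% majority."""
    for _ in range(rounds):
        x = step(x, p=p)
    return x

# No punishment: free-riding pays better every round, so altruism collapses.
print(final_share(p=0.0))   # close to 0
# Punishment above the cost of altruism (p > c): the majority is stable.
print(final_share(p=1.5))   # close to 1
```

The parameter values here are arbitrary; the qualitative point is only that a punishment mechanism flips which strategy has the higher payoff, which is the threshold Wilson describes.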

(To be clear: this kind of altruism generally looks favourably only at others within the same group.  People who are outside your group won’t benefit from it.  An injunction such as “love your neighbour as yourself” applied in practice only to people within your group – not to people outside it.)

To my mind, this makes sense of a great deal of the mental gymnastics that we can observe: people combine elements of surreptitiously trying to benefit themselves (and their own families) whilst seeking to appear to the group as a whole as “good citizens”.  In turn, we are adept at seeing duplicity and hypocrisy in others.  There’s been a long “arms race” in which brains have been selected that are better at playing both sides of this game.

Incidentally, for another book that takes an entertaining and audacious “big picture” view of evolution and group selection, see the barn-storming “The Lucifer Principle: A Scientific Expedition into the Forces of History” by Howard Bloom.

3. The role of technology in evolution

At first sight, technology has little to do with evolution.  Evolution occurred in bygone times, whilst technology is a modern development – right?

Not true. First, evolution is very much a present-day phenomenon (as well as something that has been at work throughout the whole history of life).  Diseases evolve rapidly, under pressures of different regimes of anti-bacterial cocktails.  And there is evidence that biological evolution still occurs for humans.  A 2009 article in Time magazine was entitled “Darwin Lives! Modern Humans Are Still Evolving”.  Here’s a brief extract:

One study, published in PNAS in 2007 and led by John Hawks, an anthropologist at the University of Wisconsin at Madison, found that some 1,800 human gene variations had become widespread in recent generations because of their modern-day evolutionary benefits. Among those genetic changes, discovered by examining more than 3 million DNA variants in 269 individuals: mutations that allow people to digest milk or resist malaria and others that govern brain development.

Second, technology is itself an ancient phenomenon – including creative use of sticks and stones.  Benefits of very early human use of sticks and stones included fire, weapons, and clothing.  What’s more, the advantages of use of tools allowed a strange side-effect in human genetic evolution: as we became technologically stronger, we also became biologically weaker.  The Time magazine article mentioned above goes on to state the following:

According to anthropologist Peter McAllister, author of “Manthropology: the Science of Inadequate Modern Man”, the contemporary male has evolved, at least physically, into “the sorriest cohort of masculine Homo sapiens to ever walk the planet.” Thanks to genetic differences, an average Neanderthal woman, McAllister notes, could have whupped Arnold Schwarzenegger at his muscular peak in an arm-wrestling match. And prehistoric Australian Aborigines, who typically built up great strength in their joints and muscles through childhood and adolescence, could have easily beat Usain Bolt in a 100-m dash.

Timothy Taylor, Reader in Archaeology at the University of Bradford and editor-in-chief of the Journal of World Prehistory, tackles this same topic in his recent book “The Artificial Ape: How Technology Changed the Course of Human Evolution”.

Amazon.com describes this book as follows:

A breakthrough theory that tools and technology are the real drivers of human evolution.

Although humans are one of the great apes, along with chimpanzees, gorillas, and orangutans, we are remarkably different from them. Unlike our cousins who subsist on raw food, spend their days and nights outdoors, and wear a thick coat of hair, humans are entirely dependent on artificial things, such as clothing, shelter, and the use of tools, and would die in nature without them. Yet, despite our status as the weakest ape, we are the masters of this planet. Given these inherent deficits, how did humans come out on top?

In this fascinating new account of our origins, leading archaeologist Timothy Taylor proposes a new way of thinking about human evolution through our relationship with objects. Drawing on the latest fossil evidence, Taylor argues that at each step of our species’ development, humans made choices that caused us to assume greater control of our evolution. Our appropriation of objects allowed us to walk upright, lose our body hair, and grow significantly larger brains. As we push the frontiers of scientific technology, creating prosthetics, intelligent implants, and artificially modified genes, we continue a process that started in the prehistoric past, when we first began to extend our powers through objects.

Weaving together lively discussions of major discoveries of human skeletons and artifacts with a reexamination of Darwin’s theory of evolution, Taylor takes us on an exciting and challenging journey that begins to answer the fundamental question about our existence: what makes humans unique, and what does that mean for our future?

In an interview in the New Scientist, Timothy Taylor gives more details of his ideas:

Upright female hominins walking the savannah had a real problem: their babies couldn’t cling to them the way a chimp baby could cling to its mother. Carrying an infant would have been the highest drain on energy for a hominin female – higher than lactation. So what did they do? I believe they figured out how to carry their newborns using a loop of animal tissue. Evidence of the slings hasn’t survived, but in the same way that we infer lungs and organs from the bones of fossils that survive, it is from the stone tools that we can infer the bits that don’t last: things made from sinew, wood, leather and grasses…

Once you have slings to carry babies, you have broken a glass ceiling – it doesn’t matter whether the infant is helpless for a day, a month or a year. You can have ever more helpless young and that, as far as I can see, is how encephalisation took place in the genus Homo. We used technology to turn ourselves into kangaroos. Our children are born more and more underdeveloped because they can continue to develop outside the womb – they become an extra-uterine fetus in the sling. This means their heads can continue to grow after birth, solving the smart biped paradox. In that sense technology comes before the ascent to Homo. Our brain expansion only really took off half a million years after the first stone tools. And they continued to develop within an increasingly technological environment…

I’ve ordered Taylor’s book from Amazon and I expect it to be waiting for me at my home in the UK once I return from my current trip in Asia.  I’m also looking forward to hosting a discussion meeting on Saturday 11th Sept under the auspices of Humanity+ UK in London, where Timothy Taylor himself will be the main speaker. People on Facebook can register their interest in this meeting by RSVPing here.  There’s no charge to attend.

Another option to see Timothy Taylor lecture in person – for those able to spare time in the middle of the day on a Thursday (9th Sept) – will be at the RSA.  I expect there will be good discussion at both events, but the session at H+UK is longer (two hours, as opposed to just one at the RSA), and I expect more questions there about matters such as the likely role of technology radically re-shaping the future development of humans.

Footnote: of course, the fact that evolution guided our ancestors to behave in certain ways is no reason for us to want to continue to behave in these ways.  But understanding the former is, in my view, very useful background knowledge for being able to devise practical measures to change ourselves.
