dw2

28 July 2012

More for us all, or more for them?

Filed under: books, disruption, Economics, futurist, Humanity Plus, politics, UKH+ — David Wood @ 1:15 pm

The opening keynote speaker at this weekend’s World Future 2012 conference was Lee Rainie, the Director of the Pew Research Center’s Internet & American Life Project. Lee’s topic, “Future of the Internet”, was described as follows in the conference agenda:

In this keynote presentation based on his latest book, Networked: The New Social Operating System (co-authored with Barry Wellman), Dr. Rainie will discuss the findings of the most recent expert surveys on the future of teens’ brains, the future of universities, the future of money, the impact of Big Data, the battle between apps and the Web, the spread of gamefication, and the impact of smart systems on consumers.

That was a lot to cover in 45 minutes, but Lee said he would speak fast – and he did!

Analysis of the Pew Internet expert surveys Lee mentions is available online at http://www.elon.edu/e-web/predictions/expertsurveys/2012survey/ – where you’ll find a wealth of fascinating material.

But Lee’s summary of the summary (if I can put it like that) is that there are two potential pathways ahead, between now and 2020, regarding what happens with Internet technology:

  1. In one pathway, we all benefit: the fruits of improving Internet technologies are widely shared
  2. In another pathway, the benefits are much more restricted, to “them”.

Hence the question: “More for us all, or more for them?”

“Them” could be political leaders, or it could be corporate leaders.

Listening to Lee’s words, I was struck by a powerful resonance with the main theme of a BIG book on history that I’m in the process of reading. (Actually, I’m listening to an Audible version of it on my iPod.)

The book is “Why Nations Fail: The Origins of Power, Prosperity, and Poverty”, by Daron Acemoglu and James Robinson. It has a marvellous sweep through events in all eras of human history and in all corners of the globe.

I’m only six hours into what will be a 15-hour listen, but I already suspect this to be the most important book about history that I’ve ever read. (And as regular readers of this blog know, I read a lot.)

It’s not just “one thing after another” (fascinating though that kind of history book can be), but a profound analysis of the causes of divergence between societies with prosperity and societies with poverty.

In brief, the primary differentiator is the kind of political institutions that exist:

  • Are they extractive, in which the outputs of society are siphoned off by a relatively small elite?
  • Or are they inclusive, with a much greater sharing of power, influence, and potential benefit?

The nature of political institutions in turn influences the nature and operation of economic institutions.

The book has many striking examples of how ruling elites blocked the development or the wider application of technology and/or market reforms, fearing the “creative destruction” which would be likely to follow – threatening their grip on power.

One example in the book is the story of the reaction of the Roman emperor Tiberius to the invention of unbreakable glass. This story is also told by, for example, Computer World blogger John Riley, in his article “Why innovation is not always welcomed with open arms“:

The story of the Roman inventor of flexible glass 2000 years ago is a salutary lesson for all innovators, especially those within organisations.

As Isadore of Seville tells it, the inventor went to the Roman Emperor Tiberius (14-37 AD) with a drinking bowl made of this flexible and ductile glass, and threw it on the ground to demonstrate that it didn’t shatter.

Tiberius asked him if anyone else knew about the invention. The inventor said he’d told no-one.

And he was instantly beheaded.

What the unfortunate inventor hadn’t seen was the big picture – Tiberius had instantly realised that cheap, easily produced flexible glass that didn’t break would wreck the Imperial monopoly on gold and silver!

That, sadly, is too often the case in big companies, where someone comes up with a bright idea which in practice means interfering with a profitable short term operation or disintermediating a product line.

In such cases innovators these days don’t get killed – they get ignored, sidelined, blocked or gagged under non-disclosures. Many innovative products have been rejected by large suppliers controlling projects when they threaten monopolistic inefficiencies. There are many cases of other innovations being bought up and mothballed…

As Acemoglu and Robinson point out, in a different political climate, the inventor could have taken his invention to market without the explicit knowledge and permission of the ruling elite (such as the emperor). That would be a world in which, to return to my opening question, there would more likely be technology benefits for everyone, rather than control of technology being subordinated to the benefits of a clique.

But our current political climate is highly troubled. My friend and Humanity+ UK co-organiser Amon Kalkin describes the situation like this:

We are in a time of crisis. Large numbers of people are increasingly disenfranchised, squeezed on all sides and with no hope of appeal to authorities. Why? Because those very same authorities – our governments – are virtually indistinguishable from the corporate interests who are gaining most from the current situation. We live under a system where our votes essentially don’t matter. You can pick a team, but you’re not allowed to change the rules of the game. Even worse, we have been trained to think of this as a normal and natural situation. Who are we to question these powerful people? Who are we to awaken, to unify and demand change?

Rather than just considering the topic “Why Nations Fail“, we might well consider “Why Transnational Institutions Fail” – referring to, for example, the evident problems within the Eurozone and within the United Nations.

The same imperative applies: we need to find a mode of collaboration that avoids being subverted by the special interests of ruling minorities – whether these minorities be economic elites or political elites.

That imperative has led Amon to found the Zero State movement:

Zero State is a movement for positive social change through technology.

We’re a grass-roots world community pursuing smart, compassionate solutions to problems, and improving the human condition.

Personal transformative technologies we pursue include life extension and Artificial Intelligence. Social projects include accelerating change, basic income, Meshnet and Bitcoin, while lifestyle initiatives explore areas such as the arts, spirituality, fashion and culture.

More recently, Amon and various other Zero State members are launching a political party, “Consensus“, to promote their Zero State vision. My quote above (“We are in a time of crisis…”) comes from Amon’s description of what he plans to say at the launch meeting for Consensus that will take place next weekend in London.

For more details about this launch event, see this announcement on the London Futurist meetup site:

The case for a Zero State political movement

Many futurists envisage a better, more compassionate society, organized in terms of using technological developments to maximize well-being rather than simply concentrating resources in a few ultra-rich hands and leaving everybody else increasingly worse off. But all too often, futurists talk about positive outcomes as if such things come without work or struggle. They are apparently oblivious to the fact that right now our society is stalling, strangled by a tiny proportion of citizens who do not share our values.

There have been few moments in the history of our society like the one facing us now, where deep crisis also offers the opportunity for deep, positive change. It’s the time for futurists to step up to actively guide our society toward the better futures we envisage.

The CONSENSUS is a newly-formed UK-based political party which seeks to harness the intelligence and compassion to be found among futurists and other subcultures to the design of a real, improved future, for us and our children. Drawing inspiration from the eight principles of the Zero State movement, the CONSENSUS will encourage people to think about the possibilities open to society once again.

The CONSENSUS is the first party of its kind. We intend to reach out to other parties and groups who share similar views and goals. If we are successful, there will soon be a number of CONSENSUS parties at the national level around the world, all part of an organization known as the Consensus of Democratic Futurist Parties (CDFP). The UK CONSENSUS is already affiliated with an international futurist organization in the Zero State movement, and the seeds of multiple local political parties have been sown.

We welcome the opportunity to hear your views about the state of society today and its future, and what are the issues and goals that we should focus on. No matter what your own views are, this is your chance to have your say, and to have it influence a concrete course of action.

If you care about our future, and the possibility of finding intelligent, compassionate solutions to our problems, then we encourage you to come to this meeting to:

  • find out how you can help
  • join the conversation
  • offer your views on how we should move forward…

16 June 2012

Beyond future shock

Filed under: alienation, books, change, chaos, futurist, Humanity Plus, rejuveneering, robots, Singularity, UKH+ — David Wood @ 3:10 pm

They predicted the “electronic frontier” of the Internet, Prozac, YouTube, cloning, home-schooling, the self-induced paralysis of too many choices, instant celebrities, and the end of blue-collar manufacturing. Not bad for 1970.

That’s the summary, with the benefit of four decades of hindsight, given by Fast Company writer Greg Lindsay, of the forecasts made in the 1970 bestseller “Future Shock” by husband-and-wife authors Alvin and Heidi Toffler.

As Lindsay comments,

Published in 1970, Future Shock made its author Alvin Toffler – a former student radical, welder, newspaper reporter and Fortune editor – a household name. Written with his wife (and uncredited co-author), Heidi Toffler, the book was The World Is Flat of its day, selling 6 million copies and single-handedly inventing futurism…

“Future shock is the shattering stress and disorientation that we induce in individuals by subjecting them to too much change in too short a time”, the pair wrote.

And quoting Deborah Westphal, the managing partner of Toffler Associates, in an interview at an event marking the 40th anniversary of the publication of Future Shock, Lindsay notes the following:

In Future Shock, the Tofflers hammered home the point that technology, culture, and even life itself was evolving too fast for governments, policy-makers and regulators to keep up. Forty years on, that message hasn’t changed. “The government needs to understand the dependencies and the convergence of networks through information,” says Westphal. “And there still needs to be some studies done around rates of change and the synchronization of these systems. Business, government, and organizational structures need to be looked at and redone. We’ve built much of the world economy on an industrial model, and that model doesn’t work in an information-centric society. That’s probably the greatest challenge we still face – understanding the old rules don’t apply for the future.”

Earlier this week, another book was published that also draws on Future Shock for inspiration.  Again, the authors are a husband-and-wife team, Parag and Ayesha Khanna.  And again, the book looks set to redefine key aspects of the futurist endeavour.

This new book is entitled “Hybrid Reality: Thriving in the Emerging Human-Technology Civilization“.  The Khannas refer early on to the insights expressed by the Tofflers in Future Shock:

The Tofflers’ most fundamental insight was that the pace of change has become as important as the content of change… The term Future Shock was thus meant to capture our intense anxiety in the face of technology’s seeming ability to accelerate time. In this sense, technology’s true impact isn’t just physical or economic, but social and psychological as well.

One simple but important example follows:

Technologies such as mobile phones can make us feel empowered, but also make us vulnerable to new pathologies like nomophobia – the fear of being away from one’s mobile phone. Fifty-eight percent of millennials would rather give up their sense of smell than their mobile phone.

As befits the theme of speed, the book is a fast read. I downloaded it onto my Kindle on the day of its publication, and have already read it all the way through twice. It’s short, but condensed. The text contains many striking turns of phrase, loaded with several layers of meaning, which repay several rethinks. That’s the best kind of sound-bite.

Despite its short length, there are too many big themes in the book for me to properly summarise them here. The book portrays an optimistic vision, alongside a series of challenges and risks. As illustrations, let me pick out a selection of phrases, to convey some of the flavour:

The cross-pollination of leading-edge sectors such as information technology, biotechnology, pervasive computing, robotics, neuroscience, and nanotechnology spells the end of certain turf wars over nomenclature. It is neither the “Bio Age” nor the “Nano Age” nor the “Neuro Age”, but the hybrid of all of these at the same time…

Our own relationship to technology is moving beyond the instrumental to the existential. There is an accelerating centripetal dance between what technologies are doing outside us and inside us. Externally, technology no longer simply processes our instructions on a one-way street. Instead, it increasingly provides intelligent feedback. Internally, we are moving beyond using technology only to dominate nature towards making ourselves the template for technology, integrating technologies within ourselves physically. We don’t just use technology; we absorb it

The Hybrid Age is the transition period between the Information Age and the moment of Singularity (when machines surpass human intelligence) that inventor Ray Kurzweil estimates we may reach by 2040 (perhaps sooner). The Hybrid Age is a liminal phase in which we cross the threshold toward a new mode of arranging global society…

You may continue to live your life without understanding the implications of the still-distant Singularity, but you should not underestimate how quickly we are accelerating into the Hybrid Age – nor delay in managing this transition yourself

The dominant paradigm to explain global change in the Hybrid Age will be geotechnology. Technology’s role in shaping and reshaping the prevailing order, and accelerating change between orders, forces us to rethink the intellectual hegemony of geopolitics and geoeconomics…

It is geotechnology that is the underlying driver of both: Mastery in the leading technology sectors of any era determines who leads in geoeconomics and dominates in geopolitics…

The shift towards a geotechnology paradigm forces us to jettison centuries of foundational assumptions of geopolitics. The first is our view on scale: “Bigger is better” is no longer necessarily true. Size can be as much a liability as an asset…

We live and die by our Technik, the capacity to harness emerging technologies to improve our circumstances…

We will increasingly differentiate societies on the basis not of their regime type or income, but of their capacity to harness technology. Societies that continuously upgrade their Technik will thrive…

Meeting the grand challenge of improving equity on a crowded planet requires spreading Technik more than it requires spreading democracy

And there’s lots more, applying the above themes to education, healthcare, “better than new” prosthetics, longevity and rejuvenation, 3D printing, digital currencies, personal entrepreneurship and workforce transformation, the diffusion of authority, the rise of smart cities and their empowered “city-zens”, augmented reality and enhanced personal avatars, robots and “avoiding robopocalypse”, and the prospect for a forthcoming “Pax Technologica”.

It makes me breathless just remembering all these themes – and how they time and again circle back on each other.

Footnote: Readers who are in the vicinity of London next Saturday (23rd June) are encouraged to attend the London Futurist / Humanity+ UK event “Hybrid Reality, with Ayesha Khanna”. Click on the links for more information.

3 June 2012

Super-technology and a possible renaissance of religion

Filed under: death, disruption, Humanity Plus, rejuveneering, religion, Singularity, UKH+ — David Wood @ 11:02 pm

“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke

Imagine that the human race avoids self-destruction and continues on the path of increased mastery of technology. Imagine that, as seems credible some time in the future, humans will eventually gain the ability to keep everyone alive indefinitely, in an environment of great abundance, variety, and  intrinsic interest.

That paradise may be a fine outcome for our descendants, but unless the pace of technology improvement becomes remarkably rapid, it seems to have little direct impact on our own lives. Or does it?

It may depend on exactly how much power our god-like descendants eventually acquire.  For example, here are two of the points from a radical vision of the future known as the Ten cosmist convictions:

  • 5) We will develop spacetime engineering and scientific “future magic” much beyond our current understanding and imagination.
  • 6) Spacetime engineering and future magic will permit achieving, by scientific means, most of the promises of religions — and many amazing things that no human religion ever dreamed. Eventually we will be able to resurrect the dead by “copying them to the future”.

Whoa! “Resurrect the dead”, by “copying them to the future”. How might that work?

In part, by collecting enormous amounts of data about the past – reconstructing information from numerous sources. It’s similar to collecting data about far-distant stars using a very large array of radio telescopes. And in part, by re-embodying that data in a new environment, similar to copying running software onto a new computer, giving it a new lease of life.

Lots of questions can be asked about the details:

  • Can sufficient data really be gathered in the future, in the face of all the degradation commonly called “the second law of thermodynamics”, that would allow a sufficiently high-fidelity version of me (or anyone else) to be re-created?
  • If a future super-human collected lots of data about me and managed to get an embodiment of that data running on some future super-computer, would that really amount to resurrecting me, as opposed to creating a copy of me?

I don’t think anyone can be confident about answers to such questions. But it’s at least conceivable that remarkably advanced technology of the future may allow positive answers.

In other words, it’s at least conceivable that our descendants will have the god-like ability to recreate us in the future, giving us an unexpected prospect for immortality.

This makes sense of the remark by radical futurist and singularitarian Ray Kurzweil at the end of the film “Transcendent Man“:

Does God exist? Well I would say, not yet

Other radical futurists quibble over the “not yet” caveat. In his recent essay “Yes, I am a believer“, Giulio Prisco takes the discussion one stage further:

Gods will exist in the future, and they may be able to affect their past — our present — by means of spacetime engineering. Probably other civilizations out there already attained God-like powers.

Giulio notes that even the celebrated critic of theism, Richard Dawkins, gives some support to this line of thinking.  For example, here’s an excerpt from a 2011 New York Times interview, in which Dawkins discusses an essay written by theoretical physicist Freeman Dyson:

In one essay, Professor Dyson casts millions of speculative years into the future. Our galaxy is dying and humans have evolved into something like bolts of superpowerful intelligent and moral energy.

Doesn’t that description sound an awful lot like God?

“Certainly,” Professor Dawkins replies. “It’s highly plausible that in the universe there are God-like creatures.”

He raises his hand, just in case a reader thinks he’s gone around a religious bend. “It’s very important to understand that these Gods came into being by an explicable scientific progression of incremental evolution.”

Could they be immortal? The professor shrugs.

“Probably not.” He smiles and adds, “But I wouldn’t want to be too dogmatic about that.”

As Giulio points out, Dawkins develops a similar line of argument in part of his book “The God Delusion”:

Whether we ever get to know them or not, there are very probably alien civilizations that are superhuman, to the point of being god-like in ways that exceed anything a theologian could possibly imagine. Their technical achievements would seem as supernatural to us as ours would seem to a Dark Age peasant transported to the twenty-first century…

In what sense, then, would the most advanced SETI aliens not be gods? In what sense would they be superhuman but not supernatural? In a very important sense, which goes to the heart of this book. The crucial difference between gods and god-like extraterrestrials lies not in their properties but in their provenance. Entities that are complex enough to be intelligent are products of an evolutionary process. No matter how god-like they may seem when we encounter them, they didn’t start that way…

Giulio seems more interested in the properties than the provenance. The fact that these entities have god-like powers prompts him to proclaim “Yes, I am a believer“.  He gives another reason in support of that proclamation: In contrast to the views of so-called militant atheists, Giulio is “persuaded that religion can be a powerful and positive force”.

Giulio sees this “powerful and positive force” as applying to him personally as well as to groups in general:

“In my beliefs I find hope, happiness, meaning, the strength to get through the night, and a powerful sense of wonder at our future adventures out there in the universe, which gives me also the drive to try to be a better person here-and-now on this little planet and make it a little better for future generations”.

More controversially, Giulio has taken to describing himself (e.g. on his Facebook page) as a “Christian”. Referring back to his essay, and to the ensuing online discussion:

Religion can, and should, be based on mutual tolerance, love and compassion. Jesus said: “love thy neighbor as thyself,” and added: “let he who is without sin, cast the first stone”…

This is the important part of his teachings in my opinion. Christian theology is interesting, but I think it should be reformulated for our times…

Was Jesus the Son of God? I don’t think this is a central issue. He certainly was, in the sense that we all are, and he may have been one of those persons in tune with the universe, more in tune with the universe than the rest of us, able to glimpse at veiled realities beyond our senses.

I’ve known Giulio for several years, from various Humanity+ and Singularity meetings we’ve both attended – dating back to “Transvision 2006” in Helsinki. I respect him as a very capable thinker, and I take his views seriously. His recent “Yes, I am a believer” article has stirred up a hornets’ nest of online criticism.

Accordingly, I was very pleased that Giulio accepted my invitation to come to London to speak at a London Futurist / Humanity+ UK meeting on Saturday 14th July: “Transhumanist Religions 2.0: New Cosmist religion and spirituality for our boundless future (and our troubled present)”. For all kinds of reasons, this discussion deserves a wider airing.

First, I share the view that religious sentiments can provide cohesion and energy to propel individuals and groups to undertake enormously difficult projects (such as the project to avoid the self-destruction of the human race, or any drastic decline in the quality of global civilisation).  The best analysis I’ve read of this point is in the book “Darwin’s Cathedral: Evolution, Religion, and the Nature of Society” by David Sloan Wilson.  As I’ve written previously:

This book has sweeping scope, but makes its case very well.  The case is that religion has in general survived inasmuch as it helped groups of people to achieve greater cohesion and thereby acquire greater fitness compared to other groups of people.  This kind of religion has practical effect, independent of whether or not its belief system corresponds to factual reality.  (It can hardly be denied that, in most cases, the belief system does not correspond to factual reality.)

The book has some great examples – from the religions in hunter-gatherer societies, which contain a powerful emphasis on sharing out scarce resources completely equitably, through examples of religions in more complex societies.  The chapter on John Calvin was eye-opening (describing how his belief system brought stability and prosperity to Geneva) – as were the sections on the comparative evolutionary successes of Judaism and early Christianity.  But perhaps the section on the Balinese water-irrigation religion is the most fascinating of the lot.

Of course, there are some other theories for why religion exists (and is so widespread), and this book gives credit to these theories in appropriate places.  However, this pro-group selection explanation has never before been set out so carefully and credibly, and I think it’s no longer possible to deny that it plays a key role.

The discussion makes it crystal clear why many religious groups tend to treat outsiders so badly (despite treating insiders so well).  It also provides a fascinating perspective on the whole topic of “forgiveness”.  Finally, the central theme of “group selection” is given a convincing defence.

But second, there’s no doubt that religion can fit blinkers over people’s thinking abilities, and prevent them from weighing up arguments dispassionately. Whenever people talk about the Singularity movement as having the shape of a religion – with Ray Kurzweil as a kind of infallible prophet – I shudder. But we needn’t lurch to that extreme. We should be able to maintain the discipline of rigorous independent thinking within a technologically-informed renaissance of positive religious sentiment.

Third, if the universe really does have beings with God-like powers, what attitude should we adopt towards these beings? Should we be seeking in some way to worship them, or placate them, or influence them? It depends on whether these beings are able to influence human history, here and now, or whether they are instead restricted (by raw facts of space and time that even God-like beings have to respect) to observing us and (possibly) copying us into the future.

Personally my bet is on the latter choice. For example, I’m not convinced by people who claim evidence to the contrary. And if these beings did have the ability to intervene in human history, but have failed to do so, it would be evidence of them having scant interest in widespread intense human suffering. They would hardly be super-beings.

In that case, the focus of our effort should remain squarely on building the right conditions for super-technology to benefit humanity as a whole (this is the project I call “Inner Humanity+“), rather than on somehow seeking to attract the future attention of these God-like beings. But no doubt others will have different views!

2 October 2011

Prioritising the best peer pressure

Filed under: BHAG, catalysts, collaboration, futurist, Humanity Plus — David Wood @ 9:36 am

In a world awash with conflicting influences and numerous potential interesting distractions, how best to keep “first things first“?

A big part of the answer is to ensure that the influences closest to us are influences:

  • Whose goals are aligned with our own
  • Who can give us prompt, helpful feedback when we are falling short of our own declared intentions
  • Who can provide us with independent viewpoints that enrich, complement, and challenge our current understanding.

In my own case, that’s the reason why I have been drawn to the community known as “Humanity+“:

Humanity+ is an international nonprofit membership organization which advocates the ethical use of technology to expand human capacities. We support the development of and access to new technologies that enable everyone to enjoy better minds, better bodies and better lives. In other words, we want people to be better than well.

I deeply share the goals of Humanity+, and I find some of the world’s most interesting thinkers within that community.

It’s also the reason I have sought to aid the flourishing of the Humanity+ community, particularly in the UK, by organising a series of speaker meetings in London.  The speakers at these meetings are generally fascinating, but it’s the extended networking that follows (offline and online) which provides the greatest value.

My work life has been very busy in the last few months, leaving me less time to organise regular H+UK meetings.  However, to keep myself grounded in a community that contains many people who can teach me a great deal – a community that can provide powerful positive peer pressure – I’ve worked with some H+UK colleagues to pull together an all-day meeting that is taking place on the Saturday at the end of this week (8th October).

The theme of this meeting is “Beyond Human: Rethinking the Technological Extension of the Human Condition“.  It splits into three parts:

  • Beyond human: The science and engineering
  • Beyond human: Implications and controversies
  • Beyond human: Getting involved

The event is free to attend.  There’s no need to register in advance. The meeting is taking place in lecture room B34 in the Malet Street building (the main building) of Birkbeck College.  This is located in Torrington Square (which is a pedestrian-only square), London WC1E 7HX.

Full details are on the official event website.  In this blogpost, to give a flavour of what will be covered, I’ll just list the agenda with the speakers and panellists.

09.30 – Finding the room, networking
Opening remarks
Beyond human: The science and engineering
11.40 – Audience Q&A with the panel consisting of the above four speakers
Lunch break
12.00 – People make their own arrangements for lunch (there are some suggestions on the event website)
Beyond human: Implications and controversies
14.40 – Audience Q&A with the panel consisting of the above four speakers
Extended DIY coffee break
15.00 – Also a chance for extended networking
Beyond human: Getting involved
17.25 – Audience Q&A with the panel consisting of the above four speakers
End of conference
17.45 – Hard stop – the room needs to be empty by 18.00

You can follow the links to find out more information about each speaker. You’ll see that several are eminent university professors. Several have written key articles or books on the theme of technology that significantly enhances human potential. Some complement their technology savvy with an interest in performance art.  All are distinguished and interesting futurists in their own way.

I don’t expect I’ll agree with everything that’s said, but I do expect that great personal links will be made – and strengthened – during the course of the day.  I also expect that some of the ideas shared at the conference – some of the big, hairy, audacious goals unveiled – will take on a major life of their own, travelling around the world, offline and online, catalysing very significant positive change.

8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You  look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than we expected, a new breed of artificial intelligence is bearing down on us, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto, and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed the film “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge, into some kind of hybrid “post-human”.

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress.  It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.
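To see why that crossover point is so pivotal, here is a deliberately crude toy model – my own illustration, not anything taken from Jaan’s slides or the film, and every number in it (the starting capability, the growth rates, the threshold) is an arbitrary assumption. Before the threshold, capability creeps up at a fixed rate; after it, each gain feeds back into the rate of further gains, and the shape of the earlier curve quickly stops mattering.

```python
# Toy illustration of the "hard take-off" argument: the shape of progress
# before the crossover matters far less than the crossover itself.
# Every number below is an arbitrary assumption, chosen only for illustration.

HUMAN_LEVEL = 1.0            # capability at which the AI writes software as well as humans do
SLOW_GROWTH = 0.05           # modest human-driven improvement per year before the crossover
SELF_IMPROVEMENT_GAIN = 0.5  # how strongly capability feeds back into its own further growth

def simulate(years: int, start: float = 0.3) -> list[tuple[int, float]]:
    """Return (year, capability) pairs under the two toy growth regimes."""
    capability = start
    trajectory = []
    for year in range(1, years + 1):
        if capability < HUMAN_LEVEL:
            # ordinary incremental progress, driven by human researchers
            capability *= 1 + SLOW_GROWTH
        else:
            # the system now improves the very process that improves it
            capability *= 1 + SELF_IMPROVEMENT_GAIN * capability
        trajectory.append((year, capability))
    return trajectory

if __name__ == "__main__":
    for year, cap in simulate(35):
        note = "  <- now better than humans at writing software" if cap >= HUMAN_LEVEL else ""
        print(f"year {year:2d}: capability {cap:10.3g}{note}")
```

Run it and the output crawls along for a couple of simulated decades, then explodes within a handful of years of crossing the threshold – which is the intuition behind “all bets are off”.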

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s last theorem, another researcher might do for the even more venerable problem of superhuman AI.

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”.

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
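To put rough numbers on that overhang, here is a back-of-envelope sketch of my own, assuming – purely for illustration – the commonly quoted doubling of hardware price/performance roughly every two years; the doubling period is my assumption, not a figure from Jaan’s talk.

```python
# Back-of-envelope arithmetic for the "hardware overhang".
# Assumption (mine, for illustration only): hardware price/performance doubles
# roughly every two years, as in the popular statement of Moore's Law.

DOUBLING_PERIOD_YEARS = 2.0

def overhang_factor(years_until_algorithm: float) -> float:
    """How much faster hardware will be, relative to today's, by the time
    the human-level AI algorithm is finally discovered."""
    return 2 ** (years_until_algorithm / DOUBLING_PERIOD_YEARS)

if __name__ == "__main__":
    for years in (5, 10, 20, 30):
        print(f"algorithm arrives {years:2d} years from now -> "
              f"hardware roughly {overhang_factor(years):,.0f}x today's speed")
```

On that assumption, an algorithm that arrives 20 years “late” finds hardware roughly a thousand times faster than today’s – several orders of magnitude, as the quote suggests.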

Imagine.  The worst set of malware so far created – exploiting a combination of security vulnerabilities, other software defects, and social engineering.  How quickly that can spread around the Internet.  Now imagine the author of that malware being 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I’ll end with another potential comparison, which I’ve written about before.  It’s another example about underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed their first test of a dry fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 Megatons.  But when the device was exploded, the yield was 15 Megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.
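For readers who like to see the neutron bookkeeping spelled out, here is the chain described in that quote, plus the yield ratio mentioned above. The yield figures are the ones already quoted; the reaction equations are standard nuclear physics rather than anything taken from the Wikipedia passage.

```latex
% The neutron bookkeeping behind the Castle Bravo miscalculation.
\begin{align*}
  n + {}^{7}\mathrm{Li} &\longrightarrow {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n'
    && \text{lithium-7 is not inert: a fast neutron splits it, yielding tritium plus a neutron} \\
  {}^{2}\mathrm{H} + {}^{3}\mathrm{H} &\longrightarrow {}^{4}\mathrm{He} + n
    && \text{the extra tritium fuses with deuterium, releasing yet more fast neutrons} \\
  \frac{15\ \mathrm{Mt}\ \text{(actual)}}{6\ \mathrm{Mt}\ \text{(expected maximum)}} &= 2.5
    && \text{the net effect on the yield}
\end{align*}
```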

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

17 April 2011

Towards inner humanity+

Filed under: challenge, films, Humanity Plus, intelligence, vision — David Wood @ 11:06 am

There’s a great scene near the beginning of the film “Limitless“.  The central character, Eddie (played by Bradley Cooper), has just been confronted by his neighbour, Valerie. It’s made clear to the viewers that Valerie is generally nasty and hostile to Eddie. Worse, Eddie owes money to Valerie, and the payment is overdue. It seems that a fruitless verbal confrontation looms. Or perhaps Eddie will try to quickly evade her.

But this time it’s different.  Eddie’s brain has been switched into a super-fast enhanced mode (which is the main theme of the film).  Does he take the opportunity to weaken Valerie with fast verbal gymnastics and put-downs?

Instead, he uses his new-found rocket-paced analytic abilities to a much better purpose.  Picking up the tiniest of clues, he realises that Valerie’s foul mood is caused by something unconnected with Eddie himself: Valerie is having a particular problem with her legal studies.  Gathering memories out of the depths of his brain from long-past discussions with former student friends, Eddie is able to suggest ideas to Valerie that rouse her interest and defuse her hostility.  Soon, she’s more receptive.  The two sit down together, and Eddie guides her in the swift completion of a brilliant essay for the tricky homework assignment that has been preying on Valerie’s nerves.

Anyone who watches Limitless is bound to wonder: can technology – such as a smart drug – really have that kind of radical transformative effect on human ability?

Humanity+ is the name of the worldview that says, not only is that kind of technology feasible (within the lifetimes of many people now alive), but it is desirable.  If you watch Limitless right through to the end, you’ll find plenty in the film that offers broad support to the Humanity+ mindset.  That’s a pleasant change from the usual Hollywood conviction that technology-induced human enhancement typically ends up in dysfunction and loss of important human characteristics.

But the question remains: if we become smarter, does it mean we would be better people?  Or would we tend to use accelerated mental faculties to advance our own self-centred personal agendas?

A similar question was raised by an audience member at the “Post Transcendent Man” event in Birkbeck in London last weekend.  Is it appropriate to consider intellectual enhancement without also considering moral enhancement?  Or is it like giving a five year old the keys to a sports car?  Or like handing a bunch of Mujahideen terrorists the instructions to create advanced nuclear weaponry?

Take another example of accelerating technology: the Internet.  This can be used to spy and to hassle, as well as to educate and uplift.  Consider the chilling examples mentioned in the recent Telegraph article “The toxic rise of internet bullies“:

At first glance, Natasha MacBryde’s Facebook page is nothing unusual. A pretty, slightly self-conscious blonde teenager gazes out, posed in the act of taking her own picture. But unlike other pages, this has been set up in commemoration, following her death under a train earlier this month. Now though it has had to be moderated after it was hijacked by commenters who mocked both Natasha and the manner of her death heartlessly.

“Natasha wasn’t bullied, she was just a whore,” said one, while another added: “I caught the train to heaven LOL [laugh out loud].” Others clicked on the “like” symbol, safe in their anonymity, to indicate that they agreed. The messages were removed after a matter of hours, but Natasha’s grieving father Andrew revealed that Natasha’s brother had also discovered a macabre video – entitled “Tasha The Tank Engine” on YouTube (it has since been removed). “I simply cannot understand how or why these people get any enjoyment or satisfaction from making such disgraceful comments,” he said.

He is far from alone. Following the vicious sexual assault on NBC reporter Lara Logan in Cairo last week, online debate on America’s NPR website became so ugly that moderator Mark Memmott was forced to remove scores of comments and reiterate the organisation’s stance on offensive message-posting…

It’s not just anonymous comments that cause concern.  As Richard Adhikari notes in his article “The Internet’s Destruction of Critical Thinking“,

Prior to the dawn of the Internet Age, anyone who wanted to keep up with current events could pretty much count on being exposed to a diversity of subjects and viewpoints. News consumers were passive recipients of content delivered by print reporters or TV anchors, and choices were few. Now, it’s alarmingly easy to avoid any troublesome information that might provoke one to really think… few people do more than skim the surface — and as they do with newspapers, most people tend to read only what interests them. Add to that the democratization of the power to publish, where anyone with access to the Web can put up a blog on any topic whatsoever, and you have a veritable Tower of Babel…

Of course, the more powerful the technology, the bigger the risks if it is used in pursuit of our lower tendencies.  For a particularly extreme example, review the plot of the 1956 science fiction film “Forbidden planet”, as covered here.  As Roko Mijic has explained:

Here are two ways in which the amplification of human intelligence could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

For all these reasons, it’s my strong conviction that any quest for what might be called “outer Humanity+” must be accompanied (and, indeed, preceded) by a quest for “inner Humanity+”.  Both these quests consider the ways in which accelerating technology can enhance human capabilities.  However, the differences are summed up in the following comparison:

Outer Humanity+

  • Seeks greater strength
  • Seeks greater speed
  • Seeks to transcend limits
  • Seeks life extension
  • Seeks individual progress
  • Seeks more experiences
  • Seeks greater intelligence
  • Generally optimistic about technology
  • Generally hostile to goals and practice of religion and meditation

Inner Humanity+

  • Seeks greater kindness
  • Seeks deeper insight
  • Seeks self-mastery
  • Seeks life expansion
  • Seeks cooperation
  • Seeks more fulfilment
  • Seeks greater wisdom
  • Has major concerns about technology
  • Has some sympathy to goals and practice of religion and meditation

Back to Eddie in Limitless.  It’s my hunch he was basically a nice guy to start with – except that he was ineffectual.  Once his brainpower was enhanced, he could be a more effectual nice guy.  His brain provided rapid insight on the problems and issues being faced by his neighbour – and proposed effective solutions.  In this example, greater strength led to a more effective kindness.  But if real-life technology delivers real-life intellect enhancement any time soon, all bets are off regarding whether it will result in greater kindness or greater unkindness.  In other words, all bets are off as to whether we’ll create a heaven-like state, or hell on earth.  For this reason, the quest to achieve Inner Humanity+ must overtake the quest to achieve Outer Humanity+.

2 April 2011

Virtual futures and digital natives

Filed under: disruption, Events, futurist, Humanity Plus — David Wood @ 6:34 pm

A child born today will be immersed in a world that is, more than ever, virtual…  With a single Google search, a child has instant access to a plethora of information. With Google Earth the entire globe can be navigated with little travel-cost endured. And languages can be translated without a single understanding of the complex linguistics of other cultures…

These words are taken from the blog for the forthcoming University of Warwick Virtual Futures 2.0’11 conference.  The stated theme of the conference is “Digital natives: fear of the flesh?”.  The phrase “digital native” refers to someone young enough (in body or in spirit) to find themselves at home in the fast-evolving digital connected world.

But is anyone truly at home in this world?  The author of the blog, Luke Robert Mason, continues as follows, drawing on comments made by performance artist Stelarc who took part in an earlier Virtual Futures conference:

But this virtual world is also plagued by complexity – a complexity born of information which the  biological brain is not designed to comprehend.  As performance artist Stelarc stated in his early work, “It is time to question whether a bipedal, breathing body with binocular vision and a 1400cc brain is an adequate biological form. It cannot cope with the quantity, complexity and quality of information it has accumulated; it is intimidated by the precision, speed and power of technology and it is biologically ill-equipped to cope with its new extraterrestrial environment.”

The Virtual Futures 2.0 conference rekindles a series of trailblazing conferences that the University of Warwick hosted in 1994, 1995, and 1996, attracting upwards of 300 attendees:

These conferences questioned the future possibility of the ‘virtual’ and alluded towards the impact of emerging technologies on society and culture. They were, at their time, revolutionary…

The topics discussed at the conferences in the 90’s included chaos theory, geopolitics, feminism, nanotechnology, cyberpunk fiction, machine music, net security, military strategy, plastic surgery, hacking, bio-computation, cognition, cryptography & capitalism. These topics are still poignant today with perhaps the addition of genetics, bio-engineering, neuroscience, artificial intelligence, bio-ethics and social media.

Call for papers

The conference organisers have now issued a call for papers:

The revival aims to reignite the debates over the implications of new and future communication technologies on art, society and politics. The conference will take place on the 18th-19th June 2011 and include paper presentations, panels, performances, screenings and installations.

We welcome researchers, scholars and artists to submit proposals for papers and/or performances around this year’s theme of: “Digital Natives: Fear of the Flesh?”…

Please send proposals (250 words max) to papers@virtualfutures.co.uk by 1st May 2011.

Interested in presenting or performing at the event?  As for myself, I’m preparing a proposal to speak at the conference.  I’m thinking about speaking on the topic “Beyond super phones to super humans – a journey along the spectrum of personal commitment to radical technological transformation“.

I like the conference focus on “digital natives” but I’m less convinced about the “Fear of the flesh?” coda.  Yes, my human flesh has lots of limitations.  But I look ahead to far-reaching bodily improvement, rather than to leaving my flesh altogether behind.  Other radical futurists, in contrast, seem to eagerly anticipate a time when their mind will be entirely uploaded into a virtual world.  There’s ground for lots of debate here:

  • Are these visions credible?
  • Are these visions desirable?
  • How should such visions be evaluated, in a world full of pressing everyday problems?
  • Which of these personal futures should we prioritise?

No doubt these questions, along with many others, will be tackled at the event.

Note: Virtual Futures 2.0 is organised at the University of Warwick with support from the Institute for Advanced Teaching and Learning, the School of Theatre, Performance and Cultural Policy Studies, and the Centre for History of Medicine, in association with Humanity+ UK.

19 March 2011

A singularly fine singularitarian panel?

Filed under: futurist, Humanity Plus, Kurzweil, Singularity — David Wood @ 12:37 pm

In a moment, I’ll get to the topic of a panel discussion on the Singularity – a panel I’ve dubbed (for reasons which should become clear) “Post Transcendent Man“. It’s a great bunch of speakers, and I’m expecting an intellectual and emotional mindfest.  But first, some background.

In the relatively near future, I expect increasing numbers of people to navigate the sea change described recently by writer Philippe Verdoux in his article Transhumanists coming out of the closet:

It wasn’t that long ago that listing transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on, as interests on one’s CV might result in a bit of embarrassment.

Over just the past decade and a half, though, there seems to have been a sea change in how these issues are perceived by philosophers and others: many now see them as legitimate subjects of research; they have, indeed, acquired a kind of academic respectability that they didn’t previously possess.

There are no doubt many factors behind this shift. For one, it seems to be increasingly apparent, in 2011, that technology and biology are coming together to form a new kind of cybernetic unity, and furthermore that such technologies can be used to positively enhance (rather than merely alter) features of our minds and bodies.

In other words, the claim that humans can “transcend” (a word I don’t much like, by the way) our biological limitations through the use of enhancement technologies seems to be increasingly plausible – that is, empirically speaking.

Thus, it seems to be a truism about our contemporary world that technology will, in the relatively near future, enable us to alter ourselves in rather significant ways. This is one reason, I believe, that more philosophers are taking transhumanism seriously…

On a personal note, when I first discovered transhumanism, I was extremely skeptical about its claims (which, by the way, I think every good scientific thinker should be). I take it that transhumanism makes two claims in particular, the first “descriptive” and the second “normative”: (i) that future technologies will make it possible for us to radically transform the human organism, potentially enabling us to create a new species of technologized “posthumans”; and (ii) that such a future scenario is preferable to all other possible scenarios. In a phrase: we not only can but ought to pursue a future marked by posthumanity…

One factor that leads people to pay more serious attention to this bundle of ideas – transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on – is the increasing coverage of these ideas in thoughtful articles in the mainstream media.  In turn, many of these articles have been triggered by the film Transcendent Man by director Barry Ptolemy, featuring the groundbreaking but controversial ideas and projects of inventor and futurist Ray Kurzweil.  Here’s a trailer for the film:

The film has received interesting commentary in, among other places:

I had mixed views when watching the movie myself:

  • On the one hand, it contains a large number of profound sound bites – statements made by many of the talking heads on screen; any of these sound bites could, potentially, change someone’s life, if they reflect on the implications;
  • The film also covers many details of Kurzweil’s own biography, with archive footage of him at different stages of his career – this filled in many gaps in my own understanding, and gave me renewed respect for what he has accomplished as a professional;
  • On the other hand, although there are plenty of critical comments among the sound bites – comments highlighting potential problems or issues with Kurzweil’s ideas – the film never really lets the debate fly;
  • I found myself thinking – yes, that’s an interesting and important point, now let’s explore this further – but then the movie switched to a different frame.

The movie has its official UK premiere at the London Science Museum on Tuesday 5th April.  Kurzweil himself will be in attendance, to answer questions raised by the audience.  The last time I checked, tickets were sold out.

Post Transcendent Man

To drill down more deeply into the potentially radical implications of Kurzweil’s ideas and projects, the UK chapter of Humanity+ has arranged an event at Birkbeck College (WC1E 7HX), Torrington Square in Central London on the afternoon (2pm-4.15pm) of Saturday 9th April.  We’ll be in Malet Street lecture room B34 – which seats an audience of up to 177 people.  For more details about logistics, registration, and so on, see the official event website, or the associated Facebook page.

The event is privileged to feature an outstanding set of speakers and panellists who represent a range of viewpoints about the Singularity, transhumanism, and human transcendence.  In alphabetical order by first name:

Dr Anders Sandberg is a James Martin research fellow at the Future of Humanity Institute at Oxford University. As a part of the Oxford Martin School he is involved in interdisciplinary research on cognitive enhancement, neurotechnology, global catastrophic risks, emerging technologies and applied rationality. He has been writing about and debating transhumanism, future studies, neuroethics and related questions for a long time. He is also an associate of the Oxford Centre for Neuroethics and the Uehiro Centre for Practical Ethics, as well as co-founder of the Swedish think tank Eudoxa.

Jaan Tallinn is one of the programmers behind Kazaa and a founding engineer of Skype. He is also a partner in Ambient Sound Investments as well as a member of the Estonian President’s Academic Advisory Board. He describes himself as singularitarian/hacker/investor/physicist (in that order). In recent years Jaan has found himself closely following and occasionally supporting the work that SIAI and FHI are doing. He agrees with Kurzweil that the topic of the Singularity can be extremely counterintuitive to the general public, and has tried to address this problem in a few public presentations at various venues.

Nic Brisbourne is a partner at venture capital fund DFJ Esprit and blogger on technology and startup issues at The Equity Kicker. As such he’s interested in when technology and science projects become products and businesses. He has a personal interest in Kurzweil’s ideas, and in longevity in particular. He says he’s keen to cross the gap from personal to professional and find exciting startups generating products in this area, although he thinks that the bulk of the commercialisation opportunities are still a year or two out.

Paul Graham Raven is a writer, literary critic and bootstrap big-picture futurist; he prods regularly at the fuzzy boundary of the unevenly-distributed future at futurismic.com. He is Editor-in-Chief and Publisher of The Dreaded Press, a rock music reviews webzine, and Publicist and PR officer for PS Publishing – perhaps the UK’s foremost boutique genre publisher. He says he’s also a freelance web-dev to the publishing industry, a cack-handed fuzz-rock guitarist, and in need of a proper haircut.

Russell Buckley is a leading practitioner, speaker and thinker about mobile and mobile marketing. MobHappy, his blog about mobile technology, is one of the most established blogs in this area. He is also a previous Global Chairman of the Mobile Marketing Association, a founder of Mobile Monday in Germany and holds numerous non-executive positions in mobile technology companies. Russell learned about the mobile advertising startup AdMob soon after its launch, and joined as its first employee in 2006, with the remit of launching AdMob into the EMEA market. Four years later, AdMob was sold to Google for $750m. By night, though, Russell is fascinated by the socio-political implications of technology and recently graduated from the Executive Program at the Singularity University, founded by Ray Kurzweil and Peter Diamandis to “educate and inspire leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges”.

The discussion continues

The event will start at 2pm with the panellists introducing themselves and their core thinking about the topics under discussion.  As chair, I’ll ask a few questions, and then we’ll open up for questions and comments from the audience.  I’ll be particularly interested to explore:

  • How people see the ideas of accelerating technology making a difference in their own lives – both personally and professionally.  Three of us on the stage were on founding teams of companies that made sizeable waves in the technology world (Jaan Tallinn, Skype; Russell Buckley, AdMob; myself, Symbian).  Where do we see rapidly evolving technology (as often covered by Kurzweil) taking us next?
  • People’s own experiences with bodies such as the Singularity University, the Singularity Institute, and the Future of Humanity Institute at Oxford University.  Are these bodies just talking shops?  Are they grounded in reality?  Are they making a substantial positive difference in how humanity responds to the issues and challenges of technology?
  • Views as to the best way to communicate ideas like the Singularity – favourite films, science fiction, music, and other media.  How does the movie “Transcendent Man” compare?
  • Reservations and worries (if any) about the Singularity movement and the ways in which Kurzweil expresses his ideas.  Are the parallels with apocalyptic religions too close for comfort?
  • Individuals’ hopes and aspirations for the future of technology.  What role do they personally envision playing in the years ahead?  And what timescales do they see as credible?
  • Calls to action – what (if anything) should members of the audience change about their lives, in the light of analysing technology trends?

Which questions do you think are the most important to raise?

Request for help

If you think this is an important event, I have a couple of suggestions for you:

The discussion continues (more)

Dean Bubley, founder of Disruptive Analysis and a colleague of mine from the mobile telecomms industry, has organised the “Inaugural UK Humanity+ Evening Salon” on Wednesday April 13th, from 7pm to 10pm.  Dean describes it as follows:

Interested in an evening discussing the future of the human species & society? Aided by a drink or two?

This is the first “salon” event for the London branch of “Humanity Plus”, or H+ for short. It’s going to be an informal evening event involving a stimulating guest speaker, Q&A and lively discussion, all aided by a couple of drinks. It fits alongside UKH+’s larger Saturday afternoon lecture sessions, and occasional all-day major conferences…

It will be held in central London, in a venue TBC closer to the time. Please contact Dean Bubley (facebook.com/bubley), the convener & moderator, for more details.

For more details, see the corresponding Facebook page, and RSVP there so that Dean has an idea of the likely numbers.

31 December 2010

Welcome 2011 – what will the future hold?

Filed under: aging, futurist, Humanity Plus, intelligence, rejuveneering — David Wood @ 6:42 pm

As 2010 turns into 2011, let me offer some predictions about topics that will increasingly be on people’s minds, as 2011 advances.

(Spoiler: these are all topics that will feature as speaker presentations at the Humanity+ UK 2011 conference that I’m organising in London’s Conway Hall on 29th January.  At the time of writing, I’m still waiting to confirm one or two further speakers for this event, but registration is already open.)

Apologies for omitting many other key emerging tech-related trends from this list.  If there’s something you care strongly about – and if you live within striking distance of London – you’ll be more than welcome to join the discussion on 29th January!

10 October 2010

Call for speakers: Humanity+ UK2011

Filed under: Events, Humanity Plus, UKH+ — David Wood @ 2:18 pm

Although I haven’t allocated much time over the last few months to organising Humanity+ activities, I still assist the organisation on an occasional basis.

Earlier today, I issued a “call for speakers” for the January 2011 Humanity+ UK conference that will be taking place on Saturday 29 January 2011, in London’s Conway Hall.

Here’s a summary of the call:

Submissions are requested for talks lasting no more than 20 minutes on the general theme of Making a human difference. Submissions should address one or more of the following sub-themes:

  1. Technology that enhances humans
  2. Existential risks: the biggest human difference
  3. Citizen activism in support of Humanity+
  4. Humanity vs. Humanity+: criticisms and renewal
  5. Roadmapping the new human future.

Submissions need not be lengthy – around the equivalent of one page of A4 material should be sufficient. They should cover:

  • Proposed title of the talk, and which of the above sub-themes apply to it
  • Brief description of the talk
  • Brief description of the speaker
  • An explanation of why the presentation will provide value to the expected audience.

The 20 minute limit on the length of presentations is intended to ensure that speakers focus on communicating their most important messages. It will also allow a larger number of speakers (and, hence, a larger number of points of view) to be considered during the day.

A small number of speakers will also be invited to take part in panel Q&A discussions. These will be decided nearer the time of the conference.

Speaker submissions should be emailed as soon as possible to humanityplusuk AT gmail DOT com.

Speaker slots will be allocated as soon as good submissions are received, and announced on the conference blog. The call for submissions will be closed once there are no available speaking slots left.

Note: at this conference, all speakers will be required to provide slides (e.g. PowerPoint) to accompany their presentation. Speakers who fail to provide their slides to the organisers at least 48 hours before the start of the conference will be removed from the programme.

The organisers also regret that no speaker expenses, fees, or honoraria can be paid. However, speakers will receive free registration for the conference.

Footnote: For background, here’s the site for the corresponding 2010 conference, which attracted an audience of just under 200 people.
