If you’re interested in politics, then I recommend the video recording of yesterday’s presentation at London’s RSA by Yascha Mounk.
Mounk is a Lecturer on Government at Harvard University, a Senior Fellow at New America, a columnist at Slate, and the host of The Good Fight podcast. He’s also the author of the book “The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It”, which I finished reading yesterday. The RSA presentation provides a good introduction to the ideas in the book.
The book marshals a set of arguments compellingly, lined up in a way I hadn’t seen before. It provides some very useful insight into the challenges being posed to the world by the growth of “illiberal democracy” (“populism”). It also explains why these challenges might be more dangerous than many commentators have tended to assume.
(“Don’t worry about the rise of strong men like Trump”, these commentators say. “Sure, Trump is obnoxious. But the traditions of liberal democracy are strong. The separation of powers is such that the excesses of any would-be autocrat will surely be tamed”. Alas, there’s no “surely” about it.)
Here’s how Mounk’s book is described on its website:
The world is in turmoil. From India to Turkey and from Poland to the United States, authoritarian populists have seized power. As a result, Yascha Mounk shows, democracy itself may now be at risk.
Two core components of liberal democracy—individual rights and the popular will—are increasingly at war with each other. As the role of money in politics soared and important issues were taken out of public contestation, a system of “rights without democracy” took hold. Populists who rail against this say they want to return power to the people. But in practice they create something just as bad: a system of “democracy without rights.”
The consequence, Mounk shows in The People vs. Democracy, is that trust in politics is dwindling. Citizens are falling out of love with their political system. Democracy is wilting away. Drawing on vivid stories and original research, Mounk identifies three key drivers of voters’ discontent: stagnating living standards, fears of multiethnic democracy, and the rise of social media. To reverse the trend, politicians need to enact radical reforms that benefit the many, not the few.
The People vs. Democracy is the first book to go beyond a mere description of the rise of populism. In plain language, it describes both how we got here and where we need to go. For those unwilling to give up on either individual rights or the popular will, Mounk shows, there is little time to waste: this may be our last chance to save democracy.
One drawback of the book, however, is that the solutions it offers, although “worthy”, seem unlikely to stir sufficient popular engagement to be turned from idea into reality. I believe something bigger is needed.
On Tuesday last week I joined members of “The Big Potatoes” for a spirited discussion entitled “Automation Anxiety”. Participants grappled with questions such as:
To what extent will increasingly capable automation (robots, software, and AI) displace humans from the workforce?
To what extent should humans be anxious about this process?
The Big Potatoes website chose an image from the marvellously provocative Channel 4 drama series “Humans” to set the scene for the discussion:
“Closer to humans” than ever before, the fictional advertisement says, referring to humanoid robots with multiple capabilities. In the TV series, many humans become deeply distressed at the way their roles are being usurped by these new-fangled entities.
Back in the real world, many critics reject these worries. “We’ve heard it all before”, they assert. Every new wave of technological automation has caused employment disruption, yes, but it has also led to new types of employment. The new jobs created will compensate for the old ones destroyed, the critics say.
I see these critics as, most likely, profoundly mistaken. This time things are different. That’s because of the general-purpose nature of ongoing improvements in the algorithms for automation. Machine learning algorithms that are developed with one set of skills in mind turn out to transfer, reasonably straightforwardly, to other sets of skills as well.
The master algorithm
That argument is spelt out in the recent book “The Master Algorithm” by University of Washington professor of computer science and engineering Pedro Domingos.
The subtitle of that book refers to a “quest for the ultimate learning machine”. This ultimate learning machine can be contrasted with another universal machine, namely the universal Turing machine:
The universal Turing machine accepts inputs and applies a given algorithm to compute corresponding outputs
The universal learning machine accepts a set of corresponding input and output data, and makes the best possible job of inferring the algorithm that would obtain the outputs from the inputs.
For example, given sets of texts written in English, and matching texts written in French, the universal learning machine would infer an algorithm that will convert English into French. Given sets of biochemical reactions of various drugs on different cancers, the universal learning machine would infer an algorithm to suggest the best treatment for any given cancer.
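To make the contrast concrete, here’s a minimal sketch in Python – my own illustration, not an example from the book. A hidden rule generates outputs from inputs; the learner sees only the input–output pairs, and infers an approximation of the rule:

```python
# A toy "learning machine": infer the rule behind input/output pairs.
# (Illustrative only; real systems such as translation use far richer
# models, but the inversion of roles is the same.)
from sklearn.linear_model import LinearRegression

def hidden_rule(x):
    # The "algorithm" a Turing machine would simply be given.
    return 3 * x + 1

inputs = [[x] for x in range(10)]
outputs = [hidden_rule(x) for x in range(10)]

# The learner is given only the data, not the rule...
model = LinearRegression().fit(inputs, outputs)

# ...yet recovers the rule's parameters from the examples.
print(model.coef_[0], model.intercept_)  # approximately 3.0 and 1.0
print(model.predict([[25]]))             # approximately 76, i.e. 3*25 + 1
```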
As Domingos explains, there are currently five different “tribes” within the overall machine learning community. Each tribe has its separate origin, and also its own idea for the starting point of the (future) master algorithm:
“Symbolists” have their origin in logic and philosophy; their core algorithm is “inverse deduction”
“Connectionists” have their origin in neuroscience; their core algorithm is “back-propagation”
“Evolutionaries” have their origin in evolutionary biology; their core algorithm is “genetic programming”
“Bayesians” have their origin in statistics; their core algorithm is “probabilistic inference”
“Analogizers” have their origin in psychology; their core algorithm is “kernel machines”.
What’s likely to happen over the next decade or two is that a single master algorithm will emerge that unifies all the above approaches – and, thereby, delivers great power. It will be similar to the progress made by physics as the fundamental forces of nature have gradually been unified into a single theory.
And as that unification progresses, more and more occupations will be transformed, more quickly than people generally expect. Technological unemployment will rise and rise, as software embodying the master algorithm handles tasks previously thought outside the scope of automation.
Here’s how Domingos describes the goal of his book:

The goal is to do for data science what “Chaos” [by James Gleick] did for complexity theory, or “The Selfish Gene” [by Richard Dawkins] for evolutionary game theory: introduce the essential ideas to a broader audience, in an entertaining and accessible way, and outline the field’s rich history, connections to other fields, and implications.
Now that everyone is using machine learning and big data, and they’re in the media every day, I think there’s a crying need for a book like this. Data science is too important to be left just to us experts! Everyone – citizens, consumers, managers, policymakers – should have a basic understanding of what goes on inside the magic black box that turns data into predictions.
People who comment about the likely impact of automation on employment would do particularly well to educate themselves about the ideas covered by Domingos.
Rise of the robots
There’s a second reason why “this time it’s different” as regards the impact of new waves of automation on the employment market. This factor is the accelerating pace of technological change. As more areas of industry become subject to digitisation, they become, at the same time, subject to automation. This argument is developed at length in Martin Ford’s book “Rise of the Robots: Technology and the Threat of a Jobless Future”, which has attracted praise such as the following:
Lucid, comprehensive and unafraid to grapple fairly with those who dispute Ford’s basic thesis, Rise of the Robots is an indispensable contribution to a long-running argument. —Los Angeles Times
If The Second Machine Age was last year’s tech-economy title of choice, this book may be 2015’s equivalent. —Financial Times, Summer books 2015, Business, Andrew Hill
[Ford’s] a careful and thoughtful writer who relies on ample evidence, clear reasoning, and lucid economic analysis. In other words, it’s entirely possible that he’s right. —Daily Beast
Surveying all the fields now being affected by automation, Ford makes a compelling case that this is an historic disruption—a fundamental shift from most tasks being performed by humans to one where most tasks are done by machines. —Fast Company
Well-researched and disturbingly persuasive. —Financial Times
Martin Ford has thrust himself into the center of the debate over AI, big data, and the future of the economy with a shrewd look at the forces shaping our lives and work. As an entrepreneur pioneering many of the trends he uncovers, he speaks with special credibility, insight, and verve. Business people, policy makers, and professionals of all sorts should read this book right away—before the ‘bots steal their jobs. Ford gives us a roadmap to the future. —Kenneth Cukier, Data Editor for the Economist and co-author of Big Data: A Revolution That Will Transform How We Live, Work, and Think
Ever since the Luddites, pessimists have believed that technology would destroy jobs. So far they have been wrong. Martin Ford shows with great clarity why today’s automated technology will be much more destructive of jobs than previous technological innovation. This is a book that everyone concerned with the future of work must read. —Lord Robert Skidelsky, Emeritus Professor of Political Economy at the University of Warwick, co-author of How Much Is Enough?: Money and the Good Life and author of the three-volume biography of John Maynard Keynes
If you’re still not convinced, I recommend that you listen to this audio podcast of a recent event at London’s RSA, addressed by Ford.
Yes, humans can retrain over time, to learn new skills, in readiness for new occupations when their former employment has been displaced by automation
However, the speed of improvement of the capabilities of automation will increasingly exceed that of humans
Coupled with the general-purpose nature of these capabilities, this means that, conceivably, from some time around 2040, very few humans will be able to find paid work.
A worked example: a site carpenter
During the Big Potatoes debate on Tuesday, I pressed the participants to name an occupation that would definitely be safe from incursion by robots and automation. What jobs, if any, will robots never be able to do?
One suggestion that came back was “site carpenter”. On this thinking, unfinished buildings are too complex and too difficult for robots to navigate. Robots that try to make their way through these buildings, to tackle carpentry tasks, will likely fall down. Or, assuming they don’t fall down, how will they cope with finding that the reality in the building often varies sharply from the official specification? These poor robots will try to perform some carpentry task, but will get stymied when items are in different places from where they’re supposed to be. Or have different tolerances. Or alternatives have been used. And so on. Such environments are too messy for robots to cope with.
My answer is as follows. Yes, present-day robots often do fall down. Critics seem to find this hilarious. But this is pretty similar to the fact that young children often fall down while learning to walk, and that novice skateboarders often fall down when unfamiliar with this mode of transport. However, robots will learn fast. One example is shown in this video of the “Atlas” humanoid robot from Boston Dynamics (now part of Google):
As for robots being able to deal with uncertainty and surprises, I’m frankly struck by the naivety of this question. Of course software can deal with uncertainty. Software calculates courses of action statistically and probabilistically, all the time. When software encounters information at variance with what it previously expected, it can adjust its planned course of action. Indeed, it can take the same kinds of steps that a human would consider – forming new hypotheses, and, when needed, checking back with management for confirmation.
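Here’s a minimal sketch of that point in Python. The scenario (a robot looking for a beam that isn’t where the building spec says) and all the numbers are hypothetical illustrations of mine, not drawn from any real system:

```python
# Bayesian belief update: software revising its plan when an observation
# conflicts with its expectation.

# Prior beliefs about the state of the building
prior = {"beam_where_spec_says": 0.9, "beam_moved": 0.1}

# Likelihood of the observation "no beam detected at the spec position"
# under each hypothesis
likelihood = {"beam_where_spec_says": 0.05, "beam_moved": 0.95}

# Bayes' rule: posterior is proportional to prior times likelihood
unnormalised = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: p / total for h, p in unnormalised.items()}

print(posterior)  # belief now ~68% that the beam has moved

# The planner can act on the revised belief: search nearby positions,
# or, below some confidence threshold, check back with a human.
if max(posterior.values()) < 0.95:
    print("Confidence low: escalate to site manager for confirmation")
```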
The question is a reminder to me that the software and AI community needs to do a much better job of communicating the current capabilities of the field, and the likely improvements ahead.
What does it mean to be human?
For me, the most interesting part of Tuesday’s discussion was when it turned to the following questions:
Should these changes be welcomed, rather than feared?
What will these forthcoming changes imply for our conception of what it means to be human?
To my mind, technological unemployment will force us to rethink some of the fundamentals of the “Protestant work ethic” that permeates society. That ethic has played a decisive positive role over the last few centuries, but that doesn’t mean we should remain under its spell indefinitely.
If we can change our conceptions, and if we can manage the resulting social transition, the outcome could be extremely positive.
One of the many speakers at the World Future 2015 conference in San Francisco (about which more below), Scott Santens, has kindly made his slides available, here. Alongside many graphs on the increasing “winner takes all” nature of modern employment (in which productivity increases but median income declines), Santens offers a different way of thinking about how humans should be spending their time:
When you hate what you do as a job, you are definitely getting paid in return for doing it. But when you love what you do as a job or as unpaid work, you’re only able to do it because of somehow earning sufficient income to enable you to do it.
The difference between these two forms of work cannot be overstated…
Traditionally speaking, most of the work going on around us is only considered work, if one gets paid to do it. Are you a parent? Sorry, that’s not work. Are you in paid childcare? Congratulations, that’s work. Are you an open source programmer? Sorry, that’s not work. Are you a paid software engineer? Congratulations, that’s work…
What enables this transformation would be some variant of a “basic income guarantee” – a concept that is introduced in the slides by Santens, and also in the above-mentioned book by Martin Ford. You can hear Ford discuss this option in his RSA podcast, where he ably handles a large number of questions from the audience.
What I found particularly interesting from that podcast was a comment made by Anthony Painter, the RSA’s Director of Policy and Strategy who chaired the event:
The RSA will be advocating support for Basic Income… in response to Technological Unemployment.
(This comment comes about 2/3 of the way through the podcast.)
To be clear, I recognise that there will be many difficulties in any transition from the present economic situation to one in which a universal basic income applies. That transition is going to be highly challenging to manage. But these problems of transition are a far better set of problems to have than dealing with the consequences of vastly increased unemployment and social alienation.
Life is being redefined
Just in case you’re still tempted to dismiss the above scenarios as some kind of irresponsible fantasy, there’s one more resource you might like to consult. It’s by Janna Q. Anderson, Professor of Communications at Elon University, and is an extended write-up of a presentation I heard her deliver at the World Future 2015 conference in San Francisco this July.
You can find Anderson’s article here. It starts as follows:
The Robot Takeover is Already Here
The machines that replace us do not have to have superintelligence to execute a takeover with overwhelming impacts. They must merely extend as they have been, rapidly becoming more and more instrumental in our essential systems.
It’s the Algorithm Age. In the next few years humans in most positions in the world of work will be nearly 100 percent replaced by or partnered with smart software and robots – ‘black box’ invisible algorithm-driven tools. It is that which we cannot see that we should question, challenge and even fear the most. Algorithms are driving the world. We are information. Everything is code. We are becoming dependent upon and even merging with our machines. Advancing the rights of the individual in this vast, complex network is difficult and crucial.
The article is described as being a “45 minute read”. In turn, it contains numerous links, so you could spend far longer following the resulting ideas. In view of the momentous consequences of the trends being discussed, that could prove to be a good use of your time.
By way of summary, I’ll pull out a few sentences from the middle of the article:
One thing is certain: Employment, as it is currently defined, is already extremely unstable and today many of the people who live a life of abundance are not making nearly enough of an effort yet to fully share what they could with those who do not…
It’s not just education that is in need of an overhaul. A primary concern in this future is the reinvention of humans’ own perceptions of human value…
[Another] thing is certain: Life is being redefined.
Who controls the robots?
Despite the occasional certainty in this field (as just listed above, extracted from the article by Janna Anderson), there remains a great deal of uncertainty. I share with my Big Potatoes colleagues the viewpoint that technology does not determine social responses. The question of which future scenario will unfold isn’t just a question of cheer-leading (if you’re an optimist) or cowering (if you’re a pessimist). It’s a question of choice and action. These questions will be debated at a forthcoming panel session; here’s an extract from the session description:
From Metropolis through to recent hit film Ex Machina, concerns about intelligent robots enslaving humanity are a sci-fi staple. Yet recent headlines suggest the reality is catching up with the cultural imagination. The World Economic Forum in Davos earlier this year hosted a serious debate around the Campaign to Stop Killer Robots, organised by the NGO Human Rights Watch to oppose the rise of drones and other examples of lethal autonomous warfare. Moreover, those expressing the most vocal concerns around the march of the robots can hardly be dismissed as Luddites: the Elon-Musk funded and MIT-backed Future of Life Institute sparked significant debate on artificial intelligence (AI) by publishing an open letter signed by many of the world’s leading technologists and calling for robust guidelines on AI research to ‘avoid potential pitfalls’. Stephen Hawking, one of the signatories, has even warned that advancing robotics could ‘spell the end of the human race’.
On the other hand, few technophiles doubt the enormous potential benefits of intelligent robotics: from robot nurses capable of tending to the elderly and sick through to the labour-saving benefits of smart machines performing complex and repetitive tasks. Indeed, radical ‘transhumanists’ openly welcome the possibility of technological singularity, where AI will become so advanced that it can far exceed the limitations of human intelligence and imagination. Yet, despite regular (and invariably overstated) claims that a computer has managed to pass the Turing Test, many remain sceptical about the prospect of a significant power shift between man and machine in the near future…
Why has this aspect of robotic development seemingly caught the imagination of even experts in the field, when even the most remarkable developments still remain relatively modest? Are these concerns about the rise of the robots simply a high-tech twist on Frankenstein’s monster, or do recent breakthroughs in artificial intelligence pose new ethical questions? Is the question more about who builds robots and why, rather than what they can actually do? Does the debate reflect the sheer ambition of technologists in creating smart machines or a deeper philosophical crisis in what it means to be human?
As you can imagine, I’ll be taking serious issue with the above claim, from the session description, that progress with robots will “remain relatively modest”. However, I’ll be arguing for strong focus on questions of control.
It’s not just a question of whether it’s humans or robots that end up in control of the planet. There’s a critical preliminary question as to which groupings and systems of humans end up controlling the evolution of robots, software, and automation. Should we leave this control to market mechanisms, aided by investment from the military? Or should we exert a more general human control of this process?
I wait with interest to find out how much this viewpoint will be shared by the other speakers at this session:
Dr Robert Clowes – chair, Mind & Cognition Group, Nova Institute of Philosophy, Lisbon University; chair, Lisbon Salon
Professor Steve Fuller – Auguste Comte Chair in Social Epistemology, University of Warwick
Timandra Harkness – journalist, writer & broadcaster; presenter, The Singularity & other BBC Radio 4 programmes; writer & performer, science-based comedy shows, including BrainSex
In his essay “Global Warming’s Terrifying New Math”, Bill McKibben highlights three numbers: two degrees Celsius (the rise in average global temperature widely regarded as the safe upper limit), 565 gigatons (roughly how much more carbon dioxide humanity can emit by mid-century while retaining a reasonable chance of staying below two degrees), and 2,795 gigatons (the carbon contained in the proven fossil fuel reserves of companies and countries). The key point is that this last number – 2,795 – is higher than 565. Five times higher.
He has a vivid metaphor to drive his message home:
Think of two degrees Celsius as the legal drinking limit – equivalent to the 0.08 blood-alcohol level below which you might get away with driving home. The 565 gigatons is how many drinks you could have and still stay below that limit – the six beers, say, you might consume in an evening. And the 2,795 gigatons? That’s the three 12-packs the fossil-fuel industry has on the table, already opened and ready to pour.
We have five times as much oil and coal and gas on the books as climate scientists think is safe to burn. We’d have to keep 80 percent of those reserves locked away underground to avoid that fate. Before we knew those numbers, our fate had been likely. Now, barring some massive intervention, it seems certain.
He continues,
Yes, this coal and gas and oil is still technically in the soil. But it’s already economically above ground – it’s figured into share prices, companies are borrowing money against it, nations are basing their budgets on the presumed returns from their patrimony. It explains why the big fossil-fuel companies have fought so hard to prevent the regulation of carbon dioxide – those reserves are their primary asset, the holding that gives their companies their value. It’s why they’ve worked so hard these past years to figure out how to unlock the oil in Canada’s tar sands, or how to drill miles beneath the sea, or how to frack the Appalachians.
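The arithmetic behind those headline numbers is easy to verify (this check is mine, not part of the essay):

```python
# McKibben's numbers: 565 gigatons is the remaining carbon budget,
# 2,795 gigatons is the carbon in declared fossil fuel reserves.
budget = 565
reserves = 2795

print(reserves / budget)      # ~4.95: "five times" as much as is safe to burn
print(1 - budget / reserves)  # ~0.80: the share that must stay in the ground
```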
The burning question
A version of Bill McKibben’s essay “Global Warming’s Terrifying New Math” can be found as the foreword to the recent book “The Burning Question”, co-authored by Duncan Clark and Mike Berners-Lee. The subtitle of the book carries a somewhat softer message than the McKibben essay:
We can’t burn half the world’s oil, coal, and gas. So how do we quit?
But the introduction makes it clear that constraints on our use of fossil fuel reserves will need to go deeper than “one half”:
Avoiding unacceptable risks of catastrophic climate change means burning less than half of the oil, coal, and gas in currently commercial reserves – and a much smaller fraction of all the fossil fuels under the ground…
Notoriously, climate change is a subject that is embroiled in controversy and intemperance. The New York Times carried an opinion piece, “We’re All Climate-Change Idiots” containing this assessment from Anthony Leiserowitz, director of the Yale Project on Climate Change Communication:
You almost couldn’t design a problem that is a worse fit with our underlying psychology.
However, my assessment of “The Burning Question” by Berners-Lee and Clark is that it is admirably objective and clear. That impression was reinforced when I saw Duncan Clark speak about the contents of the book at London’s RSA a couple of months ago. On that occasion, the meeting was constrained to less than an hour, for both presentation and audience Q&A. It was clear that the speaker had a lot more he could have said.
I was therefore delighted when he agreed to speak on the same topic at a forthcoming London Futurists event, happening in Birkbeck College from 6.15pm to 8.30pm on Saturday 18th January. You can find more details of the London Futurists event here. Following our normal format, we’ll have a full two hours of careful examination of the overall field.
Six steps to climate catastrophe
One way to examine the risks of climate catastrophe induced by human activity is to consider the following six-step chain of cause and effect:
Population – the number of people on the earth
Affluence – the average wealth of people on the earth
Energy intensity – the average amount of energy used to create a unit of wealth
Carbon intensity – the average carbon emissions caused by each unit of energy
Temperature impact – the average increase of global temperature caused by carbon emissions
Global impact – the broader impact on life on earth caused by increased average temperature.
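The first four links in this chain multiply together to give total carbon emissions – a relationship known as the Kaya identity, which can be written in the same style as the I = P * A * T equation discussed later:

Carbon emissions = Population * (Wealth / Population) * (Energy / Wealth) * (Carbon / Energy)

Each denominator cancels against the preceding numerator, so the product is simply total carbon emissions; the last two links of the chain then translate those emissions into temperature and broader impacts.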
As Berners-Lee and Clark discuss in their book, there’s scope to debate, and/or to alter, each of these causal links. Various commentators recommend:
A reduction in the overall human population
Combatting society’s deep-seated imperatives to pursue economic growth
Achieving greater affluence with less energy input
Switching to energy sources (such as “renewables”) with reduced carbon emissions
Seeing (or engineering) different causes that complicate the relation between carbon emissions and temperature rises
Seeing (or engineering) beneficial aspects to global increases in temperature, rather than adverse ones.
What they point out, however, is that despite significant progress to reduce energy intensity and carbon intensity, the other factors seem to be increasing out of control, and dominate the overall equation. Specifically, affluence shows no signs of decreasing, especially when the aspirations of huge numbers of people in emerging economies are taken into consideration.
I see this as an argument to accelerate work on technical solutions – further work to reduce the energy intensity and carbon intensity factors. I also see it as an argument to rapidly pursue investigations of what Berners-Lee and Clark call “Plan B”, namely various forms of geoengineering. This extends beyond straightforward methods for carbon capture and storage, and includes possibilities such as:
Using the oceans to take more carbon dioxide out of the air and store it in an inert form
Screening some of the incoming heat from the sun, by, for example, creating more clouds, or injecting aerosols into the upper atmosphere.
But Berners-Lee and Clark remain apprehensive about one overriding factor. This is the one described earlier: so much investment is tied up in the share prices of oil companies – prices which assume that huge amounts of the known reserves of fossil fuels will be burnt, relatively soon. Providing better technical fixes will, they argue, be insufficient to halt the juggernaut that converts fossil fuels into huge cash profits for industry – a juggernaut with the side-effect of accumulated carbon emissions that increase the risk of horrendous climate consequences.
For this reason, they see the need for concerted global action to ensure that the prices being paid for the acquisition and/or consumption of fossil fuels fully take into account the downside costs to the global environment. This will be far from easy to achieve, but the book highlights some practical steps forward.
Waking up
The first step – as so often in complex change projects – is to engender a sustained sense of urgency. Politicians won’t take action unless there is strong public pressure for action. This public pressure won’t exist whilst people remain in a state of confusion, indifference, dejection, and/or helplessness. Here’s an extract from near the end of their book:
It’s crucial that more people hear the simple facts loud and clear: that climate change presents huge risks, that our efforts to solve it so far haven’t worked, and that there’s a moral imperative to constrain unabated fossil fuel use on behalf of current and especially future generations.
It’s often assumed that the world isn’t ready for this kind of message – that it’s too negative or scary or confrontational. But reality needs facing head on – and anyhow the truth may be more interesting and inspiring than the watered down version.
I expect many readers of this blogpost to have questions in their minds – or possibly objections (rather than just questions) – regarding at least some of what’s written above. This topic deserves a 200-page book rather than just a short blogpost.
Rather than just urging people to read the book in question, I have set up the London Futurists event previously mentioned. I am anticipating robust but respectful in-depth discussion.
Beyond technology
One possible response is that accelerating technology will deliver sufficient solutions (e.g. reducing energy intensity and carbon intensity) long before we need to worry about the climate reaching any tipping point. Solar energy may play a decisive role – possibly along with new generations of nuclear power technology.
That may turn out to be true. But my own engineering experience with developing complex technological solutions is that the timetable is rarely something that anyone can be confident about in advance. So yes, we need to accelerate the technology solutions. But equally, as an insurance policy, we need to take actions that will buy ourselves more time, in order for these technological solutions to come to full fruition. This insurance policy inevitably involves the messy worlds of politics and economics, alongside the developments that happen in the technological arena.
This last message comes across uncomfortably to people who dislike any idea of global coordinated action in politics or economics. People who believe in “small government” and “markets as free as possible” don’t like to contemplate global scale political or economic action. That is, no doubt, another reason why the analysis of global warming and climate change is such a contentious issue.
I’ve lost count of the number of people who have thanked me over the years for drawing their attention to the book “The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” written by Jonathan Haidt, Professor of Social Psychology at the University of Virginia. That was a book with far-reaching scope and penetrating insight. Many of the ideas and metaphors in it have since become fundamental building blocks for other writers to use – such as the pithy metaphor of the human mind being divided like a rider on an elephant, with the job of the rider (our stream of conscious reasoning) being to serve the elephant (the other 99% of our mental processes).
This weekend, I’ve been reading Haidt’s new book, “The Righteous Mind: Why Good People Are Divided by Politics and Religion”. It’s a great sequel. Like its predecessor, it ranges across more than 2,400 years of thought, highlighting how recent research in social psychology sheds clear light on age-old questions.
Haidt’s analysis has particular relevance for two deeply contentious sets of debates that each threaten to destabilise and divide contemporary civil society:
The “new atheism” critique of the relevance and sanctity of religion in modern life
The political fissures that are coming to the fore in the 2012 US election year – fissures I see reflected in messages full of contempt and disdain in the Facebook streams of several generally sensible US-based people I know.
There’s so much in this book that it’s hard to summarise it without doing an injustice to huge chunks of fascinating material:
the importance of an empirical approach to understanding human morality – an approach based on observation, rather than on a priori rationality
the insight that moral intuitions come first, and strategic reasoning comes second, to justify the intuitions we have already reached
there’s more to morality than concerns over harm and fairness; Haidt memorably says that “the righteous mind is like a tongue with six taste receptors”
the limitations of basing research findings mainly on ‘WEIRD‘ participants (people who are Western, Educated, Industrialised, Rich, and Democratic)
the case that biological “group selection” helped meld humans into cohesive groups (as opposed to natural selection operating just at the level of individual humans)
a metaphor that “human beings are 90 percent chimp and 10 percent bee”
the case that “The most powerful force ever known on this planet is human cooperation — a force for construction and destruction”
methods for flicking a “hive switch” inside human brains that open us up to experiences of self-transcendence (including a discussion of rave parties).

The following description of the book’s central argument is worth quoting in full:
Jonathan Haidt, the highly influential psychologist, is here to show us why we all find it so hard to get along. By examining where morality comes from, and why it is the defining characteristic of humans, Haidt will show why we cannot dismiss the views of others as mere stupidity or moral corruption. Our moral roots run much deeper than we realize. We are hardwired not just to be moral, but moralistic and self-righteous. From advertising to politics, morality influences all aspects of behaviour. It is the key to understanding everybody. It explains why some of us are liberals, others conservatives. It is often the difference between war and peace. It is also why we are the only species that will kill for an ideal.
Haidt argues we are always talking past each other because we are appealing to different moralities: it is not just about justice and fairness – for some people authority, sanctity or loyalty are more important. With new evidence from his own empirical research, Haidt will show it is possible to liberate us from the disputes that divide good people. We can either stick to comforting delusions about others, or learn some moral psychology. His hope is that ultimately we can cooperate with those whose morals differ from our own.
Many aspects of human life that at first seem weird and hard to explain can make a lot more sense once you see them from the viewpoint of evolution.
It was Richard Dawkins’ book “The Selfish Gene” which first led me to that conclusion, whilst I was still at university. After “The Selfish Gene”, I read “Sociobiology: the new synthesis“, by E.O. Wilson, which gave other examples. I realised it was no longer necessary to refer to concepts such as “innate wickedness” or “original sin” to explain why people often did daft things. Instead, people do things because (in part) of underlying behavioural patterns which tended to make their ancestors more likely to leave successful offspring.
In short, you can deepen your understanding of humans if you understand evolution – and the better you understand evolution, the better you will understand humans. On the whole, attempts to get humans to change their behaviour are more likely to succeed if they are grounded in an understanding of the real factors that led humans to behave as they do.
In a moment, I’ll come to some interesting new ideas about the role played by technology in evolution. But first, I’ll mention two other ways in which an improved understanding of evolution sheds richer light on the human condition.
1. Evolution often results in sub-optimal solutions
In places where an intelligent (e.g. human) designer would “go back to the drawing board” and introduce a new design template, biological evolution has been constrained to keep working with the materials that are already in play. Biological evolution lacks true foresight, and cannot do what human designers would call “re-factoring an existing design”.
I’ve written on this subject before, in my review “The human mind as a flawed creation of nature” of the book by Gary Marcus, “Kluge – the haphazard construction of the human mind” – so I won’t say much more about that particular topic right now. But I can’t resist including a link to a fascinating video in which Richard Dawkins demonstrates the absurdly non-optimal route taken by the laryngeal nerve of the giraffe. As Dawkins says in the video, this nerve “is a beautiful example of historical legacy, as opposed to design”. If you haven’t seen this clip before, it’s well worth watching, and thinking about the implications.
2. Evolution can operate at multiple levels
For a full understanding of evolution, you have to realise it can operate at multiple levels:
At the level of individual genes
At the level of individual organisms
At the level of groups of cooperating organisms.
At each level, there are behaviours which exist because they made it more likely for an entity (at that level) to leave descendants. For example, groups of animals tend to survive as a group, if individuals within that group are willing, from time to time, to sacrifice themselves for the sake of the group.
The notion of group selection is, however, controversial among evolutionary theorists. Part of the merit of books such as The Selfish Gene was that it showed how altruistic behaviour could be explained, in at least some circumstances, by looking at matters from the point of view of the survival of individual genes. If individual A sacrifices himself for the sake of individuals B and C within the same group, it may well be that B and C carry many of the same genes as individual A. This analysis seems to remove the need to invoke group selection. The major theoretical obstacle to the idea of group selection is as follows:
If individuals A1, A2, A3,… all have an instinct to sacrifice themselves for the sake of their wider group, it may well mean, other things being equal, that this group is initially more resilient than competing groups
However, an individual A4 who is individually selfish, within that group, will get the benefit of the success of the group, and the benefit of individual survival
So, over time, the group will tend to contain more individuals like the “free-rider” A4, and fewer like A1, A2, and A3
Therefore the group will degenerate into selfish behaviour … and this shows that the notion of “group selection” is flawed.
However, this argument has a flaw. Group selection can still apply, provided the group also has mechanisms to reduce free-riding behaviour by individuals
For example, people in the group might have strong instincts to condemn and punish people who try to take excess advantage of the generosity of others
So long as these mechanisms keep the prevalence of free-riding below a certain threshold, a group can reach a stable situation in which the altruism of the majority continues to benefit the group as a whole.
(To be clear: this kind of altruism generally looks favourably only at others within the same group. People who are outside your group won’t benefit from it. An injunction such as “love your neighbour as yourself” applied in practice only to people within your group – not to people outside it.)
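For what it’s worth, the free-rider dynamic and the punishment counter-measure can be illustrated with a toy simulation. This sketch is entirely my own (not from Haidt, Dawkins, or the formal literature), with arbitrary fitness numbers chosen purely for illustration:

```python
# Toy group-selection model: altruists ("A") benefit everyone in the
# group; free-riders ("F") take the benefit without paying the cost.
import random

def final_free_rider_share(punish, generations=200, size=100):
    population = ["A"] * 90 + ["F"] * 10
    for _ in range(generations):
        altruist_share = population.count("A") / size

        def fitness(agent):
            benefit = altruist_share              # shared group benefit
            bonus = 0.2 if agent == "F" else 0.0  # free-riders skip the cost
            penalty = 0.3 if (agent == "F" and punish) else 0.0
            return 1.0 + benefit + bonus - penalty

        # Next generation: reproduction in proportion to fitness
        weights = [fitness(agent) for agent in population]
        population = random.choices(population, weights=weights, k=size)
    return population.count("F") / size

random.seed(42)
print(final_free_rider_share(punish=False))  # free-riders take over
print(final_free_rider_share(punish=True))   # punishment keeps them rare
```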
At first sight, technology has little to do with evolution. Evolution occurred in bygone times, whilst technology is a modern development – right?
Not true. First, evolution is very much a present-day phenomenon (as well as something that has been at work throughout the whole history of life). Diseases evolve rapidly, under pressures of different regimes of anti-bacterial cocktails. And there is evidence that biological evolution still occurs for humans. A 2009 article in Time magazine was entitled “Darwin Lives! Modern Humans Are Still Evolving“. Here’s a brief extract:
One study, published in PNAS in 2007 and led by John Hawks, an anthropologist at the University of Wisconsin at Madison, found that some 1,800 human gene variations had become widespread in recent generations because of their modern-day evolutionary benefits. Among those genetic changes, discovered by examining more than 3 million DNA variants in 269 individuals: mutations that allow people to digest milk or resist malaria and others that govern brain development.
Second, technology is itself an ancient phenomenon – starting with the creative use of sticks and stones, which gave early humans benefits including fire, weapons, and clothing. What’s more, the advantages of tool use had a strange side-effect on human genetic evolution: as we became technologically stronger, we also became biologically weaker. The Time magazine article mentioned above goes on to state the following:
According to anthropologist Peter McAllister, author of “Manthropology: the Science of Inadequate Modern Man“, the contemporary male has evolved, at least physically, into “the sorriest cohort of masculine Homo sapiens to ever walk the planet.” Thanks to genetic differences, an average Neanderthal woman, McAllister notes, could have whupped Arnold Schwarzenegger at his muscular peak in an arm-wrestling match. And prehistoric Australian Aborigines, who typically built up great strength in their joints and muscles through childhood and adolescence, could have easily beat Usain Bolt in a 100-m dash.
These ideas are developed into a full theory in the book “The Artificial Ape: How Technology Changed the Course of Human Evolution” by archaeologist Timothy Taylor. Here’s how the publisher describes it:

A breakthrough theory that tools and technology are the real drivers of human evolution.
Although humans are one of the great apes, along with chimpanzees, gorillas, and orangutans, we are remarkably different from them. Unlike our cousins who subsist on raw food, spend their days and nights outdoors, and wear a thick coat of hair, humans are entirely dependent on artificial things, such as clothing, shelter, and the use of tools, and would die in nature without them. Yet, despite our status as the weakest ape, we are the masters of this planet. Given these inherent deficits, how did humans come out on top?
In this fascinating new account of our origins, leading archaeologist Timothy Taylor proposes a new way of thinking about human evolution through our relationship with objects. Drawing on the latest fossil evidence, Taylor argues that at each step of our species’ development, humans made choices that caused us to assume greater control of our evolution. Our appropriation of objects allowed us to walk upright, lose our body hair, and grow significantly larger brains. As we push the frontiers of scientific technology, creating prosthetics, intelligent implants, and artificially modified genes, we continue a process that started in the prehistoric past, when we first began to extend our powers through objects.
Weaving together lively discussions of major discoveries of human skeletons and artifacts with a reexamination of Darwin’s theory of evolution, Taylor takes us on an exciting and challenging journey that begins to answer the fundamental question about our existence: what makes humans unique, and what does that mean for our future?
In an interview about the book, Taylor gave a flavour of the argument:

Upright female hominins walking the savannah had a real problem: their babies couldn’t cling to them the way a chimp baby could cling to its mother. Carrying an infant would have been the highest drain on energy for a hominin female – higher than lactation. So what did they do? I believe they figured out how to carry their newborns using a loop of animal tissue. Evidence of the slings hasn’t survived, but in the same way that we infer lungs and organs from the bones of fossils that survive, it is from the stone tools that we can infer the bits that don’t last: things made from sinew, wood, leather and grasses…
Once you have slings to carry babies, you have broken a glass ceiling – it doesn’t matter whether the infant is helpless for a day, a month or a year. You can have ever more helpless young and that, as far as I can see, is how encephalisation took place in the genus Homo. We used technology to turn ourselves into kangaroos. Our children are born more and more underdeveloped because they can continue to develop outside the womb – they become an extra-uterine fetus in the sling. This means their heads can continue to grow after birth, solving the smart biped paradox. In that sense technology comes before the ascent to Homo. Our brain expansion only really took off half a million years after the first stone tools. And they continued to develop within an increasingly technological environment…
I’ve ordered Taylor’s book from Amazon and I expect it to be waiting for me at my home in the UK once I return from my current trip in Asia. I’m also looking forward to hosting a discussion meeting on Saturday 11th Sept under the auspices of Humanity+ UK in London, where Timothy Taylor himself will be the main speaker. People on Facebook can register their interest in this meeting by RSVPing here. There’s no charge to attend.
Another option to see Timothy Taylor lecture in person – for those able to spare time in the middle of the day on a Thursday (9th Sept) – will be at the RSA. I expect there will be good discussion at both events, but the session at H+UK is longer (two hours, as opposed to just one at the RSA), and I expect more questions there about matters such as the likely role of technology in radically re-shaping the future development of humans.
Footnote: of course, the fact that evolution guided our ancestors to behave in certain ways is no reason for us to want to continue to behave in these ways. But understanding the former is, in my view, very useful background knowledge for being able to devise practical measures to change ourselves.
Recently, there have been several RSA meetings addressing the need for significant reform of how the global economy operates. Otherwise, these speakers imply, the future will be much bleaker than the present.
One of these speakers was Tim Jackson, Professor of Sustainable Development at the University of Surrey, whose book “Prosperity Without Growth: Economics for a Finite Planet” sets out his argument in depth. I find myself in agreement with a great deal of what the book says:
Continuous economic growth is a shallow and, by itself, dangerous goal;
Beyond an initial level, greater wealth has only a weak correlation with greater prosperity;
Greater affluence can bring malaise – especially in countries with significant internal inequalities;
Consumers frequently find themselves spending money they don’t have, to buy new goods they don’t really need;
The recent economic crisis provides us with an important opportunity to reflect on the operation of economics;
“Business as usual” is not a sustainable answer;
There is an imperative to consider whether society can operate without its existing commitment to regular GDP growth.
What makes this book stand out is its recognition of the enormous practical problems in stopping growth. Both growth and de-growth face significant perils. As the start of chapter 12 of the book states:
Society is faced with a profound dilemma. To resist growth is to risk economic and social collapse. To pursue it relentlessly is to endanger the ecosystems on which we depend for long-term survival.
For the most part, this dilemma goes unrecognised in mainstream policy… When reality begins to impinge on the collective consciousness, the best suggestion to hand is that we can somehow ‘decouple‘ growth from its material impacts…
The sheer scale of this task is rarely acknowledged. In a world of 9 billion people all aspiring to western lifestyles, the carbon intensity of every dollar of output must be at least 130 times lower in 2050 than it is today…
Never mind that no-one knows what such an economy looks like. Never mind that decoupling isn’t happening on anything like that scale. Never mind that all our institutions and incentive structures continually point in the wrong direction. The dilemma, once recognised, looms so dangerously over our future that we are desperate to believe in miracles. Technology will save us. Capitalism is good at technology…
This delusional strategy has reached its limits. Simplistic assumptions that capitalism’s propensity for efficiency will stabilise the climate and solve the problem of resource scarcity are almost literally bankrupt. We now stand in urgent need of a clearer vision, braver policy-making, something more robust in the way of a strategy with which to confront the dilemma of growth.
The starting point must be to unravel the forces that keep us in damaging denial. Nature and structure conspire together here. The profit motive stimulates a continual search for newer, better or cheaper products and services. Our own relentless search for novelty and social status locks us into an iron cage of consumerism. Affluence itself has betrayed us.
Affluence breeds – and indeed relies on – the continual production and reproduction of consumer novelty. But relentless novelty reinforces anxiety and weakens our ability to protect long-term social goals. In doing so it ends up undermining our own well-being and the well-being of those around us. Somewhere along the way, we lose the shared prosperity we sought in the first place.
None of this is inevitable. We can’t change ecological limits. We can’t alter human nature. But we can and do create and recreate the social world. Its norms are our norms. Its visions are our visions. Its structures and institutions shape and are shaped by those norms and visions. This is where transformation is needed…
As I said, I find myself in agreement with a great deal of what the book says. The questions raised in the book deserve a wide hearing. Society needs higher overarching goals than merely increasing our GDP. Society needs to focus on new priorities, which take into account the finite nature of the resources available to us, and the risks of imminent additional ecological and economic disaster.
However, I confess to being one of the people who believe (with some caveats…) that “technology will save us“. Let’s look again at this figure of a 130-fold decrease needed, between now and 2050.
The figure of 130 comes from a calculation in chapter 5 of the book. I have no quibble with the figure. It comes from the Paul Ehrlich equation:
I = P * A * T
where:
I is the impact on the environment resulting from consumption
P is the population
A is the consumption or income level per capita (affluence)
T is the technological intensity of economic output.
Jackson’s book considers various scenarios. Scenario 4 assumes a global population of 9 billion by 2050, all enjoying a lifestyle equivalent to that of the average EU citizen – a lifestyle which has itself grown by a modest 2% per annum over the intervening 40 years. Under these assumptions, to bring the overall impact I – total carbon emissions – down to the level seen by the IPCC as required to avoid catastrophic climate change will require a 130-fold reduction in T in the meantime.
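The structure of that calculation can be sketched as follows. The rearrangement of the equation is exact, but the input numbers below are illustrative placeholders of my own choosing, not Jackson’s actual figures:

```python
# From I = P * A * T: if impact I must fall by some factor while
# population P and affluence A grow, technology T must improve by the
# product of all three factors.
def required_T_improvement(I_reduction, P_growth, A_growth):
    return I_reduction * P_growth * A_growth

# Illustrative inputs only: emissions cut severalfold, population up
# ~30% by 2050, and affluence up severalfold as 9 billion people
# converge on an EU lifestyle growing at 2% p.a. (1.02**40 alone ~2.2).
print(required_T_improvement(I_reduction=8,
                             P_growth=1.3,
                             A_growth=12.5))  # = 130.0
```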
How feasible is an improvement factor of 130 in technology, over the next 40 years? How good is the track record of technology at solving this kind of problem?
Some of the other speakers at the RSA event were hesitant to make any predictions for a 40-year time period. They noted that history has a habit of making this kind of prediction irrelevant. Jackson’s answer is that, since we have little confidence in making a significant change in T, we should look to ways to reduce A. Jackson is also worried that recent talk of a ‘Green New Deal’:
Is still couched in the language of economic growth, rather than improvement in prosperity;
Has seen little translation into action, since first raised during 2008-9.
My own answer is that 130 represents just over 7 doublings (2 raised to the power 7 is 128), and that at least some parts of technology have no problem improving by seven doubling generations over 40 years. Indeed, taking two years as the usual Moore’s Law doubling period for improvements in semiconductor density, this kind of improvement would require only 14 years, rather than 40.
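A quick check of that arithmetic:

```python
import math

print(math.log2(130))  # ~7.02: "just over 7 doublings"
print(2 ** 7)          # 128
print(40 / 7.02)       # ~5.7 years available per doubling
print(7.02 * 2)        # ~14 years at Moore's Law pace (one doubling per 2 years)
```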
For example, consider this report of a prediction about solar energy made by Ray Kurzweil:

Futurist and inventor Ray Kurzweil is part of a distinguished panel of engineers that says solar power will scale up to produce all the energy needs of Earth’s people in 20 years.
There is 10,000 times more sunlight than we need to meet 100 percent of our energy needs, he says, and the technology needed for collecting and storing it is about to emerge as the field of solar energy is going to advance exponentially in accordance with Kurzweil’s Law of Accelerating Returns. That law yields a doubling of price performance in information technologies every year.
Kurzweil, author of “The Singularity Is Near” and “The Age of Intelligent Machines,” worked on the solar energy solution with Google Co-Founder Larry Page as part of a panel of experts convened by the National Academy of Engineering to address the 14 “grand challenges of the 21st century,” including making solar energy more economical. The panel’s findings were announced last week at the annual meeting of the American Association for the Advancement of Science.
Solar and wind power currently supply about 1 percent of the world’s energy needs, Kurzweil said, but advances in technology are about to expand with the introduction of nano-engineered materials for solar panels, making them far more efficient, lighter and easier to install. Google has invested substantially in companies pioneering these approaches.
Regardless of any one technology, members of the panel are “confident that we are not that far away from a tipping point where energy from solar will be [economically] competitive with fossil fuels,” Kurzweil said, adding that it could happen within five years.
The reason why solar energy technologies will advance exponentially, Kurzweil said, is because it is an “information technology” (one for which we can measure the information content), and thereby subject to the Law of Accelerating Returns.
“We also see an exponential progression in the use of solar energy,” he said. “It is doubling now every two years. Doubling every two years means multiplying by 1,000 in 20 years. At that rate we’ll meet 100 percent of our energy needs in 20 years.”
Other technologies that will help are solar concentrators made of parabolic mirrors that focus very large areas of sunlight onto a small collector or a small efficient steam turbine. The energy can be stored using nano-engineered fuel cells, Kurzweil said.
“You could, for example, create hydrogen or hydrogen-based fuels from the energy produced by solar panels and then use that to create fuel for fuel cells”, he said. “There are already nano-engineered fuel cells, microscopic in size, that can be scaled up to store huge quantities of energy”, he said…
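Again, the underlying arithmetic is easy to check (this verification is mine, not part of the report):

```python
import math

# Doubling every 2 years for 20 years multiplies capacity by 2**10
print(2 ** (20 / 2))       # 1024: the "multiplying by 1,000 in 20 years"

# Even ignoring demand growth, going from a 1% share to 100% needs
# log2(100) doublings:
print(2 * math.log2(100))  # ~13.3 years of two-year doublings
```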
To be clear, I don’t see any of this as inevitable. The economy as a whole could falter again, jeopardising “Kurzweil’s Law of Accelerating Returns”. Less dramatically, Moore’s Law could run out of steam, or it might prove harder than expected to apply silicon improvements in systems for generating, storing, and transporting energy. I therefore share Professor Jackson’s warning that capitalism, by itself, cannot be trusted to get the best out of technology. That’s why this debate is particularly important.