dw2

31 July 2020

The future of AI: 12 possible breakthroughs, and beyond

Filed under: AGI, books, disruption — David Wood @ 1:30 pm

The AI of 5-10 years’ time could be very different from today’s AI. The most successful AI systems of that era will not simply be extensions of today’s deep neural networks. Instead, they are likely to include significant conceptual breakthroughs or other game-changing innovations.

That was the argument I made in a presentation on Thursday to the Global Data Sciences and Artificial Intelligence meetup. The chair of that meetup, Pramod Kunji, kindly recorded the presentation.

You can see my opening remarks in this video:

A copy of my slides can be accessed on Slideshare.

The ideas in this presentation raise many important questions, for which there are, as yet, only incomplete answers.

Indeed, the future of AI is a massive topic, touching nearly every area of human life. The greater the possibility that AI will experience cascading improvements in capability, the greater the urgency of exploring these scenarios in advance. In other words, the greater the need to set aside hype and predetermined ideas, in order to assess matters objectively and with an independent mind.

For that reason, I’ve joined with Rohit Talwar of Fast Future and Ben Goertzel of SingularityNET in a project to commission and edit chapters in a forthcoming book, “The Future of AI: Pathways to Artificial General Intelligence”.


We’re asking AI researchers, practitioners, analysts, commentators, policy makers, investors, futurists, economists, and writers from around the world, to submit chapters of up to 1,000 words, by the deadline of 15th September, that address one or more of the following themes:

  • Capability, Applications, and Impacts
    • How might the capabilities of AI systems evolve in the years ahead?
    • What can we anticipate about the potential evolution from today’s AI to AGI and beyond, in which software systems will match or exceed human cognitive abilities in every domain of thought?
    • What possible scenarios for the emergence of significantly more powerful AI deserve the most attention?
    • What new economic concepts, business models, and intellectual property ownership frameworks might be enabled and required as a result of advances that help us transition from today’s AI to AGI?
  • Pathways to AGI
    • What incremental steps might help drive practical commercial and humanitarian AI applications in the direction of AGI?
    • What practical ideas and experiences can be derived from real-world applications of technologies like transfer learning, unsupervised and reinforcement learning, and lifelong learning?
    • What are the opportunities and potential for “narrow AGI” applications that bring increasing levels of AGI to bear within specific vertical markets and application areas?
  • Societal Readiness
    • How can we raise society-wide awareness and understanding of the underlying technologies and their capabilities?
    • How can governments, businesses, educators, civil society organizations, and individuals prepare for the range of possible impacts and implications?
    • What other actions might be taken by individuals, by local groups, by individual countries, by non-governmental organizations (NGOs), by businesses, and by international institutions, to help ensure positive outcomes with advanced AI? How might we reach agreement on what constitutes a positive societal outcome in the context of AI and AGI?
  • Governance
    • How might societal ethical frameworks need to evolve to cope with the new challenges and opportunities that AGI is likely to bring?
    • What preparations can be made, at the present time, for the introduction and updating of legal and political systems to govern the development and deployment of AGI?

For more details of this new book, the process by which chapters will be selected, and processing fees that may apply, click here.

I’m very much looking forward to the insights that will arise – and to the critical new questions that will no doubt arise along the way.

 

18 June 2020

Transhumanist alternatives to contempt and fear

Contempt and fear. These are the public reactions that various prominent politicians increasingly attract these days.

  • We feel contempt towards these politicians because they behave, far too often, in contemptible ways.
  • We feel fear regarding these politicians on account of the treacherous paths they appear to be taking us down.

That’s why many fans of envisioning and building a better world – including many technologists and entrepreneurs – would prefer to ignore politics, or to minimise its influence.

These critics of politics wish, instead, to keep their focus on creating remarkable new technology or on building vibrant new business.

Politics is messy and ugly, say these critics. It’s raucous and uncouth. It’s unproductive. Some would even say that politics is unnecessary. They look forward to politics reducing in size and influence.

Their preferred alternative to contempt and fear is to try to put the topic out of their minds.

I disagree. Putting our heads in the sand about politics is a gamble fraught with danger. Looking the other way won’t prevent our necks from being snapped when the axe falls. As bad outcomes increase from contemptible, treacherous politics, they will afflict everyone, everywhere.

We need a better alternative. Rather than distancing ourselves from the political sphere, we need to engage, intelligently and constructively.

As I’ll review below, technology can help us in that task.

Constructive engagement

Happily, as confirmed by positive examples from around the world, there’s no intrinsic reason for politics to be messy or ugly, raucous or uncouth.

Nor should politics be seen as some kind of unnecessary activity. It’s a core part of human life.

Indeed, politics arises wherever people gather together. Whenever we collectively decide the constraints we put on each other’s freedom, we’re taking part in politics.

Of course, this idea of putting constraints on each other’s freedoms is deeply unpopular in some circles. Liberty means liberty, comes the retort.

My answer is: things are more complicated. That’s for two reasons.

To start with, there are multiple kinds of freedom, each of which is important.

For example, consider the “four essential human freedoms” highlighted by US President FD Roosevelt in a speech in January 1941:

We look forward to a world founded upon four essential human freedoms.

The first is freedom of speech and expression – everywhere in the world.

The second is freedom of every person to worship God in their own way – everywhere in the world.

The third is freedom from want – which, translated into world terms, means economic understandings which will secure to every nation a healthy peacetime life for its inhabitants – everywhere in the world.

The fourth is freedom from fear – which, translated into world terms, means a world-wide reduction of armaments to such a point and in such a thorough fashion that no nation will be in a position to commit an act of physical aggression against any neighbour – anywhere in the world.

As well as caring about freeing people from constraints on their thoughts, speech, and actions, we generally also care about freeing people from hunger, disease, crime, and violence. Steps to loosen some of these constraints often risk decreasing other types of liberty. As I said, things are complicated.

The second reason builds on the previous point and makes it clearer why any proclamation “liberty means liberty” is overly simple. It is that our actions impact on each other’s wellbeing, both directly and indirectly.

  • If we speed in our cars, confident in our own ability to drive faster than the accepted norms, we risk seriously reducing the personal liberties of others if we suffer a momentary lapse in concentration.
  • If we share a hateful and misleading message on social media, confident in our own intellectual robustness, we might push someone reading that message over a psychological ledge.
  • If we discard waste products into the environment, confident that little additional harm will come from such pollution, we risk an unexpected accumulation of toxins and other harms.
  • If we grab whatever we can in the marketplace, confident that our own vigour and craftiness deserve a large reward, we could deprive others of the goods, services, and opportunities they need to enjoy a good quality of life.
  • If we publicise details of bugs in software that is widely used, or ways to increase the deadliness of biological pathogens, confident that our own reputation will rise as a result inside the peer groups we wish to impress, we risk enabling others to devastate the infrastructures upon which so much of life depends – electronic infrastructure and/or biological infrastructure.
  • If we create and distribute software that can generate mind-bending fake videos, we risk precipitating a meltdown in the arena of public discussion.
  • If we create and distribute software that can operate arsenals of weapons autonomously, freed from the constraints of having to consult slow-thinking human overseers before initiating an attack, we might gain lots of financial rewards, but at the risk of all manner of catastrophe from any defects in the design or implementation of that system.

In all these examples, there’s a case to agree some collective constraints on personal freedoms.

The rationale for imposing and accepting specific constraints on our freedom is in order to secure a state of affairs where overall freedom flourishes more fully. That’s a state of affairs in which we will all benefit.

In summary, greater liberty arises as a consequence of wise social coordination, rather than existing primarily as a reaction against such coordination. Selecting and enforcing social constraints is the first key task of politics.

Recognising and managing complexes

But who is the “we” who decides these constraints? And who will ensure that constraints put in place at one time, reflecting the needs of that time, are amended promptly when circumstances change, rather than remaining in place, disproportionately benefiting only a subset of society?

That brings us to a second key task of politics: preventing harmful dominance of society by self-interested groups of individuals – groups sometimes known as “complexes”.

This concept of the complex featured in the farewell speech made by President Eisenhower in January 1961. Eisenhower issued a profound warning that “the military-industrial complex” posed a growing threat to America’s liberty and democracy:

In the councils of government, we must guard against the acquisition of unwarranted influence, whether sought or unsought, by the military-industrial complex. The potential for the disastrous rise of misplaced power exists and will persist.

We must never let the weight of this combination endanger our liberties or democratic processes. We should take nothing for granted. Only an alert and knowledgeable citizenry can compel the proper meshing of the huge industrial and military machinery of defence with our peaceful methods and goals, so that security and liberty may prosper together.

As a distinguished former military general, Eisenhower spoke with evident authority on this topic:

Until the latest of our world conflicts, the United States had no armaments industry. American makers of ploughshares could, with time and as required, make swords as well. But now we can no longer risk emergency improvisation of national defence; we have been compelled to create a permanent armaments industry of vast proportions. Added to this, three and a half million men and women are directly engaged in the defence establishment. We annually spend on military security more than the net income of all United States corporations.

This conjunction of an immense military establishment and a large arms industry is new in the American experience. The total influence – economic, political, even spiritual – is felt in every city, every Statehouse, every office of the Federal government. We recognize the imperative need for this development. Yet we must not fail to comprehend its grave implications. Our toil, resources and livelihood are all involved; so is the very structure of our society.

It’s one thing to be aware of the risks posed by a military-industrial complex (and the associated trade in armaments). It’s another thing to manage these risks successfully. Similar risks apply to other vested-interest “complexes” that can likewise subvert societal wellbeing:

  • A carbon energy complex, which earns huge profits from the ongoing use of carbon-based fuels, and which is motivated to minimise appreciation of the risks to climate from continuing use of these fuels
  • A financial complex, which (likewise) earns huge profits, by means of complicated derivative products that are designed to evade regulatory scrutiny whilst benefiting in cases of financial meltdown from government handouts to banks that are perceived as “too big to fail”
  • An information technology complex, which collects vast amounts of data about citizens, and which enables unprecedented surveillance, manipulation, and control of people by corporations and/or governments
  • A medical industrial complex, which is more interested in selling patients expensive medical treatment over a long period of time than in low-cost solutions which would prevent illnesses in the first place (or cure them quickly)
  • A political complex, which seeks above all else to retain its hold on political power, often by means of undermining a free press, an independent judiciary, and any credible democratic opposition.

You can probably think of other examples.

In all these cases, the practical goals of the complex are only weakly aligned with the goals of society as a whole. If society is not vigilant, the complex will subvert the better intentions of citizens. The complex is so powerful that it cannot be controlled by mere words of advocacy.

Beyond advocacy, we need effective politics. This politics can be supported by a number of vital principles:

  • Transparency: The operations of the various complexes need to be widely publicised and analysed, bringing them out of the shadows into the light of public understanding
  • Disclosure: Conflicts of interest must be made clear, to avoid the public being misled by individuals with ulterior motives
  • Accountability: Instances where key information is found to have been suppressed or distorted need to be treated very seriously, with the guilty parties having their reputations adjusted and their privileges diminished
  • Assessment of externalities: Evaluation systems should avoid focusing too narrowly on short-term metrics (such as financial profit) but should take into full account both positive and negative externalities – including new opportunities and new risks arising
  • Build bridges rather than walls: Potential conflicts should be handled by diplomacy, negotiation, and seeking a higher common purpose, rather than by driving people into antagonistic rival camps that increasingly bear hatred towards one another
  • Leanness: Decisions should focus on questions that matter most, rather than dictating matters where individual differences can easily be tolerated
  • Democratic oversight: People in leadership positions in society should be subject to regular assessment of their performance by a democratic review that involves a dynamic public debate aiming to reach a “convergent opinion” rather than an “average opinion”.

Critically, all the above principles can be assisted by smart adoption of technology that enhances collaboration. This includes wikis (or similar) that map out the landscape of decisions. This also includes automated logic-checkers, and dynamic modelling systems. And that’s just the start of how technology can help support a better politics.
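To make the idea of an automated logic-checker concrete, here is a minimal toy sketch (it does not represent any specific product or the tools mentioned above). It assumes a hypothetical setup where policy claims are given short labels and pairs of claims can be marked as mutually contradictory; the checker then flags any contradictory pairs within a set of endorsed positions, so that the tension can be surfaced for public debate rather than passed over.

```python
# Toy consistency checker over labelled policy claims.
# Claim labels and contradiction pairs below are illustrative assumptions.

from itertools import combinations

# Pairs of claims that cannot coherently be endorsed together.
CONTRADICTIONS = {
    frozenset({"expand_surveillance", "strengthen_privacy"}),
    frozenset({"cut_all_regulation", "regulate_externalities"}),
}

def find_conflicts(endorsed: set[str]) -> list[frozenset]:
    """Return every endorsed pair that is marked as contradictory."""
    return [frozenset(pair)
            for pair in combinations(sorted(endorsed), 2)
            if frozenset(pair) in CONTRADICTIONS]

conflicts = find_conflicts(
    {"expand_surveillance", "strengthen_privacy", "regulate_externalities"}
)
print(conflicts)  # the surveillance/privacy pair is flagged for debate
```

A real system would of course need far richer claim representations and argument maps, but even this skeletal form shows the principle: making inconsistencies explicit and machine-detectable, rather than leaving them buried in rhetoric.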

Transhumanist approaches to politics

The view that technology can assist humans to carry out core parts of our lives better than before is part of the worldview known as transhumanism.

Transhumanism asserts, further, that the assistance available from technology, wisely applied, extends far beyond superficial changes. What lies within our grasp is a set of radical improvements in the human condition.

As explained in the short video “An Introduction to Transhumanism” – which, with over a quarter of a million views, is probably the most widely watched video on the subject – transhumanism is sometimes expressed in terms of the so-called “three supers”:

  • Super longevity: significantly improved physical health, including much longer lifespans – transcending human tendencies towards physical decay and decrepitude
  • Super intelligence: significantly improved thinking capability – transcending human tendencies towards mental blind spots and collective stupidity
  • Super wellbeing: significantly improved states of consciousness – transcending human tendencies towards depression, alienation, vicious emotions, and needless suffering.

My own advocacy of transhumanism actually emphasises one variant within the overall set of transhumanist philosophies: the variant known as technoprogressive transhumanism. The technoprogressive variant of transhumanism in effect adds one more “super” to the three already mentioned:

  • Super democracy: significantly improved social inclusion and resilience, whilst upholding diversity and liberty – transcending human tendencies towards tribalism, divisiveness, deception, and the abuse of power.

These radical improvements, by the way, can be brought about by a combination of changes at the level of individual humans, changes in our social structures, and changes in the prevailing sets of ideas (stories) that we tend to tell ourselves. Exactly what is the best combination of change initiatives, at these different levels, is something to be determined by a mix of thought and experiment.

Different transhumanists place their emphases upon different priorities for potential transformation.

If you’d like to listen in to that ongoing conversation, let me draw your attention to the London Futurists webinar taking place this Saturday – 20th of June – from 7pm UK time (BST).

In this webinar, four leading transhumanists will be discussing and contrasting their different views on the following questions (along with others that audience members raise in real time):

  • In a time of widespread anxiety about social unrest and perceived growing inequalities, what political approach is likely to ensure the greatest liberty?
  • In light of the greater insights provided by science into human psychology at both the individual and group levels, what are the threats to our wellbeing that most need to be guarded against, and which aspects of human character most need to be protected and uplifted?
  • What does the emerging philosophy of transhumanism, with its vision of conscious life evolving under thoughtful human control beyond the current human form, have to say about potential political interventions?

As you can see, the webinar is entitled “Politics for greater liberty: transhumanist perspectives”. The panellists are:

For more details, and to register to attend, click here.

Other views on the future of governance and the economy

If you’d like to hear a broader set of views on a related topic, then consider attending a Fast Future webinar taking place this Sunday – 21st June – from 6pm UK time (BST).

There will be four panellists in that webinar – one being me. We’ll each be presenting a snapshot of ideas from the chapters we contributed to the recent Fast Future book, Aftershocks and Opportunities – Scenarios for a Post-Pandemic Future, which was published on June 1st.

After the initial presentations, we’ll be responding to each other’s views, and answering audience questions.

My own topic in this webinar will be “More Aware, More Agile, More Alive”.

The other panellists, and their topics, will be:

  • Geoff Mulgan – “Using the Crisis to Remake Government for the Future”
  • Bronwyn Williams – “The Great Separation”
  • Rohit Talwar – “Post-Pandemic Government and the Economic Recovery Agenda: A Futurist Perspective”

I’m looking forward to a lively discussion!

Click here for more details of this event.

Transcending Politics

As I said above (twice), things are complicated. The science and engineering behind the various technological solutions are complicated. And the considerations about regulations and incentives, to constrain and guide our collective use of that technology, are complicated too. We should beware any overly simple claims about easy answers to these issues.

My fullest treatment of these issues is in a 423-page book of mine, Transcending Politics, which I published in 2018.

Over the last couple of weeks, I’ve been flicking through some of the pages of that book again. Although there are some parts where I would now wish to use a different form of expression, or some updated examples, I believe the material stands the test of time well.

If the content in this blogpost strikes you as interesting, why not take a closer look at that book? The book’s website contains opening extracts of each of the chapters, as well as an extended table of contents. I trust you’ll like it.
