What are the best options for controlling artificial superintelligence?
Should we confine it in some kind of box (or simulation), to prevent it from roaming freely over the Internet?
Should we hard-wire into its programming a deep respect for humanity?
Should we prevent it from having any sense of agency or ambition?
Should we ensure that, before it takes any action, it always double-checks its plans with human overseers?
Should we create dedicated “narrow” intelligence monitoring systems, to keep a vigilant eye on it?
Should we build in a self-destruct mechanism, just in case it stops responding to human requests?
Should we insist that it share its greater intelligence with its human overseers (in effect turning them into cyborgs), to avoid humanity being left behind?
More drastically, should we simply prevent any such systems from coming into existence, by forbidding any research that could lead to artificial superintelligence?
Alternatively, should we give up on any attempt at control, and trust that the superintelligence will be thoughtful enough to always “do the right thing”?
Or is there a better solution?
If you have clear views on this question, I’d like to hear from you.
I’m looking for speakers for a forthcoming London Futurists online webinar dedicated to this topic.
I envision three speakers each taking up to 15 minutes to set out their proposals. Once all the proposals are on the table, the real discussion will begin – with the speakers interacting with each other and responding to questions raised by the live audience.
The date for this event remains to be determined. I will find a date that is suitable for the speakers who have the most interesting ideas to present.
As I said, please get in touch if you have questions or suggestions about this event.
Image credit: the above graphic includes work by Pixabay user Geralt.
PS For some background, here’s a video recording of the London Futurists event from last Saturday, in which Roman Yampolskiy gave several reasons why control of artificial superintelligence will be deeply difficult.
For other useful background material, see the videos on the Singularity page of the Vital Syllabus project.