dw2

24 August 2012

Duplication stuplication

Filed under: Accenture, Android, brain simulation, Connectivity, cryonics, death, futurist, Symbian — David Wood @ 12:04 am

I had a mixture of feelings when I looked at the display of the Agenda application on my Samsung Note smartphone:

On the face of things, I was going to be very busy at 09:00 that morning – I had five simultaneous meetings to attend!

But they were all the same meeting. And in fact I had already cancelled that meeting. Or, at least, I had tried to cancel that meeting. I had tried to cancel it several times.

The meeting in question – “TPR” – the Technology Planning Review that I chair from time to time inside Accenture Mobility – is a meeting I had organised, on a regularly repeating basis. This particular entry was set to repeat every four weeks. Some time earlier, I had decided that this meeting no longer needed to happen. From my Outlook Calendar on my laptop, I had pressed the button that, ordinarily, would have sent cancellation messages to all attendees. At first, things seemed to go well – the meeting disappeared from sight in my Outlook calendar.

However, a couple of hours later, I noticed it was still there, or had re-appeared. Without giving the matter much thought, I imagined I must have experienced some finger problem, and I repeated the cancellation process.

Some time later, I glanced at my engagements for that day on my smartphone – and my heart sank. The entry was shown no less than nine times, stacked on top of each other. One, two, three, four, five, six, seven, eight, nine. Woops.

(The screenshot above only shows the entry appearing five times. That’s because I deleted four of the occurrences before I had the presence of mind to record the image for posterity.)

To tell the truth, I also had a wry, knowing smile. It was a kind of “aha, this confirms that synchronising agendas can be hard” smile. “Thank goodness there are duplicate entry bugs on Android phones too!”

That uncharitable thought had its roots in many moments of personal embarrassment over the years, whenever I saw examples of duplicated entries on phones running Symbian OS. The software that synchronised agenda information across more than one device – for example, between a PC and a connected Symbian smartphone – got into a confused state on too many occasions. Symbian software had many strengths, but laser accuracy of agenda entry synchronisation was not one of them.

But in this case, there was no Symbian software involved. The bug – whatever it was – could not be blamed on any software (such as Symbian OS) for which I personally had any responsibility.

Nevertheless, I still felt bad. The meeting entry that I had created, and had broadcast to a large number of my Accenture Mobility colleagues, was evidently misbehaving on their calendars. I had to answer several emails and instant messaging queries: Is this meeting happening or not?

Worse, the same problem applied to every one of the repeating entries in the series. Entries continue to appear in the calendars of many of my Accenture colleagues, once every four weeks, encouraging them to show up for a meeting that is no longer taking place.

Whether I tried to cancel all the entries in the series, or just an individual entry, the result was the same. Whether I cancelled them from my smartphone calendar or from Outlook on my laptop, the result was the same. Namely, the entry disappeared for a while, but re-appeared a few hours later.

Today I tried again. Looking ahead to the meeting slot planned for 30th August, I thought I would be smart, and deleted the entry both from my smartphone calendar and from Outlook on my laptop, within a few seconds of each other, just in case a defective synchronisation between the two devices was to blame. You guessed it: the result was the same. (Though this time it was about three hours before the entry re-appeared, and I was starting to think I had cracked it after all.)

So what’s going on? I found a clue in an unexpected place – the email folder of Deleted Items in Microsoft Outlook. This showed an email that was unread, but which had somehow moved directly into the Deleted Items folder, without me seeing it.

The entry read as follows:

Microsoft Outlook on behalf of <one of the meeting participants>

One or more problems with this meeting were detected by Exchange 2010.

This meeting is missing from your calendar. You’re the meeting organizer and some attendees still have the meeting on their calendar.

And just as Outlook had silently moved this email into the Deleted Items folder, without drawing my attention to it, Outlook had also silently reinstated the meeting, in my calendar and (it seems) in everyone else’s calendar, without asking me whether or not that was a good idea. Too darned clever.

I still don’t know how to fix this problem. I half-suspect there’s been some kind of database corruption problem – perhaps caused by Microsoft Exchange being caught out by:

  • Very heavy usage from large numbers of employees (100s of 1000s) within one company
  • Changes in policy for how online meetings are defined and operated, in between when the meeting was first created, and when it was due to take place
  • The weird weather we’ve experienced in London this summer
  • Some other who-knows-what strange environmental race conditions.

However, I hear tales of other colleagues experiencing similar issues with repeating entries they’ve created, which provides more evidence of a concrete software defect, rather than a random act of the universe.
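The reappear-after-deletion behaviour is, incidentally, what you would expect from a synchroniser that merges replicas without remembering deletions. Here’s a minimal sketch in Python – my own illustration, not Exchange’s actual protocol – of why a naive merge resurrects a cancelled entry, and how a “tombstone” record avoids the problem:

```python
# Two replicas of a calendar, synchronised by a naive "union" merge.
# A deletion leaves no trace, so the other replica "helpfully" restores it.

def naive_sync(a: set, b: set) -> set:
    # Keep any entry that either side still has.
    return a | b

laptop = {"TPR 09:00"}
phone = {"TPR 09:00"}

laptop.discard("TPR 09:00")          # cancel the meeting on the laptop...
merged = naive_sync(laptop, phone)   # ...and the phone puts it straight back
print(merged)                        # {'TPR 09:00'}

# With tombstones, the deletion is itself a record that propagates:

def tombstone_sync(entries_a, tombs_a, entries_b, tombs_b):
    tombs = tombs_a | tombs_b
    return (entries_a | entries_b) - tombs, tombs

laptop_entries, laptop_tombs = set(), {"TPR 09:00"}   # deleted, and remembered
phone_entries, phone_tombs = {"TPR 09:00"}, set()
merged, _ = tombstone_sync(laptop_entries, laptop_tombs,
                           phone_entries, phone_tombs)
print(merged)                        # set() - the cancellation sticks
```

Real sync engines do record deletions in some such way; my guess is that in this case the deletion record was being discarded or out-voted somewhere along the chain.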

Other synchronisation problems

As I said, when I reflected on what was happening, I had a wry smile. Synchronisation of complex data across different replicas is hard, when the data can be altered in more than one place at the same time.

Indeed, it’s sometimes a really hard problem for software to know when to merge apparent duplicates together, and when to leave them separated. I’m reminded of that fact whenever I do a search in the Contacts application on my Android phone. It often lists multiple entries corresponding to a single person. Some of these entries show pictures, but others don’t. At first, I wasn’t sure why there were multiple entries. But closer inspection showed that some details came from my Google mail archives, some from my collection of LinkedIn connections, some from my set of Facebook Friends, and so on. Should the smartphone simply automatically merge all these instances together? Not necessarily. It’s sometimes not clear whether the entries refer to the same person, or to two people with similar names.
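A conservative de-duplication policy along these lines can be sketched as follows. To be clear, this is a hypothetical illustration: the field names, the 0.85 threshold, and the choice of email address as the “strong” identifier are all my assumptions, not how the Android Contacts application actually works:

```python
# Conservative contact de-duplication: auto-merge only on a strong
# identifier match; name similarity alone merely flags a possible
# duplicate for the user to decide.

from difflib import SequenceMatcher

def decide(c1: dict, c2: dict) -> str:
    # Strong identifier: an exact (case-insensitive) email match.
    if c1.get("email") and c1["email"].lower() == c2.get("email", "").lower():
        return "merge"
    # Weak signal: similar names might be the same person - or two people.
    similarity = SequenceMatcher(None, c1["name"].lower(),
                                 c2["name"].lower()).ratio()
    if similarity > 0.85:
        return "ask-user"
    return "keep-separate"

google = {"name": "Jo Smith", "email": "jo@example.com"}
linkedin = {"name": "Jo Smith", "email": "jo@example.com"}
facebook = {"name": "Joe Smith", "email": "joe.s@example.com"}

print(decide(google, linkedin))   # merge
print(decide(google, facebook))   # ask-user
```

The design choice is deliberate: a wrong merge silently destroys information (two people collapsed into one contact), whereas a missed merge merely clutters the list – so the automation only acts when the evidence is strong.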

If that’s a comparatively simple example, let me finish with an example that takes things further afield. It’s not about the duplication and potential re-integration of agenda entries. Nor is it about the duplication and potential re-integration of pieces of contacts info. It’s about the duplication and potential re-integration of human minds.

Yes: the duplication and potential re-integration of human minds.

That’s a topic that came up in a presentation in the World Future 2012 conference I attended in Toronto at the end of July.

The talk was given by John M. Smart, founder and president of the Acceleration Studies Foundation. The conference brochure described the talk as follows:

Chemical Brain Preservation: How to Live “Forever”

About 57 million unique and precious human beings die every year, or 155,000 people every day. The memories and identities in their brains are permanently lost at present, but may not be in the near future.

Chemical brain preservation is a technique that many scientists believe may inexpensively preserve our memories and identity when we die, eventually for less than $10,000 per person in the developed world, and less than $3,000 per person in the developing world. Preserved brains can be stored at room temperature in cemeteries, in contract storage, or even in private homes. Our organization, the Brain Preservation Foundation (brainpreservation.org), is offering a $100,000 prize to the first scientific team to demonstrate that the entire synaptic connectivity of mammalian brains, where neuroscientists believe our memories and identities reside, can be perfectly preserved using these low-cost chemical techniques.

There is growing evidence that chemically preserved brains can be “read” in the future, like a computer hard drive, so that memories, and even the complete identities of the preserved individuals can be restored, using low-cost automated techniques. Amazingly, given the accelerating rate of technological advance, a person whose brain is preserved in 2020 might “return” to the world, most likely in a computer form, as early as 2060, while their loved ones and some of their friends are still alive…

Note: this idea is different from cryonics. Cryonics also involves attempted brain preservation, at an ultra-low temperature, but with a view to re-animating the brain some time in the future, once medical science has advanced enough to repair whatever damage brought the person to the point of death. (Anyone serious about finding out more about cryonics might be interested in attending the forthcoming Alcor-40 conference, in October; this conference marks the 40th anniversary of the founding of the most famous cryonics organisation.)

In contrast, the Brain Preservation Foundation talks about reading the contents of a brain (in the future), and copying that information into a computer, where the person can be re-started. The process of reading the info from the brain is very likely to destroy the brain itself.

There are several very large questions here:

  • Could the data of a brain be read with sufficient level of detail, and recreated in another substrate?
  • Once recreated, could that copy of the brain be coaxed into consciousness?
  • Even if that brain would appear to remember all my experiences, and assert that it is me, would it be any less of a preservation of me than in the case of cryonics itself (assuming that cryonics re-animation could work)?
  • Given a choice between the two means of potential resurrection, which should people choose?

The first two of these questions are scientific, whereas the latter two appear to veer into metaphysics. But for what it’s worth, I would choose the cryonics option.

My concern about the whole program of “brain copying” is triggered when I wonder:

  • What happens if multiple copies of a mind are created? After all, once one copy exists in software, it’s probably just as easy to create many copies.
  • If these copies all get re-animated, are they all the same person?
  • Imagine how one of these copies would feel if told “We’re going to switch you off now, since you are only a redundant back-up; don’t worry, the other copies will be you too”

During the discussion in the meeting in Toronto, John Smart talked about the option to re-integrate different copies of a single mind, resulting in a whole that is somehow better than each individual copy. It sounds an attractive idea in principle. But when I consider the practical difficulties in re-integrating duplicated agenda entries, a wry, uneasy smile comes to my lips. Re-integrating complex minds will be a zillion times more complicated. That project could be the most interesting software development project ever.

17 March 2010

Suspended animation is within our grasp

Filed under: cryonics, death — David Wood @ 12:44 pm

You’re not dead until you’re warm and dead

That’s a saying mentioned by University of Washington Cancer Research Center suspended animation researcher Mark B Roth, in his recent TEDtalk “Suspended animation is within our grasp“.

The same phrase – You’re not dead until you’re warm and dead – is used as the title of a January 1982 Yankee magazine account by Evan Mcleod Wylie of a real life drama:

… The girl on the table was without visible signs of life, her body cold, her lips blue, her muscles flaccid. When Herman lifted her eyelids, he found the pupils of the eyes staring fixed and dilated. By all the usual signs, the girl was clinically dead, a victim of drowning.

A major medical discovery of recent years, however, has been that sometimes such victims of prolonged submersion may be recalled to life. The chances for a recovery depend upon several factors: the age of the victim; the length of time submerged; the temperature of the water; the efficiency of the initial rescue effort, including the crucial CPR; and the intensity and sophistication of the ensuing medical treatment.

The girl’s temperature was too low to register on an ordinary medical thermometer, but Nurse Anne Torres had used a rectal thermometer to obtain an internal temperature of 82 degrees Fahrenheit, the lowest anyone on the emergency medical team had ever encountered.

“She is so cold,” Herman said, “that there is a chance she might still be alive.”

He knew that he was looking at a case of acute hypothermia — a condition in which the central core temperature of the body is reduced far below normal limits. It begins when the core temperature falls from a normal 98.6 degrees to 95. As it drops to 88 degrees, all major body functions cease. In such cases the victims may enter a state in which body functions are so arrested that the brain may need little oxygen to survive. At the same time there is a sudden transfer of blood supply from the skin, muscles, and abdominal organs to the heart, lungs, and brain, which are most sensitive and dependent upon oxygen.

But if life does linger in such a case of severe hypothermia, any sudden warming of the exterior body may cause such a shock as to bring death. Many experts believe that the proper medical treatment must be to restore the beat of the heart and then slowly rewarm the body from the core outward. The message today in emergency rooms and ambulances and rescue squads is, “No one is dead until he is warm and dead.”

Although Dr. Herman was a recent graduate of Tufts Medical School, he had participated in the treatment of an extraordinary case of hypothermia. At St. Elizabeth’s Hospital in Brighton, Massachusetts, he had been a member of a medical team led by Dr. Kenneth F. MacDonnell that had successfully treated Elizabeth “Libby” Margolis, 24, after she had been trapped in the back seat of a car that had been submerged in the winter-chilled Charles River for 25 minutes…

Mark Roth’s TEDtalk provides an up-to-the-minute report of some findings about suspended animation.  It includes a fascinating tale of a search for a chemical agent that can trigger de-animation of a mammal: a search with numerous failures before the serendipitous discovery of something that does work – hydrogen sulphide.

The core idea is that, ordinarily, if the supply of oxygen is reduced to a mammal, without reducing that organism’s demand for oxygen, that mammal will die.  However, if the demand can be reduced – via an agent that triggers the de-animated state – then the organism would subsequently withstand environments with reduced oxygen (and/or intense cold).

In such a state, the organism (such as a mouse) can also withstand significant loss of blood.  Similarly, if a heart attack has been suffered, much less heart damage ensues.  As Mark describes, there are many possible applications – including for humans.

This research, not unexpectedly, is of interest to the military – as a means to quickly treat battlefield trauma casualties.  You can read about some of Mark’s interplay with DARPA in a quirky 2008 Esquire magazine article, “The Mad Scientist Bringing Back the Dead…. Really“.

The research is also of clear interest to cryonicists – and to many others.  I recommend it!

14 August 2009

Deadly serious changes

Filed under: cryonics, death, medicine, Uncategorized — David Wood @ 12:26 am

Who could fail to be moved by the story that emerged in Asuncion, Paraguay last weekend, of the baby boy Angel Salvador, born 16 weeks premature?  Doctors declared the boy to be dead shortly after birth.  But four hours later, when family member Liliana Alvarenga removed the baby’s body from a cardboard box to dress it ahead of burial, the baby started crying.  The baby was not dead after all.

The baby’s grandfather, Guarani Caceres, was certainly moved.  He said of the doctors at the hospital, “they are criminals”.

Knowing when someone is “dead beyond all chance of recovery” can be a tough problem. History contains many horrific accounts of premature burials.  A short list includes:

  • The grammarian and metaphysician Johannes Duns Scotus died in Cologne in 1308.  When the vault in which his corpse resided was opened later, he was found lying outside the coffin.
  • Thomas à Kempis died in 1471 and was denied canonization because splinters were found embedded under his nails.  Anyone aspiring to be a saint would not fight death if he found himself buried alive!
  • Ann Green was hanged by the neck until dead – or so they thought – in 1650 at Oxford.  She was found to be alive after being placed in a coffin for burial.  One kindly gentleman attempted to assist her back to the land of the dead by raising his foot and stamping her chest and stomach with such severe force that he only succeeded in completely reviving her.  She lived a long life and bore several children.
  • Virginia Macdonald was buried in a Brooklyn cemetery in 1850.  Her mother was so insistent that she had been buried alive that the authorities finally relented and raised her coffin.  The lid was opened to find that her delicate hands had been badly bitten and she was lying on her side.
  • When the Les Innocents cemetery in Paris, France, was moved from the center of the city to the suburbs, the number of skeletons found face down convinced lay people and several doctors that premature burial was very common.

(One source for many of these points is the book “Death: A History of Man’s Obsessions and Fears” by Robert Wilkins.)

Changes in technology are on the point of putting a big new twist on this age-old problem. We have to bear in mind, not only the power of present-day medicine to revive someone from near-deadly diseases and traumas, but also the significantly greater power of future medicine.  The practice of cryonics is focused on preserving the body of someone who has many of the signs of death, in a state such that there is at least a chance that, at some time in the future, the body can be revived and cured of whatever disease or trauma was afflicting it.  Of course, it’s a controversial topic.

And there are at least two big legal and ethical issues that are bound to be discussed more and more often, in connection with cryonics.  These issues potentially apply to anyone who believes in cryonics and who makes provision for the preservation of their body at around the time of death.

The first issue is when medical professionals or other officials demand the right to autopsy the person following death. To quote from the website “Autopsy choice“:

Autopsy is a process of cutting open the body and removing all organs for examination. The organs are [later] placed together in the chest cavity and the wounds are sewn up and the body made presentable for the funeral profession…

Advantages are that the medical profession has information for research and quality control, and the legal profession has information for research which it may be able to use in cases of crime or professional misconduct…

Nevertheless, some individuals because of religious or moral belief, would prefer not to be autopsied.

Indeed, anyone signed up for cryonics needs to give careful consideration to avoiding the risk of being autopsied in any way that significantly reduces the chances of subsequent revivification.  An autopsy that destroys the brain is particularly to be feared.  The Cryonics Institute has a useful webpage, “Avoiding Autopsy for Cryonics”, on this topic.  Evidently, there’s a potential “clash of rights”:

  • The right of the state, to conduct an autopsy in order to advance knowledge beneficial to society as a whole;
  • The right of any individual, who is alive or potentially revivable, not to be treated in a way that destroys the potential for life.

Depending on the degree of credence that society is prepared to give to the possibility that future technology could revive someone who has recently died, this balance of rights is bound to change.

The second issue is if an individual wishes to start the body preservation process even before the medical profession is ready to declare them as dead. For example, someone whose brain is deteriorating under dementia may feel that their chances for eventual full mental recovery will be better if they are cryogenically vitrified sooner rather than later.

This seems close to the case of someone seeking the right to “assisted suicide“.  That’s already a hot potato!  But many of the same arguments apply for what we might term “early cryonic suspension”.

I’m expecting both these issues to receive increasing public debate.  My hope is that the debate avoids being hijacked by any claims that “death is natural and inevitable”.  If society is prepared to grant certain respect and concessions to people with a variety of religious beliefs, it should also be prepared to grant certain respect and concessions to people who sincerely believe that cryonics might be a pathway to life beyond death.

At some not-too-distant future date, if post-cryonic revival is successfully demonstrated in a laboratory, there may be many more people venting the same kind of anger expressed by Guarani Caceres, denouncing as “criminals” the people who interfered with access to cryonics procedures for their dead relatives.

Footnote: The story of baby Angel Salvador did not have a happy ending.  Shortly after his apparently miraculous recovery, he lost the fight to live.  Medical staff explained that he had now died as his vital organs were not strong enough to survive.  It’s not clear if the four hours the baby spent in the cardboard box (instead of a hospital incubator) contributed to these organ failures.

20 July 2009

An engaging family-friendly vision of the future

Filed under: books, cryonics, futurist — David Wood @ 7:59 pm

When I was around 11-15 years old, I devoured almost all the science fiction books in the local village library. The experience not only inspired me and stretched my imagination, but pre-disposed me to be open-minded about possible large impacts by technology on how life would be lived in the future.

Much of the technology that will have the biggest impact on the 21st century remains as yet undiscovered. Some of these discoveries will, presumably, be made by people who are currently still children. My hope is that these children will take interest in the kinds of ideas that permeate Shannon Vyff’s fine book “21st century kids: a trip from the future to you”.

The majority of the action in this book is set 180 years in the future – although there are several loop-backs to the present day. Here are just a few of the themes that are woven together in this fast-moving book:

  • Cryonic suspension, and the problems of eventual re-animation;
  • Brain implants, that enable a kind of telepathic communication;
  • Implications if human brains and human bodies could be dramatically improved;
  • Options for improving the brains of other animal species, even to the point of enabling rich communications between these creatures and humans;
  • Humans co-existing with self-aware robots and other AIs;
  • Friendly versus unfriendly AI;
  • Transferring human consciousness into robots (and back again);
  • Coping with the drawbacks of environmental degradation;
  • Future modes of manufacturing, transport, recreation, education, and religion;
  • Circumstances in which alien civilisations might take an active interest in developments on the Earth.

Adults can enjoy reading “21st century kids”, but there are parts of the book that speak more directly to children as the primary intended readership. Since I’ve long left my own adolescent days behind, I’m not able to fully judge the likely reactions of that target audience. My expectation is that many of them will find the contents engaging, thought-provoking, and exciting. It’s family-friendly throughout.

One unusual aspect of the book is that several of the main characters have the same names (and early life histories) as three of the author’s own children: Avianna, Avryn, and Avalyse. The author herself features in the book, as the (unnamed) “Mom”. I found this occasionally unsettling, but it adds to the book’s vividness and immediacy.

As regards the vision the book paints of the future, it’s certainly possible to take issue with some of the details. However, the bigger picture is that the book is sufficiently interesting that it is highly likely to provoke a lot of valuable debate and discussion. Hopefully it will stretch the imagination of many potential future technologists and engineers, and inspire them to keep an open mind about what innovative technology can accomplish.

28 May 2009

The future of medicine

Filed under: cryonics, medicine, UKTA — David Wood @ 11:37 pm

  • Someone who believes in the radical transformational potential of technology, and who anticipates that technology will result in very significant improvements in the quality of life in the relatively near future – but who is willing to go beyond predictions and theorising, to roll up his sleeves and become vigorously involved in building better technology.

That’s how I’d describe Mike Darwin, the speaker at the Extrobritannia (UKTA) meeting at Birkbeck College in central London this Saturday. In other words, Mike is an eminent engineer as well as a philosopher. Specifically, he’s an engineer in the field of preservative medicine.

But there’s more. Mike appreciates that the process of refining new medical processes can be intensely messy and flawed. Just because we’re surrounded by hi-tech, it’s no guarantee that medical trials will be pain-free or mistake-free. Far from it. There are technological uncertainties, organisational impediments, and cultural hurdles. Without a willingness to embrace this ugly fact, there’s a real risk that developments in medicine will slow down.

Mike’s topic on Saturday is “Whatever happened to the future of medicine”; the subtitle is “Why the much anticipated medical breakthroughs of the early 21st century are failing to materialize”. In his own words, here’s what the talk will address:

The last half of the 20th Century was a time of explosive growth in high technology medicine. Effective chemotherapy for many microbial diseases, the advent of sophisticated vaccination, the development and application of the corticosteroids, and the development of extracorporeal and cardiovascular prosthetic medicine (cardiopulmonary bypass, hemodialysis, synthetic arterial vascular grafts and cardiac valves) are but a few examples of what can only be described as stunning progress in medicine, derived in large measure from translational research.

The closing decades of the last century brought confident predictions from both academic and clinical researchers (scientists and physicians alike) that the opening decade of this century would see, if not definitive cure or control, then certainly the first truly effective therapeutic drugs for cancer, ischemia-reperfusion injury (i.e. heart attack, stroke and cardiac arrest), multisystem organ failure and dysfunction (MSOF/D), immunomodulation (control of rejection and much improved management of autoimmune diseases), oxygen therapeutics and more radically, the perfection of long term organ preservation, widespread use of the total artificial heart (TAH) and the clinical application of the first drugs to slow or moderate biological aging.

So far, so good. But Mike continues:

However, none of these anticipated gains has materialized, and countless drug trials in humans based on highly successful animal models of MSOF/D, stroke, heart attack, cancer, and immunomodulation have failed. Indeed it may be reasonably argued that the pace of therapeutic advance has slowed. By contrast, the growth of technology and capability in some areas of diagnostic medicine, primarily imaging, has maintained its exponential rate of growth and, while much slower than growth in other areas of technological endeavor, such as communications and consumer electronics, progress has been impressive.

Why has translational research at the cutting edge of medicine (and in particular in critical care medicine) stalled, or often resulted in clinical trials that had to be halted due to increased morbidity and mortality in the treated patients? The answers to these questions are complex and multifactorial, and deserve careful review.

And in conclusion:

Renewed success in the application of translational research in humans will require a return to the understanding and acceptance of the inescapable fact that perfection of complex biomedical technologies cannot be modeled solely in the animal or computer research laboratory. The corollary of this understanding must be the acceptance of the unpleasant reality that perfection of novel, let alone revolutionary medical technologies, will require a huge cost in human suffering and sacrifice. The aborted journey of the TAH to widespread clinical application due to the unwillingness on the part of the public, and the now extant bioethical infrastructure in medicine, to accept the years of suffering accompanied by modest, incremental advances towards perfection of this technology, is a good example of what might rightly be described as a societal ‘failure of nerve’ in the face of great benefit at great cost. It may be rightly said, to quote the political revolutionary Dolores Ibarruri, that we must once again come to understand that, “It is better to die on our feet than to live on our knees!”

Mike has spoken once before at an Extrobritannia meeting. See here for my write-up. It was a tremendous event. I’m expecting a similar engrossing debate this Saturday too. No doubt some of the discussion will focus on the main thrust of Mike’s life work, cryonics: very few people in the world are as knowledgeable about this topic.

If anyone reading this is going to be in or near London on Saturday, it would be great to see you at this meeting.

26 October 2008

The Singularity will go mainstream

Filed under: AGI, brain simulation, cryonics, Moore's Law, robots, Singularity — David Wood @ 1:49 pm

The concept of the coming technological singularity is going to enter mainstream discourse, and won’t go away. It will stop being something that can be dismissed as freaky or outlandish – something that is of interest only to marginal types and radical thinkers. Instead, it’s going to become something that every serious discussion of the future is going to have to contemplate. Writing a long-term business plan – or a long-term political agenda – without covering the singularity as one of the key topics, is increasingly going to become a sign of incompetence. We can imagine the responses, just a few years from now: “Your plan lacks a section on how the onset of the singularity is going to affect the take-up of your product. So I can’t take this proposal seriously”. And: “You’ve analysed five trends that will impact the future of our company, but you haven’t included the singularity – so everything else you say is suspect.”

In short, that’s the main realisation I reached by attending the Singularity Summit 2008 yesterday, in the Montgomery Theater in San Jose. As the day progressed, the evidence mounted up that the arguments in favour of the singularity will be increasingly persuasive, to wider and wider groups of people. Whether or not the singularity will actually happen is a slightly different question, but it’s no longer going to be possible to dismiss the concept of the singularity as irrelevant or implausible.

To back up my assertion, here are some of the highlights of what was a very full day:

Intel’s CTO and Corporate VP Justin Rattner spoke about “Countdown to Singularity: accelerating the pace of technological innovation at Intel”. He described a series of technological breakthroughs that would be likely to keep Moore’s Law operational until at least 2020, and he listed ideas for how it could be extended even beyond that. Rattner clearly has a deep understanding of the technology of semiconductors.

Dharmendra Modha, the manager of IBM’s cognitive computing lab at Almaden, explained how his lab had already utilised IBM supercomputers to simulate an entire rat brain, with the simulation running at one tenth of real-time speed. He explained his reasons for expecting that his lab should be able to simulate an entire human brain, running at full speed, by 2018. This was possible as a result of the confluence of “three hard disruptive trends”:

  1. Neuroscience has matured
  2. Supercomputing meets the brain
  3. Nanotechnology meets the brain.

Cynthia Breazeal, Associate Professor of Media Arts and Sciences, MIT, drew spontaneous applause from the audience part-way through her talk, by showing a video of one of her socially responsive robots, Leonardo. The video showed Leonardo acting on beliefs about what various humans themselves believed (including beliefs that Leonardo could deduce were false). As Breazeal explained:

  • Up till recently, robotics has been about robots interacting with things (such as helping to manufacture cars)
  • In her work, robotics is about robots interacting with people in order to do things. Because humans are profoundly social, these robots will also have to be profoundly social – they are being designed to relate to humans in psychological terms. Hence the expressions of emotion on Leonardo’s face (and the other body language).

Marshall Brain, founder of “How Stuff Works”, also spoke about robots, and the trend for them to take over work tasks previously done by humans: McDonald’s waitresses, Wal-Mart shop assistants, vehicle drivers, construction workers, teachers…

James Miller, Associate Professor of Economics, Smith College, explicitly addressed the topic of how increasing belief in the likelihood of an oncoming singularity would change people’s investment decisions. Once people realise that, within (say) 20-30 years, the world could be transformed into something akin to paradise, with much greater lifespans and with abundant opportunities for extremely rich experiences, many will take much greater care than before to seek to live to reach that event. Interest in cryonics is likely to boom – since people can reason their bodies will only need to be vitrified for a short period of time, rather than having to trust their descendants to look after them for unknown hundreds of years. People will shun dangerous activities. They’ll also avoid locking money into long-term investments. And they’ll abstain from lengthy training courses (for example, to master a foreign language) if they believe that technology will shortly render as irrelevant all the sweat of that arduous learning.

Not every speaker was optimistic. Well-known author and science journalist John Horgan gave examples of where the progress of science and technology has been, not exponential, but flat:

  • nuclear fusion
  • ending infectious diseases
  • Richard Nixon’s “war on cancer”
  • gene therapy treatments
  • treating mental illness.

Horgan chided advocates of the singularity for their use of “rhetoric that is more appropriate to religion than science” – thereby risking damaging the standing of science at a time when science needs as much public support as it can get.

Ray Kurzweil, author of “The Singularity is Near”, responded to this by agreeing that not every technology progresses exponentially. However, those that become information sciences do experience significant growth. As medicine and health increasingly become digital information sciences, they are experiencing the same effect. Although in the past I’ve thought that Kurzweil sometimes overstates his case, on this occasion I thought he spoke with clarity and restraint, and with good evidence to back up his claims. He also presented updated versions of the graphs from his book. In the book, these graphs tended to stop around 2002. The slides Kurzweil showed at the summit continued up to 2007. It does appear that the rate of progress with information sciences is continuing to accelerate.

Earlier in the day, science fiction author and former maths and computing science professor Vernor Vinge gave his own explanation for this continuing progress:

Around the world, in many fields of industry, there are hundreds of thousands of people who are bringing the singularity closer, through the improvements they’re bringing about in their own fields of research – such as enhanced human-computer interfaces. They mainly don’t realise they are advancing the singularity – they’re not working to an agreed overriding vision for their work. Instead, they’re doing what they’re doing because of the enormous incremental economic plus of their work.

Under questioning by CNBC editor and reporter Bob Pisani, Vinge said that he sticks with the forecast he made many years ago, that the singularity would (“barring major human disasters”) happen by 2030. Vinge also noted that rapidly improving technology made the future very hard to predict with any certainty. “Classic trendline analysis is seriously doomed.” Planning should therefore focus on scenario evaluation rather than trend lines. Perhaps unsurprisingly, Vinge suggested that more forecasters should read science fiction, where scenarios can be developed and explored. (Since I’m midway through reading and enjoying Vinge’s own most recent novel, “Rainbows End” – set in 2025 – I agree!)

Director of Research at the Singularity Institute, Ben Goertzel, described a staircase of potential applications for the “OpenCog” system of “Artificial General Intelligence” he has been developing with co-workers (partially funded by Google, via the Google Summer of Code):

  • Teaching virtual dogs to dance
  • Teaching virtual parrots to talk
  • Nurturing virtual babies
  • Training virtual scientists that can read vast swathes of academic papers on your behalf
  • And more…

Founder and CSO of Innerspace Foundation, Pete Estep, gave perhaps one of the most thought-provoking presentations. The goal of Innerspace is, in short, to improve brain functioning. In more detail, “To establish bi-directional communication between the mind and external storage devices.” Quoting from the FAQ on the Innerspace site:

The IF [Innerspace Foundation] is dedicated to the improvement of human mind and memory. Even when the brain operates at peak performance learning is slow and arduous, and memory is limited and faulty. Unfortunately, other of the brain’s important functions are similarly challenged in our complex modern world. As we age, these already limited abilities and faculties erode and fail. The IF supports and accelerates basic and applied research and development for improvements in these areas. The long-term goal of the foundation is to establish relatively seamless two-way communication between people and external devices possessing clear data storage and computational advantages over the human brain.

Estep explained that he was a singularity agnostic: “it’s beyond my intellectual powers to decide if a singularity within 20 years is feasible”. However, he emphasised that it is evident to him that “the singularity might be near”. And this changes everything. Throughout history, and extending round the world even today, “there have been too many baseless fantasies and unreasonable rationalisations about the desirability of death”. The probable imminence of the singularity will help people to “escape” from these mind-binds – and to take a more vigorous and proactive stance towards planning and actually building desirable new technology. The singularity that Estep desires is one, not of super-powerful machine intelligence, but one of “AI+BCI: AI combined with a brain-computer interface”. This echoed words from robotics pioneer Hans Moravec that Vernor Vinge had reported earlier in the day:

“It’s not a singularity if you are riding the curve. And I intend to ride the curve.”

On the question of how to proactively improve the chances for beneficial technological development, Peter Diamandis spoke outstandingly well. He’s the founder of the X-Prize Foundation. I confess I hadn’t previously realised anything like the scale and the accomplishment of this Foundation. It was an eye-opener – as, indeed, was the whole day.

3 August 2008

Human obstacles to audacious technical advances

Filed under: cryonics, flight, leadership, UKTA — David Wood @ 7:11 pm

[A] French noblewoman, a duchess in her eighties, …, on seeing the first ascent of Montgolfier’s balloon from the palace of the Tuilleries in 1783, fell back upon the cushions of her carriage and wept. “Oh yes,” she said, “Now it’s certain. One day they’ll learn how to keep people alive forever, but I shall already be dead.”

Throughout history, individual humans have from time to time dared to dream that technological advances could free us from some of the limitations of our current existence. Fantastic tales of people soaring into the air, like birds, go back at least as far as Icarus. Fantastic tales of people with lifespans exceeding the biblical “three score years and ten” go back at least as far as, well, the Bible. The French noblewoman mentioned above, in a quote taken from Lewis Lapham’s 2003 Commencement speech at St. John’s College Annapolis, made the not implausible connection that technology’s progress in solving the first challenge was a sign that, in time, technology might solve the second challenge too.

Mike Darwin made the same connection at an utterly engrossing UKTA meeting this weekend. Since the age of 16 (he’s now 53), Mike has been trying to develop technological techniques to significantly lower the temperature of animal tissue, and then to warm the tissue up again so that it can resume its previous function. The idea, of course, is to cryo-preserve people who have terminal diseases (and who have nominally died of these diseases) until some future time when science has a cure for that disease, and then to revive them.

Mike compared progress with the technology of cryonics to progress with the technology of powered manned flight. Renowned physicist Lord Kelvin had said as late as 1896 that “I do not have the smallest molecule of faith in aerial navigation other than ballooning“. Kelvin was not the only person with such a viewpoint. Even the Wright brothers themselves, after some disappointing setbacks in their experiments in 1901, “predicted that man will probably not fly in their lifetime“. There were a host of detailed, difficult engineering problems that needed to be solved, by painstaking analysis. These included three kinds of balance and stability (roll, pitch, and yaw) as well as lift, power, and thrust. Perhaps it is no surprise that it was the Wright brothers, as accomplished bicycle engineers, who first sufficiently understood and solved this nexus of problems. Eventually, in 1903, they did manage one small powered flight, lasting just 12 seconds. Later that day, a flight lasted 59 seconds. That was enough to stimulate much more progress. Only 16 years later, John Alcock and Arthur Brown flew an airplane non-stop across the Atlantic. And the rest is history.

For this reason, Mike is particularly keen to demonstrate incremental progress with suspension and revival techniques. For example, there is the work done by Brian Wowk and Gregory Fahy and others on the vitrification and then reanimation of rabbit kidneys.

However, the majority of Mike’s remarks were on topics different from the technical feasibility of cryonics. He spoke for over two hours, and continued in a formal Q&A session for another 30 minutes. After that, informal discussion continued for at least another 45 minutes, at which time I had to make my excuses and leave (in order to keep my date to watch The Dark Knight that evening). It was a tour-de-force. It’s hard to summarise such a lengthy, passionate, yet articulate presentation, but let me try:

  1. Cryonics is morally good
  2. Cryonics is technically feasible
  3. By 1968, cryonics was a booming enterprise, with many conferences, journals, and TV appearances
  4. However, cryonics has significantly failed in its ambitions
  5. Unless we understand the real reasons for these failures, we can’t realise the potential benefits of this program
  6. The failures primarily involve people issues rather than technical issues
  7. In any case, we should anticipate fierce opposition to cryonics, since it significantly disrupts many core elements of the way society currently operates.

The most poignant part was the description of the people issues during the history of cryonics:

  • People who had (shall we say) unclear ethical propriety (“con-men, frauds, and incompetents”)
  • People who failed to carry out the procedures they had designed – yet still told the world that they had followed the book (with the result that patients’ bodies suffered grievous damage during the cryopreservation process, or during subsequent storage)
  • People who were technically savvy and emotionally very committed yet who lacked sufficient professional and managerial acumen to run a larger organisation
  • People who lacked skills in raising and handling funding
  • People who lacked sufficient skills in market communications – they appeared as cranks rather than credible advocates.

This rang a lot of bells for me. The technology industry as a whole (including the smartphone industry) often struggles with similar issues. The individuals who initially come up with a great technical idea, and who are its first champions, are often not the people best placed to manage the later stages of development and implementation of that idea. The transition between early stage management and any subsequent phase is tough. But it is frequently essential. (And it may need to happen more than once!) You sometimes have to gently ease aside people (ideally at the same time finding a great new role for them) who are your personal friends, and who are deeply talented, but who are no longer the right people to lead a program through its next stage. Programs often grow faster than people do.

I don’t see any easy answers in general. I do agree with Mike on the following points:

  • A step-by-step process, with measurable feedback, is much preferable to reliance on (in essence) a future miracle that can undo big mistakes made by imprecise processes today (this is what Mike called “the fallacy of our friends in the future”);
  • Feedback on experiments is particularly important. If you monitor more data on what happens during the cryopreservation process, you’ll discover more quickly whether your assumptions are correct. Think again about the comparable experiences of the Wright brothers. Think also of the importance of carrying out retrospectives at regular intervals during a project;
  • Practice is essential. Otherwise it’s like learning to drive by just studying a book for six months, and then trying to drive all the way across the country the first time you sit in the driver’s seat;
  • The quality of the key individuals in the organisations is of paramount importance, so that sufficient energies can be unleashed from the latent support both in the organisation and in wider society. Leadership matters greatly.

Footnote: I first came across the reference to the tale of the venerable French duchess in the commentary to Eliezer Yudkowsky’s evocative online reminiscences regarding the death of his 19-year old brother Yehuda Nattan Yudkowsky.

11 July 2008

Into the long, deep, deep cold

Filed under: cryonics, Methuselah, UKTA — David Wood @ 9:11 pm

My interest in smartphones stems from my frequent observation and profound conviction that these devices can make their human users smarter: more knowledgeable, more connected, and more in control. It’s an example of the careful use of technology to make users that are, in some sense, better humans. Technology – including the wheel, the plough, the abacus, the telescope, the watch, the book, the steam engine, the Internet, and (of course) much more besides – has been making humans “better” (stronger, fitter, and cleverer) since the dawn of history. What’s different in our age is that the rate of potential improvement has accelerated so dramatically.

The website “Better Humans” often has interesting articles on this theme of accelerating real-world uses of technology to enhance human ability and experience. This morning my attention was taken by some new articles there with an unusual approach to the touchy subject of cryonics. For example, the article “Cryonics: Using low temperatures to care for the critically ill” starts by quoting the cryobiologist Brian Wowk:

“Ethically, what is the correct thing to do when medicine encounters a difficult problem? Stabilize the patient until a solution can be found? Or throw people away like garbage? Centuries from now, historians may marvel at the shortsightedness and rationalizations used to sanction the unnecessary death of millions.”

The article (originally from a site with a frankly less-than-inspiring name, Depressed Metabolism) continues as follows:

In contemporary medicine terminally ill patients can be declared legally dead using two different criteria: whole brain death or cardiorespiratory arrest. Although many people would agree that a human being without any functional brain activity, or even without higher brain function, has ceased to exist as a person, not many people realize that most patients who are currently declared legally dead by cardiorespiratory criteria have not yet died as a person. Or to use conventional biomedical language, although the organism has ceased to exist as a functional, integrated whole, the neuroanatomy of the person is still intact when a patient is declared legally dead using cardiorespiratory criteria.

It might seem odd that contemporary medicine allows deliberate destruction of the properties that make us uniquely human (our capacity for consciousness) unless one considers the significant challenge of keeping a brain alive in a body that has ceased to function as an integrated whole. But what if we could put the brain “on pause” until a time when medical science has become advanced enough to treat the rest of the body, reverse aging, and restore the patient to health?

Putting the brain on pause is not as far fetched as it seems. The brain of a patient undergoing general anesthesia has ceased being conscious. But because we know that the brain that represents the person is still there in a viable body, we do not think of such a person as “temporarily dead.”

One step further than general anesthesia is hypothermic circulatory arrest. Some medical procedures, such as complicated neurosurgical interventions, require not only cessation of consciousness but also complete cessation of blood flow to the brain. In these cases the temperature of the patient is lowered to such a degree (≈16 degrees Celsius) that the brain can tolerate a period without any circulation at all. Considering the fact that parts of the human brain can become irreversibly injured after no more than five minutes without oxygen, the ability of the brain to survive for at least an hour at these temperatures without any oxygen is quite remarkable.

And so it continues. See also, by the same author, “Why is cryonics so unpopular?”
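The one-hour figure quoted above is roughly consistent with the standard Q10 model of temperature-dependent metabolism, under which metabolic rate falls by a factor of Q10 (typically around 2–3 for biological tissue) for every 10 °C drop. Here is a back-of-the-envelope sketch; the Q10 value and the baseline tolerance figure are illustrative assumptions of mine, not numbers taken from the article:

```python
def tolerable_minutes(base_minutes, t_warm, t_cold, q10=3.0):
    """Scale ischemic tolerance time by the Q10 metabolic suppression factor.

    The Q10 model assumes metabolic rate drops by a factor of q10
    for every 10 degrees Celsius of cooling, so tolerance time to
    circulatory arrest grows by the same factor.
    """
    suppression = q10 ** ((t_warm - t_cold) / 10.0)
    return base_minutes * suppression

# ~5 minutes of tolerance at normal body temperature (37 C),
# cooled to 16 C as in hypothermic circulatory arrest:
print(tolerable_minutes(5, 37, 16))  # roughly 50 minutes with Q10 = 3
```

With a Q10 of 3, the 21 °C of cooling suppresses metabolism roughly ten-fold, turning five minutes of tolerance into something approaching the hour the article describes.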

Is it really conceivable that the human body (or perhaps just the human head) could be placed into deep, deep cold, potentially for decades, and then subsequently revived and repaired, using the substantially improved technology of the future? Never mind conceivable, is it desirable?

I’m reminded of a book that made a big impression on me, several years ago – the provocatively titled “The first immortal” by James Halperin. It’s written as fiction, but it’s intended to describe a plausible future scenario. I understand that the author did a great deal of research into the technology of cryonics, in order to make the account scientifically credible.

As a work of fiction, it’s no great shakes. The characterisation, the plotting, and the language are often laboured – sometimes even embarrassing. But the central themes of the book are tremendously well done. As a reader, you get to think lots of new thoughts, and appreciate the jaw-dropping ups and downs that cryonics might make possible. (By the way, some of the ideas and episodes in the book are very vivid indeed, and remain clearly in my mind now, quite a few years after I read the book.) As the various characters in the book change their attitudes towards the possibility and desirability of cryonic preservation and restoration, it’s hard not to find your own attitude changing too.

Footnote: Aubrey de Grey, one of the speakers at tomorrow’s UKTA meeting (“How to live longer and longer yet healthier and healthier: realistic grounds for hope?”), has put on public record the fact that he has signed up for cryopreservation. See here for some characteristically no-nonsense statements from Aubrey himself on this topic.
