dw2

16 October 2011

Human regeneration – limbs and more

Filed under: healthcare, medicine, rejuveneering, risks, Singularity — David Wood @ 1:57 am

Out of the many interesting presentations on Day One of the 2011 Singularity Summit here in New York, the one that left me with the most to think about was “Regenerative Medicine: Possibilities and Potential” by Dr. Stephen Badylak.

Dr Badylak is deputy director of the McGowan Institute for Regenerative Medicine, and a Professor in the Department of Surgery at the University of Pittsburgh. In his talk at the Singularity Summit, he described some remarkable ways in which the human body can heal itself – provided we supply it with suitable “scaffolding” that triggers the healing.

One of the examples Dr Badylak discussed is also covered in a recent article in Discover Magazine, How Pig Guts Became the Next Bright Hope for Regenerating Human Limbs.  The article deserves reading all the way through. Here are some short extracts from the beginning:

When he first arrived in the trauma unit of San Antonio’s Brooke Army Medical Center in December 2004, Corporal Isaias Hernandez’s leg looked to him like something from KFC. “You know, like when you take a bite out of the drumstick down to the bone?” Hernandez recalls. The 19-year-old Marine, deployed in Iraq, had been trying to outfit his convoy truck with a makeshift entertainment system for a long road trip when the bomb exploded. The 12-inch TV he was clutching to his chest shielded his vital organs; his buddy carrying the DVDs wasn’t so lucky.

The doctors kept telling Hernandez he would be better off with an amputation. He would have more mobility with a prosthetic, less pain. When he refused, they took a piece of muscle from his back and sewed it into the hole in his thigh. He did all he could to make it work. He grunted and sweated his way through the agony of physical therapy with the same red-faced determination that got him through boot camp. He even sneaked out to the stairwell, something they said his body couldn’t handle, and dragged himself up the steps until his leg seized up and he collapsed.

Generally people never recovered from wounds like his. Flying debris had ripped off nearly 70 percent of Hernandez’s right thigh muscle, and he had lost half his leg strength. Remove enough of any muscle and you might as well lose the whole limb, the chances of regeneration are so remote. The body kicks into survival mode, pastes the wound over with scar tissue, and leaves you to limp along for life….

Hernandez recalled that one of his own doctors—Steven Wolf, then chief clinical researcher for the United States Army Institute of Surgical Research in Texas—had once mentioned some kind of experimental treatment that could “fertilize” a wound and help it heal. At the time, Hernandez had dismissed the therapy as too extreme. The muscle transplant sounded safer, easier. Now he changed his mind. He wanted his leg back, even if it meant signing himself up as a guinea pig for the U.S. Army.

So Hernandez tracked down Wolf, and in February 2008 the two got started. First, Wolf put Hernandez through another grueling course of physical therapy to make sure he had indeed pushed any new muscle growth to the limit. Then he cut open Hernandez’s thigh and inserted a paper-thin slice of the same material used to make the pixie dust: part of a pig’s bladder known as the extracellular matrix, or ECM, a fibrous substance that occupies the spaces between cells. Once thought to be a simple cellular shock absorber, ECM is now understood to contain powerful proteins that can reawaken the body’s latent ability to regenerate tissue.

A few months after the surgery healed, Wolf assigned the young soldier another course of punishing physical therapy. Soon something remarkable began to happen. Muscle that most scientists would describe as gone forever began to grow back. Hernandez’s muscle strength increased by 30 percent from what it was before the surgery, and then by 40 percent. It hit 80 percent after six months. Today it is at 103 percent—as strong as his other leg. Hernandez can do things that were impossible before, like ease gently into a chair instead of dropping into it, or kneel down, ride a bike, and climb stairs without collapsing, all without pain.

The challenge now is replicating Hernandez’s success in other patients. The U.S. Department of Defense, which received a congressional windfall of $80 million to research regenerative medicine in 2008, is funding a team of scientists based at the University of Pittsburgh’s McGowan Institute for Regenerative Medicine to oversee an 80-patient study of ECM at five institutions. The scientists will attempt to use the material to regenerate the muscle of patients who have lost at least 40 percent of a particular muscle group, an amount so devastating to limb function that it often leads doctors to perform an amputation.

If the trials are successful, they could fundamentally change the way we treat patients with catastrophic limb injuries. Indeed, the treatment might someday allow patients to regrow missing or mangled body parts. With an estimated 1.7 million people in the United States alone missing limbs, promoters of regenerative medicine eagerly await the day when therapies like ECM work well enough to put the prosthetics industry out of business.

The interesting science is the explanation of the role of the ECM – the extracellular matrix, which provides the scaffolding that allows the healing to take place. The healing turns out to involve the body directing stem cells to the scaffolding. These stem cells then differentiate into muscle cells, nerve cells, blood cells, and so on. There’s also some interesting science to explain why the body doesn’t reject the ECM that’s inserted into it.

Badylak speaks with confidence of the treatment one day allowing the regeneration of damaged human limbs, akin to what happens with salamanders.  He also anticipates the healing of brain tissue damaged by strokes.

Later that morning, another speaker at the Singularity Summit, Michael Shermer, referred to Dr Badylak’s presentation. Shermer is a well-known sceptic – indeed, he’s the publisher of Skeptic magazine.  Shermer often participates in public debates with believers in various religions and new-age causes.  Shermer mentioned that, at these debates, his scientific open-mindedness is sometimes challenged.  “OK, if you are open-minded, as you claim, what evidence would make you believe in God?”  Shermer typically gives the answer that, if someone with an amputated limb were to have that limb regrow, that would be reason for him to become a believer:

Most religious claims are testable, such as prayer positively influencing healing. In this case, controlled experiments to date show no difference between prayed-for and not-prayed-for patients. And beyond such controlled research, why does God only seem to heal illnesses that often go away on their own? What would compel me to believe would be something unequivocal, such as if an amputee grew a new limb. Amphibians can do it. Surely an omnipotent deity could do it. Many Iraqi War vets eagerly await divine action.

However, Shermer joked with the Singularity Summit audience, it now appears that Dr Badylak might be God.  The audience laughed.

But there’s a serious point at stake here. The Singularity Summit is full of talks about humans being on the point of gaining powers that, in previous ages, would have been viewed as Divine. With great power comes great responsibility. As veteran ecologist and environmentalist Stewart Brand wrote at the very start of his recent book “Whole Earth Discipline“,

We are as gods and HAVE to get good at it.

In the final talk of the day, cosmologist Professor Max Tegmark addressed the same theme.  He gave an estimate of “between 1/10 and 1/10,000” for the probability of human extinction during any decade in the near-term future – extinction arising from (for example) biochemical warfare, runaway global warming, nanotech pollution, or a bad super-intelligence singularity. In contrast, he said, only a tiny fraction of the global GDP is devoted to management of existential risks.  That kind of “lack of paying attention” meant that humanity deserved, in Tegmark’s view, a “mid-term rating” of just D-.  Our focus, far too much of the time, is on the next election cycle, or the next quarterly financial results, or other short term questions.
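It’s worth compounding Tegmark’s per-decade range to see what it implies over a century.  The following back-of-the-envelope calculation is my own, and it assumes a constant, independent risk in each decade – an assumption Tegmark did not necessarily make:

```python
# Compound a constant per-decade extinction probability over ten decades,
# for the two ends of Tegmark's "between 1/10 and 1/10,000" range.
for p in (1 / 10, 1 / 10_000):
    survival = (1 - p) ** 10  # probability of surviving all ten decades
    print(f"per-decade risk {p}: century survival probability = {survival:.4f}")
```

At the pessimistic end of the range, survival over a century drops to roughly 35%; at the optimistic end, it stays above 99.9% – which is exactly why pinning down the true figure, and managing the risk, matters so much.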

One person seeking to encourage greater attention to existential risks is Skype co-founder Jaan Tallinn (who earlier in the year gave a very fine talk at a Humanity+ event I organised in London).  Jaan’s main presentation at the 2011 Singularity Summit will be on Day Two, but he briefly popped up on stage on Day One to announce a significant new fundraising commitment: he will personally match any donations made over the weekend to the Singularity Institute, up to a total of $100,000.

With the right resources, wisely deployed, we ought to see collective human intelligence achieve lots more regeneration – not just of broken limbs, but also of troubled societies and frustrated lives – whilst at the same time steering humanity away from the existential risks latent in these super-powerful technologies.  The discussion will continue tomorrow.

1 May 2010

Costs of complexity: in healthcare, and in the mobile industry

Filed under: books, business model, disruption, healthcare, innovation, modularity, simplicity — David Wood @ 11:56 am

While indeed there are economies of scale, there are countervailing costs of complexity – the more product families produced in a plant, the higher the overhead burden rates.

That sentence comes from page 92 of “The Innovator’s Prescription: A disruptive solution for health care“, co-authored by Clayton Christensen, Jerome Grossman, and Jason Hwang.  Like all the books authored (or co-authored) by Christensen, the book is full of implications for fields outside the particular industry being discussed.

In the case of this book, the subject matter is critically important in its own right: how can we find ways to allow technological breakthroughs to reduce the spiralling costs of healthcare?

In the book, the authors brilliantly extend and apply Christensen’s well-known ideas on disruptive change to the field of healthcare.  But the book should be recommended reading for anyone interested in either strategy or operational effectiveness in any hi-tech industry.  (It’s also recommended reading for anyone interested in the future of medicine – which probably includes all of us, since most of us can anticipate spending increasing amounts of time in hospitals or doctor’s surgeries as we become older.)

I’m still less than half way through reading this book, but the section I’ve just read seems to speak loudly to issues in the mobile industry, as well as to the healthcare industry.

It describes a manufacturing plant which was struggling with overhead costs.  At this plant, 6.2 dollars were spent in overhead expenses for every dollar spent on direct labour:

These overhead costs included not just utilities and depreciation, but the costs of scheduling, expediting, quality control, repair and rework, scrap, maintenance, materials handling, accounting, computer systems, and so on.  Overhead comprised all costs that were not directly spent in making products.

The quality of products made at that plant was also causing concern:

About 15 percent of all overhead costs were created by the need to repair and rework products that failed in the field, or had been discovered by inspectors as faulty before shipment.

However, it didn’t appear to the manager that any money was being wasted:

The plant hadn’t been painted inside or out in 20 years.  The landscaping was now overrun by weeds.  The receptionist in the bare-bones lobby had been replaced long ago with a paper directory and a phone.  The manager had no secretarial assistance, and her gray World War II vintage steel desk was dented by a kick from some frustrated predecessor.

Nevertheless, this particular plant had considerably higher overhead burden rates than the other plants from the same company.  What was the difference?

The difference was in the complexity.  This particular plant was set up to cope with large numbers of different product designs, whereas the other plants (which had been created later) had been able to optimise for particular design families.

The original plant essentially had the value proposition,

We’ll make any product that anyone designs

In contrast, the newer plants had the following kind of value proposition:

If you need a product that can be made through one of these two sequences of operations and activities, we’ll do it for you at the lowest possible cost and the highest possible quality.

Further analysis, across a number of different plants, reached the following results:

Each time the scale of a plant doubled, holding the degree of pathway complexity constant, the overhead rate could be expected to fall by 15 percent.  So, for example, a plant that made two families and generated $40 million in sales would be expected to have an overhead burden ratio of about 2.85, while the burden rate for a plant making two families with $80 million in sales would be 15% lower (2.85 x 0.85 = 2.42).  But every time the number of families produced in a plant of a given scale doubled, the overhead burden rate soared 27 percent.  So if a two-pathway, $40 million plant accepted products that required two additional pathways, but that did not increase its sales volume, its overhead burden rate would increase by 2.85 x 1.27, to 3.62…
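The worked examples in that passage can be captured in a small model.  The multiplicative form below – each doubling of scale cuts the burden rate by 15%, each doubling of product families raises it by 27% – is my own extrapolation from the book’s two examples, not a formula the authors state:

```python
import math

def burden_rate(sales_millions, families,
                base_rate=2.85, base_sales=40.0, base_families=2):
    """Overhead burden rate under a multiplicative model fitted to the
    passage: each doubling of sales cuts the rate 15 percent; each
    doubling of the number of product families raises it 27 percent."""
    scale_doublings = math.log2(sales_millions / base_sales)
    family_doublings = math.log2(families / base_families)
    return base_rate * (0.85 ** scale_doublings) * (1.27 ** family_doublings)

# The two worked examples from the passage:
print(round(burden_rate(80, 2), 2))   # doubling sales: 2.85 x 0.85 -> 2.42
print(round(burden_rate(40, 4), 2))   # doubling families: 2.85 x 1.27 -> 3.62
```

Note the asymmetry the model makes vivid: complexity (27% per doubling) costs more than scale (15% per doubling) saves, so a plant that doubles both its sales and its pathway count still ends up with a higher burden rate than it started with.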

This is just one aspect of a long and fascinating analysis.  Modern day general purpose hospitals support huge numbers of different patient care pathways, so high overhead rates are inevitable.  The solution is to allow the formation of separate specialist units, where practitioners can then focus on iteratively optimising particular lines of healthcare.  We can already see this in firms that specialise in laser eye surgery, in hernia treatment, and so on.  Without these new units separating and removing some of the complexity of the original unit, it becomes harder and harder for innovation to take place.  The innovation becomes stifled under conflicting business models.  (I’m simplifying the argument here: please take a look at the book for the full picture.)

In short: reducing overhead costs isn’t just a matter of “eliminating obvious inefficiencies, spending less time on paperwork, etc”.  It often requires initially painful structural changes, in which overly complex multi-function units are simplified by the removal and separation of business lines and product pathways.  Only with the new, simplified set up – often involving new companies, and sometimes involving “creative destruction” – can disruptive innovations flourish.

Rising organisational complexity impacts the mobile industry too.  I’ve written about this before.  For example, in May last year I wrote an article “Platform strategy failure modes“:

The first failure mode is when a device manufacturer fails to have a strategy towards mobile software platforms.  In this case, the adage holds true that a failure to strategise is a strategy to fail.  A device manufacturer that simply “follows the wind” – picking platform P1 for device D1 because customer C1 expressed a preference for P1, picking platform P2 for device D2 because customer C2 expressed a preference for P2, etc – is going to find that the effort of interacting successfully with all these different platforms far exceeds their expectations.  Mobile software platforms require substantial investment from manufacturers, before the manufacturer can reap commercial rewards from these platforms.  (Getting a device ready to demo is one thing.  That can be relatively easy.  Getting a device approved to ship onto real networks – a device that is sufficiently differentiated to stand out from a crowd of lookalike devices – can take a lot longer.)

The second failure mode is similar to the first one.  It’s when a device manufacturer spreads itself too thinly across multiple platforms.  In the previous case, the manufacturer ended up working with multiple platforms, without consciously planning that outcome.  In this case, the manufacturer knows what they are doing.  They reason to themselves as follows:

  • We are a highly competent company;
  • We can manage to work with (say) three significant mobile software platforms;
  • Other companies couldn’t cope with this diversification, but we are different.

But the outcome is the same as the previous case, even though different thinking gets the manufacturer into that predicament.  The root failure is, again, a failure to appreciate the scale and complexity of mobile software platforms.  These platforms can deliver tremendous value, but require significant ongoing skill and investment to yield that kind of result.

The third failure mode is when a manufacturer seeks re-use across several different mobile software platforms.  The idea is that components (whether at the application or system level) are developed in a platform-agnostic way, so they can fit into each platform equally well.

To be clear, this is a fine goal.  Done right, there are big dividends.  But my observation is that this strategy is hard to get right.  The strategy typically involves some kind of additional “platform independent layer”, that isolates the software in the component from the particular programming interfaces of the underlying platform.  However, this additional layer often introduces its own complications…

Seeking clever economies of scale is commendable.  But there often comes a time when growing scale is bedevilled by growing complexity.  It’s as mentioned at the beginning of this article:

While indeed there are economies of scale, there are countervailing costs of complexity – the more product families produced in a plant, the higher the overhead burden rates.

Even more than a drive to scale, companies in the mobile space need a drive towards simplicity. That means organisational simplicity as well as product simplicity.

As I stated in my article “Simplicity, simplicity, simplicity“:

The inherent complexity of present-day smartphones risks all kinds of bad outcomes:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window;
  • Smartphone application development may become difficult, as developers need to juggle different programming interfaces and optimisation methods;
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.  The number one issue for the mobile industry, arguably, is to constantly find better ways to tame this complexity.

The companies that are successfully addressing the complexity issue seem, on the whole, to be the ones on the rise in the mobile space.

Footnote: It’s a big claim, but it may well be true that of all the books on the subject of innovation in the last 20 years, Clayton Christensen’s writings are the most consistently important.  The subtitle of his first book, “The innovator’s dilemma”, is a reminder why: “When new technologies cause great firms to fail“.

24 December 2009

Predictions for the decade ahead

Before highlighting some likely key trends for the decade ahead – the 2010’s – let’s pause a moment to review some of the most important developments of the last ten years.

  • Technologically, the 00’s were characterised by huge steps forwards with social computing (“web 2.0”) and with mobile computing (smartphones and more);
  • Geopolitically, the biggest news has been the ascent of China to becoming the world’s #2 superpower;
  • Socioeconomically, the world is reaching a deeper realisation that current patterns of consumption cannot be sustained (without major changes), and that the foundations of free-market economics are more fragile than was previously widely thought to be the case;
  • Culturally and ideologically, the threat of militant Jihad, potentially linked to dreadful weaponry, has given the world plenty to think about.

Looking ahead, the 10’s will very probably see the following major developments:

  • Nanotechnology will progress in leaps and bounds, enabling increasingly systematic control, assembly, and reprogramming of matter at the molecular level;
  • In parallel, AI (artificial intelligence) will rapidly become smarter and more pervasive, and will be manifest in increasingly intelligent robots, electronic guides, search assistants, navigators, drivers, negotiators, translators, and so on.

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

We’ll see the following applications of nanotechnology and AI:

  • Energy harvesting, storage, and distribution (including via smart grids) will be revolutionised;
  • Reliance on existing means of oil production will diminish, being replaced by greener energy sources, such as next-generation solar power;
  • Synthetic biology will become increasingly commonplace – newly designed living cells and organisms that have been crafted to address human, social, and environmental need;
  • Medicine will provide more and more new forms of treatment, that are less invasive and more comprehensive than before, using compounds closely tailored to the specific biological needs of individual patients;
  • Software-as-a-service, provided via next-generation cloud computing, will become more and more powerful;
  • Experience of virtual worlds – for the purposes of commerce, education, entertainment, and self-realisation – will become extraordinarily rich and stimulating;
  • Individuals who can make wise use of these technological developments will end up significantly cognitively enhanced.

In the world of politics, we’ll see more leaders who combine toughness with openness and a collaborative spirit.  The awkward international institutions from the 00’s will either reform themselves, or will be superseded and surpassed by newer, more informal, more robust and effective institutions, that draw a lot of inspiration from emerging best practice in open source and social networking.

But perhaps the most important change is one I haven’t mentioned yet.  It’s a growing change of attitude, towards the question of the role of technology in enabling fuller human potential.

Instead of people decrying “technical fixes” and “loss of nature”, we’ll increasingly hear widespread praise for what can be accomplished by thoughtful development and deployment of technology.  As technology is seen to be able to provide unprecedented levels of health, vitality, creativity, longevity, autonomy, and all-round experience, society will demand a reprioritisation of resource allocation.  Previous sacrosanct cultural norms will fall under intense scrutiny, and many age-old beliefs and practices will fade away.  Young and old alike will move to embrace these more positive and constructive attitudes towards technology, human progress, and a radical reconsideration of how human potential can be fulfilled.

By the way, there’s a name for this mental attitude.  It’s “transhumanism”, often abbreviated H+.

My conclusion, therefore, is that the 2010’s will be the decade of nanotechnology, AI, and H+.

As for the question of which countries (or regions) will play the role of superpowers in 2020: it’s too early to say.

Footnote: Of course, there are major possible risks from the deployment of nanotechnology and AI, as well as major possible benefits.  Discussion of how to realise the benefits without falling foul of the risks will be a major feature of public discourse in the decade ahead.

20 March 2009

The industry with the greatest potential for disruptive growth

Filed under: aging, healthcare, UKTA — David Wood @ 11:37 pm

Where is the next big opportunity?

According to renowned Harvard Business School professor and author Clayton Christensen, in a video recorded recently for BigThink:

The biggest opportunities are in healthcare. We are now just desperate to make healthcare affordable and accessible. Healthcare is something that everybody consumes. There are great opportunities for non-consumers to be brought into the market by making things affordable and accessible. I just can’t think of another industry that has those kinds of characteristics where demand is robust, and there’s such great opportunities for disruption.

The healthcare industry has many angles. I’m personally fascinated by the potential of smart mobile devices to play significant new roles in maintaining and improving people’s health.

Another important dimension to healthcare is the dimension of reducing (or even altogether removing) the impacts of aging. In an article on “10 ideas changing the world right now”, Time magazine recently coined the word “amortality” for the growing trend for people who seek to keep the same lifestyle and appearance, regardless of their physical age:

When Simon Cowell let slip last month that he planned to have his corpse cryonically preserved, wags suggested that the snarky American Idol judge may have already tested the deep-freezing procedure on his face. In 2007, Cowell, now 49, told an interviewer that he used Botox. “I like to take care of myself,” he said. Cowell is in show biz, where artifice routinely imitates life. But here’s a fact startling enough to raise eyebrows among Botox enthusiasts: his fellow Brits, famously unconcerned with personal grooming, have tripled the caseload of the country’s cosmetic surgeons since 2003. The transfiguration of the snaggletoothed island race is part of a phenomenon taking hold around the developed world: amortality.

You may not have heard of amortality before – mainly because I’ve just coined the term. It’s about more than just the ripple effect of baby boomers’ resisting the onset of age. Amortality is a stranger, stronger alchemy, created by the intersection of that trend with a massive increase in life expectancy and a deep decline in the influence of organized religion – all viewed through the blue haze of Viagra…

Amortals don’t just dread extinction. They deny it. Ray Kurzweil encourages them to do so. Fantastic Voyage, which the futurist and cryonics enthusiast co-wrote with Terry Grossman, recommends a regimen to forestall aging so that adherents live long enough to take advantage of forthcoming “radical life-extending and life-enhancing technologies.” Cambridge University gerontologist Aubrey de Grey is toiling away at just such research in his laboratory. “We are in serious striking distance of stopping aging,” says De Grey, founder and chairman of the Methuselah Foundation, which awards the Mprize to each successive research team that breaks the record for the life span of a mouse…

Notions of age-appropriate behavior will soon be relegated as firmly to the past as dentures and black-and-white television. “The important thing is not how many years have passed since you were born,” says Nick Bostrom, director of the Future of Humanity Institute at Oxford, “but where you are in your life, how you think about yourself and what you are able and willing to do.” If that doesn’t sound like a manifesto for revolution, it’s only because amortality has already revolutionized our attitudes toward age.

Just how feasible is the idea of radical life extension? In part, it depends on what you think about the aging processes that take place in humans. Are these processes fixed, or can they somehow be influenced?

One person who is engaged in a serious study of this topic is Dr Richard Faragher, Reader in the School of Pharmacy and Biomolecular Sciences at the University of Brighton on the English south coast. Richard describes the research interests of his team as follows:

We “do” senescence. Why do we do this? Because it has been suggested for over 30 years that the phenomenon of cell senescence may be linked in some way to human ageing. Senescence is the progressive replicative failure of a population of cells to divide in culture. Once senescent, cells exhibit a wide range of changes in phenotype and gene expression which give them the potential to alter the behaviour of any tissue in which they are found. In its modern form the cell hypothesis of ageing suggests that the progressive accumulation of such senescent cells (as a result of ongoing tissue turnover) may contribute to the ageing process.

Richard is the featured speaker at this month’s Extrobritannia (UKTA) meeting in Central London, this Saturday (21st March). The title for his talk is “One foot in the future. Attaining the 10,000+ year lifespan you always wanted?”:

Dr Richard Faragher, Reader in Gerontology, School of Pharmacy & Biomolecular Sciences, University of Brighton, will review the aging process across the animal kingdom together with the latest scientific insights into how it may operate. The lecture will also review promising avenues for translation into practice over the next few years, and current barriers to progress in aging research will be considered.

I’m expecting a lively but informative discussion!
