30 September 2013

Questions about Hangouts on Air

Filed under: collaboration, Google, Hangout On Air, intelligence — David Wood @ 11:05 pm

I’m still learning about how to get the best results from Google Hangouts On Air – events that are broadcast live over the Internet.

On Sunday, I hosted a Hangout On Air which ran pretty well. However, several features of the experience were disappointing.

Here, I’m setting aside questions about what the panellists said. It was a fascinating discussion, but in this blogpost, I want to ask some questions, instead, about the technology involved in creating and broadcasting the Hangout On Air. That was the disappointing part.

If anyone reading this can answer my questions, I’ll be most grateful.

If you take a quick look at the beginning of the YouTube video of the broadcast, you’ll immediately see the first problem I experienced:

The problem was that the video uplink from my own laptop didn’t get included in the event. Instead of showing what I thought I was contributing, the event just displayed my G+ avatar (a static picture of my face). That was in contrast to the situation for the other four participants.

When I looked at the Hangout On Air window on my laptop as I was hosting the call, it showed me a stream of images recorded by my webcam. It also showed, at other times, slides which I was briefly presenting. That’s what I saw, but no-one else saw it. None of these displays made it into the broadcast version.

Happily, the audio feed from my laptop did reach the broadcast version. But not the video.

As it happens, I think that particular problem was “just one of those things”, which happen rarely, and in circumstances that are difficult to reproduce. I doubt this problem will recur in this way, the next time I do such an event. I believe that the software system on my laptop simply got itself into a muddle. I saw other evidence for the software being in difficulty:

  • As the event was taking place, I got notifications that people had added me to their G+ circles. But when I clicked on these notifications, to consider reciprocally adding these people into my own circles, I got an error message, saying something like “Cannot retrieve circle status info at this time”
  • After the event had finished, I tried to reboot my laptop. The shutdown hung, twice. First, it hung with a most unusual message, “Waiting for explorer.exe – playing logoff sound”. Second, after I accepted the suggestion from the shutdown dialog to close down that app regardless, the laptop hung indefinitely in the final “shutting down” display. In the end, I pressed the hardware reset button.

That muddle shouldn’t have arisen, especially as I had taken the precaution of rebooting my laptop some 30 minutes before the event was due to start. But it did. However, what made things worse is that I only became aware of this issue once the Hangout had already started its broadcast phase.

At that time, the other panellists told me they couldn’t see any live video from my laptop. I tried various quick fixes (e.g. switching my webcam off and on), but to no avail. I also wondered whether I was suffering from a local bandwidth restriction, but I had reset my broadband router 30 minutes before the call started, and I was the only person in my house at that time.

The next suggestion offered to me was to exit the Hangout and re-enter it. Maybe that would fix things.

But this is where I see a deeper issue with the way Hangouts On Air presently work.

From my experience (though I’ll be delighted if people can tell me otherwise), when the person who started the Hangout On Air exits the event, the whole event shuts down. The other panellists can exit and rejoin without terminating the event; not so for the host.

By the time I found out about the video uplink problem, I had already published the URL of where the YouTube of the Hangout would be broadcast. After starting the Hangout On Air (but before discovering the problem with my video feed), I had copied this URL to quite a few different places on social media – Meetup.com, Facebook, etc. I knew that people were already watching the event. If I exited the Hangout, to see if that would get the video uplink working again, we would have had to start a new Hangout, which would have had a different YouTube URL. I would have had to manually update all these social networking pages.

I can imagine two possible solutions to this – but I don’t think either is available yet, right?

  1. There may be a mechanism for the host to leave the Hangout On Air, without that Hangout terminating
  2. There may be a mechanism for something like a URL redirector to work, even for a second Hangout instance, which replaces a previous instance. The same URL would work for two different Hangouts.
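In the absence of solution #2, a stable link can be approximated today by publishing a URL on a server you control, and repointing it whenever a new Hangout instance replaces the old one. A minimal sketch in Python (the target URL is a placeholder, not a real broadcast, and the port is arbitrary):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# The current broadcast URL. Update this value (or reload it from a file)
# if the Hangout has to be restarted under a new YouTube URL.
TARGET_URL = "https://www.youtube.com/watch?v=PLACEHOLDER"

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 302 = temporary redirect, so browsers keep re-checking the
        # published link rather than caching the old target.
        self.send_response(302)
        self.send_header("Location", TARGET_URL)
        self.end_headers()

def serve(port=8000):
    """Run the redirector; share http://yourserver:8000/ instead of the YouTube URL."""
    HTTPServer(("", port), RedirectHandler).serve_forever()
```

You would then share the redirector's address on Meetup.com, Facebook and so on, and only ever need to change the target in one place.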

Incidentally, in terms of URLs for the Hangout, note that there are at least three different such URLs:

  1. The URL of the “inside” of the Hangout, which the host can share with panellists to allow them to join it
  2. The URL of the Google+ window where the Hangout broadcast runs
  3. The URL of the YouTube window where the Hangout broadcast runs.

As far as I know, all three URLs change when a Hangout is terminated and restarted. What’s more, #1 and #3 are created when the Hangout starts, even before it switches into Broadcast mode, whereas #2 is only available when the host presses the “Start broadcasting” button.

In short, it’s a pretty complicated state of affairs. I presume that Google are hard at work to simplify matters…

To look on the positive side, one outcome that I feared (as I mentioned previously) didn’t come to pass. That outcome was my laptop over-heating. Instead, according to the CPU temperature monitor widget that I run on my laptop, the temperature remained comfortable throughout (reaching the 70s Centigrade, but staying well short of the 100 degree value which triggers an instant shutdown). I imagine that, because no video uplink was taking place, there was no strong CPU load on my laptop. I’ll have to wait to see what happens next time.

After all, over-heating is another example of something that might cause a Hangout host to want to temporarily exit the Hangout, without bringing the whole event to a premature end. There are surely other examples as well.
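For anyone who wants to watch the temperature programmatically rather than via a desktop widget, here is a hedged sketch. It assumes the Linux sysfs convention (a plain-text file holding millidegrees); my own laptop runs Windows, where a different API would be needed, and the function names are mine:

```python
def read_cpu_temp(path="/sys/class/thermal/thermal_zone0/temp"):
    """Read a CPU temperature in degrees Celsius.

    Assumes the Linux sysfs convention: a plain-text file containing
    the temperature in millidegrees, e.g. "72000" for 72.0 C.
    """
    with open(path) as f:
        return int(f.read().strip()) / 1000.0

def temperature_status(temp_c, warn_at=85.0, shutdown_at=100.0):
    # Warn well before the 100 degree value that triggers
    # an instant hardware shutdown.
    if temp_c >= shutdown_at:
        return "critical"
    return "warning" if temp_c >= warn_at else "ok"
```

Polling `read_cpu_temp()` once a minute during a broadcast would give early warning before the hardware cut-off is reached.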

26 August 2012

Yoga, mindfulness, science, and human progress

Filed under: books, change, culture, Google, medicine, science, yoga — David Wood @ 12:07 am

Friday’s Financial Times carried a major article “The mind business“:

Yoga, meditation, ‘mindfulness’ – why some of the west’s biggest companies are embracing eastern spirituality

The article describes a seven-year long culture transformation initiative within business giant General Mills, as an example of a growing wave of corporate interest in the disciplines of yoga, meditation, and ‘mindfulness’. It also mentions similar initiatives at Target, First Direct, Aetna, and Google, among others.

The article quotes Professor William George, the former CEO and Chairman of the Board of Medtronic, who is an intelligent fan of mindfulness. In a Harvard Business Review interview, Professor George makes the following points:

Mindfulness is a state of being fully present, aware of oneself and other people, and sensitive to one’s reactions to stressful situations. Leaders who are mindful tend to be more effective in understanding and relating to others, and motivating them toward shared goals. Hence, they become more effective in leadership roles…

Leaders with low emotional intelligence (EQ) often lack self-awareness and self-compassion, which can lead to a lack of self-regulation. This also makes it very difficult for them to feel compassion and empathy for others. Thus, they struggle to establish sustainable, authentic relationships.

Leaders who do not take time for introspection and reflection may be vulnerable to being seduced by external rewards, such as power, money, and recognition. Or they may feel a need to appear so perfect to others that they cannot admit vulnerabilities and acknowledge mistakes. Some of the recent difficulties of Hewlett-Packard, British Petroleum, CEOs of failed Wall Street firms, and dozens of leaders who failed in the post-Enron era are examples of this…

Public awareness of ‘mindfulness’ has recently received a significant boost from the publication of a book by one of Google’s earliest employees, Chade-Meng Tan. The book’s title is smart, playing on Google’s re-invention and dominance of the Search business: “Search inside yourself“. The sub-title of the book is both provocative and playful: The unexpected path to achieving success, happiness (and world peace).

The book’s website claims,

Meng has distilled emotional intelligence into a set of practical and proven tools and skills that anyone can learn and develop. Created in collaboration with a Zen master, a CEO, a Stanford University scientist, and Daniel Goleman (the guy who literally wrote the book on emotional intelligence), this program is grounded in science and expressed in a way that even a skeptical, compulsively pragmatic, engineering-oriented brain like Meng’s can process…

It’s playful, but it’s also serious. It’s a great idea to re-express the ideas of mindfulness in ways that software engineers find interesting and compelling. It uses the language of algorithms – familiar to all software engineers – to discuss techniques for improved mental performance.

(Aside: Meng has a remarkable gallery of photographs of himself alongside industry titans, leading politicians, media stars, famous book authors, and others. He’s impressively well connected.)

But does this stuff work?

Sure, these exercises can leave people feeling good, but do concrete long-term effects persist?

These are big questions, and for now, I’d like to zero in to a question that’s (marginally) smaller. I’ll leave further discussion about mindfulness and meditation for another occasion, and look now at the yoga side of this grand endeavour. Does yoga work?

My reason for focussing on the yoga aspect is that I can speak with more confidence about yoga than about mindfulness or meditation. That’s because I’ve been attending yoga classes, semi-regularly, for nearly 24 months. What I’ve experienced in these classes, and my discussions with fellow participants, has prompted me to read more, to try to make sense of what I’ve seen and heard.

I kept hearing about one particular book about yoga, “The Science of Yoga: The Risks and the Rewards“, written by Pulitzer prize-winning New York Times science journalist William J. Broad.

Broad is no newcomer to yoga – he has been practising the discipline since 1970. As he explains in the Acknowledgements section of the book, he initially thought it would take him nine months to write it, but it turned into a five-year project. The result shows – the book bristles with references to over a century’s worth of research, carried out all over the globe.

“The Science of Yoga” has received a great deal of criticism, especially from within the yoga community itself. Don’t let these criticisms put you off reading the book. It’s a mine of useful information.

The book has been criticised because of its less-than-reverential approach to many of the pioneers of yoga, as well as to some contemporary yoga leaders. The book also punctures several widespread myths about yoga – including claims made in many popular books. Some of these myths are enumerated in a handy review of Broad’s book by Liz Neporent, “What Yoga Can—and Can’t—Do: A look at the benefits and limitations of this popular, mind-body practice“:

  1. Yoga is a good cardiovascular workout
  2. Yoga boosts metabolism
  3. Yoga floods your body with oxygen
  4. Yoga doesn’t cause injuries
  5. Yoga is good for flexibility and balance
  6. Yoga improves mood
  7. Yoga is good for your brain
  8. Yoga improves your sex life

It turns out that four of these eight claims are strongly contradicted by growing scientific evidence. On the other hand, the other four are strongly supported. (I’ll leave you to do your own reading to find out which are which…)

The analysis Broad assembles doesn’t just point to correlations and statistics. It explains underlying mechanisms, so far as they are presently understood. In other words, it covers, not just the fact that yoga works, but why it works.

As well as summarising the scientific investigations that have been carried out regarding yoga, Broad provides lots of insight into the history of yoga – puncturing several more myths along the way. (For example, there’s no evidence that the popular “Sun salutation” exercises existed prior to the twentieth century. And advanced yogis aren’t actually able to stop their hearts.)

As you can see, there’s lots of good news here, for yoga enthusiasts, alongside some warnings about significant dangers.

Broad is convinced that yoga, carefully prescribed to the specific needs of individuals, can work wonders in curing various physical ailments. Broad’s discussions with Loren Fishman MD, in the chapter entitled “Healing”, form a great high point in the book. Fishman is an example of someone who immersed himself in the study of medicine after already learning about yoga. (Fishman learned yoga from none other than BKS Iyengar, whom he travelled to Pune, India, to meet in 1973. Iyengar comes out well in Broad’s book, although Broad does find some aspects of Iyengar’s writing to be questionable. Full disclosure: the type of yoga I personally practice is Iyengar.)

For example, Fishman described yoga’s applicability to treat osteoporosis, the bone disease, which in turn “lies behind millions of fractures of the hip, spine, and wrist”:

Yoga stretching… works beautifully to stimulate the rebuilding of the bone. It happens at the molecular level. Stress on a bone prompts it to grow denser and stronger in the way that best counteracts the stress. Fishman said that for three years he had been conducting a study to find out which poses worked best to stimulate the rejuvenation.

“It’s a big thing”, he said of the disease. “Two hundred million women in the world have it and most can’t afford the drugs”, some of which produce serious side effects. By contrast, Fishman enthused, “Yoga is free”…

In addition to its beneficial effect on the body, yoga can have several key beneficial effects on the mind – helping in the treatment of depression, calming the spirit, and boosting creativity.

In the epilogue to the book, Broad envisions two possible futures for yoga. Here’s his description of the undesirable future:

In one scenario, the fog has thickened as competing groups and corporations vie for market share among the bewildered. The chains offer their styles while spiritual groups offer theirs, with experts from different groups clashing over differing claims…

The disputes resemble the old disagreements of religion. Factionalism has soared… hundreds of brands, all claiming unique and often contradictory virtues.

Yet, for all the activity, yoga makes only a small contribution to global health care because most of the claims go unproven in the court of medical science. The general public sees yoga mainly as a cult that corporations try to exploit.

But a much more positive outcome is also possible:

In the other scenario, yoga has gone mainstream and plays an important role in society. A comprehensive program of scientific study… produced a strong consensus on where yoga fails and where it succeeds. Colleges of yoga science now abound. Yoga doctors are accepted members of the establishment, their natural therapies often considered gentler and more reliable than pills. Yoga classes are taught by certified instructors whose training is as rigorous as that of physical therapists. Yoga retreats foster art and innovation, conflict resolution, and serious negotiating…

Broad clearly recognises that his own book is far from the last word on many of the topics he covers. In many cases, more research is needed, to isolate likely chains between causes and effects. He frequently refers to research carried out within the 12-24 months prior to the book’s publication. We can expect ongoing research to bring additional clarity – for example, to shed more light on the fascinating area of the scientific underpinning of “kundalini awakening“. Broad comments,

The science of yoga has only just begun. In my judgement, the topic has such depth and resonance that the voyage of discovery will go on for centuries…

Studies… will spread to investigations ever more central to life and living, to questions of insight and ecstasy, to being and consciousness. Ultimately, the social understanding that follows in the wake of scientific discovery will address issues of human evolution and what we decide to become as a species.

I say “amen” to all this, but with two clarifications of my own:

  • Alongside a deeper understanding and wider application of yoga, improving human well-being, I expect leaps and bounds of improvement in “hard technology” fields such as genetic analysis, personalised medicines, stem cell therapy, nano-surgery, and artificial organs – which will work in concert with yoga and mindfulness to have an even more dramatic effect
  • Because I see the pace of scientific improvement as increasing, I think the most significant gains in knowledge are likely to happen in the next few decades, rather than stretching out over centuries.

Footnote: Earlier this week, William J Broad featured in a fifteen minute interview in a public radio broadcast. In this interview, Broad describes the adverse reaction of “the yoga industrial complex” to his book – “they hate this book, because it’s exploding myths… there’s an economic incentive for people to only focus on the good and deny the bad”.

30 June 2012

My reasonably smooth upgrade to Ice Cream Sandwich

Filed under: Android, change, compatibility, Google, Samsung, WordPress — David Wood @ 9:42 pm

I’ve been looking forward to the new experiences that would be unlocked by installing “Ice Cream Sandwich” (Android 4.0) in my Samsung Galaxy Note, in place of the Gingerbread (Android 2.3) it originally contained. But I’ve been delaying the upgrade.

I’m a big fan of new technology, but my experience teaches me that upgrades often bring disruption as well as progress. Upgrades of complex software systems often unintentionally break functionality, due to unexpected incompatibilities with the old platform. And even when functionality is improved, it can take some time for users to learn a new interface: old habits have to be unlearned, and new “intuitions” acquired. That’s why I’m sometimes a technology laggard, as well as a technology enthusiast.

But today is the day. The new platform is mature, and is no longer “bleeding edge”. It’s been on the market for a few months. Several of my Accenture work colleagues have already upgraded the Galaxy Notes they use, without reporting any issues. And some of the applications I now want to test (applications developed by work colleagues) rely on functionality that is present only in the newer platform – such as improved Bluetooth. So this morning I resolved: let’s do it today.

In summary: the experience was smooth, although not without glitches. So far, I am pleased with the outcome, although I’ve experienced surprises along the way.

The first surprise was that I had to go looking for the upgrade. I had expected I would automatically be notified that a new version was ready. After all, a similar system works fine, to automatically notify me of the availability of new versions of the apps I’ve installed. And – see the following screenshot – my phone had the setting “auto update: check for updates automatically” enabled.

However, my experience was that I had to explicitly press the button “Check for updates”.

That button helpfully recommended me to ensure that I was on a wifi network. Good point.

The update would happen in two stages:

  1. First, the new version of the software would be downloaded – all 349.38MB of it
  2. Second, the new software would be installed, in place of the old.

The download system estimated that it would take 16 minutes to download the new version. It told me I could keep on using the device in the meantime, with the download proceeding in background. Having kicked off the download, and watched the first 10% of it complete fine, I switched tasks and started browsing. In retrospect, that was a mistake.

As the download proceeded, I read some tweets, and followed links in tweets to Internet pages. One link took me into someone’s Google Plus page, and another link from there took me to yet another page. (By this stage, the download was about 60% complete – I was keeping an eye on it via a notification icon in the top bar of the screen.) I then tried pressing the Back button to undo the stack of links. But as sometimes happens, Back didn’t work cleanly. It took me “back” from one page to the same page, with a minor wiggle in between. This kind of thing sometimes happens when a link includes a redirection.

This is where personal habit took over. In such cases, I have fallen into the habit of hammering the Back key several times quickly in succession. And that seemed to work – I ended up back in the Twitter application. But a few minutes later, I realised that the upgrade notifier icon had disappeared. And the download was nowhere to be found. I think that one of the Back buttons must have ended up going to the download window, cancelling it. Woops.

No problem, I thought, I’ll restart the download. It will presumably continue from where it had been interrupted. But alas, no, it started at the beginning again.
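The behaviour I had hoped for – continuing from the interrupted point – is something HTTP itself supports via the Range request header, provided the server cooperates. A sketch of the idea in Python (the function and parameter names are mine, for illustration):

```python
import os
import urllib.request

def resume_download(url, dest, chunk_size=64 * 1024):
    """Download url to dest, continuing from any partial file already on disk."""
    start = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url)
    if start:
        # Ask the server for only the bytes we don't yet have.
        req.add_header("Range", f"bytes={start}-")
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content: the server resumed, so append to the file.
        # 200 OK: the server ignored the Range header, so start from scratch.
        mode = "ab" if resp.status == 206 else "wb"
        with open(dest, mode) as out:
            while chunk := resp.read(chunk_size):
                out.write(chunk)
```

Whether Samsung’s update servers honour Range requests I don’t know; evidently the client software doesn’t attempt this, since my second download started at zero.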

The second time, I resisted the temptation to multi-task, and let the download complete in splendid isolation. Around 20 minutes later, the download was complete. I thought to myself, Now for the more interesting part…

Before completing the installation, I ensured the mains power lead was plugged in, to avoid any complications of a battery failure half-way through rewriting the operating system part of the phone. At all costs, I did not want to end up with a “bricked” device (a device that cannot restart, and has as much life as a brick).

The upgrade proceeded. The screen changed several times during the process. At one stage, a progress indicator seemed to become stuck at around 80% complete for ages – so that I wondered if the system had crashed – before finally slowly inching forwards again.

Once the phone restarted, it ran through yet more steps of the upgrade. It told me it was “Optimising application 1 of 82” … “Optimising application 82 of 82”. Then it said it was “Upgrading Contacts database” and “Upgrading Agenda database”. Clearly a lot was happening behind the scenes.

Finally it showed the familiar SIM unlock screen. Except that it wasn’t exactly the same SIM unlock screen as before – there were small but noticeable changes in the layout. Likewise with the device unlock: the ‘OK’ button is now in a different position from before. My fingers will need to learn a slightly different physical sequence, to unlock the device.

A bigger surprise was that all my customisations to the seven different home screens were lost – they had all been reset to defaults. It’s no big deal – I can gradually change the screens back to what I personally find convenient. And a good clean out is probably not a bad idea.

There are lots of pleasant surprises too. For example, there’s a handy new “Restart” addition to the dialog that is shown when the power switch is held down:

Here’s another example of an unexpected change: I found by trial and error that screenshots are now stored in a different directory on the phone – \phone\pictures\screenshots rather than \phone\screencapture – and are (it seems) stored in a different way: they’re not written to disk until some indeterminate time after the screen capture has finished.

That change caught me out twice over: first, because I could not find the screenshots (as copied into this blogpost) in the place I was accustomed to finding them, and second, because the files I tried to upload into WordPress were zero bytes in size. (WordPress helpfully advised me to “upload something more substantial”.)
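A defensive workaround for the delayed write would be to poll until the screenshot file exists and is no longer zero bytes before trying to upload it. A sketch (the function name and parameters are mine):

```python
import os
import time

def wait_for_file(path, timeout=30.0, poll=0.5):
    """Wait until path exists and is non-empty, i.e. has been flushed to disk.

    Returns the file size in bytes, or raises TimeoutError if the file
    never appears (or stays empty) within the timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path) and os.path.getsize(path) > 0:
            return os.path.getsize(path)
        time.sleep(poll)
    raise TimeoutError(f"{path} still missing or empty after {timeout}s")
```

This would have saved me from WordPress’s zero-byte complaint, at the cost of a short wait after each capture.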

In case this sounds like a litany of complaints, let me hasten to clarify that I find the entire process highly impressive. A huge quantity of software has been transferred wirelessly onto my phone, including countless changes from before. It’s a technology miracle.

What’s more, I didn’t pay anything for this upgrade. It’s a free technology miracle.

But I am glad I waited until the weekend before embarking on this upgrade, rather than trying to squeeze it into the middle of a busy work schedule. Significant change deserves significant time.

15 April 2010

Accelerating automation and the future of work

Filed under: AGI, Economics, futurist, Google, politics, regulation, robots — David Wood @ 2:45 am

London is full of pleasant surprises.

Yesterday evening, I travelled to The Book Club in Shoreditch, EC2A, and made my way to the social area downstairs.  What’s your name? asked the person at the door.  I gave my name, and in return received a stick-on badge saying

Hi, I’m David.

Talk to me about the future of humanity!

I was impressed.  How do they know I like to talk to people about the future of humanity?

Then I remembered that the whole event I was attending was under the aegis of a newly formed group calling itself “Future Human“.  It was their third meeting, over the course of just a few weeks – but the first I had heard about (and decided to attend).  Everyone’s badge had the same message.  About 120 people crammed into the downstairs room – making it standing room only (since there were only around 60 seats).  Apart from the shortage of seats, the event was well run, with good use of roaming mikes from the floor.

The event started with a quick-fire entertaining presentation by author and sci-fi expert Sam Jordison.  His opening question was blunt:

What can you do that a computer can’t do?

He then listed lots of occupations from the past which technology had rendered obsolete.  Since one of my grandfathers was the village blacksmith, I found a personal resonance with this point.  It will soon be the same for many existing professions, Sam said: computers are becoming better and better at all sorts of tasks which previously would have required creative human input.  Journalism is particularly under threat.  Likewise accountancy.  And so on, and so on.

In general terms, that’s a thesis I agree with.  For example, I anticipate a time before long when human drivers will be replaced by safer robot alternatives.

I quibble with the implication that, as existing jobs are automated, there will be no jobs left for humans to do.  Instead, I see that lots of new occupations will become important.  “Shape of Jobs to Come”, a report (PDF) by Fast Future Research, describes 20 jobs that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

(See the original report for explanations of some of these unusual occupation names!)

In other words, as technology improves to remove existing occupations, new occupations will become significant – occupations that build in unpredictable ways on top of new technology.

But only up to a point.  In the larger picture, I agree with Sam’s point that even these new jobs will quickly come under the scope of rapidly improving automation.  The lifetime of occupations will shorten and shorten.  And people will typically spend fewer hours working each week (on paid tasks).

Is this a worry? Yes, if we assume that we need to work long hours, to justify our existence, or to earn sufficient income to look after our families.  But I disagree with these assumptions. Improved technology, wisely managed, should be able to result, not just in less labour left over for humans to do, but also in great material abundance – plenty of energy, food, and other resources for everyone.  We’ll become able – at last – to spend more of our time on activities that we deeply enjoy.

The panel discussion that followed touched on many of these points. The panellists – Peter Kirwan from Wired, Victor Henning from Mendeley, and Carsten Sorensen and Jannis Kallinikos from the London School of Economics – sounded lots of notes of optimism:

  • We shouldn’t create unnecessary distinctions between “human” and “machine”.  After all, humans are kinds of machines too (“meat machines“);
  • The best kind of intelligence combines human elements and machine elements – in what Google have called “hybrid intelligence“;
  • Rather than worrying about computers displacing humans, we can envisage computers augmenting humans;
  • In case computers become troublesome, we should be able to regulate them, or even to switch them off.

Again, in general terms, these are points I agree with.  However, I believe these tasks will be much harder to accomplish than the panel implied. To that extent, I believe that the panel were too optimistic.

After all, if we can barely regulate rapidly changing financial systems, we’ll surely find it even harder to regulate rapidly changing AI systems.  Before we’ve been able to work out if such-and-such an automated system is an improvement on its predecessors, that system may have caused too many rapid irreversible changes.

Worse, there could be a hard-to-estimate “critical mass” effect.  Rapidly accumulating intelligent automation is potentially akin to accumulating nuclear material until it unexpectedly reaches an irreversible critical mass.  The resulting “super cloud” system will presumably state very convincing arguments to us, for why such and such changes in regulations make great sense.  The result could be outstandingly good – but equally, it could be outstandingly bad.

Moreover, it’s likely to prove very hard to “switch off the Internet” (or “switch off Google”).  We’ll be so dependent on the Internet that we’ll be unable to disconnect it, even though we recognise there are bad consequences.

If all of this happens in slow motion, we would be OK.  We’d be able to review it and debug it in real time.  However, the lesson from the recent economic crisis is that these changes can take place almost too quickly for human governments to intervene.  That’s why we need to ensure, ahead of time, that we have a good understanding of what’s happening.  And that’s why there should be lots more discussions of the sort that took place at Future Human last night.

The final question from the floor raised a great point: why isn’t this whole subject receiving prominence in the current UK general election debates?  My answer: It’s down to those of us who do see the coming problems to ensure that the issues get escalated appropriately.

Footnote: Regular readers will not be surprised if I point out, at this stage, that many of these same topics will be covered in the Humanity+ UK2010 event happening in Conway Hall, Holborn, London, on Saturday 24 April.  The panellists at the Future Human event were good, but I believe that the H+UK speakers will be even better!

2 March 2010

Major new challenges to receive X PRIZE backing

Filed under: catalysts, challenge, futurist, Genetic Engineering, Google, grants, innovation, medicine, space — David Wood @ 7:16 pm

The X PRIZE Foundation has an audacious vision.

On its website, it describes itself as follows:

The X PRIZE Foundation is an educational nonprofit organization whose mission is to create radical breakthroughs for the benefit of humanity, thereby inspiring the formation of new industries, jobs and the revitalization of markets that are currently stuck.

The foundation can point to the success of its initial prize, the Ansari X PRIZE.  This was a $10M prize to be awarded to the first non-government organization to launch a reusable manned spacecraft into space twice within two weeks.  This prize was announced in May 1996 and was won in October 2004, by the Tier One project using the experimental spaceplane SpaceShipOne.

Other announced prizes are driving research and development in a number of breakthrough areas:

The Archon X PRIZE will award $10 million to the first privately funded team to accurately sequence 100 human genomes in just 10 days.  Renowned physicist Stephen Hawking explains his support for this prize:

You may know that I am suffering from what is known as Amyotrophic Lateral Sclerosis (ALS), or Lou Gehrig’s Disease, which is thought to have a genetic component to its origin. It is for this reason that I am a supporter of the $10M Archon X PRIZE for Genomics to drive rapid human genome sequencing. This prize and the resulting technology can help bring about an era of personalized medicine. It is my sincere hope that the Archon X PRIZE for Genomics can help drive breakthroughs in diseases like ALS at the same time that future X PRIZEs for space travel help humanity to become a galactic species.

The Google Lunar X PRIZE is a $30 million competition for the first privately funded team to send a robot to the moon, travel 500 meters and transmit video, images and data back to the Earth.  Peter Diamandis, Chairman and CEO of the X PRIZE Foundation, provided some context in a recent Wall Street Journal article:

Government agencies have dominated space exploration for three decades. But in a new plan unveiled in President Barack Obama’s 2011 budget earlier this month, a new player has taken center stage: American capitalism and entrepreneurship. The plan lays the foundation for the future Google, Cisco and Apple of space to be born, drive job creation and open the cosmos for the rest of us.

Two fundamental realities now exist that will drive space exploration forward. First, private capital is seeing space as a good investment, willing to fund individuals who are passionate about exploring space, for adventure as well as profit. What was once affordable only by nations can now be lucrative, public-private partnerships.

Second, companies and investors are realizing that everything we hold of value—metals, minerals, energy and real estate—are in near-infinite quantities in space. As space transportation and operations become more affordable, what was once seen as a wasteland will become the next gold rush. Alaska serves as an excellent analogy. Once thought of as “Seward’s Folly” (Secretary of State William Seward was criticized for overpaying the sum of $7.2 million to the Russians for the territory in 1867), Alaska has since become a billion-dollar economy.

The same will hold true for space. For example, there are millions of asteroids of different sizes and composition flying throughout space. One category, known as S-type, is composed of iron, magnesium silicates and a variety of other metals, including cobalt and platinum. An average half-kilometer S-type asteroid is worth more than $20 trillion.
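That $20 trillion headline invites a back-of-envelope check. The sketch below is purely illustrative: the assumed density, metal fractions and prices are my own guesses, not figures from the article.

```python
import math

# Back-of-envelope value of a 500 m S-type asteroid.
# Every figure below is an illustrative assumption, not measured data.
DIAMETER_M = 500.0
DENSITY_KG_M3 = 2700.0  # assumed typical density for a stony asteroid

radius = DIAMETER_M / 2
volume_m3 = (4.0 / 3.0) * math.pi * radius ** 3
mass_kg = volume_m3 * DENSITY_KG_M3

# Assumed composition fractions (by mass) and rough prices in USD per kg.
components = {
    "iron/nickel":    (0.20,    0.5),      # bulk metal, cheap per kg
    "cobalt":         (0.005,   30.0),
    "platinum-group": (0.00001, 30000.0),  # tiny fraction, huge price
}

total_value = sum(mass_kg * frac * price for frac, price in components.values())
print(f"mass ≈ {mass_kg:.3e} kg, naive value ≈ ${total_value:.3e}")
```

With these deliberately conservative guesses the naive total comes out near $10^11 – two orders of magnitude below the quoted figure – which mainly shows how sensitive such estimates are to the assumed platinum-group fraction and metal prices.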

Technology is reaching a critical point. Moore’s Law has given us exponential growth in computing technology, which has led to exponential growth in nearly every other technological industry. Breakthroughs in rocket propulsion will allow us to go farther, faster and more safely into space…

The Progressive Automotive X PRIZE seeks “to inspire a new generation of viable, safe, affordable and super fuel efficient vehicles that people want to buy“.  $10 million in prizes will be awarded in September 2010 to the teams that win a rigorous stage competition for clean, production-capable vehicles that exceed 100 MPG energy equivalent (MPGe).  Over 40 teams from 11 countries are currently entered in the competition.
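The “100 MPG energy equivalent” metric deserves a quick unpacking. MPGe expresses any vehicle’s energy consumption in terms of the energy in a US gallon of gasoline; the 33.7 kWh figure below is the convention later adopted by the EPA, assumed here since the article doesn’t define it.

```python
KWH_PER_GALLON = 33.7  # assumed gasoline-gallon energy equivalent (EPA convention)

def mpge(kwh_per_100_miles: float) -> float:
    """Miles per gallon-equivalent, given consumption in kWh per 100 miles."""
    kwh_per_mile = kwh_per_100_miles / 100.0
    return KWH_PER_GALLON / kwh_per_mile

# A hypothetical electric car using 30 kWh per 100 miles:
print(round(mpge(30.0)))  # 112 MPGe under these assumptions
```

So a car clearing the competition’s 100 MPGe bar must use no more than about 33.7 kWh of energy per 100 miles, whatever its fuel.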

Forthcoming new X PRIZEs

The best may still be to come.

It now appears that a series of new X PRIZEs are about to be announced.  CNET News writer Daniel Terdiman reports a fascinating interview with Peter Diamandis, in his article “X Prize group sets sights on next challenges (Q&A)“.

The article is well worth reading in its entirety.  Here are just a few highlights:

On May 15, at a gala fundraising event to be held at George Lucas’ Letterman Digital Arts Center in San Francisco, X Prize Foundation Chairman and CEO Peter Diamandis, along with Google founders Larry Page and Sergey Brin, and “Avatar” director James Cameron, will unveil their five-year vision for the famous awards…

The foundation …  is focusing on several potential new prizes that could change the world of medicine, oceanic exploration, and human transport.

The first is the so-called AI Physician X Prize, which will go to a team that designs an artificial intelligence system capable of providing a diagnosis equal to or better than 10 board-certified doctors.

The second is the Autonomous Automobile X Prize, which will go to the first team to design a car that can beat a top-seeded driver in a Grand Prix race.

The third would go to a team that can generate an organ from a terminal patient’s stem cells, transplant the organ [a lung, liver, or heart] into the patient, and have them live for a year.

And the fourth would reward a team that can design a deep-sea submersible capable of allowing scientists to gather complex data on the ocean floor.

Diamandis explains the potential outcome of the AI Physician Prize:

The implications of that are that by the end of 2013, 80 percent of the world’s populace will have a cell phone, and anyone with a cell phone can call this AI and the AI can speak Mandarin, Spanish, Swahili, any language, and anyone with a cell phone then has medical advice at the level of a board certified doctor, and it’s a game change.

Even more new X PRIZEs

Details of the process of developing new X PRIZEs are described on the foundation’s website.  New X PRIZEs are guided by the following principles:

  • We create prizes that result in innovation that makes a lasting impact. Although a technological breakthrough can meet this criterion, so do prizes which inspire teams to use existing technologies, knowledge or systems in more effective ways.
  • Prizes are designed to generate popular interest through the prize lifecycle: enrollment, competition, attempts (both successful and unsuccessful) and post-completion…
  • Prizes result in financial leverage. For a prize to be successful, it should generate outside investment from competitors at least 5-10 times the prize purse size. The greater the leverage, the better return on investment for our prize donors and partners.
  • Prizes incorporate both elements of technological innovation as well as successful “real world” deployment. An invention which is too costly or too inconvenient to deploy widely will not win a prize.
  • Prizes engage multidisciplinary innovators which would otherwise be unlikely to tackle the problems that the prize is designed to address.

The backing provided to the foundation by the Google founders and by James Cameron provides added momentum to what is already an inspirational initiative and a great catalyst for innovation.

2 February 2010

Cutting edge computing science research for the real-world

Filed under: architecture, computer science, Google, universities — David Wood @ 11:54 pm

It is an amazing time in computer science.

This is a field that, while it is about 50 years old, has more opportunity today than it has ever had, by a large factor.

These were among the opening statements made by Google’s VP of Research & Special Initiatives, Alfred Z. Spector, in a colloquium a couple of weeks ago to the Computer Science & Engineering faculty of the University of Washington.  A video of the presentation is available from the University of Washington CSE website.

I previously mentioned this video at the tail end of a previous blogpost, “In praise of hybrid AI“.  The video is full of interesting comments about the direction of computer science.

As context, Spector mentioned “four application areas in flux today“:

  • publishing, education, healthcare, and government.

He also mentioned three “systems areas evolving“:

  • ubiquitous high performance networking, distributed computing, and new end-user devices.

This provided a prelude to “three truly big results brewing“:

  1. Totally transparent processing
  2. Ideal distributed computing
  3. Hybrid, not Artificial Intelligence

It’s worth highlighting some points about each of these “big results”.  In all cases, Google seek to follow a quantitative approach, looking at large sets of data, and checking results as systems are incrementally changed.  As Spector said, “more data is better…”

1. Totally transparent processing

Spector spelt out a vision encompassing each of four dimensions: processing should be “effectively transparent”:

  • Across all types of end-user access devices,
  • across all human languages (both formal and informal),
  • across all the modes of  information (eg text, images, audio, video, sensor data, maps, timelines),
  • and across every body of knowledge (both online and offline).

In this vision:

  • There should be “no dependence or occlusions because something has got in the way” or is in the wrong format;
  • There should be “fluidity across all these forms”.

Some subsets of this grand vision include solving “voice to text”, “image recognition”, “find similar images”, and “language translation”.  Spector claimed that progress was being made across many of these sub-problems.

2. Ideal distributed computing

Spector pointed out that

Distributed computing is 30 years old but, until recently, not very deeply understood;

Understanding of (truly) large-scale, open, integrated distributed systems was limited.

Particular aspects of distributed systems that had not been deeply understood included:

  • Requirements for systems in which the application needs (and APIs) are not known in advance;
  • Systems with 10^6 or even 10^7 processes, with consequent enormous complexity.

Spector claimed that – as in the case of transparent processing – “there has been lots of incremental progress done with distributed systems, picking away at problem areas”.

Improvements that can be expected for huge distributed systems of computers, arising from computer science research, include:

  • Online system optimisation;
  • Data checking – verifying consistency and validating data/config files;
  • Dynamic repair – eg find the closest feasible solution after an incident (computer broke down);
  • Better efficiency in energy usage of these systems;
  • Improvements in managing security and privacy.
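The “data checking” item merits a small illustration: validating configuration before it propagates across a huge fleet is one of the cheapest ways to avoid incidents. The schema and configs below are invented; this is a sketch of the idea, not any real system.

```python
# Minimal sketch of config validation: check each key's presence, type,
# and value range against a tiny hand-written schema. All names invented.
SCHEMA = {
    "replicas":   (int, lambda v: v >= 1),
    "region":     (str, lambda v: v in {"us", "eu", "asia"}),
    "timeout_ms": (int, lambda v: 0 < v <= 60_000),
}

def validate(config):
    """Return a list of human-readable problems; empty means the config is OK."""
    errors = []
    for key, (expected_type, check) in SCHEMA.items():
        if key not in config:
            errors.append(f"missing key: {key}")
        elif not isinstance(config[key], expected_type):
            errors.append(f"{key}: expected {expected_type.__name__}")
        elif not check(config[key]):
            errors.append(f"{key}: value {config[key]!r} out of range")
    return errors

print(validate({"replicas": 3, "region": "eu", "timeout_ms": 500}))  # []
print(validate({"replicas": 0, "region": "mars"}))  # three problems reported
```

Rejecting a bad config at submission time is far cheaper than the “dynamic repair” needed after it has already been pushed to a million processes.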

3. Hybrid, not Artificial Intelligence

Hybrid intelligence is like an extension of distributed computing: people become part of the system that works out the answers.

Spector said that Google’s approach was:

To see if some problem can be solved by people and computers working together.

As a familiar example, Search doesn’t try to offer the user only the one best result.  It provides a set of results, and relies on the user picking answers from the list generated by the computer.

Hybrid intelligence can be contrasted with AI (artificial intelligence):

  • AI aims at creating computers as capable as people, often in very broad problem domains.  While progress has been made, this has turned out to be very challenging;
  • Instead, it has proven more useful for computers to extend the capabilities of people rather than operate in isolation, and to focus on more specific problem areas.

Computer systems can learn from feedback from users, with powerful virtuous circles.  Spector said that aggregation of user responses has proven extremely valuable in learning, such as:

  • feedback in ranking of results, or in prioritising spelling correction options;
  • semi-supervised image content analysis / speech recognition / etc.
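The feedback loop Spector describes can be sketched in a few lines: blend an algorithmic relevance score with aggregated click counts. Everything below – the documents, scores, blend weight and log smoothing – is an invented illustration, not Google’s actual method.

```python
import math
from collections import Counter

# Hypothetical base relevance scores from an algorithmic ranker.
base_scores = {"doc_a": 0.70, "doc_b": 0.65, "doc_c": 0.60}

# Aggregated user feedback: which results people actually clicked.
clicks = Counter({"doc_c": 120, "doc_b": 30, "doc_a": 5})

def rerank(base_scores, clicks, weight=0.3):
    """Blend algorithmic scores with log-smoothed, normalised click feedback."""
    max_log = max(math.log1p(clicks[d]) for d in base_scores) or 1.0  # avoid /0
    def score(doc):
        feedback = math.log1p(clicks[doc]) / max_log  # normalise to [0, 1]
        return (1 - weight) * base_scores[doc] + weight * feedback
    return sorted(base_scores, key=score, reverse=True)

print(rerank(base_scores, clicks))  # heavy clicking lifts doc_c to the top
```

The virtuous circle is visible even in this toy: doc_c starts with the lowest algorithmic score, but aggregated user behaviour promotes it above the others.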

(This idea has evolved over time, and was previously known as “The combination hypothesis”: computers would become smarter if different methods of learning can be combined.  See for example the 2003 article “IBM aims to get smart about AI” from a time when Spector worked at IBM.  It’s good to see this idea bearing more and more fruit.)

Selected questions and answers

A couple of the questions raised by the audience at the end of the lecture were particularly interesting.

One questioner asked if Google’s guidelines for research projects specified any “no-go areas” that should be avoided.  Spector answered:

No one wants a creepy computer.  So the rule is … “don’t be creepy”.

(Which is an unusual twist on “don’t be evil”!)

Spelling this out in more detail:

  • Google aim to apply extremely neutral algorithms to ranking and crawling;
  • They want systems that are very responsive to users’ needs, without being in any way creepy;
  • Views on “what is creepy” may change over time (and may be different in different parts of the world).

A second audience member asked if there are risks to pursuing a quantitative, evolutionary approach to computer science problems.  Spector answered:

  • Yes, the research might get stuck in a local maximum;
  • So you can’t do everything “strictly by the numbers”.  But having the numbers available is a great guide.

Ongoing research

As I viewed this video, part of my brain told me that perhaps I should return to an academic life, in the midst of a computer science faculty somewhere in the world.

I share Spector’s conclusion:

It’s a time of unprecedented diversity and fertility in computer science – and amazing challenges abound;

The results from computer science should continue to make the world a better place.

Spector pointed out that key research challenges are published on the Google Research Blog.  Examples he listed included:

  • increasingly fluid partnership between people and computation;
  • fundamental changes in the methods of science;
  • rethinking the database;
  • CS+X, for all X (how Computer Science, CS, can assist and even transform other fields of study, X);
  • computing with ultra-low power (eg just ambient light as a power source).

Stop press: Google’s Focused Research Awards

Coincidentally, I see that Google have today made a new announcement about their support for research in specific areas of computer science, at a small number of universities worldwide.  The four areas of research are:

  • Machine Learning
  • Use of mobile phones as data collection devices for public health and environment monitoring
  • Energy efficiency in computing
  • Privacy.

It looks like “energy efficiency in computing” is receiving the largest amount of funding.  I think that’s probably the right choice!

31 October 2008

Watching Google watching the world

Filed under: books, collaboration, Google, infallibility, mobile data — David Wood @ 6:07 pm

If there was a prize for the best presentation at this week’s Informa “Handsets USA” forum in San Diego, it would have to go to Sumit Agarwal, Product Manager for Mobile from Google. Although there were several other very good talks there, Sumit’s was in a class of its own.

In the first place, Sumit had the chutzpah to run his slides directly on a mobile device – an iPhone – with a camera relaying the contents of the mobile screen to the video projector. Second, the presentation included a number of real-time demos – which worked well, and even the ways in which they failed to work perfectly became a source of more insight for the audience (I’ll come back to this point later). The demos were spread among a number of different mobile devices: an Android G1, the iPhone, and a BlackBerry Bold. (Sumit rather cheekily said that the main reason he carried the Bold was for circumstances in which the G1 and the iPhone run out of battery power.)

One reason the talk oozed authority was because Sumit could dig into actual statistics, collected on Google’s servers.

For example, the presentation included a graph showing the rate of Google search inquiries from mobile phones on different (anonymised) North American network operators. In September 2007, one of the lines started showing an astonishing rhythm, with rapid fluctuations in which the rate of mobile search inquiries jumped up sevenfold – before dropping down again a few days later. The pattern kept repeating, on a weekly basis. Google investigated, and found that the network operator in question had started an experiment with “free data weekends”: data usage would be free of charge on Saturday and Sunday. As Sumit pointed out:

  • The sharp usage spikes showed the latent demand of mobile users for carrying out search enquiries – a demand that was previously being inhibited by fear of high data charges;
  • Even more interesting, this line on the graph, whilst continuing to fluctuate drastically at weekends, also showed a gradual overall upwards curve, finishing up with data usage significantly higher than the national average, even away from weekends;
  • The takeaway message here is that “users get hooked on mobile data”: once they discover how valuable it can be to them, they use it more and more – provided (and this is the kicker) the user experience is good enough.
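A spike pattern like the one Sumit described is straightforward to surface from a daily query-count series. The numbers below are made up to mimic the “free data weekend” effect, and the sketch assumes the series starts on a Monday.

```python
from statistics import mean

# Hypothetical daily search-query counts, Monday-first, two weeks.
# The weekend values jump roughly sevenfold, mimicking "free data weekends".
counts = [100, 105, 98, 102, 110, 700, 720,
          115, 118, 120, 117, 125, 780, 800]

def weekend_weekday_ratio(daily_counts):
    """Ratio of average weekend volume to average weekday volume.

    Assumes the series starts on a Monday (indices 5 and 6 of each
    seven-day block are the weekend).
    """
    weekdays = [c for i, c in enumerate(daily_counts) if i % 7 < 5]
    weekends = [c for i, c in enumerate(daily_counts) if i % 7 >= 5]
    return mean(weekends) / mean(weekdays)

print(f"weekend/weekday ratio ≈ {weekend_weekday_ratio(counts):.1f}")
```

A ratio near 7 on this toy data corresponds to the sevenfold jumps in the graph; the gradual upward drift in the weekday values mirrors the “users get hooked” effect.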

Another interesting statistic involved the requests received by Google’s servers for new “map tiles” to provide to Google maps applications. Sumit said that, every weekend, the demand from mobile devices for map tiles reaches the same level as the demand from fixed devices. Again, this is evidence of strong user interest for mobile services.

As regards the types of textual search queries received: Google classifies all incoming search queries, into categories such as sports, entertainment, news, and so on. Sumit showed spider graphs for the breakdown of search queries into categories. The surprising thing is that the spider graph for mobile-originated search enquiries had a very similar general shape to that for search enquiries from fixed devices. In other words, people seem to want to search for the same sorts of things – in the same proportion of times – regardless of whether they are using fixed devices or mobile ones.
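One simple way to make “very similar general shape” quantitative is to treat each spider graph as a vector of category proportions and compare the vectors, for example with cosine similarity. The categories and proportions here are invented for illustration.

```python
import math

# Invented query-category proportions for mobile and fixed devices
# (each set sums to 1.0). Both dicts must share the same keys.
mobile  = {"sports": 0.20, "entertainment": 0.30, "news": 0.25, "local": 0.25}
desktop = {"sports": 0.22, "entertainment": 0.28, "news": 0.27, "local": 0.23}

def cosine_similarity(p, q):
    """Cosine similarity of two category-proportion vectors (1.0 = same shape)."""
    cats = sorted(p)
    dot = sum(p[c] * q[c] for c in cats)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

print(f"{cosine_similarity(mobile, desktop):.3f}")  # close to 1.0 => similar shape
```

A value close to 1.0, as here, would support the observation that people search for the same sorts of things in roughly the same proportions on mobile and fixed devices.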

It is by monitoring changes in server traffic that Google can determine the impacts of various changes in their applications – and decide where to prioritise their next efforts. For example, when the My Location feature was added to Google’s Mobile Maps application, it had a “stunning impact” on the usage of mobile maps. Apparently (though this could not be known in advance), many users are fascinated to track how their location updates in near real-time on map displays on their mobile devices. And this leads to greater usage of the Google Maps product.

Interspersed among the demos and the statistics, Sumit described elements of Google’s underlying philosophy for success with mobile services:

  • “Ignore the limitations of today”: don’t allow your thinking to be constrained by the shortcomings of present-day devices and networks;
  • “Navigate to where the puck will be”: have the confidence to prepare services that will flourish once the devices and networks improve;
  • “Arm users with the data to make decisions”: instead of limiting what users are allowed to do on their devices, provide them with information about what various applications and services will do, and leave it to the users to decide whether they will install and use individual applications;
  • “Dare to delight” the user, rather than always seeking to ensure order and predictability at all times;
  • “Accept downside”, when experiments occasionally go wrong.

As an example of this last point, there was an amusing moment during one of the (many) demos in the presentation, when two music-playing applications each played music at the same time. Sumit had just finished demoing the remarkable TuneWiki, which allows users to collaborate in supplying, sharing, and correcting lyrics to songs, for a Karaoke-like mobile experience without users having to endure the pain of incorrect lyrics. He next showed an application that searched on YouTube for videos matching a particular piece of music. But TuneWiki continued to play music through the phone speakers whilst the second application was also playing music. Result: audio overlap. Sumit commented that an alternative design philosophy by Google might have ensured that no such audio overlap could occur. But such a constraint would surely impede the wider flow of innovation in mobile applications.

And there was a piece of advice for application developers: “emphasise simplicity”. Sumit demoed the “AroundMe” application by TweakerSoft, as an illustration of how a single simple idea, well executed, can result in large numbers of downloads. (Sumit commented: “this app was written by a single developer … who has probably quintupled his annual income by doing this”.)

Google clearly have a lot going for them. Part of their success is no doubt down to the technical brilliance of their systems. The “emphasise simplicity” message has helped a great deal too. Perhaps their greatest asset is how they have been able to leverage all the statistics their enormous server farms have collected – not just statistics about links between websites, but also statistics about changes in user activity. By watching the world so closely, and by organising and analysing the information they find in it, Google are perhaps in a unique position to identify and improve new mobile services.

Just as Google has benefited from watching the world, the rest of the industry can benefit from watching Google. Happily, there’s already a great deal of information available about how Google operates. Anyone concerned about whether Google might eat their lunch can become considerably wiser from taking the time to read some of the fascinating books that have been written about both the successes and (yes) the failures of this company:

I finished reading the Stross book a couple of weeks ago. I found it an engrossing easy-to-read account of many up-to-date developments at Google. It confirms that Google remains an utterly intriguing company:

  • For example, one thought-provoking discussion was the one near the beginning of the book, about Google, Facebook, and open vs. closed;
  • I also valued the recurring theme of “algorithm-driven search” vs. “human-improved search”.

It was particularly interesting to read what Stross had to say about some of Google’s failures – eg Google Answers and Google Video (and arguably even YouTube), as a balance to its better-known string of successes. It’s a reminder that no company is infallible.

Throughout most of the first ten years of Symbian’s history, commentators kept suggesting that it was only a matter of time before the mightiest software company of that era – Microsoft – would sweep past Symbian in the mobile phone operating system space (and, indeed, would succeed – perhaps at the third attempt – in every area they targeted). Nowadays, commentators often suggest the same thing about Google’s Android solution.

Let’s wait and see. And in any case, I personally prefer to explore the collaboration route rather than the head-on compete route. Just as Microsoft’s services increasingly run well on Symbian phones, Google’s services can likewise flourish there.

1 July 2008

Win-win: how the Symbian Foundation helps Google to win

Filed under: collaboration, Google, RIM — David Wood @ 9:29 am

Olga Kharif of Business Week has found an interesting new angle on the Symbian Foundation announcement, in her article “How Nokia’s Symbian Move Helps Google“:

Nokia rocked the wireless industry June 24 with news it would purchase the portion of Symbian, a maker of mobile-phone software, that it didn’t already own—and then give away the software for nothing. …

But Nokia’s move may play right into Google’s hands, by helping to nurture a blossoming of the mobile Web and spur demand for all manner of cell-phone applications—and most important, the ads sold by Google. “There’s nothing to say that this isn’t what Google’s plan was all along,” says Kevin Burden, research director, mobile devices at consultancy ABI Research. “They might have wanted a more open device environment anyway. This might have been Google’s end game.”

My comment on this analysis is: why does it need to be a bad thing for Nokia and Symbian, if the outcome has benefits for Google? If Google wins (by being able to sell more ads on mobile phones than before), does it mean that Nokia and Symbian lose? I think not. I prefer to see this as being mutually beneficial.

The truth is, many of the companies who provide really attractive applications and services for Symbian-powered phones are both complementors and competitors of Symbian:

  • RIM provide the first-class BlackBerry email service that runs on my Symbian-powered Nokia E61i and which I use virtually every hour I’m awake; they also create devices that run their own operating system, and which therefore compete with Symbian devices;
  • Google, as well as working on Android, provide several of the other mobile applications that I use heavily on my E61i, including native Google Maps and native Google Search.

If companies like RIM and Google are able, as a result of the Symbian Foundation and its unification of the currently separate Symbian UIs (not to mention the easier accessibility of the source code), to develop new and improved applications for Symbian devices more quickly than before – then it will increase the attractiveness of these devices. RIM and Google (and there are many others too!) will benefit from increased services revenues which these mobile apps enable. Symbian and the various handset manufacturers who use the Symbian platform will benefit from increased sales and increased usage of the handsets that contain these attractive new applications and services. Win-win.

I see two more ways in which progress by any one of the open mobile operating systems (whether Android or the Symbian Platform, etc) boosts the others:

  1. The increasing evident utility of the smartphones powered by any one of these operating systems, helps spread word of mouth among end users that, hey, smartphones are pretty useful things to buy. So next time people consider buying a new phone, they’ll be more likely to seek out one that, in addition to good voice and text, also supplies great mobile web access, push email, and so on. The share of smartphones out of all mobile phones will rise.
  2. Progress of these various open mobile operating systems will help the whole industry to see the value of standard APIs, free exchange of source code, open gardens, and so on. The role of open operating systems will increase and that of closed operating systems will diminish.

In both cases, a rising tide will lift all boats. Or in the words of Symbian’s motto, it’s better to seek collaboration than to seek competition.
