Perhaps the most disturbing thing about Jaron Lanier’s new book “You are not a gadget: a manifesto” is the undue adulation it has received.
For example, here’s what eminent theoretical physicist Lee Smolin says about the book (on its back cover):
Jaron Lanier’s long-awaited book is fabulous – I couldn’t put it down and shouted out Yes! Yes! on many pages.
Smolin goes on:
Lanier is a rare voice of sanity in the debate about the relationship between computers and we human beings. He convincingly shows us that the idea of digital computers having human-like intelligence is a fantasy.
However, when I read it, far from shouting out Yes! Yes! on many pages, the thoughts that repeatedly came to my mind were: No! No! What a misunderstanding! What a ridiculous straw man! How poor! How misleading!
The titles of reviews of Lanier’s book on Amazon.com show lots more adulation:
- A brilliant work of Pragmatic “Techno-Philosophy” (a five-star review)
- Thought provoking and worthy of your time (ditto)
- One of the best books in a long while (ditto)
- A tribute to humanity (ditto)
That last title indicates what is probably going on. Many people feel uneasy that “humanity” is seemingly being stretched, trampled, lost, and reduced, by current changes in our society – including the migration of so much culture online, and the increasing ubiquity of silicon brains. So they are ready to clutch at straws, with the hope of somehow reaffirming a more natural state of humanity.
But this is a bad straw to clutch at.
Interestingly, even one of the five star reviews has to remark that there are significant mistakes in Lanier’s account:
While my review remains positive, I want to point out one major problem in the book. The account of events on p. 125-126 is full of misinformation and errors. The LISP machine in retrospect was a horrible idea. It died because the RISC and MIPS CPU efforts on the west coast were a much better idea. Putting high-level software (LISP) into electronics was a bad idea.
Stallman’s dysfunctional relationship with Symbolics is badly misrepresented. Stallman’s licence was not the first or only free software licence…
My own list of the misinformation and errors in this book would occupy many pages. Here’s just a snippet:
1. The iPhone and UNIX
Initially, I liked Lanier’s account of the problems caused by lock-in. But then (page 12) he complains, incredibly, that some UI problems on the iPhone are due to the fact that the operating system on the iPhone has had to retain features from UNIX:
I have an iPhone in my pocket, and sure enough, the thing has what is essentially UNIX in it. An unnerving element of this gadget is that it is haunted by a weird set of unpredictable user interface delays. One’s mind waits for the response to the press of a virtual button, but it doesn’t come for a while. An odd tension builds during that moment, and easy intuition is replaced by nervousness. It is the ghost of UNIX, still refusing to accommodate the rhythms of my body and my mind, after all these years.
As someone who has been involved for more than 20 years with platforms that enable UI experience, I can state categorically that delays in UI can be addressed at many levels. It is absurd to suggest that a hangover from UNIX days means that all UIs on mobile devices (such as the iPhone) are bound to suffer unnerving delays.
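The standard remedy, applicable at several layers of the stack, is to keep the UI thread free: acknowledge the input immediately and push slow work onto a worker. A minimal sketch of that pattern (illustrative Python, not actual iPhone code — the function names are my own):

```python
import threading
import queue
import time

def slow_work(payload):
    """Simulate expensive work (network, database, layout)."""
    time.sleep(0.2)
    return payload.upper()

results = queue.Queue()

def on_button_press(payload):
    """The event handler: returns almost instantly, so the UI stays
    responsive; the heavy lifting happens on a worker thread."""
    worker = threading.Thread(target=lambda: results.put(slow_work(payload)))
    worker.start()
    return "pressed"  # immediate visual feedback

start = time.time()
ack = on_button_press("hello")
handler_time = time.time() - start
result = results.get()  # delivered later, without ever blocking the handler
print(ack, result, f"handler took {handler_time * 1000:.1f} ms")
```

The point is simply that perceived latency is a property of how the event loop is architected, not an inescapable inheritance from the underlying kernel.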
2. Obsession with anonymous posters
Time and again Lanier laments that people are encouraged to post anonymously to the Internet. In his telling, because people become anonymous, they are de-humanised.
My reaction:
- It is useful that the opportunity for anonymous posting exists;
- However, in the vast bulk of the discussions in which I participate, most people sign their names, and links are available to their profiles;
- Rather than a sea of anonymous interactions, there’s a sea of individuals ready to become better known, each with their own fascinating quirks and strengths.
3. Lanier’s diatribe against auto-layout features in Microsoft Word
Lanier admits (page 27) that he is “all for the automation of petty tasks” by software. But (like most of us) he’s had the experience where Microsoft Word makes a wrong decision about an automation it presumes we want to do:
You might have had the experience of having Microsoft Word suddenly determine, at the wrong moment, that you are creating an indented outline…
This type of design feature is nonsense, since you end up having to do more work than you would otherwise in order to manipulate the software’s expectations of you.
Most people would say this just shows that there are still bugs in the (often useful) auto-layout feature. Not so Lanier. Instead, incredibly, he imputes a sinister motivation onto the software’s designers:
The real [underlying] function of the feature isn’t to make life easier for people. Instead, it promotes a new philosophy: that the computer is evolving into a life-form that can understand people better than people can understand themselves.
Lanier insists there’s a dichotomy: either a software designer is trying to make tasks easier for users, or the software designer has views that computers will, one day, be smarter than humans. Why would the latter view (if held) mean the former cannot also be true? And why is “this type of design feature” nonsense?
4. Analysis of Alan Turing
Lanier’s analysis (and psycho-analysis) of AI pioneer Alan Turing is particularly cringe-worthy, and was the point where, for me, the book lost all credibility.
For example, Lanier tries to score points against Turing by commenting (page 31) that:
Turing’s 1950 paper on the test includes this extraordinary passage: “In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates”.
However, referring to the context (Turing’s paper is available online here) indicates that Turing is, in the quoted passage, in the midst of seeking to engage with a number of different objections to his main hypothesis. Each time, he seeks to enter into the mindset of people who might oppose his thinking. This extract is from the section “The Theological Objection”. Immediately after the section highlighted by Lanier, Turing’s paper goes on to comment:
However, this is mere speculation. I am not very impressed with theological arguments whatever they may be used to support. Such arguments have often been found unsatisfactory in the past. In the time of Galileo it was argued that the texts, “And the sun stood still . . . and hasted not to go down about a whole day” (Joshua x. 13) and “He laid the foundations of the earth, that it should not move at any time” (Psalm cv. 5) were an adequate refutation of the Copernican theory. With our present knowledge such an argument appears futile. When that knowledge was not available it made a quite different impression.
Given a choice between the analytic powers of Turing and those of Lanier, I would pick Turing very nearly 100% of the time.
5. Clay Shirky and the latent cognitive surplus
Lanier’s treatment of Clay Shirky’s ideas is equally deplorable – sleight of hand again distorts the original message. It starts off fine, with Lanier quoting an April 2008 article by Shirky:
And this is the other thing about the size of the cognitive surplus we’re talking about. It’s so large that even a small change could have huge ramifications. Let’s say that everything stays 99 percent the same, that people watch 99 percent as much television as they used to, but 1 percent of that is carved out for producing and for sharing. The Internet-connected population watches roughly a trillion hours of TV a year. That’s about five times the size of the annual U.S. consumption. One per cent of that is 100 Wikipedia projects per year worth of participation.
I think that’s going to be a big deal. Don’t you?
In Shirky’s view, there’s lots of time available for people to apply to creative tasks, if only they would spend less time watching sitcoms on TV. Lanier pokes nauseating fun at this suggestion, but only (page 49) by recasting the time available as “seconds of salvaged” time. (Who mentioned seconds? Surely Shirky is talking about people applying themselves for longer than seconds at a time.) Lanier labours his point with ridiculous hyperbole:
How many seconds of salvaged erstwhile television time would need to be harnessed to replicate the achievements of, say, Albert Einstein? It seems to me that even if we could network all the potential aliens in the galaxy – quadrillions of them, perhaps – and get each of them to contribute some seconds to a physics wiki, we would not replicate the achievements of even one mediocre physicist, much less a great one.
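For what it’s worth, Shirky’s arithmetic checks out on his own figures: a trillion hours of TV watched per year, and his estimate (from the same April 2008 talk) of roughly 100 million hours of cumulative human effort invested in Wikipedia. A quick check:

```python
# Shirky's own figures from his April 2008 talk
tv_hours_per_year = 1_000_000_000_000  # ~a trillion hours of TV per year
wikipedia_hours = 100_000_000          # his rough estimate of total effort in Wikipedia

carved_out = 0.01 * tv_hours_per_year  # divert just 1% of viewing time
wikipedias_per_year = carved_out / wikipedia_hours

print(wikipedias_per_year)  # → 100.0, i.e. "100 Wikipedia projects per year"
```

The disagreement with Lanier, then, is not about the sums but about whether diverted attention can be aggregated into sustained creative work at all.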
6. Friends and Facebook friends
Lanier really seems to believe (page 53) that people who use Facebook cannot distinguish between “Facebook friends” and “real world friends”. He should talk more often to people who use Facebook, to see if they really are so “reduced” as he implies.
7. Lack of appreciation for security researchers
Lanier also rails (page 65) against people who investigate potential security vulnerabilities in software systems.
It seems he would prefer us all to live in ignorance about these potential vulnerabilities.
8. The Long Tail and individuals
Lanier cannot resist an ill-warranted attack on the notion of the long tail. Describing a proposal of his own for how authors and artists could be rewarded for Internet usage of their material, Lanier makes the bizarre comment (page 101):
Note that this is a very different idea from the long tail, because it rewards individuals rather than cloud owners
Where did the assumption come from that writers who describe the Long Tail are only interested in rewarding “cloud owners” such as Amazon and Google?
9. All generations from Generation X onwards are somnolent
Lanier bemoans the blandness of the youth (page 128):
At the time that the web was born, in the early 1990s, a popular trope was that a new generation of teenagers, raised in the conservative Reagan years, had turned out exceptionally bland. The members of “Generation X” were characterised as blank and inert. The anthropologist Steve Barnett compared them to pattern exhaustion, a phenomenon in which a culture runs out of variations of traditional designs in their pottery and becomes less creative.
A common rationalisation in the fledgling world of digital culture back then was that we were entering a transitional lull before a creative storm – or were already in the eye of one. But the sad truth is that we were not passing through a momentary lull before a storm. We had instead entered a persistent somnolence, and I have come to believe that we will only escape it when we kill the hive.
My experience is at radical odds with this. Through my encounters with successive years of graduate recruits at Symbian, I found many examples, each year, of young people full of passion, verve, and creativity.
The cloud which Lanier fears so much doesn’t stifle curiosity and creativity, but provides many means for people to develop a fuller human potential.
10. Open Source and creativity
Lanier complains that Open Source – and, more generally, Web 2.0 collaborative processes – has failed to produce anything of real value. All it can do, he says (page 122 – and repeated numerous times elsewhere), is to imitate: Linux is a copy of UNIX and Wikipedia is a copy of Encyclopaedia Britannica.
But what about the UI creativity of Firefox (an open source web browser, that introduced new features ahead of the Microsoft alternative)?
How about the creativity of many of the applications on mobile devices, such as the iPhone, that demonstrate mashups of information from diverse sources (including location-based information)?
Even the claim that Wikipedia is derivative of Britannica misses the point that material in Wikipedia is updated so quickly. Yes, there’s occasional unreliability, but people soon learn how to cross-check it.
It goes on…
For each point I’ve picked out above, there are many others I could have shared as well.
Lanier is speaking this evening (Monday 1st February) at London’s RSA. The audience is usually respectful, but can ask searching questions. This evening, if the lecture follows the same lines as the book, I expect to see more objections than usual. However, I also expect there will be some in the audience who jump at the chance to defend humanity from the perceived incursions from computers and AI.
For a wider set of objections to Lanier’s ideas – generally expressed much more politely than my comments above – see this compendium from Edge.
My own bottom line view is that technology will significantly enhance human experience and creativity, rather than detract from it.
To be clear, I accept that there are good criticisms that can be made of the excesses of Web 2.0, open source, and so on. For example, the second half of Nick Carr’s book “The Big Switch: Rewiring the World, from Edison to Google” is a good start. (Andrew Orlowski produced an excellent review of Carr’s book, here.) Lanier’s book is not a good contribution.
Hi David,
That’s a really excellent “fisking” you’ve done there : http://en.wikipedia.org/wiki/Fisking
> Many people feel uneasy that “humanity” is seemingly being stretched, trampled, lost, and reduced, by current changes in our society
This is hardly news – Alvin Toffler’s Future Shock had a huge impact on me when I read it, even though I’m rather impatient to arrive in “the future” rather than fear it. I suggest it’s likely that similar books have been written in every time period.
Comment by David Durant — 2 February 2010 @ 1:07 am
Hi David,
Fisking isn’t my preferred style 😮
I’d much rather collaborate than compete…
But I was particularly annoyed at Lanier’s sustained insistence (repeated in his RSA lecture last night) that anyone who thinks that humans are machines is “loopy”.
It’s easy to say that humans are not “gadgets” and shouldn’t be treated as such. However, machines can be very much more sophisticated than gadgets. It’s obscurantist to view (like Lanier) human nature as somehow beyond scientific investigation and technological improvement. This fits, alas, in a very long line of bad tradition.
I’m all in favour of emphasising and elevating humanity. But that’s fully compatible with recognising our mechanical and computational infrastructure!
Comment by David Wood — 2 February 2010 @ 2:53 pm