Friday, 24 August 2012

Enterprise 2.0: New Collaborative Tools for Your Organization's Toughest Challenges: Andrew McAfee

Why has the Facebook revolution made nary a dent in corporate culture - and how can that be changed? Andrew McAfee has some not-so-starry-eyed answers.
I have long been entranced by the potential of the collaborative internet and have, as a result, been trying my darndest to evangelise its benefits in my professional life - no small challenge, involving as it does a bunch of lawyers inhabiting the more cobwebbed crannies in the infrastructure of a bank. To that end I've set up wikis, libraries, discussion forums and SharePoint sites, all, for the most part, to no avail. Old habits die hard in any circumstance, but amongst moribund lawyers they live on like zombies.

In recent times I have taken to trying to understand, or at any rate deduce, whether it is simply a flaw in the design of our particular distributed system, or more a problem of the psychological configuration of the communal working environment, or some unholy, un-dead combination of the two, that renders my efforts barren. Given that my current place of toil is basically one gigantic supercomputer - part human, part machine - and therefore, you would think, ripe for the benefits enterprise collaboration can bring, it is frustrating, to say the least, to discover how immune it appears to be to those very charms.

In my studies I have consulted learned (and excellent) theoretical volumes like Lawrence Lessig's Code: Version 2.0 and Yochai Benkler's The Wealth of Networks: How Social Production Transforms Markets and Freedom, and populist ones like Chris Anderson's The Long Tail and Don Tapscott's Wikinomics: How Mass Collaboration Changes Everything, and all tell me, with varying degrees of erudition and insight, that the new world order is at hand.

Except, for all my efforts and enthusiasm, it isn't. Of the 800-odd articles in our wiki, I have personally authored, in their entirety, about 90 percent. I can't persuade anyone but me to use a discussion board (discussing things with myself palls after a while), and while SharePoint has been taken up with some gusto, it has invariably been used stupidly, without thought for the collaborative opportunities it offers. Everyone sets up their own SharePoint site, protects it like a fiefdom, and ignores all others.

I have been looking for the book that explains these challenges of the new world order, and how this entropy can be fought. Andrew McAfee's Enterprise 2.0 might just be that book.

McAfee is a believer, and a convert from a position of scepticism, but, unlike (for example) Chris Anderson, he is not so starry-eyed that he can't apprehend the challenges presented. McAfee takes us through four case studies (all thrillingly on point for me!) of business executives trying, and struggling, to collaborate using existing tools. McAfee maps these efforts (namely technological solutions) to his own sociological analysis, which differentiates groups in terms of the strength of the existing ties between the individuals seeking to connect: there are strong bonds (as between direct colleagues in a geographically centralised team), weaker bonds (as between fellow employees of a wider organisation) and, right out at the limit, no particular bonds at all - the Wikipedia example. Different types of emergent social software platforms (ESSPs) work better for different types of community bond. McAfee also deals with the "long haul" challenge, acknowledging that, particularly where there is an "endowment" collaboration system to overcome (email being the most obvious), or where the collaborative opportunity is "above the flow" rather than in it (i.e., collaboration is a voluntary action completed after the "compulsory" work is done), the change in behaviour will take a long time - so stick with it (encouraging stuff for this lone wiki collaborator!).

Ultimately McAfee doesn't have the answers - nor should we expect him to - but his analysis is thoughtful, credible (as opposed to the more frequent "credulous") and optimistic: Enterprise 2.0 needs evangelists and "prime movers" who are engaged and prepared to stick with it. This is well recommended as a volume for those wanting a practical view of the enterprise benefits of social networking and Web 2.0.

Wednesday, 15 August 2012

Film Shorts: Alien (1979)

After Ridley Scott's widely anticipated, commonly disappointing Prometheus, it was interesting to go back to the original Alien, a film I remember being utterly terrified by when I first saw it.

The "Space Jockey" - and H. R. Giger's wondrous machine-organic phallic designs - get about ten minutes' airtime. They are never explained. Giger's hot, steaming, streaming dirty evolution is echoed in the wet and grimy bronze interiors of the Nostromo. Once aboard the ship, the film is an out-and-out thriller - there's a brief dalliance with HAL-style computer malevolence in the form of Ash, but otherwise the film has no intellectual pretensions.

Would that were so for Prometheus, which instead makes the same mistake Matrix: Reloaded made after The Matrix: rather than simply hinting ambiguously at great profundity, letting an awestruck audience infer a grand metaphorical scheme - a magician's trick of misdirection - it tries to hit you between the eyes with it. To carry that off you do need real, well, profundity. 

Sunday, 6 May 2012

Film Shorts: Melancholia (2011)

This handsome film opens with a wondrous series of slow-motion tableaux, but is quickly bogged down afterwards by a long set piece at the most fractious wedding reception ever caught on celluloid.

Kirsten Dunst and Charlotte Gainsbourg play polar-opposite sisters - Dunst the melancholic, Gainsbourg the neurotic - neither given to doing anything cheerily or with any great velocity. John Hurt, Kiefer Sutherland and Alexander Skarsgård lend weight but not likeability to the opening act. The second part of the movie takes an apocalyptic turn and, ironically, improves in mood, but the whole thing was immeasurably enhanced, I found, by watching at 1.5x speed on the Blu-ray player. I love how technology puts power in the hands of the viewer like that.

Monday, 23 April 2012

Worthless Aspirational Quotes: Aiming for the Moon to get to the top of the Tree

"If you aim for the moon you may get to the top of the trees," a witless motivational speaker may tell you, "but if you aim for the top of the trees, you may never get off the ground!" Don't, in other words, choose realistic or sensible goals, and that way your life will work out exactly as you wish!

No. If you are aiming for the moon, and you're serious about it, you'll spend the rest of your life, frustrated and alone, in your basement. Best-case scenario, you'll go and work for NASA, and even then, most likely you won't make it - not even NASA has been to the moon in 40 years. Okay, Richard Branson might be a better bet. Whatever, it won't help you get to the top of a tree.

If you want to get to the top of a tree, climb it, my son, and stop listening to motivational speakers.

You will achieve almost nothing in your life consequent on an outrageous, long-odds punt. Even the few freak Outliers who seem to have (Elvis, Bill Gates and so on) most likely weren't dreaming anything of the sort when they started out.

Instead, meander through the evolutionary design space of your life like the rest of us do, as pygmies in a field of high grass, purblindly stumbling hither and thither, attracted intuitively to what you like and repelled by what you don't. Continually set sensible, achievable goals, achieve them, and after a while you might think it isn't such a miserable existence on mother earth after all.

Friday, 3 February 2012

The End – or the Start – of Ignorance

E. O. Wilson is just the latest biologist to try turning the base metal of scientific induction into the spun gold of existential truth. What is the allure of religious certainty for these folks, and why can't they heed the lessons of their own discipline?

I’ve made the observation before that scientists - especially biologists - make lousy philosophers, and it doesn’t take long for Professor E. O. Wilson - one of evolutionary biology’s most prominent lights - to place himself squarely in that camp.

“No one should suppose,” he asserts, “that objective truth is impossible to attain, even when the most committed philosophers urge us to acknowledge that incapacity. In particular it is too early for scientists, the foot soldiers of epistemology, to yield ground so vital to their mission. ... No intellectual vision is more important and daunting than that of objective truth based on scientific understanding.”

On the other hand not long afterwards, apparently without intending the irony with which the statement overflows, he says, “People are innate romantics, they desperately need myth and dogma.”

None more so, it would seem, than philosophising evolutionary biologists.

Wilson’s Consilience is a long essay on objective truth which - per the above quotation - gratuitously misunderstands what epistemology even is, while at the same time failing to mention (except in passing) any of its most important contributors - the likes of Wittgenstein, Kuhn, Quine, Rorty or even dear old Popper. Instead, Wilson characterises objections to his extreme reductionism as “leftist” thought, including - and I quote - “Afrocentrism, ‘critical’ (i.e., socialist) science, deep ecology, ecofeminism, Lacanian psychoanalysis, Latourian sociology of science and neo-Marxism.”

Ad hominem derision is about the level of engagement you’ll get, and the only concession - a self-styled “salute” to the postmodernists - is “their ideas are like sparks from firework explosions that travel away in all directions, devoid of following energy, soon to wink out in the dimensionless dark. Yet a few will endure long enough to cast light on unexpected subjects.” You could formulate a more patronising disposition, I suppose, but it would take some work.

“You could formulate a more patronising disposition, I suppose, but it would take some work.”

What is extraordinary is that of all scientists, a biologist should be so insensitive to the contingency of knowledge, as this is the exact lesson evolutionary theory teaches: it’s not the perfect solution that survives, but the most effective. There is no “ideal organism”.

In support of his own case, Wilson refers at some length to the chimerical nature of consciousness (taking Daniel Dennett’s not uncontroversial account more or less as read). But there is a direct analogy here: Dennett’s model of consciousness stands in the same relation to the material brain as Wilson’s consilience stands to the physical universe. Dennett says consciousness is an illusion - a trick of the mind, if you like (and rather wilfully double-parks the difficult question “a trick on whom?”).

But by extension, could not consilience also be a trick of the mind? Things look like they’re ordered, consistent and universal because that’s how we’re wired to see them. Our evolutionary development (fully contingent and path-dependent, as even Wilson would agree) has built a sensory apparatus which filters the information in the world in a way which is ever-more effective.  That’s the clever trick of evolutionary development. If it is of adaptive benefit to apprehend “the world” as a consistent, coherent whole, then as long as that coherent whole accounts effectively for our physiologically meaningful experiences, then its relation to “the truth” is really beside the point.

When I run to catch a cricket ball on the boundary no part of my brain solves differential equations to catch it (I don’t have nearly enough information to do that), and no immutable, unseen cosmic machine calculates those equations to plot its trajectory either. Our mathematical model is a clever proxy, and we shouldn’t be blinded by its elegance or apparent accuracy (though, in point of fact, practically it isn’t that accurate) into assuming it somehow reveals an ineffable truth. This isn’t a new or especially controversial objection, by the way: this was one of David Hume’s main insights - an Enlightenment piece of enlightenment, if you will. As a matter of logic, there must be alternate ways of describing the same phenomena, and if you allow yourself to implement different rules to solve the puzzle, the set of coherent alternative solutions is infinite.

“It is extraordinary that a biologist should be so insensitive to the contingency of knowledge, it being the exact lesson of evolutionary theory.” 

So our self-congratulation at the cleverness of the model we have arrived at (and, sure, it is very clever) shouldn’t be overdone. It isn’t the “truth” - it’s an effective proxy, and there is a world of difference between the two. And there are uncomfortable consequences of taking the apparently harmless step of conflating them.

For one thing, “consilience” tends to dissuade inquiry: if we believe we have settled on an ineffable truth, then further discussion can only confuse and endanger our grip on it. It also gives us immutable grounds for arbitrating against those who hold an “incorrect” view. That is, to hold forth a theory inconsistent with the mainstream “consiliated” view is wasteful and, given its potential to lead us away from the “true” path, may legitimately be suppressed.

You can see this style of reasoning being employed by two groups already: militant religious fundamentalists, and militant atheists. Neither is prepared to countenance the pluralistic, pragmatic (and blindingly obvious) view that there are not just many different *ways* of looking at the world but many different *reasons* for doing so, and each has its own satisfaction criteria. While these opposing fundamentalists go hammer and tongs against each other, their similarities are greater than their differences, and their greatest similarity is that neither fully comprehends, and as a consequence neither takes seriously, the challenge of the “postmodern” strands of thought against which they’re aligned.

Hence, someone like Wilson can have the hubris to say things like: “Yet I think it is fair to say that enough is known to justify confidence in the principle of universal rational consilience across all the natural sciences.”

Try telling that to Kurt Gödel or Bertrand Russell, let alone Richard Rorty or Jacques Derrida.

Monday, 23 January 2012

Business design for the visionary within

Roger Martin almost beats a fine idea into submission in this thought-provoking look at the importance of “design thinking” in business. But, as the recent travails of his own client case studies show, in large organisations appeals to visionary thinking tend to fall upon deaf ears.

This is a short book with some big, and very good, ideas. It could have been shorter still: I felt I’d got the concept from the first chapter, and thereafter Roger Martin does very little with it. This is partly because the idea is self-explanatory, and it’s something you’ll either take to instinctively (if you’re disposed to “design thinking”) or won’t, if you’re not.

Martin’s thesis, broadly stated, is that there are three main “phases” to any business proposition:
  • Mystery: when an intuition nags at an inventor: the germ of a problem (and, more to the point, its solution) suggests itself and there is no orthodox means for solving it - here is the maximum opportunity for those who can (think of a young Ray Kroc thinking “how do I build scale in my hamburger joint?”);
  • Heuristic: when you’ve figured out a potential solution that does the job, but you don’t necessarily understand the full implications, possibilities and boundaries of your solution; and
  • Algorithm: where you fully understand both the problem/opportunity and its solution, and you are able to commoditise and automate it and the only remaining question is efficiency. 
Roger Martin’s presentation is convincing as far as it goes: I dare say the boundaries between the three phases are porous, and Martin is persuasive that there is a reflexive quality to the propositions: the more they are solved, and the more the richness of an offering is stripped to its essential superstructure, the lower the barriers to competition, the slimmer the margins, and the more compelling an entrepreneur’s need to look for more mysteries to solve.

It won’t do, in other words, to solve your mystery, drive it down the “design funnel” as hard and fast as you can, and relentlessly and mindlessly tweak the algorithm to make it run faster. Your own behaviour, if successful enough, will itself present opportunities for others: witness McDonald’s versus, say, Subway or Starbucks.

McDonald’s algorithm stripped away “extraneous” considerations like healthiness, “coolness”, freshness and so on. So Subway was able to differentiate itself on food quality, and Starbucks on the delightful hipness of actually visiting the store (it seems extraordinary in hindsight, doesn’t it?). McDonald’s was forced by its competitors to reverse back up the funnel to consider other offerings.

The idea is intuitive and makes a lot of sense. Particularly in large organisations there is a tendency towards “backward looking” data, regression analyses and the tried and true: “no one ever got fired for buying IBM” was a truism when I was a youngster. But the passage of time illustrates the corollary of that truism as well: no-one revolutionised their business by buying IBM either. And that, says Roger Martin, is what design thinking makes possible.

“ ‘no one ever got fired for buying IBM’ was a truism when I was a youngster. But the passage of time illustrates its corollary: no-one revolutionised their business by buying IBM either.”

It is certainly my experience that large organisations tend to “reliability” rather than “validity” thinking, and are so keen on moving to algorithm stage that they are inclined to skip the “heuristic”.

So, some gripes. Firstly, for a short book with an attractive big idea, it was rather hard to keep focussed on it. Something about Roger’s writing style is disengaging. I’m not entirely sure what it is: partly, I think, he takes a simple idea and beats it to death with self-serving examples (there are extended case studies of Procter & Gamble, Target and Research In Motion, in all of which he was closely involved). RIM in particular seems a poor example: yes, they had a big idea and commoditised it (isn’t that what all successful businesses do?), but their subsequent performance has been underwhelming. They have been unable to withstand the march of the smartphones and, while still the dominant player in the business market, seem to be slowly but surely withering on the vine in the consumer space. (Talk as I write is that RIM is all but a goner, simply awaiting takeover.)

On the other hand, Roger’s take on the underlying philosophy of design and business development is polymath enough to take in pragmatists like Dewey and Charles Sanders Peirce. Being a fan of Richard Rorty and other post-modern philosophers, this went down well with me, and it is a solid basis for the common sense contained in the book: in a contingent, ironic and pragmatic universe, where priorities, economic conditions, consumer preferences and political orthodoxies change like the wind, big, fast, dumb, inflexible machinery seems like a poor suit to be long in. The danger of a relentless preference for algorithms (mechanical, reliable) over heuristics (logical, but requiring interpretation and judgment) seems so blindingly obvious that it’s a wonder so much of corporate enterprise is blind to it. Then again, being a design thinker is not easy: translating your unorthodox point of view for an anally retentive business analyst requires powers of persuasion not all of us have (“use lots of analogies!” Martin cheerfully advises), and you wonder whether design thinking - utopian an idea though it might be - will generally get anywhere near the beating heart of your average multinational.


Sunday, 22 January 2012


When the universe wakes up, will it smell the coffee? 
Not everyone is as certain as Ray Kurzweil that the End of History is at hand.

L'observatoire de St-Véran by Сергей

JULIAN JAYNES rounds out his wonderful The Origins of Consciousness in the Breakdown of the Bicameral Mind with a sanguine remark that the idea of science is rooted in the same impulse that drives religion: the desire for "the Final Answer, the One Truth, the Single Cause".

Nowhere is this impulse better illustrated, or the scientific mien so resemblant of a religious one, than in Ray Kurzweil's hymn to forthcoming technology, The Singularity Is Near. For if ever a man were committed overtly - fervently, even - to such a unitary belief, it is Ray Kurzweil. And the sceptics among our number could hardly have asked for a better example of the pitfalls, or ironies, of such intellectual fundamentalism: on the one hand, this sort of essentialism features prominently in the currently voguish denouncements of the place of religion in contemporary affairs, often being claimed as a knock-out blow to the spiritual disposition. On the other, it is too strikingly similar in its own disposition to be anything of the sort. Ray Kurzweil is every inch the millenarian, only dressed in a lab coat and not a habit.

Kurzweil believes that the "exponentially accelerating" "advance" of technology has us well on the way to a technological and intellectual utopia/dystopia (this sort of beauty being, though Kurzweil might deny it, decidedly in the eye of the beholder) where computer science will converge on and ultimately transcend biology and, in doing so, will transport human consciousness into something quite literally cosmic. This convergence he terms the "singularity", a point at which he expects with startling certainty that the universe will "wake up", and many immutable limitations of our current sorry existence (including, he seems to say, the very laws of physics) will simply fall away.

Some, your correspondent included, might wonder whether, this being the alternative, our present existence is all that sorry in the first place.

But not Raymond Kurzweil. This author seems to be genuinely excited about a prospect which sounds rather desolate, bordering on the apocalyptic, in those aspects where it manages to transcend sounding simply absurd. Which isn't often. One thing you could not accuse Ray Kurzweil of is a lack of pluck; but there's a fine line between bravado and foolhardiness which, in his enthusiasm, he may have crossed.

“Kurzweil seems to be genuinely excited about a prospect which sounds desolate, bordering on the apocalyptic, where it manages to transcend sounding simply absurd. Which isn’t often.”

His approach to evolution is a good example. He talks frequently and modishly of the algorithmic nature of evolution, but then makes observations not quite out of the playbook, such as: "the key to an evolutionary algorithm ... is defining the problem. ... in biological evolution the overall problem has always been to survive" and "evolution increases order, which may or may not increase complexity".

But to suppose an evolutionary algorithm has "a problem it is trying to solve" - in other words, a design principle - is to emasculate its very power, namely its facility for explaining how a sophisticated phenomenon comes about *without* a design principle. Evolution works because organisms (or genes) have a capacity - not an intent - to replicate themselves. Nor, necessarily, does evolution increase order. It will tend to increase complexity, because the evolutionary algorithm, having no insight, is unable to "perceive" the structural improvements implied in a design simplification. Evolution has no way of rationalising design except by fiat. The adaptation required to replace an overly elaborate design with a more effective but simpler one is, to use Richard Dawkins' expression, an implausible step back down "Mount Improbable". That's generally not how evolutionary processes work: over-engineering is legion in nature; economy of design isn't, really.

This sounds like a picky point, but it gets to the nub of Kurzweil's outlook, which is to assume that technology evolves as biological organisms do - that a laser printer, for example, is a direct evolutionary descendant of the printing press. This, I think, is to superimpose a convenient narrative over a process that is not directly analogous: a laser printer is no more a descendant of a printing press than a mammal is a descendant of a dinosaur. Successor, perhaps; descendant, no. But the "exponential increase in progress" arguments that Kurzweil repeatedly espouses depend for their validity on this distinction.

The "evolutionary process" from woodblock printing to the Gutenberg press, to lithography, to hot metal typing, to photo-typesetting, to the ink jet printer (thanks, Wikipedia!) involves what Kurzweil would call "paradigm shifts" but which a biologist might call extinctions; each new technology arrives, supplements and (usually) obliterates the existing ones, not just by doing the same job more effectively, but - and this is critical - by opening up new vistas and possibilities altogether that weren't even conceived of in the earlier technology - sometimes even at the cost of a certain flexibility inherent in the older technology. That is, development is constantly forking off in un-envisaged, unexpected directions. This plays havoc with Kurzweil's loopy idea of a perfect, upwardly arcing parabola of utopian progress.

It is what I call "perspective chauvinism" to judge former technologies by the standards and parameters set by the prevailing orthodoxy - being that of the new technology. Judged by such an arbitrary standard older technologies will, by degrees, necessarily seem more and more primitive and useless. The fallacious process of judging former technologies by subsequently imposed criteria is, in my view, the source of many of Ray Kurzweil's inevitably impressive charts of exponential progress. It isn't that we are progressing ever more quickly onward, but the place whence we have come falls exponentially further away as our technology meanders, like a perpetually deflating balloon, through design space. Our rate of progress doesn't change; our discarded technologies simply seem more and more irrelevant through time.

Kurzweil may argue that the rate of change in technology has increased, and that may be true - but I dare say a similar thing happened at the time of the agricultural revolution and again in the industrial revolution. We got from Stephenson's Rocket to the diesel locomotive within 75 years; in the subsequent 97 years the train's evolution has been somewhat more sedate. Eventually, the "S" curves Kurzweil mentions flatten out. They clearly aren't exponential, and pretending that an exponential curve might emerge from a conveniently concatenated series of "S" curves seems credulous to the point of disingenuity. This extrapolation into a single "curve of best fit" has heavy resonances of the planetary "epicycle", a famously desperate attempt of Ptolemaic astronomers to fit "misbehaving" data into what the Copernicans would ultimately convince the world was a fundamentally broken model.
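The distinction is easy to check numerically. A minimal sketch (the growth rate and ceiling here are illustrative assumptions, not Kurzweil's figures): a logistic ("S") curve tracks an exponential almost exactly at first, which is what makes the two so easy to confuse early in a technology's life, but then it runs into its ceiling and flattens.

```python
import math

def exponential(t, r=0.5):
    # Unbounded exponential growth at rate r, starting from 1.
    return math.exp(r * t)

def logistic(t, r=0.5, ceiling=100.0):
    # Logistic ("S") growth toward a ceiling, also starting from 1:
    # near-indistinguishable from the exponential early on, flat later.
    return ceiling / (1 + (ceiling - 1) * math.exp(-r * t))

for t in range(0, 25, 5):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  logistic={logistic(t):6.1f}")
```

At t=0 the two curves coincide; by t=20 the exponential is in the tens of thousands while the logistic sits just under its ceiling. Fitting a single exponential through a concatenation of such saturating curves is exactly the "curve of best fit" manoeuvre described above.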

If this is right, then Kurzweil's corollary assumption - that there is a technological nirvana to which we're ever more quickly headed - commits the inverse fallacy of supposing the questions we will ask in the future - when the universe "wakes up", as he puts it - will be exactly the ones we anticipate now. History would say this is a naïve, parochial, chauvinistic and false assumption. 

“Assuming there is a technological nirvana to which we’re inevitably headed is to suppose the questions we will ask when the universe “wakes up” will be the same ones we ask now. History would say this is a parochial and chauvinistic assumption.”

And that, I think, is the nub of it. One feels somewhat uneasy so disdainfully pooh-poohing a theory put together with such enthusiasm and such an energetic presentation of data (and, to be sure, buried in Kurzweil's breathless prose is plenty of learning about technology which, if even half-way right, is fascinating), but that seems to be it. I suppose I am fortified by the near-term predictions made just four years ago, which seem not to have come anything like true just yet:

"By the end of this decade [i.e., by 2010] computers will disappear as distinct physical objects, with displays built in our eyeglasses and electronics woven into our clothing."

On the other hand, I could find scant reference in Kurzweil's book to "cloud computing" or equivalent phenomena like the Berkeley Open Infrastructure for Network Computing project, which spawned schemes like SETI@home. Now here is a rapidly evolving technological phenotype, for sure: hooking up thousands of serially processing computers into a massive parallel network, giving processing power way beyond any technology currently envisioned. It may be that this adaptation means we simply don't need to incur the mental challenge of molecular transistors and so on. There must, at some point, be an absolute limit to miniaturisation; as we approach it, the marginal utility of developing the necessary technology will swan-dive just as the marginal cost ascends to the heavens, whereas the parallel network involves none of those limitations. You can always hook up yet another computer, and every one will increase performance.

I suppose it's easy to be smug as I type on my decidedly physical computer, which shows no signs of being superseded by VR goggles just yet - and we're already two years into the new decade - but the point is that the evolutionary process, path-dependent as it is, is notoriously bad at making predictions (until, that is, the results are in!). You can't predict developments that haven't yet happened. Kurzweil glosses over this shortfall at his theory's cost.