8 November 2016

Agile organisations for agile politics

Filed under: Agile, H+Pedia, politics, Transpolitica, Uncategorized — David Wood @ 6:23 pm

The pace of change in politics over the last twelve months has been breathtaking. It’s possible the change will accelerate further over the next twelve months:

  • Huge dissatisfaction exists with present-day political parties, candidates, and processes
  • Ideas can spread extremely rapidly, due to extensive usage of social media
  • Although many people feel alienated from mainstream politics, they have a hunger for political change.

Growing awareness of forthcoming technological disruptions heightens the general feeling of angst:

  • Technological unemployment (automation) threatens to eliminate whole swathes of jobs, or to reduce the salaries available to people who continue in their current roles
  • Genetic editing and artificial intelligence could enable people to live “better than well” and even become “more than human”, but it’s unclear how widely these benefits will be shared across all sectors of society
  • Technologies such as blockchain and 3D printing raise the possibility of decentralised coordination – coordination with less need for powerful states or corporations
  • Virtual Reality, along with new types of drug, could lead to large-scale disengagement of citizens from mainstream society – with people “tuning in and dropping out” as never before
  • Breakthroughs in fields of energy, nanotech, the Internet of Things, synthetic biology, and self-learning artificial intelligence could result, intentionally or unintentionally, in extremely chaotic outcomes – with recourse to new types of “weapons of mass destruction” (including cyber-terrorism, nano-terrorism, gene-terrorism, and AI-terrorism)
  • Technologies of surveillance could put more power than ever before in the hands of all-seeing, all-manipulating governments and/or corporations
  • Misguided attempts to “geo-engineer” planetary solutions to potential runaway climate change could have devastating unintended consequences for the environment.

In the light of such uncertainty, two skills are becoming more important than ever:

  • The skill of foresight – the anticipation and evaluation of new scenarios, arising from the convergence of multiple developing trends
  • The skill of agility – the capability to change plans rapidly, as unexpected developments take on a life of their own.

An update on the Transhumanist Party of the UK

This context is the background for a significant change in a political party that was formed nearly two years ago – the Transhumanist Party of the UK (TPUK).

As a reminder, here’s a 90 second promotional video for TPUK from April last year:


The messages in that video remain as relevant and important today as when the Party was founded:

The Transhumanist Party – Transcending human limitations

Harnessing accelerating technology:

  • Enabling positive social change and personal freedom,
  • With no-one abandoned,
  • So technology benefits all – not just vested interests.

Sustainable, bright green policies – good for humanity and good for the environment

  • Policies informed by science and evidence,
  • Ideology and divisiveness replaced by rationality and compassion,
  • Risks managed proactively, enabling innovation to flourish.

Regenerative solutions – for body, mind, education, society, and politics

  • Smart automation and artificial intelligence addressing age-old human burdens,
  • Huge personal and financial benefits from preventive medicine and healthy longevity,
  • Politics transcending past biases and weaknesses.

However, despite this vision, and despite an initial flurry of positive publicity (including the parliamentary candidacy of Alex Karran), the Party has made little progress over the last 6-9 months. And in the last couple of weeks, two key members of the Party’s NEC (National Executive Committee) have resigned from the Party.

These resignations arise from the recognition that there are many drawbacks to creating and developing a new political party in the United Kingdom:

  • The “first past the post” electoral system makes it especially difficult for minority parties to win seats in parliament
  • Political parties need to establish a set of policies on a wide range of issues – issues away from the areas of core agreement among members, and where dissension can easily arise
  • The timescales spoken about for full electoral success – potentially up to 25 years – stretch too far into the future, given all the other changes expected in the meantime.

Party executives will each be following their own decisions about the best way to progress the underlying goals of transhumanist politics. Many of us will be redoubling our efforts behind Transpolitica – the think tank which was established at the same time as the Transhumanist Party. The relationship between Transpolitica and TPUK is covered in this FAQ from the Transpolitica website:

Q: What is the relation between Transpolitica and the various Transhumanist Parties?

Transpolitica aims to provide material and services that will be found useful by transhumanist politicians worldwide, including:

  • Transhumanist supporters who form or join parties with the name “Transhumanist Party” in various countries
  • Transhumanist supporters who form other new parties, without using the word “transhumanist” in their party name
  • Transhumanist supporters inside other existing political parties, including mainstream and long-established parties
  • Transhumanist supporters who prefer not to associate closely with any one political party, but who have an interest in political action.

Transpolitica 2016

Transpolitica is hosting a major conference later this year – on 3rd December. It’s a conference with a very practical ambition – to gather and review proposals for “Real world policy changes for a radically better future”. There will be 15 speakers, covering topics in three broad sections:

  • Regulations, health, and transformation
  • Politics, tools, and transformation
  • Society, data, and transformation

Click here for more details, and to register to attend (while tickets are still available).

I’ll be kicking off the proceedings, with a talk entitled “What prospects for better politics?”.


Watch out for more news about the topics being covered by the other speakers.

Note that a focus on devising practical policies for a radically better future – policies which could become the focus of subsequent cross-party campaigns for legislative changes – resonates with an important evolution taking place within the IEET (the Institute for Ethics and Emerging Technologies). As James Hughes (the IEET Executive Director) writes:

I am proposing that the IEET re-focus in a major way, on our website, with our blog, with our community, and in our work, on the explicit project of building a global technoprogressive ideological tendency to intervene in debates within futurism, academe and public policy. While we will remain a nonpartisan nonprofit organization, and will not be endorsing specific candidates, parties or pieces of legislation, we can focus on the broad parameters of the technoprogressive regulatory and legislative agenda to be pursued globally.

Regarding a first concrete project in this new direction, I have in mind our editing a Technoprogressive Policy Briefing Book, comparable to the briefing books of think tanks like the Brookings Institution, AEI, or Heritage Foundation. This project can collect and collaborate with the excellent work done by Transpolitica and other technoprogressive groups and friends. Each policy briefing would state a general issue in a couple of paragraphs, outline the key technoprogressive policy ideas to address the issue, and then list key publications and links to organizations pursuing those policies.

Next steps with the TPUK

As the official Treasurer of the TPUK, and following (as mentioned above) the resignation of both the leader and deputy leader of the Party, it legally falls to me to manage the evolution of the Party in a way that serves the vision of the remaining members. I’m in discussion with the other remaining representatives on the National Executive Committee, and we’ll be consulting members via the Party’s email conferencing systems. The basic principles I’ll be proposing are as follows:

  1. Times of rapid change demand organisational agility, rather than any heavyweight structures
  2. We will retain our radical purpose – the social changes ahead could (and should) be momentous over the next 5-25 years
  3. We will retain our progressive vision, in which technology benefits all – not just vested interests
  4. We will provide support across the spectrum of existing political parties to sympathisers of transhumanist and technoprogressive changes
  5. We will be ready to play a key positive enabling role as the existing political spectrum undergoes its own changes ahead – including the fragmentation of current parties and the creation of new alliances and new initiatives
  6. We will continue to champion the vision of (a.) Harnessing accelerating technology to enable positive social change and personal freedom; (b.) Sustainable, bright green policies – good for humanity and good for the environment; (c.) Regenerative solutions – for body, mind, education, society, and politics
  7. We will aim to provide actionable, practical analyses – of the sort being presented at Transpolitica 2016 – rather than (just) statements of principle
  8. Rather than maintain an expensive infrastructure of our own, we should feed our work into existing systems – such as H+Pedia, Transpolitica, the IEET, and the Transhuman National Committee of the United States
  9. As far as possible, we will remain collaborative rather than divisive
  10. We will hold onto our domain names
  11. We will retain the option to field our own candidates in future elections, in case that turns out to be the most sensible course of action at that time (this means the Party will remain officially registered with the Electoral Commission – at modest cost)
  12. We will offer our donors and members a refund of the payments they have provided the Party within the last six months, in case they feel they no longer support our vision.


1 January 2012

Planning for optimal ‘flow’ in an uncertain world

Filed under: Agile, books, critical chain, flow, lean, predictability — David Wood @ 1:44 pm

In a world with enormous uncertainty, what is the best planning methodology?

I’ve long been sceptical about elaborate planning – hence my enthusiasm for what’s often called ‘agile’ and ‘lean’ development processes.  Indeed, I devoted a significant chunk of my book “Symbian for software leaders – principles of successful smartphone development projects” to comparing and contrasting the “plan is king” approach to an agile approach.

But the passage of time accumulates deeper insight.  Key thinkers in this field now refer to “second generation lean product development”.  Perhaps paramount among these thinkers is the veteran analyst of best practice in new product development, Donald Reinertsen.  I’ve been influenced by his ideas more than once in my career already:

  • In the early 1990s, while I was a software engineering manager at Psion, my boss at the time recommended I read Reinertsen’s “Developing Products in Half the Time“. It was great advice!
  • In the early 2000s, while I was EVP at Symbian, I remember enjoying insights from Reinertsen’s “Managing the Design Factory“.

I was recently pleased to discover Reinertsen has put pen to paper again.  The result is “The Principles of Product Development Flow: Second Generation Lean Product Development“.

The following Amazon.com review of the latest book, by Maurice Hagar, persuaded me to purchase it:

This new standard on lean product and software development challenges orthodox thinking on every side and is required reading. It’s fairly technical and not an easy read but well worth the effort.

For the traditionalist, add to cart if you want to learn:

  • Why prioritizing work “on the basis of project profitability measures like return on investment (ROI)” is a mistake
  • Why we should manage queues instead of timelines
  • Why “trying to estimate the amount of work in queue” is a waste of time
  • Why our focus on efficiency, capacity utilization, and preventing and correcting deviations from the plan “are fundamentally wrong”
  • Why “systematic top-down design of the entire system” is risky
  • Why bottom-up estimating is flawed
  • Why reducing defects may be costing us money
  • Why we should “watch the work product, not the worker”
  • Why rewarding specialization is a bad idea
  • Why centralizing control in project management offices and information systems is dangerous
  • Why a bad decision made rapidly “is far better” than the right decision made late and “one of the biggest mistakes a leader could make is to stifle initiative”
  • Why communicating failures is more important than communicating successes

For the Agilist, add to cart if you want to learn:

  • Why command-and-control is essential to prevent misalignment, local optimization, chaos, even disaster
  • Why traditional conformance to a plan and strong change control and risk management is sometimes preferable to adaptive management
  • Why the economies of scale from centralized, shared resources are sometimes preferable to dedicated teams
  • Why clear roles and boundaries are sometimes preferable to swarming “the way five-year-olds approach soccer”
  • Why predictable behavior is more important than shared values for building trust and teamwork
  • Why even professionals should have synchronized coffee breaks…

Even in the first few pages, I’ve found some cracking good quotes.

Here’s one on economics and “the cost of late changes”:

Our central premise is that we do product development to make money.  This economic goal permits us to use economic thinking and allows us to see many issues with a fresh point of view.  It illuminates the grave problems with the current orthodoxy.

The current orthodoxy does not focus on understanding deeper economic relationships.  Instead, it is, at best, based on observing correlations between pairs of proxy variables.  For example, it observes that late design changes have higher costs than early design changes, and prescribes front-loading problem solving.  This ignores the fact that late changes can also create enormous economic value.  The economic effect of a late change can only be evaluated by considering its complete economic impact.

And on “worship of conformance”:

In addition to deeply misunderstanding variability, today’s product developers have deep-rooted misconceptions on how to react to this variability.  They believe that they should always strive to make actual performance conform to the original plan.  They assume that the benefit of correcting a deviation from the plan will always exceed the cost of doing so.  This places completely unwarranted trust in the original plan, and it blocks companies from exploiting emergent opportunities.  Such behaviour makes no economic sense.

We live in an uncertain world.  We must recognise that our original plan was based on noisy data, viewed from a long time-horizon…  Emergent information completely changes the economics of our original choice.  In such cases, blindly insisting on conformance to the original plan destroys economic value.

To manage product development effectively, we must recognise that valuable new information is constantly arriving throughout the development cycle.  Rather than remaining frozen in time, locked to the original plan, we must learn to make good economic choices using this emerging information.

Conformance to the original plan has become another obstacle blocking our ability to make good economic choices.  Once again, we have a case of a proxy variable, conformance, obscuring the real issue, which is making good economic decisions…

Next, on flow control and the sequencing of tasks:

We are interested in finding economically optimum sequences for tasks.  Current practices use fairly crude approaches to sequencing.

For example, it suggests that if subsystem B depends on subsystem A, it would be better to sequence the design of A first.  This logic optimises efficiency as a proxy variable.  When we consider overall economics, as we do in this book, we often reach different conclusions.  For example, it may be better to develop both A and B simultaneously, despite the risk of inefficient rework, because parallel development can save cycle time.

In this book, our model for flow control will not be manufacturing systems, since these systems primarily deal with predictable and homogeneous flows.  Instead, we will look at lessons that can be learned from telecommunications networks and computer operating systems.  Both of these domains have decades of experience dealing with non-homogeneous and highly variable flows.
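The sequencing economics in that excerpt can be made concrete with a back-of-envelope comparison.  This is my own illustrative sketch, not an example from the book: the function names, durations and cost-of-delay figures are invented for the purpose.

```python
def sequential_cost(dur_a, dur_b, cost_of_delay):
    """Design subsystem A first, then B: no rework risk, but the
    cycle time is the sum of the two durations."""
    return (dur_a + dur_b) * cost_of_delay

def parallel_cost(dur_a, dur_b, cost_of_delay, expected_rework):
    """Develop A and B together: cycle time is the longer of the two,
    plus the expected rework on B caused by late changes in A."""
    return (max(dur_a, dur_b) + expected_rework) * cost_of_delay

# Illustrative numbers: durations in weeks, cost of delay per week
seq = sequential_cost(8, 6, cost_of_delay=50_000)
par = parallel_cost(8, 6, cost_of_delay=50_000, expected_rework=2)
print(seq, par)  # here, parallel wins despite the rework
```

With these numbers the parallel route comes out cheaper; with a high enough expected rework it would not – which is exactly the economic trade-off, rather than the efficiency proxy, that the passage asks us to weigh.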

Finally, on fast feedback:

Developers rely on feedback to influence subsequent choices.  Or, at least, they should.  Unfortunately, our current orthodoxy views feedback as an element of an undesirable rework loop.  It asserts that we should prevent the need for rework by having engineers design things right the first time.

We will present a radically different view, suggesting that feedback is what permits us to operate our product development process effectively in a very noisy environment.  Feedback allows us to efficiently adapt to unpredictability.

To be clear, Reinertsen’s book doesn’t just point out issues with what he calls “current practice” or “orthodoxy”.  He also points out shortcomings in various first generation lean models, such as the “Critical Chain” methodology described in Eliyahu Goldratt’s novel of that name (an application of Goldratt’s “Theory of Constraints”), and Kanban.  For example, in discussing the minimisation of Work In Process (WIP) inventory, Reinertsen says the following:

WIP constraints are a powerful way to gain control over cycle time in the presence of variability.  This is particularly important where variability accumulates, such as in product development…

We will discuss two common methods of constraining WIP: the kanban system and Goldratt’s Theory of Constraints.  These methods are relatively static.  We will also examine how telecommunications networks use WIP constraints in a much more dynamic way.  Once again, telecommunications networks are interesting to us as product developers, because they deal successfully with inherently high variability.
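The connection between WIP and cycle time that underlies all of these methods is Little’s law: average cycle time equals average WIP divided by average throughput.  Here’s a minimal sketch of my own (not from the book) showing why constraining WIP bears directly on cycle time:

```python
def average_cycle_time(avg_wip, avg_throughput):
    """Little's law: average cycle time = average WIP / average throughput."""
    return avg_wip / avg_throughput

# Halving the work in process, at the same throughput, halves cycle time:
print(average_cycle_time(20, 2.0))  # 20 items in process, 2 done per day -> 10.0 days
print(average_cycle_time(10, 2.0))  # -> 5.0 days
```

The law holds for any stable queueing system, which is why a WIP constraint is such a direct lever on cycle time even when individual task durations remain highly variable.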

Hopefully that’s a good set of tasters for what will follow!

29 January 2010

A strategy for mobile app development

Filed under: Agile, applications, consulting, fragmentation, mashup* event, mobile web — David Wood @ 12:15 am

The mashup* event in London’s Canary Wharf district yesterday evening – hosted by Ogilvy – addressed the question,

  • Apps: What’s your strategy?

The meeting was described as follows:

This event will help people in strategic marcomms roles understand the key challenges with respect to apps and identify the building blocks of an app strategy:

  • What are the platform choices?
  • What are the app store choices?
  • What devices should you support? …

mashup* is bringing together several industry experts and specialist developers to help demystify, clarify and explain the issues around the rapidly emerging Apps channel…

The event was sold out, and the room was packed.  I didn’t hear anyone question the need for companies to have a mobile strategy.  Nowadays, that seems to be taken for granted.  The hard bit is to work out what the strategy should be.

One of the speakers, Charles Weir of Penrillian, gave a stark assessment of the difficulty in writing mobile apps:

  • For wide coverage of different devices, several different programming systems need to be used – apart from those (relatively few) cases where the functionality of the app can be delivered via web technology;
  • Rather than the number of different mobile platforms decreasing, the number is actually increasing: fragmentation is getting worse;
  • Examples of relatively new mobile platforms include Samsung’s bada and Nokia’s Maemo.

One mobile strategy is to focus on just one platform – such as the Apple iPhone.  Another strategy is to prioritise web-based delivery – as followed by another speaker, Mark Curtis, for the highly-successful Flirtomatic app.  But these restrictions may be unacceptable to companies who:

  • Want to reach a larger number of users (who use different devices);
  • Want to include richer functionality in their app than can be delivered via standard mobile browsers.

So what are the alternatives?

If anything, the development situation is even more complex than Charles described it:

  • Mobile web browsing suffers from its own fragmentation – with different versions of web browsers being used on different devices, and with different widget extensions;
  • Individual mobile platforms can have multiple UI families;
  • Different versions of a single mobile platform may be incompatible with each other

The mobile industry is aware of these problems, and is pursuing solutions on multiple fronts – including improved developer tools, improved intermediate platforms, and improved management of compatibility.  For example, there is considerable hope that HTML 5.0 will be widely adopted as a standard.  However, at the same time as solutions are found, new incompatibilities arise too – typically for new areas of mobile functionality.

The suggestion I raised from the floor during the meeting is that companies ought in general to avoid squaring up to this fragmentation.  Instead, they should engage partners who specialise in handling this fragmentation on behalf of clients.  Fragmentation is a hard problem, which won’t disappear any time soon.  Worse, as I said, the nature of the fragmentation changes fairly rapidly.  So let this problem be handled by expert mobile professional services companies.

This can be viewed as a kind of “mobile apps as a service”.

These professional services companies could provide, not only the technical solutions for a number of platforms, but also up-to-date impartial advice on which platforms ought to be prioritised.  Happily, the number of these mobile-savvy professional services companies (both large and small) is continuing to grow.

My suggestion received broad general support from the panel of speakers, but with one important twist.  Being a big fan of agile development, I fully accept this twist:

  • The specification of successful applications is rarely fixed in advance;
  • Instead, it ought to evolve in the light of users’ experience with early releases;
  • The specification will therefore improve as the project unfolds.

This strongly argues against any hands-off outsourcing of mobile app development to the professional services company.  Instead, the professional services company should operate in close conjunction with the domain experts in the original company.  That’s a mobile application strategy that makes good sense.

1 October 2008

The student syndrome

Filed under: Agile, critical chain, Essay contest, predictability — David Wood @ 5:13 pm

Entries for Symbian’s 2008 Student Essay Contest have just closed. The deadline for submission of entries was midnight (GMT) on 30 September 2008.

The contest has been advertised since June. What proportion of all the entries do you suppose were submitted in the final six hours before the deadline expired? (Bear in mind that, out of a total competition duration of more than three months, six hours is about 1/400 of the available time.)

I’ll give the answer at the end of this article. It surprised me – though I ought to have anticipated the outcome. After all, for many years I’ve been telling people about “The Student Syndrome”.

I became familiar with the concept of the student syndrome some years ago, while reading Eliyahu Goldratt’s fine business-oriented novel “The Critical Chain“:

Like all Goldratt’s novels, Critical Chain mixes human interest with some intriguing ways of analysing business-critical topics. The ideas in these books had a big influence on the evolution of my own views about how to incorporate responsiveness and agility into large software projects where customers are heavily reliant on the software being delivered at pre-agreed dates.

Here’s what I said on the topic of “variable task estimates” in the chapter “Managing plans and change” in my own 2005 book “Symbian for software leaders“:

A smartphone project plan is made up from a large number of estimates for how long it will take to complete individual tasks. If the task involves novel work, or novel circumstances, or a novel integration environment, you can have a wide range of estimates for the length of time required.

It’s similar to estimating how long you will take to complete an unfamiliar journey in a busy city with potentially unreliable transport infrastructure. Let’s say that, if you are lucky, you might complete the journey in just 20 minutes. Perhaps 30 minutes is the most likely time duration. But in view of potential traffic hold-ups or train delays, you could take as long as one hour, or (in case of underground train derailments) even two hours or longer. So there’s a range of estimates, with the distribution curve having a long tail on the right hand side: there’s a non-negligible probability that the task will take at least twice as long as the individual most likely outcome.

It’s often the same with estimating the length of time for a task within a project plan.
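This kind of right-skewed distribution can be sketched with a small simulation.  The lognormal shape and its parameters are my illustrative choices here, not something from the book – but they reproduce the journey example’s character: a mode near 30 minutes and a long tail to the right.

```python
import random

def simulate_task_durations(n=100_000, seed=42):
    """Draw right-skewed task durations (in 'minutes').

    Lognormal parameters chosen so the mode is close to 30,
    matching the journey example, with a long right tail.
    """
    rng = random.Random(seed)
    mu, sigma = 3.8, 0.6  # mode = exp(mu - sigma**2), roughly 31
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

durations = simulate_task_durations()
most_likely = 30.0  # the single 'most likely' estimate
share = sum(d >= 2 * most_likely for d in durations) / len(durations)
print(f"P(duration >= 2x the most likely estimate) ~ {share:.0%}")
```

Under these (invented) parameters, a substantial fraction of journeys take at least twice the most likely time – the “non-negligible probability” referred to above.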

Now imagine that the company culture puts a strong emphasis on fulfilling commitments, and never missing deadlines. If developers are asked to state a length of time in which they have (say) 95% confidence they will finish the task, they are likely to give an answer that is at least twice as long as the individual most likely outcome. They do so because:

  • Customers may make large financial decisions dependent on the estimate – on the assumption that it will be met;
  • Bonus payments to developers may depend on hitting the target;
  • The developers have to plan on unforeseen task interference (and other changes);
  • Any estimate the developers provide may get squashed down by aggressive senior managers (so they’d better pad their estimate in advance, making it even longer).

Ironically, even though such estimates are designed to be fulfilled around 95% of the time, they typically end up being fulfilled only around 50% of the time. This fact deserves some careful reflection. Even though the estimates were generous, it seems (at first sight) that they were not generous enough. In fact, here’s what happens:

  • In fulfilment of “Parkinson’s Law”, tasks expand to fill the available time. Developers can always find ways to improve and optimise their solutions – adding extra test cases, considering alternative algorithms and generalisations, and so forth;
  • Because there’s a perception (in at least the beginning of the time period) of there being ample time, developers often put off becoming fully involved in their tasks. This is sometimes called “the student syndrome”, from the observation that most students do most of the preparation for an exam in the time period just before the exam. The time lost in this way can never be regained;
  • Because there’s a perception of there being ample time, developers can become involved in other activities at the same time. However, these other activities often last longer than intended. So the developer ends up multi-tasking between two (or more) activities. But multi-tasking involves significant task setup time – time to become deeply involved in each different task (time to enter “flow mode” for the task). So yet more time is wasted;
  • Critically, even when a task is ready to finish earlier than expected, the project plan can rarely take advantage of this fact. The people who were scheduled for the next task probably aren’t ready to start it earlier than anticipated. So an early finish by one task rarely translates into an early start by the next task. On the other hand, a late finish by one task inevitably means a late start for the next task. This asymmetry drives the whole schedule later.

In conclusion, in a company whose culture puts a strong emphasis upon fulfilling commitments and never missing deadlines, the agreed schedules are built from estimates up to twice as long as the individually most likely outcome, and even so, they often miss even these extended deadlines…
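The asymmetry described above – early finishes wasted, late finishes propagated – can be demonstrated with a small simulation.  The distribution and parameters are my own illustrative choices, not figures from the book:

```python
import random

def simulate_chain(n_tasks=10, trials=20_000, seed=1):
    """A chain of tasks, each planned at its median duration (10 units).

    A task starts at the later of its scheduled start and its
    predecessor's finish: early finishes cannot be exploited, while
    late finishes push everything after them later.
    """
    rng = random.Random(seed)
    planned = 10.0
    late = 0
    for _ in range(trials):
        clock = 0.0
        for i in range(n_tasks):
            start = max(i * planned, clock)  # the asymmetry lives here
            # right-skewed durations, with median exactly 10
            duration = rng.lognormvariate(2.302585, 0.5)
            clock = start + duration
        if clock > n_tasks * planned:
            late += 1
    return late / trials

print(f"Share of projects finishing late: {simulate_chain():.0%}")
```

Even though each task meets its planned duration half the time, the great majority of simulated projects finish late – the same counter-intuitive outcome described above for estimates designed to be met 95% of the time.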

This line of analysis is one I’ve run through scores of times, in discussions with people, in the last four or five years. It feeds into the argument that the best way to ensure customer satisfaction and predictable delivery is, counter-intuitively, to focus more on software quality, interim customer feedback, agile project management, self-motivated teams, and general principles of excellence in software development, than on schedule management itself.

It’s in line with what Steve McConnell says,

  • IBM discovered 20 years ago that projects that focused on attaining the shortest schedules had high frequencies of cost and schedule overruns;
  • Projects that focused on achieving high quality had the best schedules and the highest productivities.

Symbian’s experience over many years bears out the same conclusion. The more we’ve focused on achieving high quality, the better we’ve become with both schedule management and internal developer productivity.

As for the results of the student syndrome applied to the Symbian Essay Contest:

  • 54% of the essays submitted to the competition were received in the final six hours (approximately the final 1/400 of the time available)
  • Indeed, 16% of the essays submitted were received in the final 60 minutes.

That’s an impressively asymmetric distribution! (It also means that the competition judges will have to work harder than they had been expecting, right up to the penultimate day of the contest…)

13 July 2008

A picture is worth a thousand words: Enterprise Agile

Filed under: Agile, communications, waterfall — David Wood @ 8:44 pm

Communication via words often isn’t enough. You generally need pictures too.

For example, in seeking to explain to people about the merits of Agile over more traditional, “plan-based” software development methods, I’ve often found excerpts from the following sequence of pictures to be useful:

The last two pictures in this series are an attempt to show how Agile can be applied in multiple layers in the more complex environment of large-scale (“enterprise-scale”) software projects. Of course, it’s particularly challenging to gain the benefits of Agile in these larger environments.

I drew these diagrams (almost exactly 12 months ago) after having read fairly widely in the Agile literature. So these diagrams draw upon the insights of many Agile advocates. Someone who influenced me more than most was Dean Leffingwell, author of the easy-to-read yet full-of-substance book “Scaling Software Agility: Best practices for large enterprises” that I’ve already mentioned in this blog. I’d also like to highlight the “How to be Agile without being Extreme” course developed and delivered by Construx as being particularly helpful for Symbian.

Dean has carried out occasional training and consulting engagements for Symbian over the last twelve months. One outcome of this continuing dialog is an impressive new picture, which tackles many issues that are omitted by simpler pictures about Agile. The picture is now available on Dean’s blog:

If the picture intrigues you, I suggest you pay close attention to the next few posts that Dean makes, where he promises to provide annotations to the different elements. This could be the picture that generates many thousands of deeply insightful words…

Footnote: I’ve long held that Open Source is no panacea for complex software projects. If you aren’t world class in software development skills such as compatibility management, system architecture review, modular design, overnight builds, peer reviews, and systematic and extensive regression testing, then Open Source won’t magically allow you to compete with companies that do have these skillsets. One more item to add to this list of necessary skills is enterprise-scale agile. (Did I call it “one more item”? Scratch that – there are many skills involved, under this one label.)

19 June 2008

Seven principles of agile architecture

Filed under: Agile, Symbian — David Wood @ 9:37 pm

Agile software methodologies (associated with names like “Scrum” and “eXtreme Programming”) have historically been primarily adopted within small-team projects. They’ve tended to fare less well on larger projects.

Dean Leffingwell’s book “Scaling Software Agility: Best practices for large enterprises” is the most useful one that I’ve found, on the important topic of how best to apply the deep insights of Agile methodologies in the context of larger development projects. I like the book because it’s clear (easy to read) as well as being profound (well worth reading). I liked the book so much that I invited Dean to come to speak at various training seminars inside Symbian. We’ve learned a great deal from what he’s had to say.

As an active practitioner who carries out regular retrospectives, Dean keeps up a steady stream of new blog articles that capture the evolution of his thinking. Recently, he’s been publishing articles on “Agile architecture”, including a summary article that lists “Seven principles of agile architecture“:

  1. The teams that code the system design the system
  2. Build the simplest architecture that can possibly work
  3. When in doubt, code it out
  4. They build it, they test it
  5. The bigger the system, the longer the runway
  6. System architecture is a role collaboration
  7. There is no monopoly on innovation.

Dean says he’s working on an article that pulls all these ideas together. I’m looking forward to it!
