The Map Is Not The Territory And A Wish List Is Not A Map (2)

The Limits of Prototyping in Agile Development

Central thesis of why software development is hard, and Part 1

So pretty soon, in the late 90s, it became common to find out what humans wanted out of computing systems by giving them a simulation of the system to play with, a prototype, which could come in all kinds of fidelities, from hand-sketched screens to, as the technology got better, clickable wireframes, to full front-ends. The craft of a UXer at the time was to be able to execute all these prototypes well with the tools available. The art was to fit prototyping at the proper fidelity into the software process such that you could find out the most, in order to decrease the risk of making the wrong thing as much as possible with the least resources. Sometimes the schedule would allow for a lot of time before development and you got to call that a “discovery phase”; sometimes you had to fit it into the Agile cycles somehow.

In a few engagements I even got both, so my team could make some outlandish mid-fidelity prototypes in Axure to run through with users and really elicit some deep thinking about their problems in the field we were working in, but then also do a broad test of the half-finished system mid-way through development to see if we were getting it right. The art there was to put the right user stories at the top of the backlog so you would have an unfinished but testable system halfway.

This approach could fail in fun and unexpected ways. Just as every map leaves something out of the territory it describes, prototypes can’t be complete. For one effort, as we were in the UK, we put the T&C stories at the bottom of the backlog, to be done after mid-way testing, because surely we didn’t need them to test the rental funnel? The prototype ended up failing testing in Germany because the test subjects insisted on thoroughly checking the T&Cs. And as I once had to explain to a group of stakeholders on another platform, we can’t just prototype ever more thoroughly to find all the contingencies, because by then you have basically built the whole thing, for a lot of money.

So prototypes are an answer, not the answer, to dealing with the fallout from the rule

  1. Humans can not accurately describe what they want out of a software system until it exists.

The reason that knowing when to use prototypes, and which, is an art and not a craft is that Agile doesn’t actually know how to deal with product design. Check the original principles: they do talk about design in one spot, but it is taken as a given that software developers just take one next step at a time and then check with the business people whether it was the right one, and that is the full extent of Agile’s thinking about what gets made. How it is decided what that step is, and how to make sure you end up with a coherent system across the multiple touch-points at the end, is left as an exercise to the reader. So when these Agile edicts were translated into repeatable and teachable processes like Scrum or Kanban, fitting in designing the experience became a matter of how the team or department wanted to organize itself, and the UX field has been struggling with that ever since.

Especially when the development field went through a long phase of demonizing Big Design Up Front and deciding instead that software creation was supposed to be about jumping right in and asking in tiny steps if what was made was right, with a lot of bright people advocating you could go from a two-wheel kick-scooter to a Porsche SUV in small cyclical increments, of which the first stage got called the MVP and was rushed out. And if the market was only ready for a Porsche, well, you’d better hope you found that out through some really deep, kick-ass user interviews and conversations about that scooter MVP, or through some other channel, because you’d never find out from sending that MVP out on the web and checking the numbers. Quant doesn’t give qual answers.

User research through prototyping made a resurgence, but flattened into a repeatable and teachable process called the Design Sprint: asking people on the street what they want, with half-sketches, in a cycle only allowed to last a week. The rest of the knowledge to create a success has to come from… hunches from the product manager? Marketing? In my last job it was edicts by stakeholders, when it should have been customer service. Pulling all these signals together is the synthesis-between-departments glue UX Research and Product Design should really be bringing now, and they are often not empowered to do it, or can’t because they are stuck in delivery cycles.

As Joanna Weber writes in this brilliant article about why Agile and Lean are such difficult fits in organizations that are vast and actually have to be trustworthy, coherent, and good: “If Scrum only worked for as long as there were waterfall systems in place to support it, we need to replace both with something that both acknowledges and improves that reality.”

And the reality is rule nr 1 above, and that

  2. Humans can not accurately predict how long any software effort will take beyond four weeks. And after two weeks it is already dicey.

So that replacement has to stay incremental in nature and show a lot to users at every step. It’s a tough situation and it has been true for years: we still do not have a repeatable, teachable process to make great software systems that span multiple touch-points and are a joy to use and maintain. We have to navigate between the speed of incremental delivery and allowing time to think about the design.

Right now there are roughly three fundamental ways in which design fits into Agile of various forms: Sprint 0 (which can be Big Design Up Front), Sprint Ahead, and Parallel Tracks. Of these, Sprint 0 and Sprint Ahead are the ones I am finding the most, with Parallel Tracks, which could combine research and design into a very strong customer experience proposition, seeming the least popular, mostly because “devs want their designer embedded for synergy and speed”.

That should change, though. While UX Research and Product Design currently have an employment and credibility crisis, I recently did some prototyping with new tools that make me think there’s a whole new direction to go here. But this is already too long, so I will describe my ideas for the future next week.

The Map Is Not The Territory And A Wish List Is Not A Map (1)

“Where have all the task decompositions gone?” I was talking to a very experienced Head of UX about the state of our vocation when she asked that. I had to agree: I have not seen one in years either. A task decomposition is when you take a task and divide it into steps, and then divide those into smaller steps, until you reach a granularity that makes sense for whatever you are doing it for, like designing screens or coordinating robot movements.

We used to do them all the time in UX, mostly when the field was still called HCI, to make sure we understood what the human was doing before we taught the computer to help them with it. There were many notation systems for them, and you could write a PhD on comparing these notation systems and then inventing new ones.
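To make the idea concrete, here is a minimal sketch of a task decomposition as nested data. The task names and the granularity are invented for illustration (it happens to reuse the car-rental example from part 2); real HCI notations like hierarchical task analysis also capture plans, ordering, and conditions, not just the tree of subtasks.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """One node in a task decomposition: a task and its subtasks."""
    name: str
    subtasks: list["Task"] = field(default_factory=list)

# A hypothetical decomposition of "rent a car", two levels deep.
rent_car = Task("Rent a car", [
    Task("Choose a vehicle", [
        Task("Filter by dates and location"),
        Task("Compare prices and options"),
    ]),
    Task("Book it", [
        Task("Enter driver details"),
        Task("Accept terms and conditions"),  # the step we once left for last
        Task("Pay"),
    ]),
])

def leaves(task: Task) -> list[str]:
    """The finest-grained steps: roughly what ends up as screens or form sections."""
    if not task.subtasks:
        return [task.name]
    return [name for sub in task.subtasks for name in leaves(sub)]

print(leaves(rent_car))
```

The point of stopping the decomposition at a chosen granularity is visible in the `leaves` helper: the bottom of the tree is what you hand to screen design, while the levels above it keep the context of why each step exists.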

Also something I haven’t seen in a decade is a specification full of descriptions of features that a system SHOULD and MUST and COULD have. They were called Functional Requirements, and while they often tried not to impose a view on how the system should look to users, you could tell how desperately the writer was trying to convey their needs in something other than fuzzy human language when they invariably started to use Word’s shapes tool to mock up screens, and then wrote the word SUGGESTION underneath so as not to offend their designers.

TDs and FRs are a relic from the waterfall period when you did a lot of design and understanding up front to make sure you were making the right thing for people before you committed the programming resources to make it. They were intrinsically incomplete in the same way a map always leaves things out of its description of the actual terrain, and expensive to make, and of limited use because:

  1. Humans can not accurately describe what they want out of a software system until it exists.

Bit of an issue.

Rule nr 1 is and was true all along. You’d computerize a workflow of paper files in a shop or local government, and at the end it would turn out that there were all these exceptions being made by clerks and admins, using different color pens or writing in the margins, that all the workers understood, but that nobody above them working with IT did. The exceptions would be so important you’d have to retool the whole thing down to the tables in the database, and the project would be late and expensive.

When Agile originally said to deliver value frequently, it wasn’t to unlock money from customers cycle after cycle; that wasn’t even really possible until we started putting everything on the instantly monetizable web. It wasn’t for investors either; they will happily wait years for a return if the projected return is big enough. Agile wants frequent releases so you can show the results to humans fast, get feedback, and then correct course, instead of finding out when you deliver the whole thing after two years that rule nr 1 above always holds. It’s only around the time Lean Startup came along that every iteration wasn’t just to correct the course but also had to deliver some new mini-feature.

So if you want to replace Agile Scrum or Kanban with something, you have to deal with the fact that 40 years of trying to first find out how people work, and then making wish lists in all kinds of notations of how that work is to be done by computers, was never really successful and was often a total failure.

Still, adding functionality bit by bit as you explore what is needed comes with things you should be aware of:

  1. The resulting system is kludged together cycle after cycle, unless you take some choice time between cycles to refactor huge chunks. This is why every seven years a software team wants to just start over: they can’t take doing archeology in all those cycles of hacks anymore and don’t feel they can add any more functionality without watching the tower of hacks fall over.
  2. It’s actually not faster than Waterfall. It just decreases the risk of ending up with garbage: a sub-optimal product-market fit.

But, but, but, if wish-list specifications didn’t work because making software in itself changes the work the software is supposed to help with, what about prototypes? Those worked, right? Yes, with a list of caveats, including that Agile doesn’t actually know when to use them and that AI-derived UX has been deeply changing that game in the last 6 months, but I’ll discuss that in part 2.

Selection

Maybe I am too deeply connected to the Lean Startup / Lean UX / Agile UX circles in London, and what I am about to say is patently untrue because of confirmation bias, but one reason I haven’t written here much is that I have gotten very few requests of the “I am totally uninitiated in UX for my start-up” kind. It seems like tech start-ups are convinced of the value of UX from the start; the only wrangling I am dealing with now is how much to invest in it.

Still, how much to invest? What level of person to get? I have complained about it before, but I think I need to repeat it: if you are investing heavily in developer talent, why are you trying to go on the cheap with a junior or part-time UXer? Do you seriously want to spend your investment in dev talent making software of which you have no idea whether it will function for your target population before you release it? Do you want them iterating on a confusing implementation of the core idea, without proper guidance and a process for breaking out of that rut and starting to make software that fits your market?

Looking for a UX Designer just based on whether they can tart up your app or site is not going to get you the best value. It may seem cheap to get an up-skilled digital graphic designer, but the value in UX is knowing the processes with which you find out how your idea is viably brought to life, which includes helping decide who it is for, how to find out what is important and what is secondary in your product, how to track what is hitting the spot and what isn’t, which of many forms of inquiry to use to get insights, and how to translate those into something to build. There is real science, craft, and experience-based learning behind answering these questions. Can you afford not to get good answers?

I was recently asked by a founder how to screen for a good UXer if you aren’t in the field yourself. There are plenty of heuristics, and besides the standard question of making sure you find someone you want to spend 70 hours a week with, I recommended the following:

  • Make sure the portfolio tells stories of how the questions I mentioned above were answered by the candidate in their projects. If all you see in a portfolio about projects is wireframes and finished screens, this person does not understand what UX is supposed to bring to the table, and they were just a cog in a big machine.
  • When looking at the portfolio, make sure you feel a mix of both recognition (“Yes, this looks pretty straightforward”) and a sense of innovation, and sometimes even both in the same project. You want someone whose thinking you can relate to, but who will augment you and your team in ways you currently lack.

Since I am answering fewer questions for start-ups, I will probably open this blog up to discussing the practice of UX more, as much as I can while working in a big agency that wants to keep its secrets, of course.