The Rest of the Company Can’t Handle Agile

Medium for some reason thinks I continue to want to read about the demise of Agile project management methodologies for making software. My daily email has headline after headline by Product Managers and Engineering Leads writing why Agile—by which they usually actually mean Scrum, a framework of Agile principles turned into a repeatable, teachable methodology—is over.

(I am wondering if, for balance, some other designer who is heavily into Waterfall is getting a ton of headlines about why Agile is the best thing ever and will never die.)

The articles always discuss the shortfalls the authors think come with Scrum, with its short cycles and loads of coordination, but I have barely ever seen discussion of what I consider the most painful problem with Agile: the rest of the company can’t handle it. To put it as briefly as I can:

All our best methodologies to create successful software successfully—Agile, Lean UX, Outcome-oriented delivery—say that every software team needs to be in constant learning and adaptation mode, which means that delivery is now in an R&D model. But R&D is fundamentally unpredictable and most companies are not set up to live that way.

It becomes impossible to plan marketing for a launch, to budget commercials, to have an announcement for a conference, to set up your suppliers, to keep relationships with your partners, to project the ROI to finance and stakeholders, or in any way to hit your internal targets when every week or two the direction, delivery time, or the projected features of the digital product can change.

Inside anything but a small start-up, functions and departments depend on each other, and end up depending on what features and forms are on the website. Yet the software industry at all levels advocates not committing to both a time and a scope of work beyond 4 weeks, because predictions that fix both time and scope beyond 4 weeks have overwhelmingly turned out to be wrong over the last 80 years. That’s a really tough message to accept when you depend on that software. Every other discipline can deliver to time and scope, but somehow software can’t? Indeed it can’t, actually, but to truly internalize that you have to have been a software engineer who has repeatedly blown past deadlines they committed to with full confidence just a few months earlier.

(Software, but see also residential bathroom remodels. I have never heard of one of those being on time and budget and on-plan either.)

It’s difficult for agencies

Now try being a digital agency, whose model depends on promising clients certain functionality by a certain date. The dance I have seen account managers do with clients, first telling them that user and market research may uncover things they absolutely do not want to hear, and then that delivery of what they agree to make might have a large margin of error in time and thus cost (and never downwards), has been a sight to behold. But in the end, companies use agencies to lower their risk, so they will insist that the statements of work eventually specify the time and scope of what will be delivered — because who wants to spend money and not know what they are getting, or when? Most agencies will thus sign everyone in their ranks up for fake Agile: everything made in little chunks but with a fixed end date, always leading to overtime.

The best way I ever saw this managed when I worked agency-side was how an account manager, after a lot of discussion with the client and a number of smaller engagements where we had proven ourselves, negotiated that the client would only purchase weeks of time of a certain team. We got away from billing for finished pieces, features, screens, separate deliverables; instead we agreed together on what outcome the client was after, settled on an approach, and would follow design and delivery for it, with a lot of touchpoints to exchange progress and feedback and agree on course-corrections as we learned. We continued to work hard to keep the trust that allowed us to stay within this model that got us away from endless negotiations about milestones and estimates, while staying true to the Agile principle of committing long-term only to time or only to scope, not both—in this case, time.

And businesses that have been around for a while

Inside companies that make their software in-house, this boundary between the Agile and the more traditional parts can get especially painful if that company has a centralized software group making the websites and apps that other divisions depend on. This is the model you see most often in companies older than the Internet, which had to learn how to make software later; the company tries to control the unpredictability of creating software by keeping it isolated and concentrated in one place.

This boundary between the Agile software delivery side and the departments relying on features is usually a beleaguered Product Manager who is constantly trying to figure out what the priorities really are this month while also trying not to over-promise anything. They have to handle increasingly pressing questions from department heads about why this thing they need is not live on the website yet and when we are enabling this new product category in the CMS, in between convincing stakeholders of what the latest tests and data uncovered, only for management to suddenly believe those findings (without crediting them), change direction, and expect results yesterday while the budget stays the same, because budgets are set for the year. The Product Manager feels they aren’t really in charge of their product, the other departments feel they have no control over their future, and the C-suite doesn’t understand why everything is so slow and everyone is so defeated, and hires another COO to clean things up.

The solution: pushing the Agile boundary up and out

I honestly believe that the only long-term solution to this is the opposite of having a dedicated IT department: pushing software creation deeper into the company. As I evangelized inside one of the places set up this dysfunctionally: “Do you think the business group inside Facebook that manages the friends list writes up their feature priorities for the year and then submits them to some central programming group? Sitting there, just hoping their features get prioritized over the needs of the calendaring business group, perhaps shouting louder on every call with the central programmers to get what they need? Of course not; inside Facebook, everyone whose outcomes depend on what is on the website gets to program their little piece of the website with their own team — last I heard, 7000 teams could push changes to the website. Do you think Spotify’s playlist recommender is a separate business division begging for time from the central Spotify programming teams? Of course not, the Playlist business group makes their own Playlist features. Yet here we are in this platform division of [this company I was working at], trying to juggle a backlog of stories listing what 5 different internal product divisions need from our web platform, and asking the VP above them to please set the priority so we can exist without always having at least 4 knives in our back.”

It takes tons of coordination to keep a large digital service coherent when so many teams can push things to the web and apps, but we have structures and procedures for that, and at least it lets all the teams in the company chart their own course. But yes, it does mean a lot more people throughout such a legacy business have to learn about making software. It means that a lot of people who signed up 20 years ago to do, say, production chemistry or business admin now, thanks to their successful careers in management, need to lead software efforts for their division and learn very quickly what the difference is between an MVP and an MVT. And a lot of people are instead more comfortable leaving that to a separate software group and then bitching at them.

A stunning amount of workplace friction, if not outright toxicity, comes directly from this misalignment between traditional corporate structures and Agile ways of working. It remains to be seen whether AI-assisted software engineering is going to change the unpredictability of making software, or the unpredictability of what users will use; all signs are that AI will deliver more of everything per cycle — more screens, more prototypes, more code — but no AI prompting will compensate for not knowing what the market wants, or what bugs hide in the back-end integrations with your new code. It will remain necessary for software creators to show their work often, to users and to stakeholders, in order to get feedback on whether they’re on the right path. You can call it something other than Agile if you want, but the principles won’t change, and thus neither will the need to do the hard work inside the rest of the company to align everyone around it.

But what if you could explore a lot of terrain quickly, and not need maps?

Why software dev is so hard, Part 1, Part 2

Testing in production is really expensive

As discussed before, decades of experience teach us that we have to show users what they say they want so we can find out what will actually work for them, and we have to do this often. So often that we created a whole inventory of ways to find out, from ‘painted door’ concept tests to A/B tests for more tangible experiences, to interviews with prototypes before making things and validation testing after making things, in small conversations for insights or large numerical samples. But as this article by Judd Antin posits, what this led to is a lot of research that tells you about features, and very few answers to the big questions of what will actually make a change in people’s lives that they are willing to pay for.

For a decade a whole industry told all of software creation to “build, test, measure” but forgot to give a well-founded answer to “build what?” Everyone could see that it actually meant “throw stuff you think will work at the wall and hope something sticks,” with stuff being whatever the poor Product Manager or HiPPO could come up with from their hunches, customer service, or competitor features, while at the same time we were telling ourselves it was ever so scientific and valid. You couldn’t help but feel cognitive dissonance looking at the process. Meanwhile the design process didn’t even fit software development properly anyway; see Part 2.

It is then not surprising that many Product Managers end up frustrated, thinking they could get the answers they really need by “getting out of the building” and “talking to a few users.” They feel the real gap in their knowledge is how their users really live with their products, not whether the OK button is green enough. (“Getting out of the building” without a rigorous agenda and user selection doesn’t work, BTW: talking to whichever target person you end up with will just enable enormous amounts of confirmation bias.)

Don’t take just my word for it; Pavel Samsonov here discusses with other examples how cyclical design and development ends up unsatisfying. Many comments on the article are beautiful examples of copium: repeating Ur Doing It Wrong without engaging with the main problem that everything about making experiences for users, from deciding what to make to how to make it, chafes for every role in the team, and has for the last 15 years.

I don’t agree with the article, and others like it, that the solution is to redefine delivering value every cycle not as making a feature or capability, but as learning some lesson. The problem with that is that you are still using your development team to ‘try something’ over and over, but this time with the explicit assumption they will throw 75% of it away. A full-fledged, production-code-releasing dev team is a very expensive place to learn what not to build. Among other things, it puts a large psychological toll on designers and developers who pour blood, sweat, and tears into stuff only to have it thrown away in the name of ‘learning’, over and over. You also can’t explain it to the budget people, who will see the throw-away as waste.

This cycle was and will remain necessary for a while in organizations terrified of Big Design Up Front or that don’t have a separate deep research capability, but it will feel uncomfortable, and the team will slowly move back from releasing-to-learn to releasing-features-to-hit-KPIs. You don’t build careers by publicly admitting you ‘are going to waste’ 75% of your dev efforts.

But the article does point to a way out: what if your dev team didn’t have to create and release experiment after experiment to find out what to build for real? What if the design and research group could test alternatives much faster, and not just one-page A/B tests, but whole realistic funnels, whole new concepts, many at the same time, without needing dev resources?

Enter AI and no-code

In case you haven’t followed UX for the last 20 years: one of the questions we always debated was whether UX designers should be able to code, and by code we always seemed to mean HTML/CSS/JS (which then implied that UX was only about web pages). The pro-arguments included that designers who could code wouldn’t waste everyone’s time by designing impossible pages, or, on the extreme end, that designers who actually coded their designs would speed up time to delivery enormously because there wouldn’t be a whole specification stage (first Photoshop, then Sketch, now Figma). There’s a whole set of assumptions in that statement that I don’t have time to unpack, even though I could from experience, because I used to be such a designer-developer; but the rarity of user-experience designers who code their own designs into production says enough about how realistic that idea was.

But creating front-end web code has become a lot easier.

  • In the same amount of time it takes to become an ace at Figma, you can become an expert in Webflow, a tool for visually building web pages. I can now output a static HTML/CSS front end of acceptable code quality, ready for a JS programmer to add connectivity and motion to, as fast as I can wireframe. This means I can explore about three different directions in a day, and thus three different funnels in a week, ready to go live in a sandbox for qual and quant testing.
  • I recently asked v0.dev, a prompt-based (so an LLM under the hood) site-builder, to make me a portfolio site for a very specific kind of UX candidate. It knocked out the full code for 3 pages in 5 minutes, ready for me to change with CSS styling cues.
  • Uizard (also seemingly LLM-based) is like having a (really dumb) UX Production Assistant on call: you have to explain a lot to it, but it will let you iterate your ideas inside the tool, prompt after prompt.
  • I know one high-powered web researcher who is turning around the deepest knowledge-representation experiments, experiments sponsored by the biggest data warehouses in the world, whose outcomes could change whole paradigms, twenty times faster than he used to, because he can now add a menu to a web page to select or transform stored data as easily and quickly as he can ask Claude. (The interfaces look super basic and are definitely not production-stable, but for that kind of experimentation that is irrelevant.)

It’s now just a matter of time until the AI UX generators find their way into the no-code graphical HTML tools, with full round trips of intention between the AI and the human. Pretty soon we will be

  • describing personas and tasks and creative directions to our UX tools,
  • seeing screens appear as designs but also as code, for us to move and re-color and warp and remix,
  • submitting them to stakeholders, whose comments can be absorbed into the designs in real time,
  • outputting live code directly into our A/B sandboxes, crowd-sourced user-testing systems, or interview workflows,
  • and finally having the results and statistics pushed back into our tools so we can tweak the designs.

This goes way beyond an AI plugin in Figma: it is a live, continuous dialog between design, development, and testing, mediated through language, iteration, integration, and direct manipulation like in our current design tools.

The tools themselves will not innovate, and their designs will be bland, but they will inform. LLM-based tools only regurgitate what they already know; they are retrospective. ChatGPT will never tell you to query ChatGPT, because ChatGPT’s corpus doesn’t include ChatGPT—but v0 does know everything about what features and paths we have been grouping together in our interfaces. The very first time I used v0 I didn’t just become more productive, but also more complete: its output included some flourishes and ideas I had not considered yet, but which upon inspection were actually baseline for what I was doing.

(Yeah, there are issues here: LLMs are in the same category of resource-greedy as all cryptocurrency, and the big LLMs were all created by stealing the intellectual property of everybody who ever published anything on the web. This can’t be discounted. But if the latest Chinese efforts bear fruit and you no longer need the complete energy output of Wichita and the daily production of Dasani to get a mock-up web page, plus we factor in that as UX designers we were all copying each other’s ideas already anyway, then using them for UX design could actually become ethical.)

If a design team can prototype at full fidelity quickly, a lot of current ways we organize designing software can change.

  • We can show users many things, even better than we used to, and get many comments, keeping us on track. Rule 1 is taken into account.
  • Design will outrun dev even more than it actually already does in many places, except that now design needs that speed so they can thoroughly vet what they are making at whole new levels.
  • Product and UX as disciplines will have to get even better at finding hidden ideas and needs from all sources (interviews, customer support, comments, observation, stakeholder knowledge), and at getting quickly to statistical validity for choosing a direction. The main art of UX will go back to being the glue that shepherds this whole design process to working conclusions, not deciding on color wheels.
  • What you get out of the AI box will be so middle-of-the-road, that making the designs stand out for brands that need it, or for innovative new functions, will require real sweat.
  • Content Designers will have to push harder than ever for Product to please, please, please start with content first, and let them innovate on content first, or all content will be forced into the same 10 formats. However, a team with strong content designers that takes the time to design some different content formats first will be able to test very fast which of those formats gets the best results for their specific brand and goal.
  • Design gets a lot closer to being research. Design can experiment more to learn lessons faster, without eating up dev time on prototypes or experiments that will be thrown away.
  • Dev can now focus on robustly delivering the ‘winner’ but will have to take the design output and run every line through their own translators to integrate them into existing production code. And they will have to fight to be allowed to do it because what the AI / no code / Design cycle produced for testing will look good enough to release. There will be production UXers involved to keep all touch points coherent.
  • This will then organize developers and designers into parallel tracks that are less dependent on each other and allow both disciplines to operate at their own speed, increasing comfort and quality.
  • Hand-off between stages will be even more of a trip. The AIs will help us manage the design systems and the code repositories, link the tokens and variables directly, drop templates into the CMS for instant use, and be helpful in all kinds of ways, yet somehow unpredictably fall flat on their faces, serving some users complete garbage in ways we can’t even imagine right now. We will have to check all systems with intense reviews and QA before anything goes live.
  • We will need to know our users better than we ever have before we even start to design something, or we will flood the zone with so many bad ideas that our users will run away from our product or brand at the speed of light. Y’all have no idea how many bad A/B experiments y’all were spared just because Optimizely had the built-in bottleneck of needing JS programmers to make anything really work. UX Research had better get really good at answering the big questions about our customers and their issues to keep us on track.
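Getting "quickly to statistical validity" when comparing funnel variants is, at minimum, a significance test on conversion rates. A minimal sketch of such a check, in Python with only the standard library (the function name and all numbers are illustrative, not from any specific testing tool):

```python
# Illustrative sketch: a two-proportion z-test to judge whether one
# funnel variant converts better than another. Real tooling should
# also pre-register sample sizes and correct for multiple comparisons.
from math import sqrt, erf

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, Phi(x) = 0.5*(1+erf(x/sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: variant B converts 6.5% vs. A's 5.0% on 2400 visitors each.
z, p = conversion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the variants really differ
```

The point of the sketch is the denominator: the sample sizes. Cheap AI-built prototypes let you run many variants, but each variant still needs enough traffic or interviews before the numbers mean anything, which is exactly where a research discipline earns its keep.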

Yeah, design is absolutely in flux with the lay-offs and the re-orgs and the reckonings, but not in the way you think it is right now. How we work is about to be reorganized. The new designers are very ready to use no-code and AI tools to make the current wireframe jockeys look like chumps. Get ready.

The Map Is Not The Territory And A Wish List Is Not A Map (2)

The Limits of Prototyping in Agile Development

Central thesis of why software development is hard, and Part 1

So pretty soon, by the late 90s, it became common to find out what humans wanted out of computing systems by giving them a simulation of the system to play with: a prototype, which could come in all kinds of fidelities, from hand-sketched screens to, as the technology got better, clickable wireframes, to full front ends. The craft of a UXer at the time was to be able to execute all these prototypes well with the tools available. The art was to fit prototyping at the proper fidelity into the software process such that you could find out the most, in order to decrease the risk of making the wrong thing as much as possible with the least resources. Sometimes the schedule would allow for a lot of time before development and you got to call that a “discovery phase”; sometimes you had to fit it into the Agile cycles somehow.

In a few engagements I even got both, so my team could make some outlandish mid-fidelity prototypes in Axure to run through with users and really elicit some deep thinking about their problems in the field we were working in, but then also do a broad test of the half-finished system mid-way through development to see if we were getting it right. The art there was to put the right user stories at the top of the backlog so you would have an unfinished but testable system halfway.

This is a notion that could fail in fun and unexpected ways. Like every map leaves something out of describing the territory, prototypes can’t be complete. For one effort, as we were in the UK, we put the T&C stories at the bottom of the backlog, to be done after the mid-way testing, because surely we didn’t need them to test the rental funnel? The prototype ended up failing testing in Germany, because the test subjects insisted on thoroughly checking the T&Cs. And as I once had to explain to a group of stakeholders on another platform, we can’t prototype even more thoroughly to find all contingencies, because by then you have basically just built the thing, for a lot of money.

So prototypes are an answer, not the answer, to dealing with the fallout from the rule:

  1. Humans can not accurately describe what they want out of a software system until it exists.

The reason that knowing when to use prototypes, and which, is an art and not a craft is that Agile doesn’t actually know how to deal with product design. Check the original principles: they do talk about design in one spot, but it is a given that software developers just take one next step at a time and then check with the business people whether it was the right one, and that is the full extent of the thinking about what Agile makes. How it is decided what that step is, and how to make sure you end up with a coherent system across multiple touch-points at the end, is left as an exercise to the reader. So when these Agile edicts were translated into repeatable and teachable processes like Scrum or Kanban, fitting in designing the experience became a matter of how the team or department wanted to organize, and the UX field has been struggling with that ever since.

Especially when the development field went through a long phase of demonizing Big Design Up Front and deciding instead that software creation was supposed to be about jumping right in and asking, in tiny steps, if what was made was right, with a lot of bright people advocating that you could go from a two-wheel kick-scooter to a Porsche SUV in small cyclical increments, of which the first stage got called the MVP and rushed out. And if the market was only ready for a Porsche, well, you’d better hope you found that out through some really deep, kick-ass user interviews and conversations about that scooter MVP, or some other channel, because you’d never find out from sending that MVP out on the web and checking the numbers. Quant doesn’t give qual answers.

User research through prototyping made a resurgence, but flattened into a repeatable, teachable process called the Design Sprint: asking people on the street what they want, with half-finished sketches, in a cycle that is only allowed to last a week. The rest of the knowledge to create a success has to come from… hunches from the product manager? Marketing? In my last job it was edicts from stakeholders, when it should have been customer service. Pulling all these signals together is the synthesis-between-departments glue that UX Research and Product Design should really be providing now, and are often not empowered to, or can’t because they are stuck in cycles.

As Joanna Weber writes in this brilliant article about why Agile and Lean are such difficult fits in organizations that are vast and actually have to be trustworthy, coherent, and good: “If Scrum only worked for as long as there were waterfall systems in place to support it, we need to replace both with something that both acknowledges and improves that reality.”

And the reality is rule nr 1 above, and that

  2. Humans can not accurately predict how long any software effort will take beyond four weeks. And after 2 weeks it is already dicey.

So that replacement has to stay incremental in nature and show a lot to users at every step. It’s a tough situation and it has been true for years: we still do not have a repeatable, teachable process for making great software systems that span multiple touch-points and are a joy to use and maintain. We have to navigate between the speed of incremental delivery and allowing design the time to think.

Right now there are roughly three fundamental ways in which design fits into Agile of various forms: Sprint 0 (which can be Big Design Up Front), Sprint Ahead, and Parallel Tracks. Of these, Sprint 0 and Sprint Ahead are the ones I encounter the most, with Parallel Tracks, which could combine research and design into a very strong customer-experience proposition, seeming the least popular, mostly because “devs want their designer embedded for synergy and speed”.

That should change, though. While UX Research and Product Design are currently having an employment and credibility crisis, I recently did some prototyping with new tools that made me think there’s a whole new direction to go here. But this is already too long, so I will describe my ideas for the future next week.

The Map Is Not The Territory And A Wish List Is Not A Map (1)

“Where have all the task decompositions gone?” I was talking to a very experienced Head of UX about the state of our vocation when she asked that. I had to agree I have not seen one in years either. A task decomposition is when you take a task and divide it into steps, and then divide those into smaller steps, until you reach some granularity that makes sense for why you are doing this, like making screens or coordinating robot movements.

We used to do them all the time in UX, mostly when the field was still called HCI, to make sure we understood what the human was doing before we taught the computer to help them with it. There were many notation systems for them, and you could write PhDs comparing these notation systems and then inventing new ones.

Also something I haven’t seen in a decade is a specification full of descriptions of features that a system SHOULD and MUST and COULD have. These were called Functional Requirements, and while they often tried not to impose a view on how the system should look to users, you could tell how desperately the writer was trying to convey their needs in something other than fuzzy human language when they invariably started to use Word’s shapes tool to mock up screens—and then wrote the word SUGGESTION underneath so as to not offend their designers.

TDs and FRs are a relic from the waterfall period when you did a lot of design and understanding up front to make sure you were making the right thing for people before you committed the programming resources to make it. They were intrinsically incomplete in the same way a map always leaves things out of its description of the actual terrain, and expensive to make, and of limited use because:

  1. Humans can not accurately describe what they want out of a software system until it exists.

Bit of an issue.

Rule nr 1 is and was true all the time. You’d computerize a workflow of paper files in a shop or local government, and at the end it would turn out that there were all these exceptions being made by clerks and admins, using different color pens or writing in the margins, that all the workers understood but nobody above them working with IT did. The exceptions would be so important you’d have to retool the whole thing down to the tables in the database, and the project would be late and expensive.

When Agile originally said to deliver value frequently, it wasn’t to unlock money from customers cycle after cycle—that wasn’t even really possible until we started putting everything on the instantly monetizable web. It wasn’t for investors either; they will happily wait years for a return if the projected return is big enough. Agile wants frequent releases so you can show the results to humans fast, get feedback, and then correct, instead of finding out when you deliver the whole thing after two years that rule nr 1 above always holds. It’s only around the time Lean Startup came along that every iteration wasn’t just there to correct the course but also had to deliver some new mini-feature.

So if you want to replace Agile Scrum or Kanban with something, you have to deal with the fact that 40 years of trying to first find out how people work, and then making wish lists in all kinds of notations of how that work is to be done by computers, never was really successful and often a total failure.

Still, adding functionality bit by bit as you explore what is needed comes with things you should be aware of:

  1. The resulting system is kludged together cycle after cycle, unless you take some choice time between cycles to refactor huge chunks. This is why every seven years a software team wants to just start over, they can’t take doing archeology in all those cycles of hacks anymore and don’t feel they can add any more functionality without watching the tower of hacks fall over.
  2. It’s actually not faster than Waterfall. It just decreases the risk of ending up with garbage, a.k.a. a sub-optimal product-market fit.

But, but, but: if wish-list specifications didn’t work, because making software in itself changes the work the software is supposed to help with, what about prototypes? Those worked, right? Yes, with a list of caveats, including that Agile actually doesn’t know when to use them, and that AI-derived UX has been deeply changing that game in the last 6 months. But I’ll discuss that in part 2.

The Two Rules Of Software Creation From Which Every Problem Derives

Scrum has been having a bad time for the last ten years, and thus so has Agile. My favorite article on this is truly exhaustive about all the problems we have encountered in the last two decades trying to deliver software of any kind using these methodologies. (I know nothing about the writer, this could totally be a Milkshake Duck experience; some algorithm just recommended this insanely long post to me one day and I went “Uh huh. Uh huh. Uh huh. Uncharitable but I can see where they are coming from. Baby and bathwater but yeah. Uh huh.” It’s been on my open tabs for 4 months now waiting for me to write this.)

Thing is, I remember the before times. Waterfall, short or long. I delivered projects in those systems. In fact, the first time I encountered Agile in a company, I was new and young and thus stupid enough not to ask what it was about, and thus very confused until I looked up where it came from. I remained very confused about how you get from the statements in the Agile manifesto (seriously, check them out again) to the rituals of Scrum like stand-ups, retros, pointing, and everything else that makes programmers so angry that they only get to program 50% of their day and have to talk to other people otherwise, until I actually did some work in Agile Scrum and understood what it was trying to do.

None of the critics are offering real alternatives, just modifications of Scrum to fit Design or Product management in. I don’t think anyone wants to go back to Waterfall, but they can’t really explain why not.

I can. With two rules (which may become more as we discuss them).

It’s these two rules that are actually behind every statement in the Agile manifesto. The manifesto unfortunately doesn’t really name them; the people behind it were so steeped in the problems of software delivery—and what they thought would fix it—that they posited their statements without saying why each of these things is necessary to deliver good software. (Necessary, unfortunately, but not sufficient for success, as we found out over the next decades.)

They are

  1. Humans cannot accurately describe what they want out of a software system until it exists.
  2. Humans cannot accurately predict how long any software effort will take beyond four weeks. And after 2 weeks it is already dicey.

That’s it. Every other problem that you have to solve in software delivery rolls out of these two major issues, I think. I may be wrong, you may need a third or fourth.

Am I going to spend time proving these two correct? No. There’s enough literature out there, most of it documenting failed software efforts in Waterfall in the 60s, 70s, 80s, and 90s, to support them, and I am not going to go over that. What I will do, in the next few days, is go over their implications, how they led to current Agile practices, and how they cannot be ignored when you want to make things better.

If culture eats strategy for breakfast, OKRs are just the cheap juice everyone gulps down first

Six hundred years ago, in the early 2010s, I was working a contract—a heritage company that needed their website to be responsive—where I met my first digital transformation consultant. A few weeks in I found out their day-rate was literally twice mine, so I asked them what they actually did. The answer was that after all the stakeholder fluff, their actual work was to find and look at every function involved in the digital side of the company, and recommend how to align everyone’s incentives for a good outcome. I nodded sagely and had no idea what that actually meant; until then I had worked for large companies with long histories and already-aligned missions, or tiny research teams that were making it up as they went along. Well, I’ve worked for very different companies since then, and I have seen the pain of mismanaged alignment, usually in large companies that don’t take the time to define themselves.

There’s this moment burned in my brain from when I was in a 1-on-1 with an organizational advisor about some of the issues that were massively, massively frustrating when trying to get good design delivered in the company. I had pulled up an org chart that depicted two departments that were supposed to work together to make good things for customers but were instead just barely pushing a few new features out.
The advisor pointed to the topmost leaders of each department and told me how they advised both of these people and were thus cross-organizationally aware.
And soon confidently added: “They have the same OKRs, right, so that is how they stay aligned.”
I managed to not fall off my chair.

Gentle reader, if, for example, the Customer Service and the Product departments share an OKR of lowering customer complaints, but CS is culturally and financially incentivized for high throughput of calls, then CS may invest in a CRM to record the customer issues but they will not really work with the caller to find the root issue and then log it exhaustively. They will just report that 80% of all issues are password-related and put in some more voice-over messaging to send callers to the chat bot on the website, while the User Research department will have to spend a ton of money to find out what customers already desperately want to tell the company on the phones every day about what they are trying to do that doesn’t even need a password. The OKR is indeed empowering every department to chart their own course—separately.

Sales and Digital Delivery may share an OKR to increase new conversions, but if the management of Digital incentivizes rapid, fail-fast online experimentation, while Sales is leaning into the reliable, trustworthy, solid aspects of the established brand in their outreach, the user really will end up, at best, unnerved by the difference between what they are told and the rawness of what they use, and the designers and content strategists in Delivery trying to bridge this chasm into one experience will burn out in no time from being yelled at by Sales. By focusing only on a measurable quarterly outcome that can be pursued independently, the OKR is doing nothing to align where it counts, and it is chewing up the people in between.

These examples are made up, by the way, I can’t write about what I have actually seen fail.

The literature about OKRs all says things like “Objectives and Key Results (OKRs) provide a framework for businesses to execute and achieve their desired strategies through simple, collaborative goal setting” and that these create internal alignment, but the alignment ends up being only about which measurements to game this quarter or year. OKRs say nothing fundamental about what kind of relationship the company wants to have with its customers beyond “let’s make money this specific way right now”. It’s not a framework that includes a vision of how that specific money-making thing fits into the long-term relationship between the company and the customer, or what values and standards define what the company considers acceptable to offer customers. That’s the company culture, and it needs to be set and maintained separately. Of course, that activity doesn’t generate an immediate return, so it’s a non-starter these days except in some very committed companies.

Some might right now say “wait, no, a good OKR absolutely defines a quality level: if the experience or product or marketing is below a certain quality level then you won’t make your key result, and therefore the OKR implicitly aligns the product and marketing and experience departments to this particular level; they have to talk to each-other to reach that ambitious result.”
To which my answer is: look around you at the disjointed mediocre experiences from all these companies who are supposedly all-in on OKRs. Without a company culture defining a baseline of expected quality that can only be achieved when groups work together, a culture maintained from above, departments will just do what they can themselves because creating that alignment horizontally by themselves is just too hard. The alignment will stop at the boundaries of departments committing to separate numerical targets. Every department will try to achieve the key result they settle on with the tools they control: one department may go all-in for quality in their space while the other goes for gaming and dark patterns. While everyone is also loudly complaining about internal waste and separation, of course.

OKRs supposedly communicate and align strategy. Culture eats strategy for breakfast, especially if the strategy comes in as a two-line description of a numerical target. Between all the merging and slicing of companies by current venture capitalism, the corporate cultures that unify departments have been totally eroded. When the main product companies actually are tasked to produce is a higher stock price, everything else becomes secondary to that, including even keeping customers alive, even if that was a guiding principle for absolutely decades. All that is left is departmental culture, and it only takes one group deciding on a different course from the others to make the experience disjointed to the customer.

There’s no free breakfast. Aligning takes work in very large companies, and it has to come from the top, looking at what is being put out, holding it up to the company standard you have taken time to define, doing the leg-work until you know who made what and why so you can shape teams and align incentives and create the right communication channels until everyone is making their part of the same thing. It’s so seductive as management to think you just need to write a couple of goals and target numbers in an OKR format and you can then just throw it down the pyramid to “empower” departments and everything will be alright and everyone will innovate and make their best stuff. And they will make their best stuff—within their constraints, expectations, and reward structures. Aligning those is where the real management work is.

Making Promises We Don’t Know How To Keep

I left London after ten years and moved to Berlin, for various reasons. There is a lot of UX work here, but most of it is in digital B2C Product creation of the lean / Agile / data-informed variety for start-ups (or companies that think they want to be start-ups). Showing up with twenty years of product facilitation and advanced concepts on the CV meant a lot of searching to find a good fit, and I am now working in scientific publishing, enabling an internal platform for many business units.

I recently noticed that a core piece of anxiety I used to have almost every day at work in London, at the various agencies and clients, is gone. It isn’t just that UX in London is so over-hyped, with its constant events and conferences, that it makes you feel just doing your job is not good enough; something else has qualitatively changed for me. I sat with it for a moment and realized what it was: I am not asked to do the impossible anymore.

The higher I got, the more unreasonable the requirements I had to take responsibility for became. I have been the Director of UX for a client where someone included in the pitch the promise that our version of their new website would increase conversion by 10%. Just like that. Or I have been co-tasked with revamping a whole web product to be more socially conscious and long-term oriented, but then also told the lead-generation part of it could absolutely not drop, even though offering those leads directly contravened the tone of the new product. That sort of thing, mostly just blithely required so someone upstairs could get or keep their bonus.

So, increasing the business, not hurting the money they make: why do I call that unreasonable? The problem with those kinds of requirements is that UX doesn’t have the tools to evaluate them before the work is taken on. I have no way of looking at the work up front and conclusively saying, oh yeah, I can do that. Asking me to sign up to goals I can’t evaluate, well, yeah, that is unreasonable.

Every UX Researcher will tell you: users never stop surprising you. Every UX Designer who has had their stuff user-tested a lot will tell you: the version before first testing is trash. We can throw however many years of experience we have at any design, however many heuristics we have to get it right, and we will still be surprised at how users interpret some aspect of a page: the copy, not seeing a button because of the surrounding elements, the sizes of boxes misleading the eye. Add to that how every design these days is very often not the actual full page, but a piece that will be inserted into a system of modules, journeys, cookie banners, mail sign-ups, surrounding content, and unpredictable ads, and we’re unable to have any certainty even if we did have predictive tools.

But we don’t have predictive tools in UX. Just a mountain of ways to lower risk by gathering information pre- or post-design. So when, during a pitch, I find out I am being signed up to a 10% increase in conversion, I can’t actually say no and keep my job. I can barely say “I don’t know”, really. I have gotten away with “I’ll do my best” or “Well, their site is fundamentally ten years old–I really think we should be able to do better”.

Programming, i.e. software development, switched to Agile methodologies because it turns out the intricacies of legacy layers of code and business requirements make it impossible to predict scope and time beyond four weeks. UX design tries to do the same shortening by advocating repeated quick build-test-measure cycles, calling each one a “product experiment” to “fail fast” at, but this is still not accepted everywhere, and the reason this methodology needs to exist, that fundamental inability to predict during design how good a design is, just hasn’t properly percolated back up to decision makers yet. We also haven’t let it: part of clawing our way to a seat at the top table has been putting up this facade that “finally” letting us do “proper” Service / UX / Customer / Product / User-centered Design will surely lower risk and increase profits by making better products. That facade is hurting us by not letting us push back against requirements we cannot fulfill, and that desire for a place at the table is holding us back by stopping us from staring the business in the face and point-blank asking: “Instead of making me obsess over 40 shades of green to increase click-through rates by .1%, have you considered making a product at a price people actually want? Because if you did, I could hide that Buy button and people would still click it.”

But no. Industry does not work that way. Instead I carried around a tiny gremlin far away in my consciousness, a gremlin I could easily explain and wave away as just the way things are, or with the thought that by the time the client noticed I had not increased conversion by 10% or whatever, we would be further along the path, doing other things. But it still gnawed at me; only now that it is gone do I notice how much.

My job now has problems of how to display the contents of a book online that has, literally, 20.000 chapters, and how to let users search in reference media without having to ship the whole catalog over in hidden javascript–meaty questions of weighing user needs vs technological capability, and how to best communicate the trade-offs and effects. It’s fun work, it is hard work, it is serious work, but best of all, when I say “I’ll see what we can do, but no promises–this is kind of weird and unprecedented and we’re going to have to make it up for a while”, everybody understands, and nobody waves a contract at me where they go “But we promised the client that…!”
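To make the reference-search question concrete: the trade-off is between shipping a 20.000-chapter catalog to the browser and keeping it server-side, answering each query with only a small page of matches. A minimal TypeScript sketch of the second shape, with invented names (`Entry`, `searchCatalog`) that stand in for whatever the real system uses:

```typescript
// Hedged sketch, not the actual platform: the catalog never leaves the
// server; the client receives only a limited slice of matches.
interface Entry {
  id: string;
  title: string;
}

function searchCatalog(catalog: Entry[], query: string, limit = 10): Entry[] {
  const q = query.toLowerCase();
  // A real service would use a proper search index; a linear scan is
  // enough here to show the shape of the contract.
  return catalog
    .filter((e) => e.title.toLowerCase().includes(q))
    .slice(0, limit);
}

// Illustrative data. Only `results` would ever be serialized to the client.
const catalog: Entry[] = [
  { id: "ch-0001", title: "Introduction to the reference work" },
  { id: "ch-0002", title: "Acid-base chemistry" },
  { id: "ch-0003", title: "Introductory statistics" },
];
const results = searchCatalog(catalog, "intro", 10);
```

The design point is the `limit` on the wire, not the matching logic: however the search is implemented, the client-side payload stays proportional to the result page, not to the corpus.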

‘Conversion’ is not the work I signed up for twenty years ago when I wanted to make computing easier for my mother. This is.

Content Strategy: unless you are making something small, you need it

The first time I actually worked on something we could term User Experience, I was changing the default positions of my icons on a UNIX workstation running some version of SunOS (in other words, it was a long effing time ago) and part of that was designing some icons in black and white, pixel by pixel, for programs that did not have icons. These days I make big things for many people, and I wouldn’t release anything on an app or website unless it had been made by a trained, experienced, graphics professional, which I am not.
I used to be able to code a website front to back too; I stopped doing that as part of all my other tasks in 2006, and by now it’s so complicated you can’t just do it as one side task among many. Thinking I have the knowledge to do everything myself on an app or website to high quality is just so arrogant it is stupid.

The same goes for content. That might seem like a no-brainer, but you can tell who still doesn’t have these brains: teams who design with Lorem Ipsum, or teams on a project with more than ten pages who think all they need for their content is a copywriter.

I was there myself a few years ago inside a big agency, thinking that content is just something you quickly commission and write. I should already have been over that after a previous project for a car brand fell apart: they wanted a whole new website, the sole copywriter they hired wasn’t fast enough and didn’t really know what to do, so I was told to just use last year’s copy. The result was a mess, and yet I had not learned my lesson. A huge strategic gap in the creation of a rich, vast website with existing content was staring me in the face and I simply did not see it.

At first I did not know how to deal with Content Strategists, but now I am glad the agency had the foresight to sell their services to the client who wanted a re-platform and a re-vamp that reflected the aspirations of their high-end brand, whether it was just to raise the billable hours or not. By working with Content Strategists, a whole weight fell off my shoulders as a UX Director and I could focus my team better. The whole question of what the hell are we going to do for educational content about the brand and its USP and its products, what can we re-use from the existing site, what is even there in their twenty-thousand pages, and how do we get our brand’s point across coherently, was now being handled by a group of people whose main focus was indeed making coherent publications.

They had tools to find out what was already there, and the patience to inventory and tally what content users obviously already liked, or needed. The Content Strategists created criteria for what should be considered success, and failure, in an article, based on their knowledge as editors of how people absorb corpora of information. I could focus on helping users find what they needed to do; they focused on what users needed to know to get it done, and then get good at it.
It is an absolutely indispensable discipline when you either know that the user has deep educational needs in the service you provide, or the client already has a vast collection of content that is usually stale and no longer to the point. Just being able to walk up to them with some existing content around a form and ask “Does this live on? Do we re-use it? Is it any good?”, and getting an answer based on usefulness, current use, current satisfaction, and adherence to tone of voice, sped up my designers to no end, as we never had to come up with our own placeholders or use lorem ipsum, which the agency, correctly, had banned anyway.

If you are going to be content-driven, having people who do content research at the beginning of the project, based on the user needs UX and Research have worked out, is mandatory–and these days, being content-driven is itself mandatory for a site.

I am currently consulting for a large consumer-finance brand, so established and large it is basically an institution. They have their (semi-)celebrity blog on consumer hints and myths, their social content, random pages for SEO value, and more articles that we know users are searching for and desperately need for education. None of it is linked from the current website where they have their account. What they do have in educational content inside the service part of the website is about 60 questions, so-called frequently asked. I begged for a content strategist, trying to make clear that just having a copywriter for gaps isn’t enough. I am now working with a Content Strategy team in what I consider a model cooperation.

  • We worked together to agree on a view of who our users are (very tricky in this specific case, because nominally we are designing for everybody here, so we had to agree on levels of understanding, language, and progression. The usual case is working from your personas).
  • The Content Strategists made an inventory of everything already there, and held it up to brand values, tone of voice, and user needs as we understood them.
  • My Product / UX team focused on the clicks and buttons and page journeys for the core service the brand provides.
  • We looked together at these service pages and noted the educational needs on each page. The Content Strategists either suggested a link, or extracts or videos from existing content, or we identified a gap and they started writing briefs and getting the necessary content made ASAP.
  • They showed me their plan for all the articles and frequent questions and video and info-graphics: how they were grouped according to how they saw users currently search, what educational needs they satisfied, what was already available.
  • My Product / UX team worked with them to re-design a library section of our website that could contain or point to all this content according to our joint user research, from hub pages to article templates.
  • My Product Managers are working with the third party that will host all this content, based on my UX templates. We needed a third party because this corpus needs to be easily seen by our phone support people, and to integrate with their phone support tools in a piece of service design. Usually the content goes in the same CMS the website is being made with.
  • I don’t have to worry about porting the content, filling in the gaps, or making sure it all looks good. The Content team is on it, commissioning writers, illustrators, photography, and videographers. They write and execute the plan for governance and approval, maintenance, and adaptation of the content in the future. They got this. I don’t have to.

This all creates such an increase in speed and especially quality that they are totally worth the investment. Furthermore, their expertise in re-use and adaptation is saving money where my UX team would have been unable to accurately identify the gaps or articulate to content creators how they should be filled.

(The best part is watching them go to a client with a presentation like “You have 20.000 pages of SEO garbage. Yes, we counted. Your users hit about 600 of those, if that. You can save a ton of money by getting rid of those 19.400 pages, and guess what, you also won’t look like cheap hucksters in a list of Google results anymore.” The look on a client’s face when they realize how much they have wasted on old-school SEO is priceless. Make sure your Content Strategist at that meeting is both authoritative and really soothing.)

What has made these collaborations so good is the knowledge gained from specialization; most of the Content Strategists I have worked with come from the worlds of journalism and publishing, where having a single voice over many kinds of content, focusing on what people want or need to know, and understanding where and who your readers really are and how they absorb information, is a basic, ingrained task. It’s the work they have always lived in; they now live it in a new medium.

I raise some hackles left and right these days when I proclaim I consider them part of User Experience (or Customer Experience, or Service Design, or whatever we are calling ourselves these days) as much as the visual graphics team or the user research team: many UX people do not understand their value yet, and Content Strategists themselves are often still so isolated that this viewpoint is new to them. But I won’t work on one of the large websites I usually get asked for without them. I do not have the knowledge of what they are so good at, and I definitely do not have the time.


Agile and User Experience: The Real Problem Is Handing Over

In the last two posts I described two of the ways that I have seen Agile development and UX mix: Sprint Ahead, and a Two Streams model. Two Streams is arguably a form of Sprint Ahead, but one in which the design and development sprints are not locked to each other’s time periods. However you structure User Experience in the development process, though, it does have to come first; it has to help define what development is going to make. Even if you have a separate research department, even if the UXers are completely embedded and blended with developers, UX interprets research to guide what will be made, developed, and visually designed.

That creates the problem of how to transition what UX specifies to development. It is in the handover that the friction appears. What can the development team handle in the time allotted? And what happens when a hypothesis gets disproven after a sprint?

Handing over: Simple / Better / Delightful

It can be sketches and conversations. Annotated comps. Discussions sitting together. Prototypes. But somehow the concept has to go from UX to developer, even when the work is done in the same sprint by a blended team. In a recent experience, the UX work I did would feed into development cycles, but because of ever-changing circumstances, decisions about what could be put in production could change from sprint to sprint, even while the finished designs were all ready and waiting.

The team soon settled on a handover structure that dealt with this: I would design a Simple / Better / Delightful solution, and introduce them with a thorough background from testing, research, and stakeholder input.

  • The Simple alternative would be the minimum to reach the product goal, usually a very concentrated intervention on the existing product.
  • Better would be close to state-of-the-art, perhaps also requiring changes to other areas to create a smoother flow with this new feature.
  • Delightful would be all about creating a unified flow with nice micro-interactions; it would involve significant investment in smarter interactions and perhaps dropping previous product decisions.

While this sounds like designing everything in triplicate, it actually usually meant that I would start with Better, tone it down for Simple, and scale it up with what could almost be called flights of fancy for Delightful. Because of the way Agile design decomposes the work into small issues, this was all very manageable–I wasn’t designing a complete flow all the time, but features in context.

By exploring the solution space this deeply, come decision-making time the decision-maker, such as the UX designer or Product Manager, can be very flexible and go with only what they have the resources to put into production. They can show the stakeholders the variations together with their cost, and get buy-in for informed decisions.

And Then Everything Changes

As described previously in Sprint Ahead, for some substantial pieces of work I was lucky enough to get budget to do real concept research before the actual design sprints, allowing the design team to test some serious assumptions about what users wanted. This can be seen as expensive, but for major e-commerce it is still cheaper than producing a bad flow that users can’t understand. You do want to avoid getting to a place where you have designed the whole product as a testable prototype before development actually starts: that’s simply about not wasting money. Focus on testing only assumptions, pieces of the flow you have real doubts about. My personal heuristic on top of that is to test to disprove, and to do so by testing wild concepts: conversations with users about inappropriate or magical interfaces have often guided me to the heart of their needs.

Currently, in our modern Agile world, the Lean methodology is helping speed up product delivery and cut costs by doing almost all testing and research during development: little piece-meal cycles of adding and subtracting features. One thing I’ve never read is how you go from MVP to some sort of MVP+ to maybe a full-featured delightful product: the shops I see trying to get to MVP with whatever permutation of Lean they call Lean basically just keep going with the same process to get to delightful, and I am really not sure that is appropriate; it doesn’t feel like you ever really pay off the design debt incurred from only ever adding little pieces together.

However, if you do insist, you will get to the point where you prove or disprove some major assumption about how your product is going to work. A recent case was making a product for which we had a tough time getting to the right specific users until we had something concrete to show, and we had to work mostly with proxies: people who knew the space and possible users really well. Any meeting could (and often did!) yield completely new fundamental insights about the user’s conceptual model. While we were building. This meant not only incurring a huge design debt, but having to pay it on the spot.

Design is unable to keep the development pipeline full, or keep it from outright mutiny, if you have to scrap everything every 3 weeks, no matter how you have structured the cooperation with sprint ahead or two streams or blending all together. It is too wasteful, and angering. Once I realized the team could end up in this situation, my mitigation (I first typed “solution”, and it is so not) was to defer fundamental design decisions as far as possible, or opt for the most flexible solutions at all times even if they were a little more complex than what seemed self-evident, using my experience to intuit which decisions could be fundamental.

An example was in the Information Architecture: at first we thought we would have four major areas of activity, so the suggestion arose to organize them as tabs. I held off strongly, because tabs really do not scale well on a screen if areas are added, and I could not yet be sure that users would never need to see information from two areas at the same time. I instead opted for collapsible pods that could be stacked vertically. A few months later we were up to 6 areas of activity, with a seventh on the horizon, and with significant differences in the amount of activity in each area. Tabs would indeed have been inappropriate.
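The structural difference in that decision can be sketched in a few lines. This is my own illustration, not the actual product code: pod names and the `renderPods` helper are invented, and I am leaning on the standard HTML `<details>`/`<summary>` disclosure elements for the collapsing behavior.

```typescript
// Hedged sketch: activity areas as vertically stacked collapsible pods.
// Adding a seventh area just appends one more row, where a horizontal
// tab bar would overflow or truncate.
interface Pod {
  title: string;
  open: boolean;
  body: string;
}

function renderPods(pods: Pod[]): string {
  return pods
    .map(
      (p) =>
        `<details${p.open ? " open" : ""}>` +
        `<summary>${p.title}</summary><div>${p.body}</div></details>`
    )
    .join("\n");
}

// Illustrative pods only; the real product had six areas and counting.
const html = renderPods([
  { title: "Orders", open: true, body: "…" },
  { title: "Messages", open: false, body: "…" },
]);
```

The point of the sketch is that the layout is a function of a list of arbitrary length, so growth in areas never forces a structural redesign, which is exactly the flexibility the tab suggestion would have given up.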

In all honesty, the way I managed this designing for possible futures was by mentally, and in sketches, constantly preparing what v2.0 and v3.0 of this product might look like based on what I could extrapolate from what I knew when making v1.0. Also, every time I added something, I had to ask myself, what if this is not true later on, and have an alternative ready. I have to say that working initially on this product as a web app in a responsive design was very helpful, since responsive design done mobile first has to be so constrained in what constructs are used anyway.

In the end the process felt less like putting features together into coherent flows and more like sculpting: chiseling hazy possible digital futures down to a product in current reality. There were days it felt like a high-wire act, and I had to do emotional management to make sure I saw every new requirement as a fun challenge instead of a blow to the work already done. I do regret I was only around to take the product to a Beta; I really wanted to see where the first contact with reality, the first real tests, would take it.

Agile and User Experience: Two Streams

In the first article I described combining User Experience design with Agile timelines by having the design team be one sprint ahead for a specific project. But at another engagement, the development team had a steady weekly sprint rhythm, and plenty of backlog for an already-live product that needed new features and continuous improvement. Slotting design and prototyping efforts into the weekly sprints made no sense, as this pre-development work was very separate from the development team, and would not impact weekly releases. Doing user research, doing user testing, looking at comparators and competitors, trying alternatives, making a prototype, deciding on tests, workshopping and all, just doesn’t fit weekly schedules, nor does it need to.

Our Product Manager, who was already brilliantly straightening out the backlog and the pointing process, suggested creating two streams: one for design and one for development. Each stream would have its own Jira board, but design would run as kanban while development continued with weekly sprints. She wrote the design stories based on management and product ideas, and prioritized them along business and user priorities, with my input on structuring the work. I would pull the top one or two design stories and put them in flight.

The design stories would be worked on with whatever tools of the User-Centered Design cycle were required to finish them. Once a story had been explored, prototyped, tested, discussed, and workshopped, we would turn the results into stories on the development board, with the designs and justification attached. By that time, the product design team would have worked so closely together in the process that I did not even have to present the designs for grooming or sprint planning, as the Product Manager already knew all the nuances. This way, the design stories could take the time they needed, but I did have a responsibility to stay timely and keep the development pipeline filled.


Designing in the Browser, and Boundaries

One issue that we never fully worked out during the engagement was how to fit in Designing in the Browser. The visual designer I worked with on the design stream had also become an accomplished coder, and when we designed together he would create the designs directly in CSS and HTML, bypassing any comps stage. This meant the resulting design was guaranteed to be implementable (we had pretty much already coded it during the design phase, save for the JavaScript wiring), but it also meant that some development-stream resources had been put into action before stakeholders had seen the results. The end result was delivered faster by skipping the comps stage, but there was a certain unease that committing resources to coding had happened too early and could have ended up wasted. Unfortunately, we never got to the point where we could prove or disprove that designing in code was as cheap as whipping up a quick Sketch comp for people to approve or steer.

Next: Handing over Minimal / More / Complete

One issue that arose from mixing Agile and UX in two streams like this was the amount of fine-tuning that had to happen after a design had been explored and settled on, when handing it over to the development team and the ever-changing circumstances of their dynamic environment. Another was that, because the Agile process explores requirements as it builds the system, holding a coherent experience as a goal could create major design debt the moment a fundamental assumption of the design was challenged by the result of a sprint or an exploration with the client. I will address how we managed these in the next post.