[Reprinted from my old blog, dated October 2005]
One of the major conclusions of my work in content management theory is that systems designed to enforce decisions will fail. Consider a workflow system that forces users through the same process, regardless of circumstances. Invariably, situations arise where the basic workflow doesn’t hold — authors are absent from work, editors don’t have the right expertise to review a document, the content demands a new structure. Yet the majority of content management systems — indeed, most business systems — prevent humans from accommodating these scenarios because they are confined by the rules imposed by the computer.
I realized that computers should stick with what they’re good at — serving up information — and humans should be allowed to do what they’re good at — making decisions about day-to-day situations.
- Focus the system on providing information to users to support the decisions they need to make.
- Trust users to make good decisions if given the right information.
- Avoid enforcing business rules, which may change with each new situation.
- Evaluate requirements based on the extent to which they support decision-making.
Coincidentally, I have begun work on a new system that is ripe for applying this philosophy. This project has a couple things going for it:
- The tech lead and I met informally a few weeks ago, and it was clear that he and I see eye-to-eye on this approach. He lamented the existing system as inflexible, indicating that the users have been extremely frustrated that it’s not designed to match their thought process. They’ve had to develop some work-arounds to accommodate situations that are not as extraordinary as they first thought.
- The system is meant to support a process with more-or-less clear inputs (submissions from customers), outputs (approval or denial of submissions), and milestones (specific dates are determined at the outset). These clear-cut bookends make it easy to imagine the process in this way.
- The system has pretty clear roles: a lead (the person who makes the decision about the customer submission) and a reviewer (the person who does the initial review of the submission and makes a recommendation). Because of these clear roles, we could structure our conversations around the different people making different decisions.
(The purpose of the project is to support a process for reviewing submissions from this organization’s customers and providing some kind of response.)
We’re still in the early stages of the project, but so far it seems to be working well. The project manager, tech lead and analysts have been enthusiastic to play with these new techniques, for which I’m most grateful. The design team seems to know that I won’t let them down, and they’ve been positive about my lines of questioning, so I seem to be getting at information that will help inform the design. Here’s what we’ve done so far:
Instead of compiling all our observations into a list of requirements, we brainstormed a list of decisions our users have to make during the course of the process. These decisions were expressed in the form of questions like, “How many reviewers will we need?” Though the answer varies with each situation, they pretty much always have to answer this question.
As the brainstorming progressed, we sorted the decisions in more or less chronological order and categorized them in different events. For each event, we indicated who the key decision-maker was and the scope of the decisions — what the decisions applied to, whether that be the entire project, one of the customer submissions, or one element from one of the customer submissions. Most importantly, we began listing the inputs — the information that the users need to make the decisions.
The outcome of this process was a table that listed every decision our users have to make during this process. I also captured some of our outstanding questions about the process, so we could clarify with users at a later date. Some observations from this exercise:
- Different people answer the same questions differently. The best example is the assignment of reviewers to customer submissions. Some leads do this arbitrarily while others apply a thoughtful approach. It told us that we couldn’t use the system to enforce any particular method.
- Some decisions are repeated. Since our users are looking at a collection of items, they need to ask the same questions about each item. Sometimes they can make these decisions in bulk — about a number of items at once — and in other instances, they can only make them one at a time.
- Some people have to answer the same question based on different inputs. Leads and reviewers all have to decide how they’re going to prioritize the customer submissions, but they use different criteria to make this decision.
Most of the data for this exercise came from the team’s existing knowledge. Obviously, first-hand user research is preferable, but at this point infeasible.
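To make the shape of the decision table concrete, here is a minimal sketch of one row as a data structure. The field names and example values are my own invention, not the team’s actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one row in the decision table described above.
# Field names and example values are assumptions for illustration.
@dataclass
class Decision:
    question: str          # the decision, phrased as a question
    event: str             # the chronological event it belongs to
    decision_maker: str    # e.g. "lead" or "reviewer"
    scope: str             # "project", "submission", or "submission element"
    inputs: list = field(default_factory=list)   # information needed to decide
    open_questions: list = field(default_factory=list)  # to clarify with users later

decisions = [
    Decision(
        question="How many reviewers will we need?",
        event="project setup",
        decision_maker="lead",
        scope="project",
        inputs=["number of submissions", "reviewer availability"],
    ),
]
```

The point of the structure is that the inputs travel with the decision from the start, so the later design work never has to reverse-engineer why a piece of information belongs on a screen.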
Sorting Decisions into Screens
Admittedly, after we came up with this list, I wasn’t sure how I would use it. Everyone got into the process of creating the list, but I hadn’t thought much further ahead. I was used to personas — a flurry of activity yielding deliverables that sat on the shelf.
After a requirements meeting with the client, we held a post-mortem and the project manager indicated that we were starting to come up on the design deadline. I suggested a brainstorming meeting, but wasn’t entirely sure how I would facilitate it. At first I thought about an exercise I saw Marc Rettig demonstrate where he’d taken all the system features, put them on stickies, and organized them into related groups. It then occurred to me that I could do the same thing with our list of decisions.
I printed each decision out on a card and we did an internal card sort. There were a couple principles guiding the card sort:
- Two cards went together if they were related decisions — decisions about the same kind of thing or made virtually simultaneously. For example, two decisions considered related were, “Do we have enough people assigned to the project?” and “What method will we use to assign people to customer submissions?”
- Two cards went together if they were decisions that could be supported by the same information.
During the process, we elaborated on the kinds of information the users would need in order to make the decisions. We also realized that although we’d separated some decisions by events in the listing exercise (above), they really went together. In other instances, we divided a single event into multiple groups of decisions — one event ended up as three groups of decisions.
To facilitate the activity, we imagined each group as a “screen”, such that a user should be able to make all the decisions in the pile without having to click away. In other words, one screen in the application should be able to present all the information required to make all the decisions in the pile of cards. This is how I set up the exercise for the team, and it made sense because we were trying to group like decisions. In the back of our minds we knew that this initial take might be totally off base, but it did help move the discussion along.
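The second sorting principle — two cards go together if the same information supports both — can be sketched as a simple grouping rule. The decisions and their inputs below are invented for illustration; the real sort was done by hand with cards, not by code:

```python
# A minimal sketch of the grouping rule: two decisions share a "screen"
# if the information needed to make them overlaps. All data is invented.
decision_inputs = {
    "Do we have enough people assigned to the project?":
        {"reviewer availability", "submission count"},
    "What method will we use to assign people to customer submissions?":
        {"reviewer availability", "reviewer expertise"},
    "How should we prioritize the submissions?":
        {"submission deadlines"},
}

screens = []  # each screen collects decisions whose inputs overlap
for decision, inputs in decision_inputs.items():
    for screen in screens:
        if inputs & screen["inputs"]:        # shared information -> same screen
            screen["decisions"].append(decision)
            screen["inputs"] |= inputs       # the screen now carries both sets
            break
    else:
        screens.append({"decisions": [decision], "inputs": set(inputs)})
```

With this toy data, the first two decisions land on one screen (they share “reviewer availability”) and the prioritization decision gets a screen of its own — which mirrors what happened physically with the cards.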
The feedback on this exercise was very positive. It allowed us to envision the user experience without getting caught up in screen design — even at a basic level. I didn’t stand at a white board and sketch rectangles. It was refreshing.
Even my wireframes look different. I’ve only just started, but for the first time my IA decisions have real traceability. In addition to functional annotations, the wireframes include the decisions that the screen needs to support. I’m taking a crack at ranking the decisions based on our card-sorting conversation. Each wireframe, therefore, includes a list of decisions that the user should be able to make when looking at the screen and which of those decisions is most crucial.
The wireframe itself is shaping up like my page description diagrams — a list of content in priority order. By including the list of decisions, I’m showing my work: the most important information on the screen corresponds with the most important decisions users have to make.
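The annotation scheme just described amounts to a ranked list attached to each screen. A hypothetical sketch, with invented decisions and ranks:

```python
# Hypothetical sketch: a wireframe annotated with the decisions it must
# support, ranked by importance (rank 1 = most crucial). Examples invented.
screen_decisions = [
    (1, "How should we prioritize the submissions?"),
    (3, "Do we have enough people assigned to the project?"),
    (2, "Which reviewer gets this submission?"),
]

# Priority-ordered annotation, mirroring a page description diagram:
# the most important content on the screen should correspond to the
# most important decision.
annotation = [decision for _, decision in sorted(screen_decisions)]
```

The traceability claim then becomes checkable: walk the wireframe top to bottom and confirm the content order matches the decision order.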
It’s hard to say at this point whether this approach is worthwhile, and the success of the wireframes will be a big part of that. The challenge for me will be to get out of the high level and focus on the details. (Though a decision-based approach does force this by putting crucial information front-and-center in the design process.) There are a few other things that we’ll need to work out:
- Interaction models: At this point, we haven’t thought about the transaction side of the application. In a sense, this approach is more bottom-up: we’re looking at the screens we need to design to support various decisions, but not how those screens should relate to each other. At this point, it’s conceived as a linear process, but clearly it’s more complex than that.
- Same screen, different users: The way these requirements shake out, it’s very focused on individual user types. Clearly, the same screen may be used by different users, even though they’re making different kinds of decisions. I’m not sure how I’ll resolve this.
- Validation: The current situation at the client makes genuine user research difficult, so I’m not entirely sure how we’ll validate the design. Like I said, the source of this information has been the team itself, which has done extensive requirements gathering. This isn’t a case of excluding the users because our team members are decent surrogates, but it is still a bit of a vacuum.
- Extension of method: Will this method be useful for other projects? Only time will tell.
Despite these reservations, I’m really pleased with how this approach is working. For years I’ve wondered how to adapt the requirements process to the needs of an information architect. Conventional requirements (“The system shall…”) have always felt inadequate to capture user needs for our purposes. Personas are too high-level, too difficult to apply directly to requirements. Use cases are too time-intensive. For the first time, I feel like I have a useful mechanism for capturing IA requirements.