The following is an internal summary I wrote for a team that no longer exists, summarising a number of references from UXScotland and various books and blog posts. It extends the thoughts from my Pecha Kucha talk. For more details, please refer to the links throughout and the references at the bottom. The context here is consulting and long-term B2B projects, but some of these discussions are more widely relevant.
User Experience: Project considerations
User experience is about making sure we are solving the right problems in the right way. It is the intersection of design, users, and context. Context here combines one or more of: the device in use, the user’s location, social cues such as nearby friends, and anything else available from sensors or historical information.
In many cases, the requirements we have are assumptions (e.g. what users want is Facebook integration). Where the benefits of a requirement are unclear, we should treat it as an assumption to be tested. Embrace data, and analyse it.
At the requirements stage, we need to make sure we are solving the right problem (pretotyping: “building the right *it*”), and that our chosen design helps the user to solve the problem without frustration (i.e. prototyping the design, rather than the implementation, with wireframes/sketches).
During user testing, particularly in an agile development, we can refine those ideas by seeing how well the implementation solves the problem for real users. We can also test deployed code by analysing heat maps and HTTP logs to see what users are actually doing, which informs further tests and the assumptions that feed into subsequent design cycles.
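As a sketch of the log-analysis idea: counting requests per path in a web server access log shows which functions users actually reach. This assumes the common Apache-style log format; the paths and sample lines are hypothetical.

```python
# Sketch: mine an access log for feature usage. Assumes Apache
# common/combined log format; paths and sample data are hypothetical.
import re
from collections import Counter

LOG_LINE = re.compile(r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+"')

def path_counts(lines):
    """Count requests per path, skipping lines that don't match."""
    counts = Counter()
    for line in lines:
        match = LOG_LINE.search(line)
        if match:
            counts[match.group("path")] += 1
    return counts

sample = [
    '10.0.0.1 - - [01/Jan/2024:10:00:00 +0000] "GET /cases/123 HTTP/1.1" 200 512',
    '10.0.0.2 - - [01/Jan/2024:10:00:05 +0000] "POST /cases/123/comments HTTP/1.1" 201 64',
    '10.0.0.1 - - [01/Jan/2024:10:01:00 +0000] "GET /cases/123 HTTP/1.1" 200 512',
]
print(path_counts(sample).most_common(1))  # [('/cases/123', 2)]
```

Grouping paths by feature (rather than raw URL) would give the per-function usage figures that feed back into design assumptions.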
In a Lean/Agile project, we need to be explicit about our assumptions about the user and test them at every stage of the development to ensure that we always meet user needs.
How does UX fit in our process?
A system that supports users’ needs effectively will need to treat the user as a Stakeholder in the process. Whilst the users themselves may not be directly involved in generating or reviewing design artefacts, there should be a user representative: either a super-user on the customer side, or a third-party researcher who has determined user needs and has the authority to verify any proposed solution and high-level requirements against those needs.
Personas / Typical Users
A persona is an abstraction of a system user. In a simple system, there may be only one type of user, but more sophisticated systems will typically have users and administrators, and may have multiple classes of each. A persona is defined to encapsulate the types of tasks a specific user may wish to perform, and any limitations that may be imposed (for example, administrators may be able to install specific browsers or client software, but members of the public using the system must be supported across multiple browsers at multiple screen sizes).
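One way to make personas concrete is to record them as data that journeys and tests can refer back to. This is only a sketch; the field names and example values are illustrative, not taken from any real project.

```python
# Sketch: personas as plain data. Field names and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class Persona:
    name: str                   # e.g. "Case worker", "Administrator"
    tasks: list[str]            # tasks this user type wants to perform
    constraints: list[str] = field(default_factory=list)  # imposed limits

admin = Persona(
    name="Administrator",
    tasks=["install client software", "manage user accounts"],
    constraints=["may mandate a specific browser"],
)
public_user = Persona(
    name="Member of the public",
    tasks=["submit a case", "track progress"],
    constraints=["any modern browser", "multiple screen sizes"],
)
```

Keeping the constraints alongside the tasks makes the browser/screen-size support question above an explicit, reviewable part of the persona rather than an afterthought.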
Each persona will have one or more tasks they wish to perform in the system. A User Journey describes such a task as a series of steps that a specific persona follows in order to achieve it.
Consider the tasks that a user wants to perform. See also BDD: design from the user in. For example, the user wants to process a case:
- User logs in to the system
- User selects case from their task list
- User reviews latest document
- User finds agent for case, and calls to discuss
- User adds comments to case
- User saves case and returns to their task list
This process may identify a new use case (“Display task list”), as well as specific actions that need to be defined within a use case (“Display latest document” and “Display agent contact details”).
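In the BDD spirit, the journey above can be expressed as an executable check. Everything here is a sketch against a hypothetical in-memory system; the class, method names, and data are invented for illustration.

```python
# Sketch: the "process a case" journey as an executable test against a
# hypothetical in-memory system. All names and data are illustrative.
class CaseSystem:
    def __init__(self):
        self.cases = {42: {"documents": ["report.pdf"], "agent": "Alice",
                           "comments": []}}
        self.task_list = {"bob": [42]}

    def log_in(self, user):
        return self.task_list[user]

    def latest_document(self, case_id):
        return self.cases[case_id]["documents"][-1]

    def agent_for(self, case_id):
        return self.cases[case_id]["agent"]

    def add_comment(self, case_id, text):
        self.cases[case_id]["comments"].append(text)

def test_user_processes_a_case():
    system = CaseSystem()
    tasks = system.log_in("bob")            # User logs in to the system
    case_id = tasks[0]                      # User selects case from task list
    assert system.latest_document(case_id)  # User reviews latest document
    assert system.agent_for(case_id)        # User finds agent to discuss
    system.add_comment(case_id, "Discussed with agent")  # User adds comment
    assert "Discussed with agent" in system.cases[case_id]["comments"]

test_user_processes_a_case()
```

Writing the journey this way forces each step to map onto a concrete operation, which is exactly how the missing use cases and actions get flushed out.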
The User Journeys provide the context between the Stakeholders (and User Types therein) and the Use Cases. Each User Journey will link to one or more Use Cases, but some Use Cases may not have an associated User Journey (nightly payment processing, for example).
If the solution is replacing or improving an existing system, the best source of information on the current system is its users. The requirements capture process should record both the tasks that the users perform and any areas of frustration, and the prioritisation exercise should consider these improvements alongside new functionality.
As well as testing the Use Cases for functional acceptance, the FAT/UAT process should also test that the final system supports the User Journeys defined up front.
Where projects have regular support meetings, the input of users has been valuable in identifying problem areas and possible changes. When ongoing service delivery contracts are defined, SDMs should consider whether ongoing user feedback is appropriate as part of the planning and scoping of releases within that framework.
Questions to ask
- Have the requirements been tested on users? If not, why not? (Are these the right requirements?)
- Will users be given the opportunity to provide feedback on these throughout the development? (And if so, how, when and where?)
- What user outcomes are we trying to achieve with the release? These may not be requirements that we put a cost on, but expectations that we can measure against to show improvement – we would need to communicate this appropriately.
- E.g. minimise clicks to access the 5 main functions
- E.g. reduce time-to-complete for function x, y and z by 10%
- E.g. Align existing UI with iOS and Android norms
- E.g. Increase usage of function z by 5%
- E.g. 99% WCAG AAA compliance
- Who represents users on the project team?
- How many user types do we need?
- Can normal users and administrators share a UX, or are their goals divergent? – different apps, different ASP.NET Areas, different branding, …
- What platforms and form factors need to be supported/tested?
- Does each platform need a native UX? For a native app, probably yes; for a web app, maybe.
- If mobile, do we need to adapt to context: location/orientation/communication with nearby devices/…
- If social, do we need to adapt to context: can I approve my own work?/who’s online/recommendations/who’s nearby/…
- Do we, as developers, have any input to the UI design? If not, why not?
- Have the designs been tested on users? If not, why not? (Does the UI fit user expectations?)
- Do we have UI guidelines for each target platform, and are they listed in the requirements and estimates?
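One of the outcome targets listed above, reducing time-to-complete by 10%, could be checked from before/after measurements along these lines. This is a sketch: the function name, sample timings, and target are all made up for illustration.

```python
# Sketch: verifying a measurable outcome ("reduce time-to-complete by
# 10%") from before/after samples. All numbers here are invented.
from statistics import mean

def improvement(before, after):
    """Fractional reduction in mean time-to-complete."""
    return (mean(before) - mean(after)) / mean(before)

before_release = [120, 110, 130, 125]   # seconds per completion, old UI
after_release = [100, 95, 108, 101]     # seconds per completion, new UI

target = 0.10
result = improvement(before_release, after_release)
print(f"improvement: {result:.1%}, target met: {result >= target}")
```

Publishing the measurement method alongside the target keeps the outcome honest: everyone agrees up front what “10% faster” means and how it will be computed.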
Potentially useful resources