Karen L. Ziech - Performance Technologist
Information Products & Training, Lucent Technologies
Cheryl L. Coyle, Ph.D. - Technical Manager, Human Factors
Bell Labs Research, Lucent Technologies
Human Factors (HF) has a long history at Lucent Technologies. The first Human Factors group was formed in AT&T Bell Labs in the 1940s. Between 2000 and 2003, however, as Lucent downsized, many human factors experts were decentralized, had their work redirected to other functions such as Systems Engineering or Development, or simply left the company. Those of us HF-minded employees who remained at Lucent continued to do our jobs, lamenting that our community had disappeared. One small group of HF people (three technical folks and one manager) was the only official remnant of a former empire. This group was in occasional contact with four other HF-sympathetic Lucent employees: three re-deployed HF experts and a pro-usability Performance Technologist in the customer documentation and training organization.
During the summer of 2004, the remaining small, centralized HF group hit on the idea of organizing the other “usability-minded” individuals in Lucent to promote a feeling of community. We brainstormed and contacted everyone we remembered to gauge interest in forming such a community. Its purpose was still ill-defined, other than to come together for mutual support. Within a short time, a small group was on board, we had a name, the Usability Special Interest Group (USIG), and by the time of the first meeting in August, about 20 members were on our distribution list.
Purpose of USIG
The first few USIG meetings were devoted to pulling the team together and identifying the group’s charter. Agreeing on the purpose came easily: to promote good usability in Lucent products. Defining how to achieve that goal was more difficult. Some ideas were negative in nature, such as asserting ourselves as usability “police.” Others focused on educating colleagues about the value of HF. A third category of ideas centered on awareness: at first we discussed how to make others in Lucent aware of USIG, then turned to raising awareness that usability itself is important. Finally, one member suggested we sponsor an award. Instead of focusing on what projects were doing wrong with regard to usability, we would reward a project for doing things right. An idea was born!
A Usability Award
We met regularly through the fall of 2004 to determine the award details and process:
- What kind of high-level judging criteria should we use?
- What kinds of projects/products/applications/systems, etc. would qualify?
- What did we want to reward – good usability methods or a usable interface?
- How should we advertise the award?
- How would we evaluate entries?
The decision to focus the usability award on user-centered process took shape slowly, after much discussion about how to determine whether nominated products were really usable. Heuristic evaluations and usability reviews had to be discarded because not all USIG members had experience with these methods. And, because we are all volunteers with demanding “day jobs,” time was an issue. In the end, judging the process seemed most logical, especially because we did not want to bestow a Usability Award on a team that had built a usable product through sheer luck.
In the early stages of deciding how the award process would go, we agreed that we needed a sponsor. We wanted an executive-level champion to support the process and preside over the award ceremony. Since some USIG members work in Bell Labs, the original plan was to announce the award as a “Bell Labs Usability Award.” When we pursued this, however, we learned that we would need to clearly articulate our award process and guidelines. Since we hadn’t reached that point yet, we decided not to pursue the Bell Labs endorsement. Instead, we decided to call the competition what it was: the 2005 USIG Usability Award. We contacted the vice president of the small HF group and asked him to serve as champion and sponsor of the award. After asking some questions, he agreed.
Calling for Nominations
Once sponsorship had been secured, we faced the topic of calling for nominations. We soon defined two issues to resolve:
- In what Lucent publications could we publicize the Usability Award and call for nominations; and
- What criteria would we use to give directions to the submitters?
The first seemed the easier question to answer. We assumed that we would publish information in the internal, daily Lucent Technologies newsletter, LT Today. We also thought that the Bell Labs News would be a likely vehicle for getting out the word. One USIG member took responsibility for contacting the editors of these publications and the others turned their attention to categories and criteria.
The discussion about categories was quickly concluded: we would accept nominations in five categories:
- Web-based product graphical user interfaces (GUIs)
- Non-web product GUIs
- Non-PC products – e.g., telephone or handheld products
- Software design, architecture, or procedures (e.g., maintenance, provisioning, installation)
- Documentation and training to support Lucent products
But when we faced the topic of judging criteria, we immediately ran into difficulty. Because our members come from a variety of disciplines, several from outside traditional human factors, we each had different ideas about which aspects of usability to focus on. We debated this issue in full USIG and Usability Award team meetings and in email for over a month in the fall of 2004. All USIG members were called upon to submit suggestions.
The ensuing email threads revolved around two representative recommendations. The first suggested that award entries cover four high-level topics: a description of product or interface usability; usability goals and methodologies; standards; and metrics. At the other end of the spectrum, the second asked entrants to complete a more detailed, three-part call for nominations, which included 11 software heuristics, criteria for hardware products, and evidence of a user-centered design process.
During the next Award Team meeting, we realized that we were making the call for nominations harder than it needed to be. When we looked at the call from the typical entrant’s perspective, it was clear that we needed to focus on simple criteria. Since the award goal was to promote awareness of usability, we had to assume an audience less informed than we were, one for whom the higher-level category descriptions might not even resonate. With this in mind, we agreed on three straightforward questions, which one of the members rewrote as the criteria for nominations:
- What makes it a usable product? Identify the main advantages of your product for its end users.
- What did you do to make it usable? Tell us about any user involvement, user testing etc.
- Send screenshots or pictures of the main usability features of your product.
In the meantime, efforts to have the call for nominations published in LT Today, the Lucent-wide daily newsletter, had fallen through. We had, however, secured articles in two other internal publications. In late January, the Bell Labs News ran an article, “Calling all User Interface Designers,” while the Software Watch featured “Lucent Usability Special Interest Group announces new award,” which opened with the question “What is usability?” After about three weeks without a single submission, we met to discuss why our message wasn’t being heard.
The Award Team re-read the articles and concluded that neither offered a strong-enough WIIFM (“what’s in it for me?”) to pique reader interest. “Who are the people in Lucent who design products and user interfaces, and where do they live? What motivates them?” we asked. Once the questions were framed that way, the answer became clear. The people who design products and interfaces don’t call themselves User Interface Designers; they are systems engineers, architects, and software developers, and they “live” in the product organizations. We drafted an email call for nominations and sent it to everyone in the product management and development units.
The email went out on March 21, 2005, under the subject line Call for Award Nominations in Lucent Product Usability. Instead of asking for the definition of usability or for user interface designers to step forward, a USIG member rewrote the call with the needs of our target audience in mind. To justify taking time from busy schedules, the message spoke to those who understood how much benefit a usability award could buy in the market. Based on contributions from Usability Award team members, it opened with a clever attention-getter:
"Things should be made as simple as possible, but not any simpler." -- Albert Einstein
By April 4 we had 10 nominations; by April 27 we were up to 20 entries; by the April 30 deadline, we had 39. We were delighted and, again, because we’re a volunteer group, a little panicked at the thought of evaluating them all.
The Judging Process, Round One
In the five weeks while the nominations came in, we started organizing to judge the entries. We asked for volunteers from the USIG membership and began discussing who could judge. Since some USIG members had submitted nominations or had participated on teams that entered the competition, we debated who could serve as objective, fair judges. As nominations continued to roll in and we saw how much time it would take to reduce the entries to a manageable number, we decided that any member who volunteered could participate in the first round. Twelve people volunteered.
To determine how to judge, we developed a small trial in which volunteer judges assessed 3 randomly selected nominations and, based on no predefined criteria, wrote a short statement about why each entry was placed in one of the following 3 buckets:
- Definitely discard submission
- Maybe keep submission
- Definitely keep submission
The idea was to see how the judges defined usability and to develop the judging criteria based on the results of the exercise. Seven judges volunteered for the trial. Because we all come from different disciplines or focus on different aspects of user-centered design work, we were pleasantly surprised by the consistency of the bucketing and decision-making comments.
From the judges’ statements about how they’d made their choices, a list of judging questions was proposed and agreed upon, and a judging process defined: the 39 entries, including those judged in the trial round, would be divided among the 12 judges so that three different judges read each entry. In an additional effort to be as objective as possible, we made assignments so that no two judges shared more than two entries. In this first round, each judge again bucketed their nominations, this time using the questions that constituted our criteria.
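Balancing those constraints by hand (three judges per entry, roughly even workloads, and no pair of judges sharing more than two entries) is fiddly. As an illustrative sketch only, not the method the team actually used, a small script can search for such an assignment greedily, restarting when it paints itself into a corner. All numbers and names here simply mirror the figures in the text.

```python
import random
from itertools import combinations
from collections import defaultdict

def assign_entries(n_entries=39, n_judges=12, panel_size=3,
                   max_shared=2, seed=1, max_tries=5000):
    """Build judging panels so each entry gets `panel_size` judges,
    loads stay balanced, and no pair of judges shares more than
    `max_shared` entries.  Restarts with a new random order on dead ends."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        load = [0] * n_judges                 # entries per judge
        shared = defaultdict(int)             # (j1, j2) -> entries shared
        assignment = []
        ok = True
        for _ in range(n_entries):
            # Consider the least-loaded judges first, random tiebreak.
            order = sorted(range(n_judges),
                           key=lambda j: (load[j], rng.random()))
            panel = []
            for j in order:
                if all(shared[tuple(sorted((j, k)))] < max_shared
                       for k in panel):
                    panel.append(j)
                    if len(panel) == panel_size:
                        break
            if len(panel) < panel_size:       # dead end: restart
                ok = False
                break
            for j in panel:
                load[j] += 1
            for pair in combinations(sorted(panel), 2):
                shared[pair] += 1
            assignment.append(panel)
        if ok:
            return assignment
    raise RuntimeError("no valid assignment found")
```

With 39 entries and 12 judges, each judge reads 9 or 10 entries, and the pair budget (66 judge pairs times 2 shared entries) only just covers the 117 pair incidences the panels create, which is why a randomized restart is needed rather than a single greedy pass.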
Two categories of questions required judges to consider the nomination itself and the described usability of the product. The first category covered how the entry was written—did it sufficiently answer the three questions from the call and did the nomination have a user-centered tone?
"The submission MUST:
- Answer all the submission questions in sufficient detail to allow for high-level analysis of the other criteria being used for assessment.
- Have a user-centered tone and show the value of the product or described changes to the users (e.g., does not emphasize cost cutting or new functionality; emphasizes hiding complexity from the user and providing validation and status; talks about ways users’ abilities and needs were considered, such as tool tips, validation, status, graphical representations close to the physical equipment, organization of tasks into logical steps, transfer of training, etc.)
- Present supporting evidence (e.g., screen shots) that is user centered (emphasizes benefits to user not cost cutting or new functionality)”
The second category asked for usability evidence on a minimum of two criteria from a list of 8.
“Two or more of the following criteria must be met; evidence could be written or part of supporting materials. The submission should:
- Show evidence of user involvement (e.g., gathering feedback, focus groups)
- Show evidence of user testing (e.g., user walkthroughs, usability testing)
- Show evidence of usability methods other than user testing (e.g., expert reviews, heuristic evaluations, surveys)
- Show evidence of using standards (e.g., UI standards, telecom standards)
- Show evidence of usability goals and metrics to reach goals (e.g., reduce time on task as evidenced by usability testing)
- Show evidence of considering accessibility
- Show evidence of considering internationalization/localization
- Show evidence of providing user assistance (e.g., context sensitive help, tool tips, on-line help, windows help, tutorials, training classes, etc.)”
When this round was complete, 11 entries had made what we called the semi-finals.
The Judging Process, Semi-Finals
Once again, we debated the judging process: which USIG members could judge in this round; how to be more objective; whether to use Likert-based ratings; and what criteria we’d use to move from the 11 semi-finalists to a smaller set of finalists. We had decided to call for presentations for the final award or awards, as we had also agreed to the possibility of more than one winner. With some experience behind us, we completed this exercise fairly quickly.
In this round no USIG members who’d been involved with nominations could serve as judges. This decision reduced the number of judges to 8. All 8 judges reviewed each of the 11 submissions. Three categories and 9 criteria questions, based on the original call questions, were established.
- What makes your product usable (4 questions)?
- What did your team do to make it usable (4 questions)?
- Overall impression (1 question)
The 5-point Likert scale for each of the 9 questions follows:
1= no coverage of the topic
2= mention but no specifics
3= reports with specifics
4= several specific reports
5= seems to cover all relevant aspects
We gave ourselves three weeks to complete this exercise. A tally of the results provided the numbers, and one of our members performed statistical assessments on the data; both were shared with comments in email. We were prepared to debate the validity and implications of the findings before determining the finalists, but when we looked at two graphs of the data (see images below), we decided to go on what we could see. Since there were no large breaks in the data to guide us, we used the graphs and our experience with statistical bell curves to identify the top third of the submissions as a manageable amount for further consideration. With a month until our award announcement deadline and half-hour interviews to schedule for each finalist, we decided, within about 15 minutes, to take the top 5 entries into the final round.
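The tallying step described above amounts to averaging each entry’s Likert ratings across judges and ranking the results. A minimal sketch of that calculation follows; the entry names and ratings are hypothetical, invented only to illustrate the arithmetic, and each rating stands in for a judge’s mean answer across the nine 1–5 questions.

```python
from statistics import mean

def rank_entries(scores, top_n=5):
    """scores: {entry_name: [per-judge ratings on a 1-5 scale]}.
    Returns the top_n entry names by average rating, highest first."""
    averaged = {entry: mean(ratings) for entry, ratings in scores.items()}
    return sorted(averaged, key=averaged.get, reverse=True)[:top_n]

# Hypothetical ratings for illustration only.
scores = {
    "Entry A": [4.2, 3.9, 4.4], "Entry B": [2.1, 2.8, 2.5],
    "Entry C": [3.7, 3.5, 3.9], "Entry D": [4.8, 4.6, 4.7],
    "Entry E": [3.0, 3.2, 2.9], "Entry F": [4.0, 4.1, 3.8],
}
print(rank_entries(scores, top_n=3))  # → ['Entry D', 'Entry A', 'Entry F']
```

A simple mean is only one reasonable choice; a team worried about harsh or lenient judges might instead normalize each judge’s ratings before averaging.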
Five half-hour presentation interviews were held in mid-September, with at least 6 of the 8 judges attending each. The awards were announced at the end of September, in time for award winners to include the information in their performance review materials. The description of the presentation interviews, the final voting session, the awards ceremony, and our “lessons learned” follows in part two of this case study, to be available soon.
The process of defining the USIG Usability Award, securing sponsorship, calling for nominations, judging, and making the awards has taken the group nearly a year to complete. In that time, USIG members have bonded and grown in their knowledge of how to talk about usability. As a result of the award, we accomplished our objective of raising awareness of usability, gained new members for USIG, and developed ideas about how to promote usability in Lucent’s products.