User-Centered Web Design Process

One of the first conceptual models for managing software development projects was the software development life cycle (SDLC). It is a general model designed to impose structure on the way any type of software is produced, and it defines the phases of software development as Planning, Analysis, Design, Development, Testing, Delivery, and Maintenance. The idea is that by following these phases, the process of software development becomes more structured, more manageable, and less prone to failure.

Various implementations of the concept have since been developed and most system development processes or models borrow in some way from it.

One of the significant failings of the SDLC concept is its lack of adequate attention to the users of the system being developed. Once the requirements for a system have been gathered from them, they are typically not involved again until the end of the project, at delivery. Their feedback and characteristics are not attended to throughout the stages of development and used to revise the system’s design. This is the problem that subsequent processes like the UCD process are designed to address. The UCD process combines elements of sequential and iterative techniques, and is strongly oriented toward users and their needs. For example, it is based on the principle that a system should be made to work the way users want it to work, rather than the other way around. Figure 26.1 shows one representation of the stages of UCD, which are Task Analysis, Requirements Gathering, Design, Implementation, Evaluation, and Delivery. The rest of this chapter discusses the typical tasks involved within each stage.

1. Task Analysis Phase

Consider a commission to develop a Web application to assist a group of people in performing their daily tasks. The first step would be to understand what these people (the potential users) do and how they do it; in technical terms, a task analysis is conducted of what they do. There are different ways of conducting task analysis, each capable of providing different types of information, but all have the same aim: identifying a goal and the series of tasks necessary to accomplish it, in order to develop an understanding that enables them to be modeled. Two common methods are field studies and hierarchical task analysis.

1.1. Field Studies

A field study essentially entails observing people working in their natural environment and gathering as much information as possible for analysis. Although it can also include the analysis of documents and conversations or interviews with participants, observation is the main activity, and it is usually conducted quietly. Being quiet is especially important so as not to bias users or cause them to change their behavior. The output from the process is a description of the tasks these people perform when undertaking their duties and, if a computer system already exists, a description of their interaction with it. Although the process does not provide all the requirements for a system, it does provide an initial indication of the possible direction for design. Because field studies can help identify potential sources of problems right from the start, they can help avoid those problems at a later stage, when they are more costly to correct.

1.2. Hierarchical Task Analysis

Hierarchical Task Analysis (HTA) is a process in which a goal is progressively broken down into smaller parts, such as tasks, sub-tasks, and actions, until the smallest task is reached; hence the term “hierarchical.” The output from HTA can be in textual or graphical form. If in text, a hierarchical list of tasks is produced, along with a plan of the order in which they are carried out. Figure 26.2 shows an example of a graphical output for the typical tasks involved in cash withdrawal from an ATM. It is a commonly used example, because most people are familiar with the sequence of steps for withdrawing cash from an ATM. It says the goal (i.e., 0) is to withdraw cash, and the sequence is to perform task 1 and its sub-tasks (1.1 and 1.2), then move on to task 2, and so on, until 5.2. A textual output is written just like a table of contents.
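For illustration, a textual HTA output for the ATM example might look like the outline below. The task and sub-task names here are illustrative, since Figure 26.2 is not reproduced on this page:

  0. Withdraw cash
    1. Insert card
      1.1. Locate card slot
      1.2. Insert card in correct orientation
    2. Enter PIN
    3. Select withdrawal option
    4. Enter amount
    5. Collect card and cash
      5.1. Retrieve card
      5.2. Retrieve cash

  Plan 0: Do 1-5 in that order. Plan 1: Do 1.1, then 1.2.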

2. Requirements-Gathering Phase

Having completed the necessary task analysis, the next phase is to gather requirements for the system you want to develop. These fall into two main categories: functional and usability. Functional requirements (i.e., what the system is required to do) are mostly derived from the HTA output, while usability requirements (i.e., the ease with which tasks can be carried out on the system) are derived from various other means, including interviews, questionnaires, observing users at work in their natural work environment, and usability principles. Additional requirements may also relate to legal and ethical issues: for example, whether copyright clearance is required for any media that will be used, or whether compliance with laws, such as data protection laws, is necessary. In all, the requirements should answer questions like what users’ preferences are, what their skills and experience are, and what they need. These questions should also help determine the strengths and weaknesses of users, based on their skills and experience, all of which contribute to a deeper understanding of what is expected of the new system, including what design options to consider.

As might be expected, finalizing requirements the first time this phase is undertaken is rare, particularly as users are not usually clear about what they require, how to describe it, or whether what they want is technologically feasible; so this phase, like most after it, may have to be revisited many times. The output from it is a report, known as the requirements statement, which clearly lists both the functional and usability requirements for the new system. For each requirement, what is required is described, along with the rationale for it and a way of measuring it in the completed system. The measurement can be quantitative, that is, in the form of measuring something, such as the number of keystrokes or screen-touches it takes to complete a task; for example, a metric might be that anyone should be able to complete a task in five keystrokes or screen-touches. The measurement can also be qualitative, that is, in the form of asking users to complete questionnaires, for example, about how satisfied they are with the color scheme of a screen. Web accessibility requirements, which can be derived from Web accessibility guidelines, would be part of what is included in the requirements statement. In some cases, it may be necessary to justify the need to address Web accessibility, in which case a business case would need to be provided; however, this is usually only relevant in large projects for organizations. The business case for Web accessibility is provided by w3.org.
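As an illustration, a single entry in a requirements statement might read as follows (the entry is purely hypothetical, built on the screen-touch metric mentioned above):

  Requirement: Users must be able to complete a funds transfer in five screen-touches or fewer.
  Rationale: Task analysis showed that transfers are the most frequently performed task.
  Measurement: In usability testing, count the screen-touches each participant needs to complete a transfer; at least 90% of participants should need five or fewer.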

3. Design Phase

During this phase, the requirements in the requirements statement from the previous phase are translated into design. To accomplish this, information and requirements already gathered are categorized, typically using card sorting (explained shortly), and prototypes of various design ideas are created from the output and evaluated in the next phase to identify and correct usability problems. This design-evaluation process is repeated until all obvious problems are identified and fixed, at which point implementation of the design that is agreed on can commence.

3.1. Card Sorting

Card sorting is used to organize information in order to help design the information structure of a system. For a website, it helps inform the content of pages and the connections between the pages. A card-sorting session involves participants (who can be subject experts or novice users) organizing topics into categories in a way that makes the most sense to them. They may or may not also be required to name the categories, with each category translating into a page. On large sites, the pages can also be grouped to create sections. Card sorting can be conducted using actual physical cards or pieces of paper, each carrying a separate topic, or using any of various on-line software tools. The method is especially useful because it can help you understand what users expect from a site.

There are two main types of card sorting: open and closed. In an open card sort, participants are required to sort information into categories and name the categories. This approach is usually good for determining how best to structure information to benefit users. In a closed card sort, users are required only to group topics into a pre-defined set of categories. This is suitable for checking whether the current information structure matches users’ expectations. The two, of course, can be combined. Once categories and/or sections are determined, they are used as the basis for producing different design ideas, which are then visualized using prototypes.
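Results from several participants are often summarized by counting how frequently each pair of topics was placed in the same category; pairs that are grouped together often are candidates for the same page or section. The following is a minimal sketch of such an analysis in Python; the topics and sort results are hypothetical:

  from itertools import combinations
  from collections import Counter

  # Each participant's sort: category name -> topics placed in it (hypothetical data)
  sorts = [
      {"About": ["Bio", "Contact"], "Work": ["Projects", "Resume"]},
      {"Me": ["Bio", "Contact", "Resume"], "Portfolio": ["Projects"]},
  ]

  pair_counts = Counter()
  for sort in sorts:
      for topics in sort.values():
          # Count every pair of topics that ended up in the same category
          for pair in combinations(sorted(topics), 2):
              pair_counts[pair] += 1

  # Pairs grouped together most often suggest pages that belong together
  for (a, b), count in pair_counts.most_common():
      print(f"{a} + {b}: grouped together by {count} of {len(sorts)} participants")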

3.2. Prototypes

A prototype is basically a rough version of a design idea that allows you to show the idea to users for consideration and constructive feedback before investing effort, time, and money in full development. It is undoubtedly easier and cheaper to make changes to a design idea early in development, when you are still planning and before any code is even written, than after a site has been fully implemented. Prototypes help ensure that the finished product pleases users, and they are typically either high fidelity or low fidelity.

  • High-fidelity prototype: These are computer-based and usually allow user interaction, for example, via mouse, keyboard, and/or touch. Because they are as close as possible to the intended design, they are considered the best for collecting accurate user-performance data, such as data that requires responses from the system. They are also preferred when demonstrating design ideas to clients.
  • Low-fidelity prototype: These are usually paper based and not interactive. They can be anything from hand-drawings on paper to printouts of diagrams. Because they are easy and cheap to produce, it is possible to create many alternative design ideas, thereby increasing the chances of arriving at the best possible design. Typical examples are flowcharts, wireframes, and paper prototypes, which are discussed next.
3.2.1. Flowcharts

Flowcharts have their origin in engineering and are visual representations of the flow of control (i.e., the steps involved in a process). They describe how things work and are like maps of events. They can be basic or very complex, depending on the complexity of the task being visualized. Figure 26.3 illustrates the concept. It visualizes a basic process in which the user is asked for a password. After it is entered, it is checked; if it is not correct, an error message is displayed and the user is asked for it again. The process is repeated until a valid password is entered, after which the user is allowed to go on to their page. The rectangles in the chart represent screens with messages and/or content for the user and can contain anything, including images, video, and animation. The diamond, known as a decision symbol, indicates a decision-making process. The ovals are just terminals that indicate where a task starts and ends. There are various other symbols designed to communicate specific functions.
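Because a flowchart is simply a picture of control flow, the same logic can be expressed directly in code. The following is a minimal sketch in Python of the password process just described; the prompts and the check_password test are placeholders, not part of Figure 26.3:

  def check_password(entry: str) -> bool:
      # Placeholder for the real validation behind the decision diamond
      return entry == "secret"

  entry = input("Enter your password: ")
  while not check_password(entry):          # decision symbol: password correct?
      print("Incorrect password.")          # rectangle: error-message screen
      entry = input("Enter your password: ")
  print("Welcome to your page.")            # rectangle: the user's page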

A flowchart can be used to visualize many different kinds of processes and is often annotated in different ways, so there are various types, but the goal is the same: to clearly communicate the workings or structure of something so that everyone involved can create it and/or use it. In Web design, a flowchart can be as basic as a diagram showing how the pages of a site connect to each other, which is why it is sometimes referred to as a sitemap or a site’s navigational structure. For a complex website, a flowchart can also include script-generated pages and loops to illustrate paths of user interaction or decisions. Such a path of user interaction or decisions is also sometimes referred to as a user flow or user journey. Figure 26.4 shows two different navigational structures for a basic personal website. The left design shows a linear navigation structure in which users can only go from a page to the one adjacent to it. The right design shows a network navigation structure that allows users to go from one page to any other. In either design, the site can be entered from any page, for example, by typing the address in a Web browser. Naturally, the design on the right makes more sense for a personal website, as users are more likely to want to choose specific pages than to navigate linearly.

In the example, if a password is required in order to go from the home page to any of the other pages, then how that process should work and the pages/messages that will be displayed are included in the flowchart. These could be part of the chart directly, or an appropriate symbol could be used to represent the process, with the full details presented separately.

3.2.2. Wireframes

A wireframe is a skeletal diagram, typically with no color or graphics, which is used to specify the details of everything that will enable a screen to be created as intended. These can include the layout of elements, color scheme, text color, font type, font size, media usage and intended treatments, and interactivity details, such as the description of actions possible from the screen. If videos or animations are involved, for example, the scripts to be spoken or used in voice-overs, if any, are included. Using the flowchart on the right in Figure 26.4 as reference, Figure 26.5 shows the wireframe of a possible design concept for the home page. The term “storyboard” is sometimes used to refer to a wireframe or a collection or sequence of wireframes.

3.2.3. Paper Prototypes

Paper prototypes are by far the quickest method of collecting useful feedback on preliminary design ideas and information structure. They are straightforward to produce and basically involve using pieces of paper to simulate a user interface. The pieces can be stuck onto one another, folded, and colored as necessary. The approach especially fosters creativity, in that it encourages the exchange of ideas between different people. It is also cost-effective, popular, requires no design or coding skills, and encourages rapid design evaluation. Figure 26.6 shows a snapshot of an example of a paper prototype from an NNGroup paper-prototyping training video.

4. Evaluation Phase

Evaluation primarily serves the purpose of testing a website for various types of characteristics. It can be formative (i.e., conducted during development to ensure users’ needs continually shape development) or summative (i.e., conducted after a site is completed to check whether it meets the set requirements). The two main types of summative evaluation are alpha testing and beta testing. Alpha testing is the first evaluation after completion and is usually done to confirm that a site works under varying setups and conditions. The errors and usage problems recorded are then fixed, after which beta testing is conducted, typically with a limited number of people drawn from the intended users, in their normal environment. The reasons evaluation is done include the following:

  • Checking whether a system performs efficiently and produces the expected results; failures here may be due to incorrect logic in programming or misinterpretation of the system’s requirements.
  • Checking whether users find a system easy to learn and use, or like the way it works or looks in terms of aesthetics.
  • Checking conformity to some standards, such as Web Content Accessibility Guidelines (WCAG).

Where a commercial site is involved, evaluation can also be a useful tool for finding out from potential users how they would like the site to work, look, and feel when finished. In reality, evaluation starts as soon as a project begins and design ideas are proposed. Also, it does not have to be of the system being developed; it can, for example, be of an existing system or systems, to garner ideas for a new one. Apart from testing that the functions of a website work as intended, usability testing is the most important evaluation activity in a development process and is part of general user-experience testing. Also important is website accessibility testing.

4.1. Usability Testing

Ensuring a high degree of usability is important for anything designed to be used, if it is to be acceptable and popular; a website is no different. In the face of intense competition, a website that is not easy to use and does not give visitors satisfaction and a good user experience is likely to lose out, because people will probably stop visiting it. Website usability testing is the evaluation of a site for the property of being easy to use. It involves measuring users’ performance while they use the site to accomplish pre-determined tasks, or after they have accomplished the tasks. The primary aim is to determine whether users find the site usable for accomplishing the tasks for which it has been designed.

To conduct usability testing, users are given set tasks to complete and their performance and satisfaction level measured. Types of tasks can vary widely, ranging from reading from different types of interfaces with different color schemes or fonts to navigating content and searching for information. While these tasks are performed, performance data is collected in various ways, including through interaction logging techniques (e.g., recording keystrokes, mouse-button clicks, mouse movements, or touches) and video recording. User-satisfaction data is collected using query techniques (e.g., interviews and questionnaires) in the form of asking users to rate their feelings on a scale, usually immediately after completion of tasks when feelings are still fresh. Performance is measured mainly in terms of two elements—time and number, such as completion time and the number of steps it takes to complete a task. Typical usability performance measures used in the evaluation of an application include:

  • Time to complete a task.
  • Time to complete a task after being away from an application for a specified time.
  • Number of errors per task.
  • Number of errors per unit of time.
  • Number of navigations necessary to get to help information.
  • Number of users committing a particular error.
  • Number of users completing a task successfully.

Using combinations of these measures as appropriate, it is possible to evaluate the qualities that define the usability of a system, which are, as previously mentioned in Chapter 24, mainly learnability (ease of learning), memorability (ease of remembering how to use it), effectiveness (how effectively it can perform a task), efficiency (how quickly it can be used to perform a task), errors (number of errors, severity, and recoverability), utility (provision of useful functions), and satisfaction (how pleasant it is to use).
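As an illustration of how such measures might be computed from logged test data, the following sketch in Python derives mean completion time, errors per task, and success rate from hypothetical per-participant logs:

  from statistics import mean

  # Hypothetical per-participant logs for one task:
  # (seconds to complete, number of errors, completed successfully?)
  logs = [
      (42.0, 1, True),
      (67.5, 3, True),
      (55.2, 0, True),
      (90.0, 4, False),
      (48.8, 2, True),
  ]

  times = [t for t, _, done in logs if done]
  print(f"Mean completion time: {mean(times):.1f} s")
  print(f"Errors per task: {mean(e for _, e, _ in logs):.1f}")
  print(f"Success rate: {sum(done for *_, done in logs) / len(logs):.0%}")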

4.1.1. Usability Testing Methods

Website usability testing can be carried out in various ways. The following are some of the commonly used methods in computer application development, which naturally apply to Web applications.

4.1.1.1. Hallway Testing

This is testing set up in an area of high pedestrian traffic and designed to test bystanders and passersby. It relies on people who have time to spare and are willing to take part. It is especially suitable for websites intended for the general public, because it provides the opportunity to determine how usable a site is to a large and diverse audience. To get the best results from the method:

  • Choose an area that has heavy foot traffic and avoid scheduling tests during inconvenient hours or when there are major events going on.
  • Use pleasant, outgoing, and determined greeters to identify and recruit test participants.
  • Ensure that the objectives of the test are clearly explained to participants.
  • Do not make a test last too long. The maximum time for an individual is 10 minutes.
  • Reward volunteers, for example, with gratitude or gifts, such as sweets or pens.
4.1.1.2. Paper Prototype Testing

Paper prototype testing entails testing the paper prototypes described earlier under the Design phase. Usability testing with paper prototypes is usually iterative and is one of the best methods for discovering potential usability problems with a design early. Finding problems as early as possible and addressing them can save a lot of time, effort, and money that would be lost if the design were developed and the problems found later. Another benefit of the technique is that it supports user involvement from an early stage.

Conducting usability testing on a paper prototype usually involves the user; a facilitator (a usability expert who records issues raised during the testing); the design expert (who understands the design being tested and quietly manipulates the paper prototype in response to the user); and observers (typically members of the development team, who observe and interpret the user’s interaction with the prototype and take notes). The total number of users tested, as in other usability tests, is about five, which, according to the Nielsen Norman Group, is capable of catching about 85% of usability problems.

To evaluate a paper prototype, the user is asked to touch the desired feature on the paper representation of a screen to simulate a click. When this is done, the design expert changes the paper interface accordingly to simulate screen response. For example, if it is an option to go to another screen, then the current page is replaced with the requested one. If it is a menu, then a piece of paper representing a drop-down menu is shown, and so on.

4.1.1.3. Usability Laboratory Testing (Controlled Testing)

Controlled usability testing is conducted in a usability laboratory, or an environment that is similarly controlled, where users cannot be disturbed or interrupted. Such an environment consists of the testing room, which is fitted with audiovisual facilities to record various interaction activities, including a microphone to capture what is said and video cameras to record users’ behaviors, such as movements and facial expressions. These activities are watched on monitors in an adjacent observation room, separated from the testing room by a one-way mirror that allows investigators to observe the participants. The downside is that this type of setup lacks the features and activities of a natural work environment. This means that findings from evaluations conducted in usability labs may not necessarily hold when a system is used in a natural work environment, where there usually are disturbances, such as interruptions and noise. Additionally, usability labs can be very expensive to run and maintain. A common alternative is to set up monitoring equipment temporarily in the relevant work environment, in effect temporarily converting the work environment into a usability lab. This is relatively less expensive and provides a more natural environment for users, making findings more accurate.

4.1.1.4. Walkthroughs

In computing, a walkthrough, usually called a code walkthrough, often describes the process of inspecting algorithms and source code to check whether certain programming principles, such as coding style and naming conventions, have been followed. In evaluation, walkthroughs perform a similar function, except that the aim is to identify usability problems. Essentially, experts use a system as if they were the intended users in order to identify usability problems. The most commonly used types are cognitive and pluralistic walkthroughs.

In a cognitive walkthrough, experts explore the various sequences of steps users are likely to go through to accomplish a task on a system, in order to identify usability problems. The main aim is usually to determine how easy a system is to use. Because of the exploratory nature of the technique, the checks performed revolve around determining, through asking questions, whether a system supports exploratory learning. Basically, for each step taken to accomplish a task, an account of why the step facilitates or does not facilitate usability is provided. To conduct a cognitive walkthrough, four items are required:

  1. A description of the prototype to be evaluated.
  2. A description of a specific task that the intended users will perform on the system.
  3. A complete list of the actions (i.e., steps) required to complete the task.
  4. A description of the characteristics of the intended users, such as experience level.

Equipped with this information, an evaluator follows the sequence of actions listed in Item 3 above to accomplish the specified task and then gives an account of associated usability issues. In order to be able to give this usability account, the evaluator asks questions that include the following:

  1. Will the action necessary to perform the task be obvious to users at that point? Will users know what to do to achieve the task?
  2. Will users notice that correct action is available? Will they notice, for example, the button or menu item that they can use to perform the action?
  3. Once users find the correct action, will they know that it is the one needed to complete the task?
  4. Will users understand the feedback they are provided after the action is completed? Will they know that they have made the right or wrong choice?

The evaluator keeps records of what is good and which aspects of the design need refinement. A set of standardized forms may be used for this. One form, used as a cover, might list the information described in Items 1-4 above, as well as the date and time of the walkthrough and the names of the evaluators. For each action listed in Item 3, a separate form might be completed that provides answers to the four questions listed above. Any negative answer to any question for any action is carefully documented on a separate form, including the details of the system, its version number (if applicable), the date of evaluation, the evaluators’ names, and a description of the usability problem. Also normally included is the severity of the problem, such as its frequency of occurrence, its impact (whether it is easy or difficult to overcome), and its persistence (whether users will be repeatedly bothered by it). This information helps designers prioritize the order in which problems are fixed. This technique is suited to systems that require complex operations to perform tasks. The downside is that it can be very time consuming and laborious, and it requires a good understanding of the cognitive processes involved in completing a task.
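If an evaluation team wanted to keep such records electronically rather than on paper forms, each per-action form could be modeled as a simple data structure. The following is a minimal sketch in Python; the field names and the example record are illustrative, not a standard format:

  from dataclasses import dataclass

  @dataclass
  class ActionRecord:
      action: str              # one step from the action list (Item 3)
      obvious: bool            # Q1: will the action be obvious to users?
      noticeable: bool         # Q2: will users notice the action is available?
      recognized: bool         # Q3: will users know it is the right action?
      feedback_clear: bool     # Q4: will users understand the feedback?
      problem_notes: str = ""  # details of any usability problem found
      severity: str = ""       # frequency, impact, persistence

  record = ActionRecord(
      action="Select 'Withdraw cash' from the main menu",
      obvious=True, noticeable=True, recognized=True, feedback_clear=False,
      problem_notes="No confirmation that the selection was registered",
      severity="Low impact, but occurs on every withdrawal",
  )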

In pluralistic walkthroughs, the same procedure is followed as in cognitive walkthroughs, except that it is carried out by a diverse group that includes users, interface designers, developers, usability experts, and management. All participants are asked by the coordinator to assume the role of a user. Bringing different types of participants together makes it possible to gather views from various perspectives, allowing a greater number of usability problems to be identified. The method is particularly well suited to early development stages, enabling usability problems to be discovered and resolved quickly and early. Another advantage is that it makes developers more sensitive to users’ concerns, which can be very useful where usability can make the difference between life and death. The method is accomplished through the following sequence of steps:

  1. Each participant is presented with a series of printed screens that are ordered in the same way that they would be displayed when users are performing specific tasks, and asked to write down, in as much detail as possible, the sequence of steps they would use to go from one screen to another; for example, “Press the up-arrow key four times, then press ‘Enter’.”
  2. A discussion is held about the actions all participants have suggested. Usually, the representative users speak first, so that they are not intimidated by the experts’ contributions. Next, the usability experts present their findings, and then the developers provide their comments, which would include the rationale for the design. The developers’ attitude should be welcoming. If necessary, the coordinator presents the correct set of actions and clarifies any unclear situations.
  3. All participants are asked to complete a brief questionnaire regarding the usability of the evaluated design.
  4. Steps 1-3 are repeated for all screens.
4.1.1.5. Expert Review—Heuristic Evaluation

Heuristic evaluation involves different experts assessing a system independently for its compliance with recognized heuristics (i.e., usability principles and practices), with the aim of finding usability problems in the system. The elements evaluated can be user-interface elements, such as color scheme, menu, navigation structure, and dialogue boxes, or functional elements, such as speed of response and error recovery. Ten general principles (or rules of thumb) that should be followed to ensure usability are defined by the Nielsen Norman Group (NN/g):

  • Visibility of system status: A system should always inform users in good time about what is going on, using appropriate feedback, such as pop-ups and sound.
  • Match between system and the real world: A system should communicate in the users’ language, instead of in technical terms.
  • User control and freedom: When users choose a system function by mistake, there should be a quick and clearly marked “emergency exit.” Undoing and redoing actions should also be supported.
  • Consistency and standards: The system should be consistent in behavior, both by itself and in relation to other systems like it. Users should not be made to wonder whether different words, situations, or actions mean the same thing. For example, the same words should always be used to describe the same situations and actions, and established ways of doing things should be consistently maintained.
  • Error prevention: As well as providing appropriate error messages, a system should prevent errors from occurring in the first place. For example, error-prone procedures, such as typing, should be replaced with less error-prone ones like drag-and-drop, and users should be given the chance to confirm an action before committing to it.
  • Recognition rather than recall: Objects, actions, and options should be made visible so as to minimize memory load and aid dialogue between the user and the system. For example, important information from one screen should be carried forward to the next, instead of making users remember it. Also, help information and instructions should be clearly visible and easily accessible.
  • Flexibility and efficiency of use: A system should provide multiple levels or modes of interaction so that it can cater to both inexperienced and experienced users. This should also include user customization and the use of shortcuts that allow tasks to be accomplished in as few steps as possible.
  • Aesthetic and minimalist design: Dialogues should contain only information that is relevant or often needed. Too much information makes relevant information harder to see and can also make presentation aesthetically less pleasing.
  • Help users recognize, diagnose, and recover from errors: Error messages should be in plain and precise non-technical language, stating the problem and suggesting a solution.
  • Help and documentation: Help documentation should be provided that is structured and easy to search, and that provides concrete steps that can be easily followed, particularly for complex systems. Five basic types of help are identified that can be provided, based on the types of questions users typically ask during interaction: goal-oriented (What can I use this application to do?), descriptive (What is this or what does this do?), procedural (How do I perform this task?), interpretive (Why has that happened?), and navigational (Where am I?).

During evaluation, these heuristics are matched against the features and functions of a system. For example, in the case of a website, for the first heuristic, an evaluator might check whether it provides feedback when the cursor points at an interactive element (such as a button) and whether the feedback is visible enough to be easily noticed, and then repeat this with various elements of the site before moving on to the next heuristic. The higher the number of evaluators involved, the higher the number of usability problems likely to be found. A disadvantage of the heuristic evaluation method is that it is not always as accurate as expected.

4.1.1.6. Expert Review—Keystroke-Level Model Method

The Keystroke-Level Model (KLM) is one of a class of evaluation methods known as predictive modeling methods. These methods involve experts using formulas (known as predictive models) to predict user performance at completing various types of tasks on various systems. This means that they can be used, for example, to evaluate whether a system is performing to standard, to compare the efficiency of different systems at performing the same set of tasks, or to compare and choose between different user-interface designs for a proposed system, such as in terms of the effectiveness of their layouts for performing the same task.

KLM is the simplest and the most commonly used of the predictive models and one of a class of predictive methods known as Goals, Operators, Methods, Selection (GOMS) rules; this is why KLM is also referred to as KLM-GOMS. It uses predefined classes of operators, each of which has an estimated execution time assigned to it. This makes it possible to use the model to predict and compare the times it will take to perform a task on a system using different sequences of actions. This is particularly useful for determining which of the different ways of performing a task is the most effective, or which design is most effective for performing a task. The original KLM defines the following six classes of operators and execution times:

  • K—For a single key-press; the time depends on typing skill:
    • Best typist (135 wpm): 0.08 seconds
    • Good typist (90 wpm): 0.12 seconds
    • Poor typist (40 wpm): 0.28 seconds
    • Average skilled typist (55 wpm): 0.20 seconds
    • Average non-secretary typist (40 wpm): 0.28 seconds
    • Typing random letters: 0.50 seconds
    • Typing complex codes: 0.75 seconds
    • Worst typist (unfamiliar with keyboard): 1.20 seconds
  • P—For pointing the mouse to an object on screen: 1.10 seconds
  • B—For a button press or release (e.g., mouse): 0.10 seconds
  • BB—For a button click (e.g., mouse), that is, pressing and releasing: 0.20 seconds
  • H—For moving hands from keyboard to mouse or vice versa: 0.40 seconds
  • M—For mental preparation for performing an action: 1.20 seconds
  • R(t)—For system response time t, during which the user has to wait when carrying out a task
  • T(n)—For typing a sequence of n characters on a keyboard (n × K seconds)

To use the model to evaluate a system, the evaluator first chooses a representative task and determines the different ways in which it can be completed, or how users might complete it. Next, any assumptions are listed. For example, if the task is to delete an item, the assumptions might state whether or not the Trashcan (Bin) is visible on the screen and can be pointed to, and that only one item will be deleted. They would also include the start and end position for a task, such as whether the hand starts and ends on the mouse, and where the cursor will end up at the end of the task. Next, the sequence of keystroke-level actions (i.e., instructions) for each approach, such as “Point to file icon” and “Press and hold mouse button,” is listed, along with the corresponding operators, such as K and P. If necessary, operators are included for when users must wait for the system to respond, or have to stop to think. Next, the execution time for each operator is included and the total time calculated for each method. The one with the smallest execution time represents the most efficient method of completing the task. This procedure is repeated for all representative tasks. The following example shows the sequence of operators required to accomplish the task of dragging a file icon to the Recycle Bin (visible on the Windows desktop) and the total execution time for the task.

  1. Point to file icon (P)
  2. Press and hold mouse button (B)
  3. Drag file icon to Recycle Bin icon (P)
  4. Release mouse button (B)
  5. Point to original window (P)

Total execution time = 3P + 2B = 3 × 1.1 + 2 × 0.1 = 3.3 + 0.2 = 3.5 seconds.
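This kind of calculation is easy to mechanize. The following is a minimal sketch in Python that encodes the operator times listed earlier and reproduces the hand calculation above (like the worked example, it omits the M and R operators):

  # Standard KLM operator execution times in seconds (from the list above)
  OPERATOR_TIMES = {
      "K": 0.20,   # single key-press (average skilled typist)
      "P": 1.10,   # point mouse to an object on screen
      "B": 0.10,   # mouse button press or release
      "H": 0.40,   # move hand between keyboard and mouse
      "M": 1.20,   # mental preparation
  }

  def klm_time(operators: str) -> float:
      """Total predicted execution time for a sequence of KLM operators."""
      return sum(OPERATOR_TIMES[op] for op in operators)

  # Drag a file icon to the Recycle Bin: P B P B P
  print(klm_time("PBPBP"))  # 3.5 seconds, matching the hand calculation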

The main advantage of the KLM is that it allows decisions to be made about systems without the expensive procedures and sometimes difficult task of conducting evaluation with users. The main downside is that the execution times used are only estimates that may not hold in real-life work environments. They do not, for example, make allowance for errors or for various factors that influence user performance, such as fatigue, mental workload, and working style; nor do they make allowance for the fact that users do not always carry out tasks in a predictable sequential order. For these reasons and other limitations, predictive models in general are most useful only when tasks are short and clearly defined, with limited variations in the way they can be performed.

4.1.1.7. Expert Review—Fitts’ Law

Fitts’ law is another predictive modeling technique used in usability testing. It suggests a relationship between the time required to move from one point to a target, the distance to the target, and the target’s size. A useful piece of information from this is that the bigger a target, the more easily and quickly it can be reached, perhaps because people are more confident about their judgment and therefore apt to advance more quickly rather than move at reduced speed. In essence, it provides the primary reason why graphical user-interfaces with bigger buttons and icons are easier to use than those with smaller ones. The law is represented mathematically in various ways; Figure 26.7 shows an example of one of the simpler ones.
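Although Figure 26.7 is not reproduced here, one commonly used formulation is the Shannon form, MT = a + b log2(D/W + 1), where MT is the movement time, D is the distance to the target, W is the target’s width, and a and b are constants determined experimentally. The following sketch in Python illustrates the prediction; the constants used are purely illustrative:

  from math import log2

  def movement_time(distance: float, width: float,
                    a: float = 0.1, b: float = 0.15) -> float:
      """Predicted time (s) to reach a target, Shannon form of Fitts' law.

      a and b are device- and user-specific constants fitted from
      experiments; the defaults here are purely illustrative.
      """
      return a + b * log2(distance / width + 1)

  # A bigger (wider) target at the same distance is predicted to be faster to hit
  print(movement_time(distance=400, width=20))   # small button
  print(movement_time(distance=400, width=80))   # larger button -> shorter time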

4.2. Evaluating Websites for Accessibility

Evaluation designed specifically to address accessibility is a relatively recent kind of evaluation, as accessibility is a more recent notion, brought about by the growing use of the Web by governments and organizations to deliver information to the general public. As with guidelines on how to implement Web accessibility, detailed recommendations on when and how its evaluation should be conducted are provided on the W3C WAI’s website; as a result, only summaries of the key elements are discussed here.

Evaluation of accessibility serves numerous purposes. One is to help identify, during the development of a website, any problems that might compromise accessibility. Another purpose is to determine conformance to Web accessibility guidelines, which may be proprietary guidelines, government guidelines (such as America’s Section 508), or the W3C WAI’s WCAG. Evaluation of accessibility can also be used to monitor an existing site on an ongoing basis to ensure accessibility is maintained. It is particularly useful when done throughout the development of a website, as this makes it possible to identify accessibility problems early, when they are easier to correct or avoid.

The comprehensive evaluation of a website to determine whether it complies with all accessibility guidelines can be complex and time consuming. However, several automated and semi-automated tools are available that can speed up and facilitate the process. Unfortunately, though, these tools are usually not capable of checking all guidelines; therefore, manual evaluation by a knowledgeable human (ideally the author of the relevant website) is essential. Manual evaluation can help spot false or misleading results produced by automated tools and can also check compliance with guidelines that are better judged by humans, such as the use of clear and simple language, and ease of navigation. Naturally, evaluation can involve users as well as the use of some of the evaluation techniques previously discussed in this chapter. In particular, the involvement of people with disabilities is highly recommended where possible, as the concept of accessibility is largely about providing access to them. In addition, the involvement should be as early as possible to ensure a smooth and efficient development process.

Before evaluating a website for accessibility, a preliminary review is usually conducted to determine whether there are indeed any accessibility problems with the site. This is like the evaluation itself, but less rigorous; for example, only sample pages are reviewed. The output from the process is a report that (1) summarizes both positive and negative findings, and (2) recommends what needs to be done next (e.g., a full compliance test) and how identified problems can be resolved.

4.2.1. Evaluating a Website for Accessibility Conformance

Evaluating a website to determine whether or not it conforms to accessibility guidelines usually starts by disclosing the conformance level that the evaluation is targeting. This would have been determined through the preliminary review or some other means. Typically, all the pages of a site are evaluated. If this is not possible, then as many representative pages as possible are evaluated, using at least two different evaluation tools, since different tools tend to detect different problems. As well as evaluating a site for accessibility problems, the Web languages used to develop the site, such as HTML and CSS, are usually validated to check whether they are used correctly, as this can affect how accessible a site is to assistive technologies, such as screen readers. These languages have been discussed in previous parts of this book, and tools for validating their usage are available on the Web, some on W3C’s site.

To conduct manual evaluation, each page being evaluated is checked against the relevant accessibility guidelines in a range of graphical browsers (e.g., Internet Explorer, Chrome, Firefox, Opera, and Safari) running on different operating systems. To expose whether or not Web accessibility guidelines have been followed, the settings of the browsers and/or the operating systems are adjusted in ways that would normally create problems for people with disabilities, such as those with visual and auditory impairments. For example, to evaluate whether the guideline that says to provide an equivalent alternative to non-text content has been broken, images might be turned off in the browsers to see whether the text for every image is available and adequately describes the image. Similarly, to evaluate whether the guideline that says to provide an equivalent alternative for time-based visual and auditory content has been broken, videos might be checked to see whether they are captioned or subtitled correctly. Each page is also evaluated in specialized browsers, such as a voice browser (e.g., Natural Reader) or a text browser (e.g., Lynx), to see whether their outputs match those of graphical browsers.
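Parts of such checks can be automated. For example, the following minimal sketch in Python, using only the standard library, flags <img> elements whose alt text is missing or empty; a human evaluator would still need to judge whether the alt text that is present adequately describes each image, and note that an intentionally empty alt is legitimate for purely decorative images:

  from html.parser import HTMLParser

  class AltTextChecker(HTMLParser):
      """Flags <img> tags whose alt attribute is missing or empty."""
      def handle_starttag(self, tag, attrs):
          if tag == "img":
              attrs = dict(attrs)
              # Empty alt may be deliberate (decorative image), so this
              # only flags candidates for human review
              if not attrs.get("alt"):
                  print(f"Missing or empty alt text: src={attrs.get('src')}")

  page = """
  <img src="logo.png" alt="Company logo">
  <img src="chart.png" alt="">
  <img src="photo.jpg">
  """
  AltTextChecker().feed(page)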

As part of the evaluation, the textual content of each page is also usually checked for correct grammar and for whether the writing is clear, simple, and appropriate for the purpose of the website and the target audience. If dynamically generated pages are involved, then the templates, as well as the pages they generate, are evaluated. Finally, for each page or page-type evaluated, a summary is produced that includes:

  • Any problems and good practices found, and the method used to identify them.
  • Recommendations on how to fix the problems, how to extend the good practices identified to other parts of the site, and how to continually maintain the site.

Web accessibility evaluation tools, which can be off-line or on-line, usually do not require much more than specifying the page to be evaluated, specifying the guidelines to check against, and initiating the process. They work in two main ways: one is to perform accessibility checks on a page according to a specified conformance level and correct any accessibility problems that can be corrected automatically; the other is to perform the checks and highlight any accessibility problems, so that they can be manually checked and fixed. Accessibility tools therefore present the result of an evaluation in various ways. For example, the output could be in the form of a report that indicates, at a minimum, the problems found, the conformance level used as reference, and which guideline has been broken. Figure 26.8 shows an excerpt of such a report from an on-line tool known as AChecker, which allows WCAG and other Web accessibility guidelines (e.g., BITV, Section 508, and the Stanca Act) as well as HTML and CSS syntax to be checked.

Different tools offer different combinations of features that inevitably render them suitable or unsuitable for different situations. This means that careful consideration is necessary when choosing accessibility evaluation tools, to ensure that they are (1) suitable for the targeted stages of development and for the complexity and size of the website, (2) compatible with the host operating system, and (3) matched to the evaluators’ skills and knowledge.

4.3. Evaluating Your Website for SEO

The search engine optimization (SEO) of a website is now a standard part of Web design and development, and so it warrants its own evaluation. SEO evaluation is necessary to ensure that the best practices for optimizing a website have been followed and that the website’s interaction with users and search engines is as good as it can be. Following known best practices improves a website’s visibility in search engines and makes it easier for search engines to crawl, index, and understand its content, which, as previously noted in Chapter 21, can translate into the website being placed at the top of search engines’ organic (non-paid-for) query results. This can, in turn, translate into increased traffic and increased conversion to sales. SEO is not some big undertaking, but mostly just doing the right things to improve user experience, that is, following the various guidelines already given in various chapters of this book. This is why SEO in principle revolves around what is best for the users of a website; for example, what they are likely to search for and how to make doing this as easy as possible for them. Getting this right will inevitably result in a high ranking on search engine results pages (SERPs). One way of evaluating the SEO of a website is to check it against Google’s SEO guidelines, a summary of which is presented here. Note that there are websites that offer SEO evaluation, and Google Webmaster Tools is very useful too.

  • Use unique, accurate page titles: Each page should have a unique title (implemented with the <title> element) that concisely and accurately describes its content and is not stuffed with keywords. The title is displayed in bold in search results and can help users determine quickly whether the content of a page is relevant to their search. See Chapter 2 for how to use the <title> element.
  • Use page meta-descriptions: Each page should have a description (implemented with the <meta> element) that provides an accurate summary of the content of the page. The search engine might use it, or part of it, in the snippet it displays for the page in the search results. A meta-description can be any length, but the recommendation is to keep it to a maximum of 155-160 characters, because search engines generally do not use more than 160. See the <meta> element in Chapter 2 for how to add a meta-description. A sketch of an automated check for titles and meta-descriptions appears after this list.
  • Use microdata markup: Microdata markup allows bits of data on a page to be specified and used by search engines to improve the presentation of the page in search results. You can find more information and examples on microdata on the Web.
  • Use easy-to-understand URLs: To achieve this, have a structure that is logical and easy to navigate, and ensure that directories, subdirectories, categories, sub-categories, and files have meaningful names that accurately describe their contents. Use lowercase, because users expect this, and avoid the use of excessive keywords, as Google does not like this. Breadcrumb links (introduced in Chapter 24) can make a useful addition.
  • Use descriptive link text: Link text should accurately and concisely describe the content to which the link leads and links should stand out so that they are easy to recognize. See Chapter 4 for more on links and the <a> element, which is the element used to create them.
  • Provide information about images: Use descriptive but distinct and concise image filenames, and alt text, to provide accessibility and help search engines recognize images easily. Also keep images in a common directory and use standard file formats. See the <img> element in Chapter 6 for how to add alt text, which is done with the alt attribute. Advanced Web authors also use image sitemap files (i.e., markup files that list the locations of images) to provide information about images to search engines.
  • Use the heading elements for headings: Use the <h1> to <h6> elements to give a hierarchical structure to the content of a page, but they should not be overused (because this can make understanding a structure difficult) or used for styling content. See Chapter 3 for how to use the elements.
  • Specify what not to crawl: Indicate which parts of your site search engines should not crawl, using any of a number of methods. For example, you could use a robots.txt file, which is placed in the root directory, or use “noindex” with robots in the <meta> element. Google Webmaster Tools provides a robots.txt generator, and Chapter 2 shows how to use the <meta> element.
  • Specify links not to follow: You would do this if you do not want search engines to follow a link or pass your site’s reputation on to the linked site. This is done by setting the value of the rel attribute to “nofollow” in the <a> element, or by using “nofollow” with robots in the <meta> element to apply it to all the links in a page. See Chapter 4 for how to use the <a> element and Chapter 2 for <meta>.
  • Check that your mobile website is indexed: Ensure that your website is recognized by search engines so that it can be indexed. You may need, for example, to create a mobile sitemap and submit it to Google. How to do this can be found at Google Webmaster Tools.
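As an example of how the title and meta-description guidelines above might be checked automatically, the following is a minimal sketch in Python using only the standard library; the length limit follows the recommendation above, and the sample page is illustrative:

  from html.parser import HTMLParser

  class SEOChecker(HTMLParser):
      """Collects the page title and meta description for simple SEO checks."""
      def __init__(self):
          super().__init__()
          self.in_title = False
          self.title = ""
          self.description = None

      def handle_starttag(self, tag, attrs):
          attrs = dict(attrs)
          if tag == "title":
              self.in_title = True
          elif tag == "meta" and attrs.get("name") == "description":
              self.description = attrs.get("content", "")

      def handle_endtag(self, tag):
          if tag == "title":
              self.in_title = False

      def handle_data(self, data):
          if self.in_title:
              self.title += data

  page = """<html><head>
  <title>Contact Us - Example Widgets Ltd</title>
  <meta name="description" content="Phone, email, and postal contact details for Example Widgets Ltd.">
  </head></html>"""

  checker = SEOChecker()
  checker.feed(page)
  if not checker.title:
      print("Missing <title>")
  if checker.description is None:
      print("Missing meta description")
  elif len(checker.description) > 160:
      print("Meta description longer than 160 characters; may be truncated in results")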

4.4. Data Collection for Evaluation

Data collection is central to accomplishing evaluations. The three most commonly used methods are data recording, query, and observation, and it is useful to know about them if you are going to evaluate a website.

4.4.1. Data Recording

There are many types of data recording, the most common of which are note taking, photograph taking, audio recording, and video recording, each of which may be used alone or in combination with the others. Each type of data recording has its advantages and disadvantages in terms of, for example, ease of use, usefulness of the data it provides, cost, and how obtrusive it is.

  • Note taking costs the least and is the least obtrusive. It also presents the least complications technically to implement and is very flexible. However, it is difficult to listen, observe, and write at the same time, particularly writing as quickly as people talk. Also, what is recorded depends too much on the investigator’s discretion, which may result in missing some important points just because they are deemed unimportant.
  • Taking photographs is relatively easy, particularly with point-and-shoot cameras, but a photograph only captures the data of a moment in time, and it is not always easy to differentiate the data without annotation, which is often not possible to add until the photograph is out of the camera and in hardcopy, or transferred onto a computer.
  • Audio recording, also, is relatively easy to accomplish and does not have the speed limitation of writing. It also allows the investigator to concentrate on talking to the data provider, but transcribing audio data can be time consuming, particularly where quality is poor, either due to surrounding noise or poor recording level, or speech that is not clear. Even when content is clear, audio data tends to provide limited meaning and is often most useful when combined and coordinated with other types of data, such as notes and photographs.
  • Video recording produces the most complete data because it captures real-life events (i.e., visual and audio data). However, it can be obtrusive, depending on the setup, and is the most demanding to operate as well as the most expensive. It can also limit the focus of an investigation, since it forces the investigator to concentrate only on the area covered by the camcorder’s field of view, although using multiple camcorders can reduce this problem. Naturally, participants may also play to the camera, which can affect the reliability of the data, especially if behavioral data is being gathered.

Consequently, which data recording techniques are used, either singly or combined, depends on the prevailing situation. Where specific data is needed and can be comfortably written down, note taking may be adequate, while photographs are ideal for capturing the way objects look, including the environment, and may be used with note taking or audio recording to provide additional data, such as the contents of documents. Where it is important to observe how an operation is performed, video recording is ideal, as it provides both visual and audio record that can be analyzed over and over again.

4.4.2. Query Techniques

Query techniques, also known as conversational techniques or verbal techniques, involve asking users their opinions and can take various forms, such as through interviews and questionnaires.

  • Interviews: There are four main types of interviews: unstructured, structured, semi-structured, and group interviews. Which one is used is determined by a number of factors, such as the purpose of the interview, how much control is required over the scope, and at what point in the development life cycle it is taking place. For example, if what is required is for users to express their opinions freely about a product, then an unstructured interview would be the most suitable, whereas if what is required is feedback on a specific feature of a design, a structured interview might be used. Interview data can be in the form of interviewer’s notes, video recording, or audio recording. Opinions and responses to open questions are qualitative, while responses to closed questions and responses that are in numbers, such as age, are quantitative. The following are the differences between the various types:
    • Unstructured interviews: These are open-ended interviews in which the interviewer exerts minimum control on the scope and depth of response. Questions are open and designed to simply prompt interviewees to formulate and express their opinions freely. For example, the question, “What do you think about using the website?” prompts a general rather than specific response and is the type of question known as an open question. The response can be lengthy or brief and both interviewer and interviewee can control the direction of the interview. To ensure all relevant topics are addressed, it is common for the interviewer to have a list of such topics to use to steer the interview, if necessary. The main advantage of unstructured interviews is that they provide a lot of information that gives both deep and broad understanding of a topic. However, this can also easily be an issue, as such information may be difficult to analyze.
    • Structured interviews: In structured interviews, questions are predetermined and specific, and designed to elicit specific types of responses. Typically, the questions are short and clear, and require the interviewee to choose from a set of responses, such as “I agree,” “I strongly agree,” and “I disagree.” These types of questions are referred to as closed questions. An example of a closed question might be: “Which of the following colors do you like used for the screen background: White, Red, Blue, or Black?” Every question is worded exactly the same way and asked in the same order for every interviewee. When working with children of pre-reading or early-reading age, responses are usually designed differently. For example, if they are required to choose from options such as Awful, Not very good, Good, Really good, and Brilliant, a smiley-o-meter gauge, shown in Figure 26.9, developed by Read, MacFarlane, and Casey in 2002, may be used. Structured interviews are typically used when quick responses are required and/or interviewees are in a rush or even mobile.
    • Semi-structured interviews: These are part unstructured and part structured interviews, which means they can contain both open and closed questions. The interviewer typically has a set of questions that is used to guide the interview so that the interviewee neither digresses nor says too little. Normally, the interviewer starts with a closed question and then guides the interview as desired. For example, the interviewer might first ask, “Which of the following colors would you like used for the screen background: White, Red, Blue, or Black?” and then follow up by asking why that color was chosen. Throughout, care is taken not to phrase questions in a leading way, so as not to influence responses.
    • Group interviews: A group interview typically involves an interviewer and a group of interviewees. An example of a group interview is a focus group, in which a number of people, commonly 3 to 10 of them, take part in a discussion that is facilitated by the interviewer (the facilitator) in a relaxed and informal environment. Typically, a simple question is posed, which is designed to create a starting point for a broader discussion, which the interviewer then mediates, ensuring everyone has their turn to voice their opinions. It is a flexible method that allows discussion to follow pre-prepared directions as well as unexpected ones, thereby possibly bringing out issues that might otherwise be missed.

It is common practice to record these types of interviews and analyze them later, and even to ask participants afterward to explain any comments that are unclear. This form of interview is particularly useful when gathering requirements for a product that is going to be used by different groups of people for different purposes. One disadvantage is that social pressure within a group can inhibit some people’s ability to speak their minds, which may limit the scope of the collected data.

  • Questionnaires: Questionnaires are similar to interviews in that they use both open and closed questions, depending on the intended goal. However, with questionnaires, questions need to be more clearly worded, particularly as they are usually completed without an interviewer around to clarify any ambiguous elements. Questions also need to be specific and, if possible, closed, with a range of answers offered, just as described under structured interviews, including a “none of these” or “no opinion” option. Having questions and answers of this nature ensures that a questionnaire can be completed more accurately and the collected data analyzed efficiently. Questionnaires are especially well suited to collecting data from a large number of people, because they can be distributed widely, even though different versions might be necessary for different populations of respondents. They can be used on their own or together with other data-gathering techniques, such as interviews, meetings, and observation. Because many people respond more readily to these other methods than to questionnaires, the benefits of questionnaires are attainable only if people are willing and able to complete them. If they are not, a structured interview is usually used instead.

Questionnaires are typically divided into two general sections: one that gathers demographic information about a respondent (e.g., age, gender, place of birth, and level of experience in relevant subjects) and one that gathers the respondent’s opinion about what is being evaluated. Demographic information is usually useful for putting questionnaire responses into context. For example, it can reveal that more females than males like something. Either of these sections can, of course, be further subdivided, and sometimes a section for soliciting additional comments is added. A well-designed questionnaire should incorporate features that encourage respondents to complete it. For example, there should be clear instructions on how to complete it, and it should look good, which can be achieved through appropriate text styling, formatting, and ample white space. It should also be short: typically no more than 10 to 15 questions.

In order for responses to be as accurate as possible, the type of response allowed for a question must match the question. These response types are referred to as response formats, and there are different kinds, each suitable for a particular type of question. For example, an open question requires space for respondents to write (or type, in the case of on-line questionnaires), while a closed question requires a set of answers from which to choose. Questionnaire data can be in written form or in electronic form stored in a database. As with interviews, opinions and responses to open questions are qualitative, while responses to closed questions are quantitative. The following are some commonly used response formats:

    • Ranges and check boxes: These are commonly used to group quantities. Figure 26.10 shows an example usage. In this case, respondents are expected to be between the ages of 16 and 35. Notice that the ranges do not overlap; overlapping ranges, such as 16-20 and 20-25, can cause confusion about which box a boundary value like 20 belongs in. Notice, also, that the intervals are not equal and do not have to be. For example, the ranges in the figure could be Under 16, 16-35, and Over 35.

Sometimes, ranges are combined with check boxes, as shown in Figure 26.11, which respondents tick instead of, for example, circling their selection. Naturally, check boxes are used in various other ways, such as to present yes, no, and don’t know options.

In on-line questionnaires, however, check boxes are used only when respondents are allowed to make multiple selections. Where only one selection is required, radio buttons are used instead, as shown in Figure 26.12.

    • Ranking: In this method, respondents are asked to rank the items in a list according to some criterion. Figure 26.13 shows an example usage of ranking.

In on-line questionnaires, input boxes may simply be provided for respondents to type in their choices, or a drop-down list of numbers may be used. Scripting can be used to ensure that each rank number is assigned to only one item (see the sketch after this list).

    • Rating scales: A rating scale is basically a set of options that vary in degree, such as strongly agree, agree, undecided, disagree, and strongly disagree. These scales are well suited to getting respondents to make judgments about things. Two of the most commonly used, the Likert and semantic differential scales, are described here. Likert scales use a set of statements that describe levels of opinion, emotion, and so on. They are used for measuring strength of feelings, opinions, and attitudes, and because of this are commonly used to evaluate subjective measures of experience, such as user satisfaction with products. Figure 26.14 shows two different versions of a Likert scale. A particular strength of Likert scales that use numbers, as in the top example, is that they allow data to be recorded quantitatively, making the data easier to analyze statistically.

In contrast to Likert scales, semantic differential scales use pairs of words that represent extremes of possible options and respondents are asked to place a cross in one of the positions between the two extremes. Figures 26.15 and 26.16 show some examples with 7-point scales.

Generally, rating scales use 7-, 5-, or 3-point scales, or sometimes a 9-point scale, as used by the Questionnaire for User Interface Satisfaction (QUIS), a well-tried and tested tool for evaluating user satisfaction with various interface elements. Which is best is debatable. For example, while one argument states that scales with many points help people to discriminate better, another suggests that people might be incapable of accurately discerning between many points, and that scales with more than five points can therefore be unnecessarily difficult to use. Some recommend using a small number of points when the possibilities are very limited. As for odd versus even numbers of points, both have positives and negatives: an odd number of points provides a central/neutral point, giving respondents a way out, whereas an even number forces them to take a stand, even when they are unsure. On-line rating scales typically use radio buttons, drop-down menus, and even sliders.
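
As an illustration of the on-line response formats just described, the following is a minimal sketch of how check boxes, radio buttons, a ranking, and a Likert-style scale might be marked up in an HTML form. The form id, field names, and questions are all invented for illustration; the short script at the end shows one way scripting might ensure that each rank number is used for only one item, as mentioned under Ranking.

    <form id="survey">
      <!-- Check boxes: respondents may tick more than one option -->
      <p>Which devices do you use? (Tick all that apply.)</p>
      <label><input type="checkbox" name="device" value="phone"> Phone</label>
      <label><input type="checkbox" name="device" value="tablet"> Tablet</label>
      <label><input type="checkbox" name="device" value="desktop"> Desktop</label>

      <!-- Radio buttons: only one option in the group can be selected -->
      <p>Which age range are you in?</p>
      <label><input type="radio" name="age" value="under-16"> Under 16</label>
      <label><input type="radio" name="age" value="16-35"> 16-35</label>
      <label><input type="radio" name="age" value="over-35"> Over 35</label>

      <!-- Ranking: a drop-down list of numbers for each item -->
      <p>Rank these from 1 (most important) to 3 (least important):</p>
      <label>Speed <select class="rank" name="rank-speed">
        <option>1</option><option>2</option><option>3</option></select></label>
      <label>Appearance <select class="rank" name="rank-appearance">
        <option>1</option><option>2</option><option>3</option></select></label>
      <label>Content <select class="rank" name="rank-content">
        <option>1</option><option>2</option><option>3</option></select></label>

      <!-- Likert-style rating scale implemented with radio buttons -->
      <p>The website is easy to use (1 = strongly disagree, 5 = strongly agree):</p>
      <label><input type="radio" name="ease" value="1"> 1</label>
      <label><input type="radio" name="ease" value="2"> 2</label>
      <label><input type="radio" name="ease" value="3"> 3</label>
      <label><input type="radio" name="ease" value="4"> 4</label>
      <label><input type="radio" name="ease" value="5"> 5</label>

      <input type="submit" value="Submit">
    </form>

    <script>
      // Before the form is submitted, check that no rank number is repeated.
      document.getElementById("survey").addEventListener("submit", function (e) {
        var ranks = [];
        document.querySelectorAll(".rank").forEach(function (s) { ranks.push(s.value); });
        if (new Set(ranks).size !== ranks.length) {
          alert("Please use each rank number for only one item.");
          e.preventDefault(); // block submission until the ranking is corrected
        }
      });
    </script>

Radio buttons enforce single selection by themselves (all buttons in a group share the same name attribute), whereas the uniqueness of rankings has to be checked with a script, as above.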

4.5. Observational Techniques

Like query techniques, observational techniques are used at various stages of website development to gather data. During the early part of design, they are used to study and understand the way users perform tasks with an existing system, to supplement the requirements-gathering process. Later in development, they are used to investigate how users interact with a prototype. Observation can be direct or indirect. In addition, it can take place in the field, such as in users’ normal work environment, or in a controlled environment, such as a usability laboratory. It can also be obtrusive or unobtrusive. Data in the form of observer’s notes, audio recordings, video recordings, photographs, and descriptions of behavior and tasks are qualitative, while data in the form of numbers, such as times, are quantitative.

4.5.1. Direct Observation

In direct observation, the investigator observes users, in person or remotely (e.g., via closed-circuit television), as they perform their activities, either in the field or in a controlled environment. Observing people in the field is a very useful technique in evaluation, as it provides an additional dimension of user-interaction data that interviews or questionnaires do not, such as information about social interaction and physical task performance, thereby filling in details that might otherwise be missed. For example, the observer is able to see why activities happen the way they do. However, for this type of observation to be as effective as it can be, it needs to be properly planned and conducted with care; otherwise, too much irrelevant data might be produced.

In order to conduct productive observation, it is typical to use a framework to structure and focus the observation. A framework essentially provides a guide on what to look for during observation. Using it, while being flexible to any changes in circumstances, usually produces the best result. A framework can be basic for inexperienced observers, or detailed for experienced ones. A basic framework can be as simple as focusing on just who, where, and what; that is, who is using what, where they are using it, and what they are doing with it. A detailed framework focuses on many more items, such as details of the people involved, the activities they are taking part in and why, specific aspects of activities, sequence of events, what participants are trying to achieve, and what their individual and collective moods are.

As well as the use of a framework, another aspect of planning that can influence the outcome of observation is the choice of the level of participation. Two main approaches characterize this: passive observation and participant observation. In passive observation, the person conducting the observation quietly observes and records the activities of the users. The level of participation is minimal, and so is the level of intrusion. However, because the observer is outside the observed situation, it can be difficult to capture enough detail about users’ activities, although this problem can usually be minimized by also capturing the situation in photographs and/or video and transcribing them later to produce a highly detailed analysis, even though these are typically elements of indirect observation.

In contrast, observation may be conducted from inside a situation, in which case the evaluator, referred to as a participant-observer, plays the dual role of participant and observer; that is, he/she performs tasks with users while also observing them. This type of observation, known as participant observation, is especially useful when it is difficult for users to express how they accomplish tasks, or when aspects of team performance are being evaluated to understand how members organize and perform their tasks. The main challenge of the approach is effectively separating the role of participant from that of observer and being able to give an objective report of the observation.

Other typical elements of planning include decisions about how data are going to be recorded and the strategy for interacting productively with the people being observed, particularly in terms of giving equal attention and consideration to everyone. Generally, asking questions is limited, as this can upset the natural flow of how users work and interact, which is typically one of the things that are observed. Observation notes are made during a session or as soon as possible after the session in order to avoid forgetting any details. Photographing and videoing can also be used.

When observing children with this technique, it is particularly important to blend in, so as to capture as much of their natural behavior as possible. For example, the observer should dress informally and not stand around, so as not to look like a figure of authority. To blend in more, the observer might also engage in an activity, such as using a tool, and be informal and playful when asking the children questions. Because note taking can introduce a sense of formality or of being assessed, the person asking questions is usually not the one taking notes, and any note taking is discreet.

Observing people in a controlled environment is markedly different from observing them in the field. It is usually done during the evaluation stage of the development life cycle, by which time the system being evaluated would already have been developed with users’ involvement. It is a more formal method than observation in the field and especially benefits from a pre-prepared script that states how a session should progress. The script typically specifies how every participant will be welcomed and told the aim of the study, its duration, and their rights, such as the right to leave at any time during the evaluation session. As with observation in the field, data are recorded through note taking, photographing, and videoing, all of which are aimed at capturing users’ interaction activities, such as those performed via computer keyboard and mouse. The equipment to be used for observation is normally set up prior to the session and arranged properly so that the required activities are captured.

An additional technique used during observation in a controlled environment is the think-aloud technique, which involves asking users to think aloud; that is, to say out loud what they are doing and thinking. It is designed to provide a window into what users are thinking while interacting with a system, instead of leaving the observer to guess. However, the formal nature of controlled observation can sometimes give users too strong a sense of being watched, which may result in unnatural behavior.

4.5.2. Indirect Observation

Indirect observation is designed for situations in which direct observation is not possible, such as when distance does not allow it, or when direct observation would be too obtrusive or too dangerous. The data collection techniques commonly used are video recordings, diaries, and interaction logs. Video recordings are done the same way as in direct observation, by positioning video cameras as necessary to capture the required activities, which are then analyzed later.

The diary technique involves asking participants to keep a regular diary of the details of their interaction with the system being investigated. Examples of what is recorded include what they did, when they did it, how much time they spent on various activities, what they found hard or easy, and what their reactions were to the situation. A diary can be in any format, but having a standardized format for all users can be quite beneficial in terms of consistency. It also simplifies storage in a database for analysis, if necessary. Although diaries are usually textual, the use of multiple media types is increasingly a possibility. Diaries are useful when conditions such as distance between participants, or between participants and observers, make direct observation impractical. They have the advantage of requiring very little in the way of resources, whether equipment or personnel. The main disadvantage is that they rely heavily on participants being willing and reliable, although these problems can be minimized by providing incentives and making the diary-entry procedure itself easy.

Instead of relying on users to record their activities, the interaction-logging technique uses software to track and record users’ activities as they interact with a system, and the collected data are analyzed later using any of a number of tools designed to process and give meaning to large amounts of data, including visualization tools. The activities logged vary according to the goals of the study, but typically include mouse activities (i.e., movements and button clicks), key presses, the number of times the help system is used, and the amount of time spent using it. Where possible, audio and video data may also be recorded and synchronized with these activities to help further the understanding of user interaction with the system being evaluated. As well as its use in evaluation, this technique is used for monitoring the on-line activities of visitors to websites, possibly for the purpose of improving the sites or evaluating the effects of some improvements. Naturally, collecting data on users’ activities in this manner without their consent may raise ethical issues, depending on what is collected. Unlike diaries, the interaction-logging technique is unobtrusive and can simply continue in the background.
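
To make the idea concrete, the following is a minimal, hypothetical sketch of how client-side interaction logging might be implemented in a Web page. The "/log" URL and the ten-second reporting interval are invented for illustration; a real study would typically log more detail and, as noted above, obtain users’ consent first.

    <script>
      var entries = [];

      // Record one event with a timestamp for later analysis.
      function record(type, detail) {
        entries.push({ type: type, detail: detail, time: Date.now() });
      }

      // Track mouse button clicks and key presses, as described above.
      document.addEventListener("click", function (e) { record("click", e.target.tagName); });
      document.addEventListener("keydown", function (e) { record("keydown", e.key); });

      // Every 10 seconds, send any collected entries to the server for storage.
      setInterval(function () {
        if (entries.length > 0) {
          navigator.sendBeacon("/log", JSON.stringify(entries)); // "/log" is a hypothetical endpoint
          entries = [];
        }
      }, 10000);
    </script>

Because the script simply runs in the background while the user works, it illustrates the unobtrusive quality of interaction logging mentioned above.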

4.6. Delivering an Application on the Web

Although a website can be hosted on a home computer, professional Web hosting services are used for serious websites. These services may also be provided by Internet service providers (ISPs). A Web hosting service essentially enables anyone to make their Web applications accessible on the Internet via the Web. Four basic steps are typically required to publish a website properly on the Internet: (1) registering a domain name, (2) choosing a Web hosting company, (3) linking the domain name with the Web host’s Web server, and (4) uploading files to the Web server. The following sections provide the details.

4.6.1. Registration of a Domain Name

The first task when registering a domain name is to decide on a name and then check its availability; that is, whether or not it is already taken. This can be done through any of various sites, for example, WHOIS (http://www.whois-search.com), where the domain name is simply entered and searched for. It is typical to choose a name that is catchy, easy to remember, and relevant to the purpose of the site, particularly if the aim is to attract as many people as possible. To be valid and usable, a domain name has to be registered, and there are numerous reputable companies available for doing this. Network Solutions is one of the first companies to have offered domain name registration and remains a leading competitor as of this writing. Many companies that offer domain names also offer Web hosting, along with the facility for checking availability. One of the components of a domain name is the top-level domain (TLD), that is, .com, .org, and so on, and one must be specified when registering. As well as generic TLDs like .com, there are also country-specific TLDs, such as .co.uk, .co.in, and .jp, which are for the UK, India, and Japan, respectively. Typically, payment for the privilege of owning a domain name is yearly, and the cost varies depending on the company and the required TLD. For example, a generic TLD is generally cheaper than a country-specific one.

4.6.2. Choosing a Web Hosting Company

Web hosting companies provide a wide range of services. Most offer these services in multiple packages, each with a different level of functionality and price. A quick search on the Web should reveal a myriad of packages, and which is suitable depends on the purpose of the website to be hosted. For personal sites, a basic account, which typically provides a free domain name, sub-domains, storage on the server, unlimited bandwidth, e-mail, and FTP, should be adequate. Free Web hosting provides similar facilities, except that the host may also require that adverts appear on the site’s pages. For professional sites, additional facilities are provided, such as databases, blogging and graphics tools, and a website builder that can be used to create a site. For e-commerce sites, data backup, data restoration, streaming, and security protocols, such as SSL (Secure Sockets Layer) and TLS (Transport Layer Security), which encrypt data to prevent eavesdropping and tampering, are provided, along with support for standard server-side scripting languages, such as JavaScript, Ruby, PHP, Java, and Python.

4.6.3. Linking Domain Name with the Web Server

Linking a domain name with a Web hosting account ensures that a request for the domain name connects a Web client (Web browser) to the Web server of the Web hosting company. Implementing this typically requires following step-by-step instructions provided by the hosting company. Once the process is completed, there is usually a waiting period (e.g., 24 to 72 hours) while the various name servers located around the world that store domain names are updated with the necessary information, after which the domain name is available to the world.
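
For illustration, the records below sketch what the resulting settings might look like in a hypothetical DNS zone-file fragment; the domain, name servers, and address are all placeholders (example.com is a reserved example domain and 203.0.113.10 a reserved documentation address). In practice, the hosting company’s control panel usually manages these records for you.

    ; Hypothetical zone-file fragment linking example.com to a Web host
    example.com.      IN  NS     ns1.examplehost.com.   ; the host's name servers
    example.com.      IN  NS     ns2.examplehost.com.
    example.com.      IN  A      203.0.113.10           ; the host's Web server address
    www.example.com.  IN  CNAME  example.com.           ; point www to the same server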

4.6.4. Uploading Files to the Web Server

The final step in publishing a Web application is to upload the files that constitute the application, such as HTML and CSS documents, and associated files, such as media files, to the Web hosting server. The easiest way to do this is to use an FTP program (FTP client), which allows files to be transferred to the desired directories on the server, usually through dragging and dropping. Different file types are usually placed in different directories, both to make them easier to manage and because the server may require it; the hosting company would normally provide the necessary instructions. A media file requested from a Web server is normally downloaded, but it can also be streamed, depending on the way it is requested from the server. However, proper streaming requires the use of specialized streaming technologies, which hosting companies usually offer as an additional service.
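
As a simple illustration, a command-line transfer with an FTP client might look like the following hypothetical session (the host name, account, and directory names are invented and vary from one hosting company to another); graphical FTP clients accomplish the same steps by dragging and dropping.

    $ sftp user@ftp.example.com      # connect to the hosting server
    sftp> cd public_html             # move into the publicly served directory
    sftp> put index.html             # upload the main HTML document
    sftp> put style.css              # upload the associated style sheet
    sftp> mkdir images               # create a directory for media files
    sftp> put logo.png images/       # place image files in their own directory
    sftp> exit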

Source: Sklar, David (2016), HTML: A Gentle Introduction to the Web’s Most Popular Language, O’Reilly Media, 1st edition.
