Assignment M4 (Summer 2017)
Due: Sunday, July 9th, 2017, by 11:59PM UTC-12 (Anywhere on Earth). This assignment is based on lesson 3.6 (Evaluation), and focuses on laying out your evaluation plan.
Answer the following prompt in a maximum of 1200 words, with a recommended length of 1000 words. Including more than 1200 words may incur a grade penalty. Note that only the overall assignment length limit is enforced; per-section lengths are provided as recommendations, but are not enforced.
You are encouraged to complement your response with diagrams, drawings, pictures, etc.; these do not count against the word limit. If you would like to include additional information beyond the word limit, you may include it in clearly-marked appendices. These materials will not be used in grading your assignment, but they may help you get better feedback from your classmates and grader. For example, you might include copies of previous assignments, copies of your surveys, raw data, interview transcripts, raw notes, etc.: anything that does not directly address the assignment’s questions, but rather helps understand your progress as a whole.
In this assignment, you’ll plan three evaluations to perform: one qualitative, one empirical, and one predictive. Each evaluation should target a different prototype from your previous assignment; evaluate one qualitatively, one empirically, and one predictively. In Assignment M5, you’ll choose two of these evaluations to actually execute.
Qualitative Evaluation: ~400 words
For the qualitative evaluation, select one of the methods for qualitative evaluation, such as interviews, surveys, think-aloud protocols, focus groups, or post-event protocols, and select which prototype it will evaluate. Note that different types of evaluations work better for different types of prototypes. Surveys work well for textual or paper prototypes because both can be delivered asynchronously. Post-event protocols work well for Wizard of Oz and card prototypes because they limit the roles the interviewer has to play simultaneously. Think-aloud works well for verbal prototypes because the nature of the discussion can be used both for delivering the prototype and for soliciting evaluative information. These are just guidelines, however.
First, lay out the evaluation plan, including who the participants will be, how they’ll be recruited, where the evaluation will take place, and whether/how it will be recorded. Note that for this assignment, you should plan to recruit from friends, family, or classmates rather than the public.
Then, lay out the actual content of the evaluation. The content will depend on the type of qualitative evaluation you choose:
- For interviews, what questions will you ask? Will the interview be structured (100% scripted) or semi-structured (scripted with room for follow-up questions and discussion)? What information about the task will you give the interviewee and when?
- For surveys, what are the actual questions?
- For think-aloud studies, what directions will the participant be given? What questions, if any, will you ask during their interaction?
- For post-event protocols, what directions will the participant be given? What data will be gathered during their engagement? What questions will you ask after they’re done?
Finally, discuss how the evaluation will address the requirements you identified during the data inventory and/or requirements definition phases. How will this evaluation help you gauge whether or not the prototype has actually met those requirements?
Empirical Evaluation: ~350 words
Next, select one of the prototypes to evaluate empirically. First, define your control and experimental conditions: what are you testing, and what are you using as a point of comparison? Note that depending on your target problem and prototypes, you may have to create some variation within your existing prototype to create something to test empirically. This may especially be true if you’re designing for a new task, rather than redesigning an existing interface.
Then, define your null and alternative hypotheses; remember, the null hypothesis is what you assume to be true unless your data provide sufficient evidence to reject it in favor of your alternative hypothesis. Then, describe the experimental method you will use. Will it be between-subjects or within-subjects? How will subjects be assigned to groups, what will they complete as part of their condition, and what data will they generate? What analysis will you use on this data? Finally, identify what lurking variables might confound your data.
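To make the analysis step concrete, here is a minimal sketch of how a between-subjects comparison of task-completion times might be analyzed. Everything here is hypothetical: the condition names, the timing data, and the choice of Welch's t statistic are invented for illustration, and your own analysis should match whatever data your conditions actually generate.

```python
import math
from statistics import mean, variance

# Hypothetical task-completion times (seconds) for a between-subjects study.
# "control" uses the existing interface; "experimental" uses the prototype.
control = [48.2, 51.5, 46.9, 53.0, 49.8, 50.4]
experimental = [41.7, 44.2, 39.9, 45.8, 42.5, 43.1]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(control, experimental)
print(f"mean(control)={mean(control):.1f}s  "
      f"mean(experimental)={mean(experimental):.1f}s  t={t:.2f}")
```

A large positive t here would be evidence against the null hypothesis that the two interfaces yield the same mean completion time; in practice you would also compute a p-value (e.g. with a statistics package) and report your chosen significance threshold in advance.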
Note that given the early stage of your project, an empirical evaluation may be tough to design. Because in the next assignment you will select only two of these three evaluations to actually conduct, it is acceptable to design an empirical evaluation that would be infeasible to carry out, whether due to the available resources or the status of your prototypes. The important task here is to experience planning all three types of evaluation.
Predictive Evaluation: ~200 words
For the predictive evaluation, you’ll plan either a cognitive walkthrough of your prototype or a GOMS model of it. Actually performing the walkthrough or constructing the model is a task for Assignment M5, however, and only if you choose this evaluation.
Plan your predictive evaluation by first selecting which type of task analysis you’ll do: performing a cognitive walkthrough or creating one or more GOMS models. Then, describe the specific task or tasks that you’ll be addressing with that predictive evaluation. What will the user’s goal be? What operators will be available to them? Will you be evaluating a user accomplishing a single goal they know how to do in advance, or will you be evaluating a user’s navigation around the interface to figure out how to accomplish their goal?
Note that the predictive evaluation you choose should likely depend on the task you want to investigate. If you’re looking at how efficiently an expert user can perform a known task, you likely want to construct a GOMS model of the operators, methods, and selection rules that will lead to accomplishing that goal. If you’re instead looking at how a novice user navigates a new interface, or how a user makes decisions and branches within the interface, you likely want to perform a cognitive walkthrough.
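For a sense of what a GOMS-style prediction produces, here is a sketch using the Keystroke-Level Model, a simplified GOMS variant. The operator times are the classic Card, Moran & Newell averages, but the task and its operator breakdown below are entirely invented for illustration; your own model would decompose a real task from your prototype.

```python
# Keystroke-Level Model operator estimates (seconds), per Card, Moran & Newell.
OPERATORS = {
    "K": 0.20,  # keystroke or button press
    "P": 1.10,  # point to a target with mouse/finger
    "M": 1.35,  # mental preparation
}

# Hypothetical operator sequence for "search for a contact":
# M (decide), P+K (tap search box), M, 5*K (type query), M, P+K (tap result)
task = ["M", "P", "K", "M", "K", "K", "K", "K", "K", "M", "P", "K"]

predicted_time = sum(OPERATORS[op] for op in task)
print(f"Predicted expert completion time: {predicted_time:.2f} s")
```

The value of such a model is less the absolute number than the comparison: modeling the same task in two prototype variants shows which design demands fewer operators from an expert user.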
Note that with the predictive evaluation, the line between planning and execution is less well-defined because the evaluation is performed by you rather than real users. This is why less space in the assignment is dedicated to the predictive evaluation: the majority of the work will be in performing the predictive evaluation rather than in planning it.
Preparing to Execute: ~50 words
Finally, select two of these evaluations to complete for the next assignment, and explain why you selected those two. It is acceptable for the ‘why’ to be superficial reasons, e.g. “My prototype isn’t ready for empirical evaluation” or “I can’t recruit people to participate in my qualitative evaluation”.
Assignments should be submitted to the corresponding assignment on T-Square in accordance with the Assignment Submission Instructions. Most importantly, you should submit a single PDF for each assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working software prototypes, etc.) that cannot be provided in PDF, you should provide them separately (either through the class Resources folder or your own upload destination) and submit a PDF that describes how to access the assignment.
This is an individual assignment. Every student should submit an assignment individually.
Late work is not accepted without advance agreement except in cases of medical or family emergencies. In the case of an emergency, please contact the Dean of Students.
As with all assignments and projects in this class, this assignment will be graded on a traditional A-F scale based on the extent to which your assignment meets expectations. Due to T-Square restrictions, your grade will be provided on a 5-point scale: a ‘5’ is an A, a ‘4’ is a B, a ‘3’ is a C, a ‘2’ is a D, a ‘1’ is an F, and a ‘0’ is a failure-to-submit.
After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone.
You will typically be assigned three classmates to review. You receive 1.5 participation points for completing a peer review by the end of the day Thursday; 1.0 for completing a peer review by the end of the day Sunday; and 0.5 for completing it after Sunday but before the end of the semester. For more details, see the participation policy.