Assignment M4 (Fall 2019)
Answer the following prompt in a maximum of 8 pages (excluding references) in JDF format. Any content beyond 8 pages will not be considered for a grade. 8 pages is a maximum, not a target; our recommended per-section lengths intentionally add to less than 8 pages to leave you room to decide where to delve into more detail. This length is intentionally set expecting that your submission will include diagrams, drawings, pictures, etc. These should be incorporated into the body of the paper unless specifically required to be included in an appendix.
If you would like to include additional information beyond the page limit, you may include it in clearly-marked appendices. These materials will not be used in grading your assignment, but they may help you get better feedback from your classmates and grader. For example, you might include copies of previous assignments, copies of your surveys, raw data, interview transcripts, raw notes, etc.: anything that does not directly address the assignment’s questions but helps the reader understand your progress as a whole.
In this assignment, you’ll plan three evaluations: one qualitative, one empirical, and one predictive. Each evaluation should target a different prototype from your previous assignment. In Assignment M5, you’ll choose two of these evaluations to actually execute.
Abstract: ~0.25 pages
First, include an abstract that briefly introduces your project and gives context on the task you’re investigating throughout all your M assignments. You’ll include this abstract in each M assignment to give the grader and your peers context on what you’re working on. If you’d like to include more context than you can fit into 50 words, feel free to include an appendix containing an extended abstract.
Qualitative Evaluation: ~2 pages
For the qualitative evaluation, select one of the methods for qualitative evaluation, like interviews, surveys, think-aloud protocols, focus groups, or post-event protocols, and select which prototype will be evaluated. Note that different types of evaluations work better for different types of prototypes. Surveys work well for textual or paper prototypes because both can be delivered asynchronously. Post-event protocols work well for Wizard of Oz and card prototypes because they limit the number of roles the interviewer has to play simultaneously. Think-aloud works well for verbal prototypes because the discussion can be used both to deliver the prototype and to solicit evaluative information. These are just guidelines, however.
First, lay out the evaluation plan, including who the participants will be, how they’ll be recruited, where the evaluation will take place, and whether/how it will be recorded. Note that for this assignment, you should plan to recruit from friends, family, or classmates rather than the public.
Then, lay out the actual content of the evaluation. The content will depend on the type of qualitative evaluation you choose:
- For interviews, what questions will you ask? Will the interview be structured (100% scripted) or semi-structured (scripted with room for follow-up questions and discussion)? What information about the task will you give the interviewee and when?
- For surveys, what are the actual questions?
- For think-aloud studies, what directions will the participant be given? What questions, if any, will you ask during their interaction?
- For post-event protocols, what directions will the participant be given? What data will be gathered during their engagement? What questions will you ask after they’re done?
Finally, discuss how the evaluation will address the requirements in the data inventory and/or requirements definition phases. How will this evaluation help you gauge whether or not the prototype has actually met the requirements?
Empirical Evaluation: ~2 pages
Next, select one of the prototypes to evaluate empirically. First, define your control and experimental conditions: what are you testing, and what are you using as a point of comparison? Note that depending on your target problem and prototypes, you may have to create some variation within your existing prototype to create something to test empirically. This may especially be true if you’re designing for a new task, rather than redesigning an existing interface.
Then, define your null and alternative hypotheses; remember, the null hypothesis is what you assume to be true unless the data provide sufficient evidence to reject it in favor of your alternative hypothesis. Then, describe the experimental method you will use. Will it be between-subjects or within-subjects? How will subjects be assigned to groups, what will they complete as part of their condition, and what data will they generate? What analysis will you use on this data? Finally, identify what lurking variables might confound your results.
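As a minimal sketch of what "analysis" might mean here (every name and number below is hypothetical, not part of the assignment), a between-subjects comparison of task-completion times could be summarized with a Welch's t statistic, which does not assume equal variance across conditions:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples,
    a common analysis for a between-subjects design."""
    m_a, m_b = statistics.mean(sample_a), statistics.mean(sample_b)
    v_a, v_b = statistics.variance(sample_a), statistics.variance(sample_b)
    n_a, n_b = len(sample_a), len(sample_b)
    return (m_a - m_b) / math.sqrt(v_a / n_a + v_b / n_b)

# Hypothetical task-completion times (in seconds) per condition:
control = [48, 52, 50, 47, 53]
experimental = [41, 44, 43, 40, 45]
t = welch_t(control, experimental)
```

In practice you would compare |t| against a critical value (or compute a p-value) to decide whether to reject the null hypothesis; a within-subjects design would instead call for a paired test.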
Note that given the early stage of your project, an empirical evaluation may be tough to design. In the next assignment, you will select only two of these three evaluations to actually conduct, so it is acceptable to design an empirical evaluation that would be infeasible to carry out given either the available resources or the status of your prototypes. The important task here is to experience planning all three types of evaluation.
Predictive Evaluation: ~1.25 pages
For the predictive evaluation, you’ll either perform a cognitive walkthrough of your prototype or construct a GOMS model of it. The walkthrough or model itself, however, is a task for Assignment M5 if you choose this evaluation; here, you only plan it.
Plan your predictive evaluation by first selecting which type of task analysis you’ll do: performing a cognitive walkthrough or creating one or more GOMS models. Then, describe the specific task or tasks that you’ll be addressing with that predictive evaluation. What will the user’s goal be? What operators will be available to them? Will you be evaluating a user accomplishing a single goal they know how to do in advance, or will you be evaluating a user’s navigation around the interface to figure out how to accomplish their goal?
Note that the predictive evaluation you choose should likely depend on the task you want to investigate. If you’re looking at how efficiently an expert user can perform a known task, you likely want to construct a GOMS model of the operators, methods, and selection rules that will lead to accomplishing that goal. If you’re instead looking at how a novice user navigates a new interface, or how a user makes decisions and branches within the interface, you likely want to perform a cognitive walkthrough.
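For the GOMS route, a Keystroke-Level Model prediction is, at its simplest, a sum of standard operator time estimates. A minimal sketch, using the published Card, Moran & Newell operator values and a purely hypothetical task:

```python
# Standard Keystroke-Level Model operator estimates (Card, Moran & Newell),
# in seconds: K = keystroke, P = point with mouse, H = home hands on a
# device, M = mental preparation.
OPERATORS = {"K": 0.2, "P": 1.1, "H": 0.4, "M": 1.35}

# Hypothetical method: reach for the mouse, mentally prepare, point at a
# text field, click, then type a five-character entry.
steps = ["H", "M", "P", "K"] + ["K"] * 5
predicted_time = sum(OPERATORS[op] for op in steps)  # expert execution time
```

A fuller GOMS model would also enumerate methods and selection rules, but even this arithmetic illustrates the kind of prediction the model produces: an estimate of expert, error-free execution time for a known task.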
Note that with the predictive evaluation, the line between planning and execution is less well-defined because the evaluation is performed by you rather than real users. This is why less space in the assignment is dedicated to the predictive evaluation: the majority of the work will be in performing the predictive evaluation rather than in planning it.
Preparing to Execute: ~0.5 pages
Finally, select two of these evaluations to complete for the next assignment, and explain why you selected those two. It is acceptable for the ‘why’ to be superficial, e.g. “My prototype isn’t ready for empirical evaluation” or “I can’t recruit people to participate in my qualitative evaluation”.
Complete your assignment using JDF, then save your submission as a PDF. Assignments should be submitted to the corresponding assignment submission page in Canvas. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.
This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.
Late work is not accepted without advance agreement except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.
Your assignment will be graded on a 20-point scale using a rubric designed to mirror the question structure. Make sure to answer every question posed by the prompt. Pay special attention to bolded words and question marks in the question text.
After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone.
You will typically be assigned three classmates to review. You receive 1.5 participation points for completing a peer review by the end of the day Thursday; 1.0 for completing a peer review by the end of the day Sunday; and 0.5 for completing it after Sunday but before the end of the semester. For more details, see the participation policy.