Assignment M5 (Summer 2018)
Answer the following prompt in a maximum of 1500 words, with a recommended length of 1200 words. Including more than 1500 words may incur a grade penalty. Note that only the overall assignment length limit is enforced; per-section lengths are provided as recommendations, but are not enforced.
You are encouraged to complement your response with diagrams, drawings, pictures, etc.; these do not count against the word limit. If you would like to include additional information beyond the word limit, you may include it in clearly-marked appendices. These materials will not be used in grading your assignment, but they may help you get better feedback from your classmates and grader. For example, you might include copies of previous assignments, copies of your surveys, raw data, interview transcripts, raw notes, etc.: anything that does not directly address the assignment’s questions, but rather helps understand your progress as a whole.
In this assignment, you’ll execute the two evaluations you chose in the previous assignment and report the results. Note that this means you will choose two of the following three sections to complete.
Abstract: ~50 words
First, include an abstract that briefly introduces your project and gives context on what task you’re addressing. You’ll include this abstract in each M assignment to give the grader and your peers context on what you’re working on. If you’d like to include more context than you can fit into 50 words, feel free to include an appendix containing an extended abstract.
Qualitative Evaluation: ~400 words
If you selected the qualitative analysis, report first on the evaluation as a whole: how many participants took part? How well did the sessions go? Were all sessions run the same, or did you make changes between participants? Is there anything you would do differently purely organizationally next time?
Then, report the raw results of your evaluation. Depending on the type of evaluation you performed, this might be the raw survey results, the interview transcripts, the notes you took during interviews, etc. If these are too extensive to include in the body of your assignment, include them as an appendix and instead provide a descriptive summary of the results: do not analyze the results yet, but instead simply report the results objectively.
Then, analyze the feedback that the participants gave you. What are the main takeaways for improving the interface? What feedback surprised you? What feedback did you receive that you expected? Unlike the summary of the raw results, this analysis can be interpretive. What did the participants’ feedback actually tell you about your interface?
Finally, describe briefly the changes that feedback would already suggest in your prototypes. You’ll delve into more detail on these later, but comment specifically on the changes that might result from the qualitative evaluation.
Empirical Evaluation: ~400 words
If you selected empirical analysis, first report on the testing process as a whole. Did everything go as expected? What deviations, if any, did you encounter? What would you do differently next time? Did anything happen that might call the results into question?
Then, perform and report the results of your statistical test, including the t-statistic, z-statistic, p-value, or whatever else comes out of the test. As part of this, you should include the raw data, in an appendix if necessary. Based on this test, do you reject or fail to reject the null hypothesis, and why? Report also the number of participants and any other information relevant to the statistical trial.
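As a reference point, a within-subjects comparison of two prototypes is often analyzed with a paired t-test. The sketch below shows one way to compute the t-statistic from raw data using only Python's standard library; the participant counts, data values, and variable names are hypothetical placeholders, not part of the assignment.

```python
import math
import statistics

# Hypothetical data: task-completion times in seconds for the same ten
# participants using two prototype versions (made-up numbers for illustration).
version_a = [42.1, 38.5, 51.0, 45.2, 39.9, 47.3, 44.0, 50.6, 41.8, 46.5]
version_b = [36.4, 35.0, 44.2, 40.1, 37.8, 41.0, 39.5, 45.3, 38.2, 42.7]

# Paired (within-subjects) t-test on the per-participant differences.
diffs = [a - b for a, b in zip(version_a, version_b)]
n = len(diffs)
t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

print(f"n = {n}, mean difference = {statistics.mean(diffs):.2f} s, t = {t_stat:.2f}")
# Compare t against the critical value for n - 1 = 9 degrees of freedom
# (about 2.262 at alpha = 0.05, two-tailed), or obtain an exact p-value
# with scipy.stats.ttest_rel if SciPy is available.
```

For a between-subjects design (different participants in each condition), an independent-samples t-test would be the analogous choice; either way, report the statistic, the p-value, and the sample size alongside your conclusion.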
Then, analyze these results. Did they match what you expected? If not, why do you think the results were not as expected? If so, do you believe the results are because of real differences or because of lurking variables or experimental errors?
Finally, based on this analysis, what changes would you make to your interface, or what characteristics of the interface would you make sure to carry forward to the next round of prototyping?
Predictive Evaluation: ~400 words
If you selected the predictive analysis, perform the predictive analysis that you planned in Assignment M4.
If you planned on a cognitive walkthrough, walk through your interface specifically explaining how the user will know what to do at every stage of the process. Remember, the emphasis of a cognitive walkthrough is on cognitive: don’t just outline what the user should do, but emphasize how they’ll know what to do. In doing so, you should specifically leverage the design principles from Unit 2: you could use principles like affordances, mapping, and consistency to explain how the user will know what to do at a given time.
If you planned on creating one or more GOMS models, create them. Make sure to explicitly state what the goals, operators, methods, and selection rules are. Make sure that your operators are specified in terms of the prototype that you’ve created. You should generally have multiple methods for any interesting task, although you may also find that you have multiple tasks with only one method for each; either is acceptable. After creating your GOMS model, analyze it for its relative efficiency: could it be made more efficient? What would the trade-offs be? What could be simplified, and what other parts might become more complex as a result?
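One way to ground the efficiency analysis is with a keystroke-level model (KLM), the simplest GOMS variant: assign each primitive operator a standard time estimate and sum over each method's operator sequence. The sketch below uses the published KLM operator estimates; the two methods, their operator sequences, and the task itself are hypothetical examples, not drawn from any particular prototype.

```python
# Standard KLM operator time estimates (seconds): K = keystroke,
# P = point with mouse, H = home hands between devices, M = mental
# preparation, B = mouse-button press or release.
KLM = {"K": 0.28, "P": 1.1, "H": 0.4, "M": 1.35, "B": 0.1}

def method_time(ops):
    """Total predicted execution time for a sequence of KLM operators."""
    return sum(KLM[op] for op in ops)

# Hypothetical task with two methods (selection rule: expert users
# choose the shortcut, novices choose the toolbar).
toolbar = ["M", "H", "P", "B", "B"]   # think, home to mouse, point, click
shortcut = ["M", "K", "K"]            # think, press two-key chord

print(f"toolbar: {method_time(toolbar):.2f} s, "
      f"shortcut: {method_time(shortcut):.2f} s")
```

A comparison like this makes the trade-off discussion concrete: the faster method may demand more recall from the user, which is exactly the kind of tension the analysis should surface.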
Evaluation Summary: ~350 words
Finally, based on the results of these evaluations, briefly describe what you would do in the next iteration through the design life cycle.
First, what information would you like to understand about the user more fully that could be investigated through additional needfinding exercises? What questions arose based on the evaluation that need further investigation separate from the prototypes themselves?
Second, has this experience brought to mind any additional design alternatives you might explore in a second iteration through the design life cycle? If so, what are they? These might be all-new designs, or they might be new wrinkles on your existing prototypes.
Third, what revisions to the prototypes would you already expect to make, either changing them at their current level of fidelity or raising them to the next level of fidelity? If your evaluations were generally positive on the prototype, what level of fidelity would you raise it to next? If they were a little more critical, how would you incorporate those critiques for a next round of evaluation?
Fourth, assuming you made those changes, what type of evaluation would you employ next? Are your prototypes ready for the rigor of a true empirical evaluation, or would you gather more feedback based on a qualitative or predictive evaluation first?
Assignments should be submitted to the corresponding assignment submission page in Canvas. You should submit a single PDF for this assignment. This PDF will be ported over to Peer Feedback for peer review by your classmates. If your assignment involves things (like videos, working prototypes, etc.) that cannot be provided in PDF, you should provide them separately (through OneDrive, Google Drive, Dropbox, etc.) and submit a PDF that links to or otherwise describes how to access that material.
This is an individual assignment. All work you submit should be your own. Make sure to cite any sources you reference, and use quotes and in-line citations to mark any direct quotes.
Late work is not accepted without advance agreement except in cases of medical or family emergencies. In the case of such an emergency, please contact the Dean of Students.
Your assignment will be graded on a 20-point scale coinciding with a rubric designed to mirror the question structure. Make sure to answer every question posed by the prompt. Pay special attention to bolded words and question marks in the question text.
After submission, your assignment will be ported to Peer Feedback for review by your classmates. Grading is not the primary function of this peer review process; the primary function is simply to give you the opportunity to read and comment on your classmates’ ideas, and receive additional feedback on your own. All grades will come from the graders alone.
You will typically be assigned three classmates to review. You receive 1.5 participation points for completing a peer review by the end of the day Thursday; 1.0 for completing a peer review by the end of the day Sunday; and 0.5 for completing it after Sunday but before the end of the semester. For more details, see the participation policy.