Designing for Engagement: A Comparison of Canvas and a Visual Peer Review Dashboard

Peer review is widely used in higher education, but in visualization education it poses a distinct challenge: evaluating visual work requires domain-specific judgment that general-purpose platforms are not designed to support. In this study, we examine how interface design shapes the quality of peer feedback.

We introduce VisPeerReview, a visualization-specific learning analytics dashboard that integrates the visualization under review with rubric-guided prompts and inline annotation tools. The system is designed to support evaluative reasoning directly at the point of interaction, rather than treating peer review as a detached textual task.

We conducted a three-phase mixed-methods evaluation in an undergraduate data visualization course, comparing VisPeerReview with Canvas’s default peer review workflow. Analyzing interaction logs, peer review text, and survey responses, we find that the dashboard condition produces longer and more linguistically structured feedback. Students also consistently preferred the system, and sentiment analysis of their reviews indicates clearer evaluative intent and more constructive language.

Beyond comparing two tools, the results show that peer review quality is not only a function of student ability, but also of interface organization. When evaluation is supported through domain-aligned structures, students produce more meaningful and interpretable feedback. This positions interface design as a central factor in developing evaluative judgment in computing and visualization education.

Published: ACM Transactions on Computing Education (2026, Advance online publication)
https://doi.org/10.1145/3800962

Authors: Friedman, A., Rahman, M. D., Dinh, L., & Rosen, P.