The Q Guide needs an evaluation of its own

By The Crimson Staff

In the weeks leading up to the end of the semester, students worrying about their final exams and papers were faced with yet another bother: persistent nagging. E-mails from the Registrar continually reminded students that, on top of their other responsibilities, they were expected to evaluate their courses for next year's Q Guide. While the administration may be correct that too many students are neglecting to fill out their evaluations, the Q Guide also has room to improve.

The general design of the guide, which has remained largely the same for several years, could be greatly improved with some crucial changes. First, the system of course evaluations is flawed, potentially resulting in misleading ratings. Some of the questions are redundant or not applicable to many courses; they could easily be omitted on a course-by-course basis, but often are not. The rating rubric itself—in which a score of 2, the second-lowest grade, is labeled "Fair"—skews far too positive and causes confusion. The ratings would also be more useful if they were based on a broader and more precise scale, from 1 to 10, for example, which would likely provide a more honest picture of a course.

Furthermore, this year's online Q Guide is poorly designed and hard to use. The list of course names one is expected to navigate shows only the numbers, not the titles, of courses—a baffling departure from previous years. There are also no links to navigate from one course to the next without returning to the main menu, making browsing all of the courses in a department tedious.

The guide is rendered less effective by a lack of integration with the my.harvard.edu course shopping tool—for example, there is no way to get from a Q Guide entry to a course website. Incorporating the Q Guide entirely into the shopping tool could make course ratings more visible and, in turn, make shopping less tedious. The lack of certain other functionalities, such as the ability to search and rank by ratings, is also a common frustration—evidenced by the abundance of student-designed search engines and spreadsheets developed to make finding courses that fit a certain specification easier. If a student in CS 50 is capable of creating a better search engine for the Q guide, surely the Harvard administration could do so as well.

Fortunately, the Q Guide administrators are currently considering many of these changes, and we hope to see them implemented as soon as possible. It is true, however, that reforms will have little effect if not enough students fill out their course evaluations. One effective way that professors and teaching fellows have encouraged evaluations is to send more personalized e-mails to their students and to tailor evaluation questions to their specific classes. This makes students' lives easier and shows them the real stake that faculty have in evaluations. The best solution, however, would be for the administration to give students real incentives to turn in their evaluations. Withholding grades until students finish their evaluations would be an excellent way of ensuring that few classes see insufficient response rates.

The Q Guide is a valuable institution that only works when students, faculty, and administrators are all committed to its success. If those who control the format and structure of the guide make the necessary changes, students will surely respond with an increased enthusiasm for completing their course evaluations.
