There’s a fine line between watching and watching out for. With the increasing deployment of surveillance technology to monitor student protest, dozens of universities now find themselves on the wrong side of that line.
At least 37 universities have been confirmed as clients of Social Sentinel (also known by its new, rebrand-attempting name, Navigate 360), a technology monitoring company that tracks students’ social media behavior with the stated intention of identifying risks of self-harm or violence on campuses and alerting administrators.
The purported goal is admirable. Yet the chosen method — invasive, sweeping surveillance — is entirely inappropriate for the problem at hand. Worse still, the institutions using the service do not seem exclusively concerned with improving mental health. Social Sentinel’s services have, in fact, reportedly been deployed in far more nefarious contexts: following buzzwords like “protest” on social media and even scouring one student’s posts after she accused administrators at North Carolina A&T of mishandling her rape allegation.
Universities seem to be using Social Sentinel whenever surveillance might make their lives easier, even if doing so makes students’ lives worse. Meanwhile, many affected students aren’t even aware that such surveillance is occurring. Social Sentinel’s clients have taken an underhanded approach, making changes that deeply affect students’ privacy without having an honest discussion about why.
Mental health is a clear and pressing concern on college campuses; it’s one that demands administrative attention. But Big Brother-esque surveillance is hardly the solution administrators should turn to. An approach grounded in mass data gathering comes off as ineffective support at best and an insensitive justification for invading students’ privacy at worst. In particular, surveillance-centric approaches suggest that universities are more concerned with their liability in mental health-related incidents than with the actual well-being of their students. Rather than focusing on humanist approaches to rehabilitate struggling students, they have resorted to strategies that can act only as a band-aid.
We can even more comfortably condemn the application of this tool as a means to manage student protest. Social Sentinel’s creators not only knew their services were being used to track student protests; they also, on occasion, encouraged universities to use the surveillance tool in this way. Student protests have historically played a significant role in ensuring that administrations are held accountable — treating these vital elements of democratic participation with tactics reminiscent of authoritarian regimes is, to put it bluntly, not a good look for American universities.
As advances in artificial intelligence make the management of big data ever easier, the tools of surveillance will become accessible to even the smallest institutions. What was once the province of governments and dictators will become available to every overzealous administrator who wants to indulge in the folly of technological solutionism when presented with thorny, inconvenient problems. Mental health and violence on college campuses are immensely complex issues. They cannot be solved by simply collecting more data, developing more programs, or pushing more buttons. Oversimplifying such issues is both ineffective — the causes of the issues remain wholly unaddressed — and dangerous, as evidenced by the lack of regulation that such illusory solutionism invites.
While we maintain that AI-driven surveillance is problematic, elements of this new approach are here to stay. That means our centers of knowledge production must devote resources to studying its consequences; Harvard, for its part, should create an AI ethics institute. When such surveillance inevitably exceeds its stated goals — when watchful eyes are turned toward protest, or free association, or reproductive healthcare — we must have clear ethical guidance and policies for how to push back.
This responsibility certainly falls in large part on universities, but it also falls on all of us as students. We ought to make clear — through protests and editorials and advocacy — that we won’t go along to get along.
A line has been crossed; universities must take a step back and reassess how far past students’ comfort they will go in the name of ostensibly supporting these very same students. For our part, keeping a critical eye on surveillance is a good place to start. It’s time to watch the watchers.
This staff editorial solely represents the majority view of The Crimson Editorial Board. It is the product of discussions at regular Editorial Board meetings. In order to ensure the impartiality of our journalism, Crimson editors who choose to opine and vote at these meetings are not involved in the reporting of articles on similar topics.