Before founding Facebook, Harvard’s most famous dropout made a predecessor: Facemash, a 2003 website that allowed his Harvard classmates to rank one another based on attractiveness.
Over the past decade, Mark Zuckerberg’s Facebook has received widespread criticism for what many characterize as unethical business practices: abusing user data, using algorithms that spread misinformation, and knowingly harming people’s mental health. And it seems pretty clear that ethics was not a major consideration when Zuckerberg created Facemash either.
“The Kirkland Facebook is open on my computer desktop and some of these people have pretty horrendous Facebook pics,” Zuckerberg, a Computer Science concentrator, wrote in an online journal at the time. “I almost want to put some of these faces next to pictures of farm animals and have people vote on which is more attractive.”
But the Embedded EthiCS @ Harvard program wants to prevent the next Facemash. The program is a joint initiative between the Department of Computer Science and the Department of Philosophy. Its slogan proudly announces: “bringing ethical reasoning into the computer science curriculum.”
According to the program’s website, Computer Science professor Barbara J. Grosz and Philosophy professor Alison J. Simmons co-founded the program in 2017 after “overwhelming undergraduate interest.” Embedded EthiCS “embeds philosophers into courses.” In practice, this typically means a single ethics-based lecture in a semester-long course.
“The aim of Embedded EthiCS is to teach students to consider not merely what technologies they could create, but whether they should create them,” the website reads.
If Embedded EthiCS existed during Mark Zuckerberg’s time at Harvard, would Facemash — and perhaps Facebook — ever have existed?
***
About a month before writing this article, I sat through an Embedded EthiCS talk for a course I’m taking, Computer Science 51: “Abstraction and Design in Computation.” It was two days after our midterm, and our professor referred to the lecture as “a kind of post-exam treat.” Clearly, the students in the class didn’t see it that way. While the course has an enrollment of nearly 200, around 50 students showed up to the lecture.
I understood why. I even considered skipping myself, since in-person attendance was expected but not mandatory. But out of curiosity, I went.
For the first 20 minutes of the lecture, the Philosophy Ph.D. candidate dissected what, precisely, the word “responsible” meant. We learned about the distinction between forward- and backward-looking responsibility, and how intervening agents affect our understanding of these issues. We were then presented with case studies. We were asked whether Facebook was responsible for allowing housing ads that discriminated by race (yes), whether Amazon was responsible for a hiring algorithm it wrote that inadvertently discriminated by sex (yes), and whether Amazon should have foreseen that (I didn’t feel I had the technical expertise yet to answer).
Near the end, she asked us to reflect: In your role as a software engineer, if you can reasonably foresee that a certain design choice or algorithm would lead to harmful outcomes, what should you do?
But after the lecture, I didn’t feel like I had any new ways to answer that. I’m not planning on becoming a software engineer. But I imagine that if I had a high-paying job at Facebook, and my product manager told me to code a harmful algorithm, I wouldn’t actually know how to respond. Should I outright refuse? Picket? Quit? I also had no idea what tools I could use to foresee harm. Even if I knew, once my algorithm caused harm, that I was “responsible” from a philosophical standpoint, I never learned how to actually prevent that harm.
We were assigned a 400-word essay answering this question. I dashed off my response: If I were a software engineer, and I could foresee a harmful outcome because of a particular design choice, I would not make that design choice. I mean, of course — isn’t that what any decent person would do?
***
In the following week, I couldn’t stop thinking about CS 51’s Embedded EthiCS module. Each time I did, I was reminded of another conversation I’d had with a teaching fellow in a Computer Science course this past fall, when he mentioned that his roommate had taken a job at Meta.
I’ve always had a pretty fraught relationship with Instagram — the way it sucks away my attention and throws me into negative spirals. As it turns out, this was by design: recent whistleblower reports revealed that the company knew its algorithms fueled misinformation and polarization and harmed people’s mental health.
So I asked the TF how his roommate felt about the ethics of working at Meta. The TF told me that he honestly didn’t think his roommate thought about it that much. He was making a huge salary, so why would he?
I know that the decision to work at Meta or any other big tech company can be a complicated one. But like it or not, about 17 percent of graduating seniors will go on to work in the technology industry. An additional 40 percent will work in consulting or finance, and will more than likely interact with the technology industry through their work. (CS 51 itself teaches the programming language OCaml, which is used by almost no one except the Wall Street trading firm Jane Street Capital.)
For Harvard students, the ethical questions surrounding technology are not mere hypotheticals or stories in the news. They are decisions they’ll have to make themselves, potentially as leaders in the industry.
And now, with the rise of increasingly sophisticated artificial intelligence, these tech-ethics questions seem to be at the forefront of everyone’s mind. With AI tools like ChatGPT that can swiftly complete tasks and generate concise, natural language, it’s hard not to wonder whether the technology will significantly alter what it means to be human. And many believe the threat AI poses might be even more harrowing: In one survey of machine learning experts, nearly half of respondents said there was a 10 percent chance that AI ends up killing all of humanity.
***
I wasn’t sure if I was the only one who felt this way. I wanted to talk with other students involved in Computer Science to get a better sense of the atmosphere.
One of the first people I reached out to was Naomi Bashkansky ’25, a Computer Science concentrator involved in the Harvard AI Safety Team. Bashkansky and I met on a Saturday morning in the Leverett House Junior Common Room. She told me she hadn’t slept well the night before; she had stayed up thinking about Embedded EthiCS.
I asked her what she thought of the program. She paused for a full 30 seconds before she said, “I think a lot of people at Harvard don’t take Embedded EthiCS lectures very seriously. I think it's, at least in part, because it seems like Embedded EthiCS lecturers underestimate students.”
The questions the Embedded EthiCS lectures tackle are too simple, she told me, and the vast majority of people have enough of a grasp of ethics to answer them.
“Often, when you go into an Embedded EthiCS lecture, you kind of already know what the correct answer is going to be,” she continued after another pause. “Like, ‘Don’t harm this population,’ for instance.”
Alexander S. Pedersen ’23, a Computer Science concentrator and former Crimson Tech Editor, agreed that the “actual concerns about ethics in computer science” weren’t being adequately addressed by Embedded EthiCS lectures.
“The first one I had was really good,” he said. “And I think every single one after that has been sort of, almost funny-bad.”
He felt that many of the philosophers had “a pretty noticeable gap in their actual knowledge of the concepts that we’re working with in class.” He remembered the lecture in a machine learning course he took as a “missed opportunity.” Instead of focusing on issues that engaged with the technology — like exploring the “very prominent” issue of how biased data impacts a system’s predictions — he felt the lectures often focused on obscure philosophical issues.
“It was like some philosophy concept,” he said of that module. “We made some flow chart. Basically, it wasn’t related at all.”
Amy X. Zhou ’23, a former Crimson Business Manager who jointly concentrates in Computer Science and Government, understood how it can sometimes seem difficult to marry Computer Science and Philosophy.
In a Philosophy class she is taking about the “Ethics of Computing Technologies,” Zhou said, she has seen how helpful a strong philosophical framework can be for working through complex ethical issues. But she has also seen how hard it can be to apply philosophy to practical decisions.
And she sees the pace of technological development and the pace of philosophy as potentially incompatible.
“The philosophy problems are never actually going to be solved, which is why they’re philosophy problems,” she said. “We do have to make a decision at some point.”
“The issue that we run into a lot in my philosophy class is that a lot of the issues that philosophers are tackling are not exactly easy to translate into the practical programming side,” she told me. “So it’s like, what is the practical purpose of discussing philosophy?”
Still, Zhou said her own experience at the lectures was a positive one.
“They’ve all been really interesting and cool, and have made me think about the world in different ways,” she said.
Another student in CS 51, Annabel S. Lowe ’26, told me she enjoyed the opportunity to learn about “a more real-world application,” especially since the CS department typically focuses on “more theoretical” topics.
And even though some students bemoan the lack of technical material incorporated into Embedded EthiCS, some courses do make an effort to weave the two together.
“My sense in cs120 is that, on the whole, students welcomed the break in non-stop technical material, to think about some of the philosophical issues associated with the algorithms they were learning,” wrote Computer Science professor Salil P. Vadhan ’95 in an email. “And it was pretty tightly connected to the technical course material - for example, we gave a homework problem around mathematically modeling some of the fairness conceptions discussed in the EmbE module.”
Simmons, the Embedded EthiCS co-director, wrote in an email that “the feedback has been extremely positive” for the program.
“Not being able to mount a full battery of team-taught seminars, Embedded EthiCS was created as a scalable way to increase the amount of ethical reasoning education across the CS curriculum,” she wrote.
“Our hope and expectation is that repeated engagement with modules will have an impact,” she continued. “Measuring success will take time since the proof is in the pudding, in this case in the work that Harvard students do once they are in industry.”
“I think it's super important,” Lowe said about Embedded EthiCS. “I just think there are definitely ways it could be improved, but the fact that they've recognized and started it is definitely a big step.”
Pedersen views Embedded EthiCS’s efforts differently.
“I don’t really think anyone’s taking too much away from it,” said Pedersen. “So it’s almost just performative.”
Meanwhile, Bashkansky wishes the program would shoot even higher.
“I think that Embedded EthiCS lectures should have a more grandiose goal of sorts,” Bashkansky said. “But I don’t think it’s the kind of lecture that would make someone question their career and their decision and think, ‘Wow, maybe I should be focusing on more ethical things or something.’”
***
From his online journal musings to the final hot-or-not product that was Facemash, Zuckerberg’s ventures have certainly raised ethical concerns.
But the aftermath of Facemash tells a slightly different story. After facing outrage from undergrads and administrators alike over the website, Zuckerberg took it down. In an apology letter, he wrote: “I understood that some parts were still a little sketchy and I wanted some more time to think about whether or not this was really appropriate to release to the Harvard community.”
Years later, Zuckerberg would be called to testify in front of Congress about the harm Facebook inflicted through the misuse of users’ data and its role in spreading fake news. Only then would he apologize. And he still denies whistleblower concerns.
Like many other students, college-aged Zuckerberg seems to have had a basic grasp of ethics. He didn’t need a philosopher to tell him that Facemash was “sketchy,” and that he was the one responsible for it. The problem, then, isn’t that Zuckerberg was blind to the questionable ethics of his project; it’s that he knew Facemash was dubious, but he made it anyway.
Correction: April 24, 2023
A previous version of this article misspelled the name of Embedded EthiCS Co-Director Alison J. Simmons.
— Magazine writer Sage S. Lattman can be reached at sage.lattman@thecrimson.com. Follow her on Twitter @sagelattman.