Former CEO and Chairman of Google Eric Schmidt Talks AI in National Security at Harvard IOP

Former Google CEO Eric Schmidt, right, spoke about the potential risks of artificial intelligence development at a Wednesday Institute of Politics forum. By Muskaan Arshad
By Camilla J. Martinez, Crimson Staff Writer

Former Google CEO Eric E. Schmidt discussed the future of artificial intelligence in national security at a Harvard Institute of Politics forum Wednesday evening.

The discussion — co-sponsored by the Harvard College Emerging Technologies Group — was moderated by Harvard Kennedy School Government professor Graham T. Allison ’62. The forum delved into the implications and effects of AI development on national and global security.

Schmidt opened the discussion with his predictions for the advancing landscape of AI within the next three years.

“The most likely scenario, in a positive sense, is getting rid of some of the loose, niche issues and adding something called groundedness, so it has actual facts and also recency,” he said. “These systems take a long time to train so they’re always out of date.”

In contrast, Schmidt pointed to the potential risks of AI development. He cited vulnerabilities to “significant damage from a cyber attack” or “biological tech.”

“How do we get these systems’ values to be in alignment with human values?” he asked.

Allison also asked Schmidt about his thoughts on the speed of China’s AI advancement in relation to that of the United States.

“They didn’t get to the LLM space. They didn’t get to this AI space early enough,” Schmidt said.

Two reasons China is trailing behind, he said, are a lack of Chinese-language training data and unfamiliarity with the concept of open source.

Looking toward the future, Schmidt predicted everyone will have an AI assistant in three to five years, adding that productivity will double with the assistance.

Schmidt highlighted the economic uncertainties in labor markets, prices, and demand signals as the change occurs.

“It will be episodic,” he said. “But we can hope that in the next five years, you will ultimately see an AI doctor that brings all of the medical care in the world up to some base level.”

On the topic of national security, Schmidt and Allison discussed lessons from Google’s early days and the importance of cost-effective and efficient innovation.

“I’m going to claim right now that soft power is going to be replaced by innovation power,” Schmidt said.

Schmidt argued for quantity over quality.

“The U.S. has a very small number of extremely exquisite surveillance systems,” he said. “Instead, we should have an awful lot of cheap satellites.”

“It’s an awful lot more defensible, because it’s very hard for the opponent in the war game to shoot down all of your surveillance systems over and over again,” Schmidt added. “I believe the future of national security is a very large number of distributed systems.”

In terms of an AI-enabled war, Schmidt said, “you’re better off focusing on offense than defense.”

“AI-enabled war is incredibly fast. You have to move very, very quickly. We don’t have time for human in the loop,” he said.

A decline in the cost of training AI models would result in a “proliferation problem,” Schmidt said, adding that the “systems will not be manageable.”

“We can generate the data you need synthetically. There are people here at Harvard who are busy doing them — and brilliant people. Data is not your problem,” Schmidt said. “Your problem is talent and algorithms, which is a much easier game to win in open source.”

During the question and answer section of the event, Schmidt addressed a question about the potential of psychological warfare and manipulation using these AI systems.

“What that tells me is that the elections in 2024 are going to be an unmitigated disaster,” he said.

“Why is everyone in our country so upset? It’s because the systems are paid to make you upset,” he said. “I have a specific proposal, which of course is not happening: Label users, label the content, hold people responsible for not doing that right now.”

Daniel Huttenlocher, the dean of MIT’s College of Computing, also chimed in during the open question session.

“We’re past any reasonable way to address this and our political system has also passed maybe some way to address this,” he said.

“I don’t even think the social media companies could do what you suggested, because either the left or the right will decide that they're being disadvantaged in that, and will go after those companies,” Huttenlocher said.

In response to a question about the possibility of AI reducing employment, Schmidt said he does not believe AI will result in widespread job loss.

“The demographic evidence is that there’s a shortage of humans being born. And that automation, which is what AI fundamentally does is the only way to grow GDP,” Schmidt said. “AI intelligence robots is a net positive for society.”

—Staff writer Camilla J. Martinez can be reached at camilla.martinez@thecrimson.com. Follow her on X @camillajinm.

Tags
IOP, Technology, Artificial Intelligence