AI chatbots have proliferated in school settings since the launch of ChatGPT. But OpenAI, the company behind ChatGPT, just released a new AI tool that may make combating AI-generated assignments with AI detection even more difficult. OpenAI’s new browser, called Atlas, follows the release of other browsers that incorporate AI technology. Built into these browsers are assistants that operate the browser without keyboard inputs or mouse clicks. That means they can navigate a Learning Management System (LMS) like Canvas, as well as testing software, on their own. OpenAI’s announcement for its new product included an endorsement from a college student who found the tool aided their learning. However, students and researchers are sounding the alarm that these tools put academic integrity and personal data at risk in classrooms already upended by a rise in AI use.
In online posts, students have shown these so-called “agentic browsers” taking over academic platforms like Canvas and Coursera and completing quizzes assigned to them. The CEO of Perplexity, the creator of the agentic browser Comet, even responded to a student who displayed how they used the tool to complete a quiz, saying, “Absolutely don’t do this.”
These browsers interact with websites at the user’s request to complete tasks like shopping, web navigation and form submission. They can even complete schoolwork without a student’s hands needing to touch the keyboard. See an example below:
Carter Schwalb, a senior business analytics major at Bradley University, heads the school’s AI Club. He said he’s experimented with agentic browsers for planning trips and apartment searching as well as summarizing information found on various websites. However, he’s talked to many professors at his university who report that students are submitting AI-generated responses for their assignments.
“I’ve seen a lot of instances, even from talking to professors, of the students just blatantly submitting ChatGPT generated responses,” Schwalb said.
For students, agentic browsers offer a new sort of convenience, with their built-in chatbots and their ability to complete and submit assignments automatically. For teachers who want to combat these issues, looking at the version history on Google Docs can help determine whether students are using AI assistants to complete and submit entire written works.
Students like Schwalb, though, are refraining from using these tools for hands-free assignment completion. Schwalb said he doesn’t want to lose his critical thinking abilities by offloading all of his work to AI tools.
“I need to keep my ability to critically think and I think that needs to be emphasized, probably both from teachers to their students as well as parents to their children,” Schwalb said.
Not everyone shares Schwalb’s outlook. But academic integrity and engagement in education aren’t the only concerns agentic browsers raise. In a study authored by University of California, Davis PhD student Yash Vekaria and others, researchers concluded that generative AI assistant browser extensions store and share the personal data of their users.
“Sometimes this may involve collecting information and storing information which is sensitive to a user,” Vekaria said.
The study was carried out in late 2024, before agentic browsers were a part of mainstream AI usage. Starting in May 2025, Google searches for “AI in browser” and “Comet browser” (the tool created by Perplexity) started to ramp up. However, the researchers’ conclusions still apply to agentic browsers, according to Vekaria. Additionally, he said, agentic browsers may present more privacy risks than were covered by the study.
“The assistant is always present in the side panel, so it’s able to access and view everything that the user is doing,” Vekaria said. “Agentic browsers collect all this information and have, if not similar, at least more risks in my opinion.”
Many students who use agentic browsers for academic or personal tasks don’t understand these risks, Vekaria noted. When used on academic platforms like Canvas, AI assistant tools collected and shared student academic records with other sites. The privacy of students’ educational records is supposed to be protected under a federal law, the Family Educational Rights and Privacy Act (FERPA).
“We saw that it was able to exfiltrate student academic records, which is a risk under FERPA that protects students’ academic data in the U.S.,” Vekaria said. “In general there should be more regulatory enforcement that should happen.”
However, universities across the nation haven’t demonstrated a cohesive response to the use of these tools by their own students. While AI detectors can assess students’ submitted written work, multiple-choice tests and discussion forums don’t incorporate these checks. Students are using these tools regardless, and Schwalb argues that restriction is not the answer.
“I haven’t seen a good enough argument against AI to be fully adopted at a university, other than we don’t want kids using it which is just not reasonable,” Schwalb said. “It’s like the internet coming out and telling somebody not to use the internet or like the Industrial Revolution and telling somebody not to make something on an assembly line.”
As new tools emerge, the realities for students and professors keep changing. Companies looking to support educational institutions are releasing tools of their own, like advanced AI detectors designed to protect the user data that agentic browsers may put at risk.
“The option is here, and students are going to take it,” Schwalb said. “The job is not whether to and not how do we restrict this. It’s how do we incorporate.”
