by Hope Rasa
When OpenAI, an American artificial intelligence organization headquartered in San Francisco, released ChatGPT in November 2022, it placed a powerful form of generative AI in the public’s hands.
Generative AI is different from traditional AI, which most Americans have been using daily for decades. Siri, spam filters and Netflix recommendations — all traditional AI — provide suggestions based on existing data. Generative AI, like ChatGPT, learns from existing data to create new content.

Data visualizations: Hope Rasa with Flourish
Number of data centers per city in Washington state. There are more than 4,100 data centers in the United States and 134 in Washington. Only two are in Whatcom County. The rest are spread across the state, with the biggest usually located in rural areas. Data from Data Center Map.
Tyler Jones is confident that he’s caught four of his students using artificial intelligence (AI) to cheat.
“I’ve received entire final projects — that are summative for the whole quarter — that are just blatantly, entirely AI and barely work,” Jones said. “It’s really frustrating.”
Jones has been a computer science teaching assistant (TA) at Western Washington University (WWU) for almost two years. He said the risk of students cheating with AI adds stress to his job and increases his workload.
ChatGPT Is Most Prevalent
“ChatGPT is probably the big offender,” Jones said.
There are a lot of other big-name generative AI platforms, like Claude.ai, that students can use to cheat. Jones said he assumes it’s mostly ChatGPT, since it gets significant updates and is mostly free.
Emily Borda, a professor at WWU, said she wasn’t familiar with generative AI before it gained momentum in 2022.
“It felt like all of a sudden, I was listening to the news and they announced this tool,” Borda said. “It wasn’t there one day and now it’s here, and we need to scramble and figure out its implications.”
Borda is also the director of Science, Mathematics and Technology Education (SMATE) at WWU. She said that AI comes up a fair amount in conversations with her colleagues. Academic dishonesty, Borda said, is a huge concern.
“… The real concern is that it substitutes for my students’ learning,” Borda said.
More Meaningful Assignments
To make it harder for students to use ChatGPT, Borda changed some of her assignments to be more reflective and less research-based.
“I actually think it pushed me personally to think about more meaningful assignments for my students,” Borda said.
To avoid false positives, Jones said he doesn’t examine every assignment with suspicion. Instead, he waits for AI to jump out at him. Students have to write code for many of the assignments in Jones’ class. He said it’s obvious when a student uses AI to write their code.
“Especially when students leave comments in the code that the AI produces,” Jones said.
A survey of 1,274 young people ages 14-22 found 51 percent had used AI before (1). Of those AI users, 15 percent reported using it for coding.
When a student asks ChatGPT to complete an assignment, it returns a completed version, often with little comments left throughout — like notes in the margin. Sometimes students submit their assignments without realizing this.
Jones also noticed these AI-generated assignments contained things his class hadn’t learned. Plus, the code itself was barely functional.
“Your code is just very much worse when you use AI,” Jones said.
Lack of “Proof”
These clues that a student’s work is AI-generated are just that: clues, not proof. Unless he spots comments from an AI chatbot, Jones is never 100 percent sure a student’s work is AI-generated. Instead, Jones said it’s usually a combination of details that indicates a student’s work was made using AI.
For instance, if a student’s work contains information they didn’t learn in class, they may have gotten it from a legitimate source. It’s not a sure sign they used AI. Jones said he’s only quick to conclude a student used AI when he finds comments left by the chatbot.
“It [AI] has definitely gotten better over time, and I think if a student is smart about it, it’s pretty easy for them to fool me,” Jones said.
If a student deletes those comments and cleans up other evidence in their code, Jones said he probably wouldn’t know they used AI.
“That’s probably happened several times,” Jones said. “It’s not something I think I can really fight in that scenario.”
AI Detection Tools
Borda said that she’s sure her students could get it past her if they used AI for an assignment. Borda said she relies on her students not to. Plus, they know she can use AI detection tools.
Last year, one of Borda’s TAs suspected some students had used AI for their assignments. Borda said that’s how she learned about AI detectors such as Originality.ai and GPTZero.
“So, I had some talks with those students, and it turns out that those tools themselves are not very reliable,” Borda said.
From what he’s seen, Jones said AI detectors produce too many false positives. Despite their inaccuracy, a survey of U.S. middle and high school teachers found 60 percent regularly use AI detection tools (2). Turnitin boasts that its AI detection tool has a false positive rate of less than 1 percent, while a test by The Washington Post found the rate to be over 50 percent (3). Unfortunately, methods for catching AI aren’t as advanced as AI itself.
Dana Smith, the assistant director of communications and community relations for Bellingham Public Schools (BPS), said that students having access to tools that can do their work for them isn’t a new problem in education.
“I mean, there was controversy when I was a kid about whether it was okay to use a calculator at school,” Smith said. “And now I don’t think we can imagine a math class without a bucket full of calculators.”
Policy Changes
Over the summer, BPS adjusted its AI policies. Now, classrooms can use three approved AI platforms: Microsoft Copilot 13+, Colleague AI and Canva AI. Teachers can decide how much AI use is appropriate for each assignment and set “levels” for their students. These levels are outlined in guidance from the Washington Office of Superintendent of Public Instruction and range from no AI use to extensive AI use. When students use AI, they must disclose that and fact-check any information it supplies.
Bill Palmer, BPS’ director of teaching and learning with a focus on technology integration, said that student use of these approved AI tools is minimal; only about three classrooms have engaged with Copilot so far.
When describing the changes she made to her coursework in response to ChatGPT, Borda said it was somewhat like what happened during the COVID-19 pandemic. When educators had to take their classes online, they could either keep their assignments the same or alter them so they were harder to cheat on.
“And that’s a choice that we’re faced with with AI as well,” Borda said. “So I am trying to take it in that direction of improving my teaching and the types of assignments that I give to my students.”
In one of the classes Jones assists, he said some of the work is handwritten. Jones thinks that’s partially because the professor doesn’t want to worry about students using AI. Students may use AI regardless and just handwrite the answers it gives them, but the format still deters AI use.
“I don’t think there’s a bulletproof way to catch any student in the act,” Jones said.
Academic Integrity
When Jones suspects a student used AI, his first step is to talk to them. If the student then submits something that seems AI-generated again, Jones sends the situation up to his professor. From there, it can turn into academic integrity conversations at a higher level.
Jones said he thinks students mostly resort to cheating with AI when they’re struggling.
“It’s unfortunate to me because, as a TA, I’m the one they’re supposed to turn to,” Jones said. “I want to help them.”
When Jones finds glaring uses of AI, he said his professor is very willing to take the next steps.
AI actually creates more work for Jones. He has to write emails, arrange meetings and have uncomfortable conversations after he finds AI in a student’s work.
“But even if AI was out of the question, I still might have worries about students cheating off each other, and then I would still have hard conversations that way,” Jones said.
Among TAs, Jones said the attitude around AI in academia is pretty negative. Jones is also a student, and he said student culture isn’t so against it. He thinks some students will “just ask ChatGPT” questions for their classes.
Learning Is Key
The professors Jones has worked with have been clear that they don’t want their students to use AI. Jones thinks this is mostly because using AI doesn’t facilitate learning.
“In general, the use of AI is deferring thinking to a machine,” Jones said. “If you’re deferring your thinking, you’re not doing it [thinking], and therefore not learning.”
This is a growing concern for educators at all levels. K-12 teachers worry AI may cause students to procrastinate more, damage their critical thinking skills and make them less competent (4).
As BPS integrates generative AI into classrooms, Palmer said they’re having conversations about its environmental impacts. He said they’re comparing the district’s AI-related energy consumption to the energy consumption of AI as a whole.

photo: Hope Rasa
The Marina Building in Bellingham. Lunavi, one of the building’s tenants, houses a colocation data center. This type of data center isn’t the same as the massive ones companies like Microsoft operate. Colocation data centers like this one rent their space and equipment to businesses so they don’t have to build their own facilities from the ground up. This data center and others like it possess the infrastructure necessary for their tenants to run AI workloads. These smaller facilities are sometimes easy to overlook, but they’re quite common.
Data Centers
Generative AI is powered by data centers, which consume large amounts of energy. Nearly everything on the internet depends on them: social media, email and a myriad of other online activities wouldn’t be possible without giant facilities full of hardware to store and process data.
AI data centers, however, consume far more energy than traditional ones. The heavy IT infrastructure needed to support AI comes with immense water and electricity demands.
Larger AI data centers use up to 5 million gallons of water per day, equal to the daily water use of a city of at least 10,000 people (5). Last year, data centers were responsible for about 1.5 percent of global electricity consumption (6).
Large AI data centers are typically located in rural areas because of low power costs. One of Microsoft’s biggest AI data centers, an 800,000-square-foot facility, is in Quincy, Wash. Microsoft expects to finish constructing a new data center in Malaga, Wash., by early 2026 (7)(8).
Asking ChatGPT, “What is the capital of France?” uses 23 times more energy than Googling the same question (9). Palmer said the queries students and teachers are running use about as much energy as watching a few seconds of Netflix.
AI Advantages
Despite the problems AI causes for him as a TA, Jones doesn’t condemn it entirely. He thinks AI can do good in other areas, like nursing.
“If I could make generative AI disappear with the snap of my fingers, I don’t know if I would,” Jones said.
Smith said that in conversations around the district, educators recognize AI can be powerfully supportive for students with disabilities who need accommodations to demonstrate what they know.
“… Making sure not to throw the baby out with the bath water,” Smith said. “Not to say it [AI] is absolutely right or absolutely wrong under every circumstance, but [to decide] under what conditions does it make sense.”
AI is in the world now, and it’s not going away, so Smith said it’s educators’ responsibility to teach kids what it is and how to use it responsibly. A poll in 2025 found 53 percent of Americans support integrating AI into schools, while 43 percent believe it should be barred (10).
Jones said students don’t approach him with questions about AI.
Taboo Subject
“I feel like it’s almost a taboo subject from student to instructor,” Jones said.
He thinks students who use AI dishonestly don’t want to get caught, and students with innocent questions about AI don’t want Jones to confuse them with the former.
“It’s a really bad dynamic, unfortunately, and I feel like conversation regarding AI should be a lot more open,” Jones said.
Even if there were no consequences for using AI in his class, Jones likes to assume most students would try to learn the material themselves instead of falling back on AI. Since he’s noticed students typically turn to AI when they’re stuck, Jones doesn’t think using it is their first instinct.
“If there were no consequences, people still wouldn’t be using it willy-nilly,” Jones said. “But I don’t know, because the way some people talk about it [AI], it does feel like they’re using it instead of actually learning or doing the thinking.”
If a student is struggling with work for Jones’ class, he said they should email him before resorting to AI. He’ll meet with them anywhere, anytime on campus to help them.
“I fortunately get paid to do it,” Jones said. “It’s a rewarding experience for me and them, because I get to teach and they get to learn.”
________________________________________
Hope Rasa is a journalism – news/editorial student at Western Washington University with a passion for environmental awareness. Her previous reporting for The Front covered local social issues such as public health, incarceration and education. Hope’s interest in journalism began when she joined her high school newspaper. She wishes to continue reporting on pertinent and underreported topics in Bellingham and the rest of Whatcom County.
Article links:
- Nagelhout, R. (2024, September 10). Students Are Using AI Already. Here’s What They Think Adults Should Know. Harvard Graduate School of Education. https://www.gse.harvard.edu/ideas/usable-knowledge/24/09/students-are-using-ai-already-heres-what-they-think-adults-should-know
- Dwyer, M., Laird, E. (2024, March). Up in the Air: Educators Juggle the Potential of Generative AI with Detection, Discipline and Trust. Center for Democracy and Technology. https://cdt.org/wp-content/uploads/2024/03/2024-03-21-CDT-Civic-Tech-Generative-AI-Survey-Research-final.pdf
- Fowler, G. (2023, April 1). We Tested a New ChatGPT Detector for Teachers. It Flagged an Innocent Student. The Washington Post. https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/
- Hadi Mogavi, R., et al. (2024). ChatGPT in Education: A Blessing or a Curse? A Qualitative Study Exploring Early Adopters’ Utilization and Perceptions. Computers in Human Behavior: Artificial Humans. https://www.sciencedirect.com/science/article/pii/S2949882123000270?via%3Dihub
- Yañez-Barnuevo, M. (2025, January 25). Data Centers and Water Consumption. Environmental and Energy Study Institute. https://www.eesi.org/articles/view/data-centers-and-water-consumption
- International Energy Agency. (2025). Energy and AI. https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai
- Microsoft Quincy Campus. Data Center Map. https://www.datacentermap.com/usa/washington/quincy/microsoft-quincy/
- Malaga Datacenter Project Overview. Microsoft. https://local.microsoft.com/blog/malaga-datacenter-project-overview/
- Wells, C. (2025, August 22). As AI Becomes a Part of Everyday Life, It Brings a Hidden Climate Cost. AP News. https://apnews.com/article/ai-data-center-climate-impact-environment-c6218681ffdbad5bf427b47347fddcb9
- Marquez, A., Yang, A. (2025, July 18). Poll: As Americans Form Views On AI, They’re Divided On Its Role in School and Everyday Life. NBC News. https://www.nbcnews.com/politics/nbc-news-polls/poll-americans-form-views-ai-divided-role-school-everyday-life-rcna212782