News

TURNITIN CAN FLAG YOU, BUT SHOULD IT?

July 8, 2025


James Klusener 

@jamesklusener

 

Since the launch of ChatGPT in November 2022, many students have begun using Artificial Intelligence (AI) not merely as a tool but almost as an appendage: an attached dependency that replaces, rather than supports, their own thinking. In response to this shift in student workflows, universities, including the North-West University (NWU), have adopted tools such as Turnitin, which includes a built-in AI detection feature, to discourage wholesale AI use in written tasks. These detection tools have, however, been met with criticism and scepticism.

 

Many students have reported problems with these detection tools, and a larger conversation is needed around AI use and how it can be practised ethically.

 

According to a report by the Higher Education Policy Institute (HEPI), an independent think tank in the United Kingdom (UK), roughly 92% of students now use AI, and that figure has likely climbed further in recent months.

 

This raises two questions: how do these detection tools really work, and are tools like ChatGPT changing how we use language?

 

Firstly, understanding how AI works is vital to understanding how the tools built to detect it work. AI tools like Gemini and ChatGPT run on large language models, or simply LLMs. Without getting too technical, LLMs are trained on enormous collections of text, such as movie scripts, tweets, internet articles and social media posts. From this data, the model learns statistical patterns and uses them to predict, word by word, what should come next in response to a user's query. In other words, these AI tools aren't actually thinking; they are using probability to determine the most likely next word.
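As a rough illustration only (real LLMs learn billions of patterns from training data rather than using a lookup table), the core idea of next-word prediction can be sketched in a few lines of Python: given some context, assign probabilities to possible next words and sample one. The contexts and probabilities below are invented for the example.

```python
import random

# A toy "language model": for a given context, it stores hand-made
# probabilities for what word might come next. A real LLM learns these
# patterns from vast training data instead of a hard-coded table.
toy_model = {
    "the cat sat on the": {"mat": 0.7, "sofa": 0.2, "roof": 0.1},
    "artificial intelligence is": {"powerful": 0.5, "everywhere": 0.3, "maths": 0.2},
}

def predict_next_word(context: str) -> str:
    """Pick the next word by sampling from the context's probability distribution."""
    distribution = toy_model[context]
    words = list(distribution.keys())
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next_word("the cat sat on the"))        # most often "mat"
print(predict_next_word("artificial intelligence is"))
```

The point of the sketch is simply that the output is chosen by probability, not by understanding.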

 

As simple as this may sound, it is extremely complex, power-hungry and undoubtedly very powerful. But it is not intelligence; it is maths.

 

With this in mind, you may begin to see how AI detection software works. Does it predict the chatbot's predictions? Somewhat, yes. AI-generated text has recognisable habits that researchers have identified and built into tools such as Turnitin; a familiar example is the formula "That's not just X, it's Y". This sentence structure appears frequently in LLM output and becomes easy to spot once you know to look for it. Detection tools such as Turnitin work along similar lines: they use statistical models to look for patterns commonly associated with AI output. This also explains why many students complain that they have been flagged by Turnitin despite never using AI: human writing that happens to share those patterns can be flagged all the same.
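Turnitin does not publish the details of its detection model, but the general idea of pattern-scoring can be illustrated with a deliberately simplified sketch. The patterns below are hand-picked examples of phrasing often associated with AI output, not Turnitin's actual criteria, and the score is a made-up metric; a real detector would use a trained statistical model rather than a regex list.

```python
import re

# Hypothetical phrasings often associated with AI-generated prose.
# These are illustrative assumptions, not any vendor's real rule set.
AI_LIKE_PATTERNS = [
    r"not (?:just|only) \w+(?: \w+)?[,;] (?:it'?s|but) \w+",  # "not just X, it's Y"
    r"\bdelve into\b",
    r"\bin today'?s fast-paced world\b",
]

def ai_likeness_score(text: str) -> float:
    """Return pattern hits per 100 words (an invented, naive metric)."""
    words = len(text.split()) or 1
    hits = sum(len(re.findall(p, text, flags=re.IGNORECASE)) for p in AI_LIKE_PATTERNS)
    return 100.0 * hits / words

sample = "This essay will delve into AI. It is not just a tool, it's a revolution."
print(f"score: {ai_likeness_score(sample):.1f} hits per 100 words")
```

The sketch also shows why false positives happen: a student who naturally writes in these constructions scores just as high as a chatbot.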

 

So what can you do? Well, using AI tools such as ChatGPT or NotebookLM is not shameful or wrong in any sense, much as searching Google for information instead of reading books in the library isn't a crime. However, these AI tools cannot be treated as a substitute for human creativity. They are powerful and should be used in a way that elevates your own work, not one that replaces it with a synthetic substitute.

 

AI tools are here to stay, and they shouldn't be seen as second-rate or awful in any way. Many people fret, and rightfully so, that AI will replace us or take our jobs. Whilst that may be true for some industries and professions, human creativity will always prevail. Those who can harness AI to improve themselves will always be better off than those who disregard it entirely.

A student using AI for her assignments (Source: Hong Kong TESOL)

Edited by Kyle Bauermeister

Written by: Wapad
