AI Hallucination as a Major Threat to Academic Integrity: 5 Areas Where AI Can Get You into Trouble

Many students using AI systems to assist with their academic writing see AI as a ‘more knowledgeable other’. In reality, current artificial intelligence is much closer to the famous ‘Chinese room’ thought experiment. Imagine a person inside a room answering questions you ask in Chinese. They do not actually know the language; instead, they have an English manual telling them how to respond to each of your questions.

If the instructions are explicit and followed diligently, you will receive correct answers. The problems begin when the person in the room receives a request for which the manual has no ready-made reply. With no actual understanding of Chinese, yet still obliged to answer, they have to invent something that looks like an answer.

Enter AI hallucinations.

In this article, we will discuss five areas where this phenomenon can pose substantial threats to academic integrity and get you into trouble.

1. Fake Citations

At first glance, everything looks great. You have just received an AI-generated document that perfectly covers your area of interest. The platform has summarised key theories in the field and provided a list of recent articles on each topic, flawlessly cited in Harvard, MLA or APA, as per your request. The only problem is that some of these theories and references do not exist. Worse still, there is no quick way to tell which ones are real and which are not.

What's the solution? You will have to spend substantial time checking every source manually to ensure that it actually exists and contains the information attributed to it. Otherwise, you risk submitting a fabricated reference list to your university and facing a serious conversation with a supervisor who has real knowledge of the field.
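If your references include DOIs, part of this check can be scripted. The minimal Python sketch below (an illustration only; the DOI shown is a placeholder for your own list) queries the public Crossref API, which returns a 404 for DOIs it has no record of. A DOI that resolves is necessary but not sufficient: the paper must still say what the AI claims it says.

    # Check whether DOIs from an AI-generated reference list exist in
    # the Crossref registry. A 404 strongly suggests a fabricated citation.
    import requests

    dois = [
        "10.1038/s41586-020-2649-2",  # example DOI; replace with your own list
    ]

    for doi in dois:
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code == 200:
            title = resp.json()["message"].get("title", ["(no title)"])[0]
            print(f"FOUND   {doi}: {title}")
        else:
            print(f"MISSING {doi}: no Crossref record (possible hallucination)")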

2. Blurred Narratives

If you play videogames, you are probably aware of how systems like DLSS work. They increase the frame rate by inserting fake frames, each an approximation interpolated from the two adjacent real images. The result is smoother than the original output, but blur is the inevitable trade-off. The same is true for AI-generated academic work. The algorithm is a ‘Chinese room’ with no idea whether the facts it presents are true or whether the analysed information is correct. When it lacks ‘frames’, it simply inserts approximations that make the result look more professional.
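The analogy is easy to make concrete. The toy Python sketch below (a deliberate oversimplification; real frame generation uses motion vectors and neural networks, not plain averaging) fabricates an in-between frame by blending two real ones. The result looks plausible but contains no new information:

    # Toy illustration of frame interpolation: the 'new' frame is just a
    # blend of its neighbours, not a real observation.
    import numpy as np

    frame_a = np.random.rand(4, 4)        # stand-ins for two adjacent frames
    frame_b = np.random.rand(4, 4)

    fake_frame = (frame_a + frame_b) / 2  # plausible-looking, zero new detail

    # Wherever the neighbours disagree, the fake frame hedges to the middle,
    # which is exactly what visual blur is.
    print(np.abs(fake_frame - frame_a).max())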

While this may be sufficient for writing a blog post, academic research relies on critique, comparison, the historical analysis of debates in a field, and the key contributions your own research makes. If you cannot grasp with absolute clarity what the current status quo is, your chances of producing high-quality work with genuine academic novelty drop substantially.

3. Central Tendencies

AIs are trained on certain data in order to provide ‘correct’ answers. If you average ten popular online articles on how to cook spaghetti Bolognese, the approximation will probably produce a decent supper. The problem is that academic research is a field of ongoing debate with several popular ‘recipes’, none of which is simply wrong. AI largely misses this aspect, which results in homogenised outputs that gravitate towards a central tendency.
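Sticking with the cooking analogy, a toy example shows why averaging a genuine disagreement is dangerous. The simmering times below are invented purely for illustration:

    # Invented simmering times (minutes) from five hypothetical recipes:
    # three quick weeknight versions and two slow traditional ones.
    times = [15, 15, 15, 180, 180]

    mean_time = sum(times) / len(times)
    print(mean_time)  # 81.0 -- a 'central tendency' no actual recipe uses

The averaged answer matches no real school of thought; a homogenised literature summary can fail in exactly the same way.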

If you follow such patterns blindly without doing your own research, you can miss important viewpoints in your field. Ignoring discordant ideas frequently results in a lack of critical depth and one-dimensional findings. This effect can be especially prominent with AI hallucinations in a new field, where the system does not have sufficient recent data to work with.

4. Inherent Biases

The earlier-mentioned training procedure means that AI software can only rely on the data it was allowed to work with during training. If the original information was incomplete, the system can ‘invent’ facts to fill in the blind spots. And if that data was inherently biased, the bias will manifest itself in the algorithm’s outputs. Such problems have already emerged in multiple spheres where skewed training data produced skewed results.
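The mechanism is simple enough to demonstrate in a few lines. In the hypothetical sketch below, a naive frequency-based ‘model’ learns from a skewed sample and then reproduces that skew in every prediction:

    # Hypothetical illustration: a 'model' that predicts the most frequent
    # label it saw in training inherits the sample's skew wholesale.
    from collections import Counter

    training_labels = ["A"] * 90 + ["B"] * 10   # biased sample: 90% A
    model_prediction = Counter(training_labels).most_common(1)[0][0]

    print(model_prediction)  # 'A' -- the bias in the data becomes the output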

This can create a self-fulfilling prophecy: the conclusions AI suggests rest on its inherent biases, and your study’s findings simply reinforce that incorrect information, compounding the inaccuracies.

5. Lack of Predictability

Last but not least, the defining problem of AI hallucinations is their unpredictability. You simply cannot trust the algorithm 100% of the time, so you must meticulously check every reference and claim, which is mentally exhausting. Over time, the promised time savings become questionable.

The main problem is that you need to be knowledgeable in your field to recognise the mistakes AI makes, which means developing that expertise yourself or consulting human experts on problematic topics. Using AI without such preparation leaves you in a position of uncertainty where, at times, you have to trust it blindly, unable to check its results or ensure their accuracy and academic integrity.

As an alternative solution, you can hire expert academic writers to help with your assignment, coursework, essay and dissertation writing. 15 Writers offer academic writing help at any stage of your studies. Contact us today!