this post was submitted on 15 Jun 2025
334 points (100.0% liked)

World News


Guardian investigation finds almost 7,000 proven cases of cheating – and experts say these are the tip of the iceberg

Thousands of university students in the UK have been caught misusing ChatGPT and other artificial intelligence tools in recent years, while traditional forms of plagiarism show a marked decline, a Guardian investigation can reveal.

A survey of academic integrity violations found almost 7,000 proven cases of cheating using AI tools in 2023-24, equivalent to 5.1 for every 1,000 students. That was up from 1.6 cases per 1,000 in 2022-23.

Figures up to May suggest that number will increase again this year to about 7.5 proven cases per 1,000 students – but recorded cases represent only the tip of the iceberg, according to experts.
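The per-1,000 figures above can be sanity-checked with a few lines of Python. Note that the implied student population below is inferred from the article's own numbers, not a figure the Guardian states:

```python
# Rough sanity check of the figures quoted in the article.
proven_cases_2023_24 = 7000      # "almost 7,000 proven cases"
rate_2023_24 = 5.1               # per 1,000 students
rate_2022_23 = 1.6               # per 1,000 students
rate_2024_25_proj = 7.5          # projected per 1,000 students

# Implied student population covered by the survey (an inference, not a stated figure).
implied_students = proven_cases_2023_24 / rate_2023_24 * 1000
print(f"Implied population: ~{implied_students:,.0f} students")

# Year-on-year growth in the rate of proven cases.
growth = rate_2023_24 / rate_2022_23
print(f"2022-23 to 2023-24: ~{growth:.1f}x increase")

# Projected caseload for 2024-25, assuming the same population.
projected_cases = implied_students * rate_2024_25_proj / 1000
print(f"Projected 2024-25 cases: ~{projected_cases:,.0f}")
```

At those rates the survey would cover roughly 1.4 million students, with proven cases more than tripling year on year.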

The data highlights a rapidly evolving challenge for universities: trying to adapt assessment methods to the advent of technologies such as ChatGPT and other AI-powered writing tools.

top 50 comments
[–] [email protected] 68 points 1 day ago (6 children)

Maybe we need a new way to approach school. I don't think I agree with turning education into a competition where the difficulty is curved towards the most competitive, creating a system so difficult that students need to edge each other out any way they can.

[–] [email protected] 15 points 22 hours ago (4 children)

I guess what I don’t understand is what changed? Is everything homework now? When I was in school, even college, a significant percentage of learning was in class work, pop quizzes, and weekly closed book tests. How are these kids using LLMs so much for class if a large portion of the work is still in the classroom? Or is that just not the case anymore? It’s not like ChatGPT can handwrite an essay in pencil or give an in person presentation (yet).

[–] [email protected] 6 points 7 hours ago (1 children)

University was always guided self-learning, at least in the UK. The lecturers are not teachers. They provide and explain material, but they're not there to hand-hold you through it.

University education is very different to what goes on at younger ages. It has to be when a class is 300 rather than 30 people.

[–] [email protected] 1 points 1 hour ago (1 children)

WTF? 300? There were barely 350 people in my graduating class of high school and that isn’t a small class for where I am from. The largest class size at my college was maybe 60. No wonder people use LLMs. Like, that’s just called an auditorium at that point, how could you even ask a question? Self-guided isn’t supposed to mean “solo”.

[–] [email protected] 1 points 35 minutes ago (1 children)

You can ask questions in auditorium classes.

The 300+ student courses typically were high volume courses like intro or freshman courses.

Second year cuts down significantly in class size, but also depends on the subject.

3rd and 4th year courses, in my experience, were 30-50 students

[–] [email protected] 1 points 30 minutes ago (1 children)

You can ask questions in auditorium classes.

I am going to be honest; I don’t believe you. I genuinely don’t believe that in a class with more people than minutes in the session that a person could legitimately have time to interact with the professor.

The 60 person class I referred to was a required lecture portion freshman science class with a smaller lab portion. That we could ask questions in the lab was the only reason 60 people was okay in the lecture and even then the professor said he felt it was too many people.

[–] [email protected] 1 points 24 minutes ago (1 children)

That’s fine if you don’t, but you can ask questions.

They even have these clickers that allow the professor to ask “snap questions” with multiple choice answers so they can check understanding

[–] [email protected] 1 points 20 minutes ago

I can’t believe people go into debt for that experience. I would be livid.

[–] [email protected] 1 points 16 hours ago

In the US we went common core. That means the school board decides the courses at the beginning of the year, and they set tests designed to ensure the students are learning. But there are two issues. 1. The students are not being taught. Teachers don't get paid enough to care nor provide learning materials, so they just have the students read the textbook and do homework until the test. This means students are not learning critical thinking or the material; they merely memorize this week's material long enough to pass the test. 2. The tests are poorly designed. As I hinted at with point 1, the tests merely ensure that you have memorized this week's material. They do not and are not designed to ensure that you actually learn.

These issues are by design, not by accident. Teachers' pay rates have stagnated along with the rest of the working class, with the idea being to slowly give the working class less and less proportional buying power and therefore economic control. In addition, educating your populace runs directly contrary to what the current reigning faction wants. An educated populace is harder to lie to.

[–] [email protected] 24 points 23 hours ago (2 children)

Actually caught, or caught by "AI detection" software?

[–] [email protected] 35 points 22 hours ago

"Read this document. Was it made with Ai?"

"Yes, it sure was! Great catch!"

"You're wrong, I just wrote it myself 15 minutes ago."

"Teeheehee oopsie! Silly me! I'll try to do better next time then! Is there anything else I can help with?"

[–] [email protected] 7 points 21 hours ago (1 children)

Actually caught. That's why it's the tip of the iceberg: all the cases that were not caught.

[–] [email protected] 5 points 16 hours ago (1 children)

The article does not state that. It does, however, mention that AI detection tools were used, and that they failed to detect AI writing 90-something percent of the time. It seems extremely likely they used AI detection software.
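If detectors really do miss AI writing roughly 90% of the time, the scale of undetected use can be sketched in a couple of lines. This is a back-of-envelope estimate that assumes, hypothetically, the miss rate applies uniformly and that detection is the main route to a proven case (which the thread below disputes):

```python
# Back-of-envelope: if detection tools miss ~90% of AI-written work,
# the "proven cases" figure captures only a fraction of actual use.
# Both input numbers come from the thread; the uniformity assumption is mine.
proven_cases = 7000   # proven cases in 2023-24
miss_rate = 0.9       # detectors reportedly miss ~90% of AI writing

catch_rate = 1 - miss_rate
implied_actual = proven_cases / catch_rate
print(f"Implied actual cases: ~{implied_actual:,.0f}")
```

That crude model would put actual cases around ten times the proven figure, which is consistent with the "tip of the iceberg" framing.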

[–] [email protected] 2 points 8 hours ago

I'm saying this as someone who has worked for multiple institutions, raised hundreds of conduct cases and has more on the horizon.

The article says proven cases, which means that the academic conduct case was not just raised but upheld. AI detection may have been used (there is a distinct lack of consensus between institutions on that) but would not be the only piece of evidence. Much like the use of Turnitin for plagiarism detection, it is an indication for further investigation, but a case would not be raised based solely on a high Turnitin score.

There are variations in process between institutions, and they are changing their processes year on year in direct response to AI cheating. But being upheld would mean that there was direct evidence (a prompt left in the text), they admitted it ("I didn't know I wasn't allowed to", "yes, but I only...", etc.) and/or there was a viva, and based on discussion with the student it was clear that they did not know the material.

It is worth mentioning that in a viva it is normally abundantly clear whether a given student did or didn't write the material. When it is not clear, then (based on the institutions I have experience with) universities are very cautious and will give the student the benefit of the doubt (hence the tip of the iceberg).

[–] [email protected] 19 points 23 hours ago (1 children)

Surprise motherfuckers. Maybe don't give grant money to LLM snakeoil fuckers, and maybe don't allow mass for-profit copyright violations.

[–] [email protected] 2 points 18 hours ago* (last edited 18 hours ago) (1 children)

So is it snake oil, or dangerously effective (to the point it enables evil)?

[–] [email protected] 5 points 18 hours ago (1 children)

It is snake oil in the sense that it is being sold as "AI", which it isn't. It is dangerous because LLMs can be used for targeted manipulation of millions if not billions of people.

[–] [email protected] 2 points 18 hours ago* (last edited 17 hours ago)

Yeah, I do worry about that. We haven't seen much in the way of propaganda bots or even LLM scams, but the potential is there.

Hopefully, people will learn to be skeptical the way they did with photoshopped photos, and not the way they didn't with where their data is going.

[–] [email protected] 28 points 1 day ago

And that's just the ones who were stupid enough to get caught. Realistically, I think this is more like 5% instead of 0.5%.

[–] [email protected] 27 points 1 day ago (4 children)

In some regard I don’t think it should be considered cheating. Don’t beat me up yet, I’m old and think AI sucks at most things.

AI typically outputs crap. So why does this use of a new and widely available tech get called out differently?

Using Google (in the don’t be evil timeframe) wasn’t cheating when open book was permitted. Using the text book was cheating on a closed book test. In some cases using a calculator was cheating.

Is it cheating if you write a paper completely on your own and use spell check and grammar check within word? What if a grammarly type extension is used? It’s a slippery slope that advances with technology.

I remember tests and assignments that were designed to make it harder to cheat: "show your work" for math-type problems, and quizzes and short essays that made demonstrating the subject matter necessary.

Why doesn’t the education environment adapt to this? For writing assignments, maybe they need to be submitted with revision history so the teacher can see it wasn’t all done in one go via an LLM.

The quick-answer responses are somewhat like using Wikipedia for a school paper. Don't cite Wikipedia and don't use the generated text for anything but a base understanding of the topic. Now go use all the sources it provided to actually do the assignment.

[–] [email protected] 35 points 1 day ago (6 children)

Chatgpt output isn't crap anymore. I teach introductory physics at a university and require fully written-out homework, showing math steps, for problems that I've written. I wrote my own homework many years ago when Chegg blew up and all major textbook problems were on Chegg.

Just two years ago, chatgpt wasn't so great at intro physics and math. It's pretty good now, and shows all the necessary steps to get the correct answer.

I do not grade my homework on correctness. Students only need to show me effort that they honestly attempted each problem for full credit. But it's way quicker for students to simply upload my homework pdf to chatgpt and copy down the output than give it their own attempt.

Of course, doing this results in poor exam performance. Anecdotally, my exams from my recent fall semester were the lowest they've ever been. I put two problems on my final that came directly from my homework, one of them being the problem that made me realize roughly 75% of my class was chatgpt'ing all the homework (chatgpt isn't super great at reading angles from figures), and it's like these students had never even seen a problem like it before.

I'm not completely against the use of AI for my homework. It could be like a tutor that students ask questions to when stuck. But unfortunately that takes more effort than simply typing "solve problems 1 through 5, showing all steps, from this document" into chatgpt.

[–] Taiatari 11 points 1 day ago (2 children)

Personally, I think we have homework the wrong way around. Instead of teaching the subject in class and then assigning practice for home, we should learn the subject at home and do the practice in class.

I always found it easier to read up on something and get an idea of a concept by myself. But when trying to solve the problems I ran into questions, and no one was there I could ask. If the problems were solved in class, I could ask fellow students or the teacher.

Plus if the kids want to learn the concept from ChatGPT or Wikipedia that's fine by me as long as they learn it somehow.

Of course this does not apply to all concepts, subjects and such but as a general rule I think it works.

[–] [email protected] 9 points 23 hours ago

Instead of teaching the subject in class and then assigning practice for home, we should learn the subject at home and do the practice in class.

Then you get students who get mad because they're "teaching themselves". Not realizing at all that the teacher curated what they're reading/doing and is an SME that's available to them when they're completely lost.

[–] [email protected] 6 points 1 day ago

This is mostly the purpose of my homework. I assign daily homework. I don't expect students to get the correct answers, but instead to attempt them and then come to class with questions. My lectures are typically short so that I can dedicate class time to solving problems and homework assignments.

I always open my class with "does anyone have any questions on the homework?". Prior to chatgpt, students would ask me to go through all the homework, since much of my homework is difficult. Last semester, though, with so many students using chatgpt, they rarely asked me about the homework... I would often follow up with "Really? No questions at all?"

[–] [email protected] 11 points 1 day ago

It's absolutely cheating - it's plagiarism. It's no different in that regard than copying a paper found online, or having someone else write the paper for you. It's also a major self-own - these students have likely one opportunity to better themselves through higher education, and are trashing that opportunity with this shit.

I do agree that institutions need to adapt. Edit history is an interesting idea, though probably easy to work around. Imo, direct teacher-student interfacing would be the most foolproof, but also incredibly taxing on time and effort for teachers. It would necessitate pretty substantial changes to current practices.

[–] [email protected] 4 points 19 hours ago

we're doomed

[–] [email protected] 9 points 23 hours ago

god i love ppl outsourcing their learning to Microsoft

[–] [email protected] 17 points 1 day ago

"Get back in that bottle you stupid genie!"

[–] [email protected] 5 points 23 hours ago

it is a paradigm shift.

what they learn from this is to make sure to not get caught in the future.
