College professors are going back to paper exams and handwritten essays to fight students using ChatGPT
(www.businessinsider.com)
Prof here - take a look at it from our side.
Our job is to evaluate YOUR ability, and AI is a great way to mask poor ability. We have no way to determine whether you did the work or an AI did, and if called into court to certify your expertise, we could not do so beyond a reasonable doubt.
I am not arguing exams are perfect, mind, but I'd rather doubt a few students' inability (maybe it was just a bad exam for them) than always doubt their ability (is any of this their own work?).
Case in point: ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are a real reflection of those students' ability, but they do suggest the students can obfuscate AI work well.
Is AI going to go away?
In a few years, out in the real world, will those students be working from a textbook, or from a browser with some form of AI accessible?
What exactly is being measured and evaluated? Or has the world changed, and existing infrastructure is struggling to cling to the status quo?
Were those years of students being forced to learn cursive in the age of the computer a useful application of their time? Or math classes where a calculator wasn't allowed?
I can only imagine, then, how useful a programming class must be where you have to write your code with a pen on a blank sheet of paper, no linter allowed.
Maybe the focus on where and how knowledge is applied needs to be revisited in light of a changing landscape.
For example, how much more practically useful would test questions be that present a hallucinated wrong answer from ChatGPT and task the students with identifying what's wrong? Or a cross-discipline question that expects ChatGPT usage yet remains challenging because of its scope or nuance?
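To make the first idea concrete, here's a hypothetical exam item in that style, sketched in Python (the scenario and function name are invented): the snippet reads like a plausible chatbot answer but hides a classic defect for the student to find.

```python
# Exam item: "A chatbot was asked for a function that collects log lines
# per user. Identify the defect in its answer and explain the symptom."

def group_logs(user, line, groups={}):  # bug: mutable default argument
    groups.setdefault(user, []).append(line)
    return groups

# What the student should spot: the default dict is created once and
# shared across calls, so supposedly independent groupings leak state.
a = group_logs("alice", "login")
b = group_logs("bob", "login")
print(a is b)  # True: both calls mutated the same default dict
```

Grading that kind of question rewards exactly the skill the AI era demands: auditing plausible-looking output instead of producing boilerplate.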
I get that it's difficult to adjust to something that's changed everything in the field within months.
But it's quite likely that a fair bit of how education has been done over the past 20 years of the digital age (itself a gradual adjustment to the Internet existing) needs major reworking to adapt to these changes rather than simply oppose them; opposition only puts academia in a bubble, further and further detached from real-world practice.
I'll field this because it does raise some good points:
It all boils down to how much you trust what is essentially matrix multiplication, trained on the internet, with some very arbitrarily chosen initial conditions. Early on, when AI started cropping up in the news, I tested the validity of the answers it gave:
1. For topics aimed at 10-18 year olds, it does pretty well. Its answers are generic, and it makes mistakes every now and then.
2. For 1st-3rd year degree level, it really starts to make dangerous errors, but it's a good tool for summarising material from textbooks.
3. At Masters level and beyond, it spews (very convincing) bollocks most of the time.
Recognising the mistakes in (1) requires checking the answers against the course notes, something most students manage. Recognising the mistakes in (2) is often something a stronger student can manage, but not a weaker one. As for (3), you need to be an expert to recognise the mistakes (it literally misinterpreted my own work back at me at one point).
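(As an aside, the "matrix multiplication" quip is meant fairly literally. Here's a toy numpy sketch of what next-token prediction reduces to, with made-up sizes and untrained random weights, purely for illustration:)

```python
import numpy as np

rng = np.random.default_rng(0)    # the "arbitrarily chosen initial conditions"
vocab, d = 50, 16                 # toy sizes; real models use tens of thousands / thousands

E = rng.normal(size=(vocab, d))   # token embeddings (learned during training)
W = rng.normal(size=(d, vocab))   # output projection (learned during training)

def next_token_logits(token_ids):
    # Average the context embeddings, then one matmul gives a score per
    # vocabulary entry. Real transformers stack attention and MLP layers,
    # but each layer is still matrix multiplies plus a nonlinearity.
    h = E[token_ids].mean(axis=0)
    return h @ W

logits = next_token_logits([3, 17, 9])
probs = np.exp(logits) / np.exp(logits).sum()  # softmax: the reply is sampled from this
```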
The irony is that education in its current format is already working with AI: it's teaching people how to correct the errors it gives. Theming assessment around an AI is a great idea, right up until you have to create it (the very fact the field is moving fast means that everything you teach about it is out of date by the time a student needs it for work).
However, I do agree that education as a whole needs overhauling. How to do it: maybe fund it a bit better so we're able to hire folks to help develop better courses - at the moment every "great course" you've ever taken was paid for in blood (i.e. 50-hour weeks of teaching/marking/prepping/meeting arbitrary research requirements).
(1) seems to be a legitimate problem. (2) is just filtering the stronger students from the weaker ones with extra steps. (3) isn't an issue unless a professor teaching graduate classes can't tell BS from truth in their own field. If that's the case, I'd call the professor's lack of knowledge a larger issue than the student's.
You may not know this, but "Masters" is about uncovering knowledge nobody had before, not even the professor. That's where peer reviews and shit like LK-99 happen.
It really isn't. You don't start doing properly original research until a year or two into a PhD. At best, a masters project is going to be something like taking an existing model and applying it to a topic adjacent to the one it was designed for.