Scientists who write their papers with an LLM should get a lifetime ban from publishing papers.
I played around with ChatGPT to see if it could actually improve my writing. (I've been writing for decades.)
I was immediately impressed by how "personable" these things are: it can interpret your writing and detect subtle things you're trying to convey, so that part was interesting. I was also impressed by how good it is at improving grammar and at "joining" passages, themes, and plot points. It has the advantage of seeing the entire piece simultaneously and can make broad edits to the story flow, which could potentially save a writer days or weeks of rewriting.
Now that the good is out of the way, I also tried to see how well it could just write, using my prompts and writing style on scenes I arranged for it to describe. And I can safely say that we have created the ultimate "Averaging Machine."
By definition, LLMs are designed to find the most probable answers to queries, so this makes sense. It has consumed and distilled vast sums of human knowledge and writing, but it doesn't use that material to synthesize, find inspiration, or do what humans do: take existing ideas and build upon them. No, it always finds the most average path. And as a result, the writing is supremely average. It's so plain and unexciting to read that it's actually impressive.
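As a toy illustration of that "most average path" behavior (a sketch only; the probability table below is made up, and real models sample from far richer distributions): greedy decoding always picks the single highest-probability next word, so distinctive but less likely word choices never survive.

```python
# Toy sketch of greedy decoding. The "model" here is just a
# hypothetical table of next-word probabilities, not a real LLM.
next_word_probs = {
    "the sky was": {"blue": 0.60, "overcast": 0.25, "bruise-colored": 0.15},
    "she felt": {"happy": 0.50, "elated": 0.30, "weightless": 0.20},
}

def greedy_continue(prefix: str) -> str:
    """Pick the single most probable continuation -- the 'average' choice."""
    probs = next_word_probs[prefix]
    return max(probs, key=probs.get)

print(greedy_continue("the sky was"))  # -> blue
print(greedy_continue("she felt"))    # -> happy
```

Real systems temper this with sampling and temperature, but low-temperature decoding still collapses toward the high-probability center, which is one way to read the "averaging" complaint above.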
All of this is fine; it's still something new we didn't have a few years ago. Neat, right? Well, my worry is that as more and more people use this, more and more people will be exposed to this "averaging" tool, it will influence their writing, and we will see a whole generation of writers who produce the most cardboard, stilted, generic works we've ever seen.
And I am saying this from experience. I was there when people first started using the internet to roleplay, making characters and scenes and free-form writing as groups. It was wildly fun, and while most of the people involved were not writers, many discovered literature for the first time there. It's part of what led to a sharp increase in book reading, and suddenly giant bookstores like Barnes & Noble were popping up on every corner. They were kids just doing their best, but that charming, terrible narration became a social standard. It's why there are so many atrocious dialogue scenes in shows and movies lately; I can draw a straight line back to where kids learned to write in the '90s. And what's coming next is going to harm human creativity and inspiration in ways I can't even predict.
I am a young person who doesn't read recreationally, and I avoid writing wherever I can. Thank you for sharing your insight as well as sparking an interesting discussion in this thread.
Reading is incredibly important for mental development. It gives your brain the language tools to create abstractions of the world around you, and then to use those abstractions to change perspectives, communicate ideas, and understand your own thoughts and feelings.
It's never too late to start exercising that muscle, and it really is a muscle. A lot of people have a hard time getting started reading later in life because they simply don't have the practice of forming words into images and scenes. But think about how strong it makes your brain when you can turn text into whole vivid worlds, when you can create images and people and situations in your mind to explore the universe around you and invent simulated situations with more accuracy. I cannot scream enough how critically important it is for us to exercise this muscle. I hope you keep looking for things that spark your interest just enough to get a foothold in reading and writing :)
I can confirm that a lot of students' writing has become "averaged," and it seems to have gotten worse this semester. I am not talking about students who clearly used an AI tool; just by proximity or osmosis, the writing feels "cardboardy," devoid of passion or human mistakes.
This is how I was taught to write up through high school. Very "professional," persuasive essays, arguing "objectively" in favor of something or against it. (The assignment seemed to dictate which side I could be on, LOL.) Limit humor and "emotional speech." Cardboard.
I was taken aback in my first political science course at the local community college, where I was instructed to convey my honest arguments about a book assignment on polarization in U.S. politics. "Whether you think it's fantastic or you think it sucks, just make a good case for your opinion." Wait, what?! I get to write like a person?!
I was even more shocked when I got a high mark for reading the first few chapters, skimming the rest, and truthfully summarizing by saying it was plain that the author just kept repeating their main point for like 5 more chapters so they could publish a book, and it stopped being worth the time as that poor horse was already dead by the 3rd chapter.
That's when it hit me: writing really is about communication, not just information.
I worry about that these days: that this realization won't come to most people, and they'll use these AI tools, or be influenced by them, to simply "convey information" that nobody wants to read, get their 85%, and breeze through the rest of their MBA, not caring about what any of this is actually for, or what a beautiful miracle writing truly is for humanity.
That isn't what I mean by cardboard. Persuasive, research, and argumentative essays have always been taught to be written the way you described; they are meant to be that way. But even then, the essays I have read and graded still have this cardboard feel. I have read plenty of research essays where you can feel the emotion, you can surmise the position and most of all passion of the author. That passion and the delicate picking of words and phrases are not there anymore. It is "averaged."
I think we're saying a similar thing, but I understand your point better.
I have read plenty of research essays where you can feel the emotion, you can surmise the position and most of all passion of the author.
Exactly! That's what I mean. There are so many subjects I expected to be incredibly dry, but the writing reminded me it was written by a person who obviously cares about other people reading the text. One can communicate any subject without giving up their soul.
(I am always surprised, but I find this in programming books often, haha.)
But that's what I meant by cardboard as well, I think we might be in agreement:
We expect to see a lot more writing that comes across like "This is what writing should look like, right?"
Writing that understands words, and "averages" the most likely way to convey information or fill a requirement, but doesn't know how to wield language as an art to share ideas with another person.
the writing reminded me it was written by a person who obviously cares about other people reading the text.
This is what's missing from nearly every online argument about AI art that I read: there are rarely people who make the actual argument that the whole purpose of art and writing is to share an experience, to give someone else the experience that the author or artist is feeling.
Even if I look at a really bad poem or a terrible drawing, if the artist was really doing their best to share the image in their head or the feeling they were having when they made it, it will be 1000X more significant and poignant than the output of a machine that crushes the efforts of thousands of people together and averages them out.
Sure, there are billions of people who are content to look at a cool image and think no deeper of it, and who are even annoyed by criticism of AI work, but on some level I think everyone prefers content made by another human trying to share something.
BuT tHE HuMAn BrAin Is A cOmpUteEr.
Edit: people who say this are vegetative lifeforms.
Vegetative electron microscopes!
It immediately demonstrates a lack of both care and understanding of the scientific process.
When I was in grad school I mentioned to the department chair that I frequently saw a mis-citation for an important paper in the field. He laughed and said he was responsible for it. He made an error in the 1980s and people copied his citation from the bibliography. He said it was a good guide to people who cited papers without reading them.
At university, I faked a paper on economics (not actually my branch of study, but easy to fake) and put it on the shelf in their library. It was filled with nonsense formulas that, if one took the time to actually solve the equations properly, would all produce the same number as a result: 19920401 (year of publication, April Fools' Day). I actually got two requests from people who wanted to use my paper as the basis for their thesis.
Congratulations! You are now a practicing economist. This is exactly how that field works.
Guys, can we please call it LLM and not a vague advertising term that changes its meaning on a whim?
Wouldn't it be OCR in this case? At least the scanning?
Yes, but the LLM does the writing. Someone probably carelessly copy pasta'd some text from OCR.
Fair enough, though another possibility I see is that the automated training process for LLMs used OCR on those papers (or an already existing text version on the internet was based on bad OCR), and the papers with the mashed-together term were then written partially or fully by an LLM.
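The OCR failure mode being described can be reproduced in a few lines (the column text below is hypothetical, loosely modeled on how a fused phrase like "vegetative electron microscope" could arise): a reader that goes row by row across a two-column page stitches unrelated phrases together across the gutter, while a layout-aware reader would finish the left column first.

```python
# Sketch: naive row-wise text extraction of a two-column page layout.
# Each list is one column of a (hypothetical) scanned page.
left_column = [
    "cells were in a",
    "vegetative",
    "state during",
]
right_column = [
    "imaged with an",
    "electron microscope",
    "at high voltage",
]

# Row-by-row reading fuses the columns across the gutter...
naive_text = " ".join(f"{l} {r}" for l, r in zip(left_column, right_column))

# ...while column-aware reading keeps each column's phrases intact.
correct_text = " ".join(left_column + right_column)

print("vegetative electron microscope" in naive_text)    # -> True
print("vegetative electron microscope" in correct_text)  # -> False
```

Once a fused phrase like this lands in training data or in a carelessly pasted source, a text generator has no way to know it was never a real term.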
Either way, the blanket term "AI" sucks and it's honestly getting kind of annoying. Same with how much LLMs are used.
It is worthwhile to note that the enzyme did not attack Norris of Leeds university, that would be tragic.
I recently reviewed a paper for a prestigious journal. The paper was clearly from an academic paper mill. It was horrible. They had a small experimental engine and wrote 10 papers about it. Results were all normalized and relative, key test conditions were not even mentioned, everything was described in general terms, and I couldn't even be sure the authors were real (Korean authors; the names were all Park, Kim, and Lee). I hate where we've arrived in scientific publishing.
To be fair, scientific publishing has been terrible for years: a deeply flawed system at multiple levels. Maybe this is the push it needs to reevaluate itself into something better.
And to be even fairer, scientific reviewing hasn't been any better. Back in my PhD days, I had a paper rejected from a prestigious conference for being both too simple and too complex, according to two different reviewers. The reviewer who argued "too simple" also gave an example of a task that supposedly couldn't be achieved, which was clearly achievable.
Goes without saying, I'm not in academia anymore.
People shit on Hossenfelder but she has a point. Academia partially brought this on themselves.
Somehow I briefly got her and Pluckrose reversed in my mind, and was still kinda nodding along.
If you don't know who I mean: Pluckrose and two others produced a batch of hoax papers (likening themselves to the Sokal affair). Of those, 4 were published, 3 were accepted but not yet published, 4 were told to revise and resubmit, and one was under review at the point they were revealed; 9 were rejected, a bit less than half the total (which included both of the papers on autoethnography). The idea was to float papers that were either absurd or kind of horrible, like a study supporting reducing homophobia and transphobia in straight cis men by pegging them (published in Sexuality & Culture), or one that was just a rewrite of a section of Mein Kampf as a feminist text (accepted by Affilia but not yet published when the hoax was revealed).
My personal favorite of the accepted papers was "When the Joke Is on You: A Feminist Perspective on How Positionality Influences Satire" just because of how ballsy it is to spell out what you are doing so obviously in the title. It was accepted by Hypatia but hadn't been published yet when the hoax was revealed.
Do you usually get to see the names of the authors whose papers you are reviewing for a prestigious journal?
I try to avoid reviews, but the editor is a close friend of mine and I'm an expert on the topic. The manuscript was only missing the date.
It is by no spores either
Another basic demonstration on why oversight by a human brain is necessary.
A system rooted in pattern recognition that cannot recognize the basic two-column format of published and printed research papers.
To be fair, the human brain is a pattern recognition system; it's just that the AI developed thus far is shit.
The human brain has a pattern recognition system. It is not just a pattern recognition system.
The issue is that LLM systems are pattern recognition without any logic or awareness. It's pure pattern recognition, so it can easily find patterns that aren't desired.
"Science" under capitalism.
https://theanarchistlibrary.org/library/paul-avrich-what-is-makhaevism
The peer review process should have caught this, so I would assume these scientific articles aren't published in any worthwhile journals.
One of them was in Springer Nature’s Environmental Science and Pollution Research, but it has since been retracted.
The other journals seem less impactful (though I cannot truly judge the merit of journals spanning several research fields).
Wait how did this lead to 20 papers containing the term? Did all 20 have these two words line up this way? Or something else?
AI consumed the original paper, interpreted it as a single combined term, and regurgitated it for researchers too lazy to write their own papers.
Hot take: this behavior should get you blacklisted from contributing to any peer-reviewed journal for life. That's repugnant.
Thank you for highlighting the important part 🙏
The most disappointing timeline.