this post was submitted on 13 Mar 2025
1883 points (100.0% liked)

People Twitter

6983 readers
1459 users here now

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
[–] [email protected] 4 points 2 months ago* (last edited 2 months ago) (8 children)

This, but for Wikipedia.

Edit: Ironically, the downvotes are really driving home the point in the OP. When you aren't an expert in a subject, you're incapable of recognizing the flaws in someone's discussion, whether it's an LLM or Wikipedia. Just like the GPT bros defending the LLM's inaccuracies because they lack the knowledge to recognize them, we've got Wiki bros defending Wikipedia's inaccuracies because they lack the knowledge to recognize them. At the end of the day, neither one is a reliable source of information.

[–] [email protected] 1 points 6 days ago

Well, yes, but also no. Every text is potentially wrong, because authors tend to incorporate their subjectivity into their work. It is only through inter-subjectivity that we can get closer to objectivity. How do we do that? By making our claims open to the scrutiny of others: by citing sources, publishing reproducible code, and making available the data on which we base our claims. Then others can understand how we came to the claim, find the empirical and logical errors in it, and thus formulate very precise criticism. Through this mutual criticism we, as a society, move ever closer to objectivity. This is true for every text whose goal is formulating knowledge instead of just stating opinions.

However, one can safely say that ChatGPT is designed far worse than Wikipedia when it comes to creating knowledge. Why? Because ChatGPT is non-reproducible: every answer is generated differently. The erroneous claim you read in a field you know nothing about may not appear when a specialist in that field asks the same question. This makes errors far more difficult to catch, and thus they "live" far longer in your mind.
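The non-reproducibility point can be sketched in a few lines. This is a toy illustration only (the vocabulary and probabilities are invented, not taken from any real model): chat models sample each token at random from a probability distribution, so the same prompt can yield different answers on different runs, which is exactly what makes an erroneous answer hard to reproduce and catch.

```python
import random

# Toy model of sampled decoding. The three-token vocabulary and the
# probabilities below are invented for illustration only.
VOCAB = ["yes", "no", "maybe"]
PROBS = [0.5, 0.3, 0.2]

def sample_answer(rng, length=5):
    """Draw a short 'answer' token by token, as sampled decoding does."""
    return [rng.choices(VOCAB, weights=PROBS)[0] for _ in range(length)]

# With a fixed seed the draw is reproducible...
print(sample_answer(random.Random(0)) == sample_answer(random.Random(0)))  # True

# ...but two independent runs, like two users asking the same question,
# draw fresh random tokens and will generally differ.
run_a = sample_answer(random.Random(1))
run_b = sample_answer(random.Random(2))
print(run_a, run_b)
```

Wikipedia, by contrast, serves the same article text to every reader, so an error is at least the *same* error for everyone who looks.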

Secondly, Wikipedia is designed around the principle of open contribution. Every error that is discovered by a specialist can be directly corrected. Sure, it might take more time than you expected until your correction is published. On the side of ChatGPT, however, there is no such mechanism whatsoever. Read an erroneous claim? Well, just suck it up, and live with the ambiguity that it may or may not be spread.

So if you catch errors in Wikipedia, go correct them, instead of complaining that there are errors. Duh, we know. But an incredible amount of Wikipedia consists not of erroneous claims but of knowledge open to the entire world, and we can be grateful every day that it exists.

Go read Popper, Karl Raimund. 1980. "Die Logik der Sozialwissenschaften." Pp. 103–23 in Der Positivismusstreit in der deutschen Soziologie, Sammlung Luchterhand. Darmstadt/Neuwied: Luchterhand, if you are interested in the topic.

Sorry if this was formulated a little aggressively; I have no personal animosity against you. I just think it is important to stress that while yes, both ChatGPT and Wikipedia may have their flaws, Wikipedia is nonetheless far better designed for spreading knowledge than ChatGPT, precisely because of the way it handles erroneous claims.

[–] [email protected] 41 points 2 months ago (3 children)

Do not bring Wikipedia into this argument.

Wikipedia is the Library of Alexandria, and the amount of effort people put into keeping Wikipedia pages as accurate as possible should make every LLM supporter ashamed of how inaccurate their models are, given that they use Wikipedia as training data.

[–] [email protected] 11 points 2 months ago (1 children)

TBF, as soon as you move out of the English language, the oversight of a million pairs of eyes gets patchy fast. I have seen credible reports about Wikipedia pages in languages spoken by, say, fewer than 10 million people, where certain elements can easily control the narrative.

But hey, some people always criticize Wikipedia as if there were some actually 100% objective alternative out there, and that I disagree with.

[–] [email protected] 7 points 2 months ago* (last edited 2 months ago)

Fair point.

I don't browse Wikipedia much in languages other than English (mainly because the English pages are the most up to date), but I can imagine there are some pages that simply need to be in other languages. And given the smaller number of people reviewing edits in those languages, they can be manipulated to say whatever the editors want.

I do agree on the last point as well. The fact that literally anyone can edit Wikipedia takes a small portion of the bias element out of the equation, but it is very difficult to avoid some form of bias in any reporting. I mostly use Wikipedia as a knowledge source for scientific topics, which are less likely to have bias in their reporting.

[–] [email protected] 3 points 2 months ago (2 children)

Idk, it says on Wikipedia that Elon Musk is a co-founder of OpenAI, and I haven't found any evidence to suggest he had anything to do with it. Not very accurate reporting.

[–] [email protected] 7 points 2 months ago* (last edited 2 months ago) (1 children)

It is true, though.

The company counts Elon Musk among its cofounders, though he has since cut ties and become a vocal critic of it (while launching his own competitor).

[–] [email protected] 1 points 2 months ago

Paywalled link, but yes, someone pointed that out, and I was surprised that there is such a small pool of info about it. You'd think the wiki would elaborate more on it, or that the OpenAI wiki might detail it. BUT, I haven't read either in their entirety. Just something I saw that wasn't detailed too well.

[–] [email protected] 3 points 2 months ago (1 children)

Isn't co-founder similar to being made partner at a firm? You can kind of buy your way in, even if you weren't one of the real originals.

[–] [email protected] 3 points 2 months ago

That is definitely how I view it. I'm always open to being shown I am wrong, with sufficient evidence, but on this, I believe you are accurate.

[–] [email protected] 1 points 2 months ago (1 children)

With all due respect, Wikipedia's accuracy is incredibly variable. Some articles might be better than others, but a huge number of them (large enough to shatter confidence in the platform as a whole) contain factual errors and undisguised editorial biases.

[–] [email protected] 4 points 2 months ago

It is likely that articles on past social events or individuals will have some bias, as is the case with most articles on those matters.

But, almost all articles on aspects of science are thoroughly peer reviewed and cited with sources. This alone makes Wikipedia invaluable as a source of knowledge.

[–] [email protected] 16 points 2 months ago (1 children)

If this were true, which I doubt, at least Wikipedia tries and has the specific goal of doing better. AI companies largely don't give a hot fuck as long as it works well enough to vacuum up investments or profits.

[–] [email protected] 1 points 2 months ago (1 children)

Your doubts are irrelevant. Just spend some time fact-checking random articles and you will quickly verify for yourself how many inaccuracies are allowed to remain uncorrected for years.

[–] [email protected] 4 points 2 months ago

Small inaccuracies are different from being completely wrong, though.

[–] [email protected] 15 points 2 months ago (1 children)

What topics are you an expert on and can you provide some links to Wikipedia pages about them that are wrong?

[–] [email protected] 3 points 2 months ago (2 children)

I'm a doctor of classical philology, and most of the articles on ancient languages, texts, and history contain errors. I haven't made a list of those articles, because the lesson I took from the experience was simply never to use Wikipedia.

[–] [email protected] 20 points 2 months ago

The fun part about Wikipedia is that you can take your expertise and help correct the information; that's the entire point of the site.

[–] [email protected] 10 points 2 months ago (1 children)

Can you at least link one article and tell us what is wrong about it?

[–] [email protected] 5 points 2 months ago* (last edited 2 months ago)

How do you get a fucking PhD but you can't be bothered to post a single source for your unlikely claims? That person is full of shit.

[–] [email protected] 12 points 2 months ago (2 children)

Why don't you then go and fix these, quoting high-quality sources? Are there none?

[–] [email protected] 3 points 2 months ago (1 children)

There are plenty of high quality sources, but I don't work for free. If you want me to produce an encyclopedia using my professional expertise, I'm happy to do it, but it's a massive undertaking that I expect to be compensated for.

[–] [email protected] 5 points 2 months ago

Many FOSS projects don't have money to pay people.

[–] [email protected] 2 points 2 months ago (1 children)

Because some don't let you. I can't find any way to edit the Elon Musk page or even suggest an edit. It says he is a co-founder of OpenAI, but I can't find any evidence to suggest he had any involvement. Wikipedia says co-founder, tho.

[–] [email protected] 8 points 2 months ago (2 children)

https://openai.com/index/introducing-openai/

https://www.theverge.com/2018/2/21/17036214/elon-musk-openai-ai-safety-leaves-board

Tech billionaire Elon Musk is leaving the board of OpenAI, the nonprofit research group he co-founded with Y Combinator president Sam Altman to study the ethics and safety of artificial intelligence.

The move was announced in a short blog post, explaining that Musk is leaving in order to avoid a conflict of interest between OpenAI’s work and the machine learning research done by Tesla to develop autonomous driving.

He's not involved anymore, but he used to be. It's not inaccurate to say he was a co-founder.

[–] [email protected] 3 points 2 months ago

Interesting! Cheers! I didn't go further than the OpenAI wiki, tbh. It didn't list him there, so I figured it was inaccurate. It turns out it is me who is inaccurate!

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

Ah, but, don't forget that OpenAI is intending to share their models (if not their data too) with the federal government in exchange for special treatment. And you know who's in the government now?

[–] [email protected] 7 points 2 months ago

There's an easy way to settle this debate. Link me a Wikipedia article that's objectively wrong.

I will wait.

[–] [email protected] 4 points 2 months ago

This, but for all media.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

The obvious difference being that Wikipedia has contributors cite their sources, and it can be corrected in ways that LLMs are flat-out incapable of.

Really curious about anything Wikipedia has wrong, though. I can start with something an LLM gets wrong constantly, if you like.