Even worse is calling a proprietary, absolutely closed-source, closed-data and closed-weight company "OpenAI"
Especially after it was founded as a nonprofit with the mission to push open source AI as far and wide as possible, to ensure a multipolar AI ecosystem in which AI keeps other AI in check, so that AI stays respectful and prosocial.
Sorry, that was a PR move from the get-go. Sam Altman doesn't have an altruistic cell in his whole body.
It's even crazier that Sam Altman and other ML devs said years ago that they had reached the peak of what current machine learning models were capable of
But that doesn't mean shit to the marketing departments
“Look at this shiny.”
Investment goes up.
“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”
Investment goes up.
“Look at this shiny.”
Investment goes up.
“Same shiny, but look at it and we need to warn you that we’re developing a shinier one that could harm everyone. But think of how shiny.”
Seems kinda reductive about what makes it different from most other LLMs. Reading the comments, I see the issue is that the training data is why some consider it not open source, but isn't that just trained from the other AIs? It's not why this AI is special. And the way it uses that data, afaik, is open and editable, and the license to use it is open. What's the issue here?
Seems kinda reductive about what makes it different from most other LLMs
The other LLMs aren't open source, either.
isn’t that just trained from the other AI?
Most certainly not. If it were, it wouldn't output coherent text, since LLM output degenerates if you human-centipede its outputs.
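The degeneration mentioned above is often called "model collapse". A toy sketch (illustrative only; a Gaussian stands in for an LLM) of how repeatedly fitting a model to its own samples makes its output distribution collapse:

```python
import random
import statistics

random.seed(0)

mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution
for generation in range(100):
    # Each new "model" is fit only to samples drawn from the previous one.
    samples = [random.gauss(mu, sigma) for _ in range(5)]
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)

print(round(sigma, 4))  # the spread collapses far below the original 1.0
```

With only five samples per generation the collapse is fast; real models degrade far more slowly, but the mechanism is analogous: each generation loses some of the variance of the original data.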
And the way it uses that data, afaik, is open and editable, and the license to use it is open.
From that standpoint, every binary blob should be considered "open source", since the machine instructions are readable in RAM.
It’s just AI haters trying to find any way to disparage AI. They’re trying to be “holier than thou”.
The model weights are data, not code. It’s perfectly fine to call it open source even though you don’t have the means to reproduce the data from scratch. You are allowed to modify and distribute said modifications so it’s functionally free (as in freedom) anyway.
Let's transfer your bullshirt take to the kernel, shall we?
The kernel is instructions, not code. It’s perfectly fine to call it open source even though you don’t have the code to reproduce the kernel from scratch. You are allowed to modify and distribute said modifications so it’s functionally free (as in freedom) anyway.
🤡
Edit: It's more that so-called "AI" stakeholders want to launder its reputation with the "open source" label.
Right. You could train it yourself too. Though its scope would be limited based on capability. But that's not necessarily a bad thing. Taking a class? Feed it your textbook, or other available sources, and it can help you on that subject. Just because it's hard doesn't mean it's not open
The weights aren't the source, they're the output. Modifying the weights is analogous to editing a compiled binary, and the training dataset is analogous to source code.
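A minimal sketch of that analogy (hypothetical toy code, not any real LLM pipeline): "training" maps a dataset plus training code to weights, much like compiling maps source code to a binary, and the dataset can't be recovered from the released weights:

```python
def train(dataset, lr=0.05, steps=500):
    """Gradient-descent fit of y = w*x + b; the dataset is the 'source'."""
    w = b = 0.0
    for _ in range(steps):
        for x, y in dataset:
            err = (w * x + b) - y   # prediction error on one example
            w -= lr * err * x       # nudge the weights to reduce it
            b -= lr * err
    return w, b                     # the released "weights"

# Hypothetical training data sampled from y = 2x + 1.
dataset = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
weights = train(dataset)
print(weights)  # close to (2.0, 1.0); the data itself is gone
```

Shipping only `(w, b)` lets you run and even tweak the model, but many different datasets would produce those same two numbers, which is why weights alone don't play the role source code does.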
You could train it yourself too.
How, without information on the dataset and the training code?
I mean, if it's not directly factually inaccurate, then it is open source. It's just that the specific block of data they used and operate on isn't published or released, which is pretty common even among open source projects.
AI just happens to be in a fairly unique spot where that thing is actually like, pretty important. Though nothing stops other groups from creating an openly accessible one through something like distributed computing. Which seems to be a fancy new kid on the block moment for AI right now.
The running engine and the training engine are open source. The service that uses the model trained with the open source engine and runs it with the open source runner is not, because a biiiig big part of what makes AI work is the trained model, and a big part of the source of a trained model is training data.
When they say open source, 99.99% of the people will understand that everything is verifiable, and it just is not. This is misleading.
As others have stated, a big part of open source development is providing everything so that other users can get the exact same results. This has always been the case in open source ML development: people provide links to their training data for reproducibility. This has been the case with most of the papers on natural language processing (the overarching branch LLMs belong to) that I have read in the past. Both code and training data are provided.
Example in the computer vision world, darknet and yolo: https://github.com/AlexeyAB/darknet
This is the repo with the code to train and run the darknet models, and then they provide pretrained models, called yolo. They also provide links to the original dataset where the yolo models were trained. THIS is open source.
But it is factually inaccurate. We don't call binaries open-source, we don't even call visible-source open-source. An AI model is an artifact just like a binary is.
An "open-source" project that doesn't publish everything needed to rebuild isn't open-source.
The training data would be incredibly big. And it would contain copyright-protected material (which is completely okay in my opinion, but might invite criticism). Hell, it might even be illegal to publish the training data with the copyright-protected material.
They published the weights AND their training methods which is about as open as it gets.
They could disclose how they sourced the training data, what the training data is and how you could source it. Also, did they publish their hyperparameters?
They could just not call it Open Source, if you can't open source it.
For neural nets the method matters more. Data would be useful, but at the amount these things get trained on the specific data matters little.
They can be trained on anything, and a diverse enough data set would end up making it function more or less the same as a different but equally diverse set. Assuming publicly available data is in the set, there would also be overlap.
The training data is also by necessity going to be orders of magnitude larger than the model itself. Sharing becomes impractical at a certain point before you even factor in other issues.
Source - it’s about open source, not access to the database
I mean that's all a model is so.... Once again someone who doesn't understand anything about training or models is posting borderline misinformation about ai.
Shocker
A model is an artifact, not the source. We also don't call binaries "open-source", even though they are literally the code that's executed. Why should these phrases suddenly get turned upside down for AI models?
A model can be represented only by its weights in the same way that a codebase can be represented only by its binary.
Training data is a closer analogue of source code than weights.
Yet another so-called AI evangelist accusing others of not understanding computer science if they don't want to worship their machine god.
Uuuuh… why?
Do you only accept open source code if you can see every key press every developer made?
Open source means you can recreate the binaries yourself. Neither Facebook nor the devs of DeepSeek published which training data they used, nor their training algorithm.
They published the source code needed to run the model. It's open source in the way that anyone can download the model, run it locally, and further build on it.
Training from scratch costs millions.
Open source isn't really applicable to LLM models IMO.
There is open weights (the model), and available training data, and other nuances.
They actually went a step further and provided a very thorough breakdown of the training process, which does mean others could similarly train models from scratch with their own training data. HuggingFace seems to be doing just that as well. https://huggingface.co/blog/open-r1
Edit: see the comment below by BakedCatboy for a more in-depth explanation and correction of a misconception I've made
It's worth noting that OpenR1 have themselves said that DeepSeek didn't release any code for training the models, nor any of the crucial hyperparameters used. So even if you did have suitable training data, you wouldn't be able to replicate it without re-discovering what they did.
OSI specifically makes a carve-out that allows models to be considered "open source" under their open source AI definition without providing the training data, so when it comes to AI, open source is really about providing the code that kicks off training, checkpoints if used, and details about training data curation so that a comparable dataset can be compiled for replicating the results.
They published the source code needed to run the model.
Yeah, but not to train it
anyone can download the model, run it locally, and further build on it.
Yeah, it's about as open source as binary blobs.
Training from scratch costs millions.
So what? You can still glean something if you know the dataset on which the model has been trained.
If software is hard to compile, can you keep the source code closed and still call software "open source"?
It really comes down to this part of the "Open Source" definition:
The source code [released] must be the preferred form in which a programmer would modify the program
A compiled binary is not the format in which a programmer would prefer to modify the program - it's much preferred to have the text file which you can edit in a text editor. Just because it's possible to reverse engineer the binary and make changes by patching bytes doesn't make it count. Any programmer would much rather have the source file instead.
Similarly, the released weights of an AI model are not easy to modify, and are not the "preferred format" that the internal programmers use to make changes to the AI model. They typically make changes to the code that does the training and to the training dataset. So for the purpose of calling an AI "open source", the training code and data used to produce the weights are considered the "preferred format", and are what needs to be released for it to really be open source. Internal engineers also typically use training checkpoints, so that they can roll back the model and redo some of the later training steps without redoing all training from the beginning - this is also considered part of the preferred format if it's used.
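A rough, generic sketch (an assumed workflow, not any specific lab's tooling) of why checkpoints count as part of that preferred form: they let engineers roll back to a mid-training snapshot and redo only the later steps, instead of paying for the whole run again.

```python
import copy

def train_steps(state, steps):
    """Stand-in for expensive training: each step nudges the 'weights'."""
    for _ in range(steps):
        state["weights"] = [w + 0.1 for w in state["weights"]]
        state["step"] += 1
    return state

state = {"weights": [0.0, 0.0], "step": 0}
state = train_steps(state, 1000)
checkpoint = copy.deepcopy(state)   # snapshot at step 1000

state = train_steps(state, 500)     # suppose these later steps went badly

state = copy.deepcopy(checkpoint)   # roll back to the snapshot...
state = train_steps(state, 500)     # ...and redo only the last 500 steps
print(state["step"])  # 1500, without repeating the first 1000
```

Without the checkpoint, the only way back would be retraining from step 0, which for a large model means repeating the bulk of the compute cost.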
OpenR1, which is attempting to recreate R1, notes: No training code was released by DeepSeek, so it is unknown which hyperparameters work best and how they differ across different model families and scales.
I would call "open weights" models just "self-hostable" models instead of open source.
Open Source (generally and for AI) has an established definition.
This is exactly it, open source is not just the availability of the machine instructions, it's also the ability to recreate the machine instructions. Anything less is incomplete.
It strikes me as a variation on the "free as in beer versus free as in speech" line that gets thrown around a lot. These weights allow you to use the model for free and you are free to modify the existing weights but being unable to re-create the original means it falls short of being truly open source. It is free as in beer, but that's it.
it's only open source if the source code is open.
There are lots of problems with the new lingo. We need to come up with new words.
How about “Open Weightings”?
I like how when America does it we call it AI, and when China does it it's just an LLM!
Judging by OP’s salt in the comments, I’m guessing they might be an Nvidia investor. My condolences.
Arguably they are a new type of software, which is why the old categories do not align perfectly. Instead of arguing over how to best gatekeep the old name, we need a new classification system.
k