this post was submitted on 09 Mar 2025
283 points (100.0% liked)


This is another big win for the red team, at least for me. They developed a "fully open" family of 3B-parameter models trained from scratch on AMD Instinct™ MI300X GPUs.

AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png), the model outperforms the other current "fully open" models and comes close to the open-weight-only ones.
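
If you want to try it yourself, here is a minimal sketch using Hugging Face transformers. The repo ID amd/Instella-3B-Instruct and the trust_remote_code flag are assumptions based on how similar releases are published; check AMD's blog post for the exact instructions.

```python
# Minimal sketch: load and query the instruct variant via transformers.
# Assumes the checkpoint is published as "amd/Instella-3B-Instruct".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/Instella-3B-Instruct"  # assumed repo ID; verify on the HF hub

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps a 3B model under ~7 GB
    device_map="auto",           # place layers on the available GPU(s)
    trust_remote_code=True,      # assumed: the release may ship custom model code
)

prompt = "What makes a language model 'fully open'?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```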

A step forward; thank you, AMD.

PS: I'm not doing AMD propaganda, but I do thank them for helping and contributing to the open-source world.

top 40 comments
[–] TheGrandNagus@lemmy.world 122 points 2 weeks ago (1 children)

Properly open source.

The model, the weights, the dataset, etc.: every part of this seems to be open. One of the very few models that comply with the Open Source Initiative's definition of open-source AI.

[–] foremanguy92_@lemmy.ml 20 points 2 weeks ago

Look at the picture in my post.

There were other open models before, but they were far below the "fake" open-source models like Gemma or Llama. Instella is almost at the same level, which is a great improvement.

[–] SnotFlickerman@lemmy.blahaj.zone 35 points 2 weeks ago (1 children)

3B

That's one more than 2B so she must be really hot!

/nierjokes

AMD knew what they were doing.

[–] altkey@lemmy.dbzer0.com 7 points 2 weeks ago

Can't judge you for wanting to **** her or whatever, just don't ask her for freebies. She won't care if you are a human at that point.

[–] 1rre@discuss.tchncs.de 15 points 2 weeks ago (1 children)

Every AI model outperforms every other model in the same weight class when you cherry-pick the metrics... Although it's always good to have more to choose from.

[–] foremanguy92_@lemmy.ml 7 points 2 weeks ago

I shared this AI because it's one of the best fully open-source AIs.

[–] Zarxrax@lemmy.world 13 points 2 weeks ago (2 children)

And we are still waiting on the day when these models can actually be run on AMD GPUs without jumping through hoops.

[–] grue@lemmy.world 18 points 2 weeks ago

In other words, waiting for the day when antitrust law is properly applied against Nvidia's monopolization of CUDA.

[–] foremanguy92_@lemmy.ml 3 points 2 weeks ago

That is an improvement: if the model is trained properly with ROCm, it should be easier to run on AMD GPUs.
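
For what it's worth, PyTorch's ROCm builds expose AMD GPUs through the same torch.cuda API, so a quick check like this sketch works on both vendors:

```python
# Sanity check: on a ROCm build of PyTorch, AMD GPUs show up through
# the regular torch.cuda API, so NVIDIA-targeted code usually runs as-is.
import torch

if torch.cuda.is_available():
    print(f"GPU found: {torch.cuda.get_device_name(0)}")
    # torch.version.hip is a version string on ROCm builds, None on CUDA builds.
    print(f"Backend: {'ROCm/HIP' if torch.version.hip else 'CUDA'}")
else:
    print("No GPU visible; falling back to CPU.")
```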

[–] art@lemmy.world 12 points 2 weeks ago (2 children)

Help me understand how this is Open Source? Perhaps I'm missing something, but this is Source Available.

[–] foremanguy92_@lemmy.ml 24 points 2 weeks ago

Unlike the traditional open models (like Llama, Qwen, Gemma...) that are only open-weight, this model says that it has:

Fully open-source release of model weights, training hyperparameters, datasets, and code

That makes it different from other big-tech "open" models. Though other "fully open" models do exist, like GPT-Neo and more.

[–] frezik@midwest.social 3 points 2 weeks ago

The source code for these models is almost too boring to care about. Training data and weights are what really matter.

[–] A_A@lemmy.world 11 points 2 weeks ago (1 children)

Nice and open source. Similar performance to Qwen 2.5.
(also ... https://www.tomsguide.com/ai/i-tested-deepseek-vs-qwen-2-5-with-7-prompts-heres-the-winner ← tested DeepSeek vs Qwen 2.5 ... )
→ Qwen 2.5 is better than DeepSeek.
So, looks good.

[–] foremanguy92_@lemmy.ml 1 points 2 weeks ago

Don't know if this test is a good representation of the two AIs, but in this case it seems pretty promising; the only thing missing is a higher-parameter model.

[–] MITM0@lemmy.world 6 points 2 weeks ago

I'll be bookmarking the website. Thank you!

[–] BitsAndBites@lemmy.world 4 points 2 weeks ago (3 children)

Nice. Where do I find the memory requirements? I have an older 6GB GPU so I've been able to play around with some models in the past.

[–] Danitos@reddthat.com 6 points 2 weeks ago

No direct answer here, but my tests with models from HuggingFace measured about 1.25GB of VRAM per 1B parameters.

Your GPU should be fine if you want to play around.
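
If you want to turn that rule of thumb into numbers, here is a rough sketch; the bytes-per-parameter values are assumptions for common precisions, not measurements:

```python
# Back-of-the-envelope VRAM estimate: parameters x bytes per parameter,
# times a fudge factor for KV cache and runtime overhead. With the defaults
# this reproduces the ~1.25 GB per 1B parameters observed above.

def estimate_vram_gb(params_billions: float,
                     bytes_per_param: float = 1.0,
                     overhead: float = 1.25) -> float:
    """Rough VRAM needed for inference, in GB."""
    return params_billions * bytes_per_param * overhead

# A 3B model at a few common precisions, on a 6 GB card:
for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"3B @ {label}: ~{estimate_vram_gb(3, bpp):.1f} GB")
# fp16 (~7.5 GB) would not fit in 6 GB, but int8 (~3.8 GB) and
# int4 (~1.9 GB) quantized builds should run comfortably.
```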

[–] ikidd@lemmy.world 5 points 2 weeks ago

LM Studio usually lists the memory recommendations for the model.

[–] foremanguy92_@lemmy.ml 2 points 2 weeks ago

According to this page, it should be enough, going by the requirements of Qwen2.5-3B: https://qwen-ai.com/requirements/

[–] humanspiral@lemmy.ca 4 points 1 week ago (1 children)

OpenCL isn't mentioned, so this is most likely raw hardware-level code. Maybe no one else cares, but higher-level code means more portability.

[–] foremanguy92_@lemmy.ml 1 points 1 week ago (1 children)

What is the link with ROCm?

[–] humanspiral@lemmy.ca 3 points 1 week ago

AMD uses OpenCL as its high-level API. Nvidia and Intel also support it; Chinese cards might too. Very few LLMs use high-level APIs such as CUDA or OpenCL.

[–] Canadian_Cabinet@lemmy.ca 3 points 2 weeks ago (1 children)

I know it's not the point of the article, but man, that AI-generated image looks bad. Like, who approved that?

[–] foremanguy92_@lemmy.ml 1 points 2 weeks ago

Oh yeah you're right :-)

[–] Ulrich@feddit.org 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I don't know why open sourcing malicious software is worthy of praise but okay.

[–] domi@lemmy.secnd.me 18 points 2 weeks ago (1 children)

I'll bite, what is malicious about this?

[–] Ulrich@feddit.org 1 points 2 weeks ago (2 children)

What's malicious about AI and LLMs? Have you been living under a rock?

At best it is useless, and at worst it is detrimental to society.

[–] domi@lemmy.secnd.me 18 points 2 weeks ago (1 children)

I disagree, LLMs have been very helpful for me and I do not see how an open source AI model trained with open source datasets is detrimental to society.

[–] Ulrich@feddit.org 1 points 2 weeks ago (1 children)

I don't know what to say other than pull your head outta the sand.

[–] sugar_in_your_tea@sh.itjust.works 12 points 2 weeks ago (1 children)

No you.

Explain your exact reasons for thinking it's malicious. There's a lot of FUD surrounding "AI," a lot of which comes from unrealistic marketing BS and poor choices by C-suite types that have nothing to do with the technology itself. If you can describe your concerns, maybe I or others can help clarify things.

[–] frezik@midwest.social 5 points 2 weeks ago (1 children)

These models are trained on human creations with the express intent to drive out those same human creators. There is no social safety net available so those creators can maintain a reasonable living standard without selling their art. It won't even work--the models aren't good enough to replace these jobs, but they're good enough to fool the C-suite into thinking they can--but they'll do lots of damage in the attempt.

The issues are primarily social, not technical. In a society that judges itself on how well it takes care of the needs of everyone, I would have far less of an issue with it.

[–] sugar_in_your_tea@sh.itjust.works 2 points 2 weeks ago (1 children)

The issues are primarily social, not technical.

Right, and having a FOSS alternative is certainly a good thing.

I think it's important to separate opposition to AI policy from a specific implementation. If your concerns are related to the social impact of a given technology, that is where the opposition should go, not toward the technology itself.

That said, this is largely similar to opposition to other types of technological change. Every time a significant change in technology comes about, there is a significant impact to jobs. The printing press destroyed the livelihood of scribes, but it made books dramatically cheaper, which created new jobs for typesetters, booksellers, etc. The automobile dramatically cut back jobs like farriers, stable hands, etc, but created new jobs for drivers, mechanics, etc. I'm sure each of those large shifts in technology also had an overreaction by business owners as they adjusted to the new normal. It certainly sucks for those impacted, but it tends to benefit those who can quickly adapt and make use of the new technology.

So I totally understand the hesitation around AI, especially given the overreaction by C-suites in gutting their workforce based on the promises made by AI marketing teams. However, that has nothing to do with the technology itself, but with the social issues around it. Instead of hating AI in general, redirect that anger onto the actual problems:

  • poor social safety net
  • expensive education
  • lack of consequences for false marketing
  • lack of consequences for C-suite mistakes

Hating on a FOSS model just because it's related to an industry that is seeing abuse is the wrong approach.

[–] frezik@midwest.social 0 points 1 week ago (1 children)

Was there anything in the posts above mine that suggest this was a technical issue, or did you read that in as an assumption?

Every time a significant change in technology comes about, there is a significant impact to jobs. The printing press destroyed the livelihood of scribes, but it made books dramatically cheaper, which created new jobs for typesetters, booksellers, etc.

Take a look at the history of the first people called "Luddites". They were early socialists focusing on the dismal working conditions that new automation would bring to the workers. And they were correct.

Not every technological change is good. Our society has defaulted to saying yes to every change, and it's caused a whole lot of problems.

[–] sugar_in_your_tea@sh.itjust.works 1 points 1 week ago (1 children)

Was there anything in the posts above mine that suggest this was a technical issue, or did you read that in as an assumption?

I was responding both to you and to the parent to your comment and making it clear that it's not a technical issue. I'm agreeing with you.

And they were correct.

I disagree.

Yes, not every technological change is good, we can look at Social Media as a shining example of that. However, technological change is usually inevitable, especially if you value freedom in your society, so it's a lot better to solve the issues that surround it than ban it.

[–] frezik@midwest.social 1 points 1 week ago (1 children)

There is absolutely nothing inevitable about technological change. We think that way because of the specific place we are in history. A specific place that is an aberration in how fast those changes have come. For the most part, humans throughout history have used much the same techniques and tools that their parents did.

You also can't separate AI technology from the social change. They're not dumping billions into data centers and talking about using entire nuclear reactors to power them just because they think AI is a fun toy.

[–] sugar_in_your_tea@sh.itjust.works 1 week ago

A specific place that is an aberration in how fast those changes have come

That's really hard to quantify, but yeah, innovation is probably happening faster today than it has in the past, which is likely due to:

  • increased connectivity - more people have access to advanced technology
  • lower barriers to trade - despite Trump's best efforts, trade/competition between countries still happens
  • better access to education

People generally fear change, and change comes with work. Just because you were screwing on toothpaste caps in a factory yesterday doesn't mean that job will make sense forever. Nor should it. Jobs that don't need to be done by humans shouldn't, and people should instead take more useful and fulfilling jobs.

But sometimes people get caught in the crossfire, such as creative people having to compete with machines that can churn out decent, derivative works far more quickly. But that just means that the nature of work will change. If we use the printing press eliminating scribe jobs as an example, people have largely moved from reproducing text to designing new typefaces for branding purposes (or being commissioned for a calligraphy piece).

I think the same is happening w/ art right now. Traditional, 9-5 artists producing largely derivative work is going away, because most people don't need something truly original. So the number of artists will go down, but the truly great artists will still have a place in creating original works and innovating new types of art. We will still need people with an artistic eye to tune what the AI produces, so instead of manually creating the art, they'll guide the art w/ tools, much like how farmers don't hoe fields manually and instead use tractors (which will become increasingly autonomous as time passes).

I've gotten into chess recently, and chess is a game that is largely "solved" by AI, meaning the best bot will beat or tie the best human player every time. There's still some competition between the best bots, but bot v human is pretty firmly in the bot camp and has been for years. However, chess is still a vibrant sport, and people still earn a living playing it (and perhaps more than ever!). It turns out we value the human aspect of chess, and I don't see that changing anytime soon. I think the same applies to art and other fields AI can "replace," because that human touch still very much has value.

If you fight technology, you will lose. So instead of that, fight for fairness and opportunity.

They’re not dumping billions into data centers and talking about using entire nuclear reactors to power them just because they think AI is a fun toy.

Well yeah, they're doing it because they think it'll make us more productive. For a business owner/exec, that means higher profits. For the rest of us, that usually means higher inflation-adjusted incomes (either through increased wages or reduced costs).

[–] MITM0@lemmy.world 6 points 2 weeks ago (1 children)

So in a nutshell, it's malicious because you said so

Ok gotcha Mr/Ms/Mrs TechnoBigot

[–] Ulrich@feddit.org 1 points 2 weeks ago (1 children)

Yes, that's totally what I said.

[–] MITM0@lemmy.world 5 points 2 weeks ago

Something we all agree on

[–] MonkderVierte@lemmy.ml 1 points 2 weeks ago

It's about AI.