this post was submitted on 09 Mar 2025
260 points (98.1% liked)

Technology


This is another big win for the red team, at least for me. They developed a "fully open" 3B-parameter model family trained from scratch on AMD Instinct™ MI300X GPUs.

AMD is excited to announce Instella, a family of fully open state-of-the-art 3-billion-parameter language models (LMs) [...]. Instella models outperform existing fully open models of similar sizes and achieve competitive performance compared to state-of-the-art open-weight models such as Llama-3.2-3B, Gemma-2-2B, and Qwen-2.5-3B [...].

As shown in this image (https://rocm.blogs.amd.com/_images/scaling_perf_instruct.png), this model outperforms other current "fully open" models and comes close to open-weight-only models.

A step forward. Thank you, AMD.

PS: I'm not doing AMD propaganda, but I thank them for helping and contributing to the open-source world.

top 43 comments
[–] humanspiral@lemmy.ca 1 points 4 hours ago (1 children)

OpenCL isn't mentioned, so this is most likely raw hardware-level code. Maybe no one else cares, but higher-level code means more portability.

[–] foremanguy92_@lemmy.ml 1 points 4 hours ago (1 children)

What is the link with ROCm?

[–] humanspiral@lemmy.ca 1 points 3 hours ago

AMD uses OpenCL as its high-level API. Nvidia and Intel also support it, and Chinese cards might too. Very few LLMs use high-level APIs such as CUDA or OpenCL.

[–] MITM0@lemmy.world 6 points 9 hours ago

I'll be bookmarking the website. Thank you!

[–] 1rre@discuss.tchncs.de 13 points 14 hours ago (1 children)

Every AI model outperforms every other model in the same weight class when you cherry-pick the metrics... although it's always good to have more to choose from.

[–] foremanguy92_@lemmy.ml 7 points 10 hours ago

I've shared this AI because it's one of the best fully open-source AIs.

[–] TheGrandNagus@lemmy.world 112 points 1 day ago (1 children)

Properly open source.

The model, the weights, the dataset, etc.: every part of this seems to be open. It's one of the very few models that complies with the Open Source Initiative's definition of open-source AI.

[–] foremanguy92_@lemmy.ml 18 points 1 day ago

Look at the picture in my post.

There were other open models, but they were far below the "fake" open-source models like Gemma or Llama. Instella is almost at the same level, which is a great improvement.

[–] SnotFlickerman@lemmy.blahaj.zone 27 points 1 day ago (2 children)

3B

That's one more than 2B, so she must be really hot!

/nierjokes

AMD knew what they were doing.

[–] altkey@lemmy.dbzer0.com 7 points 19 hours ago

Can't judge you for wanting to **** her or whatever, just don't ask her for freebies. She won't care if you are a human at that point.

[–] greybeard@lemmy.one 21 points 1 day ago (1 children)

That's a real stretch. 3B basically states the size of the model, not its name.

[–] art@lemmy.world 9 points 20 hours ago (2 children)

Help me understand how this is open source? Perhaps I'm missing something, but this looks like source-available.

[–] frezik@midwest.social 3 points 8 hours ago

The source code of these models is almost too boring to care about. The training data and weights are what really matter.

[–] foremanguy92_@lemmy.ml 22 points 16 hours ago

Unlike the traditional open models (like Llama, Qwen, Gemma...) that are only open-weight, this model claims:

Fully open-source release of model weights, training hyperparameters, datasets, and code

That makes it different from other big-tech "open" models. Though other "fully open" models exist too, like GPT-Neo and more.

[–] A_A@lemmy.world 10 points 23 hours ago (1 children)

Nice and open source. Similar performance to Qwen 2.5.
(also ... https://www.tomsguide.com/ai/i-tested-deepseek-vs-qwen-2-5-with-7-prompts-heres-the-winner ← tested DeepSeek vs Qwen 2.5 ... )
→ Qwen 2.5 is better than DeepSeek.
So, this looks good.

[–] foremanguy92_@lemmy.ml 1 points 16 hours ago

Don't know if this test is a good representation of the two AIs, but in this case it seems pretty promising. The only thing missing is a higher-parameter model.

[–] Zarxrax@lemmy.world 10 points 1 day ago (2 children)

And we are still waiting on the day when these models can actually be run on AMD GPUs without jumping through hoops.

[–] grue@lemmy.world 14 points 21 hours ago

In other words, waiting for the day when antitrust law is properly applied against Nvidia's monopolization of CUDA.

[–] foremanguy92_@lemmy.ml 2 points 16 hours ago

That is an improvement: if the model is properly trained with ROCm, it should run on AMD GPUs more easily.

[–] BitsAndBites@lemmy.world 4 points 21 hours ago (3 children)

Nice. Where do I find the memory requirements? I have an older 6GB GPU so I've been able to play around with some models in the past.

[–] ikidd@lemmy.world 5 points 19 hours ago

LMstudio usually lists the memory recommendations for the model.

[–] Danitos@reddthat.com 6 points 20 hours ago

No direct answer here, but in my tests with models from HuggingFace, I measured about 1.25 GB of VRAM per 1B parameters.

Your GPU should be fine if you want to play around.
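As a back-of-the-envelope check, the ~1.25 GB per 1B parameters figure above can be turned into a quick estimator. Note the ratio is this commenter's own measurement (roughly fp16 weights plus overhead), not an official spec:

```python
# Rough VRAM estimate for running an LLM, using the ~1.25 GB per
# 1B parameters ratio measured in the comment above (an assumption,
# not a spec; actual usage varies with quantization and context size).

def estimate_vram_gb(params_billion: float, gb_per_billion: float = 1.25) -> float:
    """Return an approximate VRAM requirement in GB."""
    return params_billion * gb_per_billion

def fits(params_billion: float, vram_gb: float) -> bool:
    """Check whether a model likely fits on a GPU with vram_gb of memory."""
    return estimate_vram_gb(params_billion) <= vram_gb

# A 3B model like Instella needs roughly 3 * 1.25 = 3.75 GB,
# so a 6 GB card should be fine for inference.
print(estimate_vram_gb(3))  # 3.75
print(fits(3, 6))           # True
```

By this estimate, an 8B model (~10 GB) would already be too big for a 6 GB card unless it were quantized.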

[–] foremanguy92_@lemmy.ml 2 points 16 hours ago

According to this page, it should be enough, based on the requirements of Qwen2.5-3B: https://qwen-ai.com/requirements/

[–] MonkderVierte@lemmy.ml -1 points 13 hours ago

It's about AI.

[–] Canadian_Cabinet@lemmy.ca 3 points 1 day ago (1 children)

I know it's not the point of the article, but man, that AI-generated image looks bad. Who approved that?

[–] foremanguy92_@lemmy.ml 1 points 16 hours ago

Oh yeah you're right :-)