this post was submitted on 22 Feb 2024
236 points (93.1% liked)

Scientists at Princeton University have developed an AI model that can predict and prevent plasma instabilities, a major hurdle in achieving practical fusion energy.

Key points:

  • Problem: Plasma escaping containment in donut-shaped tokamak reactors disrupts fusion reactions and damages equipment.
  • Solution: The AI model predicts instabilities 300 milliseconds before they happen, allowing for adjustments to keep the plasma contained (a rough sketch of this loop follows the list).
  • Significance: This is the first time AI has been used to proactively prevent tearing instabilities in fusion experiments.
  • Future: Researchers hope to refine the model for other reactors and optimize fusion reactions.
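
For readers wondering what "predict, then adjust" might look like in practice, here is a minimal sketch of such a control loop. Everything in it (the DummyRiskModel, the risk threshold, the beam_power and pressure_target knobs) is an illustrative assumption, not the Princeton team's actual model or controller; only the 300 ms lead time comes from the article.

```python
# Hypothetical sketch, not the Princeton group's code: names, thresholds,
# and the actuator knobs below are illustrative assumptions. It shows the
# general shape of a predict-then-adjust loop with a ~300 ms lead time.

import numpy as np

PREDICTION_HORIZON_MS = 300    # lead time stated in the article
TEARING_RISK_THRESHOLD = 0.5   # assumed cutoff for taking action


class DummyRiskModel:
    """Stand-in for a trained predictor; returns a risk score in [0, 1]."""

    def predict_risk(self, diagnostics: np.ndarray) -> float:
        # A real model would be trained on past tokamak shots; here we just
        # squash the mean diagnostic value into [0, 1] for illustration.
        return float(1.0 / (1.0 + np.exp(-diagnostics.mean())))


def control_step(model, diagnostics: np.ndarray, actuators: dict) -> dict:
    """If the predicted tearing risk within the horizon is high, nudge the
    actuators (assumed here to be beam power and a pressure target)
    toward a safer operating point; otherwise leave them unchanged."""
    risk = model.predict_risk(diagnostics)
    if risk > TEARING_RISK_THRESHOLD:
        actuators = dict(actuators)
        actuators["beam_power"] *= 0.95       # assumed corrective action
        actuators["pressure_target"] *= 0.97  # assumed corrective action
    return actuators


if __name__ == "__main__":
    model = DummyRiskModel()
    diagnostics = np.array([0.8, 1.2, 0.5])   # made-up sensor readings
    actuators = {"beam_power": 2.0e6, "pressure_target": 1.0}
    print(control_step(model, diagnostics, actuators))
```

The point of the lead time is that the controller gets to act before the tearing mode fully develops, rather than reacting after containment is already lost.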
[–] webghost0101@sopuli.xyz 3 points 8 months ago (1 children)

I would hope scientific experts understand the nature of their work well enough to know when it's hallucinating.

I use AI for coding, and sure, it can hallucinate horrible code, but I wouldn't copy it without reading through the logic.

[–] SkyeStarfall@lemmy.blahaj.zone 2 points 8 months ago (1 children)

This AI won't hallucinate because it's not an LLM

LLMs are not the only form of AI or machine learning.

[–] webghost0101@sopuli.xyz 1 points 8 months ago

I know, but it remains applicable to LLMs in general, which is what worries most people when they read "AI". And it's not unlikely that politicians and doctors may be using those soon.

Machine learning as a tool for science is about as safe as science or AI gets.