[-] Fern@lemmy.world 8 points 6 days ago

Quite unnerving

[-] Fern@lemmy.world 8 points 2 weeks ago

More would be great. What sort of arguments did you make? Were you discussing the science?

[-] Fern@lemmy.world 3 points 2 weeks ago* (last edited 2 weeks ago)

Forgive me for being suspicious of your comment. There is a huge anti-vegan bias in society, and many argue against veganism, not in good faith. Can you provide any examples of the mods doing this?

[-] Fern@lemmy.world 1 points 2 weeks ago

Have you looked at any science on this issue? Or are you just using cOmMoN SeNsE to decide who should have their brain removed?

[-] Fern@lemmy.world 2 points 3 weeks ago

Doh! Didn't mean to link to a specific time in the video.

84
submitted 3 weeks ago by Fern@lemmy.world to c/technology@lemmy.world

Even though I'm on here, I honestly lurk most of the time and don't fully understand the ActivityPub vs AT Protocol war. This was a great explainer that will reach a lot of people. I really appreciate a lot of David's takes. I hope David and others at MKBHD become aware of Lemmy and talk about it soon too.

[-] Fern@lemmy.world 4 points 3 weeks ago* (last edited 3 weeks ago)

Definitely. The thing you might want to consider as well is what you are using it for. Is it professional? Not reliable enough. Is it to try to understand things a bit better? Well, it's hard to say if it's reliable enough, but it's heavily biased just as any source might be, so you have to take that into account.

I don't have the experience to tell you how to suss out its biases. Sometimes you can push it in one direction or another with your wording, or with follow-up questions. Hallucinations are a thing, but they're not the only concern: there's also cherry-picking, lack of expertise, the bias of the company behind the LLM, what data the LLM was trained on, etc.

I have a hard time understanding what a good way to double-check your LLM is. I think this is a skill we are currently learning, the same way we learned to suss out the bias in a headline or an article based on its author, publication, platform, etc. But for LLMs, it feels fuzzier right now. For certain issues, it may be less reliable than for others as well. Anyways, that's my ramble on the issue. Wish I had a better answer; if only I could ask someone smarter than me.


Oh, here's GPT-4o's take.

When considering the accuracy and biases of large language models (LLMs) like GPT, there are several key factors to keep in mind:

1. Training Data and Biases

  • Source of Data: LLMs are trained on vast amounts of data from the internet, books, articles, and other text sources. The quality and nature of this data can greatly influence the model's output. Biases present in the training data can lead to biased outputs. For example, if the data contains biased or prejudiced views, the model may unintentionally reflect these biases in its responses.
  • Historical and Cultural Biases: Since data often reflects historical contexts and cultural norms, models might reproduce or amplify existing stereotypes and biases related to gender, race, religion, or other social categories.

2. Accuracy and Hallucinations

  • Factual Inaccuracies: LLMs do not have an understanding of facts; they generate text based on patterns observed during training. They may provide incorrect or misleading information if the topic is not well represented in their training data or if the data is outdated.
  • Hallucinations: LLMs can "hallucinate" details, meaning they can generate plausible-sounding information that is entirely fabricated. This can occur when the model attempts to fill in gaps in its knowledge or when asked about niche or obscure topics.

3. Context and Ambiguity

  • Understanding Context: While LLMs can generate contextually appropriate responses, they might struggle with nuanced understanding, especially in cases where subtle differences in wording or context significantly change the meaning. Ambiguity in a prompt or query can lead to varied interpretations and outputs.
  • Context Window Limitations: LLMs have a fixed context window, meaning they can only "remember" a certain amount of preceding text. This limitation can affect their ability to maintain context over long conversations or complex topics.
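To make the context-window point concrete, here's a toy Python sketch. The whole-word "tokens" and the tiny 8-token limit are made up for illustration; real models use proper tokenizers and windows of thousands of tokens, but the forgetting behavior is the same in spirit:

```python
def truncate_context(messages, max_tokens=8):
    """Keep only the most recent messages that fit in a fixed token window."""
    kept, used = [], 0
    for msg in reversed(messages):      # walk newest -> oldest
        tokens = len(msg.split())       # crude whole-word "tokenizer"
        if used + tokens > max_tokens:
            break                       # everything older falls out of the window
        kept.append(msg)
        used += tokens
    return list(reversed(kept))         # restore chronological order

history = ["my name is Fern", "I like popcorn", "what is my name?"]
# With an 8-token window, the oldest message is dropped, so the
# "model" no longer has access to the name it was told.
print(truncate_context(history))  # ['I like popcorn', 'what is my name?']
```

The point of the sketch: nothing is "forgotten" intelligently; older text simply falls outside the fixed window and stops influencing the output.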

4. Updates and Recency

  • Outdated Information: Because LLMs are trained on static datasets, they may not have up-to-date information about recent events, scientific discoveries, or new societal changes unless explicitly fine-tuned or updated.

5. Mitigating Biases and Ensuring Accuracy

  • Awareness and Critical Evaluation: Users should be aware of potential biases and inaccuracies and approach the output critically, especially when discussing sensitive or fact-based topics.
  • Diverse and Balanced Data: Developers can mitigate biases by training models on more diverse and balanced datasets and employing techniques such as debiasing algorithms or fine-tuning with carefully curated data.
  • Human Oversight and Expertise: Where high accuracy is critical (e.g., in legal, medical, or scientific contexts), human oversight is necessary to verify the information provided by LLMs.

6. Ethical Considerations

  • Responsible Use: Users should consider the ethical implications of using LLMs, especially in contexts where biased or inaccurate information could cause harm or reinforce stereotypes.

In summary, while LLMs can provide valuable assistance in generating text and answering queries, their accuracy is not guaranteed, and their outputs may reflect biases present in their training data. Users should use them as tools to aid in tasks, but not as infallible sources of truth. It is essential to apply critical thinking and, when necessary, consult additional reliable sources to verify information.

[-] Fern@lemmy.world 34 points 1 month ago

Fitness goals.

[-] Fern@lemmy.world 8 points 1 month ago

Actually, popcorn is a surprisingly good source of fiber. It's a pretty healthy snack if you avoid the butter/oil/salt.

[-] Fern@lemmy.world 3 points 1 month ago* (last edited 1 month ago)

I'm afraid oils are pretty dang unhealthy. See 1, 2, 3, and more here.

[-] Fern@lemmy.world 72 points 1 month ago

Now this is what I assume is usually happening on 4chan.

[-] Fern@lemmy.world 8 points 1 month ago

Woah, when and how did you learn that?

20
submitted 2 months ago by Fern@lemmy.world to c/science@lemmy.world

Lemmyversers, I'm looking for some help developing a new mnemonic device.

Inspired by a video by Epic Spaceman, where he explains a handy system for comparing the size of things from a banana to an atom, I’ve come up with a mnemonic device to aid in remembering these scales.

He lists items, each smaller than the previous by a factor of 10. It goes:

  • Banana
  • Coin
  • Edge of the coin
  • Waterbear/microorganism
  • Red blood cell
  • Bacteria
  • "Good virus"/Bacteriophage
  • Coronavirus/"Bad virus"
  • DNA
  • Atom

So a coin is roughly 1/10 the size of a banana, and the edge of that coin is roughly 1/10 the size of that coin.

It gives good reference points for thinking about other things of similar size. A sort of banana for scale at each factor of 10.

And it allows you to quickly make approximations, like: Covid is roughly 1,000 times smaller than a red blood cell, or an atom is roughly 1 billion times smaller than a banana. (That doesn't sound right. Is that actually right?)
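A quick sanity check on those ratios, sketched in Python. The ~0.2 m banana length is an assumed round number; everything else just follows from the list above, where each item is 10x smaller than the one before:

```python
# The chain from the post: each item is a factor of 10 smaller than the last.
items = ["banana", "coin", "coin edge", "waterbear", "red blood cell",
         "bacterium", "bacteriophage", "coronavirus", "DNA", "atom"]

banana_m = 0.2  # assumed banana length in metres (a round number for illustration)
sizes = {name: banana_m / 10**i for i, name in enumerate(items)}

# Coronavirus is 3 steps below red blood cell -> ~1000x smaller
print(sizes["red blood cell"] / sizes["coronavirus"])  # ~1000

# Atom is 9 steps below banana -> ~10^9, i.e. a billion times smaller
print(sizes["banana"] / sizes["atom"])  # ~1e9
```

So yes, with 9 steps of a factor of 10 between banana and atom, "a billion times smaller" checks out (and a ~0.2 m banana would put the atom at ~0.2 nm, which is the right ballpark for atomic diameters).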

Do you think that's a useful memory tool? And are these the best touchstones for scale at each level?

The mnemonic I've come up with for it, as you may have guessed, is:

  • Be
  • Cool
  • Even
  • When
  • Really
  • Big
  • Goblins
  • Casually
  • Drop
  • Acid

Do you have any better ideas or tweaks you'd recommend for the mnemonic or the touchstones?

Would this be helpful when trying to wrap your head around the scale of the micro?

Also, what would make for a good macro version of this, where everything gets bigger by a factor of 10 at each step?


Fern

joined 1 year ago