this post was submitted on 24 Aug 2023
559 points (94.4% liked)

Driverless cars worse at detecting children and darker-skinned pedestrians, say scientists

Researchers call for tighter regulations following major age- and race-based discrepancies in AI autonomous systems.

[–] eager_eagle@lemmy.world 23 points 1 year ago* (last edited 1 year ago) (2 children)

I hate all this bias bullshit because it blows the problem out of proportion and gives the general public the wrong idea.

A pedestrian detection system shouldn't have equal detection across skin tones and pedestrian sizes as its goal. There's no benefit in that. It should do its best to reduce the false negative rate of pedestrian detection across the board, and hopefully do better than human drivers in the majority of scenarios. The error rates will differ due to the very nature of the task, and that's OK.
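
To be concrete about the metric I mean, here's a minimal Python sketch computing the false negative rate of a detector, overall and per group. The group labels and outcomes are made up purely for illustration:

```python
# Sketch: computing false negative rates (FNR) for a pedestrian detector.
# A false negative = a real pedestrian the system failed to detect.
# Groups and outcomes below are hypothetical, for illustration only.

detections = [
    # (group, ground_truth_pedestrian, detected)
    ("adult_light", True, True),
    ("adult_light", True, True),
    ("adult_dark",  True, False),
    ("adult_dark",  True, True),
    ("child",       True, False),
    ("child",       True, True),
]

def false_negative_rate(samples):
    """FNR = missed pedestrians / actual pedestrians."""
    actual = [s for s in samples if s[1]]
    missed = [s for s in actual if not s[2]]
    return len(missed) / len(actual) if actual else 0.0

print("overall FNR:", false_negative_rate(detections))
for group in {s[0] for s in detections}:
    subset = [s for s in detections if s[0] == group]
    print(group, "FNR:", false_negative_rate(subset))
```

The point is that the number worth driving down is the overall FNR, not the difference between the per-group lines.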

This is largely what happens during research, but the media loves to stir up polarization and the public rewards it with clicks. Pushing for a "reduced bias model" is actually detrimental to overall performance, because it incentivizes models that perform worse in scenarios where they could have an edge, just to serve an artificial demand for reduced bias.

[–] zabadoh@lemmy.ml 13 points 1 year ago (2 children)

I think you're misunderstanding what the article is saying.

You're correct that it isn't the job of a system to detect someone's skin color, and judge those people by it.

But the fact that AVs detect dark-skinned and short people less effectively is a reflection of the lack of diversity in the tech staff designing and testing these systems as a whole.

The staff are designing the AVs to safely navigate a world of people like them, but when that staff is overwhelmingly male, light-skinned, young, single, urban, and based in the United States, a lot of considerations don't even cross their minds.

Will the AVs recognize female pedestrians?

Do the sensors cover a wide enough light spectrum to detect dark-skinned people?

Will the AVs recognize someone with a walker or in a wheelchair, or some other mobility device?

Toddlers are small and unpredictable.

Bicyclists can fall over at any moment.

Are these city-tested AVs being exposed to all the animals they might encounter in rural areas, like sheep, llamas, otters, alligators, and other creatures that might be in the road?

How well will AVs tested in urban areas fare on mountain roads that suddenly change from multi-lane asphalt to narrow, twisty dirt tracks?

Will they recognize tractors and other farm or industrial vehicles on the road?

Will they recognize something you only encounter in a foreign country, like an elephant, an orangutan, or a rickshaw? Or what will they do if they come across that tomato festival in Spain?

Engineering isn't magical: It's the result of centuries of experimentation and recorded knowledge of what works and doesn't work.

Releasing AVs on the entire world without testing them on every little thing they might encounter is just asking for trouble.

What's required for safe driving without human intelligence becomes more mind-boggling the more you think about it.

[–] rDrDr@lemmy.world 19 points 1 year ago (4 children)

But the fact that AVs detect dark skinned people and short people at a lower effectiveness is a reflection of the lack of diversity in the tech staff designing and testing these systems as a whole.

No, it isn't. It's a product of the fact that darker skin reflects less light and children are smaller. Human drivers have a harder time seeing these individuals too. They literally send less data to the camera sensor. This is why people wear reflective vests for safety at night, and why ninjas dress in black.
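
Rough numbers make the point. Weber contrast measures how much an object stands out against its background; the reflectance values below are illustrative guesses, not measured data:

```python
# Rough sketch: why lower reflectance means a weaker camera signal at night.
# Weber contrast: C = (L_object - L_background) / L_background.
# Reflectance values are illustrative assumptions, not measurements.

def weber_contrast(obj_luminance, bg_luminance):
    return (obj_luminance - bg_luminance) / bg_luminance

ambient = 10.0                 # arbitrary illumination units
background = 0.05 * ambient    # dark asphalt at night (assumed ~5% reflectance)

for label, reflectance in [("high-visibility vest", 0.80),
                           ("lighter skin", 0.35),
                           ("darker skin", 0.15),
                           ("black clothing", 0.05)]:
    c = weber_contrast(reflectance * ambient, background)
    print(f"{label}: contrast vs. road = {c:.1f}")
```

Black clothing against dark asphalt comes out at roughly zero contrast, which is exactly the ninja case.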

[–] lud@lemm.ee 4 points 1 year ago

That doesn't make it better.

It doesn't matter why they are bad at detecting X; it should be improved regardless.

Also, maybe lidar would be a better idea.

[–] ashok36@lemmy.world 4 points 1 year ago

This is true, but Tesla and others could compensate for this by spending more time and money training on those form factors, something human drivers can't really do. It's an opportunity for them to prove the superhuman capabilities of their systems.
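
One standard way to "spend more training time" on underrepresented form factors is to oversample them. A minimal sketch assuming PyTorch and a dataset with hypothetical per-sample group labels:

```python
# Sketch: oversampling underrepresented pedestrian groups during training.
# Assumes a dataset exposing a group label per sample; labels are hypothetical.
from collections import Counter
from torch.utils.data import DataLoader, WeightedRandomSampler

# Hypothetical per-sample group labels from an annotated pedestrian dataset.
group_labels = ["adult"] * 900 + ["child"] * 80 + ["wheelchair"] * 20

counts = Counter(group_labels)
# Weight each sample inversely to its group's frequency so rare groups
# (children, wheelchair users) are drawn about as often as common ones.
weights = [1.0 / counts[g] for g in group_labels]

sampler = WeightedRandomSampler(weights, num_samples=len(group_labels),
                                replacement=True)
# Then plug the sampler into the training loader, e.g.:
# loader = DataLoader(dataset, batch_size=32, sampler=sampler)
```

Same data, same model; the training loop just sees the rare cases far more often.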

[–] hellothere@sh.itjust.works 3 points 1 year ago

They literally send less data to the camera sensor.

So maybe let's not limit ourselves to hardware that cannot easily differentiate, when other hardware, or combinations of hardware, can do a better job?

Humans can't really get better eyes, but we can use more appropriate hardware in machines to accomplish the task.
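
A crude sketch of the idea: late fusion, where the car treats a pedestrian as present if any modality reports one, since lidar ranging doesn't depend on skin reflectance the way a passive camera does. The data structures here are invented for illustration:

```python
# Sketch: late sensor fusion. Flag a pedestrian if ANY modality detects one,
# since lidar returns depend on geometry and range far more than on skin tone.
# Data structures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "lidar", "radar"
    confidence: float  # 0.0 - 1.0

def pedestrian_present(detections, thresholds=None):
    """Conservative OR-fusion: trust any single sensor over its threshold."""
    thresholds = thresholds or {"camera": 0.6, "lidar": 0.5, "radar": 0.7}
    return any(d.confidence >= thresholds.get(d.sensor, 1.0)
               for d in detections)

# Night scene: camera barely sees the pedestrian, lidar sees them clearly.
frame = [Detection("camera", 0.2), Detection("lidar", 0.9)]
print(pedestrian_present(frame))  # True: lidar alone is enough to trigger braking
```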

[–] SocialMediaRefugee@lemmy.world -1 points 1 year ago

That is true. I almost hit a dark guy, wearing black, who was crossing a street at night with no streetlight as I turned into it. Almost gave me a heart attack. It is bad enough almost getting hit, as a white guy, when I cross a street with a streetlight.

[–] eager_eagle@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

These are important questions, but addressing them independently for each model and optimizing for low "racial bias" is the wrong approach.

In academia we have reference datasets that serve as standard benchmarks for data-driven prediction models like pedestrian detectors. The numbers obtained on these datasets are usually the reference points used when comparing different models. By building comprehensive datasets, we get models that work well across a multitude of scenarios.

Those are all good questions, but they need to be addressed when building such datasets. And whether model M performs X% better at detecting people of a given skin color is not relevant, as long as the error rate for every skin color stays within an acceptable range.
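
A sketch of that acceptance logic: don't demand equal error rates across groups, just require every group's miss rate to clear one acceptable ceiling. The threshold and numbers are hypothetical:

```python
# Sketch: acceptance test over a benchmark dataset. The goal is not equal
# error rates, but keeping every group's miss rate under one acceptable ceiling.
# Group names and rates below are hypothetical.

ACCEPTABLE_MISS_RATE = 0.05  # assumed regulatory/engineering target

per_group_miss_rate = {
    "light_skin": 0.021,
    "dark_skin": 0.034,   # different from light_skin, and that's OK...
    "children": 0.048,    # ...as long as every group clears the bar
}

failures = {g: r for g, r in per_group_miss_rate.items()
            if r > ACCEPTABLE_MISS_RATE}

if failures:
    print("Model rejected; groups over threshold:", failures)
else:
    print("Model accepted: all groups within the acceptable miss rate.")
```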

[–] SocialMediaRefugee@lemmy.world -2 points 1 year ago

The media has become ridiculously racist; they go out of their way to make every incident appear racial now.