this post was submitted on 09 Dec 2023
to the Technology community on lemmy.world

[–] Humanius@lemmy.world 11 points 11 months ago (1 children)
[–] GigglyBobble@kbin.social 15 points 11 months ago (4 children)

Designing the model to prevent it from generating illegal content

Yeah, good luck designing that.

[–] barsoap@lemm.ee 6 points 11 months ago* (last edited 11 months ago) (1 children)

That's the Parliament's wishlist, not the actual text of the law. (At least I think that's the version that got passed.)

Stuff like that is why it's a good idea that parliamentarians aren't the ones drafting the text, but an army of technocrats. It's all too easy to vote a training requirement into a section about transparency when it's 3 o'clock in the morning and you and everyone else on the committee want to go home.

Here's the transparency article:

Article 52
Transparency obligations for certain AI systems

  1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
  2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
  3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated. However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
  4. Paragraphs 1, 2 and 3 shall not affect the requirements and obligations set out in Title III of this Regulation.

Most of the AI uses out there face only these very limited requirements, mostly around transparency. There's also some stuff about training in Article 5, which lists outlawed practices; e.g. you may not train models to use subliminal techniques.
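
To make the disclosure duty concrete, here's a minimal sketch of what compliance could look like in a chat interface. This is my own illustration, not anything from the Act: the function name and the disclosure wording are invented.

```python
# Minimal illustrative sketch, not from the Act: one way a chat
# provider might satisfy an Article 52(1)-style duty to tell people
# they are interacting with an AI. Name and wording are made up.

def wrap_reply(model_reply: str, ai_obvious_from_context: bool) -> str:
    """Prepend an AI disclosure unless it is obvious from context."""
    if ai_obvious_from_context:
        return model_reply
    return "[Note: you are interacting with an AI system.]\n" + model_reply

print(wrap_reply("Hello! How can I help you today?", ai_obvious_from_context=False))
```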

Where things get strict is around uses like screening prospective employees with AI, where you have to make sure the system isn't picking up any unwarranted biases, e.g. judging by sex or nationality. Even stricter are the high-risk systems listed in Annex III, which are largely uses in administration, critical infrastructure, and the like.
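
For illustration, a toy version of such a bias check might compare selection rates across a protected attribute. This is entirely hypothetical: the data is made up, and the 80% threshold is the "four-fifths" rule of thumb from US employment law, which the AI Act itself doesn't prescribe.

```python
# Hypothetical bias audit for an AI screening tool: compare selection
# rates across a protected attribute (here, sex). The data and the
# 0.8 threshold are illustrative only; the Act prescribes no formula.

from collections import defaultdict

candidates = [
    {"sex": "f", "selected": True},  {"sex": "f", "selected": False},
    {"sex": "f", "selected": False}, {"sex": "m", "selected": True},
    {"sex": "m", "selected": True},  {"sex": "m", "selected": False},
]

totals, selected = defaultdict(int), defaultdict(int)
for c in candidates:
    totals[c["sex"]] += 1
    selected[c["sex"]] += c["selected"]

rates = {group: selected[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())

print(rates)
print("possible disparate impact" if ratio < 0.8 else "rates look balanced")
```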


All in all, I'd say that for a first of its kind, the law is pretty darn good, in particular because it classifies requirements for systems not by the technology employed but by their area of application. And the "likeness of a natural person" rule has an exception for the arts and freedom of speech, so this kind of stuff doesn't even need disclosure.

[–] SuckMyFingerKFC@fanaticus.social -3 points 11 months ago (1 children)

No way anyone is reading this wall of text lol

[–] barsoap@lemm.ee 0 points 11 months ago

Speak for yourself.

[–] PinkPanther@sh.itjust.works 3 points 11 months ago* (last edited 11 months ago)

The lawmakers don't even know how the internet works, and they're supposed to write the laws around it? Sounds like your typical politicians.

[–] theterrasque@infosec.pub 2 points 11 months ago

In other news, they also regulated that knives must be designed to prevent stabbing people, and guns must be designed to only shoot bad guys.

[–] Humanius@lemmy.world 0 points 11 months ago* (last edited 11 months ago)

I can mostly find myself agreeing with (or at least not having big issues with) all of the points, except for that one.
Let's just hope they mean requiring a best effort, rather than outright preventing it in the first place.

[–] autotldr@lemmings.world 1 points 11 months ago

This is the best summary I could come up with:


European Union lawmakers have agreed on the terms for landmark legislation to regulate artificial intelligence, pushing ahead with enacting the world’s most restrictive regime on the development of the technology.

“The AI Act is much more than a rulebook—it’s a launchpad for EU start-ups and researchers to lead the global AI race.”

The deal followed years of discussions among member states and politicians over how AI should be curbed so that humanity's interests sit at the heart of the legislation.

European companies have expressed their concern that overly restrictive rules on the technology, which is rapidly evolving and has gained traction since the popularisation of OpenAI's ChatGPT, will hamper innovation.

Last June, dozens of the largest European companies, such as France's Airbus and Germany's Siemens, said the rules looked too tough to nurture innovation and help local industries.

That event attracted leading tech figures such as OpenAI’s Sam Altman, who has previously been critical of the EU’s plans to regulate the technology.


The original article contains 314 words, the summary contains 163 words. Saved 48%. I'm a bot and I'm open source!