this post was submitted on 04 Jun 2024
662 points (98.5% liked)

this rootless Python script rips Windows Recall's screenshots and SQLite database of OCRed text and allows you to search them.
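To make "search them" concrete, here's a minimal sketch of what querying such a database could look like with Python's built-in sqlite3 module. This is not the linked script, and the database filename, table, and column names are assumptions for illustration only; inspect the real database with an SQLite browser before relying on any of them.

```python
# Hedged sketch, not the linked tool: query an assumed Recall-style SQLite
# database of captured window text. All names below are illustrative guesses.
import sqlite3

DB_PATH = "ukg.db"  # assumed filename, somewhere under the user's Recall data folder

def search_recall_text(term: str):
    """Return rows whose captured window title/text contains `term`."""
    conn = sqlite3.connect(DB_PATH)
    try:
        return conn.execute(
            "SELECT TimeStamp, WindowTitle, ImageToken "  # assumed columns
            "FROM WindowCapture "                         # assumed table
            "WHERE WindowTitle LIKE ?",
            (f"%{term}%",),
        ).fetchall()
    finally:
        conn.close()

if __name__ == "__main__":
    for timestamp, title, image_token in search_recall_text("password"):
        print(timestamp, title, image_token)
```

The point being: once the data is sitting in an ordinary SQLite file, "searching" it is a one-line SQL query.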

[–] qjkxbmwvz@startrek.website 70 points 5 months ago (6 children)

Hilarious to me that it OCRs the text. The text is generated by the computer! It's almost like when Lt. Cmdr. Data wants to get information from the computer's database, so he tells it to display the results and just keeps increasing the speed.


There are way more efficient means of getting information from A to B than displaying it, imaging it, and running it through image processing!

I totally get that this is what makes sense, and it's independent of the method/library used for generating text, but still...the computer "knows" what it's displaying (except for images of text), and yet it has to screenshot and read it back.

[–] Wispy2891@lemmy.world 28 points 5 months ago (1 children)

The same thing happens on Android for some reason.

Like 5-8 years ago, the Google Assistant app was able to select and copy text from any app when invoked; I think it was called “Now on Tap”. Then, because they’re Google and contractually obligated to remove features after a while, they removed this from the Google app and integrated it into the Pixel app switcher (and who cares if 99% of Android users aren’t using a Pixel, they say). The new implementation sucks, as it does OCR instead of just accessing the raw text…

It only works well with US English and not with other languages. But maybe that’s to be expected, since Google’s development style seems pretty US-centric.

[–] nawa@lemmy.world 13 points 5 months ago (1 children)

Now on Tap also used OCR. Both Google Lens and Now on Tap get the same bullshit results in any language that doesn't use the Latin alphabet. Literally, Ж gets read as >|< by both, exactly the same.

[–] Wispy2891@lemmy.world 9 points 5 months ago

They changed it; in the beginning it used the actual text, not OCR.

For example, this app could be set as the assistant and get the raw text: https://play.google.com/store/apps/details?id=com.weberdo.apps.copy

But only the app set as the system assistant can do it.

I was very disappointed when they changed it around 2018, as the new version produced garbage in my language when the old one had worked so well…

[–] 4am@lemm.ee 25 points 5 months ago* (last edited 5 months ago) (1 children)

Hey, yeah… why aren’t they just tapping the font rendering DLL?

Are they tapping the font rendering DLL??

[–] HelloHotel@lemm.ee 2 points 5 months ago

My guess is that they looked at their screen reader API, saw that it didn’t expose 100% of the text on screen, and said “fuck it! We’re using OCR!”
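For what it's worth, the accessibility route does exist. Here's a small sketch of my own (not anything Recall or the linked script does) that walks visible windows through pywinauto's UI Automation backend and prints whatever text the elements expose; anything an app draws as raw pixels simply never shows up here, which is presumably the gap OCR papers over.

```python
# Sketch: read on-screen text via the UI Automation accessibility API instead
# of OCR. Apps that render their own text (games, canvases, custom widgets)
# expose nothing through this traversal.
from pywinauto import Desktop

desktop = Desktop(backend="uia")
for window in desktop.windows():
    if not window.is_visible():
        continue
    print(f"== {window.window_text()} ==")
    # Only elements that implement the accessibility patterns appear here.
    for element in window.descendants(control_type="Text"):
        text = element.window_text()
        if text:
            print(" ", text)
```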

[–] space@lemmy.dbzer0.com 24 points 5 months ago

Having worked on a product that actually did this, I can say it's not as easy as it seems. There are many ways of drawing text on the screen.

GDI, which is part of the Windows API, is the most common, but some applications (including browsers) do their own rendering.

Another difficulty: even if you could tap into every draw call, you'd still need a way to determine what is actually visible on screen and what is covered by something else.
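To illustrate just the occlusion half of that: even the crude question "which top-level windows sit above this one?" already takes a z-order walk plus rectangle math, and that's before child controls, transparency, and clipping regions come into it. A rough pywin32 sketch of my own, not anything Recall actually does:

```python
# Sketch of the occlusion problem: collect the rectangles of visible top-level
# windows that sit above a target window in z-order (pywin32).
import win32gui

def rects_covering(target_hwnd):
    """Rects of visible top-level windows above `target_hwnd` in z-order."""
    z_order = []
    win32gui.EnumWindows(lambda hwnd, _: z_order.append(hwnd) or True, None)

    above = []
    for hwnd in z_order:              # EnumWindows yields topmost windows first
        if hwnd == target_hwnd:
            break                     # everything after this is underneath
        if win32gui.IsWindowVisible(hwnd):
            above.append(win32gui.GetWindowRect(hwnd))
    return above

def overlap_area(a, b):
    """Intersection area of two (left, top, right, bottom) rects."""
    width = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    height = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    return width * height
```

Even this naive version double-counts when the covering windows overlap each other, and it knows nothing about layered or transparent windows, which is roughly the point.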

[–] catloaf@lemm.ee 20 points 5 months ago

That's the thing: it doesn't really know what it's displaying. I can send a bunch of text boxes, but if they're hidden, drawn off-screen, or underneath another element, then they're not actually displayed.

[–] eager_eagle@lemmy.world 9 points 5 months ago

Text from OCR is one kind of match. Recall also runs visual comparisons against the stored image tokens.
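We don't know what Recall's visual matching looks like internally, but "compare a query against stored image tokens" usually boils down to an embedding similarity search, something like this toy sketch (random vectors stand in for real embeddings):

```python
# Toy sketch of visual matching: rank stored screenshot embeddings by cosine
# similarity to a query embedding. Purely illustrative, not Recall's pipeline.
import numpy as np

def top_matches(query_vec, stored_vecs, k=5):
    q = query_vec / np.linalg.norm(query_vec)
    s = stored_vecs / np.linalg.norm(stored_vecs, axis=1, keepdims=True)
    scores = s @ q                       # cosine similarity per screenshot
    order = np.argsort(scores)[::-1][:k]
    return order, scores[order]

rng = np.random.default_rng(0)
stored = rng.normal(size=(100, 512))     # 100 screenshots, 512-dim embeddings
query = rng.normal(size=512)
indices, scores = top_matches(query, stored)
print(indices, scores)
```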

[–] TheGrandNagus@lemmy.world 3 points 5 months ago

To be fair, Data was designed to be like a human and was made in the image of his creator. A number of his design decisions come down to his creator wanting to build something human-like, including the one you describe.

Data was never intended to work like a PC; it's entirely in character that he can't just wirelessly interface with stuff.