this post was submitted on 03 Nov 2024
Asklemmy
I have two projects for it right now. The first is shoving my labyrinth of HOA documents into it so I can ask quick questions about them, or at least find the right answer more quickly.
The second is for work: I shoved in a couple months of Slack, some Google Docs, and some PDFs, all about our production product. Next I'm going to start shoving some of our GitHub in there. It would be nice to have something I could ask where the shorting algorithm is and how it works, and have it point me to the source code and any documentation related to it.
The HOA docs I could feed into GPT, but I'm still a little apprehensive about handing all of our production code over to a public AI.
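For anyone curious what that kind of local document Q&A looks like under the hood, here's a minimal retrieval sketch. It uses a crude bag-of-words overlap instead of a real embedding model, and all the document text and file names below are made up:

```python
# Minimal sketch of local document retrieval (no external services).
# A real setup would embed chunks with a local model; this uses a crude
# bag-of-words overlap score instead. Document contents are invented.
import re
from collections import Counter

def chunk(text, size=40):
    """Split a document into overlapping word chunks."""
    words = text.split()
    step = size // 2
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - step, 1), step)]

def tokenize(s):
    """Lowercase bag of words as a multiset."""
    return Counter(re.findall(r"[a-z0-9]+", s.lower()))

def best_chunk(question, docs):
    """Return (doc_name, chunk) with the highest word overlap with the question."""
    q = tokenize(question)
    scored = []
    for name, text in docs.items():
        for c in chunk(text):
            overlap = sum((q & tokenize(c)).values())
            scored.append((overlap, name, c))
    _, name, c = max(scored)
    return name, c

docs = {
    "ccrs.txt": "Exterior paint changes require approval from the "
                "architectural review committee before work begins.",
    "bylaws.txt": "The annual meeting of the association is held in March "
                  "and requires a quorum of ten percent.",
}
name, passage = best_chunk("who approves exterior paint changes", docs)
print(name)  # -> ccrs.txt
```

A local RAG stack does the same thing with vector similarity instead of word overlap, then hands the winning passages to the model as context.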
I've got it running on a 2070 Super, and I've got another instance running on a fairly new Arc. It's not fast, but it's also not miserable. I'm running the medium-sized models since I only have so much VRAM to work with. It's kind of like trying to read the output off a dot matrix printer.
The natural language aspect is better than trying to shove it into a conventional search engine, say when I don't know what a particular function is called, or the name of the company my HOA uses to review architectural requests. That's especially true for the work stuff, where there are so many different types of documents lying around. I still need to try some different models, though; my current model is a little dumb about context. I'm also having a little trouble with technical documentation that doesn't have a lot of English fluff. It's like I need it to digest a dictionary to go along with the documents.
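One partial workaround for terse technical docs (the "digest a dictionary" problem) is to expand the query with a small hand-built glossary before retrieval, so a short jargon term can still match the longer wording in the documents. A sketch, with invented glossary entries:

```python
# Sketch: expand terse queries with a hand-built glossary so retrieval over
# fluff-free technical docs still matches. Glossary contents are hypothetical.
GLOSSARY = {
    "arc": ["architectural", "review", "committee"],
    "quorum": ["minimum", "attendance", "vote"],
}

def expand(query):
    """Append glossary synonyms for any term appearing in the query."""
    terms = query.lower().split()
    extra = [syn for t in terms for syn in GLOSSARY.get(t, [])]
    return terms + extra

print(expand("arc approval"))
```

The expanded term list then goes into whatever retrieval step you already have, in place of the raw query.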
HOA docs didn't even cross my mind, that's resourceful.
Has the AI been particularly accurate, and does it cite where it found the information? With more technical stuff it's always confidently wrong.
ty for the response btw
It tells me which document in the collection it used, but it doesn't give me much context or the exact location in the document. If I'm missing something, it will usually quote some wording, and I can go to the document and search for that wording.
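Keeping source metadata attached to every chunk is one way to get an exact citation back instead of just a document name. A sketch, with hypothetical file names and text:

```python
# Sketch: keep source metadata with every chunk so a hit can cite the exact
# document and character offset, not just the collection. Text is invented.
def chunks_with_source(name, text, size=200, step=100):
    """Yield (doc_name, char_offset, chunk_text) triples over a document."""
    for i in range(0, max(len(text) - step, 1), step):
        yield (name, i, text[i:i + size])

doc = "Section 4.2: Fences may not exceed six feet. " * 5
hits = [c for c in chunks_with_source("ccrs.txt", doc) if "six feet" in c[2]]
name, offset, snippet = hits[0]
print(name, offset)
```

If the pipeline carries those triples all the way through to the answer, the model can cite "ccrs.txt, around character 0" instead of just naming the file.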
I'm just one person searching a handful of documents, so the sample size is pretty small for repeatability. So far, if it says it's in there, it's in there. It definitely misses things, though; I'm still early in the process. I need to try some different models and perhaps clean up the data a little for some of the stuff.
Using the documentation as source data, it doesn't seem to hallucinate or insist on things that are wrong; it's more likely to say it doesn't see any information about that when the data is clearly in the data set somewhere.
You're welcome! I'm having fun with it, even if it's taking forever to dial it in and make it truly useful.
That's pretty smart, using it for legal documents. If the accuracy is high, it might be nice to just paste in any ToS or whatever to get the highlights in plain language. (That should imo be a legal requirement for contracts in general, but especially for ones written by a team of bad-faith lawyers, aimed at people they don't expect to read them, and deliberately written to discourage reading the whole thing.)
We're a long way from trusting it to do something critical without intervention.
AI would be good at looking at an X-ray after a doctor and pointing out anomalies. But it would be bad to have it tell the doctor that everything looks fine.