this post was submitted on 23 Aug 2023
Programming

[–] andscape@feddit.it 26 points 1 year ago* (last edited 1 year ago) (3 children)

Legacy COBOL code is largely used in critical systems like those of banks and airlines. What could go wrong with having that code rewritten by stochastic parrots who get programming answers wrong half of the time?

[–] Hector_McG@programming.dev 14 points 1 year ago* (last edited 1 year ago)

LLMs produce code that is functionally error prone while looking reasonable (in the same way that it produces answers that are grammatically correct, correctly spelled, but factually incorrect).

As we all know, fixing bugs in someone else's code is generally harder than writing the code correctly in the first place, and that's going to apply to an LLM's code output just as much as a human's, if not more.
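As a toy illustration of that "looks reasonable, is wrong" failure mode (a hypothetical translation sketch, not code from the article): COBOL's `PERFORM VARYING ... UNTIL I > 10` runs the body for I = 1 through 10 inclusive, so a translator that copies the bound verbatim into Python's exclusive `range` produces clean-looking code that is off by one.

```python
# COBOL original:
#   PERFORM VARYING I FROM 1 BY 1 UNTIL I > 10
#     ADD I TO TOTAL
#   END-PERFORM
# The body executes for I = 1..10 inclusive, so TOTAL ends up as 55.

def faithful_translation():
    # Correct: range() excludes its end, so the bound must become 11.
    total = 0
    for i in range(1, 11):
        total += i
    return total  # 55

def plausible_mistranslation():
    # Plausible-looking bug: the COBOL bound 10 copied straight into
    # range() silently drops the final iteration.
    total = 0
    for i in range(1, 10):
        total += i
    return total  # 45; runs cleanly and looks correct at a glance
```

The broken version raises no errors and passes a casual review, which is exactly why this class of bug is expensive to catch after the fact.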

[–] Lmaydev@programming.dev 5 points 1 year ago* (last edited 1 year ago) (2 children)

That's assuming they're using one of the generic models like ChatGPT and not something custom they've created specifically to do this.

Edit: they are in fact using their own model, as per the article

[–] andscape@feddit.it 4 points 1 year ago (1 children)

I'm aware they're not using a generic model, but that's not much better. Current custom-made models still fuck up significantly more than humans, and in less predictable ways.

Even if their custom model is wrong only 1% of the time, that's still a major problem in critical systems like those.

[–] Lmaydev@programming.dev 2 points 1 year ago

Which models are those?

[–] crazyminner@sh.itjust.works 1 points 1 year ago (2 children)

The AI would likely be trained or fine-tuned specifically for COBOL. In these very narrow use cases AI can find some things that humans miss.

Google did this recently on a sorting algorithm and was able to speed it up by 70%: More info here
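For context, the Google result being referenced (DeepMind's AlphaDev) searched for faster instruction sequences for tiny fixed-size sorts. The kind of function it optimizes looks roughly like this sketch of a three-element sorting network (an illustration of the problem shape, not AlphaDev's actual discovered sequence):

```python
def sort3(a, b, c):
    # Classic 3-element sorting network: three compare-exchange steps.
    # AlphaDev-style search tries to shave instructions off sequences
    # like this at the assembly level.
    if a > b:
        a, b = b, a
    if b > c:
        b, c = c, b
    if a > b:
        a, b = b, a
    return a, b, c
```

Because the input space is tiny and the spec is exact, candidates can be verified exhaustively, which is what makes this narrow use case so friendly to machine search.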

[–] Die4Ever@programming.dev 8 points 1 year ago (1 children)

It's cool for small and easily testable functions like sorting, but to refactor large amounts of code? No thanks. Would be great if it could leave comments on my pull request though.
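Sorting really is the easy case to verify: a quick differential test against a trusted oracle catches most bugs automatically, a luxury that large cross-cutting refactors don't enjoy. A minimal sketch (generic harness, not any particular tool):

```python
import random

def differential_test(candidate, oracle, trials=1000):
    # Run a candidate sort against a trusted reference on random inputs;
    # return a failing input if one is found, else None.
    for _ in range(trials):
        data = [random.randint(-100, 100)
                for _ in range(random.randint(0, 20))]
        if candidate(list(data)) != oracle(list(data)):
            return data
    return None
```

For example, `differential_test(my_sort, sorted)` returns `None` when the candidate agrees with Python's built-in `sorted` on every random trial.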

[–] LufyCZ@lemmy.world 1 points 1 year ago (1 children)
[–] Die4Ever@programming.dev 1 points 1 year ago (1 children)

I thought it would leave comments on individual lines of code with feedback on code quality, but it seems like it just summarizes what the pull request changes.

The summary stuff would be better if it were per file instead of overall.

[–] LufyCZ@lemmy.world 2 points 1 year ago

Hm don't think I can help you with that unfortunately.

It's nice for quickly seeing what a PR is about, not much more

I suppose I shouldn't be surprised at the negative response here, but personally this seems like the perfect application of LLMs. Yeah, it'll need to be verified by humans, but so would human-translated code. Using an appropriately trained LLM to do the first pass translation has the potential to eliminate a lot of toil.