103
submitted 1 year ago* (last edited 1 year ago) by twelvewings@lemmy.world to c/chatgpt@lemmy.world

I asked GPT-4 to refactor a simple, working Python script for my smart lights... and it completely butchered the code and apologized mid-generation.

No amount of pleading or correction would get it to function as it did just a week or two ago.

It is so over.

top 35 comments
[-] ultranaut@lemmy.world 13 points 1 year ago

I noticed this today working on some bash scripts. Compared to a few weeks ago it's become noticeably dumber, but also faster.

[-] whatstyxscorpio@lemmy.world 11 points 1 year ago

They must be understandably desperate to burn less cash with each API call.

[-] turbodrooler@lemmy.world 13 points 1 year ago

Ctrl-K to apologize

[-] Aidan@lemm.ee 8 points 1 year ago* (last edited 1 year ago)

GPT-4 is not good at writing code. I think it’s because it has a lower token limit. Ask GPT-4 to write out detailed specs for the code you want, then copy and paste that into a GPT-3.5 session and ask it to write the code.

And if it gets cut off, paste in the last line it output successfully and ask it to continue with the line following that one. Then just copy and paste the blocks together
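
Roughly what that workflow looks like if you drive it through the API instead of the chat UI (the model names and the legacy pre-1.0 `openai.ChatCompletion` call are my assumptions for illustration, not something from this thread):

```python
import openai  # legacy pre-1.0 client, assumed here for illustration

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def ask(model: str, prompt: str) -> str:
    """Send a single user message to the given chat model and return the reply text."""
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

# Step 1: have GPT-4 write a detailed spec for the code you want.
spec = ask("gpt-4", "Write a detailed spec for a Python script that controls my smart lights.")

# Step 2: hand the spec to GPT-3.5 and ask it for the implementation.
code = ask("gpt-3.5-turbo", "Implement this spec in Python:\n\n" + spec)
print(code)
```

If the second reply gets cut off, you do exactly what's described above: paste back the last complete line, ask it to continue from the next one, and splice the blocks together yourself.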

[-] Totendax@feddit.de 7 points 1 year ago

I had their Plus plan for a while, but with how quickly it keeps getting worse I got rid of it.

Nowadays the latest model is without any doubt worse than what we had last December, if you just look at answer quality.

[-] vegivamp@feddit.nl 7 points 1 year ago

Microsoft bought it. They're not going to let their paying userbase of millions of coders evaporate...

[-] KonaKoder@lemmy.world 5 points 1 year ago

Microsoft wants to own tools crucial to the mainstream of software development. They also want to own the cloud infrastructure on which those tools depend. Today, they might lose dimes on every LLM call. In five years, they’ll make a penny on orders of magnitude more calls. Microsoft has many flaws, including cloud capacity, but they aren’t short-sighted about investment. (I used to work in DevDiv and Azure Machine Learning.)

[-] Venator@kbin.social 1 points 1 year ago

It is Microsoft: they might let that happen. 😂

[-] DarkenLM@kbin.social 1 points 1 year ago

It's Microsoft. Expecting them to make good and logical decisions is completely delusional.

[-] Madrigal@lemmy.world 0 points 1 year ago

Good and logical decisions are plausible. However, expecting Microsoft to make consistent decisions and be able to work as a single cohesive team, now that’s delusional.

[-] fossilesque@mander.xyz 4 points 1 year ago
[-] twelvewings@lemmy.world 1 points 1 year ago

Oh great, so now I have to pay on top of my Plus membership?

A couple of prompts already cost me $0.03. I could easily run up hundreds of dollars if I used it as much as I have in the past.

Do you have any tips or preferred settings when using the Playground to code?

[-] fossilesque@mander.xyz 9 points 1 year ago* (last edited 1 year ago)

It does not add up that much unless you're tokenizing a metric ton of content. It's a few extra bucks, and you can set a rate limit. I have rarely hit the 5 dollar mark with consistent writing, so limit it to a fiver and see how it goes.

As for the settings, it's really model-specific. There are a ton of guides out there for effective prompting, though. I believe Wolfram Alpha has a few. The formatting in this one is nice: https://medium.com/machine-minds/chatgpt-python-machine-learning-prompts-abefc544412c

Playground pricing is what they use for commercial API use, so it has to be affordable.
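
For a rough sense of scale, here's the back-of-the-envelope math (the per-1K-token rates below are my assumption of the mid-2023 GPT-4 8k API prices, so check the pricing page rather than trusting them):

```python
# Assumed mid-2023 GPT-4 8k rates: $0.03 per 1K prompt tokens, $0.06 per 1K completion tokens.
PROMPT_RATE = 0.03 / 1000      # dollars per prompt token
COMPLETION_RATE = 0.06 / 1000  # dollars per completion token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the dollar cost of one API call from its token counts."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# A 500-token prompt with a 500-token reply:
print(f"${estimate_cost(500, 500):.3f} per call")  # ~$0.045
# A hundred calls like that is only about $4.50, which is why a $5 limit goes a long way.
```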

[-] mango@sopuli.xyz 8 points 1 year ago

It actually works out cheaper in most cases to use the Playground than to pay for ChatGPT Plus.

[-] fossilesque@mander.xyz 3 points 1 year ago

^ This guy playgrounds.

[-] aslaii@lemmy.world 4 points 1 year ago

Rewriting the whole code sometimes happens to me too. It just means GPT has already hit the token limit for one answer. You can copy the unfinished code and ask it to continue.

Try to avoid making GPT generate long code.

[-] Xandar437@feddit.nl 4 points 1 year ago

Noticed the same yesterday; it seems like something is wrong. I gave it a simple row of numbers for each of 10 days, for example day 5: 2, 4, 56, 8, 12, then asked it to give me, say, day 7. It keeps mixing up the numbers across all the days. I correct the AI, it apologizes, gives the correct numbers for a prompt or two, then mixes them all up again...

[-] GutterPunch@lemmy.world 2 points 1 year ago

It's moderately good at in-line commenting functions and creating full function doc comments for the specific language / documentation format you need, but its code generation abilities are still not game-changing. Getting it to generate anything longer than a few helper functions is a test of patience.

[-] twelvewings@lemmy.world 7 points 1 year ago* (last edited 1 year ago)

This wasn't always the case. I had zero Python experience a month ago and still managed to make a 300-line Python script that validates credit card numbers and has a beautiful UI. That would be impossible today.
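
For context on what such a script actually has to do: the core of card validation is usually just a Luhn checksum. The thread never shows the real code, so this is only a minimal sketch of what that piece might look like:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum used for card numbers."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    if not digits:
        return False
    total = 0
    # Walk right-to-left, doubling every second digit and subtracting 9
    # whenever the doubled value exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4539 1488 0343 6467"))  # True -- a commonly cited test number
print(luhn_valid("1234 5678 9012 3456"))  # False
```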

[-] BoomBoomLemon@lemmy.world 28 points 1 year ago* (last edited 1 year ago)

You said “beautiful” but I’m not sure you are using that word right.

[-] twelvewings@lemmy.world 3 points 1 year ago

Considering how long I've been using Python, and how it looked when I started, it is to me. And here is the ancient one I was previously using:

[-] LoafyLemon@kbin.social 2 points 1 year ago

Somehow this looks better to me, possibly because of less redundancy (copy buttons aren't needed when you have ctrl+c).

[-] mycroft@lemmy.world 1 points 1 year ago

They had to make it too dumb to draw Disney characters... you think I'm joking, but try getting it to render a Disney character in SVG or JavaScript...

[-] twelvewings@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

It's worse than I thought. Both 3.5 and 4 butcher Python code by skipping words and inserting "```python" whenever you click the "Continue generating" button.

Literally unusable now.

Not to mention they even made the chat window narrower, so now there's a horizontal scrollbar and code runs off screen.

[-] Spzi@lemm.ee 1 points 1 year ago

Any clue what the 'mistake' was for which it apologized?

[-] twelvewings@lemmy.world 1 points 1 year ago

I mean, I could copy the code that works, but it's not really the point.

Not only did it apologize mid-codebox without even commenting the apology out, it started the code from the top again instead of actually fixing it.

Just so asinine and bad it's not even worth analyzing further.
