this post was submitted on 29 Sep 2024
37 points (84.9% liked)
you are viewing a single comment's thread
view the rest of the comments
Completion via a keyboard shortcut is perfect.

As for other features, I don't want any. I just want code completion via a keyboard shortcut.

I think a hard aspect is figuring out what context to feed the LLM. IIRC, GitHub Copilot only feeds what is in the current file above the cursor, but I think feeding the whole file plus the other open code tabs would be super useful.
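The context-assembly idea above could be sketched roughly like this (a toy helper, not Copilot's actual implementation; all names are hypothetical). Text above the cursor goes last, right before the completion point, and other open tabs are prepended under a character budget:

```python
# Hypothetical sketch: build a completion prompt from editor state.
# Other open tabs come first (truncated to a budget), then the text
# above the cursor in the current file, closest to the completion point.

def build_prompt(current_file: str, cursor_offset: int,
                 open_tabs: dict[str, str], budget: int = 4000) -> str:
    prefix = current_file[:cursor_offset]      # text above/left of the cursor
    remaining = budget - len(prefix)
    context_parts = []
    for path, text in open_tabs.items():       # other open editor tabs
        snippet = f"# file: {path}\n{text}\n"
        if len(snippet) > remaining:
            snippet = snippet[:remaining]      # crude truncation to fit budget
        context_parts.append(snippet)
        remaining -= len(snippet)
        if remaining <= 0:
            break
    return "".join(context_parts) + prefix

prompt = build_prompt("func jump():\n    velocity.y = ", 30,
                      {"player.gd": "extends CharacterBody2D\n"})
```

A real plugin would truncate on line or function boundaries rather than raw characters, but the budget/ordering idea is the same.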
You are right that it can be useful to feed in the contents of other related files.

However!

LLMs take a really long time to start writing anything when given a large context. The fact that GitHub's Copilot can generate code so quickly, even though it has to keep the entire code file in context, is a miracle to me.
Including all related or opened GDScript files would be way too much for most models, and it would likely take about 20 seconds before the model actually starts generating code (a delay also called *first token lag*). So I will likely only put the current file into the context window, as even that might already take some time. Remember, we are running local LLMs here, so not everyone has a blazingly fast GPU or CPU (I use a GTX 1060 6GB, for instance).

I just tried it: with a pretty small model it took a good 10 seconds to complete a roughly 111-line code file without any other context, and then about 6 seconds to write about 5 lines of comment documentation (on my CPU). With a very short script it takes about 1 second.

You can try this yourself using something like HuggingChat: pick a model with a big context window, like Command R+, fill its context window with a really long string (copy-paste it a bunch of times), and see how much longer it takes to respond. For me, it's the difference between one second and 13 seconds!
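The first-token-lag measurement described above can be expressed as a small timing helper. This is a sketch: `fake_stream` below is a stand-in for any streaming LLM API that yields tokens one at a time, with a `sleep` simulating the prompt-prefill phase:

```python
# Measure "first token lag": time from sending a request until the first
# streamed token arrives. Works with any iterator that yields tokens.
import time
from typing import Iterator

def first_token_lag(token_stream: Iterator[str]) -> tuple[float, str]:
    start = time.perf_counter()
    first = next(token_stream)      # blocks until the model emits its first token
    return time.perf_counter() - start, first

def fake_stream(prefill_seconds: float) -> Iterator[str]:
    # Stand-in for prompt prefill: a real local model spends this time
    # processing the entire context before emitting anything.
    time.sleep(prefill_seconds)
    yield from ["func", " _ready", "():"]

lag, token = first_token_lag(fake_stream(0.2))
```

With a real local backend, `lag` is roughly proportional to prompt length, which is exactly why stuffing every open file into the context hurts so much.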
I am thinking about *embedding* either the current working file, or maybe some other opened files, to pull the most important functions out of the script and keep the context length short. This way we can shorten this *first token lag* a bit.

This is a completely different story with hosted LLMs, as they tend to have blazingly quick
*first token lags*, which makes the wait trivial.

Makes sense. Glad anything will exist at all though!
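The embedding-based pruning idea mentioned earlier could look something like this sketch. Here `embed` is a toy bag-of-words placeholder (a real plugin would call an actual embedding model), and all names are hypothetical:

```python
# Sketch: embed each function in the open scripts, then keep only the ones
# most similar to the text around the cursor, shrinking the context window.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy placeholder embedding: word counts. Swap in a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def prune_context(functions: dict[str, str], cursor_text: str,
                  keep: int = 2) -> list[str]:
    # Rank functions by similarity to the cursor context, keep the top few.
    query = embed(cursor_text)
    ranked = sorted(functions,
                    key=lambda name: cosine(embed(functions[name]), query),
                    reverse=True)
    return ranked[:keep]

funcs = {
    "jump": "func jump(): velocity.y = JUMP_SPEED",
    "save_game": "func save_game(): var file = FileAccess.open(path)",
    "move": "func move(delta): velocity.x = speed * delta",
}
best = prune_context(funcs, "velocity.y = ", keep=2)
```

Only the selected functions would then be pasted into the prompt, so the model's prefill cost scales with what is relevant rather than with everything that happens to be open.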