Yesterday I was playing with the Qwen 2.5 70B model to generate some Python code, but the result didn't work.
So today I started the llama.cpp server to see if the Continue plugin for Visual Studio Code could help fix the code.
To keep it responsive, I ran the server with a 7B model:
./llama-server -m ../../models/qwen2.5-coder-7b-instruct-q8_0.gguf -c 4096 --host 0.0.0.0 -ngl 99
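Started this way, llama-server listens on port 8080 (the default) and exposes an OpenAI-compatible API, so tools other than Continue can talk to it too. Here is a minimal sketch of querying it from Python with only the standard library; the prompt text and the `build_request` helper name are mine, not from the post:

```python
import json
import urllib.request

def build_request(prompt, url="http://localhost:8080/v1/chat/completions"):
    """Build a chat-completion request for llama.cpp's
    OpenAI-compatible endpoint (server assumed on its default port 8080)."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature for more deterministic code fixes
    }
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Sending it requires the server started as above:
# with urllib.request.urlopen(build_request("Fix this Python function: ...")) as r:
#     reply = json.loads(r.read())
#     print(reply["choices"][0]["message"]["content"])
```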
After pointing Continue at the local llama.cpp server and asking it to fix the code, it replied with some convincing suggestions.
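For reference, pointing Continue at a local llama.cpp server is done in its config file. A sketch along these lines worked for me in Continue's JSON-based config (older versions use config.json; newer ones use a YAML config, and the exact fields may differ by version, so treat the title and model strings here as placeholders):

```json
{
  "models": [
    {
      "title": "llama.cpp qwen2.5-coder",
      "provider": "llama.cpp",
      "model": "qwen2.5-coder-7b-instruct",
      "apiBase": "http://localhost:8080"
    }
  ]
}
```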
However, the code still didn't work, so I followed up again...
And the third time's the charm? Nope.