Tuesday, April 1, 2025

Trying out code generation with Qwen 2.5 32B

I was playing with some of the smaller LLM models (Llama 2 7B Q4, Llama 3.1 8B Q8, etc.) and found that they hallucinate a lot (as expected).

So I decided to add more RAM to my PC to run a bigger model. My computer is a Ryzen 5 5600G with integrated graphics. While it definitely lacks GPU compute power, one benefit of an AMD iGPU is that it can access additional system memory as needed via GTT. On Linux, it can allocate up to 50% of system memory to the GPU by default (more if you specify the amdgpu.gttsize kernel parameter).
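As a sketch of how to raise that limit (amdgpu.gttsize takes a value in MiB; 65536 here is a hypothetical 64 GiB and should be adjusted to your installed RAM):

```shell
# Hypothetical example: raise the GTT limit to 64 GiB (value is in MiB).
# Append amdgpu.gttsize=65536 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
# then regenerate the grub config and reboot.
sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # openSUSE path
# After rebooting, check what the driver reports:
sudo dmesg | grep -i gtt
```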



So with 96GB of system memory, I was able to run Qwen Coder 2.5 32B to try out some code generation.

./llama-cli -m ../../models/qwen2.5-coder-32b-instruct-q8_0.gguf -c 16384 -e -ngl 99

Here is a prompt I found somewhere on the internet:

Write a Python program that shows a ball bouncing inside a spinning hexagon. The ball should be affected by gravity and friction, and it must bounce off the rotating walls realistically

The result was much better than with Qwen 2.5 14B, but the generated code was still wrong. It didn't get the physics right: the ball simply fell through the hexagon.



And the LLM output speed was around 1.35 tokens per second.

llama_perf_sampler_print:    sampling time =     182.90 ms /  1662 runs   (    0.11 ms per token,  9087.18 tokens per second)
llama_perf_context_print:        load time =    9746.62 ms
llama_perf_context_print: prompt eval time =    3818.63 ms /    43 tokens (   88.81 ms per token,    11.26 tokens per second)
llama_perf_context_print:        eval time = 1202554.57 ms /  1618 runs   (  743.24 ms per token,     1.35 tokens per second)
llama_perf_context_print:       total time = 1310292.02 ms /  1661 tokens


Thursday, March 27, 2025

Building llama.cpp with vulkan on openSUSE



When trying to run llama.cpp locally, I found that the instructions for building the Docker image with Vulkan acceleration don't work on my openSUSE Tumbleweed machine.

Instead, I needed to build and run the client directly on my host machine.

First, make sure both the "vulkan-devel" and "shaderc" packages are installed.
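On Tumbleweed that should just be a zypper install (package names as above):

```shell
# openSUSE: install the Vulkan headers/loader and the shaderc shader compiler
sudo zypper install vulkan-devel shaderc
```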

Next, build it with Vulkan enabled:

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake .. -DGGML_VULKAN=on -DCMAKE_BUILD_TYPE=Release
make

The client should detect and use the GPU via the Vulkan library.

[~/work/llama.cpp/build/bin] $ ./llama-cli -m ../../models/Meta-Llama-3.1-8B-Instruct-Q5_K_L.gguf -p "Building a website can be done in 10 simple steps:" -n 600 -e -ngl 99  

ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon Graphics (RADV RENOIR) (radv) | uma: 1 | fp16: 1 | warp size: 64 | shared memory: 65536 | matrix cores: none
build: 4967 (f17a3bb4) with cc (SUSE Linux) 14.2.1 20250220 [revision 9ffecde121af883b60bbe60d00425036bc873048] for x86_64-suse-linux
main: llama backend init
......





Friday, December 27, 2024

Trying out vector embeddings

newsSum is a Google App Engine application that bundles articles from different news sources. To try out ML embeddings, I decided to add a suggestion service.


High-level idea



  • There will be no changes to the backend of "newssum". The suggestion service "newssum-sug" will be implemented as a separate service
  • The frontend of "newssum" will check if the suggestion service "newssum-sug" is available. If so, it will allow the user to expand an article and query the suggestion service for additional information to display


Implementation of the suggestion service

  • Technically, "newssum-sug" could gather suggestions from any source (e.g. Google search results, a YouTube video, etc.). But for now, it will process articles from selected "newssum" sources. So there will be scheduled tasks to collect articles from "newssum" and prepare them for searching.
  • Vector embeddings will be used to find similar articles. A machine learning model turns a news headline into a vector of numbers. When a query comes in, an embedding is also generated from the query. By comparing the distances between vectors, we can find articles related to the query.
  • The embeddings generated during batch processing are stored in a vector database. The database will also provide the mechanism for searching vectors by distance.
  • Since "newssum" is for current news only, embeddings will only be kept for 2 days.
  • The suggestion service can also be used for free-text search. But for now, the frontend only uses it for article suggestions.
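To illustrate the idea (a toy sketch only; the actual model and vector database behind "newssum-sug" are not shown here, and the three-dimensional vectors below just stand in for real model output), the nearest-embedding search boils down to cosine similarity:

```python
import math

def cosine_sim(a, b):
    # cosine similarity between two vectors: dot product over norms
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest(query, docs):
    # index of the stored embedding most similar to the query embedding
    return max(range(len(docs)), key=lambda i: cosine_sim(query, docs[i]))

docs = [
    [1.0, 0.0, 0.0],   # "embedding" of headline A
    [0.0, 1.0, 0.0],   # "embedding" of headline B
    [0.9, 0.1, 0.0],   # "embedding" of headline C (close to A)
]
print(nearest([0.85, 0.15, 0.0], docs))  # -> 2, headline C is the best match
```

A real vector database performs the same comparison, just with an index so it doesn't have to scan every stored vector.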

While "newssum" is open source, the "newssum-sug" service is still under development in closed source. But the basic functionality has been integrated and available on the demo site.

Tuesday, December 24, 2024

A machine learning model to identify music albums from photos

 




I was looking for a home automation project to select and play a specific music album from streaming services. There are similar ideas using NFC tags: basically, you prepare NFC tags with album/movie cover art on them, and putting a tag on the reader triggers playback of that album/movie. While this brings the joy of handling and selecting a physical collection, it costs money and time to prepare those NFC tags, and I wanted to avoid that.

Since we now have machine learning models and classifiers, I figured I could train a model to look at a webcam photo of a record or CD and tell me the Spotify link to play that album.

BTW, I know Microsoft Copilot (and probably OpenAI too) can do this without any special training, but I don't want to pay extra for that and just wanted to host the model on my own machines.

I imagine it will be something like this:

I put an album in front of a webcam...


... and the model will tell me the Spotify URL to pass on to the music streamer


Long story short, my model can identify my music collection with 98% correctness (more on that later). If you are interested in the technical details and the scripts used to train the model, they are available on GitHub: https://github.com/kitsook/AlbumSpotter

But eventually I didn't integrate this into my home automation, which is partly due to that correctness number. Whenever I get a new CD or vinyl record, I always add it to my collection on Spotify, so I can just grab the cover art from Spotify to train my model. But then I discovered at least two problems that affect the correctness:

  • there are many editions of the same album, e.g. I could have a physical CD of the standard edition while my Spotify collection has the extended edition with a different track list
  • nowadays artists tend to release an album with several "special" cover arts, so my physical copy could look totally different from the one on Spotify

That means I would need to clean up the data for a more accurate result. As procrastination kicked in, I am stopping the project at the machine learning model; the home automation part will be a future project.


Monday, September 23, 2024

OpenSUSE Tumbleweed - another bug


 


Tumbleweed used to be great during the past eight years or so that I have been using it on my main desktop. But recently it has just been bug after bug after every update.

Here is another one after updating today: the KDE screen locker crashed with a segfault and failed to unlock the screen. It seems to be related to the PAM library.




Saturday, June 15, 2024

Google cloud AppEngine deployment issue with cache



Note to self: when deploying Google Cloud applications, if you encounter the following "no such object" error, it is because the deployment process is looking for a cache that doesn't exist.

"build": ERROR: failed to initialize analyzer: getting previous image: getting config file for image

To solve it, deploy with the "--no-cache" parameter.
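For an App Engine deployment, that looks like:

```shell
# Force a clean build instead of looking for the (missing) cached image
gcloud app deploy --no-cache
```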

Friday, February 23, 2024

Segfault with bluez and A2DP


Currently (as of Feb 23, 2024), openSUSE Tumbleweed with bluez 5.71-2.4 experiences a segfault whenever I try to connect my Sony WH-H800 headphones (other devices are fine).

After some digging, it seems to be caused by a recent bug. Waiting for Tumbleweed to include the patch.

[Fri Feb 23 16:23:24 2024] bluetoothd[9971]: segfault at 5600bf0628b5 ip 00005605de5743c1 sp 00007ffee8f18f70 error 4 in bluetoothd[5605de552000+d6000] likely on CPU 7 (core 1, socket 0)
[Fri Feb 23 16:23:24 2024] Code: 41 83 2c 24 01 0f 85 1b ff ff ff 4c 89 e7 e8 96 f4 fd ff e9 0e ff ff ff 90 41 55 41 54 55 53 48 83 ec 08 48 8b 2a 48 8b 7a 08 <48> 8b 45 20 4c 8b ad 88 00 00 00 4c 8b 20 48 85 ff 74 19 c7 47 08

$ sudo coredumpctl info 9971
          PID: 9971 (bluetoothd)
          UID: 0 (root)
          GID: 0 (root)
       Signal: 11 (SEGV)
    Timestamp: Fri 2024-02-23 16:23:24 PST (4h 2min ago)
 Command Line: /usr/libexec/bluetooth/bluetoothd
   Executable: /usr/libexec/bluetooth/bluetoothd
Control Group: /system.slice/bluetooth.service
         Unit: bluetooth.service
        Slice: system.slice
      Boot ID: 153713284dde4d7cba57f31e2956690d
   Machine ID: 5c42528e25094a3cb1af7e2c43a85357
     Hostname: linux-lct7
      Storage: /var/lib/systemd/coredump/core.bluetoothd.0.153713284dde4d7cba57f31e2956690d.9971.1708734204000000.zst (present)
 Size on Disk: 138.1K
      Message: Process 9971 (bluetoothd) of user 0 dumped core.
                
               Stack trace of thread 9971:
               #0  0x00005605de5743c1 n/a (bluetoothd + 0x463c1)
               #1  0x00005605de55f4d0 n/a (bluetoothd + 0x314d0)
               #2  0x00005605de55f5a8 n/a (bluetoothd + 0x315a8)
               #3  0x00005605de569767 n/a (bluetoothd + 0x3b767)
               #4  0x00007fc0b22daf30 n/a (libglib-2.0.so.0 + 0x5bf30)
               #5  0x00007fc0b22dcb58 n/a (libglib-2.0.so.0 + 0x5db58)
               #6  0x00007fc0b22dd42f g_main_loop_run (libglib-2.0.so.0 + 0x5e42f)
               #7  0x00005605de5555dc n/a (bluetoothd + 0x275dc)
               #8  0x00007fc0b1e2a1f0 __libc_start_call_main (libc.so.6 + 0x2a1f0)
               #9  0x00007fc0b1e2a2b9 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x2a2b9)
               #10 0x00005605de5566b5 n/a (bluetoothd + 0x286b5)
               ELF object binary architecture: AMD x86-64


Thursday, February 1, 2024

Logitech Media Server commands




While moving the music streaming service from Tidal to Spotify on Home Assistant, I found I had forgotten how to get the playlist ID from Logitech Media Server (LMS).

Note to self: connect to LMS with "nc lms-address 9090" and issue the command "playlists 0 100 tags:E". The command doc is available at "lms-address:port/html/docs/cli-api.html"
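As a one-liner (assuming the default CLI port 9090 and that piping into plain `nc` is acceptable for the server):

```shell
# Send the playlists command from above to the LMS CLI port
echo "playlists 0 100 tags:E" | nc lms-address 9090
```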

Wednesday, December 6, 2023

Solving the Knapsack Problem with recursion in Java



Source available on GitHub.

Wikipedia has a detailed description of the Knapsack problem and pseudocode for solving it. However, while implementing it with Elixir, it seemed that using simple recursion yields cleaner code. This is not the fastest way to solve the problem, but it is easy to understand.


Basically, there are three return points within the recursion:

  • if there are no items left, return 0
  • pop the first item; if its weight exceeds the remaining capacity, return the recursion result of the remaining items
  • otherwise, return the max of (current item value + recursion result of the remaining items with reduced capacity) and (recursion result of the remaining items only)
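The three cases above could be sketched like this (a simplified version for illustration; the actual code is in the GitHub repo linked above):

```java
public class Knapsack {
    // items are given as parallel arrays of weights and values;
    // i is the index of the item currently being considered
    static int knapsack(int[] w, int[] v, int i, int capacity) {
        if (i == w.length) return 0;                  // case 1: no items left
        if (w[i] > capacity)                          // case 2: item too heavy, skip it
            return knapsack(w, v, i + 1, capacity);
        // case 3: max of taking the item vs leaving it
        return Math.max(v[i] + knapsack(w, v, i + 1, capacity - w[i]),
                        knapsack(w, v, i + 1, capacity));
    }

    public static void main(String[] args) {
        int[] weights = {1, 3, 4, 5};
        int[] values  = {1, 4, 5, 7};
        // best choice within capacity 7 is items of weight 3 and 4 (value 4 + 5)
        System.out.println(knapsack(weights, values, 0, 7)); // prints 9
    }
}
```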

However, not only is this inefficient, deep recursion can also cause a stack overflow error in Java. See the last (disabled) test case. Use the -Xss parameter to increase the stack size and run it for fun.

Wednesday, November 8, 2023

Caffeine (Simplified)

 



Found my old Turbo Pascal book on the bookshelf. It just so happened that I was searching for a Windows program to prevent screen lock and found the open-source Caffeine, also written in Pascal.

Forked the repository, fired up Lazarus, and revised the UI a bit. Now it starts as a tray icon and can be controlled via the right-click menu.

Source code and executable are available on GitHub.