Cristian Adam

From React to Qt Widgets using AI

In “From llama.vim to Qt Creator using AI” I showed how I converted the llama.vim source code to a Qt Creator code completion plugin.

Code completion is cool, but people want to chat with their chatbots, right?

My rough idea was:

  1. Get the markdown text from the model
  2. Parse it with md4c and generate html
  3. Display the html with litehtml

Here is the gpt-oss 20B response to this prompt:

write me a Qt markdown renderer, using md4c as parser library, and qlitehtml for rendering using litehtml as a html browser

llama-server does come with a web server that provides a nice chat interface, with a way to store the conversations, export them, and so on.

This is part of llama.cpp/tools/server/webui and is implemented in TypeScript and React. I had no knowledge of these technologies. But AI does!

At first I tried to get Qwen3 Coder 30B and gpt-oss 20B to convert the whole webui project (4208 lines). But, as it turns out, both models had issues with this task.

Qwen3 did recognize that the task was too big and offered a simplified version:

Because the original code is > 4000 lines, I do not rewrite every single line – that would be a huge undertaking and would not help you understand how the pieces map.

gpt-oss started generating a cpp file that had only headers:

#include <QApplication>

// ... after 230 headers that were in part repeating

#include <QSortFilterProxyModel>

I stopped the chat.

Since these were the models that I had, I had to select fewer source files to convert.

Less is more!

With fewer source files, gpt-oss 20B was able to give me a skeleton to work with.

SQLite Bug

The part with the storage:

4. storage.h – IndexedDB ↔︎ SQLite (simplified)

The original TS code used IndexedDB via Dexie.
Qt offers SQLite out of the box – a perfect match for the same relational schema.

This part had a bug that shows gpt-oss 20B has no idea what it is doing 🙂

Here is the code:

Storage::Storage()
{
    db = QSqlDatabase::addDatabase("QSQLITE");
    db.setDatabaseName("llamacppwebui.db");
    if (!db.open())
    {
        qFatal("Failed to open database: %s", qPrintable(db.lastError().text()));
    }
    /* create tables if not exist */
    QSqlQuery q(db);
    q.exec("CREATE TABLE IF NOT EXISTS conversations "
           "(id TEXT PRIMARY KEY, lastModified INTEGER, currNode INTEGER, name TEXT)");
    q.exec("CREATE TABLE IF NOT EXISTS messages "
           "(id INTEGER PRIMARY KEY, convId TEXT, type TEXT, timestamp INTEGER, role TEXT, "
           "content TEXT, parent INTEGER, "
           "children TEXT, // JSON array of ints "
           "FOREIGN KEY(convId) REFERENCES conversations(id))");
    // create indexes for quick lookups
    q.exec("CREATE INDEX IF NOT EXISTS idx_messages_convId ON messages(convId)");
}

The line with the bug was:

           "children TEXT, // JSON array of ints "

This is what happens when you use plain strings for SQL statements 🙂
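The fix is simply to keep the annotation in C++, outside the SQL text. A sketch of the corrected statement (only the messages DDL shown):

```cpp
#include <cassert>
#include <string>

// Corrected DDL: the "JSON array of ints" note now lives in a C++ comment,
// outside the SQL string, where it can no longer corrupt the statement.
static const char *kMessagesDdl =
    "CREATE TABLE IF NOT EXISTS messages "
    "(id INTEGER PRIMARY KEY, convId TEXT, type TEXT, timestamp INTEGER, role TEXT, "
    "content TEXT, parent INTEGER, "
    "children TEXT, " // JSON array of ints, serialized as text
    "FOREIGN KEY(convId) REFERENCES conversations(id))";
```

In the Storage constructor this string would be passed to q.exec() exactly as before.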

QLabel vs QLiteHtmlWidget

Qt Creator uses litehtml, via the QLiteHtmlWidget Qt wrapper, to render its documentation.

The Chat view was using a QAbstractScrollArea with a stack of message widgets. A chat message is supposed to grow in size as new text is received from the model.

As it turns out, QLiteHtmlWidget is itself based on a QAbstractScrollArea; unfortunately, this means a chat message wouldn't expand as text was received.

By using its internal, but exported, classes DocumentContainer and DocumentContainerContext, I was able to render the HTML message.

That’s good, right? Well, no. Text selection was missing, and, most importantly, there was no incremental rendering when text arrived in chunks: on every new chunk the whole message would have to be rendered again.

This is why I stuck with QLabel for a while. QLabel had selection, and worked fine with incrementally received text.

Later I switched to a QTextBrowser for messages, with which I could do some incremental section rendering (splitting after h1 headings and newlines). This also helped with implementing search in the conversation!
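One way to sketch the splitting idea; splitSections is a hypothetical helper (not the actual llama.qtcreator code) that cuts the streamed markdown at h1 boundaries, so only the last, still-growing section needs re-rendering:

```cpp
#include <string>
#include <vector>

// Split streamed markdown into sections at "# " (h1) boundaries. Earlier
// sections are stable and can keep their rendered form; only the final
// section changes as new text arrives from the model.
std::vector<std::string> splitSections(const std::string &markdown)
{
    std::vector<std::string> sections;
    std::string current;
    size_t pos = 0;
    while (pos < markdown.size()) {
        size_t eol = markdown.find('\n', pos);
        if (eol == std::string::npos)
            eol = markdown.size();
        const std::string line = markdown.substr(pos, eol - pos);
        // A line starting a new h1 closes the previous section.
        if (line.rfind("# ", 0) == 0 && !current.empty()) {
            sections.push_back(current);
            current.clear();
        }
        current += line;
        current += '\n';
        pos = eol + 1;
    }
    if (!current.empty())
        sections.push_back(current);
    return sections;
}
```

Each finished section can then be rendered once into the QTextBrowser, while the open-ended last section is re-rendered on every update.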

Application Font Icon

The React webui used heroicons for all the buttons. These icons are available as SVG files on GitHub.

I had no idea that using SVGs for buttons comes with two issues:

  1. When using them as HTML “a img” links, they have the wrong resolution. See QTBUG-89843.
  2. They do not look good in a dark theme.

The quick solution for both issues is to use a font. And with “Creating fonts from SVGs – with automation! 🤖” I was able to convert the SVGs that I needed into a TrueType font!

The blog post had a Python 2 script that used FontForge to do the magic. I quickly used gpt-oss 20B to do the Python 3 conversion.
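The resulting font works like this: each SVG becomes a glyph at a Private Use Area codepoint, and an icon "label" is just a one-character string in that font. A sketch without Qt, with made-up icon names and codepoints; in Qt Widgets the label would become QString(QChar(codepoint)) on a button using the icon font:

```cpp
#include <map>
#include <string>

// Hypothetical mapping produced by the SVG-to-TTF conversion: each icon name
// gets a glyph at a Private Use Area codepoint (values here are made up).
const std::map<std::string, char16_t> kIconCodepoints = {
    {"chevron-down", u'\uE001'},
    {"paper-airplane", u'\uE002'},
    {"trash", u'\uE003'},
};

// A button label for an icon is simply a one-character string in the icon
// font; unknown names yield an empty string.
std::u16string iconLabel(const std::string &name)
{
    auto it = kIconCodepoints.find(name);
    return it == kIconCodepoints.end() ? std::u16string()
                                       : std::u16string(1, it->second);
}
```

Because the "icon" is now text, it scales with the font size and follows the theme's text color, which solves both the resolution and the dark-theme issues at once.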

gpt-oss and Qwen3 Coder

I have used both gpt-oss and Qwen3 to improve the Chat capability in llama.qtcreator. Both are very knowledgeable about Qt Widgets and C++! You “only” have to be very concrete when asking for a fix.

See it in action below:

This is v2.0.0 of llama.qtcreator. On the Releases page you can see the list of changes!
