llama.cpp Web UI
A modern, feature-rich web interface for llama.cpp built with SvelteKit. This UI provides an intuitive chat interface with advanced file handling, conversation management, and comprehensive model interaction capabilities.
Features
- Modern Chat Interface - Clean, responsive design with dark/light mode
- File Attachments - Support for images, text files, PDFs, and audio with rich previews and drag-and-drop support
- Conversation Management - Create, edit, branch, and search conversations
- Advanced Markdown - Code highlighting, math formulas (KaTeX), and content blocks
- Reasoning Content - Support for models with thinking blocks
- Keyboard Shortcuts - Keyboard navigation (Shift+Ctrl/Cmd+O for new chat, Shift+Ctrl/Cmd+E to edit a conversation, Shift+Ctrl/Cmd+D to delete a conversation, Ctrl/Cmd+K for search, Ctrl/Cmd+V for paste, Ctrl/Cmd+B to open/collapse the sidebar)
- Request Tracking - Monitor processing with slots endpoint integration
- UI Testing - Storybook component library with automated tests
Development
Install dependencies:
```sh
npm install
```
Start the development server + Storybook:
```sh
npm run dev
```
This will start both the SvelteKit dev server and Storybook on port 6006.
Building
Create a production build:
```sh
npm run build
```
The build outputs static files to the `../public` directory for deployment with the llama.cpp server.
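As a rough sketch of the deployment flow (the model path and port below are placeholders, and this assumes a `llama-server` binary built from the repository root with its `--path` option for serving static files):

```sh
# From the webui directory: build the static output into ../public
npm run build

# From the repository root: serve the freshly built UI with llama-server,
# pointing --path at the build output (model path/port are illustrative)
./llama-server -m models/your-model.gguf --path tools/server/public --port 8080
```

The server then serves the UI at `http://localhost:8080` alongside its OpenAI-compatible API endpoints.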
Testing
Run the test suite:
```sh
# E2E tests
npm run test:e2e

# Unit tests
npm run test:unit

# UI tests
npm run test:ui

# All tests
npm run test
```
Architecture
- Framework: SvelteKit with Svelte 5 runes
- Components: ShadCN UI + bits-ui design system
- Database: IndexedDB with Dexie for local storage
- Build: Static adapter for deployment with llama.cpp server
- Testing: Playwright (E2E) + Vitest (unit) + Storybook (components)