chore(model gallery): 🤖 add 1 new models via gallery agent (#9425)

chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
LocalAI [bot] 2026-04-20 00:42:44 +02:00 committed by GitHub
parent 60633c4dd5
commit cb77a5a4b9

@@ -1,4 +1,66 @@
---
- name: "qwopus-glm-18b-merged"
  url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
  urls:
    - https://huggingface.co/Jackrong/Qwopus-GLM-18B-Merged-GGUF
  description: |
    # 🪐 Qwen3.5-9B-GLM5.1-Distill-v1
    ## 📌 Model Overview
    **Model Name:** `Jackrong/Qwen3.5-9B-GLM5.1-Distill-v1`
    **Base Model:** Qwen3.5-9B
    **Training Type:** Supervised Fine-Tuning (SFT, Distillation)
    **Parameter Scale:** 9B
    **Training Framework:** Unsloth
    This model is a distilled variant of **Qwen3.5-9B**, trained on high-quality reasoning data derived from **GLM-5.1**.
    The primary goals are to:
    - Improve **structured reasoning ability**
    - Enhance **instruction-following consistency**
    - Activate **latent knowledge via better reasoning structure**
    ## 📊 Training Data
    ### Main Dataset
    - `Jackrong/GLM-5.1-Reasoning-1M-Cleaned`
      - Cleaned from the original `Kassadin88/GLM-5.1-1000000x` dataset.
      - Generated from a **GLM-5.1 teacher model**
      - Approximately **700x** the scale of `Qwen3.5-reasoning-700x`
      - Training used a **filtered subset**, not the full source dataset.
    ### Auxiliary Dataset
    - `Jackrong/Qwen3.5-reasoning-700x`
    ...
  license: "apache-2.0"
  tags:
    - llm
    - gguf
    - reasoning
  icon: https://cdn-uploads.huggingface.co/production/uploads/66309bd090589b7c65950665/BnSg_x99v9bG9T5-8sKa1.png
  overrides:
    backend: llama-cpp
    function:
      automatic_tool_parsing_fallback: true
      grammar:
        disable: true
    known_usecases:
      - chat
    options:
      - use_jinja:true
    parameters:
      model: llama-cpp/models/Qwopus-GLM-18B-Merged-GGUF/Qwopus-GLM-18B-Healed-Q4_K_M.gguf
    template:
      use_tokenizer_template: true
  files:
    - filename: llama-cpp/models/Qwopus-GLM-18B-Merged-GGUF/Qwopus-GLM-18B-Healed-Q4_K_M.gguf
      sha256: 13bd039f95c9ea46ef1d75905faa7be6ca4e47a5af9d4cf62e298a738a5b195f
      uri: https://huggingface.co/Jackrong/Qwopus-GLM-18B-Merged-GGUF/resolve/main/Qwopus-GLM-18B-Healed-Q4_K_M.gguf
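Each `files` entry pins the GGUF to a sha256 digest, which is used to validate the downloaded artifact. A minimal sketch of how such a checksum can be verified by hand; only the digest comes from the entry above, and the local file path is a hypothetical example:

```python
import hashlib

# Pinned digest from the gallery entry above
EXPECTED = "13bd039f95c9ea46ef1d75905faa7be6ca4e47a5af9d4cf62e298a738a5b195f"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks and return its hex sha256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (path is hypothetical; point it at the downloaded file):
#   sha256_of("Qwopus-GLM-18B-Healed-Q4_K_M.gguf") == EXPECTED
```

Streaming in chunks matters here: multi-GB GGUF files should not be read into memory in one call.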
- name: "qwen3.6-35b-a3b-apex"
  url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
  urls: