chore(model gallery): 🤖 add 1 new model via gallery agent (#9400)

chore(model gallery): 🤖 add new models via gallery agent

Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: mudler <2420543+mudler@users.noreply.github.com>
LocalAI [bot] 2026-04-17 17:56:41 +02:00 committed by GitHub
parent 55c05211d3
commit 844b0b760b
GPG key ID: B5690EEEBB952194


@@ -1,4 +1,67 @@
---
- name: "qwen3.6-35b-a3b-apex"
  url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
  urls:
    - https://huggingface.co/mudler/Qwen3.6-35B-A3B-APEX-GGUF
  description: |
    # Qwen3.6-35B-A3B

    [](https://chat.qwen.ai)

    > [!NOTE]
    > This repository contains model weights and configuration files for the post-trained model in the Hugging Face Transformers format.
    >
    > These artifacts are compatible with Hugging Face Transformers, vLLM, SGLang, KTransformers, etc.

    Following the February release of the Qwen3.5 series, we're pleased to share the first open-weight variant of Qwen3.6. Built on direct feedback from the community, Qwen3.6 prioritizes stability and real-world utility, offering developers a more intuitive, responsive, and genuinely productive coding experience.

    ## Qwen3.6 Highlights

    This release delivers substantial upgrades, particularly in:
    - **Agentic Coding:** the model now handles frontend workflows and repository-level reasoning with greater fluency and precision.
    - **Thinking Preservation:** we've introduced a new option to retain reasoning context from historical messages, streamlining iterative development and reducing overhead.

    For more details, please refer to our blog post Qwen3.6-35B-A3B.

    ## Model Overview
    ...
  license: "apache-2.0"
  tags:
    - llm
    - gguf
    - qwen3
    - vision
  icon: https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen3.6/Figures/qwen3.6_35b_a3b_score.png
  overrides:
    backend: llama-cpp
    function:
      automatic_tool_parsing_fallback: true
      grammar:
        disable: true
    known_usecases:
      - chat
    mmproj: llama-cpp/mmproj/Qwen3.6-35B-A3B-APEX-GGUF/mmproj.gguf
    options:
      - use_jinja:true
    parameters:
      min_p: 0
      model: llama-cpp/models/Qwen3.6-35B-A3B-APEX-GGUF/Qwen3.6-35B-A3B-APEX-Quality.gguf
      presence_penalty: 1.5
      repeat_penalty: 1
      temperature: 0.7
      top_k: 20
      top_p: 0.8
    template:
      use_tokenizer_template: true
  files:
    - filename: llama-cpp/mmproj/Qwen3.6-35B-A3B-APEX-GGUF/mmproj.gguf
      sha256: 356dfaa3111376a4f7165e32e8749713378d1700b37cf52e0c50d9f23322334d
      uri: https://huggingface.co/mudler/Qwen3.6-35B-A3B-APEX-GGUF/resolve/main/mmproj.gguf
    - filename: llama-cpp/models/Qwen3.6-35B-A3B-APEX-GGUF/Qwen3.6-35B-A3B-APEX-Quality.gguf
      sha256: b5aa0676be588bf6ef3bbdb89905d7d239b2a809637f0766a6ce23aed6c6b5b4
      uri: https://huggingface.co/mudler/Qwen3.6-35B-A3B-APEX-GGUF/resolve/main/Qwen3.6-35B-A3B-APEX-Quality.gguf
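Each `files` entry above pairs an artifact with a `sha256` digest, which the gallery uses to verify downloads. A minimal sketch of checking a locally downloaded GGUF against those digests (the local file names are assumptions; LocalAI itself performs this verification on install):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 in 1 MiB chunks,
    so multi-gigabyte GGUF files never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Digests copied from the gallery entry above; the keys are
# hypothetical local download names, not paths from this commit.
EXPECTED = {
    "mmproj.gguf": "356dfaa3111376a4f7165e32e8749713378d1700b37cf52e0c50d9f23322334d",
    "Qwen3.6-35B-A3B-APEX-Quality.gguf": "b5aa0676be588bf6ef3bbdb89905d7d239b2a809637f0766a6ce23aed6c6b5b4",
}

# usage (after downloading): sha256_of("mmproj.gguf") == EXPECTED["mmproj.gguf"]
```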
- name: "qwen3.6-35b-a3b"
  url: "github:mudler/LocalAI/gallery/virtual.yaml@master"
  urls: