andrewor14 3ffb3bdcfe Fix QAT + LoRA fast path, add tests (#3307)
**Summary:** The existing QAT + LoRA path only applied fake
quantization on the original slow path, but the default is the
fast path that calls unsloth's fast LoRA primitives. This commit
integrates fake quantization into these fast primitives as well,
and adds unit tests asserting that fake quantization actually
takes place.
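
As a rough illustration of the idea (plain PyTorch; the function names and group size here are hypothetical, not unsloth's actual primitives): fake quantization rounds the frozen base weight onto the int4 grid and immediately dequantizes it in the forward pass, so training sees the same quantization error the deployed int4 model will have, while the LoRA adapters stay in full precision.

```python
import torch

def fake_quantize_int4(w: torch.Tensor, group_size: int = 32) -> torch.Tensor:
    """Symmetric int4 weight-only fake quantization: round the weight onto a
    4-bit grid with per-group scales, then dequantize right away."""
    wg = w.reshape(-1, group_size)
    scale = wg.abs().amax(dim=1, keepdim=True).clamp(min=1e-9) / 7  # int4 range [-8, 7]
    q = torch.round(wg / scale).clamp(-8, 7)   # quantize to the int4 grid
    return (q * scale).reshape(w.shape)        # dequantize back to float

def qat_lora_forward(x, w_base, lora_a, lora_b, alpha: float = 16.0):
    """LoRA forward where the frozen base weight is fake-quantized;
    the LoRA matrices themselves remain in full precision."""
    w_fq = fake_quantize_int4(w_base)          # simulate the int4 base weight
    return x @ w_fq.T + alpha * (x @ lora_a.T) @ lora_b.T
```

In full QAT a straight-through estimator (e.g. `w + (fake_quantize_int4(w) - w).detach()`) would let gradients pass through the rounding; it is omitted here since under LoRA the base weight is frozen and only the adapters receive gradients.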

**Test Plan:**

Unit tests:
```
pytest tests/utils/test_qat.py
```

End-to-end test: https://gist.github.com/andrewor14/6360dd69b5784c71c46e80c14f53e6b6

Fine-tuned Llama3.1-8B with and without QAT + LoRA on yahma/alpaca-cleaned for 1 epoch:

- Batch size = 8 (no grad accum)
- Learning rate = 2e-4
- Quantization scheme = int4 weight only (with bf16 activations)

Wikitext perplexity:

- Baseline = int4 quantized model finetuned without QAT
- QAT int4 quantized model (with this PR) recovered roughly a third (~34%) of the quantization-induced perplexity degradation relative to the int4 baseline
- QAT int4 quantized model without this PR was worse than the int4 baseline

```
==> unsloth_model_lora_baseline_output/lm_eval_float.log <==
|        |       |none  |     0|word_perplexity|↓  |7.5551|±  |   N/A|

==> unsloth_model_lora_baseline_output/lm_eval_quantized.log <==
|        |       |none  |     0|word_perplexity|↓  |8.7655|±  |   N/A|

==> unsloth_model_lora_qat_int4_output/lm_eval_quantized.log <==
|        |       |none  |     0|word_perplexity|↓  |8.3548|±  |   N/A|
```
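
The headline number is the fraction of the quantization-induced degradation that QAT recovers, which can be recomputed from the logged word perplexities:

```python
# Word perplexities from the lm_eval logs above
float_ppl = 7.5551      # finetuned, unquantized
baseline_int4 = 8.7655  # finetuned without QAT, then int4-quantized
qat_int4 = 8.3548       # finetuned with QAT, then int4-quantized

# Fraction of the int4 degradation that QAT recovers
recovered = 1 - (qat_int4 - float_ppl) / (baseline_int4 - float_ppl)
print(f"{recovered:.1%}")  # → 33.9%
```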
2025-09-17 15:18:17 -07:00