
ggml: Prevent KV cache quantization on gpt-oss

KV cache quantization has a dependency on the flash attention kernel.
We currently cannot use flash attention with gpt-oss as it requires
additional operations.

The model definition does not call flash attention, so it works
regardless of the setting, but the KV cache still picks up the
requested quantization type. This change updates the flash attention
setting earlier in the loading flow so that all downstream settings
are configured consistently.

Fixes: #11671
Author: Jesse Gross (authored and committed, 8 months ago)
Commit: 8253ad4d2b
 fs/ggml/ggml.go | 4 ++++
 1 file changed, 4 insertions(+)
@@ -761,6 +761,10 @@ func (f GGML) SupportsFlashAttention() bool {
 		return false
 	}
+	if f.KV().Architecture() == "gptoss" {
+		return false
+	}
+
 	// Check head counts match and are non-zero
 	headCountK := f.KV().EmbeddingHeadCountK()
 	headCountV := f.KV().EmbeddingHeadCountV()