Directory: LocalAI/core/schema
Latest commit: 87e6de1989, feat: wire transcription for llama.cpp, add streaming support (#9353)
Author: Ettore Di Giacinto (Signed-off-by: Ettore Di Giacinto <mudler@localai.io>)
Date: 2026-04-14 16:13:40 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| agent_jobs.go | feat(api): Allow coding agents to interactively discover how to control and configure LocalAI (#9084) | 2026-04-04 15:14:35 +02:00 |
| anthropic.go | fix(anthropic): show null index when not present, default to 0 (#9225) | 2026-04-04 15:13:17 +02:00 |
| anthropic_test.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |
| backend.go | feat: Add backend gallery (#5607) | 2025-06-15 14:56:52 +02:00 |
| elevenlabs.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |
| finetune.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |
| gallery-model.schema.json | [gallery] add JSON schema for gallery model specification (#7890) | 2026-01-06 22:10:43 +01:00 |
| jina.go | fix(reranker): tests and top_n check fix #7212 (#7284) | 2025-11-16 17:53:23 +01:00 |
| localai.go | feat(sam.cpp): add sam.cpp detection backend (#9288) | 2026-04-09 21:49:11 +02:00 |
| message.go | feat(vllm): parity with llama.cpp backend (#9328) | 2026-04-13 11:00:29 +02:00 |
| message_test.go | feat(vllm): parity with llama.cpp backend (#9328) | 2026-04-13 11:00:29 +02:00 |
| ollama.go | feat(api): add ollama compatibility (#9284) | 2026-04-09 14:15:14 +02:00 |
| openai.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |
| openresponses.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |
| prediction.go | fix: implement encoding_format=base64 for embeddings endpoint (#9135) | 2026-03-25 17:38:07 +01:00 |
| quantization.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |
| request.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |
| schema_suite_test.go | feat(llama.cpp): consolidate options and respect tokenizer template when enabled (#7120) | 2025-11-07 21:23:50 +01:00 |
| tokenize.go | feat(api): Allow coding agents to interactively discover how to control and configure LocalAI (#9084) | 2026-04-04 15:14:35 +02:00 |
| transcription.go | feat: wire transcription for llama.cpp, add streaming support (#9353) | 2026-04-14 16:13:40 +02:00 |
| transcription_format.go | feat: add distributed mode (#9124) | 2026-03-30 00:47:27 +02:00 |