LocalAI is the open-source AI engine. Run any model - LLMs, vision, voice, image, video - on any hardware. No GPU required.




💡 Get help - FAQ · 💭 Discussions · 💬 Discord · 📖 Documentation website

💻 Quickstart · 🖼️ Models · 🚀 Roadmap · 🥽 Demo · 🌍 Explorer · 🛫 Examples · Try on Telegram


LocalAI is the free, open-source OpenAI alternative. It acts as a drop-in replacement REST API compatible with the OpenAI (as well as Elevenlabs, Anthropic, and other) API specifications for local AI inferencing. It lets you run LLMs, generate images, audio, and more, locally or on-prem on consumer-grade hardware, and supports multiple model families. No GPU is required. It is created and maintained by Ettore Di Giacinto.
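Because the API is OpenAI-compatible, any OpenAI client can simply be pointed at a running LocalAI instance. A minimal sketch using only the standard library (it assumes LocalAI is listening on localhost:8080 and that a model named `llama-3.2-1b-instruct:q4_k_m` is installed; swap in any model you have loaded):

```python
import json
import urllib.request

# Assumes a LocalAI instance on localhost:8080 with the named model installed.
BASE_URL = "http://localhost:8080/v1"

payload = {
    "model": "llama-3.2-1b-instruct:q4_k_m",
    "messages": [{"role": "user", "content": "Hello!"}],
}

def chat(body: dict) -> dict:
    """POST an OpenAI-style chat-completion request to LocalAI."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# chat(payload)["choices"][0]["message"]["content"]  # requires a running server
```

The official `openai` SDK works the same way: set its `base_url` to `http://localhost:8080/v1` and an arbitrary API key.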

📚🆕 Local Stack Family

🆕 LocalAI is now part of a comprehensive suite of AI tools designed to work together:


LocalAGI

A powerful Local AI agent management platform that serves as a drop-in replacement for OpenAI's Responses API, enhanced with advanced agentic capabilities.


LocalRecall

A REST-ful API and knowledge base management system that provides persistent memory and storage capabilities for AI agents.

Screenshots

Talk Interface · Generate Audio
Models Overview · Generate Images
Chat Interface · Home
Login · Swarm

💻 Quickstart

Run the installer script:

```bash
# Basic installation
curl https://localai.io/install.sh | sh
```

For more installation options, see Installer Options.

Or run with docker:

CPU only image:

```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
```

NVIDIA GPU Images:

```bash
# CUDA 12.0
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12

# CUDA 11.7
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-11

# NVIDIA Jetson (L4T) ARM64
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nvidia-l4t-arm64
```

AMD GPU Images (ROCm):

```bash
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas
```

Intel GPU Images (oneAPI):

```bash
docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri/card1 --device=/dev/dri/renderD128 localai/localai:latest-gpu-intel
```

Vulkan GPU Images:

```bash
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan
```

AIO Images (pre-downloaded models):

```bash
# CPU version
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu

# NVIDIA CUDA 12 version
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12

# NVIDIA CUDA 11 version
docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-11

# Intel GPU version
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel

# AMD GPU version
docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas
```

For more information about the AIO images and pre-downloaded models, see Container Documentation.
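The same images can also be driven from Docker Compose. A minimal sketch (the service name and volume mapping below are illustrative assumptions, not copied from the repository's docker-compose.yaml; check that file for the maintained setup):

```yaml
services:
  local-ai:
    image: localai/localai:latest-aio-cpu
    ports:
      - "8080:8080"
    volumes:
      - ./models:/models   # illustrative path: persist downloaded models between restarts
```

Mounting a models volume avoids re-downloading the pre-bundled AIO models on every container recreation.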

To load models:

```bash
# From the model gallery (see available models with `local-ai models list`, in the WebUI from the model tab, or visiting https://models.localai.io)
local-ai run llama-3.2-1b-instruct:q4_k_m
# Start LocalAI with the phi-2 model directly from huggingface
local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
# Install and run a model from the Ollama OCI registry
local-ai run ollama://gemma:2b
# Run a model from a configuration file
local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
# Install and run a model from a standard OCI registry (e.g., Docker Hub)
local-ai run oci://localai/phi-2:latest
```

Automatic Backend Detection: When you install models from the gallery or YAML files, LocalAI automatically detects your system's GPU capabilities (NVIDIA, AMD, Intel) and downloads the appropriate backend. For advanced configuration options, see GPU Acceleration.
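The model YAML files mentioned above are plain declarative configs. A minimal sketch of their general shape (field values here are illustrative; see the model gallery for real, complete definitions):

```yaml
name: phi-2                          # name the model is served under
parameters:
  model: huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
context_size: 2048
template:
  chat: |                            # prompt template applied to chat requests
    {{.Input}}
```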

For more information, see 💻 Getting started

📰 Latest project news

Roadmap items: List of issues

🚀 Features

🔗 Community and integrations

Build and deploy custom containers:

WebUIs:

Model galleries

Other:

🔗 Resources

📖 🎥 Media, Blogs, Social

Citation

If you use this repository or its data in a downstream project, please consider citing it with:

```bibtex
@misc{localai,
  author = {Ettore Di Giacinto},
  title = {LocalAI: The free, Open source OpenAI alternative},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/go-skynet/LocalAI}},
}
```

❤️ Sponsors

Do you find LocalAI useful?

Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

A huge thank you to our generous sponsors, who support this project and cover its CI expenses. See our full Sponsor list:


🌟 Star history

LocalAI Star history Chart

📖 License

LocalAI is a community-driven project created by Ettore Di Giacinto.

MIT - Author Ettore Di Giacinto mudler@localai.io

🙇 Acknowledgements

LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

🤗 Contributors

This is a community project, a special thanks to our contributors! 🤗