LocalAI
From DWIKI
=Links=
*[https://localai.io LocalAI homepage]
*[https://localai.io/faq/ LocalAI FAQ]
*[https://github.com/open-webui/open-webui Open WebUI]
=HOWTO=
==List models==
 curl http://localhost:8080/v1/models
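The response follows the OpenAI model-list shape (<code>{"object":"list","data":[{"id":...},...]}</code>), so <code>jq</code> can pull out just the model names. A minimal sketch on a canned response (the model ids below are illustrative, not real output):

```shell
# Canned /v1/models response body (illustrative ids, OpenAI model-list shape):
response='{"object":"list","data":[{"id":"gpt-4","object":"model"},{"id":"whisper-1","object":"model"}]}'

# Against a live server this would be:
#   curl -s http://localhost:8080/v1/models | jq -r '.data[].id'
echo "$response" | jq -r '.data[].id'
```

This prints one model id per line, which is handy for scripting over installed models.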
==Audio to text==
*https://localai.io/features/audio-to-text/
*https://docs.llamaindex.ai/en/stable/examples/llm/localai/
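Per the audio-to-text page above, LocalAI serves the OpenAI-compatible <code>/v1/audio/transcriptions</code> endpoint. A minimal sketch, assuming a running instance on localhost:8080 with a whisper-class model installed under the name <code>whisper-1</code>, and a local <code>audio.wav</code> (both names are placeholders):

```shell
curl http://localhost:8080/v1/audio/transcriptions \
  -H "Content-Type: multipart/form-data" \
  -F file="@audio.wav" \
  -F model="whisper-1"
```

The transcription comes back as JSON with the recognized text in it.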
==Apply model==
 curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{
   "id": "huggingface@TheBloke/Yarn-Mistral-7B-128k-GGUF/yarn-mistral-7b-128k.Q5_K_M.gguf"
 }'
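<code>/models/apply</code> is asynchronous: per the LocalAI model-gallery docs it answers with a job <code>uuid</code> and a status URL to poll, rather than blocking until the download finishes. A sketch of extracting that status URL from a canned response (the uuid below is illustrative):

```shell
# Canned /models/apply response (illustrative uuid; field names per the LocalAI model-gallery docs):
response='{"uuid":"1059474d-0000-0000-0000-000000000000","status":"http://localhost:8080/models/jobs/1059474d-0000-0000-0000-000000000000"}'

# Pull out the job-status URL, then poll it until the model download reports done:
echo "$response" | jq -r '.status'
```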
=Scripts=
==Talk to the chat interface==
 #!/bin/bash
 echo -n "Ask me anything: "
 read A
 curl -s http://localhost:8080/v1/chat/completions \
   -H "Content-Type: application/json" \
   -d '{ "model": "gpt-4", "messages": [{"role": "user", "content": "'"$A"'"}], "temperature": 0.1 }' |\
   jq '.choices[].message.content' | sed 's/\\n/\n/g' | sed 's/\\"/"/g'
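The closing <code>jq</code>/<code>sed</code> pipeline in the script is what turns the JSON-escaped answer into readable text: <code>\n</code> escapes become real newlines and <code>\"</code> becomes <code>"</code>. A stand-alone sketch of just that post-processing step on a canned chat response (the content string is illustrative; the newline replacement assumes GNU sed, as the script itself does):

```shell
# Canned /v1/chat/completions response (illustrative content):
response='{"choices":[{"message":{"role":"assistant","content":"Line one\nA \"quoted\" word"}}]}'

# Same post-processing as the chat script: unescape \n and \" for display.
echo "$response" | jq '.choices[].message.content' | sed 's/\\n/\n/g' | sed 's/\\"/"/g'
```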
=FAQ=
==File/directory locations==
===Local-ai log===
 /usr/share/local-ai/llama.log
==Messages==
===GPU device found but no CUDA backend present===
If running in Docker, try restarting Docker.
Latest revision as of 15:34, 14 August 2024