LocalAI

From DWIKI

=Links=
*[https://localai.io LocalAI homepage]
*[https://localai.io/faq/ LocalAI FAQ]
*[https://github.com/open-webui/open-webui Check out Open WebUI]


Latest revision as of 15:34, 14 August 2024

=HOWTO=

==List models==

 curl http://localhost:8080/v1/models
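The endpoint answers with an OpenAI-compatible model list (`{"object":"list","data":[{"id": ...}, ...]}`), so the installed model names can be pulled out with jq. A minimal sketch; a canned response is used here so it runs without a live server, and in live use you would pipe the curl output into the same filter:

```shell
# Extract model names from a /v1/models response with jq.
# The canned payload mimics the OpenAI-style list shape; live usage:
#   curl -s http://localhost:8080/v1/models | jq -r '.data[].id'
response='{"object":"list","data":[{"id":"gpt-4","object":"model"},{"id":"whisper-1","object":"model"}]}'
echo "$response" | jq -r '.data[].id'
```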

==Audio to text==
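LocalAI also serves the OpenAI-compatible transcription endpoint. A minimal sketch, assuming a whisper-type model installed under the name <code>whisper-1</code> and an <code>audio.wav</code> in the current directory (both are placeholders, adjust to your setup):

```shell
# Send a local audio file to the OpenAI-compatible transcription endpoint.
# "whisper-1" and audio.wav are placeholders for an installed whisper-type
# model and a real audio file.
curl -s http://localhost:8080/v1/audio/transcriptions \
  -F file="@audio.wav" \
  -F model="whisper-1"
```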


==Apply model==

 curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{
   "id": "huggingface@TheBloke/Yarn-Mistral-7B-128k-GGUF/yarn-mistral-7b-128k.Q5_K_M.gguf"
 }'
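The apply call kicks off the download as a background job: as I understand it, the reply carries a job uuid that can then be polled at <code>/models/jobs/&lt;uuid&gt;</code> until the model is ready. A sketch of extracting the uuid; the response shape and the jobs endpoint are assumptions to verify against your LocalAI version, and a canned response stands in for the live call so the snippet runs without a server:

```shell
# models/apply runs as a background job; the reply carries a uuid to poll.
# Canned response used here so the snippet runs without a live server.
apply_response='{"uuid":"1234-abcd"}'
job=$(echo "$apply_response" | jq -r '.uuid')
# Live usage (endpoint name assumed): curl -s http://localhost:8080/models/jobs/$job
echo "$job"
```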

==Scripts==

===Talk to the chat interface===

 #!/bin/bash
 # Prompt for a question and send it to the chat completions endpoint.
 # Note: "temperature" is a top-level request parameter, not part of a message.
 echo -n "Ask me anything: "
 read -r A
 curl -s http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{ "model": "gpt-4", "temperature": 0.1, "messages": [{"role": "user", "content": "'"$A"'"}] }' |
     jq -r '.choices[].message.content'


=FAQ=

==File/directory locations==

===Local-ai log===

 /usr/share/local-ai/llama.log

==Messages==

===GPU device found but no CUDA backend present===

If running in Docker, try restarting the Docker daemon and the container.
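A minimal recovery sketch, assuming a systemd host and a container named <code>local-ai</code> (both are assumptions, adjust to your setup):

```shell
# Restart the Docker daemon, then the LocalAI container.
# "local-ai" is a placeholder for your actual container name.
sudo systemctl restart docker
docker restart local-ai
# If the NVIDIA runtime is wired into the container, confirm the GPU is visible:
docker exec local-ai nvidia-smi
```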