author	Nate Sesti <sestinj@gmail.com>	2023-07-28 10:23:15 -0700
committer	Nate Sesti <sestinj@gmail.com>	2023-07-28 10:23:15 -0700
commit	f6fe993a782d68cf9eb0ba91c964914e4d1baf17 (patch)
tree	6f00723be5ef4428d143716da7e07a2b333850ab
parent	5bc80e2e6d3141922c966c404a6d32a496097960 (diff)
parent	500f62fcc55ed7ccb04fd9ccef3c66c8b5ff1721 (diff)
Merge branch 'main' of https://github.com/continuedev/continue
-rw-r--r--	docs/docs/customization.md	2
1 file changed, 2 insertions, 0 deletions
diff --git a/docs/docs/customization.md b/docs/docs/customization.md
index c768c97d..f383de48 100644
--- a/docs/docs/customization.md
+++ b/docs/docs/customization.md
@@ -25,6 +25,8 @@ If you have access, simply set `default_model` to the model you would like to us
 See our [5 minute quickstart](https://github.com/continuedev/ggml-server-example) to run any model locally with ggml. While these models don't yet perform as well, they are free, entirely private, and run offline.
+Once the model is running on localhost:8000, set `default_model` in `~/.continue/config.py` to "ggml".
+
 ### Self-hosting an open-source model
 If you want to self-host on Colab, RunPod, Replicate, HuggingFace, Haven, or another hosting provider you will need to wire up a new LLM class. It only needs to implement 3 methods: `stream_complete`, `complete`, and `stream_chat`, and you can see examples in `continuedev/src/continuedev/libs/llm`.
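The line added by this commit tells users to point `default_model` at "ggml" in `~/.continue/config.py`. A minimal sketch of that setting follows, assuming the config file constructs a `ContinueConfig` object; the import path is an assumption and may differ between versions, and only the `default_model="ggml"` value comes from the diff above.

```python
# ~/.continue/config.py — sketch only; the ContinueConfig import path is an
# assumption, and only the default_model value is taken from the diff above.
from continuedev.core.config import ContinueConfig

config = ContinueConfig(
    # Route completions to the ggml server running on localhost:8000
    default_model="ggml",
)
```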
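The self-hosting paragraph in the diff context says a new LLM class only needs `stream_complete`, `complete`, and `stream_chat`. The skeleton below is illustrative only: the three method names come from the docs text, while the class shape, signatures, endpoint paths, and response format are assumptions — use the real examples in `continuedev/src/continuedev/libs/llm` as the reference.

```python
# Illustrative skeleton of a self-hosted LLM wrapper. Only the three method names
# come from the docs text; the signatures, endpoints, and response shapes below are
# assumptions — mirror the real classes in continuedev/src/continuedev/libs/llm.
from typing import AsyncGenerator, Dict, List

import aiohttp  # assumption: an async HTTP client is used to reach the hosted model


class SelfHostedLLM:
    """Minimal sketch of a wrapper around a self-hosted completion endpoint."""

    def __init__(self, server_url: str = "http://localhost:8000"):
        self.server_url = server_url  # hypothetical base URL of your hosted model

    async def complete(self, prompt: str, **kwargs) -> str:
        """Return the whole completion for a prompt in a single response."""
        async with aiohttp.ClientSession() as session:
            async with session.post(f"{self.server_url}/completion",
                                    json={"prompt": prompt, **kwargs}) as resp:
                data = await resp.json()
                return data["completion"]  # hypothetical response field

    async def stream_complete(self, prompt: str, **kwargs) -> AsyncGenerator[str, None]:
        """Yield completion text incrementally as the server streams it back."""
        async with aiohttp.ClientSession() as session:
            async with session.post(f"{self.server_url}/stream_completion",
                                    json={"prompt": prompt, **kwargs}) as resp:
                async for chunk in resp.content.iter_any():
                    yield chunk.decode()

    async def stream_chat(self, messages: List[Dict[str, str]], **kwargs) -> AsyncGenerator[str, None]:
        """Like stream_complete, but for a chat-style list of role/content messages."""
        async with aiohttp.ClientSession() as session:
            async with session.post(f"{self.server_url}/stream_chat",
                                    json={"messages": messages, **kwargs}) as resp:
                async for chunk in resp.content.iter_any():
                    yield chunk.decode()
```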