| author | Nate Sesti <33237525+sestinj@users.noreply.github.com> | 2023-07-28 09:07:26 -0700 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2023-07-28 09:07:26 -0700 |
| commit | 500f62fcc55ed7ccb04fd9ccef3c66c8b5ff1721 (patch) | |
| tree | 0db946237bb6c163c96afe98cb6d5c672dd90be7 /docs | |
| parent | 9ded1ea41e65d83e32ed74ca1fb5bd1f00a5d054 (diff) | |
| download | sncontinue-500f62fcc55ed7ccb04fd9ccef3c66c8b5ff1721.tar.gz sncontinue-500f62fcc55ed7ccb04fd9ccef3c66c8b5ff1721.tar.bz2 sncontinue-500f62fcc55ed7ccb04fd9ccef3c66c8b5ff1721.zip | |
Update customization.md
Diffstat (limited to 'docs')
| -rw-r--r-- | docs/docs/customization.md | 2 |
|---|---|---|

1 file changed, 2 insertions, 0 deletions
```diff
diff --git a/docs/docs/customization.md b/docs/docs/customization.md
index c768c97d..f383de48 100644
--- a/docs/docs/customization.md
+++ b/docs/docs/customization.md
@@ -25,6 +25,8 @@ If you have access, simply set `default_model` to the model you would like to us
 
 See our [5 minute quickstart](https://github.com/continuedev/ggml-server-example) to run any model locally with ggml. While these models don't yet perform as well, they are free, entirely private, and run offline.
 
+Once the model is running on localhost:8000, set `default_model` in `~/.continue/config.py` to "ggml".
+
 ### Self-hosting an open-source model
 
 If you want to self-host on Colab, RunPod, Replicate, HuggingFace, Haven, or another hosting provider you will need to wire up a new LLM class. It only needs to implement 3 methods: `stream_complete`, `complete`, and `stream_chat`, and you can see examples in `continuedev/src/continuedev/libs/llm`.
```
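The diff above says a custom LLM class only needs to implement `stream_complete`, `complete`, and `stream_chat`. A minimal sketch of that shape is below; the `LLM` base class, the method signatures, and the toy `EchoLLM` are all assumptions for illustration, not the actual `continuedev` API (see `continuedev/src/continuedev/libs/llm` for real examples).

```python
# Hypothetical sketch: base-class name and signatures are assumptions,
# not the real continuedev interface.
from typing import AsyncGenerator, Dict, List


class LLM:
    """Stand-in for the base class in continuedev/src/continuedev/libs/llm."""

    async def stream_complete(self, prompt: str) -> AsyncGenerator[str, None]:
        raise NotImplementedError

    async def complete(self, prompt: str) -> str:
        raise NotImplementedError

    async def stream_chat(self, messages: List[Dict[str, str]]) -> AsyncGenerator[str, None]:
        raise NotImplementedError


class EchoLLM(LLM):
    """Toy backend that echoes the prompt, showing the three required methods."""

    async def stream_complete(self, prompt: str):
        # Stream the prompt back one whitespace-delimited token at a time.
        for token in prompt.split():
            yield token + " "

    async def complete(self, prompt: str) -> str:
        # Non-streaming completion: collect the streamed tokens into one string.
        return "".join([t async for t in self.stream_complete(prompt)])

    async def stream_chat(self, messages: List[Dict[str, str]]):
        # Chat variant: stream a completion of the latest message's content.
        async for token in self.stream_complete(messages[-1]["content"]):
            yield token
```

A real hosted backend would replace the bodies with HTTP calls to the provider (Colab, RunPod, Replicate, etc.), but the three-method surface stays the same.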