From 500f62fcc55ed7ccb04fd9ccef3c66c8b5ff1721 Mon Sep 17 00:00:00 2001
From: Nate Sesti <33237525+sestinj@users.noreply.github.com>
Date: Fri, 28 Jul 2023 09:07:26 -0700
Subject: Update customization.md

---
 docs/docs/customization.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/docs/customization.md b/docs/docs/customization.md
index c768c97d..f383de48 100644
--- a/docs/docs/customization.md
+++ b/docs/docs/customization.md
@@ -25,6 +25,8 @@ If you have access, simply set `default_model` to the model you would like to us
 
 See our [5 minute quickstart](https://github.com/continuedev/ggml-server-example) to run any model locally with ggml. While these models don't yet perform as well, they are free, entirely private, and run offline.
 
+Once the model is running on localhost:8000, set `default_model` in `~/.continue/config.py` to "ggml".
+
 ### Self-hosting an open-source model
 
 If you want to self-host on Colab, RunPod, Replicate, HuggingFace, Haven, or another hosting provider you will need to wire up a new LLM class. It only needs to implement 3 methods: `stream_complete`, `complete`, and `stream_chat`, and you can see examples in `continuedev/src/continuedev/libs/llm`.
--
cgit v1.2.3-70-g09d2
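
The line this patch adds tells the user to set `default_model` to "ggml" in `~/.continue/config.py`. A minimal sketch of what that file might contain, assuming the `ContinueConfig`-based Python config format Continue used at the time; the import path and any field other than `default_model` are assumptions, not part of the diff:

```python
# ~/.continue/config.py — a sketch, not a verified config.
# Assumption: Continue's config.py exposes a `config` object built
# from a ContinueConfig class; only `default_model` comes from the patch.
from continuedev.src.continuedev.core.config import ContinueConfig

config = ContinueConfig(
    # Route completions to the local ggml server, which the docs
    # assume is already running on localhost:8000.
    default_model="ggml",
)
```

Continue would pick this up on restart; any other fields would keep their defaults.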