author    Nate Sesti <sestinj@gmail.com>    2023-07-30 22:30:00 -0700
committer Nate Sesti <sestinj@gmail.com>    2023-07-30 22:30:00 -0700
commit    57a572a420e16b08301f0c6738a1b414c59bce85 (patch)
tree      2bdbc7831d66aafefe30a9e236ecc150d80024cc /docs
parent    1bc5777ed168e47e2ef2ab1b33eecf6cbd170a61 (diff)
parent    8bd76be6c0925e0d5e5f6d239e9c6907df3cfd23 (diff)
Merge remote-tracking branch 'continuedev/main' into llm-object-config-merge-main
Diffstat (limited to 'docs')
 docs/docs/customization.md      | 14
 docs/docs/how-continue-works.md | 22
 2 files changed, 31 insertions(+), 5 deletions(-)
diff --git a/docs/docs/customization.md b/docs/docs/customization.md
index c768c97d..fa4d110e 100644
--- a/docs/docs/customization.md
+++ b/docs/docs/customization.md
@@ -11,6 +11,7 @@ Change the `default_model` field to any of "gpt-3.5-turbo", "gpt-3.5-turbo-16k",
New users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code.
Once you are using Continue regularly though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps:
+
1. Copy your API key from https://platform.openai.com/account/api-keys
2. Press cmd+, (Mac) / ctrl+, (Windows) to open your VS Code settings
3. Type "Continue" in the search bar
@@ -25,6 +26,8 @@ If you have access, simply set `default_model` to the model you would like to us
See our [5 minute quickstart](https://github.com/continuedev/ggml-server-example) to run any model locally with ggml. While these models don't yet perform as well, they are free, entirely private, and run offline.
+Once the model is running on localhost:8000, set `default_model` in `~/.continue/config.py` to "ggml".
+
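For example, the relevant line in `~/.continue/config.py` might look like this (a minimal sketch; `...` stands for whatever other settings you already have, as in the Azure example below):

```python
# ~/.continue/config.py — sketch only: route completions to the local ggml server
config = ContinueConfig(
    ...
    default_model="ggml",
)
```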
### Self-hosting an open-source model
If you want to self-host on Colab, RunPod, Replicate, HuggingFace, Haven, or another hosting provider, you will need to wire up a new LLM class. It only needs to implement 3 methods: `stream_complete`, `complete`, and `stream_chat`, and you can see examples in `continuedev/src/continuedev/libs/llm`.
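As a rough sketch, such a class might look like the following (the base class name and method signatures here are assumptions for illustration, not the real interface; check the examples in `continuedev/src/continuedev/libs/llm` for the actual signatures):

```python
# Hypothetical skeleton of a self-hosted LLM wrapper. The base class and
# signatures are assumed for illustration, not copied from the codebase.
class MyHostedLLM(LLM):
    async def complete(self, prompt: str) -> str:
        # Send the prompt to your hosted endpoint and return the full completion.
        ...

    async def stream_complete(self, prompt: str):
        # Yield completion chunks as they arrive from your endpoint.
        ...

    async def stream_chat(self, messages):
        # Yield chunks for a chat-formatted list of role/content messages.
        ...
```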
@@ -33,21 +36,24 @@ If by chance the provider has the exact same API interface as OpenAI, the `GGML`
### Azure OpenAI Service
-If you'd like to use OpenAI models but are concerned about privacy, you can use the Azure OpenAI service, which is GDPR and HIPAA compliant. After applying for access [here](https://azure.microsoft.com/en-us/products/ai-services/openai-service), you will typically hear back within only a few days. Once you have access, set `default_model` to "gpt-4", and then set the `azure_openai_info` property in the `ContinueConfig` like so:
+If you'd like to use OpenAI models but are concerned about privacy, you can use the Azure OpenAI service, which is GDPR and HIPAA compliant. After applying for access [here](https://azure.microsoft.com/en-us/products/ai-services/openai-service), you will typically hear back within only a few days. Once you have access, set `default_model` to "gpt-4", and then set the `openai_server_info` property in the `ContinueConfig` like so:
```python
config = ContinueConfig(
...
- azure_openai_info=AzureInfo(
- endpoint="https://my-azure-openai-instance.openai.azure.com/",
+ openai_server_info=OpenAIServerInfo(
+ api_base="https://my-azure-openai-instance.openai.azure.com/",
engine="my-azure-openai-deployment",
- api_version="2023-03-15-preview"
+ api_version="2023-03-15-preview",
+ api_type="azure"
)
)
```
The easiest way to find this information is from the chat playground in the Azure OpenAI portal. Under the "Chat Session" section, click "View Code" to see each of these parameters. Finally, find one of your Azure OpenAI keys and enter it in the VS Code settings under `continue.OPENAI_API_KEY`.
+Note that you can also use `OpenAIServerInfo` for uses other than Azure, such as self-hosting a model.
+
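For instance, a self-hosted, OpenAI-compatible server might be configured like this (a sketch; the URL is a placeholder, and which fields are required beyond `api_base` is an assumption):

```python
# Sketch: reusing OpenAIServerInfo for a non-Azure, self-hosted endpoint.
# The api_base URL is illustrative; the Azure-specific fields are omitted.
config = ContinueConfig(
    ...
    openai_server_info=OpenAIServerInfo(
        api_base="http://localhost:8000"
    ),
)
```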
## Customize System Message
You can write your own system message, a set of instructions that will always be top-of-mind for the LLM, by setting the `system_message` property to any string. For example, you might request "Please make all responses as concise as possible and never repeat something you have already explained."
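In `config.py`, that looks like the following (a sketch using the same `ContinueConfig` pattern as the Azure example above, with the doc's own example message):

```python
# Sketch: a system message the LLM will always keep top-of-mind
config = ContinueConfig(
    ...
    system_message="Please make all responses as concise as possible and never repeat something you have already explained."
)
```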
diff --git a/docs/docs/how-continue-works.md b/docs/docs/how-continue-works.md
index 588b1308..06aada52 100644
--- a/docs/docs/how-continue-works.md
+++ b/docs/docs/how-continue-works.md
@@ -8,4 +8,24 @@ The `Continue` library consists of an **SDK**, a **GUI**, and a **Server** that
2. The **GUI** lets you transparently review every automated step, providing the opportunity to undo and rerun any that ran incorrectly.
-3. The **Server** is responsible for connecting the GUI and SDK to the IDE as well as deciding which steps to take next.
\ No newline at end of file
+3. The **Server** is responsible for connecting the GUI and SDK to the IDE as well as deciding which steps to take next.
+
+
+## Running the server manually
+
+If you would like to run the Continue server manually, rather than letting the VS Code extension set it up, you can follow these steps (a condensed shell sketch follows the list):
+
+1. `git clone https://github.com/continuedev/continue`
+2. `cd continue/continuedev`
+3. Make sure packages are installed with `poetry install`
+   - If poetry is not installed, you can install it with
+ ```bash
+ curl -sSL https://install.python-poetry.org | python3 -
+ ```
+ (official instructions [here](https://python-poetry.org/docs/#installing-with-the-official-installer))
+4. `poetry shell` to activate the virtual environment
+5. Either:
+
+ a) To run without the debugger: `cd ..` and `python3 -m continuedev.src.continuedev.server.main`
+
+   b) To run with the debugger: Open a VS Code window with `continue` as the root folder. Ensure that you have selected the Python interpreter from the virtual environment, then use the `.vscode/launch.json` we have provided to start the debugger.
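
Taken together, the non-debugger path condenses to the following (the same commands as the steps above, shown end to end):

```bash
git clone https://github.com/continuedev/continue
cd continue/continuedev
poetry install   # install dependencies
poetry shell     # activate the virtual environment
cd ..
python3 -m continuedev.src.continuedev.server.main
```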