| author | Nate Sesti <sestinj@gmail.com> | 2023-09-12 00:59:20 -0700 |
| --- | --- | --- |
| committer | Nate Sesti <sestinj@gmail.com> | 2023-09-12 00:59:20 -0700 |
| commit | 331b2adcb6f8d962e4ed19292fd2ab5838ba479e (patch) | |
| tree | 055989d31f18f18971d9f8e3e5764b59ed0c2be5 /docs | |
| parent | e9afb41bed9a723876cf1cf95d636b2ea498a6b3 (diff) | |
docs: :memo: major docs improvements
Diffstat (limited to 'docs')
37 files changed, 696 insertions, 422 deletions
diff --git a/docs/docs/concepts/ide.md b/docs/docs/concepts/ide.md index bd31481b..d4b48f0a 100644 --- a/docs/docs/concepts/ide.md +++ b/docs/docs/concepts/ide.md @@ -17,11 +17,11 @@ SDK provides "IDEProtocol" class so that steps can interact with VS Code, etc... ### VS Code
-You can install the VS Code extension [here](../getting-started.md)
+You can install the VS Code extension [here](../quickstart.md)
### GitHub Codespaces
-You can install the GitHub Codespaces extension [here](../getting-started.md)
+You can install the GitHub Codespaces extension [here](../quickstart.md)
## IDE Protocol methods
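The method list itself is elided by the hunk above. As a hedged sketch of how a step reaches the IDE through this protocol: only `sdk.ide.workspace_directory` is confirmed elsewhere in this commit (by the commit-message example in the slash-commands docs), and the step shown here is otherwise illustrative.

```python
# A minimal sketch of a Step using the IDE protocol via sdk.ide.
# Only `workspace_directory` is confirmed elsewhere in this diff;
# the step itself is a hypothetical example.
class ShowWorkspaceStep(Step):
    async def run(self, sdk: ContinueSDK):
        # Ask the IDE for the workspace root and surface it as the step's description
        self.description = f"Current workspace: {sdk.ide.workspace_directory}"
```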
diff --git a/docs/docs/customization.md b/docs/docs/customization.md deleted file mode 100644 index fb7dc0c5..00000000 --- a/docs/docs/customization.md +++ /dev/null @@ -1,375 +0,0 @@ -# Customization - -Continue can be deeply customized by editing the `ContinueConfig` object in `~/.continue/config.py` (`%userprofile%\.continue\config.py` for Windows) on your machine. This file is created the first time you run Continue. - -## Summary of Models - -Commercial Models - -- [MaybeProxyOpenAI](#adding-an-openai-api-key) (default) - Use gpt-4 or gpt-3.5-turbo free with our API key, or with your API key. gpt-4 is probably the most capable model of all options. -- [OpenAI](#azure-openai-service) - Use any OpenAI model with your own key. Can also change the base URL if you have a server that uses the OpenAI API format, including using the Azure OpenAI service, LocalAI, etc. -- [AnthropicLLM](#claude-2) - Use claude-2 with your Anthropic API key. Claude 2 is also highly capable, and has a 100,000 token context window. - -Local Models - -- [Ollama](#run-llama-2-locally-with-ollama) - If you have a Mac, Ollama is the simplest way to run open-source models like Code Llama. -- [OpenAI](#local-models-with-openai-compatible-server) - If you have access to an OpenAI-compatible server (e.g. llama-cpp-python, LocalAI, FastChat, TextGenWebUI, etc.), you can use the `OpenAI` class and just change the base URL. -- [GGML](#local-models-with-ggml) - An alternative way to connect to OpenAI-compatible servers. Will use `aiohttp` directly instead of the `openai` Python package. -- [LlamaCpp](#llamacpp) - Build llama.cpp from source and use its built-in API server. - -Open-Source Models (not local) - -- [TogetherLLM](#together) - Use any model from the [Together Models list](https://docs.together.ai/docs/models-inference) with your Together API key. -- [ReplicateLLM](#replicate) - Use any open-source model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models) with your Replicate API key. -- [HuggingFaceInferenceAPI](#huggingface) - Use any open-source model from the [Hugging Face Inference API](https://huggingface.co/inference-api) with your Hugging Face token. - -## Change the default LLM - -In `config.py`, you'll find the `models` property: - -```python -from continuedev.src.continuedev.core.models import Models - -config = ContinueConfig( - ... - models=Models( - default=MaybeProxyOpenAI(model="gpt-4"), - medium=MaybeProxyOpenAI(model="gpt-3.5-turbo") - ) -) -``` - -The `default` and `medium` properties are different _model roles_. This allows different models to be used for different tasks. The available roles are `default`, `small`, `medium`, `large`, `edit`, and `chat`. `edit` is used when you use the '/edit' slash command, `chat` is used for all chat responses, and `medium` is used for summarizing. If not set, all roles will fall back to `default`. The values of these fields must be of the [`LLM`](https://github.com/continuedev/continue/blob/main/continuedev/src/continuedev/libs/llm/__init__.py) class, which implements methods for retrieving and streaming completions from an LLM. - -Below, we describe the `LLM` classes available in the Continue core library, and how they can be used. - -### Adding an OpenAI API key - -With the `MaybeProxyOpenAI` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code. 
- -Once you are using Continue regularly though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps: - -1. Copy your API key from https://platform.openai.com/account/api-keys -2. Open `~/.continue/config.py`. You can do this by using the '/config' command in Continue -3. Change the default LLMs to look like this: - -```python -API_KEY = "<API_KEY>" -config = ContinueConfig( - ... - models=Models( - default=MaybeProxyOpenAI(model="gpt-4", api_key=API_KEY), - medium=MaybeProxyOpenAI(model="gpt-3.5-turbo", api_key=API_KEY) - ) -) -``` - -The `MaybeProxyOpenAI` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead. - -These classes support any models available through the OpenAI API, assuming your API key has access, including "gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", and "gpt-4-32k". - -### claude-2 - -Import the `AnthropicLLM` LLM class and set it as the default model: - -```python -from continuedev.src.continuedev.libs.llm.anthropic import AnthropicLLM - -config = ContinueConfig( - ... - models=Models( - default=AnthropicLLM(api_key="<API_KEY>", model="claude-2") - ) -) -``` - -Continue will automatically prompt you for your Anthropic API key, which must have access to Claude 2. You can request early access [here](https://www.anthropic.com/earlyaccess). - -### Run Llama-2 locally with Ollama - -[Ollama](https://ollama.ai/) is a Mac application that makes it easy to locally run open-source models, including Llama-2. Download the app from the website, and it will walk you through setup in a couple of minutes. You can also read more in their [README](https://github.com/jmorganca/ollama). Continue can then be configured to use the `Ollama` LLM class: - -```python -from continuedev.src.continuedev.libs.llm.ollama import Ollama - -config = ContinueConfig( - ... - models=Models( - default=Ollama(model="llama2") - ) -) -``` - -### Local models with OpenAI-compatible server - -If you are locally serving a model that uses an OpenAI-compatible server, you can simply change the `api_base` in the `OpenAI` class like this: - -```python -from continuedev.src.continuedev.libs.llm.openai import OpenAI - -config = ContinueConfig( - ... - models=Models( - default=OpenAI( - api_key="EMPTY", - model="<MODEL_NAME>", - api_base="http://localhost:8000", # change to your server - ) - ) -) -``` - -Options for serving models locally with an OpenAI-compatible server include: - -- [text-gen-webui](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai#setup--installation) -- [FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md) -- [LocalAI](https://localai.io/basics/getting_started/) -- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python#web-server) - -### Local models with ggml - -See our [5 minute quickstart](https://github.com/continuedev/ggml-server-example) to run any model locally with ggml. While these models don't yet perform as well, they are free, entirely private, and run offline. - -Once the model is running on localhost:8000, change `~/.continue/config.py` to look like this: - -```python -from continuedev.src.continuedev.libs.llm.ggml import GGML - -config = ContinueConfig( - ... - models=Models( - default=GGML( - max_context_length=2048, - server_url="http://localhost:8000") - ) -) -``` - -### Llama.cpp - -Run the llama.cpp server binary to start the API server. 
If running on a remote server, be sure to set host to 0.0.0.0: - -```shell -.\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models\meta\llama\codellama-7b-instruct.Q8_0.gguf -``` - -After it's up and running, change `~/.continue/config.py` to look like this: - -```python -from continuedev.src.continuedev.libs.llm.llamacpp import LlamaCpp - -config = ContinueConfig( - ... - models=Models( - default=LlamaCpp( - max_context_length=4096, - server_url="http://localhost:8080") - ) -) -``` - -### Together - -The Together API is a cloud platform for running large AI models. You can sign up [here](https://api.together.xyz/signup), copy your API key on the initial welcome screen, and then hit the play button on any model from the [Together Models list](https://docs.together.ai/docs/models-inference). Change `~/.continue/config.py` to look like this: - -```python -from continuedev.src.continuedev.core.models import Models -from continuedev.src.continuedev.libs.llm.together import TogetherLLM - -config = ContinueConfig( - ... - models=Models( - default=TogetherLLM( - api_key="<API_KEY>", - model="togethercomputer/llama-2-13b-chat" - ) - ) -) -``` - -### Replicate - -Replicate is a great option for newly released language models or models that you've deployed through their platform. Sign up for an account [here](https://replicate.ai/), copy your API key, and then select any model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models). Change `~/.continue/config.py` to look like this: - -```python -from continuedev.src.continuedev.core.models import Models -from continuedev.src.continuedev.libs.llm.replicate import ReplicateLLM - -config = ContinueConfig( - ... - models=Models( - default=ReplicateLLM( - model="replicate/codellama-13b-instruct:da5676342de1a5a335b848383af297f592b816b950a43d251a0a9edd0113604b", - api_key="my-replicate-api-key") - ) -) -``` - -If you don't specify the `model` parameter, it will default to `replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781`. - -### Hugging Face - -Hugging Face Inference API is a great option for newly released language models. Sign up for an account and add billing [here](https://huggingface.co/settings/billing), access the Inference Endpoints [here](https://ui.endpoints.huggingface.co), click on “New endpoint”, and fill out the form (e.g. select a model like [WizardCoder-Python-34B-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0)), and then deploy your model by clicking “Create Endpoint”. Change `~/.continue/config.py` to look like this: - -```python -from continuedev.src.continuedev.core.models import Models -from continuedev.src.continuedev.libs.llm.hf_inference_api import HuggingFaceInferenceAPI - -config = ContinueConfig( - ... - models=Models( - default=HuggingFaceInferenceAPI( - endpoint_url: "<INFERENCE_API_ENDPOINT_URL>", - hf_token: "<HUGGING_FACE_TOKEN>", - ) -) -``` - -### Self-hosting an open-source model - -If you want to self-host on Colab, RunPod, HuggingFace, Haven, or another hosting provider you will need to wire up a new LLM class. It only needs to implement 3 primary methods: `stream_complete`, `complete`, and `stream_chat`, and you can see examples in `continuedev/src/continuedev/libs/llm`. - -If by chance the provider has the exact same API interface as OpenAI, the `GGML` class will work for you out of the box, after changing the endpoint at the top of the file. 
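To make the self-hosting path concrete, here is a minimal sketch of such a class, assuming the `LLM` base class dispatches to these three async methods and using `aiohttp` directly, as the `GGML` class does. The import path, method signatures, and `/complete` endpoint are assumptions; treat the classes in `continuedev/src/continuedev/libs/llm` as the source of truth.

```python
import aiohttp

from continuedev.src.continuedev.libs.llm import LLM  # import path is an assumption


class MyHostedLLM(LLM):
    server_url: str = "http://localhost:8000"  # hypothetical endpoint

    async def complete(self, prompt: str, **kwargs) -> str:
        # One-shot completion against a hypothetical JSON API
        async with aiohttp.ClientSession() as session:
            async with session.post(
                f"{self.server_url}/complete", json={"prompt": prompt}
            ) as resp:
                return (await resp.json())["completion"]

    async def stream_complete(self, prompt: str, **kwargs):
        # Simplest possible "stream": yield the whole completion at once
        yield await self.complete(prompt, **kwargs)

    async def stream_chat(self, messages, **kwargs):
        # Flatten the chat history into a single prompt for this sketch
        prompt = "\n".join(m["content"] for m in messages)
        yield {"role": "assistant", "content": await self.complete(prompt, **kwargs)}
```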
- -### Azure OpenAI Service - -If you'd like to use OpenAI models but are concerned about privacy, you can use the Azure OpenAI service, which is GDPR and HIPAA compliant. After applying for access [here](https://azure.microsoft.com/en-us/products/ai-services/openai-service), you will typically hear back within only a few days. Once you have access, instantiate the model like so: - -```python -from continuedev.src.continuedev.libs.llm.openai import OpenAI - -config = ContinueConfig( - ... - models=Models( - default=OpenAI( - api_key="my-api-key", - model="gpt-3.5-turbo", - api_base="https://my-azure-openai-instance.openai.azure.com/", - engine="my-azure-openai-deployment", - api_version="2023-03-15-preview", - api_type="azure" - ) - ) -) -``` - -The easiest way to find this information is from the chat playground in the Azure OpenAI portal. Under the "Chat Session" section, click "View Code" to see each of these parameters. Finally, find one of your Azure OpenAI keys and enter it in the VS Code settings under `continue.OPENAI_API_KEY`. - -Note that you can also use these parameters for uses other than Azure, such as self-hosting a model. - -## Customize System Message - -You can write your own system message, a set of instructions that will always be top-of-mind for the LLM, by setting the `system_message` property to any string. For example, you might request "Please make all responses as concise as possible and never repeat something you have already explained." - -System messages can also reference files. For example, if there is a markdown file (e.g. at `/Users/nate/Documents/docs/reference.md`) you'd like the LLM to know about, you can reference it with [Mustache](http://mustache.github.io/mustache.5.html) templating like this: "Please reference this documentation: {{ Users/nate/Documents/docs/reference.md }}". As of now, you must use an absolute path. - -## Custom Commands with Natural Language Prompts - -You can add custom slash commands by adding a `CustomCommand` object to the `custom_commands` property. Each `CustomCommand` has - -- `name`: the name of the command, which will be invoked with `/name` -- `description`: a short description of the command, which will appear in the dropdown -- `prompt`: a set of instructions to the LLM, which will be shown in the prompt - -Custom commands are great when you are frequently reusing a prompt. For example, if you've crafted a great prompt and frequently ask the LLM to check for mistakes in your code, you could add a command like this: - -```python -config = ContinueConfig( - ... - custom_commands=[ - CustomCommand( - name="check", - description="Check for mistakes in my code", - prompt=dedent("""\ - Please read the highlighted code and check for any mistakes. You should look for the following, and be extremely vigilant: - - Syntax errors - - Logic errors - - Security vulnerabilities - - Performance issues - - Anything else that looks wrong - - Once you find an error, please explain it as clearly as possible, but without using extra words. For example, instead of saying "I think there is a syntax error on line 5", you should say "Syntax error on line 5". Give your answer as one bullet point per mistake found.""") - ) - ] -) -``` - -## Custom Slash Commands - -If you want to go a step further than writing custom commands with natural language, you can use a `SlashCommand` to run an arbitrary Python function, with access to the Continue SDK. 
To do this, create a subclass of `Step` with the `run` method implemented, and this is the code that will run when you call the command. For example, here is a step that generates a commit message: - -```python -class CommitMessageStep(Step): - async def run(self, sdk: ContinueSDK): - - # Get the root directory of the workspace - dir = sdk.ide.workspace_directory - - # Run git diff in that directory - diff = subprocess.check_output( - ["git", "diff"], cwd=dir).decode("utf-8") - - # Ask the LLM to write a commit message, - # and set it as the description of this step - self.description = await sdk.models.default.complete( - f"{diff}\n\nWrite a short, specific (less than 50 chars) commit message about the above changes:") - -config=ContinueConfig( - ... - slash_commands=[ - ... - SlashCommand( - name="commit", - description="Generate a commit message for the current changes", - step=CommitMessageStep, - ) - ] -) -``` - -## Temperature - -Set `temperature` to any value between 0 and 1. Higher values will make the LLM more creative, while lower values will make it more predictable. The default is 0.5. - -## Context Providers - -When you type '@' in the Continue text box, it will display a dropdown of items that can be selected to include in your message as context. For example, you might want to reference a GitHub Issue, file, or Slack thread. All of these options are provided by a `ContextProvider` class, and we make it easy to write your own or use our builtin options. See the [Context Providers](./context-providers.md) page for more info. - -## Custom Policies - -Policies can be used to deeply change the behavior of Continue, or to build agents that take longer sequences of actions on their own. The [`DefaultPolicy`](https://github.com/continuedev/continue/blob/main/continuedev/src/continuedev/plugins/policies/default.py) handles the parsing of slash commands, and otherwise always chooses the `SimpleChatStep`, but you could customize by for example always taking a "review" step after editing code. To do so, create a new `Policy` subclass that implements the `next` method: - -```python -class ReviewEditsPolicy(Policy): - - default_step: Step = SimpleChatStep() - - def next(self, config: ContinueConfig, history: History) -> Step: - # Get the last step - last_step = history.get_current() - - # If it edited code, then review the changes - if isinstance(last_step, EditHighlightedCodeStep): - return ReviewStep() # Not implemented - - # Otherwise, choose between EditHighlightedCodeStep and SimpleChatStep based on slash command - if observation is not None and isinstance(last_step.observation, UserInputObservation): - if user_input.startswith("/edit"): - return EditHighlightedCodeStep(user_input=user_input[5:]) - else: - return SimpleChatStep() - - return self.default_step.copy() - - # Don't do anything until the user enters something else - return None -``` - -Then, in `~/.continue/config.py`, override the default policy: - -```python -config=ContinueConfig( - ... - policy_override=ReviewEditsPolicy() -) -``` diff --git a/docs/docs/customization/models.md b/docs/docs/customization/models.md index e69de29b..93ea2a57 100644 --- a/docs/docs/customization/models.md +++ b/docs/docs/customization/models.md @@ -0,0 +1,92 @@ +# Models + +Continue makes it easy to swap out different LLM providers. Once you've added any of these to your `config.py`, you will be able to switch between them with the model selection dropdown. 
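As a hedged sketch of what switching looks like, each entry can carry a `title`, which the reference pages later in this diff describe as the label shown in the model selection dropdown; the `MaybeProxyOpenAI` import path here is an assumption.

```python
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.ggml import GGML
# Import path below is an assumption; the class name comes from this page
from continuedev.src.continuedev.libs.llm.maybe_proxy_openai import MaybeProxyOpenAI

config = ContinueConfig(
    ...
    models=Models(
        # Each model appears in the dropdown under its title
        default=MaybeProxyOpenAI(model="gpt-4", title="GPT-4 (proxy)"),
        medium=GGML(server_url="http://localhost:8000", title="Local GGML"),
    )
)
```

The available provider classes are summarized in the lists below.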
+ +Commercial Models + +- [MaybeProxyOpenAI](#adding-an-openai-api-key) (default) - Use gpt-4 or gpt-3.5-turbo free with our API key, or with your API key. gpt-4 is probably the most capable model of all options. +- [OpenAI](#azure-openai-service) - Use any OpenAI model with your own key. Can also change the base URL if you have a server that uses the OpenAI API format, including using the Azure OpenAI service, LocalAI, etc. +- [AnthropicLLM](#claude-2) - Use claude-2 with your Anthropic API key. Claude 2 is also highly capable, and has a 100,000 token context window. + +Local Models + +- [Ollama](#run-llama-2-locally-with-ollama) - If you have a Mac, Ollama is the simplest way to run open-source models like Code Llama. +- [OpenAI](#local-models-with-openai-compatible-server) - If you have access to an OpenAI-compatible server (e.g. llama-cpp-python, LocalAI, FastChat, TextGenWebUI, etc.), you can use the `OpenAI` class and just change the base URL. +- [GGML](#local-models-with-ggml) - An alternative way to connect to OpenAI-compatible servers. Will use `aiohttp` directly instead of the `openai` Python package. +- [LlamaCpp](#llamacpp) - Build llama.cpp from source and use its built-in API server. + +Open-Source Models (not local) + +- [TogetherLLM](#together) - Use any model from the [Together Models list](https://docs.together.ai/docs/models-inference) with your Together API key. +- [ReplicateLLM](#replicate) - Use any open-source model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models) with your Replicate API key. +- [HuggingFaceInferenceAPI](#huggingface) - Use any open-source model from the [Hugging Face Inference API](https://huggingface.co/inference-api) with your Hugging Face token. + +## Change the default LLM + +In `config.py`, you'll find the `models` property: + +```python +from continuedev.src.continuedev.core.models import Models + +config = ContinueConfig( + ... + models=Models( + default=MaybeProxyOpenAI(model="gpt-4"), + medium=MaybeProxyOpenAI(model="gpt-3.5-turbo") + ) +) +``` + +The `default` and `medium` properties are different _model roles_. This allows different models to be used for different tasks. The available roles are `default`, `small`, `medium`, `large`, `edit`, and `chat`. `edit` is used when you use the '/edit' slash command, `chat` is used for all chat responses, and `medium` is used for summarizing. If not set, all roles will fall back to `default`. The values of these fields must be of the [`LLM`](https://github.com/continuedev/continue/blob/main/continuedev/src/continuedev/libs/llm/__init__.py) class, which implements methods for retrieving and streaming completions from an LLM. + +Below, we describe the `LLM` classes available in the Continue core library, and how they can be used. + +## Adding an OpenAI API key + +## claude-2 + +## Run Llama-2 locally with Ollama + +## Local models with OpenAI-compatible server + +## Local models with ggml + +## Llama.cpp + +## Together + +## Replicate + +## Hugging Face + +## Self-hosting an open-source model + +If you want to self-host on Colab, RunPod, HuggingFace, Haven, or another hosting provider you will need to wire up a new LLM class. It only needs to implement 3 primary methods: `stream_complete`, `complete`, and `stream_chat`, and you can see examples in `continuedev/src/continuedev/libs/llm`. 
+ +If by chance the provider has the exact same API interface as OpenAI, the `GGML` class will work for you out of the box, after changing the endpoint at the top of the file. + +## Azure OpenAI Service + +If you'd like to use OpenAI models but are concerned about privacy, you can use the Azure OpenAI service, which is GDPR and HIPAA compliant. After applying for access [here](https://azure.microsoft.com/en-us/products/ai-services/openai-service), you will typically hear back within only a few days. Once you have access, instantiate the model like so: + +```python +from continuedev.src.continuedev.libs.llm.openai import OpenAI + +config = ContinueConfig( + ... + models=Models( + default=OpenAI( + api_key="my-api-key", + model="gpt-3.5-turbo", + api_base="https://my-azure-openai-instance.openai.azure.com/", + engine="my-azure-openai-deployment", + api_version="2023-03-15-preview", + api_type="azure" + ) + ) +) +``` + +The easiest way to find this information is from the chat playground in the Azure OpenAI portal. Under the "Chat Session" section, click "View Code" to see each of these parameters. Finally, find one of your Azure OpenAI keys and enter it in the VS Code settings under `continue.OPENAI_API_KEY`. + +Note that you can also use these parameters for uses other than Azure, such as self-hosting a model. diff --git a/docs/docs/customization/other-configuration.md b/docs/docs/customization/other-configuration.md index 088b2aac..8049e8d6 100644 --- a/docs/docs/customization/other-configuration.md +++ b/docs/docs/customization/other-configuration.md @@ -1 +1,52 @@ # Other Configuration + +See the [ContinueConfig Reference](../reference/config) for the full list of configuration options. + +## Customize System Message + +You can write your own system message, a set of instructions that will always be top-of-mind for the LLM, by setting the `system_message` property to any string. For example, you might request "Please make all responses as concise as possible and never repeat something you have already explained." + +System messages can also reference files. For example, if there is a markdown file (e.g. at `/Users/nate/Documents/docs/reference.md`) you'd like the LLM to know about, you can reference it with [Mustache](http://mustache.github.io/mustache.5.html) templating like this: "Please reference this documentation: {{ Users/nate/Documents/docs/reference.md }}". As of now, you must use an absolute path. + +## Temperature + +Set `temperature` to any value between 0 and 1. Higher values will make the LLM more creative, while lower values will make it more predictable. The default is 0.5. + +## Custom Policies + +Policies can be used to deeply change the behavior of Continue, or to build agents that take longer sequences of actions on their own. The [`DefaultPolicy`](https://github.com/continuedev/continue/blob/main/continuedev/src/continuedev/plugins/policies/default.py) handles the parsing of slash commands, and otherwise always chooses the `SimpleChatStep`, but you could customize by for example always taking a "review" step after editing code. 
To do so, create a new `Policy` subclass that implements the `next` method: + +```python +class ReviewEditsPolicy(Policy): + + default_step: Step = SimpleChatStep() + + def next(self, config: ContinueConfig, history: History) -> Step: + # Get the last step + last_step = history.get_current() + + # If it edited code, then review the changes + if isinstance(last_step, EditHighlightedCodeStep): + return ReviewStep() # Not implemented + + # Otherwise, choose between EditHighlightedCodeStep and SimpleChatStep based on slash command + observation = last_step.observation + if observation is not None and isinstance(observation, UserInputObservation): + user_input = observation.user_input + if user_input.startswith("/edit"): + return EditHighlightedCodeStep(user_input=user_input[5:]) + else: + return self.default_step.copy() + + # Don't do anything until the user enters something else + return None +``` + +Then, in `~/.continue/config.py`, override the default policy: + +```python +config=ContinueConfig( + ... + policy_override=ReviewEditsPolicy() +) +``` diff --git a/docs/docs/customization/intro.md b/docs/docs/customization/overview.md index a82b5dbf..0d433cd6 100644 --- a/docs/docs/customization/intro.md +++ b/docs/docs/customization/overview.md @@ -1,10 +1,10 @@ -# Customizing Continue +# Overview Continue can be deeply customized by editing the `ContinueConfig` object in `~/.continue/config.py` (`%userprofile%\.continue\config.py` for Windows) on your machine. This file is created the first time you run Continue. Currently, you can customize the following: -- [Models](./models.md) - Use Continue with any LLM, including local models, Azure OpenAI service, and any OpenAI-compatible API. -- [Context Providers](./context-providers.md) - Define which sources you want to collect context from to share with the LLM. Just type '@' to easily add attachments to your prompt. -- [Slash Commands](./slash-commands.md) - Call custom prompts or programs written with our SDK by typing `/` in the prompt. -- [Other Configuration](./other-configuration.md) - Configure other settings like the system message, temperature, and more. +- [Models](./models.md) - Use Continue with any LLM, including local models, Azure OpenAI service, any OpenAI-compatible API, and more. +- [Context Providers](./context-providers.md) - Just type '@' to easily add attachments to your prompt. Define which sources you want to reference, including GitHub Issues, terminal output, and preset URLs. +- [Slash Commands](./slash-commands.md) - Call custom prompts or programs written with our SDK by typing `/`. +- [Other Configuration](./other-configuration.md) - Configure other settings like the system message and temperature. diff --git a/docs/docs/customization/slash-commands.md b/docs/docs/customization/slash-commands.md index e69de29b..17f07075 100644 --- a/docs/docs/customization/slash-commands.md +++ b/docs/docs/customization/slash-commands.md @@ -0,0 +1,72 @@ +# Slash Commands + +Slash commands are shortcuts that can be activated by prefacing your input with '/'. For example, the built-in '/edit' slash command lets you stream edits directly into your editor. + +There are two ways to add custom slash commands: + +1. With natural language prompts - this is simpler and only requires writing a string or string template. +2. With a custom `Step` - this gives you full access to the Continue SDK and allows you to write arbitrary Python code.
+ +## "Custom Commands" (Use Natural Language) + +You can add custom slash commands by adding a `CustomCommand` object to the `custom_commands` property. Each `CustomCommand` has + +- `name`: the name of the command, which will be invoked with `/name` +- `description`: a short description of the command, which will appear in the dropdown +- `prompt`: a set of instructions to the LLM, which will be shown in the prompt + +Custom commands are great when you are frequently reusing a prompt. For example, if you've crafted a great prompt and frequently ask the LLM to check for mistakes in your code, you could add a command like this: + +```python +from textwrap import dedent + +config = ContinueConfig( + ... + custom_commands=[ + CustomCommand( + name="check", + description="Check for mistakes in my code", + prompt=dedent("""\ + Please read the highlighted code and check for any mistakes. You should look for the following, and be extremely vigilant: + - Syntax errors + - Logic errors + - Security vulnerabilities + - Performance issues + - Anything else that looks wrong + + Once you find an error, please explain it as clearly as possible, but without using extra words. For example, instead of saying "I think there is a syntax error on line 5", you should say "Syntax error on line 5". Give your answer as one bullet point per mistake found.""") + ) + ] +) +``` + +## Custom Slash Commands + +If you want to go a step further than writing custom commands with natural language, you can use a `SlashCommand` to run an arbitrary Python function, with access to the Continue SDK. To do this, create a subclass of `Step` with the `run` method implemented; this is the code that will run when you invoke the command. For example, here is a step that generates a commit message: + +```python +import subprocess + +class CommitMessageStep(Step): + async def run(self, sdk: ContinueSDK): + + # Get the root directory of the workspace + dir = sdk.ide.workspace_directory + + # Run git diff in that directory + diff = subprocess.check_output( + ["git", "diff"], cwd=dir).decode("utf-8") + + # Ask the LLM to write a commit message, + # and set it as the description of this step + self.description = await sdk.models.default.complete( + f"{diff}\n\nWrite a short, specific (less than 50 chars) commit message about the above changes:") + +config=ContinueConfig( + ... + slash_commands=[ + ... + SlashCommand( + name="commit", + description="Generate a commit message for the current changes", + step=CommitMessageStep, + ) + ] +) +``` diff --git a/docs/docs/collecting-data.md b/docs/docs/development-data.md index 95beeee7..267a746e 100644 --- a/docs/docs/collecting-data.md +++ b/docs/docs/development-data.md @@ -1,4 +1,4 @@ -# Collecting data +# 🧑‍💻 Development Data When you use Continue, you automatically collect data on how you build software. By default, this development data is saved to `.continue/dev_data` on your local machine. When combined with the code that you ultimately commit, it can be used to improve the LLM that you or your team use (if you allow). diff --git a/docs/docs/how-continue-works.md b/docs/docs/how-continue-works.md index 06aada52..07d16474 100644 --- a/docs/docs/how-continue-works.md +++ b/docs/docs/how-continue-works.md @@ -1,4 +1,4 @@ -# How Continue works
+# ⚙️ How Continue works
![Continue Architecture Diagram](/img/continue-architecture.png)
@@ -10,7 +10,6 @@ The `Continue` library consists of an **SDK**, a **GUI**, and a **Server** that 3. The **Server** is responsible for connecting the GUI and SDK to the IDE as well as deciding which steps to take next.
-
## Running the server manually
If you would like to run the Continue server manually, rather than allowing VS Code to set it up, you can follow these steps:
@@ -25,7 +24,7 @@ If you would like to run the Continue server manually, rather than allowing the (official instructions [here](https://python-poetry.org/docs/#installing-with-the-official-installer))
4. `poetry shell` to activate the virtual environment
5. Either:
-
+
a) To run without the debugger: `cd ..` and `python3 -m continuedev.src.continuedev.server.main`
b) To run with the debugger: Open a VS Code window with `continue` as the root folder. Ensure that you have selected the Python interpreter from the virtual environment, then use the `.vscode/launch.json` we have provided to start the debugger.
diff --git a/docs/docs/how-to-use-continue.md b/docs/docs/how-to-use-continue.md index 1fd8e99c..bf61a033 100644 --- a/docs/docs/how-to-use-continue.md +++ b/docs/docs/how-to-use-continue.md @@ -1,4 +1,4 @@ -# How to use Continue +# 🧑🎓 How to use Continue :::info **TL;DR: Using LLMs as you code can accelerate you if you leverage them in the right situations. However, they can also cause you to get lost and confused if you trust them when you should not. This page outlines when and where we think you should and should not use Continue.** @@ -36,6 +36,7 @@ Here are tasks that Continue excels at helping you complete: Continue works well in situations where find and replace does not work (i.e. “/edit change all of these to be like that”) Examples + - "/edit Use 'Union' instead of a vertical bar here" - “/edit Make this use more descriptive variable names” @@ -44,6 +45,7 @@ Examples Continue can help you get started building React components, Python scripts, Shell scripts, Makefiles, unit tests, etc. Examples + - “/edit write a python script to get posthog events" - “/edit add a react component for syntax highlighted code" @@ -52,6 +54,7 @@ Examples Continue can go even further. For example, it can help build the scaffolding for a Python package, which includes a typer cli app to sort the arguments and print them back out. Examples + - “/edit use this schema to write me a SQL query that gets recently churned users” - “/edit create a shell script to back up my home dir to /tmp/" @@ -60,6 +63,7 @@ Examples After selecting the code section(s), try to refactor it with Continue (e.g “/edit change the function to work like this” or “/edit do this everywhere”) Examples + - “/edit migrate this digital ocean terraform file into one that works for GCP” - “/edit rewrite this function to be async” @@ -68,6 +72,7 @@ Examples If you don't understand how some code works, highlight it and ask "how does this code work?" Examples + - “where in the page should I be making this request to the backend?” - “how can I communicate between these iframes?” @@ -80,6 +85,7 @@ Continue can also help explain errors / exceptions and offer possible solutions. Instead of switching windows and getting distracted, you can ask things like "How do I find running process on port 8000?" Examples + - "what is the load_dotenv library name?" - "how do I find running process on port 8000?" @@ -88,6 +94,7 @@ Examples Instead of leaving your IDE, you can ask open-ended questions that you don't expect to turn into multi-turn conversations. Examples + - “how can I set up a Prisma schema that cascades deletes?” - "what is the difference between dense and sparse embeddings?" @@ -96,6 +103,7 @@ Examples You can highlight an entire file and ask Continue to improve it as long as the file is not too large. Examples + - “/edit here is a connector for postgres, now write one for kafka” - "/edit Rewrite this API call to grab all pages" @@ -108,6 +116,7 @@ Similar to how you would make changes manually, focus on one file at a time. But There are many more tasks that Continue can help you complete. Typically, these will be tasks that don't involve too many steps to complete. Examples + - “/edit make an IAM policy that creates a user with read-only access to S3” - “/edit change this plot into a bar chart in this dashboard component” @@ -137,4 +146,4 @@ If you highlight very long lines (e.g. a complex SVG), you might also run into i ### Tasks with many steps -There are other tasks that Continue won't be able to take on entirely at once. 
However, typically, if you figure out how to break the task into sub-tasks, you can get help from Continue with those.
\ No newline at end of file +There are other tasks that Continue won't be able to take on entirely at once. However, typically, if you figure out how to break the task into sub-tasks, you can get help from Continue with those. diff --git a/docs/docs/getting-started.md b/docs/docs/quickstart.md index 18d99f08..af2cd29d 100644 --- a/docs/docs/getting-started.md +++ b/docs/docs/quickstart.md @@ -1,4 +1,4 @@ -# Getting started
+# ⚡️ Quickstart
1. Click `Install` on the **[Continue extension in the Visual Studio Marketplace](https://marketplace.visualstudio.com/items?itemName=Continue.continue)**
@@ -6,4 +6,4 @@ 3. Once you do this, you will see the Continue logo show up on the left sidebar. If you click it, the Continue extension will open up:
-![vscode-install](/img/continue-screenshot.png)
\ No newline at end of file +![vscode-install](/img/continue-screenshot.png)
diff --git a/docs/docs/reference/Context Providers/diff.md b/docs/docs/reference/Context Providers/diff.md new file mode 100644 index 00000000..a0aaedcf --- /dev/null +++ b/docs/docs/reference/Context Providers/diff.md @@ -0,0 +1,17 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# DiffContextProvider + +The ContextProvider class is a plugin that lets you provide new information to the LLM by typing '@'. +When you type '@', the context provider will be asked to populate a list of options. +These options will be updated on each keystroke. +When you hit enter on an option, the context provider will add that item to the autopilot's list of context (which is all stored in the ContextManager object). + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/diff.py) + +## Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "default": "diff", "type": "string"}' required={false} default="diff"/><ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "Diff", "type": "string"}' required={false} default="Diff"/><ClassPropertyRef name='description' details='{"title": "Description", "default": "Output of 'git diff' in current repo", "type": "string"}' required={false} default="Output of 'git diff' in current repo"/><ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/><ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "description": "Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type '@search <STRING_TO_SEARCH>'. This will change the behavior of the UI so that it can indicate the expectation for a query.", "default": false, "type": "boolean"}' required={false} default="False"/><ClassPropertyRef name='workspace_dir' details='{"title": "Workspace Dir", "type": "string"}' required={false} default=""/><ClassPropertyRef name='DIFF_CONTEXT_ITEM_ID' details='{"title": "Diff Context Item Id", "default": "diff", "type": "string"}' required={false} default="diff"/> + +### Inherited Properties + diff --git a/docs/docs/reference/Context Providers/file.md b/docs/docs/reference/Context Providers/file.md new file mode 100644 index 00000000..d1ef0761 --- /dev/null +++ b/docs/docs/reference/Context Providers/file.md @@ -0,0 +1,14 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# FileContextProvider + +The FileContextProvider is a ContextProvider that allows you to search files in the open workspace. 
+ +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/file.py) + +## Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "default": "file", "type": "string"}' required={false} default="file"/><ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "Files", "type": "string"}' required={false} default="Files"/><ClassPropertyRef name='description' details='{"title": "Description", "default": "Reference files in the current workspace", "type": "string"}' required={false} default="Reference files in the current workspace"/><ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": false, "type": "boolean"}' required={false} default="False"/><ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "description": "Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type '@search <STRING_TO_SEARCH>'. This will change the behavior of the UI so that it can indicate the expectation for a query.", "default": false, "type": "boolean"}' required={false} default="False"/> + +### Inherited Properties + diff --git a/docs/docs/reference/Context Providers/filetree.md b/docs/docs/reference/Context Providers/filetree.md new file mode 100644 index 00000000..07c39630 --- /dev/null +++ b/docs/docs/reference/Context Providers/filetree.md @@ -0,0 +1,17 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# FileTreeContextProvider + +The ContextProvider class is a plugin that lets you provide new information to the LLM by typing '@'. +When you type '@', the context provider will be asked to populate a list of options. +These options will be updated on each keystroke. +When you hit enter on an option, the context provider will add that item to the autopilot's list of context (which is all stored in the ContextManager object). + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/filetree.py) + +## Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "default": "tree", "type": "string"}' required={false} default="tree"/><ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "File Tree", "type": "string"}' required={false} default="File Tree"/><ClassPropertyRef name='description' details='{"title": "Description", "default": "Add a formatted file tree of this directory to the context", "type": "string"}' required={false} default="Add a formatted file tree of this directory to the context"/><ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/><ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "description": "Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type '@search <STRING_TO_SEARCH>'. 
This will change the behavior of the UI so that it can indicate the expectation for a query.", "default": false, "type": "boolean"}' required={false} default="False"/><ClassPropertyRef name='workspace_dir' details='{"title": "Workspace Dir", "type": "string"}' required={false} default=""/> + +### Inherited Properties + diff --git a/docs/docs/reference/Context Providers/github.md b/docs/docs/reference/Context Providers/github.md new file mode 100644 index 00000000..45482957 --- /dev/null +++ b/docs/docs/reference/Context Providers/github.md @@ -0,0 +1,15 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# GitHubIssuesContextProvider + +The GitHubIssuesContextProvider is a ContextProvider +that allows you to search GitHub issues in a repo. + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/github.py) + +## Properties + +<ClassPropertyRef name='repo_name' details='{"title": "Repo Name", "type": "string"}' required={true} default=""/><ClassPropertyRef name='auth_token' details='{"title": "Auth Token", "type": "string"}' required={true} default=""/><ClassPropertyRef name='title' details='{"title": "Title", "default": "issues", "type": "string"}' required={false} default="issues"/><ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "GitHub Issues", "type": "string"}' required={false} default="GitHub Issues"/><ClassPropertyRef name='description' details='{"title": "Description", "default": "Reference GitHub issues", "type": "string"}' required={false} default="Reference GitHub issues"/><ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": false, "type": "boolean"}' required={false} default="False"/><ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "description": "Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type '@search <STRING_TO_SEARCH>'. This will change the behavior of the UI so that it can indicate the expectation for a query.", "default": false, "type": "boolean"}' required={false} default="False"/> + +### Inherited Properties + diff --git a/docs/docs/reference/Context Providers/google.md b/docs/docs/reference/Context Providers/google.md new file mode 100644 index 00000000..6538802e --- /dev/null +++ b/docs/docs/reference/Context Providers/google.md @@ -0,0 +1,17 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# GoogleContextProvider + +The ContextProvider class is a plugin that lets you provide new information to the LLM by typing '@'. +When you type '@', the context provider will be asked to populate a list of options. +These options will be updated on each keystroke. +When you hit enter on an option, the context provider will add that item to the autopilot's list of context (which is all stored in the ContextManager object). 
+ +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/google.py) + +## Properties + +<ClassPropertyRef name='serper_api_key' details='{"title": "Serper Api Key", "type": "string"}' required={true} default=""/><ClassPropertyRef name='title' details='{"title": "Title", "default": "google", "type": "string"}' required={false} default="google"/><ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "Google", "type": "string"}' required={false} default="Google"/><ClassPropertyRef name='description' details='{"title": "Description", "default": "Search Google", "type": "string"}' required={false} default="Search Google"/><ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/><ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "default": true, "type": "boolean"}' required={false} default="True"/><ClassPropertyRef name='GOOGLE_CONTEXT_ITEM_ID' details='{"title": "Google Context Item Id", "default": "google_search", "type": "string"}' required={false} default="google_search"/> + +### Inherited Properties + diff --git a/docs/docs/reference/Context Providers/intro.md b/docs/docs/reference/Context Providers/intro.md deleted file mode 100644 index 1e0981f1..00000000 --- a/docs/docs/reference/Context Providers/intro.md +++ /dev/null @@ -1 +0,0 @@ -# Intro diff --git a/docs/docs/reference/Context Providers/search.md b/docs/docs/reference/Context Providers/search.md new file mode 100644 index 00000000..5276daa2 --- /dev/null +++ b/docs/docs/reference/Context Providers/search.md @@ -0,0 +1,17 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# SearchContextProvider + +The ContextProvider class is a plugin that lets you provide new information to the LLM by typing '@'. +When you type '@', the context provider will be asked to populate a list of options. +These options will be updated on each keystroke. +When you hit enter on an option, the context provider will add that item to the autopilot's list of context (which is all stored in the ContextManager object). + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/search.py) + +## Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "default": "search", "type": "string"}' required={false} default="search"/><ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "Search", "type": "string"}' required={false} default="Search"/><ClassPropertyRef name='description' details='{"title": "Description", "default": "Search the workspace for all matches of an exact string (e.g. '@search console.log')", "type": "string"}' required={false} default="Search the workspace for all matches of an exact string (e.g. 
'@search console.log')"/><ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/><ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "default": true, "type": "boolean"}' required={false} default="True"/><ClassPropertyRef name='workspace_dir' details='{"title": "Workspace Dir", "type": "string"}' required={false} default=""/><ClassPropertyRef name='SEARCH_CONTEXT_ITEM_ID' details='{"title": "Search Context Item Id", "default": "search", "type": "string"}' required={false} default="search"/> + +### Inherited Properties + diff --git a/docs/docs/reference/Context Providers/terminal.md b/docs/docs/reference/Context Providers/terminal.md new file mode 100644 index 00000000..37c70ab4 --- /dev/null +++ b/docs/docs/reference/Context Providers/terminal.md @@ -0,0 +1,17 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# TerminalContextProvider + +The ContextProvider class is a plugin that lets you provide new information to the LLM by typing '@'. +When you type '@', the context provider will be asked to populate a list of options. +These options will be updated on each keystroke. +When you hit enter on an option, the context provider will add that item to the autopilot's list of context (which is all stored in the ContextManager object). + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/terminal.py) + +## Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "default": "terminal", "type": "string"}' required={false} default="terminal"/><ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "Terminal", "type": "string"}' required={false} default="Terminal"/><ClassPropertyRef name='description' details='{"title": "Description", "default": "Reference the contents of the terminal", "type": "string"}' required={false} default="Reference the contents of the terminal"/><ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/><ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "description": "Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type '@search <STRING_TO_SEARCH>'. This will change the behavior of the UI so that it can indicate the expectation for a query.", "default": false, "type": "boolean"}' required={false} default="False"/><ClassPropertyRef name='workspace_dir' details='{"title": "Workspace Dir", "type": "string"}' required={false} default=""/><ClassPropertyRef name='get_last_n_commands' details='{"title": "Get Last N Commands", "default": 3, "type": "integer"}' required={false} default="3"/> + +### Inherited Properties + diff --git a/docs/docs/reference/Context Providers/url.md b/docs/docs/reference/Context Providers/url.md new file mode 100644 index 00000000..b0cfac07 --- /dev/null +++ b/docs/docs/reference/Context Providers/url.md @@ -0,0 +1,17 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# URLContextProvider + +The ContextProvider class is a plugin that lets you provide new information to the LLM by typing '@'. +When you type '@', the context provider will be asked to populate a list of options. +These options will be updated on each keystroke. 
+When you hit enter on an option, the context provider will add that item to the autopilot's list of context (which is all stored in the ContextManager object). + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/url.py) + +## Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "default": "url", "type": "string"}' required={false} default="url"/><ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "URL", "type": "string"}' required={false} default="URL"/><ClassPropertyRef name='description' details='{"title": "Description", "default": "Reference the contents of a webpage", "type": "string"}' required={false} default="Reference the contents of a webpage"/><ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/><ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "default": true, "type": "boolean"}' required={false} default="True"/><ClassPropertyRef name='preset_urls' details='{"title": "Preset Urls", "default": [], "type": "array", "items": {"type": "string"}}' required={false} default="[]"/><ClassPropertyRef name='static_url_context_items' details='{"title": "Static Url Context Items", "default": [], "type": "array", "items": {"$ref": "#/definitions/ContextItem"}}' required={false} default="[]"/><ClassPropertyRef name='DYNAMIC_URL_CONTEXT_ITEM_ID' details='{"title": "Dynamic Url Context Item Id", "default": "url", "type": "string"}' required={false} default="url"/> + +### Inherited Properties + diff --git a/docs/docs/reference/Models/anthropic.md b/docs/docs/reference/Models/anthropic.md index 1aa31324..8fec179a 100644 --- a/docs/docs/reference/Models/anthropic.md +++ b/docs/docs/reference/Models/anthropic.md @@ -2,10 +2,27 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; # AnthropicLLM +Import the `AnthropicLLM` class and set it as the default model: +```python +from continuedev.src.continuedev.libs.llm.anthropic import AnthropicLLM + +config = ContinueConfig( + ... + models=Models( + default=AnthropicLLM(api_key="<API_KEY>", model="claude-2") + ) +) +``` + +Claude 2 is not yet publicly released. You can request early access [here](https://www.anthropic.com/earlyaccess). [View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/anthropic.py) ## Properties -<ClassPropertyRef name='api_key' details='{"title": "Api Key", "type": "string"}' required={true}/><ClassPropertyRef name='title' details='{"title": "Title", "type": "string"}' required={false}/><ClassPropertyRef name='system_message' details='{"title": "System Message", "type": "string"}' required={false}/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "default": 2048, "type": "integer"}' required={false}/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "type": "string"}' required={false}/><ClassPropertyRef name='model' details='{"title": "Model", "default": "claude-2", "type": "string"}' required={false}/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "default": 300, "type": "integer"}' required={false}/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "default": {}, "type": "object"}' required={false}/>
\ No newline at end of file + + +### Inherited Properties + +<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={true} default=""/><ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "claude-2", "type": "string"}' required={false} default="claude-2"/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/>
\ No newline at end of file diff --git a/docs/docs/reference/Models/ggml.md b/docs/docs/reference/Models/ggml.md index dafc8870..fbaf12d0 100644 --- a/docs/docs/reference/Models/ggml.md +++ b/docs/docs/reference/Models/ggml.md @@ -2,10 +2,29 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; # GGML +See our [5 minute quickstart](https://github.com/continuedev/ggml-server-example) to run any model locally with ggml. While these models don't yet perform as well as commercial ones, they are free, entirely private, and run offline. +Once the model is running on localhost:8000, change `~/.continue/config.py` to look like this: + +```python +from continuedev.src.continuedev.libs.llm.ggml import GGML + +config = ContinueConfig( + ... + models=Models( + default=GGML( + context_length=2048, + server_url="http://localhost:8000") + ) +) +``` [View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/ggml.py) ## Properties -<ClassPropertyRef name='title' details='{"title": "Title", "type": "string"}' required={false}/><ClassPropertyRef name='system_message' details='{"title": "System Message", "type": "string"}' required={false}/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "default": 2048, "type": "integer"}' required={false}/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "type": "string"}' required={false}/><ClassPropertyRef name='model' details='{"title": "Model", "default": "ggml", "type": "string"}' required={false}/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "default": 300, "type": "integer"}' required={false}/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false}/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "type": "string"}' required={false}/><ClassPropertyRef name='server_url' details='{"title": "Server Url", "default": "http://localhost:8000", "type": "string"}' required={false}/><ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "type": "boolean"}' required={false}/><ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "type": "string"}' required={false}/>
\ No newline at end of file +<ClassPropertyRef name='server_url' details='{"title": "Server Url", "description": "URL of the OpenAI-compatible server where the model is being served", "default": "http://localhost:8000", "type": "string"}' required={false} default="http://localhost:8000"/><ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether SSL certificates should be verified when making the HTTP request", "type": "boolean"}' required={false} default=""/><ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> + +### Inherited Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to use (optional for the GGML class)", "default": "ggml", "type": "string"}' required={false} default="ggml"/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false} default="{'edit': '[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]'}"/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
\ No newline at end of file diff --git a/docs/docs/reference/Models/hf_inference_api.md b/docs/docs/reference/Models/hf_inference_api.md new file mode 100644 index 00000000..605813be --- /dev/null +++ b/docs/docs/reference/Models/hf_inference_api.md @@ -0,0 +1,30 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# HuggingFaceInferenceAPI + +Hugging Face Inference API is a great option for newly released language models. Sign up for an account and add billing [here](https://huggingface.co/settings/billing), then access the Inference Endpoints [here](https://ui.endpoints.huggingface.co). Click “New endpoint”, fill out the form (e.g. select a model like [WizardCoder-Python-34B-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0)), and deploy your model by clicking “Create Endpoint”. Then change `~/.continue/config.py` to look like this: + +```python +from continuedev.src.continuedev.core.models import Models +from continuedev.src.continuedev.libs.llm.hf_inference_api import HuggingFaceInferenceAPI + +config = ContinueConfig( +    ... +    models=Models( +        default=HuggingFaceInferenceAPI( +            endpoint_url="<INFERENCE_API_ENDPOINT_URL>", +            hf_token="<HUGGING_FACE_TOKEN>", +        ) +    ) +) +``` + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/hf_inference_api.py) + +## Properties + +<ClassPropertyRef name='hf_token' details='{"title": "Hf Token", "description": "Your Hugging Face API token", "type": "string"}' required={true} default=""/><ClassPropertyRef name='endpoint_url' details='{"title": "Endpoint Url", "description": "Your Hugging Face Inference API endpoint URL", "type": "string"}' required={false} default=""/> + +### Inherited Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to use (optional for the HuggingFaceInferenceAPI class)", "default": "Hugging Face Inference API", "type": "string"}' required={false} default="Hugging Face Inference API"/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. 
Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false} default="{'edit': '[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]'}"/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
\ No newline at end of file diff --git a/docs/docs/reference/Models/hf_tgi.md b/docs/docs/reference/Models/hf_tgi.md new file mode 100644 index 00000000..b6eb61d7 --- /dev/null +++ b/docs/docs/reference/Models/hf_tgi.md @@ -0,0 +1,15 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# HuggingFaceTGI + + + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/hf_tgi.py) + +## Properties + +<ClassPropertyRef name='server_url' details='{"title": "Server Url", "description": "URL of your TGI server", "default": "http://localhost:8080", "type": "string"}' required={false} default="http://localhost:8080"/><ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether SSL certificates should be verified when making the HTTP request", "type": "boolean"}' required={false} default=""/> + +### Inherited Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "huggingface-tgi", "type": "string"}' required={false} default="huggingface-tgi"/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false} default="{'edit': '[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]'}"/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
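+This page is missing a description, so here is a minimal sketch of how `HuggingFaceTGI` might be configured, assuming it follows the same config pattern as the other LLM classes on these pages (the import path is inferred from the source link above, and the `server_url` value is simply the documented default):
+
+```python
+from continuedev.src.continuedev.libs.llm.hf_tgi import HuggingFaceTGI
+
+config = ContinueConfig(
+    ...
+    models=Models(
+        # points at a local text-generation-inference server; change as needed
+        default=HuggingFaceTGI(server_url="http://localhost:8080")
+    )
+)
+```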
\ No newline at end of file diff --git a/docs/docs/reference/Models/llamacpp.md b/docs/docs/reference/Models/llamacpp.md index 7ce75574..0bb06e74 100644 --- a/docs/docs/reference/Models/llamacpp.md +++ b/docs/docs/reference/Models/llamacpp.md @@ -2,10 +2,33 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; # LlamaCpp +Run the llama.cpp server binary to start the API server. If running on a remote server, be sure to set `--host 0.0.0.0`: +```shell +.\server.exe -c 4096 --host 0.0.0.0 -t 16 --mlock -m models\meta\llama\codellama-7b-instruct.Q8_0.gguf +``` + +After it's up and running, change `~/.continue/config.py` to look like this: + +```python +from continuedev.src.continuedev.libs.llm.llamacpp import LlamaCpp + +config = ContinueConfig( + ... + models=Models( + default=LlamaCpp( + context_length=4096, + server_url="http://localhost:8080") + ) +) +``` [View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/llamacpp.py) ## Properties -<ClassPropertyRef name='title' details='{"title": "Title", "type": "string"}' required={false}/><ClassPropertyRef name='system_message' details='{"title": "System Message", "type": "string"}' required={false}/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "default": 2048, "type": "integer"}' required={false}/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "type": "string"}' required={false}/><ClassPropertyRef name='model' details='{"title": "Model", "default": "llamacpp", "type": "string"}' required={false}/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "default": 300, "type": "integer"}' required={false}/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false}/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "type": "string"}' required={false}/><ClassPropertyRef name='server_url' details='{"title": "Server Url", "default": "http://localhost:8080", "type": "string"}' required={false}/><ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "type": "boolean"}' required={false}/><ClassPropertyRef name='llama_cpp_args' details='{"title": "Llama Cpp Args", "default": {"stop": ["[INST]"]}, "type": "object"}' required={false}/><ClassPropertyRef name='use_command' details='{"title": "Use Command", "type": "string"}' required={false}/>
\ No newline at end of file +<ClassPropertyRef name='server_url' details='{"title": "Server Url", "description": "URL of the server", "default": "http://localhost:8080", "type": "string"}' required={false} default="http://localhost:8080"/><ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether SSL certificates should be verified when making the HTTP request", "type": "boolean"}' required={false} default=""/><ClassPropertyRef name='llama_cpp_args' details='{"title": "Llama Cpp Args", "description": "A list of additional arguments to pass to llama.cpp. See [here](https://github.com/ggerganov/llama.cpp/tree/master/examples/server#api-endpoints) for the complete catalog of options.", "default": {"stop": ["[INST]"]}, "type": "object"}' required={false} default="{'stop': ['[INST]']}"/> + +### Inherited Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "llamacpp", "type": "string"}' required={false} default="llamacpp"/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false} default="{'edit': '[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]'}"/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
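+A note on `llama_cpp_args`: any option from the llama.cpp server API linked above can be passed through this dictionary. A hedged sketch (`n_predict` is one such server option that caps generation length; adjust to whatever options your server build supports):
+
+```python
+from continuedev.src.continuedev.libs.llm.llamacpp import LlamaCpp
+
+config = ContinueConfig(
+    ...
+    models=Models(
+        default=LlamaCpp(
+            server_url="http://localhost:8080",
+            # keep the default stop token and cap the number of generated tokens
+            llama_cpp_args={"stop": ["[INST]"], "n_predict": 1024},
+        )
+    )
+)
+```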
\ No newline at end of file diff --git a/docs/docs/reference/Models/maybe_proxy_openai.md b/docs/docs/reference/Models/maybe_proxy_openai.md new file mode 100644 index 00000000..22ac2382 --- /dev/null +++ b/docs/docs/reference/Models/maybe_proxy_openai.md @@ -0,0 +1,38 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# MaybeProxyOpenAI + +With the `MaybeProxyOpenAI` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code. + +Once you are using Continue regularly, though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps: + +1. Copy your API key from https://platform.openai.com/account/api-keys +2. Open `~/.continue/config.py`. You can do this by using the '/config' command in Continue +3. Change the default LLMs to look like this: + +```python +from continuedev.src.continuedev.libs.llm.maybe_proxy_openai import MaybeProxyOpenAI + +API_KEY = "<API_KEY>" +config = ContinueConfig( +    ... +    models=Models( +        default=MaybeProxyOpenAI(model="gpt-4", api_key=API_KEY), +        medium=MaybeProxyOpenAI(model="gpt-3.5-turbo", api_key=API_KEY) +    ) +) +``` + +The `MaybeProxyOpenAI` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead. + +These classes support any models available through the OpenAI API, assuming your API key has access, including "gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", and "gpt-4-32k". + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/maybe_proxy_openai.py) + +## Properties + +<ClassPropertyRef name='llm' details='{"$ref": "#/definitions/LLM"}' required={false} default=""/> + +### Inherited Properties + +<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "type": "string"}' required={true} default=""/><ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. 
Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
\ No newline at end of file diff --git a/docs/docs/reference/Models/ollama.md b/docs/docs/reference/Models/ollama.md index ef058119..9792ee52 100644 --- a/docs/docs/reference/Models/ollama.md +++ b/docs/docs/reference/Models/ollama.md @@ -2,10 +2,25 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; # Ollama +[Ollama](https://ollama.ai/) is a Mac application that makes it easy to locally run open-source models, including Llama-2. Download the app from the website, and it will walk you through setup in a couple of minutes. You can also read more in their [README](https://github.com/jmorganca/ollama). Continue can then be configured to use the `Ollama` LLM class: +```python +from continuedev.src.continuedev.libs.llm.ollama import Ollama + +config = ContinueConfig( + ... + models=Models( + default=Ollama(model="llama2") + ) +) +``` [View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/ollama.py) ## Properties -<ClassPropertyRef name='title' details='{"title": "Title", "type": "string"}' required={false}/><ClassPropertyRef name='system_message' details='{"title": "System Message", "type": "string"}' required={false}/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "default": 2048, "type": "integer"}' required={false}/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "type": "string"}' required={false}/><ClassPropertyRef name='model' details='{"title": "Model", "default": "llama2", "type": "string"}' required={false}/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "default": 300, "type": "integer"}' required={false}/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false}/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "type": "string"}' required={false}/><ClassPropertyRef name='server_url' details='{"title": "Server Url", "default": "http://localhost:11434", "type": "string"}' required={false}/>
\ No newline at end of file +<ClassPropertyRef name='server_url' details='{"title": "Server Url", "description": "URL of the Ollama server", "default": "http://localhost:11434", "type": "string"}' required={false} default="http://localhost:11434"/> + +### Inherited Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "llama2", "type": "string"}' required={false} default="llama2"/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false} default="{'edit': '[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]'}"/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
\ No newline at end of file diff --git a/docs/docs/reference/Models/openai.md b/docs/docs/reference/Models/openai.md index d325ca2f..0ade1a8f 100644 --- a/docs/docs/reference/Models/openai.md +++ b/docs/docs/reference/Models/openai.md @@ -4,10 +4,36 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; The OpenAI class can be used to access OpenAI models like gpt-4 and gpt-3.5-turbo. -If you are running a local model with an OpenAI-compatible API, you can also use the OpenAI class by changing the `api_base` argument. +If you are locally serving a model that uses an OpenAI-compatible server, you can simply change the `api_base` in the `OpenAI` class like this: + +```python +from continuedev.src.continuedev.libs.llm.openai import OpenAI + +config = ContinueConfig( + ... + models=Models( + default=OpenAI( + api_key="EMPTY", + model="<MODEL_NAME>", + api_base="http://localhost:8000", # change to your server + ) + ) +) +``` + +Options for serving models locally with an OpenAI-compatible server include: + +- [text-gen-webui](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai#setup--installation) +- [FastChat](https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md) +- [LocalAI](https://localai.io/basics/getting_started/) +- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python#web-server) [View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/openai.py) ## Properties -<ClassPropertyRef name='model' details='{"title": "Model", "type": "string"}' required={true}/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "OpenAI API key", "type": "string"}' required={true}/><ClassPropertyRef name='title' details='{"title": "Title", "type": "string"}' required={false}/><ClassPropertyRef name='system_message' details='{"title": "System Message", "type": "string"}' required={false}/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "default": 2048, "type": "integer"}' required={false}/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "type": "string"}' required={false}/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "default": 300, "type": "integer"}' required={false}/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "default": {}, "type": "object"}' required={false}/><ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "type": "boolean"}' required={false}/><ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "type": "string"}' required={false}/><ClassPropertyRef name='proxy' details='{"title": "Proxy", "type": "string"}' required={false}/><ClassPropertyRef name='api_base' details='{"title": "Api Base", "type": "string"}' required={false}/><ClassPropertyRef name='api_type' details='{"title": "Api Type", "enum": ["azure", "openai"], "type": "string"}' required={false}/><ClassPropertyRef name='api_version' details='{"title": "Api Version", "type": "string"}' required={false}/><ClassPropertyRef name='engine' details='{"title": "Engine", "type": "string"}' required={false}/>
\ No newline at end of file +<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/><ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to CA bundle to use for requests.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use for requests.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='api_base' details='{"title": "Api Base", "description": "OpenAI API base URL.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='api_type' details='{"title": "Api Type", "description": "OpenAI API type.", "enum": ["azure", "openai"], "type": "string"}' required={false} default=""/><ClassPropertyRef name='api_version' details='{"title": "Api Version", "description": "OpenAI API version. For use with Azure OpenAI Service.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='engine' details='{"title": "Engine", "description": "OpenAI engine. For use with Azure OpenAI Service.", "type": "string"}' required={false} default=""/> + +### Inherited Properties + +<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "type": "string"}' required={true} default=""/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "OpenAI API key", "type": "string"}' required={true} default=""/><ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/>
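+The `api_type`, `api_version`, and `engine` properties above are meant for the Azure OpenAI Service. A minimal sketch of an Azure setup, where the resource URL, deployment name, and API version are placeholders you should replace with the values from your Azure portal:
+
+```python
+from continuedev.src.continuedev.libs.llm.openai import OpenAI
+
+config = ContinueConfig(
+    ...
+    models=Models(
+        default=OpenAI(
+            api_key="<AZURE_OPENAI_KEY>",
+            model="gpt-4",
+            api_base="https://<YOUR_RESOURCE>.openai.azure.com",  # hypothetical resource URL
+            api_type="azure",
+            api_version="2023-05-15",  # assumption: use the version your deployment supports
+            engine="<YOUR_DEPLOYMENT_NAME>",  # hypothetical deployment name
+        )
+    )
+)
+```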
\ No newline at end of file diff --git a/docs/docs/reference/Models/queued.md b/docs/docs/reference/Models/queued.md index 6888a4e5..e253da09 100644 --- a/docs/docs/reference/Models/queued.md +++ b/docs/docs/reference/Models/queued.md @@ -2,10 +2,27 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; # QueuedLLM +QueuedLLM exists to make up for LLM servers that cannot handle multiple requests at once. It uses a lock to ensure that only one request is being processed at a time. +If you are already using another LLM class and are experiencing this problem, you can just wrap it with the QueuedLLM class like this: + +```python +from continuedev.src.continuedev.libs.llm.queued import QueuedLLM + +config = ContinueConfig( + ... + models=Models( + default=QueuedLLM(llm=<OTHER_LLM_CLASS>) + ) +) +``` [View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/queued.py) ## Properties -<ClassPropertyRef name='llm' details='{"$ref": "#/definitions/LLM"}' required={true}/><ClassPropertyRef name='title' details='{"title": "Title", "type": "string"}' required={false}/><ClassPropertyRef name='system_message' details='{"title": "System Message", "type": "string"}' required={false}/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "default": 2048, "type": "integer"}' required={false}/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "type": "string"}' required={false}/><ClassPropertyRef name='model' details='{"title": "Model", "default": "queued", "type": "string"}' required={false}/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "default": 300, "type": "integer"}' required={false}/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "default": {}, "type": "object"}' required={false}/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "type": "string"}' required={false}/>
\ No newline at end of file +<ClassPropertyRef name='llm' details='{"title": "Llm", "description": "The LLM to wrap with a lock", "allOf": [{"$ref": "#/definitions/LLM"}]}' required={true} default=""/> + +### Inherited Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "queued", "type": "string"}' required={false} default="queued"/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
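+To make the placeholder above concrete, here is a sketch that wraps the `GGML` class from these docs (any other LLM class can be wrapped the same way):
+
+```python
+from continuedev.src.continuedev.libs.llm.ggml import GGML
+from continuedev.src.continuedev.libs.llm.queued import QueuedLLM
+
+config = ContinueConfig(
+    ...
+    models=Models(
+        # serialize requests to a server that can only handle one at a time
+        default=QueuedLLM(llm=GGML(server_url="http://localhost:8000"))
+    )
+)
+```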
\ No newline at end of file diff --git a/docs/docs/reference/Models/replicate.md b/docs/docs/reference/Models/replicate.md index 4f05cdfa..0c93a758 100644 --- a/docs/docs/reference/Models/replicate.md +++ b/docs/docs/reference/Models/replicate.md @@ -2,10 +2,30 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; # ReplicateLLM +Replicate is a great option for newly released language models or models that you've deployed through their platform. Sign up for an account [here](https://replicate.ai/), copy your API key, and then select any model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models). Change `~/.continue/config.py` to look like this: +```python +from continuedev.src.continuedev.core.models import Models +from continuedev.src.continuedev.libs.llm.replicate import ReplicateLLM + +config = ContinueConfig( + ... + models=Models( + default=ReplicateLLM( + model="replicate/codellama-13b-instruct:da5676342de1a5a335b848383af297f592b816b950a43d251a0a9edd0113604b", + api_key="my-replicate-api-key") + ) +) +``` + +If you don't specify the `model` parameter, it will default to `replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781`. [View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/replicate.py) ## Properties -<ClassPropertyRef name='api_key' details='{"title": "Api Key", "type": "string"}' required={true}/><ClassPropertyRef name='title' details='{"title": "Title", "type": "string"}' required={false}/><ClassPropertyRef name='system_message' details='{"title": "System Message", "type": "string"}' required={false}/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "default": 2048, "type": "integer"}' required={false}/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "type": "string"}' required={false}/><ClassPropertyRef name='model' details='{"title": "Model", "default": "replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781", "type": "string"}' required={false}/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "default": 300, "type": "integer"}' required={false}/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false}/>
\ No newline at end of file + + +### Inherited Properties + +<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "Replicate API key", "type": "string"}' required={true} default=""/><ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781", "type": "string"}' required={false} default="replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781"/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false} default="{'edit': '[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]'}"/>
\ No newline at end of file diff --git a/docs/docs/reference/Models/text_gen_interface.md b/docs/docs/reference/Models/text_gen_interface.md index a59a4166..21404960 100644 --- a/docs/docs/reference/Models/text_gen_interface.md +++ b/docs/docs/reference/Models/text_gen_interface.md @@ -2,10 +2,27 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; # TextGenUI +TextGenUI is a comprehensive, open-source language model UI and local server. You can set it up with an OpenAI-compatible server plugin, but if for some reason that doesn't work, you can use this class like so: +```python +from continuedev.src.continuedev.libs.llm.text_gen_interface import TextGenUI + +config = ContinueConfig( + ... + models=Models( + default=TextGenUI( + model="<MODEL_NAME>", + ) + ) +) +``` [View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/text_gen_interface.py) ## Properties -<ClassPropertyRef name='title' details='{"title": "Title", "type": "string"}' required={false}/><ClassPropertyRef name='system_message' details='{"title": "System Message", "type": "string"}' required={false}/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "default": 2048, "type": "integer"}' required={false}/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "type": "string"}' required={false}/><ClassPropertyRef name='model' details='{"title": "Model", "default": "text-gen-ui", "type": "string"}' required={false}/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "default": 300, "type": "integer"}' required={false}/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false}/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "type": "string"}' required={false}/><ClassPropertyRef name='server_url' details='{"title": "Server Url", "default": "http://localhost:5000", "type": "string"}' required={false}/><ClassPropertyRef name='streaming_url' details='{"title": "Streaming Url", "default": "http://localhost:5005", "type": "string"}' required={false}/><ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "type": "boolean"}' required={false}/>
\ No newline at end of file +<ClassPropertyRef name='server_url' details='{"title": "Server Url", "description": "URL of your TextGenUI server", "default": "http://localhost:5000", "type": "string"}' required={false} default="http://localhost:5000"/><ClassPropertyRef name='streaming_url' details='{"title": "Streaming Url", "description": "URL of your TextGenUI streaming server (separate from main server URL)", "default": "http://localhost:5005", "type": "string"}' required={false} default="http://localhost:5005"/><ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> + +### Inherited Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "text-gen-ui", "type": "string"}' required={false} default="text-gen-ui"/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false} default="{'edit': '[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]'}"/><ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
\ No newline at end of file diff --git a/docs/docs/reference/Models/together.md b/docs/docs/reference/Models/together.md index e436644c..ec1ebb9c 100644 --- a/docs/docs/reference/Models/together.md +++ b/docs/docs/reference/Models/together.md @@ -2,10 +2,29 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; # TogetherLLM +The Together API is a cloud platform for running large AI models. You can sign up [here](https://api.together.xyz/signup), copy your API key on the initial welcome screen, and then hit the play button on any model from the [Together Models list](https://docs.together.ai/docs/models-inference). Change `~/.continue/config.py` to look like this: +```python +from continuedev.src.continuedev.core.models import Models +from continuedev.src.continuedev.libs.llm.together import TogetherLLM + +config = ContinueConfig( + ... + models=Models( + default=TogetherLLM( + api_key="<API_KEY>", + model="togethercomputer/llama-2-13b-chat" + ) + ) +) +``` [View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/together.py) ## Properties -<ClassPropertyRef name='api_key' details='{"title": "Api Key", "type": "string"}' required={true}/><ClassPropertyRef name='title' details='{"title": "Title", "type": "string"}' required={false}/><ClassPropertyRef name='system_message' details='{"title": "System Message", "type": "string"}' required={false}/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "default": 2048, "type": "integer"}' required={false}/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "type": "string"}' required={false}/><ClassPropertyRef name='model' details='{"title": "Model", "default": "togethercomputer/RedPajama-INCITE-7B-Instruct", "type": "string"}' required={false}/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "default": 300, "type": "integer"}' required={false}/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false}/><ClassPropertyRef name='base_url' details='{"title": "Base Url", "default": "https://api.together.xyz", "type": "string"}' required={false}/><ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "type": "boolean"}' required={false}/>
\ No newline at end of file +<ClassPropertyRef name='base_url' details='{"title": "Base Url", "description": "The base URL for your Together API instance", "default": "https://api.together.xyz", "type": "string"}' required={false} default="https://api.together.xyz"/><ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether SSL certificates should be verified when making the HTTP request", "type": "boolean"}' required={false} default=""/> + +### Inherited Properties + +<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "Together API key", "type": "string"}' required={true} default=""/><ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/><ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "togethercomputer/RedPajama-INCITE-7B-Instruct", "type": "string"}' required={false} default="togethercomputer/RedPajama-INCITE-7B-Instruct"/><ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/><ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]"}, "type": "object"}' required={false} default="{'edit': '[INST] Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.\n[/INST]'}"/>
diff --git a/docs/docs/reference/config.md b/docs/docs/reference/config.md
new file mode 100644
index 00000000..dbcfc4c6
--- /dev/null
+++ b/docs/docs/reference/config.md
@@ -0,0 +1,14 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# ContinueConfig
+
+Continue can be deeply customized by editing the `ContinueConfig` object in `~/.continue/config.py` (`%userprofile%\.continue\config.py` for Windows) on your machine. This class is instantiated from the config file for every new session.
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/core/config.py)
+
+## Properties
+
+<ClassPropertyRef name='steps_on_startup' details='{"title": "Steps On Startup", "description": "Steps that will be automatically run at the beginning of a new session", "default": [], "type": "array", "items": {"$ref": "#/definitions/Step"}}' required={false} default="[]"/><ClassPropertyRef name='disallowed_steps' details='{"title": "Disallowed Steps", "description": "Steps that are not allowed to be run, and will be skipped if attempted", "default": [], "type": "array", "items": {"type": "string"}}' required={false} default="[]"/><ClassPropertyRef name='allow_anonymous_telemetry' details='{"title": "Allow Anonymous Telemetry", "description": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to False, we will not collect any data.", "default": true, "type": "boolean"}' required={false} default="True"/><ClassPropertyRef name='models' details='{"title": "Models", "description": "Configuration for the models used by Continue. Read more about how to configure models in the documentation.", "default": {"default": {"title": null, "system_message": null, "context_length": 2048, "model": "gpt-4", "timeout": 300, "prompt_templates": {}, "api_key": null, "llm": null, "class_name": "MaybeProxyOpenAI"}, "small": null, "medium": {"title": null, "system_message": null, "context_length": 2048, "model": "gpt-3.5-turbo", "timeout": 300, "prompt_templates": {}, "api_key": null, "llm": null, "class_name": "MaybeProxyOpenAI"}, "large": null, "edit": null, "chat": null, "unused": []}, "allOf": [{"$ref": "#/definitions/Models"}]}' required={false} default="{'default': {'title': None, 'system_message': None, 'context_length': 2048, 'model': 'gpt-4', 'timeout': 300, 'prompt_templates': {}, 'api_key': None, 'llm': None, 'class_name': 'MaybeProxyOpenAI'}, 'small': None, 'medium': {'title': None, 'system_message': None, 'context_length': 2048, 'model': 'gpt-3.5-turbo', 'timeout': 300, 'prompt_templates': {}, 'api_key': None, 'llm': None, 'class_name': 'MaybeProxyOpenAI'}, 'large': None, 'edit': None, 'chat': None, 'unused': []}"/><ClassPropertyRef name='temperature' details='{"title": "Temperature", "description": "The temperature parameter for sampling from the LLM. Higher temperatures will result in more random output, while lower temperatures will result in more predictable output. This value ranges from 0 to 1.", "default": 0.5, "type": "number"}' required={false} default="0.5"/><ClassPropertyRef name='custom_commands' details='{"title": "Custom Commands", "description": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. When you enter /<name> in the text input, it will act as a shortcut to the prompt.", "default": [{"name": "test", "prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don\u0027t edit any file.", "description": "This is an example custom command. Use /config to edit it and create more"}], "type": "array", "items": {"$ref": "#/definitions/CustomCommand"}}' required={false} default="[{'name': 'test', 'prompt': &quot;Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.&quot;, 'description': 'This is an example custom command. Use /config to edit it and create more'}]"/><ClassPropertyRef name='slash_commands' details='{"title": "Slash Commands", "description": "An array of slash commands that let you map custom Steps to a shortcut.", "default": [], "type": "array", "items": {"$ref": "#/definitions/SlashCommand"}}' required={false} default="[]"/><ClassPropertyRef name='on_traceback' details='{"title": "On Traceback", "description": "The step that will be run when a traceback is detected (when you use the shortcut cmd+shift+R)", "allOf": [{"$ref": "#/definitions/Step"}]}' required={false} default=""/><ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/><ClassPropertyRef name='policy_override' details='{"title": "Policy Override", "description": "A Policy object that can be used to override the default behavior of Continue, for example in order to build custom agents that take multiple steps at a time.", "allOf": [{"$ref": "#/definitions/Policy"}]}' required={false} default=""/><ClassPropertyRef name='context_providers' details='{"title": "Context Providers", "description": "A list of ContextProvider objects that can be used to provide context to the LLM by typing \u0027@\u0027. Read more about ContextProviders in the documentation.", "default": [], "type": "array", "items": {"$ref": "#/definitions/ContextProvider"}}' required={false} default="[]"/><ClassPropertyRef name='user_token' details='{"title": "User Token", "description": "An optional token to identify the user.", "type": "string"}' required={false} default=""/><ClassPropertyRef name='data_server_url' details='{"title": "Data Server Url", "description": "The URL of the server where development data is sent. No data is sent unless a valid user token is provided.", "default": "https://us-west1-autodebug.cloudfunctions.net", "type": "string"}' required={false} default="https://us-west1-autodebug.cloudfunctions.net"/>
+
+### Inherited Properties
+
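Because the generated reference lists only the schema, a short example of how a few of these properties look in `~/.continue/config.py` may help. This is a sketch that assumes `CustomCommand` is importable alongside `ContinueConfig`; it is not a verified snippet from this commit:

```python
# Sketch only: the ContinueConfig/CustomCommand import location is an
# assumption based on the repo layout.
from continuedev.src.continuedev.core.config import ContinueConfig, CustomCommand

config = ContinueConfig(
    allow_anonymous_telemetry=True,  # schema default
    temperature=0.5,                 # 0 = more predictable, 1 = more random
    system_message="Always write well-commented, idiomatic code.",
    custom_commands=[
        CustomCommand(
            name="test",  # invoked by typing /test in the text input
            description="Write unit tests for the selected code",
            prompt="Write a comprehensive set of unit tests for the selected code, covering important edge cases.",
        )
    ],
)
```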
diff --git a/docs/docs/telemetry.md b/docs/docs/telemetry.md
index e0ea2158..2202aa92 100644
--- a/docs/docs/telemetry.md
+++ b/docs/docs/telemetry.md
@@ -1,4 +1,4 @@
-# Telemetry
+# 🦔 Telemetry
## Overview
@@ -27,4 +27,4 @@ config = ContinueConfig(
 )
```
-You can turn off anonymous telemetry by changing the value of `allow_anonymous_telemetry` to `false`.
+You can turn off anonymous telemetry by changing the value of `allow_anonymous_telemetry` to `False`.
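The hunk shows only the changed sentence; in context, opting out amounts to a one-line setting in `~/.continue/config.py` along these lines (a sketch consistent with the surrounding docs, not the file's exact contents):

```python
from continuedev.src.continuedev.core.config import ContinueConfig

config = ContinueConfig(
    # Python booleans are capitalized, hence False rather than false.
    allow_anonymous_telemetry=False,  # defaults to True (see the config reference)
)
```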
diff --git a/docs/docs/troubleshooting.md b/docs/docs/troubleshooting.md
index 722c5d1b..46845c55 100644
--- a/docs/docs/troubleshooting.md
+++ b/docs/docs/troubleshooting.md
@@ -1,4 +1,4 @@
-# Troubleshooting
+# ❓ Troubleshooting
 
 The Continue VS Code extension is currently in beta. It will attempt to start the Continue Python server locally for you, but sometimes this will fail, causing the "Starting Continue server..." message not to disappear, or other hangups. While we are working on fixes to all of these problems, there are a few things you can do to temporarily troubleshoot:
 
diff --git a/docs/sidebars.js b/docs/sidebars.js
index 2121fea6..47e0baf7 100644
--- a/docs/sidebars.js
+++ b/docs/sidebars.js
@@ -15,34 +15,35 @@ const sidebars = {
   docsSidebar: [
     "intro",
-    "getting-started",
+    "quickstart",
     "how-to-use-continue",
     "how-continue-works",
     {
       type: "category",
-      label: "Customization",
+      label: "🎨 Customization",
       collapsible: true,
       collapsed: false,
       items: [
+        "customization/overview",
         "customization/models",
         "customization/context-providers",
         "customization/slash-commands",
         "customization/other-configuration",
       ],
     },
-    "collecting-data",
-    "telemetry",
-    "troubleshooting",
     {
       type: "category",
-      label: "Walkthroughs",
+      label: "🚶 Walkthroughs",
       collapsible: true,
       collapsed: false,
       items: ["walkthroughs/codellama"],
     },
+    "development-data",
+    "telemetry",
+    "troubleshooting",
     {
       type: "category",
-      label: "Reference",
+      label: "📖 Reference",
       collapsible: true,
       collapsed: false,
       items: [
diff --git a/docs/src/components/ClassPropertyRef.tsx b/docs/src/components/ClassPropertyRef.tsx
index 46664c4c..7246663b 100644
--- a/docs/src/components/ClassPropertyRef.tsx
+++ b/docs/src/components/ClassPropertyRef.tsx
@@ -4,8 +4,14 @@ interface ClassPropertyRefProps {
   name: string;
   details: string;
   required: boolean;
+  default: string;
 }
 
+const PYTHON_TYPES = {
+  string: "str",
+  integer: "int",
+};
+
 export default function ClassPropertyRef(props: ClassPropertyRefProps) {
   const details = JSON.parse(props.details);
 
@@ -15,10 +21,32 @@ export default function ClassPropertyRef(props: ClassPropertyRefProps) {
       <h4 style={{ display: "inline-block", marginRight: "10px" }}>
         {props.name}
       </h4>
-      <span style={{ color: "red", fontSize: "11px", marginRight: "4px" }}>
-        {props.required && "REQUIRED"}
+      {props.required && (
+        <span
+          style={{
+            color: "red",
+            fontSize: "11px",
+            marginRight: "4px",
+            borderRadius: "4px",
+            border: "1px solid red",
+            padding: "1px 2px",
+          }}
+        >
+          REQUIRED
+        </span>
+      )}
+      <span>
+        {details.type && `(${PYTHON_TYPES[details.type] || details.type})`}
       </span>
-      <span>{details.type && `(${details.type})`}</span>
+
+      {props.default && (
+        <span>
+          {" "}
+          = {details.type === "string" && '"'}
+          {props.default}
+          {details.type === "string" && '"'}
+        </span>
+      )}
     </div>
     <p>{details.description}</p>
   </>
diff --git a/docs/src/css/custom.css b/docs/src/css/custom.css
index 794febaf..3a7178dd 100644
--- a/docs/src/css/custom.css
+++ b/docs/src/css/custom.css
@@ -18,13 +18,13 @@
 }
 
 /* For readability concerns, you should choose a lighter palette in dark mode. */
-[data-theme='dark'] {
-  --ifm-color-primary: #be1b55ff;
-  --ifm-color-primary-dark: #be1b55ff;
-  --ifm-color-primary-darker: #be1b55ff;
-  --ifm-color-primary-darkest: #be1b55ff;
-  --ifm-color-primary-light: #be1b55ff;
-  --ifm-color-primary-lighter: #be1b55ff;
-  --ifm-color-primary-lightest: #be1b55ff;
+[data-theme="dark"] {
+  --ifm-color-primary: #59bc89ff;
+  --ifm-color-primary-dark: #59bc89ff;
+  --ifm-color-primary-darker: #59bc89ff;
+  --ifm-color-primary-darkest: #59bc89ff;
+  --ifm-color-primary-light: #59bc89ff;
+  --ifm-color-primary-lighter: #59bc89ff;
+  --ifm-color-primary-lightest: #59bc89ff;
   --docusaurus-highlighted-code-line-bg: rgba(0, 0, 0, 0.3);
 }