author | Nate Sesti <sestinj@gmail.com> | 2023-09-24 17:46:33 -0700
committer | Nate Sesti <sestinj@gmail.com> | 2023-09-24 17:46:33 -0700
commit | 1e3c8adabba561eeef124144f3a2ef36d26334b4 (patch)
tree | 625ebd4a769e4dfd74d8ea155f301a35ab2c35b0
parent | 145466e40e5ac3ba8ccc172678f6d6cbf05c342e (diff)
feat: :fire: fix duplication in reference
-rw-r--r-- | docs/docs/reference/Models/anthropic.md | 39
-rw-r--r-- | docs/docs/reference/Models/hf_inference_api.md | 42
-rw-r--r-- | docs/docs/reference/Models/hf_tgi.md | 27
-rw-r--r-- | docs/docs/reference/Models/maybe_proxy_openai.md | 47
-rw-r--r-- | docs/docs/reference/Models/openai_free_trial.md | 48
-rw-r--r-- | docs/docs/reference/Models/openaifreetrial.md | 1
-rw-r--r-- | docs/docs/reference/Models/queued.md | 40
-rw-r--r-- | docs/docs/reference/Models/replicate.md | 42
-rw-r--r-- | docs/docs/reference/Models/text_gen_interface.md | 41
-rw-r--r-- | docs/docs/reference/Models/together.md | 42
-rw-r--r-- | docs/docs/reference/config.md | 2
11 files changed, 3 insertions, 368 deletions
diff --git a/docs/docs/reference/Models/anthropic.md b/docs/docs/reference/Models/anthropic.md
deleted file mode 100644
index 128b706d..00000000
--- a/docs/docs/reference/Models/anthropic.md
+++ /dev/null
@@ -1,39 +0,0 @@
-import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
-
-# AnthropicLLM
-
-Import the `AnthropicLLM` class and set it as the default model:
-
-```python
-from continuedev.src.continuedev.libs.llm.anthropic import AnthropicLLM
-
-config = ContinueConfig(
-    ...
-    models=Models(
-        default=AnthropicLLM(api_key="<API_KEY>", model="claude-2")
-    )
-)
-```
-
-Claude 2 is not yet publicly released. You can request early access [here](https://www.anthropic.com/earlyaccess).
-
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/anthropic.py)
-
-## Properties
-
-
-
-### Inherited Properties
-
-<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={true} default=""/>
-<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
-<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "claude-2", "type": "string"}' required={false} default="claude-2"/>
-<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
-<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
-<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
-<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/>
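
For reference, the snippet in the deleted page references `ContinueConfig` and `Models` without importing them. A minimal self-contained sketch, assuming the `continuedev.src.continuedev.core.config` import path used for `ContinueConfig` elsewhere in these docs:

```python
# Sketch of a complete ~/.continue/config.py for AnthropicLLM; the
# ContinueConfig import path is an assumption, not verbatim from the diff.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.anthropic import AnthropicLLM

config = ContinueConfig(
    models=Models(
        default=AnthropicLLM(api_key="<API_KEY>", model="claude-2")
    )
)
```
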
diff --git a/docs/docs/reference/Models/hf_inference_api.md b/docs/docs/reference/Models/hf_inference_api.md
deleted file mode 100644
index 560309f2..00000000
--- a/docs/docs/reference/Models/hf_inference_api.md
+++ /dev/null
@@ -1,42 +0,0 @@
-import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
-
-# HuggingFaceInferenceAPI
-
-Hugging Face Inference API is a great option for newly released language models. Sign up for an account and add billing [here](https://huggingface.co/settings/billing), access the Inference Endpoints [here](https://ui.endpoints.huggingface.co), click on “New endpoint”, and fill out the form (e.g. select a model like [WizardCoder-Python-34B-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0)), and then deploy your model by clicking “Create Endpoint”. Change `~/.continue/config.py` to look like this:
-
-```python
-from continuedev.src.continuedev.core.models import Models
-from continuedev.src.continuedev.libs.llm.hf_inference_api import HuggingFaceInferenceAPI
-
-config = ContinueConfig(
-    ...
-    models=Models(
-        default=HuggingFaceInferenceAPI(
-            endpoint_url: "<INFERENCE_API_ENDPOINT_URL>",
-            hf_token: "<HUGGING_FACE_TOKEN>",
-        )
-)
-```
-
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/hf_inference_api.py)
-
-## Properties
-
-<ClassPropertyRef name='hf_token' details='{"title": "Hf Token", "description": "Your Hugging Face API token", "type": "string"}' required={true} default=""/>
-<ClassPropertyRef name='endpoint_url' details='{"title": "Endpoint Url", "description": "Your Hugging Face Inference API endpoint URL", "type": "string"}' required={false} default=""/>
-
-
-### Inherited Properties
-
-<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
-<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to use (optional for the HuggingFaceInferenceAPI class)", "default": "Hugging Face Inference API", "type": "string"}' required={false} default="Hugging Face Inference API"/>
-<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
-<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
-<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
-<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
-<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
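Note that the deleted example above is not valid Python as written: it uses `key: value` syntax inside the constructor call instead of keyword arguments, and drops a closing parenthesis. A corrected sketch, again assuming the `ContinueConfig` import path used by the other pages:

```python
# Corrected version of the deleted hf_inference_api example; the
# ContinueConfig import path is assumed.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.hf_inference_api import HuggingFaceInferenceAPI

config = ContinueConfig(
    models=Models(
        default=HuggingFaceInferenceAPI(
            endpoint_url="<INFERENCE_API_ENDPOINT_URL>",  # keyword args, not "key: value"
            hf_token="<HUGGING_FACE_TOKEN>",
        )
    )
)
```
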
diff --git a/docs/docs/reference/Models/hf_tgi.md b/docs/docs/reference/Models/hf_tgi.md
deleted file mode 100644
index 2cee9fe1..00000000
--- a/docs/docs/reference/Models/hf_tgi.md
+++ /dev/null
@@ -1,27 +0,0 @@
-import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
-
-# HuggingFaceTGI
-
-
-
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/hf_tgi.py)
-
-## Properties
-
-<ClassPropertyRef name='server_url' details='{"title": "Server Url", "description": "URL of your TGI server", "default": "http://localhost:8080", "type": "string"}' required={false} default="http://localhost:8080"/>
-
-
-### Inherited Properties
-
-<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
-<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "huggingface-tgi", "type": "string"}' required={false} default="huggingface-tgi"/>
-<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
-<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
-<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
-<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
-<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
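The deleted hf_tgi.md page shipped with no usage example. A sketch based on its documented `server_url` property (default `http://localhost:8080`); the class name `HuggingFaceTGI` comes from the page title, but the module path is assumed by analogy with the other `libs.llm` modules:

```python
# Hypothetical hf_tgi configuration sketch; module and ContinueConfig
# import paths are assumptions, not taken from the deleted page.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.hf_tgi import HuggingFaceTGI

config = ContinueConfig(
    models=Models(
        default=HuggingFaceTGI(server_url="http://localhost:8080")
    )
)
```
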
diff --git a/docs/docs/reference/Models/maybe_proxy_openai.md b/docs/docs/reference/Models/maybe_proxy_openai.md
deleted file mode 100644
index 055054fd..00000000
--- a/docs/docs/reference/Models/maybe_proxy_openai.md
+++ /dev/null
@@ -1,47 +0,0 @@
-import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
-
-# OpenAIFreeTrial
-
-With the `OpenAIFreeTrial` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code.
-
-Once you are using Continue regularly though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps:
-
-1. Copy your API key from https://platform.openai.com/account/api-keys
-2. Open `~/.continue/config.py`. You can do this by using the '/config' command in Continue
-3. Change the default LLMs to look like this:
-
-```python
-API_KEY = "<API_KEY>"
-config = ContinueConfig(
-    ...
-    models=Models(
-        default=OpenAIFreeTrial(model="gpt-4", api_key=API_KEY),
-        medium=OpenAIFreeTrial(model="gpt-3.5-turbo", api_key=API_KEY)
-    )
-)
-```
-
-The `OpenAIFreeTrial` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead.
-
-These classes support any models available through the OpenAI API, assuming your API key has access, including "gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", and "gpt-4-32k".
-
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/openai_free_trial.py)
-
-## Properties
-
-<ClassPropertyRef name='llm' details='{"$ref": "#/definitions/LLM"}' required={false} default=""/>
-
-### Inherited Properties
-
-<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "type": "string"}' required={true} default=""/>
-<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
-<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
-<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
-<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
-<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/>
-<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
diff --git a/docs/docs/reference/Models/openai_free_trial.md b/docs/docs/reference/Models/openai_free_trial.md
deleted file mode 100644
index cd510aa8..00000000
--- a/docs/docs/reference/Models/openai_free_trial.md
+++ /dev/null
@@ -1,48 +0,0 @@
-import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
-
-# OpenAIFreeTrial
-
-With the `OpenAIFreeTrial` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code.
-
-Once you are using Continue regularly though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps:
-
-1. Copy your API key from https://platform.openai.com/account/api-keys
-2. Open `~/.continue/config.py`. You can do this by using the '/config' command in Continue
-3. Change the default LLMs to look like this:
-
-```python
-API_KEY = "<API_KEY>"
-config = ContinueConfig(
-    ...
-    models=Models(
-        default=OpenAIFreeTrial(model="gpt-4", api_key=API_KEY),
-        medium=OpenAIFreeTrial(model="gpt-3.5-turbo", api_key=API_KEY)
-    )
-)
-```
-
-The `OpenAIFreeTrial` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead.
-
-These classes support any models available through the OpenAI API, assuming your API key has access, including "gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", and "gpt-4-32k".
-
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/openai_free_trial.py)
-
-## Properties
-
-<ClassPropertyRef name='llm' details='{"$ref": "#/definitions/LLM"}' required={false} default=""/>
-
-
-### Inherited Properties
-
-<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "type": "string"}' required={true} default=""/>
-<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
-<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
-<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
-<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
-<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/>
-<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
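The two files deleted above, maybe_proxy_openai.md and openai_free_trial.md, carry the same OpenAIFreeTrial page — the duplication this commit removes in favor of openaifreetrial.md. Their shared snippet also references `ContinueConfig`, `Models`, and `OpenAIFreeTrial` without importing them; a fuller sketch, assuming the package layout used by the other examples:

```python
# Self-contained version of the OpenAIFreeTrial snippet; import paths
# for ContinueConfig and OpenAIFreeTrial are assumptions.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.openai_free_trial import OpenAIFreeTrial

API_KEY = "<API_KEY>"
config = ContinueConfig(
    models=Models(
        default=OpenAIFreeTrial(model="gpt-4", api_key=API_KEY),
        medium=OpenAIFreeTrial(model="gpt-3.5-turbo", api_key=API_KEY),
    )
)
```
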
diff --git a/docs/docs/reference/Models/openaifreetrial.md b/docs/docs/reference/Models/openaifreetrial.md
index a9efa6cc..99c21689 100644
--- a/docs/docs/reference/Models/openaifreetrial.md
+++ b/docs/docs/reference/Models/openaifreetrial.md
@@ -31,6 +31,7 @@ These classes support any models available through the OpenAI API, assuming your
 
 <ClassPropertyRef name='llm' details='{"$ref": "#/definitions/LLM"}' required={false} default=""/>
 
+
 ### Inherited Properties
 
 <ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "type": "string"}' required={true} default=""/>
diff --git a/docs/docs/reference/Models/queued.md b/docs/docs/reference/Models/queued.md
deleted file mode 100644
index 06942e3e..00000000
--- a/docs/docs/reference/Models/queued.md
+++ /dev/null
@@ -1,40 +0,0 @@
-import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
-
-# QueuedLLM
-
-QueuedLLM exists to make up for LLM servers that cannot handle multiple requests at once. It uses a lock to ensure that only one request is being processed at a time.
-
-If you are already using another LLM class and are experiencing this problem, you can just wrap it with the QueuedLLM class like this:
-
-```python
-from continuedev.src.continuedev.libs.llm.queued import QueuedLLM
-
-config = ContinueConfig(
-    ...
-    models=Models(
-        default=QueuedLLM(llm=<OTHER_LLM_CLASS>)
-    )
-)
-```
-
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/queued.py)
-
-## Properties
-
-<ClassPropertyRef name='llm' details='{"title": "Llm", "description": "The LLM to wrap with a lock", "allOf": [{"$ref": "#/definitions/LLM"}]}' required={true} default=""/>
-
-
-### Inherited Properties
-
-<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
-<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "queued", "type": "string"}' required={false} default="queued"/>
-<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
-<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
-<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
-<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/>
-<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
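The deleted QueuedLLM page describes the underlying technique: serialize requests with a lock so a single-request server is never hit concurrently. A minimal illustration of that pattern (not Continue's actual implementation):

```python
# Illustration of lock-based request serialization, the technique the
# QueuedLLM description names; class and method names are hypothetical.
import asyncio


class SerializedLLM:
    """Wraps an async completion callable so only one request runs at a time."""

    def __init__(self, complete_fn):
        self._complete_fn = complete_fn
        self._lock = asyncio.Lock()

    async def complete(self, prompt: str) -> str:
        # Concurrent callers queue here; the wrapped server only ever
        # sees one in-flight request.
        async with self._lock:
            return await self._complete_fn(prompt)
```
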
diff --git a/docs/docs/reference/Models/replicate.md b/docs/docs/reference/Models/replicate.md
deleted file mode 100644
index 879459e0..00000000
--- a/docs/docs/reference/Models/replicate.md
+++ /dev/null
@@ -1,42 +0,0 @@
-import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
-
-# ReplicateLLM
-
-Replicate is a great option for newly released language models or models that you've deployed through their platform. Sign up for an account [here](https://replicate.ai/), copy your API key, and then select any model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models). Change `~/.continue/config.py` to look like this:
-
-```python
-from continuedev.src.continuedev.core.models import Models
-from continuedev.src.continuedev.libs.llm.replicate import ReplicateLLM
-
-config = ContinueConfig(
-    ...
-    models=Models(
-        default=ReplicateLLM(
-            model="replicate/codellama-13b-instruct:da5676342de1a5a335b848383af297f592b816b950a43d251a0a9edd0113604b",
-            api_key="my-replicate-api-key")
-    )
-)
-```
-
-If you don't specify the `model` parameter, it will default to `replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781`.
-
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/replicate.py)
-
-## Properties
-
-
-
-### Inherited Properties
-
-<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "Replicate API key", "type": "string"}' required={true} default=""/>
-<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
-<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781", "type": "string"}' required={false} default="replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781"/>
-<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
-<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
-<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
-<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
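As with the other deleted pages, the Replicate example omits the `ContinueConfig` import it relies on. A self-contained sketch, with that import path assumed:

```python
# Complete version of the deleted ReplicateLLM example; the ContinueConfig
# import path is an assumption.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.replicate import ReplicateLLM

config = ContinueConfig(
    models=Models(
        default=ReplicateLLM(
            model="replicate/codellama-13b-instruct:da5676342de1a5a335b848383af297f592b816b950a43d251a0a9edd0113604b",
            api_key="my-replicate-api-key",
        )
    )
)
```
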
diff --git a/docs/docs/reference/Models/text_gen_interface.md b/docs/docs/reference/Models/text_gen_interface.md
deleted file mode 100644
index bb8dce1d..00000000
--- a/docs/docs/reference/Models/text_gen_interface.md
+++ /dev/null
@@ -1,41 +0,0 @@
-import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
-
-# TextGenUI
-
-TextGenUI is a comprehensive, open-source language model UI and local server. You can set it up with an OpenAI-compatible server plugin, but if for some reason that doesn't work, you can use this class like so:
-
-```python
-from continuedev.src.continuedev.libs.llm.text_gen_interface import TextGenUI
-
-config = ContinueConfig(
-    ...
-    models=Models(
-        default=TextGenUI(
-            model="<MODEL_NAME>",
-        )
-    )
-)
-```
-
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/text_gen_interface.py)
-
-## Properties
-
-<ClassPropertyRef name='server_url' details='{"title": "Server Url", "description": "URL of your TextGenUI server", "default": "http://localhost:5000", "type": "string"}' required={false} default="http://localhost:5000"/>
-<ClassPropertyRef name='streaming_url' details='{"title": "Streaming Url", "description": "URL of your TextGenUI streaming server (separate from main server URL)", "default": "http://localhost:5005", "type": "string"}' required={false} default="http://localhost:5005"/>
-
-
-### Inherited Properties
-
-<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
-<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "text-gen-ui", "type": "string"}' required={false} default="text-gen-ui"/>
-<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
-<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
-<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
-<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Here is the code before editing:\n```\n{{{code_to_edit}}}\n```\n\nHere is the edit requested:\n\"{{{user_input}}}\"\n\nHere is the code after editing:"}, "type": "object"}' required={false} default="{'edit': 'Here is the code before editing:\n```\n{{{code_to_edit}}}\n```\n\nHere is the edit requested:\n"{{{user_input}}}"\n\nHere is the code after editing:'}"/>
-<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
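A sketch pairing the deleted TextGenUI example with its documented `server_url` and `streaming_url` properties made explicit (their default values are shown); imports assumed as elsewhere:

```python
# TextGenUI configuration sketch; the explicit URL keyword arguments are
# taken from the page's documented properties, and the ContinueConfig
# import path is an assumption.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.text_gen_interface import TextGenUI

config = ContinueConfig(
    models=Models(
        default=TextGenUI(
            model="<MODEL_NAME>",
            server_url="http://localhost:5000",
            streaming_url="http://localhost:5005",
        )
    )
)
```
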
diff --git a/docs/docs/reference/Models/together.md b/docs/docs/reference/Models/together.md
deleted file mode 100644
index 3718f046..00000000
--- a/docs/docs/reference/Models/together.md
+++ /dev/null
@@ -1,42 +0,0 @@
-import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
-
-# TogetherLLM
-
-The Together API is a cloud platform for running large AI models. You can sign up [here](https://api.together.xyz/signup), copy your API key on the initial welcome screen, and then hit the play button on any model from the [Together Models list](https://docs.together.ai/docs/models-inference). Change `~/.continue/config.py` to look like this:
-
-```python
-from continuedev.src.continuedev.core.models import Models
-from continuedev.src.continuedev.libs.llm.together import TogetherLLM
-
-config = ContinueConfig(
-    ...
-    models=Models(
-        default=TogetherLLM(
-            api_key="<API_KEY>",
-            model="togethercomputer/llama-2-13b-chat"
-        )
-    )
-)
-```
-
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/together.py)
-
-## Properties
-
-<ClassPropertyRef name='base_url' details='{"title": "Base Url", "description": "The base URL for your Together API instance", "default": "https://api.together.xyz", "type": "string"}' required={false} default="https://api.together.xyz"/>
-
-
-### Inherited Properties
-
-<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "Together API key", "type": "string"}' required={true} default=""/>
-<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
-<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "togethercomputer/RedPajama-INCITE-7B-Instruct", "type": "string"}' required={false} default="togethercomputer/RedPajama-INCITE-7B-Instruct"/>
-<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
-<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
-<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
-<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
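The Together example, completed with the imports it references and the documented `base_url` default made explicit — a sketch under the same import-path assumption as the others:

```python
# Self-contained TogetherLLM configuration sketch; ContinueConfig import
# path is an assumption, base_url shows the documented default.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.together import TogetherLLM

config = ContinueConfig(
    models=Models(
        default=TogetherLLM(
            api_key="<API_KEY>",
            model="togethercomputer/llama-2-13b-chat",
            base_url="https://api.together.xyz",
        )
    )
)
```
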
diff --git a/docs/docs/reference/config.md b/docs/docs/reference/config.md
index 1f683ed2..60d5b73e 100644
--- a/docs/docs/reference/config.md
+++ b/docs/docs/reference/config.md
@@ -23,4 +23,6 @@ Continue can be deeply customized by editing the `ContinueConfig` object in `~/.
 <ClassPropertyRef name='data_server_url' details='{"title": "Data Server Url", "description": "The URL of the server where development data is sent. No data is sent unless a valid user token is provided.", "default": "https://us-west1-autodebug.cloudfunctions.net", "type": "string"}' required={false} default="https://us-west1-autodebug.cloudfunctions.net"/>
 <ClassPropertyRef name='disable_summaries' details='{"title": "Disable Summaries", "description": "If set to `True`, Continue will not generate summaries for each Step. This can be useful if you want to save on compute.", "default": false, "type": "boolean"}' required={false} default="False"/>
 
+
 ### Inherited Properties
+
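
The config.md hunk above shows two top-level `ContinueConfig` properties. A minimal `~/.continue/config.py` sketch exercising them, with the documented defaults spelled out and the import path assumed as in the model examples:

```python
# Hypothetical minimal config using the two properties visible in the
# config.md hunk; values shown are the documented defaults.
from continuedev.src.continuedev.core.config import ContinueConfig

config = ContinueConfig(
    data_server_url="https://us-west1-autodebug.cloudfunctions.net",
    disable_summaries=False,
)
```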