author     Nate Sesti <33237525+sestinj@users.noreply.github.com>  2023-09-23 13:06:00 -0700
committer  GitHub <noreply@github.com>                             2023-09-23 13:06:00 -0700
commit     e976d60974a7837967d03807605cbf2e7b4f3f9a (patch)
tree       5ecb19062abb162832530dd953e9d2801026c23c /docs
parent     470711d25b44d1a545c57bc17d40d5e1fd402216 (diff)
download   sncontinue-e976d60974a7837967d03807605cbf2e7b4f3f9a.tar.gz
           sncontinue-e976d60974a7837967d03807605cbf2e7b4f3f9a.tar.bz2
           sncontinue-e976d60974a7837967d03807605cbf2e7b4f3f9a.zip
UI Redesign and fixing many details (#496)
* feat: :lipstick: start of major design upgrade
* feat: :lipstick: model selection page
* feat: :lipstick: use shortcut to add highlighted code as ctx
* feat: :lipstick: better display of errors
* feat: :lipstick: ui for learning keyboard shortcuts, more details
* refactor: :construction: testing slash commands ui
* Truncate continue.log
* refactor: :construction: refactoring client_session, ui, more
* feat: :bug: layout fixes
* refactor: :lipstick: ui to enter OpenAI Key
* refactor: :truck: rename MaybeProxyOpenAI -> OpenAIFreeTrial
* starting help center
* removing old shortcut docs
* fix: :bug: fix model setting logic to avoid overwrites
* feat: :lipstick: tutorial and model descriptions
* refactor: :truck: rename unused -> saved
* refactor: :truck: rename model roles
* feat: :lipstick: edit indicator
* refactor: :lipstick: move +, folder icons
* feat: :lipstick: tab to clear all context
* fix: :bug: context providers ui fixes
* fix: :bug: fix lag when stopping step
* fix: :bug: don't override system message for models
* fix: :bug: fix continue button cursor
* feat: :lipstick: title bar
* fix: :bug: updates to code highlighting logic and more
* fix: :bug: fix renaming of summarize model role
* feat: :lipstick: help page and better session title
* feat: :lipstick: more help page / ui improvements
* feat: :lipstick: set session title
* fix: :bug: small fixes for changing sessions
* fix: :bug: perfecting the highlighting code and ctx interactions
* style: :lipstick: sticky headers for scroll, ollama warming
* fix: :bug: fix toggle bug
---------
Co-authored-by: Ty Dunn <ty@tydunn.com>
Diffstat (limited to 'docs')
34 files changed, 564 insertions, 35 deletions
diff --git a/docs/docs/customization/models.md b/docs/docs/customization/models.md
index ac3b5f44..cebb0667 100644
--- a/docs/docs/customization/models.md
+++ b/docs/docs/customization/models.md
@@ -4,9 +4,9 @@ Continue makes it easy to swap out different LLM providers. Once you've added an
 
 Commercial Models
 
-- [MaybeProxyOpenAI](../reference/Models/maybe_proxy_openai.md) (default) - Use gpt-4 or gpt-3.5-turbo free with our API key, or with your API key. gpt-4 is probably the most capable model of all options.
+- [OpenAIFreeTrial](../reference/Models/openaifreetrial.md) (default) - Use gpt-4 or gpt-3.5-turbo free with our API key, or with your API key. gpt-4 is probably the most capable model of all options.
 - [OpenAI](../reference/Models/openai.md) - Use any OpenAI model with your own key. Can also change the base URL if you have a server that uses the OpenAI API format, including using the Azure OpenAI service, LocalAI, etc.
-- [AnthropicLLM](../reference/Models/anthropic.md) - Use claude-2 with your Anthropic API key. Claude 2 is also highly capable, and has a 100,000 token context window.
+- [AnthropicLLM](../reference/Models/anthropicllm.md) - Use claude-2 with your Anthropic API key. Claude 2 is also highly capable, and has a 100,000 token context window.
 
 Local Models
 
@@ -17,9 +17,9 @@ Local Models
 
 Open-Source Models (not local)
 
-- [TogetherLLM](../reference/Models/together.md) - Use any model from the [Together Models list](https://docs.together.ai/docs/models-inference) with your Together API key.
-- [ReplicateLLM](../reference/Models/replicate.md) - Use any open-source model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models) with your Replicate API key.
-- [HuggingFaceInferenceAPI](../reference/Models/hf_inference_api.md) - Use any open-source model from the [Hugging Face Inference API](https://huggingface.co/inference-api) with your Hugging Face token.
+- [TogetherLLM](../reference/Models/togetherllm.md) - Use any model from the [Together Models list](https://docs.together.ai/docs/models-inference) with your Together API key.
+- [ReplicateLLM](../reference/Models/replicatellm.md) - Use any open-source model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models) with your Replicate API key.
+- [HuggingFaceInferenceAPI](../reference/Models/huggingfaceinferenceapi.md) - Use any open-source model from the [Hugging Face Inference API](https://huggingface.co/inference-api) with your Hugging Face token.
 
 ## Change the default LLM
 
@@ -31,13 +31,13 @@ from continuedev.src.continuedev.core.models import Models
 config = ContinueConfig(
     ...
     models=Models(
-        default=MaybeProxyOpenAI(model="gpt-4"),
-        medium=MaybeProxyOpenAI(model="gpt-3.5-turbo")
+        default=OpenAIFreeTrial(model="gpt-4"),
+        summarize=OpenAIFreeTrial(model="gpt-3.5-turbo")
     )
 )
 ```
 
-The `default` and `medium` properties are different _model roles_. This allows different models to be used for different tasks. The available roles are `default`, `small`, `medium`, `large`, `edit`, and `chat`. `edit` is used when you use the '/edit' slash command, `chat` is used for all chat responses, and `medium` is used for summarizing. If not set, all roles will fall back to `default`. The values of these fields must be of the [`LLM`](https://github.com/continuedev/continue/blob/main/continuedev/src/continuedev/libs/llm/__init__.py) class, which implements methods for retrieving and streaming completions from an LLM.
+The `default` and `summarize` properties are different _model roles_. This allows different models to be used for different tasks. The available roles are `default`, `summarize`, `edit`, and `chat`. `edit` is used when you use the '/edit' slash command, `chat` is used for all chat responses, and `summarize` is used for summarizing. If not set, all roles will fall back to `default`. The values of these fields must be of the [`LLM`](https://github.com/continuedev/continue/blob/main/continuedev/src/continuedev/libs/llm/__init__.py) class, which implements methods for retrieving and streaming completions from an LLM.
 
 Below, we describe the `LLM` classes available in the Continue core library, and how they can be used.
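Read alongside the roles paragraph in the diff above, a fuller `~/.continue/config.py` would assign one model per role. This is a minimal sketch, not part of the commit: the `ContinueConfig` and `Models` import paths are assumed from the example in the diff, and the `openai_free_trial` module name is taken from the 'View the source' links further down.

```python
from continuedev.src.continuedev.core.config import ContinueConfig  # assumed import path
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.openai_free_trial import OpenAIFreeTrial

config = ContinueConfig(
    models=Models(
        # `edit` and `chat` are not set here, so they fall back to `default`
        default=OpenAIFreeTrial(model="gpt-4"),
        # Cheaper, faster model for the summarization role
        summarize=OpenAIFreeTrial(model="gpt-3.5-turbo"),
    )
)
```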
diff --git a/docs/docs/how-to-use-continue.md b/docs/docs/how-to-use-continue.md
index 3f21d92c..21b12395 100644
--- a/docs/docs/how-to-use-continue.md
+++ b/docs/docs/how-to-use-continue.md
@@ -21,12 +21,6 @@ If you are trying to use it for a new task and don’t have a sense of how much
 
 Remember: You are responsible for all code that you ship, whether it was written by you or by an LLM that you directed. This means it is crucial that you review what the LLM writes. To make this easier, we provide natural language descriptions of the actions the LLM took in the Continue GUI.
 
-## Keyboard shortcuts
-
-Here you will find a list of all of the default keyboard shortcuts in VS Code:
-
-![keyboard-shortucts](/img/keyboard-shortcuts.png)
-
 ## When to use Continue
 
 Here are tasks that Continue excels at helping you complete:
diff --git a/docs/docs/reference/Context Providers/diffcontextprovider.md b/docs/docs/reference/Context Providers/diffcontextprovider.md
new file mode 100644
index 00000000..54ba54b9
--- /dev/null
+++ b/docs/docs/reference/Context Providers/diffcontextprovider.md
@@ -0,0 +1,20 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# DiffContextProvider
+
+Type '@diff' to reference all of the changes you've made to your current branch. This is useful if you want to summarize what you've done or ask for a general review of your work before committing.
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/diff.py)
+
+## Properties
+
+<ClassPropertyRef name='workspace_dir' details='{"title": "Workspace Dir", "description": "The workspace directory in which to run `git diff`", "type": "string"}' required={false} default=""/>
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='title' details='{"title": "Title", "default": "diff", "type": "string"}' required={false} default="diff"/>
+<ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "Diff", "type": "string"}' required={false} default="Diff"/>
+<ClassPropertyRef name='description' details='{"title": "Description", "default": "Output of 'git diff' in current repo", "type": "string"}' required={false} default="Output of 'git diff' in current repo"/>
+<ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/>
+<ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "description": "Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type '@search <STRING_TO_SEARCH>'. This will change the behavior of the UI so that it can indicate the expectation for a query.", "default": false, "type": "boolean"}' required={false} default="False"/>
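As a usage sketch (assuming a `context_providers` field on `ContinueConfig`, which this commit's docs do not show), a provider like the one documented above would be registered in `~/.continue/config.py`; the import path mirrors the 'View the source' link:

```python
from continuedev.src.continuedev.core.config import ContinueConfig  # assumed import path
from continuedev.src.continuedev.plugins.context_providers.diff import DiffContextProvider

config = ContinueConfig(
    context_providers=[
        # Typing '@diff' in the Continue input then attaches `git diff` output as context
        DiffContextProvider(),
    ],
)
```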
diff --git a/docs/docs/reference/Context Providers/filecontextprovider.md b/docs/docs/reference/Context Providers/filecontextprovider.md
new file mode 100644
index 00000000..12e68478
--- /dev/null
+++ b/docs/docs/reference/Context Providers/filecontextprovider.md
@@ -0,0 +1,19 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# FileContextProvider
+
+The FileContextProvider is a ContextProvider that allows you to search files in the open workspace.
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/file.py)
+
+## Properties
+
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='title' details='{"title": "Title", "default": "file", "type": "string"}' required={false} default="file"/>
+<ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "Files", "type": "string"}' required={false} default="Files"/>
+<ClassPropertyRef name='description' details='{"title": "Description", "default": "Reference files in the current workspace", "type": "string"}' required={false} default="Reference files in the current workspace"/>
+<ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": false, "type": "boolean"}' required={false} default="False"/>
+<ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "description": "Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type '@search <STRING_TO_SEARCH>'. This will change the behavior of the UI so that it can indicate the expectation for a query.", "default": false, "type": "boolean"}' required={false} default="False"/>
diff --git a/docs/docs/reference/Context Providers/filetreecontextprovider.md b/docs/docs/reference/Context Providers/filetreecontextprovider.md
new file mode 100644
index 00000000..a5b11555
--- /dev/null
+++ b/docs/docs/reference/Context Providers/filetreecontextprovider.md
@@ -0,0 +1,20 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# FileTreeContextProvider
+
+Type '@tree' to reference the contents of your current workspace. The LLM will be able to see the nested directory structure of your project.
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/filetree.py)
+
+## Properties
+
+<ClassPropertyRef name='workspace_dir' details='{"title": "Workspace Dir", "description": "The workspace directory to display", "type": "string"}' required={false} default=""/>
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='title' details='{"title": "Title", "default": "tree", "type": "string"}' required={false} default="tree"/>
+<ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "File Tree", "type": "string"}' required={false} default="File Tree"/>
+<ClassPropertyRef name='description' details='{"title": "Description", "default": "Add a formatted file tree of this directory to the context", "type": "string"}' required={false} default="Add a formatted file tree of this directory to the context"/>
+<ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/>
+<ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "description": "Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type '@search <STRING_TO_SEARCH>'. This will change the behavior of the UI so that it can indicate the expectation for a query.", "default": false, "type": "boolean"}' required={false} default="False"/>
diff --git a/docs/docs/reference/Context Providers/githubissuescontextprovider.md b/docs/docs/reference/Context Providers/githubissuescontextprovider.md
new file mode 100644
index 00000000..f174df96
--- /dev/null
+++ b/docs/docs/reference/Context Providers/githubissuescontextprovider.md
@@ -0,0 +1,21 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# GitHubIssuesContextProvider
+
+The GitHubIssuesContextProvider is a ContextProvider that allows you to search GitHub issues in a repo. Type '@issue' to reference the title and contents of an issue.
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/github.py)
+
+## Properties
+
+<ClassPropertyRef name='repo_name' details='{"title": "Repo Name", "description": "The name of the GitHub repo from which to pull issues", "type": "string"}' required={true} default=""/>
+<ClassPropertyRef name='auth_token' details='{"title": "Auth Token", "description": "The GitHub auth token to use to authenticate with the GitHub API", "type": "string"}' required={true} default=""/>
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='title' details='{"title": "Title", "default": "issues", "type": "string"}' required={false} default="issues"/>
+<ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "GitHub Issues", "type": "string"}' required={false} default="GitHub Issues"/>
+<ClassPropertyRef name='description' details='{"title": "Description", "default": "Reference GitHub issues", "type": "string"}' required={false} default="Reference GitHub issues"/>
+<ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": false, "type": "boolean"}' required={false} default="False"/>
+<ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "description": "Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type '@search <STRING_TO_SEARCH>'. This will change the behavior of the UI so that it can indicate the expectation for a query.", "default": false, "type": "boolean"}' required={false} default="False"/>
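A registration sketch for this provider, under the same assumptions as the earlier context-provider example; both properties are required per the table above, and the repo name and token are placeholders:

```python
from continuedev.src.continuedev.core.config import ContinueConfig  # assumed import path
from continuedev.src.continuedev.plugins.context_providers.github import GitHubIssuesContextProvider

config = ContinueConfig(
    context_providers=[
        GitHubIssuesContextProvider(
            repo_name="<OWNER>/<REPO>",     # repo to pull issues from (required)
            auth_token="<GITHUB_TOKEN>",    # GitHub API token (required)
        ),
    ],
)
```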
diff --git a/docs/docs/reference/Context Providers/googlecontextprovider.md b/docs/docs/reference/Context Providers/googlecontextprovider.md
new file mode 100644
index 00000000..84a9fdb5
--- /dev/null
+++ b/docs/docs/reference/Context Providers/googlecontextprovider.md
@@ -0,0 +1,20 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# GoogleContextProvider
+
+Type '@google' to reference the results of a Google search. For example, type "@google python tutorial" if you want to search and discuss ways of learning Python.
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/google.py)
+
+## Properties
+
+<ClassPropertyRef name='serper_api_key' details='{"title": "Serper Api Key", "description": "Your SerpAPI key, used to programmatically make Google searches. You can get a key at https://serper.dev.", "type": "string"}' required={true} default=""/>
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='title' details='{"title": "Title", "default": "google", "type": "string"}' required={false} default="google"/>
+<ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "Google", "type": "string"}' required={false} default="Google"/>
+<ClassPropertyRef name='description' details='{"title": "Description", "default": "Search Google", "type": "string"}' required={false} default="Search Google"/>
+<ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/>
+<ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "default": true, "type": "boolean"}' required={false} default="True"/>
diff --git a/docs/docs/reference/Context Providers/searchcontextprovider.md b/docs/docs/reference/Context Providers/searchcontextprovider.md
new file mode 100644
index 00000000..9aa22f33
--- /dev/null
+++ b/docs/docs/reference/Context Providers/searchcontextprovider.md
@@ -0,0 +1,20 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# SearchContextProvider
+
+Type '@search' to reference the results of codebase search, just like the results you would get from VS Code search.
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/search.py)
+
+## Properties
+
+<ClassPropertyRef name='workspace_dir' details='{"title": "Workspace Dir", "description": "The workspace directory to search", "type": "string"}' required={false} default=""/>
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='title' details='{"title": "Title", "default": "search", "type": "string"}' required={false} default="search"/>
+<ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "Search", "type": "string"}' required={false} default="Search"/>
+<ClassPropertyRef name='description' details='{"title": "Description", "default": "Search the workspace for all matches of an exact string (e.g. '@search console.log')", "type": "string"}' required={false} default="Search the workspace for all matches of an exact string (e.g. '@search console.log')"/>
+<ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/>
+<ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "default": true, "type": "boolean"}' required={false} default="True"/>
diff --git a/docs/docs/reference/Context Providers/terminalcontextprovider.md b/docs/docs/reference/Context Providers/terminalcontextprovider.md
new file mode 100644
index 00000000..ca4ad01a
--- /dev/null
+++ b/docs/docs/reference/Context Providers/terminalcontextprovider.md
@@ -0,0 +1,20 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# TerminalContextProvider
+
+Type '@terminal' to reference the contents of your IDE's terminal.
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/terminal.py)
+
+## Properties
+
+<ClassPropertyRef name='get_last_n_commands' details='{"title": "Get Last N Commands", "description": "The number of previous commands to reference", "default": 3, "type": "integer"}' required={false} default="3"/>
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='title' details='{"title": "Title", "default": "terminal", "type": "string"}' required={false} default="terminal"/>
+<ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "Terminal", "type": "string"}' required={false} default="Terminal"/>
+<ClassPropertyRef name='description' details='{"title": "Description", "default": "Reference the contents of the terminal", "type": "string"}' required={false} default="Reference the contents of the terminal"/>
+<ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/>
+<ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "description": "Indicates whether the ContextProvider requires a query. For example, the SearchContextProvider requires you to type '@search <STRING_TO_SEARCH>'. This will change the behavior of the UI so that it can indicate the expectation for a query.", "default": false, "type": "boolean"}' required={false} default="False"/>
diff --git a/docs/docs/reference/Context Providers/urlcontextprovider.md b/docs/docs/reference/Context Providers/urlcontextprovider.md
new file mode 100644
index 00000000..38ddc0e5
--- /dev/null
+++ b/docs/docs/reference/Context Providers/urlcontextprovider.md
@@ -0,0 +1,20 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# URLContextProvider
+
+Type '@url' to reference the contents of a URL. You can either reference preset URLs, or reference one dynamically by typing '@url https://example.com'. The text contents of the page will be fetched and used as context.
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/url.py)
+
+## Properties
+
+<ClassPropertyRef name='preset_urls' details='{"title": "Preset Urls", "description": "A list of preset URLs that you will be able to quickly reference by typing '@url'", "default": [], "type": "array", "items": {"type": "string"}}' required={false} default="[]"/>
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='title' details='{"title": "Title", "default": "url", "type": "string"}' required={false} default="url"/>
+<ClassPropertyRef name='display_title' details='{"title": "Display Title", "default": "URL", "type": "string"}' required={false} default="URL"/>
+<ClassPropertyRef name='description' details='{"title": "Description", "default": "Reference the contents of a webpage", "type": "string"}' required={false} default="Reference the contents of a webpage"/>
+<ClassPropertyRef name='dynamic' details='{"title": "Dynamic", "default": true, "type": "boolean"}' required={false} default="True"/>
+<ClassPropertyRef name='requires_query' details='{"title": "Requires Query", "default": true, "type": "boolean"}' required={false} default="True"/>
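And a sketch for the URL provider above, again assuming the `context_providers` field of `ContinueConfig`; the preset URL is an illustrative placeholder:

```python
from continuedev.src.continuedev.core.config import ContinueConfig  # assumed import path
from continuedev.src.continuedev.plugins.context_providers.url import URLContextProvider

config = ContinueConfig(
    context_providers=[
        # '@url' then offers the preset below; any other URL can still be typed inline
        URLContextProvider(preset_urls=["https://example.com/docs"]),  # placeholder URL
    ],
)
```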
diff --git a/docs/docs/reference/Models/anthropic.md b/docs/docs/reference/Models/anthropic.md
index e2c6f683..128b706d 100644
--- a/docs/docs/reference/Models/anthropic.md
+++ b/docs/docs/reference/Models/anthropic.md
@@ -35,4 +35,5 @@ Claude 2 is not yet publicly released. You can request early access [here](https
 <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
 <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
 <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
 <ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/>
diff --git a/docs/docs/reference/Models/anthropicllm.md b/docs/docs/reference/Models/anthropicllm.md
new file mode 100644
index 00000000..128b706d
--- /dev/null
+++ b/docs/docs/reference/Models/anthropicllm.md
@@ -0,0 +1,39 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# AnthropicLLM
+
+Import the `AnthropicLLM` class and set it as the default model:
+
+```python
+from continuedev.src.continuedev.libs.llm.anthropic import AnthropicLLM
+
+config = ContinueConfig(
+    ...
+    models=Models(
+        default=AnthropicLLM(api_key="<API_KEY>", model="claude-2")
+    )
+)
+```
+
+Claude 2 is not yet publicly released. You can request early access [here](https://www.anthropic.com/earlyaccess).
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/anthropic.py)
+
+## Properties
+
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={true} default=""/>
+<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
+<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "claude-2", "type": "string"}' required={false} default="claude-2"/>
+<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
+<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
+<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
+<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/>
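The code block in the new page above elides its imports. A self-contained version would plausibly read as follows; the `ContinueConfig` and `Models` paths are assumptions carried over from the other examples in this commit:

```python
from continuedev.src.continuedev.core.config import ContinueConfig  # assumed import path
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.anthropic import AnthropicLLM

config = ContinueConfig(
    models=Models(
        # claude-2 has a 100,000 token context window
        default=AnthropicLLM(api_key="<API_KEY>", model="claude-2")
    )
)
```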
diff --git a/docs/docs/reference/Models/ggml.md b/docs/docs/reference/Models/ggml.md
index d02f6d05..7bdb5441 100644
--- a/docs/docs/reference/Models/ggml.md
+++ b/docs/docs/reference/Models/ggml.md
@@ -24,7 +24,6 @@ config = ContinueConfig(
 
 ## Properties
 
 <ClassPropertyRef name='server_url' details='{"title": "Server Url", "description": "URL of the OpenAI-compatible server where the model is being served", "default": "http://localhost:8000", "type": "string"}' required={false} default="http://localhost:8000"/>
-<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
 
 ### Inherited Properties
@@ -38,5 +37,6 @@ config = ContinueConfig(
 <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
 <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
 <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
 <ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
diff --git a/docs/docs/reference/Models/hf_inference_api.md b/docs/docs/reference/Models/hf_inference_api.md
index e7857b21..560309f2 100644
--- a/docs/docs/reference/Models/hf_inference_api.md
+++ b/docs/docs/reference/Models/hf_inference_api.md
@@ -37,5 +37,6 @@ config = ContinueConfig(
 <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
 <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
 <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
 <ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
diff --git a/docs/docs/reference/Models/hf_tgi.md b/docs/docs/reference/Models/hf_tgi.md
index ab3f4d61..2cee9fe1 100644
--- a/docs/docs/reference/Models/hf_tgi.md
+++ b/docs/docs/reference/Models/hf_tgi.md
@@ -22,5 +22,6 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
 <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
 <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
 <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
 <ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
diff --git a/docs/docs/reference/Models/huggingfaceinferenceapi.md b/docs/docs/reference/Models/huggingfaceinferenceapi.md
new file mode 100644
index 00000000..560309f2
--- /dev/null
+++ b/docs/docs/reference/Models/huggingfaceinferenceapi.md
@@ -0,0 +1,42 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# HuggingFaceInferenceAPI
+
+Hugging Face Inference API is a great option for newly released language models. Sign up for an account and add billing [here](https://huggingface.co/settings/billing), access the Inference Endpoints [here](https://ui.endpoints.huggingface.co), click on “New endpoint”, and fill out the form (e.g. select a model like [WizardCoder-Python-34B-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0)), and then deploy your model by clicking “Create Endpoint”. Change `~/.continue/config.py` to look like this:
+
+```python
+from continuedev.src.continuedev.core.models import Models
+from continuedev.src.continuedev.libs.llm.hf_inference_api import HuggingFaceInferenceAPI
+
+config = ContinueConfig(
+    ...
+    models=Models(
+        default=HuggingFaceInferenceAPI(
+            endpoint_url="<INFERENCE_API_ENDPOINT_URL>",
+            hf_token="<HUGGING_FACE_TOKEN>",
+        )
+    )
+)
+```
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/hf_inference_api.py)
+
+## Properties
+
+<ClassPropertyRef name='hf_token' details='{"title": "Hf Token", "description": "Your Hugging Face API token", "type": "string"}' required={true} default=""/>
+<ClassPropertyRef name='endpoint_url' details='{"title": "Endpoint Url", "description": "Your Hugging Face Inference API endpoint URL", "type": "string"}' required={false} default=""/>
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
+<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to use (optional for the HuggingFaceInferenceAPI class)", "default": "Hugging Face Inference API", "type": "string"}' required={false} default="Hugging Face Inference API"/>
+<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
+<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
+<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
+<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
+<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
diff --git a/docs/docs/reference/Models/huggingfacetgi.md b/docs/docs/reference/Models/huggingfacetgi.md
new file mode 100644
index 00000000..2cee9fe1
--- /dev/null
+++ b/docs/docs/reference/Models/huggingfacetgi.md
@@ -0,0 +1,27 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# HuggingFaceTGI
+
+
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/hf_tgi.py)
+
+## Properties
+
+<ClassPropertyRef name='server_url' details='{"title": "Server Url", "description": "URL of your TGI server", "default": "http://localhost:8080", "type": "string"}' required={false} default="http://localhost:8080"/>
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
+<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "huggingface-tgi", "type": "string"}' required={false} default="huggingface-tgi"/>
+<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
+<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
+<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
+<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
+<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
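Since the new HuggingFaceTGI page ships with an empty description, here is a minimal usage sketch: the `hf_tgi` module path follows the 'View the source' link, the `server_url` value is the default from the properties table, and the `ContinueConfig` import path is an assumption.

```python
from continuedev.src.continuedev.core.config import ContinueConfig  # assumed import path
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.hf_tgi import HuggingFaceTGI

config = ContinueConfig(
    models=Models(
        # Points at a text-generation-inference server running on the default local port
        default=HuggingFaceTGI(server_url="http://localhost:8080"),
    )
)
```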
diff --git a/docs/docs/reference/Models/llamacpp.md b/docs/docs/reference/Models/llamacpp.md
index ae4b6e62..8a6be11e 100644
--- a/docs/docs/reference/Models/llamacpp.md
+++ b/docs/docs/reference/Models/llamacpp.md
@@ -42,5 +42,6 @@ config = ContinueConfig(
 <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
 <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
 <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
 <ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
diff --git a/docs/docs/reference/Models/maybe_proxy_openai.md b/docs/docs/reference/Models/maybe_proxy_openai.md
index c080b54d..055054fd 100644
--- a/docs/docs/reference/Models/maybe_proxy_openai.md
+++ b/docs/docs/reference/Models/maybe_proxy_openai.md
@@ -1,8 +1,8 @@
 import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
 
-# MaybeProxyOpenAI
+# OpenAIFreeTrial
 
-With the `MaybeProxyOpenAI` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code.
+With the `OpenAIFreeTrial` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code.
 
 Once you are using Continue regularly though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps:
 
@@ -15,23 +15,22 @@ API_KEY = "<API_KEY>"
 config = ContinueConfig(
     ...
     models=Models(
-        default=MaybeProxyOpenAI(model="gpt-4", api_key=API_KEY),
-        medium=MaybeProxyOpenAI(model="gpt-3.5-turbo", api_key=API_KEY)
+        default=OpenAIFreeTrial(model="gpt-4", api_key=API_KEY),
+        medium=OpenAIFreeTrial(model="gpt-3.5-turbo", api_key=API_KEY)
     )
 )
 ```
 
-The `MaybeProxyOpenAI` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead.
+The `OpenAIFreeTrial` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead.
 
 These classes support any models available through the OpenAI API, assuming your API key has access, including "gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", and "gpt-4-32k".
 
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/maybe_proxy_openai.py)
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/openai_free_trial.py)
 
 ## Properties
 
 <ClassPropertyRef name='llm' details='{"$ref": "#/definitions/LLM"}' required={false} default=""/>
-
 
 ### Inherited Properties
 
 <ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "type": "string"}' required={true} default=""/>
@@ -43,5 +42,6 @@ These classes support any models available through the OpenAI API, assuming your
 <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
 <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
 <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
 <ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/>
 <ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
diff --git a/docs/docs/reference/Models/ollama.md b/docs/docs/reference/Models/ollama.md
index f0370b45..39257395 100644
--- a/docs/docs/reference/Models/ollama.md
+++ b/docs/docs/reference/Models/ollama.md
@@ -33,5 +33,6 @@ config = ContinueConfig(
 <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
 <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
 <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
-<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/>
 <ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/>
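For the Ollama page touched above, a similar sketch; the `ollama` module path is assumed to mirror the other `libs/llm` modules, and the model name is illustrative:

```python
from continuedev.src.continuedev.core.config import ContinueConfig  # assumed import path
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.ollama import Ollama  # assumed module path

config = ContinueConfig(
    models=Models(
        # Assumes an Ollama server running locally on its default port
        default=Ollama(model="codellama"),
    )
)
```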
No code block, no English explanation, no start/end tags.'}"/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/> <ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/> diff --git a/docs/docs/reference/Models/openai.md b/docs/docs/reference/Models/openai.md index f28e0598..e78dd404 100644 --- a/docs/docs/reference/Models/openai.md +++ b/docs/docs/reference/Models/openai.md @@ -32,7 +32,6 @@ Options for serving models locally with an OpenAI-compatible server include: ## Properties -<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use for requests.", "type": "string"}' required={false} default=""/> <ClassPropertyRef name='api_base' details='{"title": "Api Base", "description": "OpenAI API base URL.", "type": "string"}' required={false} default=""/> <ClassPropertyRef name='api_type' details='{"title": "Api Type", "description": "OpenAI API type.", "enum": ["azure", "openai"], "type": "string"}' required={false} default=""/> <ClassPropertyRef name='api_version' details='{"title": "Api Version", "description": "OpenAI API version. For use with Azure OpenAI Service.", "type": "string"}' required={false} default=""/> @@ -51,4 +50,5 @@ Options for serving models locally with an OpenAI-compatible server include: <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. 
If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use for requests.", "type": "string"}' required={false} default=""/> <ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/> diff --git a/docs/docs/reference/Models/openai_free_trial.md b/docs/docs/reference/Models/openai_free_trial.md new file mode 100644 index 00000000..cd510aa8 --- /dev/null +++ b/docs/docs/reference/Models/openai_free_trial.md @@ -0,0 +1,48 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# OpenAIFreeTrial + +With the `OpenAIFreeTrial` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code. + +Once you are using Continue regularly though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps: + +1. Copy your API key from https://platform.openai.com/account/api-keys +2. Open `~/.continue/config.py`. You can do this by using the '/config' command in Continue +3. Change the default LLMs to look like this: + +```python +API_KEY = "<API_KEY>" +config = ContinueConfig( + ... + models=Models( + default=OpenAIFreeTrial(model="gpt-4", api_key=API_KEY), + summarize=OpenAIFreeTrial(model="gpt-3.5-turbo", api_key=API_KEY) + ) +) +``` + +The `OpenAIFreeTrial` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead. + +These classes support any models available through the OpenAI API, assuming your API key has access, including "gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", and "gpt-4-32k". + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/openai_free_trial.py) + +## Properties + +<ClassPropertyRef name='llm' details='{"$ref": "#/definitions/LLM"}' required={false} default=""/> + + +### Inherited Properties + +<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. 
gpt-4, codellama)", "type": "string"}' required={true} default=""/> +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/> +<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/> +<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> +<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> +<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/> +<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/> diff --git a/docs/docs/reference/Models/openaifreetrial.md b/docs/docs/reference/Models/openaifreetrial.md new file mode 100644 index 00000000..a9efa6cc --- /dev/null +++ b/docs/docs/reference/Models/openaifreetrial.md @@ -0,0 +1,47 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# OpenAIFreeTrial + +With the `OpenAIFreeTrial` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code. + +Once you are using Continue regularly though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps: + +1. Copy your API key from https://platform.openai.com/account/api-keys +2. Open `~/.continue/config.py`. 
You can do this by using the '/config' command in Continue +3. Change the default LLMs to look like this: + +```python +API_KEY = "<API_KEY>" +config = ContinueConfig( + ... + models=Models( + default=OpenAIFreeTrial(model="gpt-4", api_key=API_KEY), + summarize=OpenAIFreeTrial(model="gpt-3.5-turbo", api_key=API_KEY) + ) +) +``` + +The `OpenAIFreeTrial` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead. + +These classes support any models available through the OpenAI API, assuming your API key has access, including "gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", and "gpt-4-32k". + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/openai_free_trial.py) + +## Properties + +<ClassPropertyRef name='llm' details='{"$ref": "#/definitions/LLM"}' required={false} default=""/> + +### Inherited Properties + +<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "type": "string"}' required={true} default=""/> +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/> +<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/> +<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> +<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> +<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. 
See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/> +<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/> diff --git a/docs/docs/reference/Models/queued.md b/docs/docs/reference/Models/queued.md index 231aa4dc..06942e3e 100644 --- a/docs/docs/reference/Models/queued.md +++ b/docs/docs/reference/Models/queued.md @@ -35,5 +35,6 @@ config = ContinueConfig( <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> <ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/> <ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/> diff --git a/docs/docs/reference/Models/queuedllm.md b/docs/docs/reference/Models/queuedllm.md new file mode 100644 index 00000000..06942e3e --- /dev/null +++ b/docs/docs/reference/Models/queuedllm.md @@ -0,0 +1,40 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# QueuedLLM + +QueuedLLM exists to make up for LLM servers that cannot handle multiple requests at once. It uses a lock to ensure that only one request is being processed at a time. + +If you are already using another LLM class and are experiencing this problem, you can just wrap it with the QueuedLLM class like this: + +```python +from continuedev.src.continuedev.libs.llm.queued import QueuedLLM + +config = ContinueConfig( + ... 
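+ # QueuedLLM holds a lock and forwards requests to the wrapped `llm` one at a
+ # time, so a server that can only handle a single request never sees concurrent calls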
+ models=Models( + default=QueuedLLM(llm=<OTHER_LLM_CLASS>) + ) +) +``` + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/queued.py) + +## Properties + +<ClassPropertyRef name='llm' details='{"title": "Llm", "description": "The LLM to wrap with a lock", "allOf": [{"$ref": "#/definitions/LLM"}]}' required={true} default=""/> + + +### Inherited Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/> +<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "queued", "type": "string"}' required={false} default="queued"/> +<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/> +<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> +<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> +<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. 
See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/> +<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/> diff --git a/docs/docs/reference/Models/replicate.md b/docs/docs/reference/Models/replicate.md index 83bfd383..879459e0 100644 --- a/docs/docs/reference/Models/replicate.md +++ b/docs/docs/reference/Models/replicate.md @@ -38,4 +38,5 @@ If you don't specify the `model` parameter, it will default to `replicate/llama- <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> -<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. 
No code block, no English explanation, no start/end tags.'}"/> diff --git a/docs/docs/reference/Models/replicatellm.md b/docs/docs/reference/Models/replicatellm.md new file mode 100644 index 00000000..879459e0 --- /dev/null +++ b/docs/docs/reference/Models/replicatellm.md @@ -0,0 +1,42 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# ReplicateLLM + +Replicate is a great option for newly released language models or models that you've deployed through their platform. Sign up for an account [here](https://replicate.ai/), copy your API key, and then select any model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models). Change `~/.continue/config.py` to look like this: + +```python +from continuedev.src.continuedev.core.models import Models +from continuedev.src.continuedev.libs.llm.replicate import ReplicateLLM + +config = ContinueConfig( + ... + models=Models( + default=ReplicateLLM( + model="replicate/codellama-13b-instruct:da5676342de1a5a335b848383af297f592b816b950a43d251a0a9edd0113604b", + api_key="my-replicate-api-key") + ) +) +``` + +If you don't specify the `model` parameter, it will default to `replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781`. + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/replicate.py) + +## Properties + + + +### Inherited Properties + +<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "Replicate API key", "type": "string"}' required={true} default=""/> +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/> +<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781", "type": "string"}' required={false} default="replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781"/> +<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/> +<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. 
If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> +<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> +<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/> diff --git a/docs/docs/reference/Models/text_gen_interface.md b/docs/docs/reference/Models/text_gen_interface.md index d910bee2..bb8dce1d 100644 --- a/docs/docs/reference/Models/text_gen_interface.md +++ b/docs/docs/reference/Models/text_gen_interface.md @@ -36,5 +36,6 @@ config = ContinueConfig( <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> -<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. 
See the documentation for more information.", "default": {"edit": "Here is the code before editing:\n```\n{{code_to_edit}}\n```\n\nHere is the edit requested:\n\"{{user_input}}\"\n\nHere is the code after editing:"}, "type": "object"}' required={false} default="{'edit': 'Here is the code before editing:\n```\n{{code_to_edit}}\n```\n\nHere is the edit requested:\n"{{user_input}}"\n\nHere is the code after editing:'}"/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Here is the code before editing:\n```\n{{{code_to_edit}}}\n```\n\nHere is the edit requested:\n\"{{{user_input}}}\"\n\nHere is the code after editing:"}, "type": "object"}' required={false} default="{'edit': 'Here is the code before editing:\n```\n{{{code_to_edit}}}\n```\n\nHere is the edit requested:\n"{{{user_input}}}"\n\nHere is the code after editing:'}"/> <ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/> diff --git a/docs/docs/reference/Models/textgenui.md b/docs/docs/reference/Models/textgenui.md new file mode 100644 index 00000000..bb8dce1d --- /dev/null +++ b/docs/docs/reference/Models/textgenui.md @@ -0,0 +1,41 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# TextGenUI + +TextGenUI is a comprehensive, open-source language model UI and local server. You can set it up with an OpenAI-compatible server plugin, but if for some reason that doesn't work, you can use this class like so: + +```python +from continuedev.src.continuedev.libs.llm.text_gen_interface import TextGenUI + +config = ContinueConfig( + ... 
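+ # By default this class talks to a local server at http://localhost:5000, with
+ # streaming on http://localhost:5005; set server_url/streaming_url to override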
+ models=Models( + default=TextGenUI( + model="<MODEL_NAME>", + ) + ) +) +``` + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/text_gen_interface.py) + +## Properties + +<ClassPropertyRef name='server_url' details='{"title": "Server Url", "description": "URL of your TextGenUI server", "default": "http://localhost:5000", "type": "string"}' required={false} default="http://localhost:5000"/> +<ClassPropertyRef name='streaming_url' details='{"title": "Streaming Url", "description": "URL of your TextGenUI streaming server (separate from main server URL)", "default": "http://localhost:5005", "type": "string"}' required={false} default="http://localhost:5005"/> + + +### Inherited Properties + +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/> +<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "text-gen-ui", "type": "string"}' required={false} default="text-gen-ui"/> +<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/> +<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> +<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> +<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. 
See the documentation for more information.", "default": {"edit": "Here is the code before editing:\n```\n{{{code_to_edit}}}\n```\n\nHere is the edit requested:\n\"{{{user_input}}}\"\n\nHere is the code after editing:"}, "type": "object"}' required={false} default="{'edit': 'Here is the code before editing:\n```\n{{{code_to_edit}}}\n```\n\nHere is the edit requested:\n"{{{user_input}}}"\n\nHere is the code after editing:'}"/> +<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "The API key for the LLM provider.", "type": "string"}' required={false} default=""/> diff --git a/docs/docs/reference/Models/together.md b/docs/docs/reference/Models/together.md index 6838ba36..3718f046 100644 --- a/docs/docs/reference/Models/together.md +++ b/docs/docs/reference/Models/together.md @@ -38,4 +38,5 @@ config = ContinueConfig( <ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> <ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> <ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> -<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{code_to_edit}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{user_input}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. 
No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/> diff --git a/docs/docs/reference/Models/togetherllm.md b/docs/docs/reference/Models/togetherllm.md new file mode 100644 index 00000000..3718f046 --- /dev/null +++ b/docs/docs/reference/Models/togetherllm.md @@ -0,0 +1,42 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# TogetherLLM + +The Together API is a cloud platform for running large AI models. You can sign up [here](https://api.together.xyz/signup), copy your API key on the initial welcome screen, and then hit the play button on any model from the [Together Models list](https://docs.together.ai/docs/models-inference). Change `~/.continue/config.py` to look like this: + +```python +from continuedev.src.continuedev.core.models import Models +from continuedev.src.continuedev.libs.llm.together import TogetherLLM + +config = ContinueConfig( + ... + models=Models( + default=TogetherLLM( + api_key="<API_KEY>", + model="togethercomputer/llama-2-13b-chat" + ) + ) +) +``` + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/together.py) + +## Properties + +<ClassPropertyRef name='base_url' details='{"title": "Base Url", "description": "The base URL for your Together API instance", "default": "https://api.together.xyz", "type": "string"}' required={false} default="https://api.together.xyz"/> + + +### Inherited Properties + +<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "Together API key", "type": "string"}' required={true} default=""/> +<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/> +<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "togethercomputer/RedPajama-INCITE-7B-Instruct", "type": "string"}' required={false} default="togethercomputer/RedPajama-INCITE-7B-Instruct"/> +<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/> +<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. 
If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/> +<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/> +<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/> +<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {"edit": "Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags."}, "type": "object"}' required={false} default="{'edit': 'Consider the following code:\n```\n{{{code_to_edit}}}\n```\nEdit the code to perfectly satisfy the following user request:\n{{{user_input}}}\nOutput nothing except for the code. No code block, no English explanation, no start/end tags.'}"/> diff --git a/docs/docs/reference/config.md b/docs/docs/reference/config.md index f867ee1e..1f683ed2 100644 --- a/docs/docs/reference/config.md +++ b/docs/docs/reference/config.md @@ -11,7 +11,7 @@ Continue can be deeply customized by editing the `ContinueConfig` object in `~/. <ClassPropertyRef name='steps_on_startup' details='{"title": "Steps On Startup", "description": "Steps that will be automatically run at the beginning of a new session", "default": [], "type": "array", "items": {"$ref": "#/definitions/Step"}}' required={false} default="[]"/> <ClassPropertyRef name='disallowed_steps' details='{"title": "Disallowed Steps", "description": "Steps that are not allowed to be run, and will be skipped if attempted", "default": [], "type": "array", "items": {"type": "string"}}' required={false} default="[]"/> <ClassPropertyRef name='allow_anonymous_telemetry' details='{"title": "Allow Anonymous Telemetry", "description": "If this field is set to True, we will collect anonymous telemetry as described in the documentation page on telemetry. If set to False, we will not collect any data.", "default": true, "type": "boolean"}' required={false} default="True"/> -<ClassPropertyRef name='models' details='{"title": "Models", "description": "Configuration for the models used by Continue. 
Read more about how to configure models in the documentation.", "default": {"default": {"title": null, "system_message": null, "context_length": 2048, "model": "gpt-4", "stop_tokens": null, "timeout": 300, "verify_ssl": null, "ca_bundle_path": null, "prompt_templates": {}, "api_key": null, "llm": null, "class_name": "MaybeProxyOpenAI"}, "small": null, "medium": {"title": null, "system_message": null, "context_length": 2048, "model": "gpt-3.5-turbo", "stop_tokens": null, "timeout": 300, "verify_ssl": null, "ca_bundle_path": null, "prompt_templates": {}, "api_key": null, "llm": null, "class_name": "MaybeProxyOpenAI"}, "large": null, "edit": null, "chat": null, "unused": []}, "allOf": [{"$ref": "#/definitions/Models"}]}' required={false} default="{'default': {'title': None, 'system_message': None, 'context_length': 2048, 'model': 'gpt-4', 'stop_tokens': None, 'timeout': 300, 'verify_ssl': None, 'ca_bundle_path': None, 'prompt_templates': {}, 'api_key': None, 'llm': None, 'class_name': 'MaybeProxyOpenAI'}, 'small': None, 'medium': {'title': None, 'system_message': None, 'context_length': 2048, 'model': 'gpt-3.5-turbo', 'stop_tokens': None, 'timeout': 300, 'verify_ssl': None, 'ca_bundle_path': None, 'prompt_templates': {}, 'api_key': None, 'llm': None, 'class_name': 'MaybeProxyOpenAI'}, 'large': None, 'edit': None, 'chat': None, 'unused': []}"/> +<ClassPropertyRef name='models' details='{"title": "Models", "description": "Configuration for the models used by Continue. Read more about how to configure models in the documentation.", "default": {"default": {"title": null, "system_message": null, "context_length": 2048, "model": "gpt-4", "stop_tokens": null, "timeout": 300, "verify_ssl": null, "ca_bundle_path": null, "proxy": null, "prompt_templates": {}, "api_key": null, "llm": null, "class_name": "OpenAIFreeTrial"}, "summarize": {"title": null, "system_message": null, "context_length": 2048, "model": "gpt-3.5-turbo", "stop_tokens": null, "timeout": 300, "verify_ssl": null, "ca_bundle_path": null, "proxy": null, "prompt_templates": {}, "api_key": null, "llm": null, "class_name": "OpenAIFreeTrial"}, "edit": null, "chat": null, "saved": []}, "allOf": [{"$ref": "#/definitions/Models"}]}' required={false} default="{'default': {'title': None, 'system_message': None, 'context_length': 2048, 'model': 'gpt-4', 'stop_tokens': None, 'timeout': 300, 'verify_ssl': None, 'ca_bundle_path': None, 'proxy': None, 'prompt_templates': {}, 'api_key': None, 'llm': None, 'class_name': 'OpenAIFreeTrial'}, 'summarize': {'title': None, 'system_message': None, 'context_length': 2048, 'model': 'gpt-3.5-turbo', 'stop_tokens': None, 'timeout': 300, 'verify_ssl': None, 'ca_bundle_path': None, 'proxy': None, 'prompt_templates': {}, 'api_key': None, 'llm': None, 'class_name': 'OpenAIFreeTrial'}, 'edit': None, 'chat': None, 'saved': []}"/> <ClassPropertyRef name='temperature' details='{"title": "Temperature", "description": "The temperature parameter for sampling from the LLM. Higher temperatures will result in more random output, while lower temperatures will result in more predictable output. This value ranges from 0 to 1.", "default": 0.5, "type": "number"}' required={false} default="0.5"/> <ClassPropertyRef name='custom_commands' details='{"title": "Custom Commands", "description": "An array of custom commands that allow you to reuse prompts. Each has name, description, and prompt properties. 
When you enter /<name> in the text input, it will act as a shortcut to the prompt.", "default": [{"name": "test", "prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.", "description": "This is an example custom command. Use /config to edit it and create more"}], "type": "array", "items": {"$ref": "#/definitions/CustomCommand"}}' required={false} default="[{'name': 'test', 'prompt': "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.", 'description': 'This is an example custom command. Use /config to edit it and create more'}]"/> <ClassPropertyRef name='slash_commands' details='{"title": "Slash Commands", "description": "An array of slash commands that let you map custom Steps to a shortcut.", "default": [], "type": "array", "items": {"$ref": "#/definitions/SlashCommand"}}' required={false} default="[]"/> @@ -23,6 +23,4 @@ Continue can be deeply customized by editing the `ContinueConfig` object in `~/. <ClassPropertyRef name='data_server_url' details='{"title": "Data Server Url", "description": "The URL of the server where development data is sent. No data is sent unless a valid user token is provided.", "default": "https://us-west1-autodebug.cloudfunctions.net", "type": "string"}' required={false} default="https://us-west1-autodebug.cloudfunctions.net"/> <ClassPropertyRef name='disable_summaries' details='{"title": "Disable Summaries", "description": "If set to `True`, Continue will not generate summaries for each Step. This can be useful if you want to save on compute.", "default": false, "type": "boolean"}' required={false} default="False"/> - ### Inherited Properties -
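To illustrate the `custom_commands` property shown above, here is a minimal sketch of a reusable prompt shortcut. The `CustomCommand` import path and the `/check` command are assumptions for illustration, not part of the documented defaults:

```python
from continuedev.src.continuedev.core.config import ContinueConfig, CustomCommand

config = ContinueConfig(
    custom_commands=[
        CustomCommand(
            # Typing /check in the text input acts as a shortcut to this prompt
            name="check",
            description="Review the selected code for bugs and edge cases",
            prompt="Review the selected code, pointing out any bugs or unhandled edge cases. Give the review as chat output, don't edit any file.",
        )
    ],
)
```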
diff --git a/docs/docs/walkthroughs/create-a-recipe.md b/docs/docs/walkthroughs/create-a-recipe.md index 3ec641c6..0d92fb92 100644 --- a/docs/docs/walkthroughs/create-a-recipe.md +++ b/docs/docs/walkthroughs/create-a-recipe.md @@ -31,7 +31,7 @@ If you'd like to override the default description of your steps, which is just t
- Return a static string
- Store state in a class attribute during the run method (prefix the attribute with a double underscore, which signals through Pydantic that it is internal state rather than a parameter of the Step), and then read it back in the describe method.
-- Use state in conjunction with the `models` parameter of the describe method to autogenerate a description with a language model. For example, if you'd used an attribute called `__code_written` to store a string representing some code that was written, you could implement describe as `return models.medium.complete(f"{self.\_\_code_written}\n\nSummarize the changes made in the above code.")`.
+- Use state in conjunction with the `models` parameter of the describe method to autogenerate a description with a language model. For example, if you'd used an attribute called `__code_written` to store a string representing some code that was written, you could implement describe as `return models.summarize.complete(f"{self.__code_written}\n\nSummarize the changes made in the above code.")`, as in the sketch below.
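A minimal sketch of this pattern, assuming the `Step` base class is importable from `continuedev.src.continuedev.core.main` and that `complete` is awaitable; the step itself is hypothetical:

```python
from continuedev.src.continuedev.core.main import Step
from continuedev.src.continuedev.core.models import Models


class WriteGreetingStep(Step):
    async def run(self, sdk):
        # Double-underscore attribute: internal state, not a parameter of the Step
        self.__code_written = 'print("Hello, world!")'
        # ... the step's actual work with `sdk` would happen here ...

    async def describe(self, models: Models) -> str:
        # Reuse the stored state to autogenerate a description with the summarize role
        return await models.summarize.complete(
            f"{self.__code_written}\n\nSummarize the changes made in the above code."
        )
```

Because `run` records what it did, the generated description reflects the actual changes rather than a canned string.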
## 2. Compose steps together into a complete recipe
diff --git a/docs/static/img/keyboard-shortcuts.png b/docs/static/img/keyboard-shortcuts.png
deleted file mode 100644
index a9b75fc5..00000000
--- a/docs/static/img/keyboard-shortcuts.png
+++ /dev/null
Binary files differ