From e976d60974a7837967d03807605cbf2e7b4f3f9a Mon Sep 17 00:00:00 2001 From: Nate Sesti <33237525+sestinj@users.noreply.github.com> Date: Sat, 23 Sep 2023 13:06:00 -0700 Subject: UI Redesign and fixing many details (#496) * feat: :lipstick: start of major design upgrade * feat: :lipstick: model selection page * feat: :lipstick: use shortcut to add highlighted code as ctx * feat: :lipstick: better display of errors * feat: :lipstick: ui for learning keyboard shortcuts, more details * refactor: :construction: testing slash commands ui * Truncate continue.log * refactor: :construction: refactoring client_session, ui, more * feat: :bug: layout fixes * refactor: :lipstick: ui to enter OpenAI Key * refactor: :truck: rename MaybeProxyOpenAI -> OpenAIFreeTrial * starting help center * removing old shortcut docs * fix: :bug: fix model setting logic to avoid overwrites * feat: :lipstick: tutorial and model descriptions * refactor: :truck: rename unused -> saved * refactor: :truck: rename model roles * feat: :lipstick: edit indicator * refactor: :lipstick: move +, folder icons * feat: :lipstick: tab to clear all context * fix: :bug: context providers ui fixes * fix: :bug: fix lag when stopping step * fix: :bug: don't override system message for models * fix: :bug: fix continue button cursor * feat: :lipstick: title bar * fix: :bug: updates to code highlighting logic and more * fix: :bug: fix renaming of summarize model role * feat: :lipstick: help page and better session title * feat: :lipstick: more help page / ui improvements * feat: :lipstick: set session title * fix: :bug: small fixes for changing sessions * fix: :bug: perfecting the highlighting code and ctx interactions * style: :lipstick: sticky headers for scroll, ollama warming * fix: :bug: fix toggle bug --------- Co-authored-by: Ty Dunn --- docs/docs/customization/models.md | 16 +++---- docs/docs/how-to-use-continue.md | 6 --- .../Context Providers/diffcontextprovider.md | 20 +++++++++ .../Context Providers/filecontextprovider.md | 19 ++++++++ .../Context Providers/filetreecontextprovider.md | 20 +++++++++ .../githubissuescontextprovider.md | 21 +++++++++ .../Context Providers/googlecontextprovider.md | 20 +++++++++ .../Context Providers/searchcontextprovider.md | 20 +++++++++ .../Context Providers/terminalcontextprovider.md | 20 +++++++++ .../Context Providers/urlcontextprovider.md | 20 +++++++++ docs/docs/reference/Models/anthropic.md | 1 + docs/docs/reference/Models/anthropicllm.md | 39 +++++++++++++++++ docs/docs/reference/Models/ggml.md | 4 +- docs/docs/reference/Models/hf_inference_api.md | 3 +- docs/docs/reference/Models/hf_tgi.md | 3 +- .../reference/Models/huggingfaceinferenceapi.md | 42 ++++++++++++++++++ docs/docs/reference/Models/huggingfacetgi.md | 27 ++++++++++++ docs/docs/reference/Models/llamacpp.md | 3 +- docs/docs/reference/Models/maybe_proxy_openai.md | 14 +++--- docs/docs/reference/Models/ollama.md | 3 +- docs/docs/reference/Models/openai.md | 2 +- docs/docs/reference/Models/openai_free_trial.md | 48 +++++++++++++++++++++ docs/docs/reference/Models/openaifreetrial.md | 47 ++++++++++++++++++++ docs/docs/reference/Models/queued.md | 1 + docs/docs/reference/Models/queuedllm.md | 40 +++++++++++++++++ docs/docs/reference/Models/replicate.md | 3 +- docs/docs/reference/Models/replicatellm.md | 42 ++++++++++++++++++ docs/docs/reference/Models/text_gen_interface.md | 3 +- docs/docs/reference/Models/textgenui.md | 41 ++++++++++++++++++ docs/docs/reference/Models/together.md | 3 +- docs/docs/reference/Models/togetherllm.md | 42 
++++++++++++++++++ docs/docs/reference/config.md | 4 +- docs/docs/walkthroughs/create-a-recipe.md | 2 +- docs/static/img/keyboard-shortcuts.png | Bin 142544 -> 0 bytes 34 files changed, 564 insertions(+), 35 deletions(-) create mode 100644 docs/docs/reference/Context Providers/diffcontextprovider.md create mode 100644 docs/docs/reference/Context Providers/filecontextprovider.md create mode 100644 docs/docs/reference/Context Providers/filetreecontextprovider.md create mode 100644 docs/docs/reference/Context Providers/githubissuescontextprovider.md create mode 100644 docs/docs/reference/Context Providers/googlecontextprovider.md create mode 100644 docs/docs/reference/Context Providers/searchcontextprovider.md create mode 100644 docs/docs/reference/Context Providers/terminalcontextprovider.md create mode 100644 docs/docs/reference/Context Providers/urlcontextprovider.md create mode 100644 docs/docs/reference/Models/anthropicllm.md create mode 100644 docs/docs/reference/Models/huggingfaceinferenceapi.md create mode 100644 docs/docs/reference/Models/huggingfacetgi.md create mode 100644 docs/docs/reference/Models/openai_free_trial.md create mode 100644 docs/docs/reference/Models/openaifreetrial.md create mode 100644 docs/docs/reference/Models/queuedllm.md create mode 100644 docs/docs/reference/Models/replicatellm.md create mode 100644 docs/docs/reference/Models/textgenui.md create mode 100644 docs/docs/reference/Models/togetherllm.md delete mode 100644 docs/static/img/keyboard-shortcuts.png diff --git a/docs/docs/customization/models.md b/docs/docs/customization/models.md index ac3b5f44..cebb0667 100644 --- a/docs/docs/customization/models.md +++ b/docs/docs/customization/models.md @@ -4,9 +4,9 @@ Continue makes it easy to swap out different LLM providers. Once you've added an Commercial Models -- [MaybeProxyOpenAI](../reference/Models/maybe_proxy_openai.md) (default) - Use gpt-4 or gpt-3.5-turbo free with our API key, or with your API key. gpt-4 is probably the most capable model of all options. +- [OpenAIFreeTrial](../reference/Models/openaifreetrial.md) (default) - Use gpt-4 or gpt-3.5-turbo free with our API key, or with your API key. gpt-4 is probably the most capable model of all options. - [OpenAI](../reference/Models/openai.md) - Use any OpenAI model with your own key. Can also change the base URL if you have a server that uses the OpenAI API format, including using the Azure OpenAI service, LocalAI, etc. -- [AnthropicLLM](../reference/Models/anthropic.md) - Use claude-2 with your Anthropic API key. Claude 2 is also highly capable, and has a 100,000 token context window. +- [AnthropicLLM](../reference/Models/anthropicllm.md) - Use claude-2 with your Anthropic API key. Claude 2 is also highly capable, and has a 100,000 token context window. Local Models @@ -17,9 +17,9 @@ Local Models Open-Source Models (not local) -- [TogetherLLM](../reference/Models/together.md) - Use any model from the [Together Models list](https://docs.together.ai/docs/models-inference) with your Together API key. -- [ReplicateLLM](../reference/Models/replicate.md) - Use any open-source model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models) with your Replicate API key. -- [HuggingFaceInferenceAPI](../reference/Models/hf_inference_api.md) - Use any open-source model from the [Hugging Face Inference API](https://huggingface.co/inference-api) with your Hugging Face token.
+- [TogetherLLM](../reference/Models/togetherllm.md) - Use any model from the [Together Models list](https://docs.together.ai/docs/models-inference) with your Together API key. +- [ReplicateLLM](../reference/Models/replicatellm.md) - Use any open-source model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models) with your Replicate API key. +- [HuggingFaceInferenceAPI](../reference/Models/huggingfaceinferenceapi.md) - Use any open-source model from the [Hugging Face Inference API](https://huggingface.co/inference-api) with your Hugging Face token. ## Change the default LLM @@ -31,13 +31,13 @@ from continuedev.src.continuedev.core.models import Models config = ContinueConfig( ... models=Models( - default=MaybeProxyOpenAI(model="gpt-4"), - medium=MaybeProxyOpenAI(model="gpt-3.5-turbo") + default=OpenAIFreeTrial(model="gpt-4"), + summarize=OpenAIFreeTrial(model="gpt-3.5-turbo") ) ) ``` -The `default` and `medium` properties are different _model roles_. This allows different models to be used for different tasks. The available roles are `default`, `small`, `medium`, `large`, `edit`, and `chat`. `edit` is used when you use the '/edit' slash command, `chat` is used for all chat responses, and `medium` is used for summarizing. If not set, all roles will fall back to `default`. The values of these fields must be of the [`LLM`](https://github.com/continuedev/continue/blob/main/continuedev/src/continuedev/libs/llm/__init__.py) class, which implements methods for retrieving and streaming completions from an LLM. +The `default` and `summarize` properties are different _model roles_. This allows different models to be used for different tasks. The available roles are `default`, `summarize`, `edit`, and `chat`. `edit` is used when you use the '/edit' slash command, `chat` is used for all chat responses, and `summarize` is used for summarizing. If not set, all roles will fall back to `default`. The values of these fields must be of the [`LLM`](https://github.com/continuedev/continue/blob/main/continuedev/src/continuedev/libs/llm/__init__.py) class, which implements methods for retrieving and streaming completions from an LLM. Below, we describe the `LLM` classes available in the Continue core library, and how they can be used. diff --git a/docs/docs/how-to-use-continue.md b/docs/docs/how-to-use-continue.md index 3f21d92c..21b12395 100644 --- a/docs/docs/how-to-use-continue.md +++ b/docs/docs/how-to-use-continue.md @@ -21,12 +21,6 @@ If you are trying to use it for a new task and don’t have a sense of how much Remember: You are responsible for all code that you ship, whether it was written by you or by an LLM that you directed. This means it is crucial that you review what the LLM writes. To make this easier, we provide natural language descriptions of the actions the LLM took in the Continue GUI. 
-## Keyboard shortcuts - -Here you will find a list of all of the default keyboard shortcuts in VS Code: - -![keyboard-shortucts](/img/keyboard-shortcuts.png) - ## When to use Continue Here are tasks that Continue excels at helping you complete: diff --git a/docs/docs/reference/Context Providers/diffcontextprovider.md b/docs/docs/reference/Context Providers/diffcontextprovider.md new file mode 100644 index 00000000..54ba54b9 --- /dev/null +++ b/docs/docs/reference/Context Providers/diffcontextprovider.md @@ -0,0 +1,20 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# DiffContextProvider + +Type '@diff' to reference all of the changes you've made to your current branch. This is useful if you want to summarize what you've done or ask for a general review of your work before committing. + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/diff.py) + +## Properties + + + + +### Inherited Properties + + + + + + diff --git a/docs/docs/reference/Context Providers/filecontextprovider.md b/docs/docs/reference/Context Providers/filecontextprovider.md new file mode 100644 index 00000000..12e68478 --- /dev/null +++ b/docs/docs/reference/Context Providers/filecontextprovider.md @@ -0,0 +1,19 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# FileContextProvider + +The FileContextProvider is a ContextProvider that allows you to search files in the open workspace. + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/file.py) + +## Properties + + + +### Inherited Properties + + + + + + diff --git a/docs/docs/reference/Context Providers/filetreecontextprovider.md b/docs/docs/reference/Context Providers/filetreecontextprovider.md new file mode 100644 index 00000000..a5b11555 --- /dev/null +++ b/docs/docs/reference/Context Providers/filetreecontextprovider.md @@ -0,0 +1,20 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# FileTreeContextProvider + +Type '@tree' to reference the contents of your current workspace. The LLM will be able to see the nested directory structure of your project. + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/filetree.py) + +## Properties + + + + +### Inherited Properties + + + + + + diff --git a/docs/docs/reference/Context Providers/githubissuescontextprovider.md b/docs/docs/reference/Context Providers/githubissuescontextprovider.md new file mode 100644 index 00000000..f174df96 --- /dev/null +++ b/docs/docs/reference/Context Providers/githubissuescontextprovider.md @@ -0,0 +1,21 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# GitHubIssuesContextProvider + +The GitHubIssuesContextProvider is a ContextProvider that allows you to search GitHub issues in a repo. Type '@issue' to reference the title and contents of an issue. 
+ +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/github.py) + +## Properties + + + + + +### Inherited Properties + + + + + + diff --git a/docs/docs/reference/Context Providers/googlecontextprovider.md b/docs/docs/reference/Context Providers/googlecontextprovider.md new file mode 100644 index 00000000..84a9fdb5 --- /dev/null +++ b/docs/docs/reference/Context Providers/googlecontextprovider.md @@ -0,0 +1,20 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# GoogleContextProvider + +Type '@google' to reference the results of a Google search. For example, type "@google python tutorial" if you want to search and discuss ways of learning Python. + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/google.py) + +## Properties + + + + +### Inherited Properties + + + + + + diff --git a/docs/docs/reference/Context Providers/searchcontextprovider.md b/docs/docs/reference/Context Providers/searchcontextprovider.md new file mode 100644 index 00000000..9aa22f33 --- /dev/null +++ b/docs/docs/reference/Context Providers/searchcontextprovider.md @@ -0,0 +1,20 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# SearchContextProvider + +Type '@search' to reference the results of codebase search, just like the results you would get from VS Code search. + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/search.py) + +## Properties + + + + +### Inherited Properties + + + + + + diff --git a/docs/docs/reference/Context Providers/terminalcontextprovider.md b/docs/docs/reference/Context Providers/terminalcontextprovider.md new file mode 100644 index 00000000..ca4ad01a --- /dev/null +++ b/docs/docs/reference/Context Providers/terminalcontextprovider.md @@ -0,0 +1,20 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# TerminalContextProvider + +Type '@terminal' to reference the contents of your IDE's terminal. + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/terminal.py) + +## Properties + + + + +### Inherited Properties + + + + + + diff --git a/docs/docs/reference/Context Providers/urlcontextprovider.md b/docs/docs/reference/Context Providers/urlcontextprovider.md new file mode 100644 index 00000000..38ddc0e5 --- /dev/null +++ b/docs/docs/reference/Context Providers/urlcontextprovider.md @@ -0,0 +1,20 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# URLContextProvider + +Type '@url' to reference the contents of a URL. You can either reference preset URLs, or reference one dynamically by typing '@url https://example.com'. The text contents of the page will be fetched and used as context. + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/plugins/context_providers/url.py) + +## Properties + + + + +### Inherited Properties + + + + + + diff --git a/docs/docs/reference/Models/anthropic.md b/docs/docs/reference/Models/anthropic.md index e2c6f683..128b706d 100644 --- a/docs/docs/reference/Models/anthropic.md +++ b/docs/docs/reference/Models/anthropic.md @@ -35,4 +35,5 @@ Claude 2 is not yet publicly released. 
You can request early access [here](https + diff --git a/docs/docs/reference/Models/anthropicllm.md b/docs/docs/reference/Models/anthropicllm.md new file mode 100644 index 00000000..128b706d --- /dev/null +++ b/docs/docs/reference/Models/anthropicllm.md @@ -0,0 +1,39 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# AnthropicLLM + +Import the `AnthropicLLM` class and set it as the default model: + +```python +from continuedev.src.continuedev.libs.llm.anthropic import AnthropicLLM + +config = ContinueConfig( + ... + models=Models( + default=AnthropicLLM(api_key="", model="claude-2") + ) +) +``` + +Claude 2 is not yet publicly released. You can request early access [here](https://www.anthropic.com/earlyaccess). + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/anthropic.py) + +## Properties + + + +### Inherited Properties + + + + + + + + + + + + + diff --git a/docs/docs/reference/Models/ggml.md b/docs/docs/reference/Models/ggml.md index d02f6d05..7bdb5441 100644 --- a/docs/docs/reference/Models/ggml.md +++ b/docs/docs/reference/Models/ggml.md @@ -24,7 +24,6 @@ config = ContinueConfig( ## Properties - ### Inherited Properties @@ -38,5 +37,6 @@ config = ContinueConfig( - + + diff --git a/docs/docs/reference/Models/hf_inference_api.md b/docs/docs/reference/Models/hf_inference_api.md index e7857b21..560309f2 100644 --- a/docs/docs/reference/Models/hf_inference_api.md +++ b/docs/docs/reference/Models/hf_inference_api.md @@ -37,5 +37,6 @@ config = ContinueConfig( - + + diff --git a/docs/docs/reference/Models/hf_tgi.md b/docs/docs/reference/Models/hf_tgi.md index ab3f4d61..2cee9fe1 100644 --- a/docs/docs/reference/Models/hf_tgi.md +++ b/docs/docs/reference/Models/hf_tgi.md @@ -22,5 +22,6 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; - + + diff --git a/docs/docs/reference/Models/huggingfaceinferenceapi.md b/docs/docs/reference/Models/huggingfaceinferenceapi.md new file mode 100644 index 00000000..560309f2 --- /dev/null +++ b/docs/docs/reference/Models/huggingfaceinferenceapi.md @@ -0,0 +1,42 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# HuggingFaceInferenceAPI + +Hugging Face Inference API is a great option for newly released language models. Sign up for an account and add billing [here](https://huggingface.co/settings/billing), access the Inference Endpoints [here](https://ui.endpoints.huggingface.co), click on “New endpoint”, and fill out the form (e.g. select a model like [WizardCoder-Python-34B-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0)), and then deploy your model by clicking “Create Endpoint”. Change `~/.continue/config.py` to look like this: + +```python +from continuedev.src.continuedev.core.models import Models +from continuedev.src.continuedev.libs.llm.hf_inference_api import HuggingFaceInferenceAPI + +config = ContinueConfig( + ... 
+ models=Models( + default=HuggingFaceInferenceAPI( + endpoint_url="", + hf_token="", + ) + ) +) +``` + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/hf_inference_api.py) + +## Properties + + + + + +### Inherited Properties + + + + + + + + + + + + + diff --git a/docs/docs/reference/Models/huggingfacetgi.md b/docs/docs/reference/Models/huggingfacetgi.md new file mode 100644 index 00000000..2cee9fe1 --- /dev/null +++ b/docs/docs/reference/Models/huggingfacetgi.md @@ -0,0 +1,27 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# HuggingFaceTGI + + + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/hf_tgi.py) + +## Properties + + + + +### Inherited Properties + + + + + + + + + + + + + diff --git a/docs/docs/reference/Models/llamacpp.md b/docs/docs/reference/Models/llamacpp.md index ae4b6e62..8a6be11e 100644 --- a/docs/docs/reference/Models/llamacpp.md +++ b/docs/docs/reference/Models/llamacpp.md @@ -42,5 +42,6 @@ config = ContinueConfig( - + + diff --git a/docs/docs/reference/Models/maybe_proxy_openai.md b/docs/docs/reference/Models/maybe_proxy_openai.md index c080b54d..055054fd 100644 --- a/docs/docs/reference/Models/maybe_proxy_openai.md +++ b/docs/docs/reference/Models/maybe_proxy_openai.md @@ -1,8 +1,8 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; -# MaybeProxyOpenAI +# OpenAIFreeTrial -With the `MaybeProxyOpenAI` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code. +With the `OpenAIFreeTrial` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code. Once you are using Continue regularly though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps: @@ -15,23 +15,22 @@ API_KEY = "" config = ContinueConfig( ... models=Models( - default=MaybeProxyOpenAI(model="gpt-4", api_key=API_KEY), - medium=MaybeProxyOpenAI(model="gpt-3.5-turbo", api_key=API_KEY) + default=OpenAIFreeTrial(model="gpt-4", api_key=API_KEY), + summarize=OpenAIFreeTrial(model="gpt-3.5-turbo", api_key=API_KEY) ) ) ``` -The `MaybeProxyOpenAI` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead. +The `OpenAIFreeTrial` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead. These classes support any models available through the OpenAI API, assuming your API key has access, including "gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", and "gpt-4-32k".
-[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/maybe_proxy_openai.py) +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/openai_free_trial.py) ## Properties - ### Inherited Properties @@ -43,5 +42,6 @@ These classes support any models available through the OpenAI API, assuming your + diff --git a/docs/docs/reference/Models/ollama.md b/docs/docs/reference/Models/ollama.md index f0370b45..39257395 100644 --- a/docs/docs/reference/Models/ollama.md +++ b/docs/docs/reference/Models/ollama.md @@ -33,5 +33,6 @@ config = ContinueConfig( - + + diff --git a/docs/docs/reference/Models/openai.md b/docs/docs/reference/Models/openai.md index f28e0598..e78dd404 100644 --- a/docs/docs/reference/Models/openai.md +++ b/docs/docs/reference/Models/openai.md @@ -32,7 +32,6 @@ Options for serving models locally with an OpenAI-compatible server include: ## Properties - @@ -51,4 +50,5 @@ Options for serving models locally with an OpenAI-compatible server include: + diff --git a/docs/docs/reference/Models/openai_free_trial.md b/docs/docs/reference/Models/openai_free_trial.md new file mode 100644 index 00000000..cd510aa8 --- /dev/null +++ b/docs/docs/reference/Models/openai_free_trial.md @@ -0,0 +1,48 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# OpenAIFreeTrial + +With the `OpenAIFreeTrial` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code. + +Once you are using Continue regularly though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps: + +1. Copy your API key from https://platform.openai.com/account/api-keys +2. Open `~/.continue/config.py`. You can do this by using the '/config' command in Continue +3. Change the default LLMs to look like this: + +```python +API_KEY = "" +config = ContinueConfig( + ... + models=Models( + default=OpenAIFreeTrial(model="gpt-4", api_key=API_KEY), + summarize=OpenAIFreeTrial(model="gpt-3.5-turbo", api_key=API_KEY) + ) +) +``` + +The `OpenAIFreeTrial` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead. + +These classes support any models available through the OpenAI API, assuming your API key has access, including "gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", and "gpt-4-32k". + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/openai_free_trial.py) + +## Properties + + + + +### Inherited Properties + + + + + + + + + + + + + diff --git a/docs/docs/reference/Models/openaifreetrial.md b/docs/docs/reference/Models/openaifreetrial.md new file mode 100644 index 00000000..a9efa6cc --- /dev/null +++ b/docs/docs/reference/Models/openaifreetrial.md @@ -0,0 +1,47 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# OpenAIFreeTrial + +With the `OpenAIFreeTrial` `LLM`, new users can try out Continue with GPT-4 using a proxy server that securely makes calls to OpenAI using our API key. Continue should just work the first time you install the extension in VS Code. + +Once you are using Continue regularly though, you will need to add an OpenAI API key that has access to GPT-4 by following these steps: + +1.
Copy your API key from https://platform.openai.com/account/api-keys +2. Open `~/.continue/config.py`. You can do this by using the '/config' command in Continue +3. Change the default LLMs to look like this: + +```python +API_KEY = "" +config = ContinueConfig( + ... + models=Models( + default=OpenAIFreeTrial(model="gpt-4", api_key=API_KEY), + summarize=OpenAIFreeTrial(model="gpt-3.5-turbo", api_key=API_KEY) + ) +) +``` + +The `OpenAIFreeTrial` class will automatically switch to using your API key instead of ours. If you'd like to explicitly use one or the other, you can use the `ProxyServer` or `OpenAI` classes instead. + +These classes support any models available through the OpenAI API, assuming your API key has access, including "gpt-4", "gpt-3.5-turbo", "gpt-3.5-turbo-16k", and "gpt-4-32k". + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/openai_free_trial.py) + +## Properties + + + +### Inherited Properties + + + + + + + + + + + + + diff --git a/docs/docs/reference/Models/queued.md b/docs/docs/reference/Models/queued.md index 231aa4dc..06942e3e 100644 --- a/docs/docs/reference/Models/queued.md +++ b/docs/docs/reference/Models/queued.md @@ -35,5 +35,6 @@ config = ContinueConfig( + diff --git a/docs/docs/reference/Models/queuedllm.md b/docs/docs/reference/Models/queuedllm.md new file mode 100644 index 00000000..06942e3e --- /dev/null +++ b/docs/docs/reference/Models/queuedllm.md @@ -0,0 +1,40 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# QueuedLLM + +QueuedLLM exists to make up for LLM servers that cannot handle multiple requests at once. It uses a lock to ensure that only one request is being processed at a time. + +If you are already using another LLM class and are experiencing this problem, you can just wrap it with the QueuedLLM class like this: + +```python +from continuedev.src.continuedev.libs.llm.queued import QueuedLLM + +config = ContinueConfig( + ... + models=Models( + default=QueuedLLM(llm=<OTHER_LLM_CLASS>) + ) +) +``` + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/queued.py) + +## Properties + + + + +### Inherited Properties + + + + + + + + + + + + + diff --git a/docs/docs/reference/Models/replicate.md b/docs/docs/reference/Models/replicate.md index 83bfd383..879459e0 100644 --- a/docs/docs/reference/Models/replicate.md +++ b/docs/docs/reference/Models/replicate.md @@ -38,4 +38,5 @@ If you don't specify the `model` parameter, it will default to `replicate/llama- - + + diff --git a/docs/docs/reference/Models/replicatellm.md b/docs/docs/reference/Models/replicatellm.md new file mode 100644 index 00000000..879459e0 --- /dev/null +++ b/docs/docs/reference/Models/replicatellm.md @@ -0,0 +1,42 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# ReplicateLLM + +Replicate is a great option for newly released language models or models that you've deployed through their platform. Sign up for an account [here](https://replicate.ai/), copy your API key, and then select any model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models). Change `~/.continue/config.py` to look like this: + +```python +from continuedev.src.continuedev.core.models import Models +from continuedev.src.continuedev.libs.llm.replicate import ReplicateLLM + +config = ContinueConfig( + ...
+ models=Models( + default=ReplicateLLM( + model="replicate/codellama-13b-instruct:da5676342de1a5a335b848383af297f592b816b950a43d251a0a9edd0113604b", + api_key="my-replicate-api-key") + ) +) +``` + +If you don't specify the `model` parameter, it will default to `replicate/llama-2-70b-chat:58d078176e02c219e11eb4da5a02a7830a283b14cf8f94537af893ccff5ee781`. + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/replicate.py) + +## Properties + + + +### Inherited Properties + + + + + + + + + + + + + diff --git a/docs/docs/reference/Models/text_gen_interface.md b/docs/docs/reference/Models/text_gen_interface.md index d910bee2..bb8dce1d 100644 --- a/docs/docs/reference/Models/text_gen_interface.md +++ b/docs/docs/reference/Models/text_gen_interface.md @@ -36,5 +36,6 @@ config = ContinueConfig( - + + diff --git a/docs/docs/reference/Models/textgenui.md b/docs/docs/reference/Models/textgenui.md new file mode 100644 index 00000000..bb8dce1d --- /dev/null +++ b/docs/docs/reference/Models/textgenui.md @@ -0,0 +1,41 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# TextGenUI + +TextGenUI is a comprehensive, open-source language model UI and local server. You can set it up with an OpenAI-compatible server plugin, but if for some reason that doesn't work, you can use this class like so: + +```python +from continuedev.src.continuedev.libs.llm.text_gen_interface import TextGenUI + +config = ContinueConfig( + ... + models=Models( + default=TextGenUI( + model="", + ) + ) +) +``` + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/text_gen_interface.py) + +## Properties + + + + + +### Inherited Properties + + + + + + + + + + + + + diff --git a/docs/docs/reference/Models/together.md b/docs/docs/reference/Models/together.md index 6838ba36..3718f046 100644 --- a/docs/docs/reference/Models/together.md +++ b/docs/docs/reference/Models/together.md @@ -38,4 +38,5 @@ config = ContinueConfig( - + + diff --git a/docs/docs/reference/Models/togetherllm.md b/docs/docs/reference/Models/togetherllm.md new file mode 100644 index 00000000..3718f046 --- /dev/null +++ b/docs/docs/reference/Models/togetherllm.md @@ -0,0 +1,42 @@ +import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx'; + +# TogetherLLM + +The Together API is a cloud platform for running large AI models. You can sign up [here](https://api.together.xyz/signup), copy your API key on the initial welcome screen, and then hit the play button on any model from the [Together Models list](https://docs.together.ai/docs/models-inference). Change `~/.continue/config.py` to look like this: + +```python +from continuedev.src.continuedev.core.models import Models +from continuedev.src.continuedev.libs.llm.together import TogetherLLM + +config = ContinueConfig( + ... + models=Models( + default=TogetherLLM( + api_key="", + model="togethercomputer/llama-2-13b-chat" + ) + ) +) +``` + +[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/together.py) + +## Properties + + + + +### Inherited Properties + + + + + + + + + + + + + diff --git a/docs/docs/reference/config.md b/docs/docs/reference/config.md index f867ee1e..1f683ed2 100644 --- a/docs/docs/reference/config.md +++ b/docs/docs/reference/config.md @@ -11,7 +11,7 @@ Continue can be deeply customized by editing the `ContinueConfig` object in `~/. 
- + @@ -23,6 +23,4 @@ Continue can be deeply customized by editing the `ContinueConfig` object in `~/. - ### Inherited Properties - diff --git a/docs/docs/walkthroughs/create-a-recipe.md b/docs/docs/walkthroughs/create-a-recipe.md index 3ec641c6..0d92fb92 100644 --- a/docs/docs/walkthroughs/create-a-recipe.md +++ b/docs/docs/walkthroughs/create-a-recipe.md @@ -31,7 +31,7 @@ If you'd like to override the default description of your steps, which is just t - Return a static string - Store state in a class attribute (prepend with a double underscore, which signifies (through Pydantic) that this is not a parameter for the Step, just internal state) during the run method, and then grab this in the describe method. -- Use state in conjunction with the `models` parameter of the describe method to autogenerate a description with a language model. For example, if you'd used an attribute called `__code_written` to store a string representing some code that was written, you could implement describe as `return models.medium.complete(f"{self.\_\_code_written}\n\nSummarize the changes made in the above code.")`. +- Use state in conjunction with the `models` parameter of the describe method to autogenerate a description with a language model. For example, if you'd used an attribute called `__code_written` to store a string representing some code that was written, you could implement describe as `return models.summarize.complete(f"{self.\_\_code_written}\n\nSummarize the changes made in the above code.")`. ## 2. Compose steps together into a complete recipe diff --git a/docs/static/img/keyboard-shortcuts.png b/docs/static/img/keyboard-shortcuts.png deleted file mode 100644 index a9b75fc5..00000000 Binary files a/docs/static/img/keyboard-shortcuts.png and /dev/null differ -- cgit v1.2.3-70-g09d2