author    Nate Sesti <sestinj@gmail.com>  2023-10-04 10:08:56 -0700
committer Nate Sesti <sestinj@gmail.com>  2023-10-04 10:08:56 -0700
commit    8093c9d10db9d3084057f4f5ea0278b9b72f5193
tree      efd60753c7d32bf1deffa5190803ec6af5797580 /docs
parent    e19c918bb1c517a6a119ae8437c46e0724d2be9d
docs: :memo: title for codeblocks in docs
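
Context for the change (not part of the commit message): Docusaurus supports a `title` metastring on fenced code blocks and renders it as a filename header above the block. Each hunk below rewrites a bare `` ```python `` fence so that it carries the path of the Continue config file, letting readers of the rendered docs see which file every snippet belongs in. A minimal sketch of the markdown pattern, using the Ollama snippet as a representative example:

````md
<!-- Before: an untitled code block -->
```python
from continuedev.src.continuedev.libs.llm.ollama import Ollama
```

<!-- After: the same block, now rendered with "~/.continue/config.py" as its header -->
```python title="~/.continue/config.py"
from continuedev.src.continuedev.libs.llm.ollama import Ollama
```
````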
Diffstat (limited to 'docs')
-rw-r--r--  docs/docs/reference/Context Providers/urlcontextprovider.md | 1 +
-rw-r--r--  docs/docs/reference/Models/anthropicllm.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/ggml.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/googlepalmapi.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/huggingfaceinferenceapi.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/llamacpp.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/ollama.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/openai.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/openaifreetrial.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/queuedllm.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/replicatellm.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/textgenui.md | 2 +-
-rw-r--r--  docs/docs/reference/Models/togetherllm.md | 2 +-
13 files changed, 13 insertions, 12 deletions
diff --git a/docs/docs/reference/Context Providers/urlcontextprovider.md b/docs/docs/reference/Context Providers/urlcontextprovider.md
index 38ddc0e5..fd4a7c4a 100644
--- a/docs/docs/reference/Context Providers/urlcontextprovider.md
+++ b/docs/docs/reference/Context Providers/urlcontextprovider.md
@@ -9,6 +9,7 @@ Type '@url' to reference the contents of a URL. You can either reference preset
 
 ## Properties
 
 <ClassPropertyRef name='preset_urls' details='{"title": "Preset Urls", "description": "A list of preset URLs that you will be able to quickly reference by typing '@url'", "default": [], "type": "array", "items": {"type": "string"}}' required={false} default="[]"/>
+<ClassPropertyRef name='static_url_context_items' details='{"title": "Static Url Context Items", "default": [], "type": "array", "items": {"$ref": "#/definitions/ContextItem"}}' required={false} default="[]"/>
 
 ### Inherited Properties
diff --git a/docs/docs/reference/Models/anthropicllm.md b/docs/docs/reference/Models/anthropicllm.md
index 1ff17ce7..b35761f0 100644
--- a/docs/docs/reference/Models/anthropicllm.md
+++ b/docs/docs/reference/Models/anthropicllm.md
@@ -4,7 +4,7 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
 
 Import the `AnthropicLLM` class and set it as the default model:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.libs.llm.anthropic import AnthropicLLM
 
 config = ContinueConfig(
diff --git a/docs/docs/reference/Models/ggml.md b/docs/docs/reference/Models/ggml.md
index aa2af17f..7fa2a3fc 100644
--- a/docs/docs/reference/Models/ggml.md
+++ b/docs/docs/reference/Models/ggml.md
@@ -6,7 +6,7 @@ See our [5 minute quickstart](https://github.com/continuedev/ggml-server-example
 
 Once the model is running on localhost:8000, change `~/.continue/config.py` to look like this:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.libs.llm.ggml import GGML
 
 config = ContinueConfig(
diff --git a/docs/docs/reference/Models/googlepalmapi.md b/docs/docs/reference/Models/googlepalmapi.md
index 74bec3f3..4823dbd1 100644
--- a/docs/docs/reference/Models/googlepalmapi.md
+++ b/docs/docs/reference/Models/googlepalmapi.md
@@ -4,7 +4,7 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
 
 The Google PaLM API is currently in public preview, so production applications are not supported yet. However, you can [create an API key in Google MakerSuite](https://makersuite.google.com/u/2/app/apikey) and begin trying out the `chat-bison-001` model. Change `~/.continue/config.py` to look like this:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.core.models import Models
 from continuedev.src.continuedev.libs.llm.hf_inference_api import GooglePaLMAPI
 
diff --git a/docs/docs/reference/Models/huggingfaceinferenceapi.md b/docs/docs/reference/Models/huggingfaceinferenceapi.md
index ca85522c..9dbf23ed 100644
--- a/docs/docs/reference/Models/huggingfaceinferenceapi.md
+++ b/docs/docs/reference/Models/huggingfaceinferenceapi.md
@@ -4,7 +4,7 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
 
 Hugging Face Inference API is a great option for newly released language models. Sign up for an account and add billing [here](https://huggingface.co/settings/billing), access the Inference Endpoints [here](https://ui.endpoints.huggingface.co), click on “New endpoint”, and fill out the form (e.g. select a model like [WizardCoder-Python-34B-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0)), and then deploy your model by clicking “Create Endpoint”. Change `~/.continue/config.py` to look like this:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.core.models import Models
 from continuedev.src.continuedev.libs.llm.hf_inference_api import HuggingFaceInferenceAPI
 
diff --git a/docs/docs/reference/Models/llamacpp.md b/docs/docs/reference/Models/llamacpp.md
index 69b528bd..362914f8 100644
--- a/docs/docs/reference/Models/llamacpp.md
+++ b/docs/docs/reference/Models/llamacpp.md
@@ -10,7 +10,7 @@ Run the llama.cpp server binary to start the API server. If running on a remote
 
 After it's up and running, change `~/.continue/config.py` to look like this:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.libs.llm.llamacpp import LlamaCpp
 
 config = ContinueConfig(
diff --git a/docs/docs/reference/Models/ollama.md b/docs/docs/reference/Models/ollama.md
index 2a5fcff7..64a326b7 100644
--- a/docs/docs/reference/Models/ollama.md
+++ b/docs/docs/reference/Models/ollama.md
@@ -4,7 +4,7 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
 
 [Ollama](https://ollama.ai/) is an application for Mac and Linux that makes it easy to locally run open-source models, including Llama-2. Download the app from the website, and it will walk you through setup in a couple of minutes. You can also read more in their [README](https://github.com/jmorganca/ollama). Continue can then be configured to use the `Ollama` LLM class:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.libs.llm.ollama import Ollama
 
 config = ContinueConfig(
diff --git a/docs/docs/reference/Models/openai.md b/docs/docs/reference/Models/openai.md
index 5287e61d..039c1bf7 100644
--- a/docs/docs/reference/Models/openai.md
+++ b/docs/docs/reference/Models/openai.md
@@ -6,7 +6,7 @@ The OpenAI class can be used to access OpenAI models like gpt-4 and gpt-3.5-turb
 
 If you are locally serving a model that uses an OpenAI-compatible server, you can simply change the `api_base` in the `OpenAI` class like this:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.libs.llm.openai import OpenAI
 
 config = ContinueConfig(
diff --git a/docs/docs/reference/Models/openaifreetrial.md b/docs/docs/reference/Models/openaifreetrial.md
index 5175273b..8ebe92a7 100644
--- a/docs/docs/reference/Models/openaifreetrial.md
+++ b/docs/docs/reference/Models/openaifreetrial.md
@@ -10,7 +10,7 @@ Once you are using Continue regularly though, you will need to add an OpenAI API
 2. Open `~/.continue/config.py`. You can do this by using the '/config' command in Continue
 3. Change the default LLMs to look like this:
 
-```python
+```python title="~/.continue/config.py"
 API_KEY = "<API_KEY>"
 config = ContinueConfig(
     ...
diff --git a/docs/docs/reference/Models/queuedllm.md b/docs/docs/reference/Models/queuedllm.md
index edb980ab..c9a0b4b1 100644
--- a/docs/docs/reference/Models/queuedllm.md
+++ b/docs/docs/reference/Models/queuedllm.md
@@ -6,7 +6,7 @@ QueuedLLM exists to make up for LLM servers that cannot handle multiple requests
 
 If you are already using another LLM class and are experiencing this problem, you can just wrap it with the QueuedLLM class like this:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.libs.llm.queued import QueuedLLM
 
 config = ContinueConfig(
diff --git a/docs/docs/reference/Models/replicatellm.md b/docs/docs/reference/Models/replicatellm.md
index 5a474f71..0dc5f838 100644
--- a/docs/docs/reference/Models/replicatellm.md
+++ b/docs/docs/reference/Models/replicatellm.md
@@ -4,7 +4,7 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
 
 Replicate is a great option for newly released language models or models that you've deployed through their platform. Sign up for an account [here](https://replicate.ai/), copy your API key, and then select any model from the [Replicate Streaming List](https://replicate.com/collections/streaming-language-models). Change `~/.continue/config.py` to look like this:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.core.models import Models
 from continuedev.src.continuedev.libs.llm.replicate import ReplicateLLM
 
diff --git a/docs/docs/reference/Models/textgenui.md b/docs/docs/reference/Models/textgenui.md
index daede8eb..e0d757e4 100644
--- a/docs/docs/reference/Models/textgenui.md
+++ b/docs/docs/reference/Models/textgenui.md
@@ -4,7 +4,7 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
 
 TextGenUI is a comprehensive, open-source language model UI and local server. You can set it up with an OpenAI-compatible server plugin, but if for some reason that doesn't work, you can use this class like so:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.libs.llm.text_gen_interface import TextGenUI
 
 config = ContinueConfig(
diff --git a/docs/docs/reference/Models/togetherllm.md b/docs/docs/reference/Models/togetherllm.md
index 6ddde9dd..e0dc35de 100644
--- a/docs/docs/reference/Models/togetherllm.md
+++ b/docs/docs/reference/Models/togetherllm.md
@@ -4,7 +4,7 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
 
 The Together API is a cloud platform for running large AI models. You can sign up [here](https://api.together.xyz/signup), copy your API key on the initial welcome screen, and then hit the play button on any model from the [Together Models list](https://docs.together.ai/docs/models-inference). Change `~/.continue/config.py` to look like this:
 
-```python
+```python title="~/.continue/config.py"
 from continuedev.src.continuedev.core.models import Models
 from continuedev.src.continuedev.libs.llm.together import TogetherLLM
 
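
One note on reading the hunks: each snippet breaks off at `config = ContinueConfig(` only because unified diffs show three lines of context. For orientation, a fuller config of the shape these docs describe might look like the sketch below. The `ContinueConfig` import path, the `models=Models(default=...)` structure, and the `"llama2"` model name are assumptions drawn from Continue's docs of this era, not contents of this commit.

```python title="~/.continue/config.py"
# Sketch only: a complete config of the kind the truncated snippets imply.
# The ContinueConfig/Models structure and the model name are assumptions,
# not part of the commit above.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.ollama import Ollama

config = ContinueConfig(
    models=Models(
        default=Ollama(model="llama2")  # any model served by Ollama works here
    )
)
```

In the published docs, the `title` metastring added by this commit would render the config-file path as a header above a block like this one.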