Diffstat (limited to 'docs')
-rw-r--r--  docs/docs/customization/models.md     | 2 +-
-rw-r--r--  docs/docs/reference/Models/ollama.md  | 3 +--
2 files changed, 2 insertions, 3 deletions
diff --git a/docs/docs/customization/models.md b/docs/docs/customization/models.md
index cebb0667..ba3f6424 100644
--- a/docs/docs/customization/models.md
+++ b/docs/docs/customization/models.md
@@ -10,7 +10,7 @@ Commercial Models
Local Models
-- [Ollama](../reference/Models/ollama.md) - If you have a Mac, Ollama is the simplest way to run open-source models like Code Llama.
+- [Ollama](../reference/Models/ollama.md) - If you are on Mac or Linux, Ollama is the simplest way to run open-source models like Code Llama.
- [OpenAI](../reference/Models/openai.md) - If you have access to an OpenAI-compatible server (e.g. llama-cpp-python, LocalAI, FastChat, TextGenWebUI, etc.), you can use the `OpenAI` class and just change the base URL.
- [GGML](../reference/Models/ggml.md) - An alternative way to connect to OpenAI-compatible servers. Will use `aiohttp` directly instead of the `openai` Python package.
- [LlamaCpp](../reference/Models/llamacpp.md) - Build llama.cpp from source and use its built-in API server.
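
As an aside for readers of this diff, here is a minimal sketch of the OpenAI-compatible-server route the `GGML` bullet above describes. The import path and the `server_url` keyword are assumptions modeled on the `Ollama` example visible later in this same diff; neither is confirmed by this commit, so check the GGML reference page for the actual API.

```python
# Hypothetical sketch: point Continue at any OpenAI-compatible server
# (llama-cpp-python, LocalAI, FastChat, TextGenWebUI, ...).
# The GGML import path and `server_url` keyword are assumptions
# mirroring the Ollama example in this diff.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.ggml import GGML

config = ContinueConfig(
    models=Models(
        # Assumed local address; adjust to wherever your
        # OpenAI-compatible server is actually listening.
        default=GGML(server_url="http://localhost:8000")
    )
)
```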
diff --git a/docs/docs/reference/Models/ollama.md b/docs/docs/reference/Models/ollama.md
index 39257395..6388e8cc 100644
--- a/docs/docs/reference/Models/ollama.md
+++ b/docs/docs/reference/Models/ollama.md
@@ -2,7 +2,7 @@ import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
# Ollama
-[Ollama](https://ollama.ai/) is a Mac application that makes it easy to locally run open-source models, including Llama-2. Download the app from the website, and it will walk you through setup in a couple of minutes. You can also read more in their [README](https://github.com/jmorganca/ollama). Continue can then be configured to use the `Ollama` LLM class:
+[Ollama](https://ollama.ai/) is an application for Mac and Linux that makes it easy to locally run open-source models, including Llama-2. Download the app from the website, and it will walk you through setup in a couple of minutes. You can also read more in their [README](https://github.com/jmorganca/ollama). Continue can then be configured to use the `Ollama` LLM class:
```python
from continuedev.src.continuedev.libs.llm.ollama import Ollama
@@ -21,7 +21,6 @@ config = ContinueConfig(
<ClassPropertyRef name='server_url' details='{&quot;title&quot;: &quot;Server Url&quot;, &quot;description&quot;: &quot;URL of the Ollama server&quot;, &quot;default&quot;: &quot;http://localhost:11434&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default="http://localhost:11434"/>
-
### Inherited Properties
<ClassPropertyRef name='title' details='{&quot;title&quot;: &quot;Title&quot;, &quot;description&quot;: &quot;A title that will identify this model in the model selection dropdown&quot;, &quot;type&quot;: &quot;string&quot;}' required={false} default=""/>
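
Since the hunk above truncates the `ContinueConfig` snippet, here is a minimal sketch of the complete wiring. The `Ollama` import is taken verbatim from the diff, and the `server_url` default of `http://localhost:11434` plus the inherited `title` property come from the `ClassPropertyRef` entries above; the `Models` wrapper and the `model="llama2"` argument are assumptions, not confirmed by this commit.

```python
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models  # assumed import path
from continuedev.src.continuedev.libs.llm.ollama import Ollama  # verbatim from the diff

config = ContinueConfig(
    models=Models(
        default=Ollama(
            # Inherited `title` property documented above; identifies
            # this model in the model-selection dropdown.
            title="Ollama",
            # `model="llama2"` is an assumption; `server_url` is omitted
            # so the documented default http://localhost:11434 applies.
            model="llama2",
        )
    )
)
```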