author    Nate Sesti <sestinj@gmail.com>    2023-08-24 09:51:53 -0700
committer Nate Sesti <sestinj@gmail.com>    2023-08-24 09:51:53 -0700
commit    6c67b381724515306ce85bdf70ac8d854597eb54 (patch)
tree      9b71e66de94b0a47cb5da930294ac6b0e1e85d68
parent    88a8166476d38889fd4f9323472cc34a5226e05c (diff)
codellama docs
-rw-r--r--    docs/docs/walkthroughs/codellama.md    48
1 file changed, 48 insertions(+), 0 deletions(-)
diff --git a/docs/docs/walkthroughs/codellama.md b/docs/docs/walkthroughs/codellama.md
new file mode 100644
index 00000000..19cfa226
--- /dev/null
+++ b/docs/docs/walkthroughs/codellama.md
@@ -0,0 +1,48 @@
+# Using Code Llama with Continue
+
+With Continue, you can use Code Llama as a drop-in replacement for GPT-4, either locally with Ollama or GGML, or through Replicate.
+
+For more general information on customizing Continue, read [our customization docs](../customization.md).
+
+## Ollama
+
+1. Download Ollama [here](https://ollama.ai/) (it should walk you through the rest of these steps)
+2. Open a terminal and run `ollama pull codellama:7b-q4_0`\*
+3. Run `ollama run codellama:7b-q4_0`
+4. Change your Continue config file to look like this:
+
+```python
+from continuedev.src.continuedev.libs.llm.ollama import Ollama
+
+config = ContinueConfig(
+    ...
+    models=Models(
+        default=Ollama(model="codellama:7b-q4_0")
+    )
+)
+```
+
+5. Reload the VS Code window for changes to take effect
+
+\*Only the 7b model is available right now. The others will be ready later today or tomorrow.
+
+## Replicate
+
+1. Get your Replicate API key [here](https://replicate.ai/)
+2. Change your Continue config file to look like this:
+
+```python
+from continuedev.src.continuedev.core.models import Models
+from continuedev.src.continuedev.libs.llm.replicate import ReplicateLLM
+
+config = ContinueConfig(
+    ...
+    models=Models(
+        default=ReplicateLLM(
+            model="<CODE_LLAMA_MODEL_ID>",
+            api_key="<MY_REPLICATE_API_KEY>")
+    )
+)
+```
+
+3. Reload the VS Code window for changes to take effect
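For reference, the snippets in the walkthrough above are fragments of a single Continue config file: the Ollama example elides its imports (`ContinueConfig` and `Models` appear without them, and the `...` stands for the rest of the user's existing config). A complete file might look like the sketch below; the `Ollama` and `Models` import paths are taken from the diff, while the `ContinueConfig` import location is an assumption not shown in the walkthrough.

```python
# Sketch of a complete Continue config.py, assembled from the fragments in
# the diff above. The ContinueConfig import path is assumed, not confirmed
# by the walkthrough; other ContinueConfig parameters ("..." in the diff)
# are omitted here.
from continuedev.src.continuedev.core.config import ContinueConfig  # assumed path
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.ollama import Ollama

config = ContinueConfig(
    models=Models(
        # Code Llama served locally by Ollama; start it first with
        # `ollama run codellama:7b-q4_0`
        default=Ollama(model="codellama:7b-q4_0")
    )
)
```

The Replicate variant differs only in the `default=` argument, swapping `Ollama(...)` for `ReplicateLLM(model=..., api_key=...)` as shown in the diff.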