author     Nate Sesti <sestinj@gmail.com>    2023-10-02 12:27:12 -0700
committer  Nate Sesti <sestinj@gmail.com>    2023-10-02 12:27:12 -0700
commit     b33a87272ec5a755082fcf33b217133155ea9f20 (patch)
tree       f403f6b4c950b93d997f42d4cea641717f66ab23
parent     a3a05fee312ad7c04d2abb0e186da55c7d061462 (diff)
parent     d59e2168ae54020080fce52b02bb257f3a7de27a (diff)
Merge branch 'main' of https://github.com/continuedev/continue
 continuedev/src/continuedev/models/reference/test.py |  1 +
 docs/docs/customization/models.md                    |  1 +
 docs/docs/reference/Models/googlepalmapi.md          | 41 ++++++++++++++++++++
 3 files changed, 43 insertions(+), 0 deletions(-)
diff --git a/continuedev/src/continuedev/models/reference/test.py b/continuedev/src/continuedev/models/reference/test.py
index 87f01ede..0ab9ba85 100644
--- a/continuedev/src/continuedev/models/reference/test.py
+++ b/continuedev/src/continuedev/models/reference/test.py
@@ -15,6 +15,7 @@ LLM_MODULES = [
     ("hf_inference_api", "HuggingFaceInferenceAPI"),
     ("hf_tgi", "HuggingFaceTGI"),
     ("openai_free_trial", "OpenAIFreeTrial"),
+    ("google_palm_api", "GooglePaLMAPI"),
     ("queued", "QueuedLLM"),
 ]
 
diff --git a/docs/docs/customization/models.md b/docs/docs/customization/models.md
index 7c5caee7..29f1ac91 100644
--- a/docs/docs/customization/models.md
+++ b/docs/docs/customization/models.md
@@ -7,6 +7,7 @@ Commercial Models
 - [OpenAIFreeTrial](../reference/Models/openaifreetrial.md) (default) - Use gpt-4 or gpt-3.5-turbo free with our API key, or with your API key. gpt-4 is probably the most capable model of all options.
 - [OpenAI](../reference/Models/openai.md) - Use any OpenAI model with your own key. Can also change the base URL if you have a server that uses the OpenAI API format, including using the Azure OpenAI service, LocalAI, etc.
 - [AnthropicLLM](../reference/Models/anthropicllm.md) - Use claude-2 with your Anthropic API key. Claude 2 is also highly capable, and has a 100,000 token context window.
+- [GooglePaLMAPI](../reference/Models/googlepalmapi.md) - Try out the `chat-bison-001` model, which is currently in public preview, after creating an API key in [Google MakerSuite](https://makersuite.google.com/u/2/app/apikey)
 
 Local Models
 
diff --git a/docs/docs/reference/Models/googlepalmapi.md b/docs/docs/reference/Models/googlepalmapi.md
new file mode 100644
index 00000000..74bec3f3
--- /dev/null
+++ b/docs/docs/reference/Models/googlepalmapi.md
@@ -0,0 +1,41 @@
+import ClassPropertyRef from '@site/src/components/ClassPropertyRef.tsx';
+
+# GooglePaLMAPI
+
+The Google PaLM API is currently in public preview, so production applications are not supported yet. However, you can [create an API key in Google MakerSuite](https://makersuite.google.com/u/2/app/apikey) and begin trying out the `chat-bison-001` model. Change `~/.continue/config.py` to look like this:
+
+```python
+from continuedev.src.continuedev.core.config import ContinueConfig
+from continuedev.src.continuedev.core.models import Models
+from continuedev.src.continuedev.libs.llm.google_palm_api import GooglePaLMAPI
+
+config = ContinueConfig(
+    ...
+    models=Models(
+        default=GooglePaLMAPI(
+            model="chat-bison-001",
+            api_key="<MAKERSUITE_API_KEY>",
+        )
+    )
+)
+```
+
+[View the source](https://github.com/continuedev/continue/tree/main/continuedev/src/continuedev/libs/llm/google_palm_api.py)
+
+## Properties
+
+
+### Inherited Properties
+
+<ClassPropertyRef name='api_key' details='{"title": "Api Key", "description": "Google PaLM API key", "type": "string"}' required={true} default=""/>
+<ClassPropertyRef name='title' details='{"title": "Title", "description": "A title that will identify this model in the model selection dropdown", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='system_message' details='{"title": "System Message", "description": "A system message that will always be followed by the LLM", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='context_length' details='{"title": "Context Length", "description": "The maximum context length of the LLM in tokens, as counted by count_tokens.", "default": 2048, "type": "integer"}' required={false} default="2048"/>
+<ClassPropertyRef name='unique_id' details='{"title": "Unique Id", "description": "The unique ID of the user.", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='model' details='{"title": "Model", "description": "The name of the model to be used (e.g. gpt-4, codellama)", "default": "chat-bison-001", "type": "string"}' required={false} default="chat-bison-001"/>
+<ClassPropertyRef name='max_tokens' details='{"title": "Max Tokens", "description": "The maximum number of tokens to generate.", "default": 1024, "type": "integer"}' required={false} default="1024"/>
+<ClassPropertyRef name='stop_tokens' details='{"title": "Stop Tokens", "description": "Tokens that will stop the completion.", "type": "array", "items": {"type": "string"}}' required={false} default=""/>
+<ClassPropertyRef name='timeout' details='{"title": "Timeout", "description": "Set the timeout for each request to the LLM. If you are running a local LLM that takes a while to respond, you might want to set this to avoid timeouts.", "default": 300, "type": "integer"}' required={false} default="300"/>
+<ClassPropertyRef name='verify_ssl' details='{"title": "Verify Ssl", "description": "Whether to verify SSL certificates for requests.", "type": "boolean"}' required={false} default=""/>
+<ClassPropertyRef name='ca_bundle_path' details='{"title": "Ca Bundle Path", "description": "Path to a custom CA bundle to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='proxy' details='{"title": "Proxy", "description": "Proxy URL to use when making the HTTP request", "type": "string"}' required={false} default=""/>
+<ClassPropertyRef name='prompt_templates' details='{"title": "Prompt Templates", "description": "A dictionary of prompt templates that can be used to customize the behavior of the LLM in certain situations. For example, set the \"edit\" key in order to change the prompt that is used for the /edit slash command. Each value in the dictionary is a string templated in mustache syntax, and filled in at runtime with the variables specific to the situation. See the documentation for more information.", "default": {}, "type": "object"}' required={false} default="{}"/>
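For readers adapting the new reference page, the inherited properties documented above map one-to-one onto keyword arguments of the class. A minimal sketch of a fuller `~/.continue/config.py`, assuming the same import paths as the commit; only `api_key` is required, and the `title`, `context_length`, `max_tokens`, and `timeout` values below are illustrative, taken from the defaults in the property table rather than from the commit itself:

```python
# Hypothetical ~/.continue/config.py sketch. Property names come from the
# inherited-properties table above; concrete values are illustrative only.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.google_palm_api import GooglePaLMAPI

config = ContinueConfig(
    models=Models(
        default=GooglePaLMAPI(
            api_key="<MAKERSUITE_API_KEY>",   # required: Google PaLM API key
            model="chat-bison-001",           # default model name
            title="PaLM (chat-bison-001)",    # label shown in the model dropdown
            context_length=2048,              # documented default, in tokens
            max_tokens=1024,                  # documented generation cap
            timeout=300,                      # per-request timeout in seconds
        )
    )
)
```

Everything except `api_key` can be omitted and will fall back to the defaults listed in the property table.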