Repository: sncontinue.git (branch: cleanup)
path: /continuedev/src/continuedev/libs/llm
Age        | Commit message                                                      | Author
2023-09-02 | fix: :bug: fix timeout type                                         | Nate Sesti
2023-09-02 | feat: :sparkles: set session timeout on GGML requests               | Nate Sesti
2023-09-02 | fix: :bug: llamacpp fix indexing max_tokens                         | Nate Sesti
2023-09-02 | fix: :loud_sound: better logging for ggml completion endpoint       | Nate Sesti
2023-09-02 | fix: :bug: number of bug fixes                                      | Nate Sesti
2023-09-01 | refactor: :recycle: refactoring LLM to avoid repetition             | Nate Sesti
2023-09-01 | feat: :sparkles: improved model dropdown                            | Nate Sesti
2023-09-01 | feat: :sparkles: select custom model to use with edit step          | Nate Sesti
2023-08-31 | feat: :sparkles: change proxy url for openai class                  | Nate Sesti
2023-08-30 | don't url decode ollama                                             | Nate Sesti
2023-08-30 | fix: :art: many small improvements                                  | Nate Sesti
2023-08-29 | feat: :sparkles: huggingface tgi LLM class                          | Nate Sesti
2023-08-28 | feat: :sparkles: text-gen-webui, cleaning config and welcome        | Nate Sesti
2023-08-28 | fix: :bug: fix telemetry bug                                        | Nate Sesti
2023-08-27 | fix: :bug: streaming url_decode for Ollama                          | Nate Sesti
2023-08-27 | refactor: :zap: use code llama / llama2 prompt for TogetherLLM      | Nate Sesti
2023-08-27 | fix: :bug: patch for ocassional 0 choices from older azure versions | Nate Sesti
2023-08-27 | fix: :bug: fix togetherAI model json parsing                        | Nate Sesti
2023-08-27 | feat: :art: custom prompt templates per model                       | Nate Sesti
2023-08-27 | fix: :bug: urldecode ollama responses, make edit faster             | Nate Sesti
2023-08-27 | feat: :sparkles: LlamaCpp LLM subclass                              | Nate Sesti
2023-08-26 | fix: :bug: fix ssh /edit by checking for file through vscode fs     | Nate Sesti
2023-08-26 | feat: :sparkles: select model from dropdown                         | Nate Sesti
2023-08-25 | small fixes, mostly ggml updates                                    | Nate Sesti
2023-08-22 | Config UI (#399)                                                    | Nate Sesti
2023-08-21 | fix: :bug: fixing bugs with ggml                                    | Nate Sesti
2023-08-20 | fix: :bug: fix replicate to work with models requiring prompt input | Nate Sesti
2023-08-19 | fix: :bug: fix ggml bug                                             | Nate Sesti
2023-08-18 | style: :art: autoformat with black on all python files              | Nate Sesti
2023-08-16 | fix together.py for llama-70b                                       | Nate Sesti
2023-08-14 | fix: :bug: MAX_TOKENS_FOR_MODEL bug fix, more testing               | Nate Sesti
2023-08-13 | fix: :ambulance: hotfix and package.json seo experiment             | Nate Sesti
2023-08-13 | fix: :bug: fix for Azure OpenAI model names                         | Nate Sesti
2023-08-09 | feat: :sparkles: testing in ci, final test of                       | Nate Sesti
2023-08-09 | feat: :sparkles: support for Together.ai models                     | Nate Sesti
2023-08-08 | feat: :sparkles: testing improved prompting for stablecode          | Nate Sesti
2023-08-08 | feat: :sparkles: support stablecoder with replicate LLM             | Nate Sesti
2023-08-08 | feat: :children_crossing: display troubleshooting link when loading | Nate Sesti
2023-08-08 | feat: :sparkles: huggingface inference api llm update               | Nate Sesti
2023-08-08 | fix: :bug: use certifi to set ca_bundle_path for openai             | Nate Sesti
2023-08-05 | fix: :bug: set api_keys in config.py, fix spawn error handling      | Nate Sesti
2023-08-04 | fix: :bug: ebusy and logging bug fixes                              | Nate Sesti
2023-08-03 | update default config comment, verify_ssl                           | Nate Sesti
2023-08-02 | Merge branch 'main' into package-python                             | Nate Sesti
2023-08-02 | checkpoint                                                          | Nate Sesti
2023-08-02 | attempting alternative solution to import config                    | Nate Sesti
2023-08-02 | anthropic fixes                                                     | Nate Sesti
2023-07-31 | testing nuitka, pyoxidizer, pyinstaller                             | Nate Sesti
2023-07-31 | feat: :sparkles: llama-2 support                                    | Nate Sesti
2023-07-31 | Merge branch 'main' into ollama                                     | Nate Sesti