sncontinue.git (branch: cleanup)
Commit log for path: /continuedev/src/continuedev/libs
Age        | Commit message                                                                  | Author
2023-09-02 | fix: :bug: fix timeout type                                                     | Nate Sesti
2023-09-02 | feat: :sparkles: set session timeout on GGML requests                           | Nate Sesti
2023-09-02 | fix: :bug: llamacpp fix indexing max_tokens                                     | Nate Sesti
2023-09-02 | fix: :loud_sound: better logging for ggml completion endpoint                   | Nate Sesti
2023-09-02 | fix: :bug: number of bug fixes                                                  | Nate Sesti
2023-09-01 | refactor: :recycle: refactoring LLM to avoid repetition                         | Nate Sesti
2023-09-01 | feat: :sparkles: improved model dropdown                                        | Nate Sesti
2023-09-01 | Windows meilisearch (#441)                                                      | Nate Sesti
2023-08-31 | feat: :sparkles: change proxy url for openai class                              | Nate Sesti
2023-08-30 | don't url decode ollama                                                         | Nate Sesti
2023-08-30 | fix: :art: many small improvements                                              | Nate Sesti
2023-08-29 | feat: :sparkles: huggingface tgi LLM class                                      | Nate Sesti
2023-08-29 | fix: :bug: fix 2 model config bugs                                              | Nate Sesti
2023-08-29 | feat: :mute: complete removal of telemetry when allow_anonymous_telemetry false | Nate Sesti
2023-08-28 | feat: :sparkles: @terminal context provider                                     | Nate Sesti
2023-08-28 | feat: :sparkles: text-gen-webui, cleaning config and welcome                    | Nate Sesti
2023-08-28 | fix: :bug: fix telemetry bug                                                    | Nate Sesti
2023-08-27 | fix: :bug: streaming url_decode for Ollama                                      | Nate Sesti
2023-08-27 | refactor: :zap: use code llama / llama2 prompt for TogetherLLM                  | Nate Sesti
2023-08-27 | fix: :bug: patch for ocassional 0 choices from older azure versions             | Nate Sesti
2023-08-27 | fix: :bug: fix togetherAI model json parsing                                    | Nate Sesti
2023-08-27 | feat: :art: custom prompt templates per model                                   | Nate Sesti
2023-08-27 | fix: :bug: default to counting chars if tiktoken blocked                        | Nate Sesti
2023-08-27 | fix: :bug: urldecode ollama responses, make edit faster                         | Nate Sesti
2023-08-27 | feat: :sparkles: LlamaCpp LLM subclass                                          | Nate Sesti
2023-08-26 | fix: :bug: fix ssh /edit by checking for file through vscode fs                 | Nate Sesti
2023-08-26 | feat: :sparkles: select model from dropdown                                     | Nate Sesti
2023-08-25 | small fixes, mostly ggml updates                                                | Nate Sesti
2023-08-24 | fix: :bug: replace hardcoded path for config file                               | Nate Sesti
2023-08-23 | fix: :bug: bug where old server doesn't get updated                             | Nate Sesti
2023-08-22 | Config UI (#399)                                                                | Nate Sesti
2023-08-21 | fix: :bug: fixing bugs with ggml                                                | Nate Sesti
2023-08-20 | feat: :sparkles: saved context groups                                           | Nate Sesti
2023-08-20 | fix: :bug: fix replicate to work with models requiring prompt input             | Nate Sesti
2023-08-19 | fix: :bug: fix ggml bug                                                         | Nate Sesti
2023-08-19 | don't ignore unused import for tiktoken                                         | Nate Sesti
2023-08-19 | fix: :bug: make sure server_version.txt exists                                  | Nate Sesti
2023-08-18 | style: :art: autoformat with black on all python files                          | Nate Sesti
2023-08-16 | fix together.py for llama-70b                                                   | Nate Sesti
2023-08-14 | fix: :bug: MAX_TOKENS_FOR_MODEL bug fix, more testing                           | Nate Sesti
2023-08-13 | feat: :white_check_mark: update test and add model telemetry                    | Nate Sesti
2023-08-13 | fix: :ambulance: hotfix and package.json seo experiment                         | Nate Sesti
2023-08-13 | fix: :bug: fix for Azure OpenAI model names                                     | Nate Sesti
2023-08-11 | Merge pull request #371 from bra1nDump/fix-url-context [skip ci]                | Nate Sesti
2023-08-11 | feat: :recycle: load preset_urls at load_index                                  | Nate Sesti
2023-08-10 | Draft fixing broken default configuration and introducing default urls that y...| Kirill Dubovitskiy
2023-08-09 | feat: :sparkles: testing in ci, final test of                                   | Nate Sesti
2023-08-09 | feat: :sparkles: support for Together.ai models                                 | Nate Sesti
2023-08-09 | feat: :sparkles: add urlcontextprovider back to default config                  | Nate Sesti
2023-08-08 | feat: :sparkles: testing improved prompting for stablecode                      | Nate Sesti