author     Nate Sesti <sestinj@gmail.com>            2023-07-26 00:26:02 -0700
committer  Nate Sesti <sestinj@gmail.com>            2023-07-26 00:26:02 -0700
commit     79a2fa634e5b5d44e13fbd49facf14a4fc3745d1 (patch)
tree       0e5917d1ae3fad12e4cf459ec273593d9d5267a4 /continuedev
parent     b759e2dbfe36b3e8873527b9736d64866da9b604 (diff)
parent     2b69bf6f1fc2e06b16b718358ceed4911d6e87c3 (diff)
Merge branch 'config-py' into merge-config-py-TO-main
Diffstat (limited to 'continuedev')
31 files changed, 1527 insertions, 512 deletions
diff --git a/continuedev/notes.md b/continuedev/notes.md
new file mode 100644
index 00000000..bfd8cf4a
--- /dev/null
+++ b/continuedev/notes.md
@@ -0,0 +1,101 @@
+### List of concrete things that will be built
+
+- Interface with language servers
+- Central place to initiate language model suggestions
+- Abstracted set of tools around language servers and other complicated sources of information
+- Way to keep track of a reversible/replayable series of human/LLM changes to code, at better granularity than git
+- A library of prompts and tools to combine them to yield good examples
+- A basic LLM-agnostic prompting interface
+- The server, or something that can be integrated easily into an extension for any IDE
+- A CLI tool that can be called to make a one-off change on some codebase
+- A default interface that can run at localhost, but which we will also create a desktop application version of
+- Tools to parse LLM output to get file outputs
+- Parse and unparse tracebacks in any language
+- FileEdit/FileDiff creation from LLM output where you don't necessarily know the position of the lines
+- Test generation and tools to use
+- Tools/prompts for summarizing groups of file edits
+- Need to be able to remove/add files. Is there any other reversible action you should be considering? Does git track anything else? Renaming files, or adding/removing folders.
+
+There should be different levels of abstraction at which you can work with these concepts. One of them should be as simple as:
+
+- You write a formatted string with FormattedStringPrompter
+- Specify a source for each of the strings, by a simple strongly typed enum, like traceback or something else
+  maybe not realistic or useful
+
+---
+
+- One big thing that happens as you're fixing errors is that you encounter a fork in the road. The language model should be able to present you with both options, and you just click to decide.
+- What I'm doing right now: I write a bunch of code without running it, then have to solve a bunch of errors at once, albeit small, obvious ones. We can do this all automatically.
+
+---
+
+### Current limitations:
+
+- We are always specifying how to use the tools directly instead of letting the AI choose how to use them on its own. You should expand to allow this.
+- We want the history of both user and AI changes to be reflected as a single agent. So you need to watch for user updates to the filesystem. See https://pythonhosted.org/watchdog/quickstart.html#quickstart (a sketch follows right after this list).
+- Language servers are a big deal, and you've not done anything about that quite yet.
+  - A class to manage all of them, and some way to configure which to run.
+  - Call them inside actions? Probably not. Does a language server ever make changes? Maybe you just create a Python client.
+- You want this library to play well with IDEs, which means it should see file changes even before they are saved. What you're building might then look more like a language server than anything else. Just an extended language server. Something else that points at this is your need for watching the filesystem for changes. This is precisely what the LSP does.
+- Prompts don't always transfer well between models. So a prompt should actually have different versions for each model, instead of being just a single string.
+- Kind of weird syntax for creating your own actions, validators, etc... USE ANNOTATIONS
+- Stuff should be serializable
+- We also want to be able to answer questions, not just generate file edits.
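As a side note on the filesystem-watching limitation above, here is a minimal sketch of what the linked watchdog quickstart amounts to. The handler body is a placeholder: printing stands in for whatever the framework would actually record (e.g. folding the change into history as a manual edit step); none of this is the actual Continue code.

```python
# Minimal sketch of watching a workspace with the `watchdog` library linked above.
# Printing is a stand-in for whatever the framework would actually do with a change.
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


class WorkspaceChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        # Ignore directory events; react only to edited files.
        if not event.is_directory:
            print(f"user edited {event.src_path}")


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(WorkspaceChangeHandler(), path=".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()
```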
+
+### Plugins
+
+Plugin is a more general word, which subsumes validator plugins, tool plugins, and what else?
+
+### Continue as Extended Language Server
+
+- A language server capable of directly editing the filesystem and running commands.
+- Really just needs to write to files, or suggest file edits. But in an ideal future it can do more, like press every button in the IDE.
+
+The question now isn't "do we want to use it," but "is it the actual thing we are building?" I've realized that we need 1) to watch files for changes and make suggestions based off of these, 2) to be language agnostic, and 3) ideally to plug in to any IDE. All of these things are the bread and butter of LSP. It seems like what we might actually be building is a headless LSP client, or an LSP server with a backdoor, or an LSP server with more endpoints. Trying to figure out where it best fits in.
+
+- We're not totally focused on responding to small updates, so it might be okay to later build our own endpoint to watch for non-save updates to files.
+- There aren't so many things that need to be done in their own language that aren't already done in LSP, are there?
+
+Overall, I think you should just think of this framework as a way of giving tools to language models and then putting them in a loop to edit, validate, and run code. Tools are the plugins, so you shouldn't have to build all of them, and it should be possible to write them in any language.
+
+The LSP tool is just another tool. It will be common, so you want it built in, but it's just another tool.
+The thing about LSP is that there's a lot of state going on, and it needs to be running the whole time.
+An edit in VS Code before saving can just be a hook to watch for, and can replace the WatchDog thing.
+
+A cool feature of what we're doing is that we might summarize the changes made by a human, such that they can separate their work into describable and reversible parts.
+
+In essence, our framework makes it easy for people to match up problems to prompts. So people can share their solutions of the form "whenever you see this error, you can run this prompt with these tools, and it will fix it automatically".
+
+I'm finding that the validators are pretty tightly tied to the actions. Should this be reflected in a single class for both?
+
+---
+
+The final simplification you could make: policies are actions. So the very first action that is always called is actually a policy, but it might be instantiated with a specific action first.
+
+Don't do this right now. But you might want to, and make it DAGs all the way down.
+
+Other consideration: a good amount of work could go into defining the spaces of observations.
+
+"""
+What does LangChain do that's interesting:
+
+- The agent has a get_allowed_tools() method
+- They have an analog of Prompter with PromptTemplate
+- They pass an LLM object to instantiate the Chain object
+
+What doesn't LangChain do right?
+
+They don't have stopping as an action
+Not reversible
+"""
+
+Runners could also be pluggable. They are like the middleware for actions.
+
+- Ask for permission
+- Keep track of substeps in a DAG
+- Keep locks on resources; have steps declare the resources they will use / want to lock up
+
+Policies should be generators! This makes it much more natural to group steps. It also means you can probably just define a decorator for a generator function that will turn it into a full policy class (a rough sketch of such a decorator follows below).
+This is just syntactic sugar though.
+
+Can you also make an annotation for actions, so you just have to write the run function?
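To make the last two ideas concrete, here is a hypothetical sketch. The names `Step`, `Policy`, `step`, and `policy` are placeholders invented for illustration, not the framework's real classes: one decorator turns a bare run function into a step class, and another turns a generator that yields steps into a policy class.

```python
# Hypothetical sketch only: Step/Policy/step/policy are made-up placeholder names.
from typing import Any, Callable, Generator, Optional


class Step:
    """Placeholder base class for an action: something with a run() method."""

    def run(self, observation: Any) -> Any:
        raise NotImplementedError


class Policy:
    """Placeholder base class: decides the next Step given the last observation."""

    def next(self, observation: Any) -> Optional[Step]:
        raise NotImplementedError


def step(fn: Callable[[Any], Any]) -> type:
    """Decorator: turn a bare run function into a Step subclass."""

    class FunctionStep(Step):
        def run(self, observation: Any) -> Any:
            return fn(observation)

    FunctionStep.__name__ = fn.__name__
    return FunctionStep


def policy(gen_fn: Callable[[], Generator[Step, Any, None]]) -> type:
    """Decorator: turn a generator of Steps into a Policy subclass.

    Each `yield` hands the runner the next Step; the observation produced by
    running that step is sent back into the generator.
    """

    class GeneratorPolicy(Policy):
        def __init__(self) -> None:
            self._gen = gen_fn()
            self._started = False

        def next(self, observation: Any) -> Optional[Step]:
            try:
                if not self._started:
                    self._started = True
                    return next(self._gen)
                return self._gen.send(observation)
            except StopIteration:
                return None  # the policy has nothing left to do

    GeneratorPolicy.__name__ = gen_fn.__name__
    return GeneratorPolicy


@step
def print_observation(observation: Any) -> None:
    print("last observation:", observation)


@policy
def example_policy() -> Generator[Step, Any, None]:
    obs = yield print_observation()  # run one step, get its observation back
    yield print_observation()        # could branch on `obs` here instead
```

A runner loop would then repeatedly call `next(last_observation)` on the policy instance and execute whatever step comes back, stopping when it returns `None`.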
And then automatically add it to the pluggy library somehow. diff --git a/continuedev/poetry.lock b/continuedev/poetry.lock index 1cd4a591..d1b5f3d6 100644 --- a/continuedev/poetry.lock +++ b/continuedev/poetry.lock @@ -1,10 +1,20 @@ -# This file is automatically @generated by Poetry 1.4.1 and should not be changed by hand. +# This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand. + +[[package]] +name = "aiofiles" +version = "23.1.0" +description = "File support for asyncio." +optional = false +python-versions = ">=3.7,<4.0" +files = [ + {file = "aiofiles-23.1.0-py3-none-any.whl", hash = "sha256:9312414ae06472eb6f1d163f555e466a23aed1c8f60c30cccf7121dba2e53eb2"}, + {file = "aiofiles-23.1.0.tar.gz", hash = "sha256:edd247df9a19e0db16534d4baaf536d6609a43e1de5401d7a4c1c148753a1635"}, +] [[package]] name = "aiohttp" version = "3.8.4" description = "Async http client/server framework (asyncio)" -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -113,7 +123,6 @@ speedups = ["Brotli", "aiodns", "cchardet"] name = "aiosignal" version = "1.3.1" description = "aiosignal: a list of registered asynchronous callbacks" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -128,7 +137,6 @@ frozenlist = ">=1.1.0" name = "anthropic" version = "0.3.4" description = "Client library for the anthropic API" -category = "main" optional = false python-versions = ">=3.7,<4.0" files = [ @@ -148,7 +156,6 @@ typing-extensions = ">=4.1.1,<5" name = "anyio" version = "3.6.2" description = "High level compatibility layer for multiple asynchronous event loop implementations" -category = "main" optional = false python-versions = ">=3.6.2" files = [ @@ -169,7 +176,6 @@ trio = ["trio (>=0.16,<0.22)"] name = "async-timeout" version = "4.0.2" description = "Timeout context manager for asyncio programs" -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -181,7 +187,6 @@ files = [ name = "attrs" version = "23.1.0" description = "Classes Without Boilerplate" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -200,7 +205,6 @@ tests-no-zope = ["cloudpickle", "hypothesis", "mypy (>=1.1.1)", "pympler", "pyte name = "backoff" version = "2.2.1" description = "Function decoration for backoff and retry" -category = "main" optional = false python-versions = ">=3.7,<4.0" files = [ @@ -212,7 +216,6 @@ files = [ name = "boltons" version = "23.0.0" description = "When they're not builtins, they're boltons." -category = "main" optional = false python-versions = "*" files = [ @@ -221,10 +224,23 @@ files = [ ] [[package]] +name = "camel-converter" +version = "3.0.2" +description = "Converts a string from snake case to camel case or camel case to snake case" +optional = false +python-versions = ">=3.8,<4.0" +files = [ + {file = "camel_converter-3.0.2-py3-none-any.whl", hash = "sha256:88e5d91be5b2dff9c0748ba515774c3421088922d9e77c39f8742eb41cb7db88"}, + {file = "camel_converter-3.0.2.tar.gz", hash = "sha256:3b3d076e824ae979b271b4d497c90514c2b218811f76b0c368fb69da2556fe07"}, +] + +[package.extras] +pydantic = ["pydantic (>=1.8.2)"] + +[[package]] name = "certifi" version = "2022.12.7" description = "Python package for providing Mozilla's CA Bundle." -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -233,10 +249,85 @@ files = [ ] [[package]] +name = "cffi" +version = "1.15.1" +description = "Foreign Function Interface for Python calling C code." 
+optional = false +python-versions = "*" +files = [ + {file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"}, + {file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"}, + {file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"}, + {file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"}, + {file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"}, + {file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"}, + {file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"}, + {file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"}, + {file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"}, + {file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"}, + {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"}, + {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"}, + {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"}, + {file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"}, + {file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"}, + {file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"}, + {file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"}, + {file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"}, + {file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"}, + {file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"}, + {file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"}, + {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"}, + {file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"}, + 
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"}, + {file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"}, + {file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"}, + {file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"}, + {file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"}, + {file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"}, + {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"}, + {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"}, + {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"}, + {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"}, + {file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"}, + {file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"}, + {file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"}, + {file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"}, + {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"}, + {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"}, + {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"}, + {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"}, + {file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"}, + {file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"}, + {file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"}, + {file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"}, + {file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"}, + 
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"}, + {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"}, + {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"}, + {file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"}, + {file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"}, + {file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"}, + {file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"}, + {file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"}, + {file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"}, + {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"}, + {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"}, + {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"}, + {file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"}, + {file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"}, + {file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"}, + {file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"}, + {file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"}, + {file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"}, +] + +[package.dependencies] +pycparser = "*" + +[[package]] name = "charset-normalizer" version = "3.1.0" description = "The Real First Universal Charset Detector. Open, modern and actively maintained alternative to Chardet." 
-category = "main" optional = false python-versions = ">=3.7.0" files = [ @@ -321,7 +412,6 @@ files = [ name = "chevron" version = "0.14.0" description = "Mustache templating language renderer" -category = "main" optional = false python-versions = "*" files = [ @@ -333,7 +423,6 @@ files = [ name = "click" version = "8.1.3" description = "Composable command line interface toolkit" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -348,7 +437,6 @@ colorama = {version = "*", markers = "platform_system == \"Windows\""} name = "colorama" version = "0.4.6" description = "Cross-platform colored terminal text." -category = "main" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*,>=2.7" files = [ @@ -357,10 +445,54 @@ files = [ ] [[package]] +name = "cryptography" +version = "41.0.2" +description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers." +optional = false +python-versions = ">=3.7" +files = [ + {file = "cryptography-41.0.2-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:01f1d9e537f9a15b037d5d9ee442b8c22e3ae11ce65ea1f3316a41c78756b711"}, + {file = "cryptography-41.0.2-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:079347de771f9282fbfe0e0236c716686950c19dee1b76240ab09ce1624d76d7"}, + {file = "cryptography-41.0.2-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:439c3cc4c0d42fa999b83ded80a9a1fb54d53c58d6e59234cfe97f241e6c781d"}, + {file = "cryptography-41.0.2-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f14ad275364c8b4e525d018f6716537ae7b6d369c094805cae45300847e0894f"}, + {file = "cryptography-41.0.2-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:84609ade00a6ec59a89729e87a503c6e36af98ddcd566d5f3be52e29ba993182"}, + {file = "cryptography-41.0.2-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:49c3222bb8f8e800aead2e376cbef687bc9e3cb9b58b29a261210456a7783d83"}, + {file = "cryptography-41.0.2-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:d73f419a56d74fef257955f51b18d046f3506270a5fd2ac5febbfa259d6c0fa5"}, + {file = "cryptography-41.0.2-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:2a034bf7d9ca894720f2ec1d8b7b5832d7e363571828037f9e0c4f18c1b58a58"}, + {file = "cryptography-41.0.2-cp37-abi3-win32.whl", hash = "sha256:d124682c7a23c9764e54ca9ab5b308b14b18eba02722b8659fb238546de83a76"}, + {file = "cryptography-41.0.2-cp37-abi3-win_amd64.whl", hash = "sha256:9c3fe6534d59d071ee82081ca3d71eed3210f76ebd0361798c74abc2bcf347d4"}, + {file = "cryptography-41.0.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:a719399b99377b218dac6cf547b6ec54e6ef20207b6165126a280b0ce97e0d2a"}, + {file = "cryptography-41.0.2-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:182be4171f9332b6741ee818ec27daff9fb00349f706629f5cbf417bd50e66fd"}, + {file = "cryptography-41.0.2-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:7a9a3bced53b7f09da251685224d6a260c3cb291768f54954e28f03ef14e3766"}, + {file = "cryptography-41.0.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:f0dc40e6f7aa37af01aba07277d3d64d5a03dc66d682097541ec4da03cc140ee"}, + {file = "cryptography-41.0.2-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:674b669d5daa64206c38e507808aae49904c988fa0a71c935e7006a3e1e83831"}, + {file = "cryptography-41.0.2-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:7af244b012711a26196450d34f483357e42aeddb04128885d95a69bd8b14b69b"}, + {file = 
"cryptography-41.0.2-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:9b6d717393dbae53d4e52684ef4f022444fc1cce3c48c38cb74fca29e1f08eaa"}, + {file = "cryptography-41.0.2-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:192255f539d7a89f2102d07d7375b1e0a81f7478925b3bc2e0549ebf739dae0e"}, + {file = "cryptography-41.0.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:f772610fe364372de33d76edcd313636a25684edb94cee53fd790195f5989d14"}, + {file = "cryptography-41.0.2-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:b332cba64d99a70c1e0836902720887fb4529ea49ea7f5462cf6640e095e11d2"}, + {file = "cryptography-41.0.2-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:9a6673c1828db6270b76b22cc696f40cde9043eb90373da5c2f8f2158957f42f"}, + {file = "cryptography-41.0.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:342f3767e25876751e14f8459ad85e77e660537ca0a066e10e75df9c9e9099f0"}, + {file = "cryptography-41.0.2.tar.gz", hash = "sha256:7d230bf856164de164ecb615ccc14c7fc6de6906ddd5b491f3af90d3514c925c"}, +] + +[package.dependencies] +cffi = ">=1.12" + +[package.extras] +docs = ["sphinx (>=5.3.0)", "sphinx-rtd-theme (>=1.1.1)"] +docstest = ["pyenchant (>=1.6.11)", "sphinxcontrib-spelling (>=4.0.1)", "twine (>=1.12.0)"] +nox = ["nox"] +pep8test = ["black", "check-sdist", "mypy", "ruff"] +sdist = ["build"] +ssh = ["bcrypt (>=3.1.5)"] +test = ["pretend", "pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-xdist"] +test-randomorder = ["pytest-randomly"] + +[[package]] name = "dataclasses-json" version = "0.5.7" description = "Easily serialize dataclasses to and from JSON" -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -377,10 +509,26 @@ typing-inspect = ">=0.4.0" dev = ["flake8", "hypothesis", "ipython", "mypy (>=0.710)", "portray", "pytest (>=6.2.3)", "simplejson", "types-dataclasses"] [[package]] +name = "deprecated" +version = "1.2.14" +description = "Python @deprecated decorator to deprecate old python classes, functions or methods." +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +files = [ + {file = "Deprecated-1.2.14-py2.py3-none-any.whl", hash = "sha256:6fac8b097794a90302bdbb17b9b815e732d3c4720583ff1b198499d78470466c"}, + {file = "Deprecated-1.2.14.tar.gz", hash = "sha256:e5323eb936458dccc2582dc6f9c322c852a775a27065ff2b0c4970b9d53d01b3"}, +] + +[package.dependencies] +wrapt = ">=1.10,<2" + +[package.extras] +dev = ["PyTest", "PyTest-Cov", "bump2version (<1)", "sphinx (<2)", "tox"] + +[[package]] name = "diff-match-patch" version = "20230430" description = "Diff Match and Patch" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -395,7 +543,6 @@ dev = ["attribution (==1.6.2)", "black (==23.3.0)", "flit (==3.8.0)", "mypy (==1 name = "directory-tree" version = "0.0.3.1" description = "Utility Package that Displays out the Tree Structure of a Particular Directory." 
-category = "main" optional = false python-versions = "*" files = [ @@ -410,7 +557,6 @@ dev = ["pytest (>=3.7)"] name = "distro" version = "1.8.0" description = "Distro - an OS platform information API" -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -422,7 +568,6 @@ files = [ name = "fastapi" version = "0.95.1" description = "FastAPI framework, high performance, easy to learn, fast to code, ready for production" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -444,7 +589,6 @@ test = ["anyio[trio] (>=3.2.1,<4.0.0)", "black (==23.1.0)", "coverage[toml] (>=6 name = "frozenlist" version = "1.3.3" description = "A list-like structure which implements collections.abc.MutableSequence" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -528,7 +672,6 @@ files = [ name = "gpt-index" version = "0.6.8" description = "Interface between LLMs and your data" -category = "main" optional = false python-versions = "*" files = [ @@ -550,7 +693,6 @@ tiktoken = "*" name = "greenlet" version = "2.0.2" description = "Lightweight in-process concurrent programming" -category = "main" optional = false python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*" files = [ @@ -624,7 +766,6 @@ test = ["objgraph", "psutil"] name = "h11" version = "0.14.0" description = "A pure-Python, bring-your-own-I/O implementation of HTTP/1.1" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -636,7 +777,6 @@ files = [ name = "httpcore" version = "0.17.3" description = "A minimal low-level HTTP client." -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -648,17 +788,16 @@ files = [ anyio = ">=3.0,<5.0" certifi = "*" h11 = ">=0.13,<0.15" -sniffio = ">=1.0.0,<2.0.0" +sniffio = "==1.*" [package.extras] http2 = ["h2 (>=3,<5)"] -socks = ["socksio (>=1.0.0,<2.0.0)"] +socks = ["socksio (==1.*)"] [[package]] name = "httpx" version = "0.24.1" description = "The next generation HTTP client." -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -674,15 +813,14 @@ sniffio = "*" [package.extras] brotli = ["brotli", "brotlicffi"] -cli = ["click (>=8.0.0,<9.0.0)", "pygments (>=2.0.0,<3.0.0)", "rich (>=10,<14)"] +cli = ["click (==8.*)", "pygments (==2.*)", "rich (>=10,<14)"] http2 = ["h2 (>=3,<5)"] -socks = ["socksio (>=1.0.0,<2.0.0)"] +socks = ["socksio (==1.*)"] [[package]] name = "idna" version = "3.4" description = "Internationalized Domain Names in Applications (IDNA)" -category = "main" optional = false python-versions = ">=3.5" files = [ @@ -694,7 +832,6 @@ files = [ name = "importlib-resources" version = "6.0.0" description = "Read resources from Python packages" -category = "main" optional = false python-versions = ">=3.8" files = [ @@ -713,7 +850,6 @@ testing = ["pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", name = "jsonref" version = "1.1.0" description = "jsonref is a library for automatic dereferencing of JSON Reference objects for Python." 
-category = "main" optional = false python-versions = ">=3.7" files = [ @@ -725,7 +861,6 @@ files = [ name = "jsonschema" version = "4.17.3" description = "An implementation of JSON Schema validation for Python" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -747,7 +882,6 @@ format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339- name = "langchain" version = "0.0.171" description = "Building applications with LLMs through composability" -category = "main" optional = false python-versions = ">=3.8.1,<4.0" files = [ @@ -784,7 +918,6 @@ qdrant = ["qdrant-client (>=1.1.2,<2.0.0)"] name = "marshmallow" version = "3.19.0" description = "A lightweight library for converting complex datatypes to and from native Python datatypes." -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -805,7 +938,6 @@ tests = ["pytest", "pytz", "simplejson"] name = "marshmallow-enum" version = "1.5.1" description = "Enum field for Marshmallow" -category = "main" optional = false python-versions = "*" files = [ @@ -817,10 +949,27 @@ files = [ marshmallow = ">=2.0.0" [[package]] +name = "meilisearch-python-async" +version = "1.4.8" +description = "A Python async client for the Meilisearch API" +optional = false +python-versions = ">=3.8,<4.0" +files = [ + {file = "meilisearch_python_async-1.4.8-py3-none-any.whl", hash = "sha256:dea8da89ea254cd2de7e3c0f0883e98486fc724ba8212e4fe19b2b44d9ca6aa1"}, + {file = "meilisearch_python_async-1.4.8.tar.gz", hash = "sha256:ebcea0ed800dce291809ec599384f103da55362b4485997ff4daa0010c935fc6"}, +] + +[package.dependencies] +aiofiles = ">=0.7" +camel-converter = ">=1.0.0" +httpx = ">=0.17" +pydantic = ">=1.8" +PyJWT = ">=2.3.0" + +[[package]] name = "monotonic" version = "1.6" description = "An implementation of time.monotonic() for Python 2 & < 3.3" -category = "main" optional = false python-versions = "*" files = [ @@ -832,7 +981,6 @@ files = [ name = "multidict" version = "6.0.4" description = "multidict implementation" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -916,7 +1064,6 @@ files = [ name = "mypy-extensions" version = "1.0.0" description = "Type system extensions for programs checked with the mypy type checker." 
-category = "main" optional = false python-versions = ">=3.5" files = [ @@ -928,7 +1075,6 @@ files = [ name = "nest-asyncio" version = "1.5.6" description = "Patch asyncio to allow nested event loops" -category = "main" optional = false python-versions = ">=3.5" files = [ @@ -940,7 +1086,6 @@ files = [ name = "numexpr" version = "2.8.4" description = "Fast numerical expression evaluator for NumPy" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -983,7 +1128,6 @@ numpy = ">=1.13.3" name = "numpy" version = "1.24.3" description = "Fundamental package for array computing in Python" -category = "main" optional = false python-versions = ">=3.8" files = [ @@ -1021,7 +1165,6 @@ files = [ name = "openai" version = "0.27.6" description = "Python client library for the OpenAI API" -category = "main" optional = false python-versions = ">=3.7.1" files = [ @@ -1036,7 +1179,7 @@ tqdm = "*" [package.extras] datalib = ["numpy", "openpyxl (>=3.0.7)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)"] -dev = ["black (>=21.6b0,<22.0)", "pytest (>=6.0.0,<7.0.0)", "pytest-asyncio", "pytest-mock"] +dev = ["black (>=21.6b0,<22.0)", "pytest (==6.*)", "pytest-asyncio", "pytest-mock"] embeddings = ["matplotlib", "numpy", "openpyxl (>=3.0.7)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)", "plotly", "scikit-learn (>=1.0.2)", "scipy", "tenacity (>=8.0.1)"] wandb = ["numpy", "openpyxl (>=3.0.7)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1.0.11)", "wandb"] @@ -1044,7 +1187,6 @@ wandb = ["numpy", "openpyxl (>=3.0.7)", "pandas (>=1.2.3)", "pandas-stubs (>=1.1 name = "openapi-schema-pydantic" version = "1.2.4" description = "OpenAPI (v3) specification schema as pydantic class" -category = "main" optional = false python-versions = ">=3.6.1" files = [ @@ -1059,7 +1201,6 @@ pydantic = ">=1.8.2" name = "packaging" version = "23.1" description = "Core utilities for Python packages" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1071,7 +1212,6 @@ files = [ name = "pandas" version = "2.0.1" description = "Powerful data structures for data analysis, time series, and statistics" -category = "main" optional = false python-versions = ">=3.8" files = [ @@ -1139,7 +1279,6 @@ xml = ["lxml (>=4.6.3)"] name = "pkgutil-resolve-name" version = "1.3.10" description = "Resolve a name to an object." -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -1151,7 +1290,6 @@ files = [ name = "posthog" version = "3.0.1" description = "Integrate PostHog into any python application." -category = "main" optional = false python-versions = "*" files = [ @@ -1175,7 +1313,6 @@ test = ["coverage", "flake8", "freezegun (==0.3.15)", "mock (>=2.0.0)", "pylint" name = "psutil" version = "5.9.5" description = "Cross-platform lib for process and system monitoring in Python." 
-category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" files = [ @@ -1199,10 +1336,20 @@ files = [ test = ["enum34", "ipaddress", "mock", "pywin32", "wmi"] [[package]] +name = "pycparser" +version = "2.21" +description = "C parser in Python" +optional = false +python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*" +files = [ + {file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"}, + {file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"}, +] + +[[package]] name = "pydantic" version = "1.10.7" description = "Data validation and settings management using python type hints" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1252,10 +1399,72 @@ dotenv = ["python-dotenv (>=0.10.4)"] email = ["email-validator (>=1.0.3)"] [[package]] +name = "pygithub" +version = "1.59.0" +description = "Use the full Github API v3" +optional = false +python-versions = ">=3.7" +files = [ + {file = "PyGithub-1.59.0-py3-none-any.whl", hash = "sha256:126bdbae72087d8d038b113aab6b059b4553cb59348e3024bb1a1cae406ace9e"}, + {file = "PyGithub-1.59.0.tar.gz", hash = "sha256:6e05ff49bac3caa7d1d6177a10c6e55a3e20c85b92424cc198571fd0cf786690"}, +] + +[package.dependencies] +deprecated = "*" +pyjwt = {version = ">=2.4.0", extras = ["crypto"]} +pynacl = ">=1.4.0" +requests = ">=2.14.0" + +[[package]] +name = "pyjwt" +version = "2.8.0" +description = "JSON Web Token implementation in Python" +optional = false +python-versions = ">=3.7" +files = [ + {file = "PyJWT-2.8.0-py3-none-any.whl", hash = "sha256:59127c392cc44c2da5bb3192169a91f429924e17aff6534d70fdc02ab3e04320"}, + {file = "PyJWT-2.8.0.tar.gz", hash = "sha256:57e28d156e3d5c10088e0c68abb90bfac3df82b40a71bd0daa20c65ccd5c23de"}, +] + +[package.dependencies] +cryptography = {version = ">=3.4.0", optional = true, markers = "extra == \"crypto\""} + +[package.extras] +crypto = ["cryptography (>=3.4.0)"] +dev = ["coverage[toml] (==5.0.4)", "cryptography (>=3.4.0)", "pre-commit", "pytest (>=6.0.0,<7.0.0)", "sphinx (>=4.5.0,<5.0.0)", "sphinx-rtd-theme", "zope.interface"] +docs = ["sphinx (>=4.5.0,<5.0.0)", "sphinx-rtd-theme", "zope.interface"] +tests = ["coverage[toml] (==5.0.4)", "pytest (>=6.0.0,<7.0.0)"] + +[[package]] +name = "pynacl" +version = "1.5.0" +description = "Python binding to the Networking and Cryptography (NaCl) library" +optional = false +python-versions = ">=3.6" +files = [ + {file = "PyNaCl-1.5.0-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:401002a4aaa07c9414132aaed7f6836ff98f59277a234704ff66878c2ee4a0d1"}, + {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:52cb72a79269189d4e0dc537556f4740f7f0a9ec41c1322598799b0bdad4ef92"}, + {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a36d4a9dda1f19ce6e03c9a784a2921a4b726b02e1c736600ca9c22029474394"}, + {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0c84947a22519e013607c9be43706dd42513f9e6ae5d39d3613ca1e142fba44d"}, + {file = "PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06b8f6fa7f5de8d5d2f7573fe8c863c051225a27b61e6860fd047b1775807858"}, + {file = "PyNaCl-1.5.0-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:a422368fc821589c228f4c49438a368831cb5bbc0eab5ebe1d7fac9dded6567b"}, + 
{file = "PyNaCl-1.5.0-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:61f642bf2378713e2c2e1de73444a3778e5f0a38be6fee0fe532fe30060282ff"}, + {file = "PyNaCl-1.5.0-cp36-abi3-win32.whl", hash = "sha256:e46dae94e34b085175f8abb3b0aaa7da40767865ac82c928eeb9e57e1ea8a543"}, + {file = "PyNaCl-1.5.0-cp36-abi3-win_amd64.whl", hash = "sha256:20f42270d27e1b6a29f54032090b972d97f0a1b0948cc52392041ef7831fee93"}, + {file = "PyNaCl-1.5.0.tar.gz", hash = "sha256:8ac7448f09ab85811607bdd21ec2464495ac8b7c66d146bf545b0f08fb9220ba"}, +] + +[package.dependencies] +cffi = ">=1.4.1" + +[package.extras] +docs = ["sphinx (>=1.6.5)", "sphinx-rtd-theme"] +tests = ["hypothesis (>=3.27.0)", "pytest (>=3.2.1,!=3.3.0)"] + +[[package]] name = "pyrsistent" version = "0.19.3" description = "Persistent/Functional/Immutable data structures" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1292,7 +1501,6 @@ files = [ name = "python-dateutil" version = "2.8.2" description = "Extensions to the standard Python datetime module" -category = "main" optional = false python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7" files = [ @@ -1307,7 +1515,6 @@ six = ">=1.5" name = "python-dotenv" version = "1.0.0" description = "Read key-value pairs from a .env file and set them as environment variables" -category = "main" optional = false python-versions = ">=3.8" files = [ @@ -1322,7 +1529,6 @@ cli = ["click (>=5.0)"] name = "pytz" version = "2023.3" description = "World timezone definitions, modern and historical" -category = "main" optional = false python-versions = "*" files = [ @@ -1334,7 +1540,6 @@ files = [ name = "pyyaml" version = "6.0" description = "YAML parser and emitter for Python" -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -1384,7 +1589,6 @@ files = [ name = "regex" version = "2023.5.5" description = "Alternative regular expression module, to replace re." -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -1482,7 +1686,6 @@ files = [ name = "requests" version = "2.29.0" description = "Python HTTP for Humans." 
-category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1504,7 +1707,6 @@ use-chardet-on-py3 = ["chardet (>=3.0.2,<6)"] name = "six" version = "1.16.0" description = "Python 2 and 3 compatibility utilities" -category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*" files = [ @@ -1516,7 +1718,6 @@ files = [ name = "sniffio" version = "1.3.0" description = "Sniff out which async library your code is running under" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1528,7 +1729,6 @@ files = [ name = "sqlalchemy" version = "2.0.13" description = "Database Abstraction Library" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1576,7 +1776,7 @@ files = [ ] [package.dependencies] -greenlet = {version = "!=0.4.17", markers = "platform_machine == \"aarch64\" or platform_machine == \"ppc64le\" or platform_machine == \"x86_64\" or platform_machine == \"amd64\" or platform_machine == \"AMD64\" or platform_machine == \"win32\" or platform_machine == \"WIN32\""} +greenlet = {version = "!=0.4.17", markers = "platform_machine == \"win32\" or platform_machine == \"WIN32\" or platform_machine == \"AMD64\" or platform_machine == \"amd64\" or platform_machine == \"x86_64\" or platform_machine == \"ppc64le\" or platform_machine == \"aarch64\""} typing-extensions = ">=4.2.0" [package.extras] @@ -1606,7 +1806,6 @@ sqlcipher = ["sqlcipher3-binary"] name = "starlette" version = "0.26.1" description = "The little ASGI library that shines." -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1625,7 +1824,6 @@ full = ["httpx (>=0.22.0)", "itsdangerous", "jinja2", "python-multipart", "pyyam name = "tenacity" version = "8.2.2" description = "Retry code until it succeeds" -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -1640,7 +1838,6 @@ doc = ["reno", "sphinx", "tornado (>=4.5)"] name = "tiktoken" version = "0.4.0" description = "tiktoken is a fast BPE tokeniser for use with OpenAI's models" -category = "main" optional = false python-versions = ">=3.8" files = [ @@ -1686,7 +1883,6 @@ blobfile = ["blobfile (>=2)"] name = "tokenizers" version = "0.13.3" description = "Fast and Customizable Tokenizers" -category = "main" optional = false python-versions = "*" files = [ @@ -1741,7 +1937,6 @@ testing = ["black (==22.3)", "datasets", "numpy", "pytest", "requests"] name = "tqdm" version = "4.65.0" description = "Fast, Extensible Progress Meter" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1762,7 +1957,6 @@ telegram = ["requests"] name = "typer" version = "0.7.0" description = "Typer, build great CLIs. Easy to code. Based on Python type hints." -category = "main" optional = false python-versions = ">=3.6" files = [ @@ -1783,7 +1977,6 @@ test = ["black (>=22.3.0,<23.0.0)", "coverage (>=6.2,<7.0)", "isort (>=5.0.6,<6. name = "typing-extensions" version = "4.5.0" description = "Backported and Experimental Type Hints for Python 3.7+" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1795,7 +1988,6 @@ files = [ name = "typing-inspect" version = "0.8.0" description = "Runtime inspection utilities for typing module." 
-category = "main" optional = false python-versions = "*" files = [ @@ -1811,7 +2003,6 @@ typing-extensions = ">=3.7.4" name = "tzdata" version = "2023.3" description = "Provider of IANA time zone data" -category = "main" optional = false python-versions = ">=2" files = [ @@ -1823,7 +2014,6 @@ files = [ name = "urllib3" version = "1.26.15" description = "HTTP library with thread-safe connection pooling, file post, and more." -category = "main" optional = false python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.5.*" files = [ @@ -1840,7 +2030,6 @@ socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"] name = "uvicorn" version = "0.21.1" description = "The lightning-fast ASGI server." -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1859,7 +2048,6 @@ standard = ["colorama (>=0.4)", "httptools (>=0.5.0)", "python-dotenv (>=0.13)", name = "websockets" version = "11.0.2" description = "An implementation of the WebSocket Protocol (RFC 6455 & 7692)" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -1936,10 +2124,93 @@ files = [ ] [[package]] +name = "wrapt" +version = "1.15.0" +description = "Module for decorators, wrappers and monkey patching." +optional = false +python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7" +files = [ + {file = "wrapt-1.15.0-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:ca1cccf838cd28d5a0883b342474c630ac48cac5df0ee6eacc9c7290f76b11c1"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:e826aadda3cae59295b95343db8f3d965fb31059da7de01ee8d1c40a60398b29"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:5fc8e02f5984a55d2c653f5fea93531e9836abbd84342c1d1e17abc4a15084c2"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux2010_i686.whl", hash = "sha256:96e25c8603a155559231c19c0349245eeb4ac0096fe3c1d0be5c47e075bd4f46"}, + {file = "wrapt-1.15.0-cp27-cp27m-manylinux2010_x86_64.whl", hash = "sha256:40737a081d7497efea35ab9304b829b857f21558acfc7b3272f908d33b0d9d4c"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:f87ec75864c37c4c6cb908d282e1969e79763e0d9becdfe9fe5473b7bb1e5f09"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:1286eb30261894e4c70d124d44b7fd07825340869945c79d05bda53a40caa079"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux2010_i686.whl", hash = "sha256:493d389a2b63c88ad56cdc35d0fa5752daac56ca755805b1b0c530f785767d5e"}, + {file = "wrapt-1.15.0-cp27-cp27mu-manylinux2010_x86_64.whl", hash = "sha256:58d7a75d731e8c63614222bcb21dd992b4ab01a399f1f09dd82af17bbfc2368a"}, + {file = "wrapt-1.15.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:21f6d9a0d5b3a207cdf7acf8e58d7d13d463e639f0c7e01d82cdb671e6cb7923"}, + {file = "wrapt-1.15.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ce42618f67741d4697684e501ef02f29e758a123aa2d669e2d964ff734ee00ee"}, + {file = "wrapt-1.15.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:41d07d029dd4157ae27beab04d22b8e261eddfc6ecd64ff7000b10dc8b3a5727"}, + {file = "wrapt-1.15.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:54accd4b8bc202966bafafd16e69da9d5640ff92389d33d28555c5fd4f25ccb7"}, + {file = "wrapt-1.15.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2fbfbca668dd15b744418265a9607baa970c347eefd0db6a518aaf0cfbd153c0"}, + {file = "wrapt-1.15.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = 
"sha256:76e9c727a874b4856d11a32fb0b389afc61ce8aaf281ada613713ddeadd1cfec"}, + {file = "wrapt-1.15.0-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:e20076a211cd6f9b44a6be58f7eeafa7ab5720eb796975d0c03f05b47d89eb90"}, + {file = "wrapt-1.15.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:a74d56552ddbde46c246b5b89199cb3fd182f9c346c784e1a93e4dc3f5ec9975"}, + {file = "wrapt-1.15.0-cp310-cp310-win32.whl", hash = "sha256:26458da5653aa5b3d8dc8b24192f574a58984c749401f98fff994d41d3f08da1"}, + {file = "wrapt-1.15.0-cp310-cp310-win_amd64.whl", hash = "sha256:75760a47c06b5974aa5e01949bf7e66d2af4d08cb8c1d6516af5e39595397f5e"}, + {file = "wrapt-1.15.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ba1711cda2d30634a7e452fc79eabcadaffedf241ff206db2ee93dd2c89a60e7"}, + {file = "wrapt-1.15.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:56374914b132c702aa9aa9959c550004b8847148f95e1b824772d453ac204a72"}, + {file = "wrapt-1.15.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a89ce3fd220ff144bd9d54da333ec0de0399b52c9ac3d2ce34b569cf1a5748fb"}, + {file = "wrapt-1.15.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3bbe623731d03b186b3d6b0d6f51865bf598587c38d6f7b0be2e27414f7f214e"}, + {file = "wrapt-1.15.0-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3abbe948c3cbde2689370a262a8d04e32ec2dd4f27103669a45c6929bcdbfe7c"}, + {file = "wrapt-1.15.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:b67b819628e3b748fd3c2192c15fb951f549d0f47c0449af0764d7647302fda3"}, + {file = "wrapt-1.15.0-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:7eebcdbe3677e58dd4c0e03b4f2cfa346ed4049687d839adad68cc38bb559c92"}, + {file = "wrapt-1.15.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:74934ebd71950e3db69960a7da29204f89624dde411afbfb3b4858c1409b1e98"}, + {file = "wrapt-1.15.0-cp311-cp311-win32.whl", hash = "sha256:bd84395aab8e4d36263cd1b9308cd504f6cf713b7d6d3ce25ea55670baec5416"}, + {file = "wrapt-1.15.0-cp311-cp311-win_amd64.whl", hash = "sha256:a487f72a25904e2b4bbc0817ce7a8de94363bd7e79890510174da9d901c38705"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:4ff0d20f2e670800d3ed2b220d40984162089a6e2c9646fdb09b85e6f9a8fc29"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:9ed6aa0726b9b60911f4aed8ec5b8dd7bf3491476015819f56473ffaef8959bd"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux2010_i686.whl", hash = "sha256:896689fddba4f23ef7c718279e42f8834041a21342d95e56922e1c10c0cc7afb"}, + {file = "wrapt-1.15.0-cp35-cp35m-manylinux2010_x86_64.whl", hash = "sha256:75669d77bb2c071333417617a235324a1618dba66f82a750362eccbe5b61d248"}, + {file = "wrapt-1.15.0-cp35-cp35m-win32.whl", hash = "sha256:fbec11614dba0424ca72f4e8ba3c420dba07b4a7c206c8c8e4e73f2e98f4c559"}, + {file = "wrapt-1.15.0-cp35-cp35m-win_amd64.whl", hash = "sha256:fd69666217b62fa5d7c6aa88e507493a34dec4fa20c5bd925e4bc12fce586639"}, + {file = "wrapt-1.15.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b0724f05c396b0a4c36a3226c31648385deb6a65d8992644c12a4963c70326ba"}, + {file = "wrapt-1.15.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bbeccb1aa40ab88cd29e6c7d8585582c99548f55f9b2581dfc5ba68c59a85752"}, + {file = "wrapt-1.15.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:38adf7198f8f154502883242f9fe7333ab05a5b02de7d83aa2d88ea621f13364"}, + {file = 
"wrapt-1.15.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:578383d740457fa790fdf85e6d346fda1416a40549fe8db08e5e9bd281c6a475"}, + {file = "wrapt-1.15.0-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:a4cbb9ff5795cd66f0066bdf5947f170f5d63a9274f99bdbca02fd973adcf2a8"}, + {file = "wrapt-1.15.0-cp36-cp36m-musllinux_1_1_i686.whl", hash = "sha256:af5bd9ccb188f6a5fdda9f1f09d9f4c86cc8a539bd48a0bfdc97723970348418"}, + {file = "wrapt-1.15.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:b56d5519e470d3f2fe4aa7585f0632b060d532d0696c5bdfb5e8319e1d0f69a2"}, + {file = "wrapt-1.15.0-cp36-cp36m-win32.whl", hash = "sha256:77d4c1b881076c3ba173484dfa53d3582c1c8ff1f914c6461ab70c8428b796c1"}, + {file = "wrapt-1.15.0-cp36-cp36m-win_amd64.whl", hash = "sha256:077ff0d1f9d9e4ce6476c1a924a3332452c1406e59d90a2cf24aeb29eeac9420"}, + {file = "wrapt-1.15.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:5c5aa28df055697d7c37d2099a7bc09f559d5053c3349b1ad0c39000e611d317"}, + {file = "wrapt-1.15.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3a8564f283394634a7a7054b7983e47dbf39c07712d7b177b37e03f2467a024e"}, + {file = "wrapt-1.15.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:780c82a41dc493b62fc5884fb1d3a3b81106642c5c5c78d6a0d4cbe96d62ba7e"}, + {file = "wrapt-1.15.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e169e957c33576f47e21864cf3fc9ff47c223a4ebca8960079b8bd36cb014fd0"}, + {file = "wrapt-1.15.0-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:b02f21c1e2074943312d03d243ac4388319f2456576b2c6023041c4d57cd7019"}, + {file = "wrapt-1.15.0-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:f2e69b3ed24544b0d3dbe2c5c0ba5153ce50dcebb576fdc4696d52aa22db6034"}, + {file = "wrapt-1.15.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d787272ed958a05b2c86311d3a4135d3c2aeea4fc655705f074130aa57d71653"}, + {file = "wrapt-1.15.0-cp37-cp37m-win32.whl", hash = "sha256:02fce1852f755f44f95af51f69d22e45080102e9d00258053b79367d07af39c0"}, + {file = "wrapt-1.15.0-cp37-cp37m-win_amd64.whl", hash = "sha256:abd52a09d03adf9c763d706df707c343293d5d106aea53483e0ec8d9e310ad5e"}, + {file = "wrapt-1.15.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:cdb4f085756c96a3af04e6eca7f08b1345e94b53af8921b25c72f096e704e145"}, + {file = "wrapt-1.15.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:230ae493696a371f1dbffaad3dafbb742a4d27a0afd2b1aecebe52b740167e7f"}, + {file = "wrapt-1.15.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:63424c681923b9f3bfbc5e3205aafe790904053d42ddcc08542181a30a7a51bd"}, + {file = "wrapt-1.15.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d6bcbfc99f55655c3d93feb7ef3800bd5bbe963a755687cbf1f490a71fb7794b"}, + {file = "wrapt-1.15.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c99f4309f5145b93eca6e35ac1a988f0dc0a7ccf9ccdcd78d3c0adf57224e62f"}, + {file = "wrapt-1.15.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:b130fe77361d6771ecf5a219d8e0817d61b236b7d8b37cc045172e574ed219e6"}, + {file = "wrapt-1.15.0-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:96177eb5645b1c6985f5c11d03fc2dbda9ad24ec0f3a46dcce91445747e15094"}, + {file = "wrapt-1.15.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = 
"sha256:d5fe3e099cf07d0fb5a1e23d399e5d4d1ca3e6dfcbe5c8570ccff3e9208274f7"}, + {file = "wrapt-1.15.0-cp38-cp38-win32.whl", hash = "sha256:abd8f36c99512755b8456047b7be10372fca271bf1467a1caa88db991e7c421b"}, + {file = "wrapt-1.15.0-cp38-cp38-win_amd64.whl", hash = "sha256:b06fa97478a5f478fb05e1980980a7cdf2712015493b44d0c87606c1513ed5b1"}, + {file = "wrapt-1.15.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:2e51de54d4fb8fb50d6ee8327f9828306a959ae394d3e01a1ba8b2f937747d86"}, + {file = "wrapt-1.15.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:0970ddb69bba00670e58955f8019bec4a42d1785db3faa043c33d81de2bf843c"}, + {file = "wrapt-1.15.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:76407ab327158c510f44ded207e2f76b657303e17cb7a572ffe2f5a8a48aa04d"}, + {file = "wrapt-1.15.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cd525e0e52a5ff16653a3fc9e3dd827981917d34996600bbc34c05d048ca35cc"}, + {file = "wrapt-1.15.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9d37ac69edc5614b90516807de32d08cb8e7b12260a285ee330955604ed9dd29"}, + {file = "wrapt-1.15.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:078e2a1a86544e644a68422f881c48b84fef6d18f8c7a957ffd3f2e0a74a0d4a"}, + {file = "wrapt-1.15.0-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:2cf56d0e237280baed46f0b5316661da892565ff58309d4d2ed7dba763d984b8"}, + {file = "wrapt-1.15.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:7dc0713bf81287a00516ef43137273b23ee414fe41a3c14be10dd95ed98a2df9"}, + {file = "wrapt-1.15.0-cp39-cp39-win32.whl", hash = "sha256:46ed616d5fb42f98630ed70c3529541408166c22cdfd4540b88d5f21006b0eff"}, + {file = "wrapt-1.15.0-cp39-cp39-win_amd64.whl", hash = "sha256:eef4d64c650f33347c1f9266fa5ae001440b232ad9b98f1f43dfe7a79435c0a6"}, + {file = "wrapt-1.15.0-py3-none-any.whl", hash = "sha256:64b1df0f83706b4ef4cfb4fb0e4c2669100fd7ecacfb59e091fad300d4e04640"}, + {file = "wrapt-1.15.0.tar.gz", hash = "sha256:d06730c6aed78cee4126234cf2d071e01b44b915e725a6cb439a879ec9754a3a"}, +] + +[[package]] name = "yarl" version = "1.9.2" description = "Yet another URL library" -category = "main" optional = false python-versions = ">=3.7" files = [ @@ -2027,7 +2298,6 @@ multidict = ">=4.0" name = "zipp" version = "3.16.2" description = "Backport of pathlib-compatible object wrapper for zip files" -category = "main" optional = false python-versions = ">=3.8" files = [ @@ -2042,4 +2312,4 @@ testing = ["big-O", "jaraco.functools", "jaraco.itertools", "more-itertools", "p [metadata] lock-version = "2.0" python-versions = "^3.8.1" -content-hash = "3fcd19c11b9c338a181e591b56e21d59c7834abff431fb9f40cc1ea874b64557" +content-hash = "126119ec6b94f1da4af9f8f2de4352df20522652cd58726546bd2fb46b40a9ef" diff --git a/continuedev/pyproject.toml b/continuedev/pyproject.toml index 0abc9504..d7505e2b 100644 --- a/continuedev/pyproject.toml +++ b/continuedev/pyproject.toml @@ -27,6 +27,8 @@ directory-tree = "^0.0.3.1" anthropic = "^0.3.4" chevron = "^0.14.0" psutil = "^5.9.5" +pygithub = "^1.59.0" +meilisearch-python-async = "^1.4.8" [tool.poetry.scripts] typegen = "src.continuedev.models.generate_json_schema:main" diff --git a/continuedev/src/continuedev/core/autopilot.py b/continuedev/src/continuedev/core/autopilot.py index 9dbced32..42a58423 100644 --- a/continuedev/src/continuedev/core/autopilot.py +++ b/continuedev/src/continuedev/core/autopilot.py @@ -2,24 +2,25 @@ from functools import cached_property 
import traceback import time from typing import Any, Callable, Coroutine, Dict, List, Union -import os from aiohttp import ClientPayloadError from pydantic import root_validator from ..models.filesystem import RangeInFileWithContents from ..models.filesystem_edit import FileEditWithFullContents from .observation import Observation, InternalErrorObservation +from .context import ContextManager +from ..plugins.context_providers.file import FileContextProvider +from ..plugins.context_providers.highlighted_code import HighlightedCodeContextProvider from ..server.ide_protocol import AbstractIdeProtocolServer from ..libs.util.queue import AsyncSubscriptionQueue from ..models.main import ContinueBaseModel -from .main import Context, ContinueCustomException, HighlightedRangeContext, Policy, History, FullState, Step, HistoryNode +from .main import Context, ContinueCustomException, Policy, History, FullState, Step, HistoryNode from ..plugins.steps.core.core import ReversibleStep, ManualEditStep, UserInputStep -from ..libs.util.telemetry import capture_event from .sdk import ContinueSDK -from ..libs.util.step_name_to_steps import get_step_from_name from ..libs.util.traceback_parsers import get_python_traceback, get_javascript_traceback from openai import error as openai_errors from ..libs.util.create_async_task import create_async_task +from ..libs.util.telemetry import posthog_logger def get_error_title(e: Exception) -> str: @@ -50,10 +51,11 @@ class Autopilot(ContinueBaseModel): history: History = History.from_empty() context: Context = Context() full_state: Union[FullState, None] = None - _on_update_callbacks: List[Callable[[FullState], None]] = [] - + context_manager: Union[ContextManager, None] = None continue_sdk: ContinueSDK = None + _on_update_callbacks: List[Callable[[FullState], None]] = [] + _active: bool = False _should_halt: bool = False _main_user_input_queue: List[str] = [] @@ -65,6 +67,15 @@ class Autopilot(ContinueBaseModel): async def create(cls, policy: Policy, ide: AbstractIdeProtocolServer, full_state: FullState) -> "Autopilot": autopilot = cls(ide=ide, policy=policy) autopilot.continue_sdk = await ContinueSDK.create(autopilot) + + # Load documents into the search index + autopilot.context_manager = await ContextManager.create( + autopilot.continue_sdk.config.context_providers + [ + HighlightedCodeContextProvider(ide=ide), + FileContextProvider(workspace_dir=ide.workspace_directory) + ]) + await autopilot.context_manager.load_index() + return autopilot class Config: @@ -78,15 +89,16 @@ class Autopilot(ContinueBaseModel): values['history'] = full_state.history return values - def get_full_state(self) -> FullState: + async def get_full_state(self) -> FullState: full_state = FullState( history=self.history, active=self._active, user_input_queue=self._main_user_input_queue, default_model=self.continue_sdk.config.default_model, - highlighted_ranges=self._highlighted_ranges, slash_commands=self.get_available_slash_commands(), - adding_highlighted_code=self._adding_highlighted_code, + adding_highlighted_code=self.context_manager.context_providers[ + "code"].adding_highlighted_code, + selected_context_items=await self.context_manager.get_selected_items() ) self.full_state = full_state return full_state @@ -98,17 +110,14 @@ class Autopilot(ContinueBaseModel): "name": x.name, "description": x.description}, self.continue_sdk.config.slash_commands)) or [] return custom_commands + slash_commands - async def change_default_model(self, model: str): - 
self.continue_sdk.update_default_model(model) - async def clear_history(self): # Reset history self.history = History.from_empty() self._main_user_input_queue = [] self._active = False - # Also remove all context - self._highlighted_ranges = [] + # Clear context + await self.context_manager.clear_context() await self.update_subscribers() @@ -117,7 +126,7 @@ class Autopilot(ContinueBaseModel): self._on_update_callbacks.append(callback) async def update_subscribers(self): - full_state = self.get_full_state() + full_state = await self.get_full_state() for callback in self._on_update_callbacks: await callback(full_state) @@ -159,85 +168,13 @@ class Autopilot(ContinueBaseModel): traceback = get_tb_func(output) if traceback is not None: for tb_step in self.continue_sdk.config.on_traceback: - step = get_step_from_name( - tb_step.step_name, {"output": output, **tb_step.params}) + step = tb_step.step({"output": output, **tb_step.params}) await self._run_singular_step(step) - _highlighted_ranges: List[HighlightedRangeContext] = [] - _adding_highlighted_code: bool = False - - def _make_sure_is_editing_range(self): - """If none of the highlighted ranges are currently being edited, the first should be selected""" - if len(self._highlighted_ranges) == 0: - return - if not any(map(lambda x: x.editing, self._highlighted_ranges)): - self._highlighted_ranges[0].editing = True - - def _disambiguate_highlighted_ranges(self): - """If any files have the same name, also display their folder name""" - name_status: Dict[str, set] = { - } # basename -> set of full paths with that basename - for rif in self._highlighted_ranges: - basename = os.path.basename(rif.range.filepath) - if basename in name_status: - name_status[basename].add(rif.range.filepath) - else: - name_status[basename] = {rif.range.filepath} - - for rif in self._highlighted_ranges: - basename = os.path.basename(rif.range.filepath) - if len(name_status[basename]) > 1: - rif.display_name = os.path.join( - os.path.basename(os.path.dirname(rif.range.filepath)), basename) - else: - rif.display_name = basename - async def handle_highlighted_code(self, range_in_files: List[RangeInFileWithContents]): - # Filter out rifs from ~/.continue/diffs folder - range_in_files = [ - rif for rif in range_in_files if not os.path.dirname(rif.filepath) == os.path.expanduser("~/.continue/diffs")] - - # Make sure all filepaths are relative to workspace - workspace_path = self.continue_sdk.ide.workspace_directory - - # If not adding highlighted code - if not self._adding_highlighted_code: - if len(self._highlighted_ranges) == 1 and len(range_in_files) <= 1 and (len(range_in_files) == 0 or range_in_files[0].range.start == range_in_files[0].range.end): - # If un-highlighting the range to edit, then remove the range - self._highlighted_ranges = [] - await self.update_subscribers() - elif len(range_in_files) > 0: - # Otherwise, replace the current range with the new one - # This is the first range to be highlighted - self._highlighted_ranges = [HighlightedRangeContext( - range=range_in_files[0], editing=True, pinned=False, display_name=os.path.basename(range_in_files[0].filepath))] - await self.update_subscribers() - return - - # If current range overlaps with any others, delete them and only keep the new range - new_ranges = [] - for i, rif in enumerate(self._highlighted_ranges): - found_overlap = False - for new_rif in range_in_files: - if rif.range.filepath == new_rif.filepath and rif.range.range.overlaps_with(new_rif.range): - found_overlap = True - break - - # Also don't 
allow multiple ranges in same file with same content. This is useless to the model, and avoids - # the bug where cmd+f causes repeated highlights - if rif.range.filepath == new_rif.filepath and rif.range.contents == new_rif.contents: - found_overlap = True - break - - if not found_overlap: - new_ranges.append(rif) - - self._highlighted_ranges = new_ranges + [HighlightedRangeContext( - range=rif, editing=False, pinned=False, display_name=os.path.basename(rif.filepath) - ) for rif in range_in_files] - - self._make_sure_is_editing_range() - self._disambiguate_highlighted_ranges() + # Add to context manager + await self.context_manager.context_providers["code"].handle_highlighted_code( + range_in_files) await self.update_subscribers() @@ -254,29 +191,16 @@ class Autopilot(ContinueBaseModel): await self.update_subscribers() - async def delete_context_at_indices(self, indices: List[int]): - kept_ranges = [] - for i, rif in enumerate(self._highlighted_ranges): - if i not in indices: - kept_ranges.append(rif) - self._highlighted_ranges = kept_ranges - - self._make_sure_is_editing_range() - + async def delete_context_with_ids(self, ids: List[str]): + await self.context_manager.delete_context_with_ids(ids) await self.update_subscribers() async def toggle_adding_highlighted_code(self): - self._adding_highlighted_code = not self._adding_highlighted_code - await self.update_subscribers() - - async def set_editing_at_indices(self, indices: List[int]): - for i in range(len(self._highlighted_ranges)): - self._highlighted_ranges[i].editing = i in indices + self.context_manager.context_providers["code"].adding_highlighted_code = not self.context_manager.context_providers["code"].adding_highlighted_code await self.update_subscribers() - async def set_pinned_at_indices(self, indices: List[int]): - for i in range(len(self._highlighted_ranges)): - self._highlighted_ranges[i].pinned = i in indices + async def set_editing_at_ids(self, ids: List[str]): + self.context_manager.context_providers["code"].set_editing_at_ids(ids) await self.update_subscribers() async def _run_singular_step(self, step: "Step", is_future_step: bool = False) -> Coroutine[Observation, None, None]: @@ -294,8 +218,8 @@ class Autopilot(ContinueBaseModel): # last_depth = self.history.timeline[i].depth # i -= 1 - capture_event(self.continue_sdk.ide.unique_id, 'step run', { - 'step_name': step.name, 'params': step.dict()}) + posthog_logger.capture_event( + 'step run', {'step_name': step.name, 'params': step.dict()}) if not is_future_step: # Check manual edits buffer, clear out if needed by creating a ManualEditStep @@ -336,8 +260,8 @@ class Autopilot(ContinueBaseModel): # Attach an InternalErrorObservation to the step and unhide it. 
print( f"Error while running step: \n{error_string}\n{error_title}") - capture_event(self.continue_sdk.ide.unique_id, 'step error', { - 'error_message': error_string, 'error_title': error_title, 'step_name': step.name, 'params': step.dict()}) + posthog_logger.capture_event('step error', { + 'error_message': error_string, 'error_title': error_title, 'step_name': step.name, 'params': step.dict()}) observation = InternalErrorObservation( error=error_string, title=error_title) @@ -441,10 +365,6 @@ class Autopilot(ContinueBaseModel): if len(self._main_user_input_queue) > 1: return - # Remove context unless pinned - # self._highlighted_ranges = [ - # hr for hr in self._highlighted_ranges if hr.pinned] - # await self._request_halt() # Just run the step that takes user input, and # then up to the policy to decide how to deal with it. @@ -460,3 +380,7 @@ class Autopilot(ContinueBaseModel): await self._request_halt() await self.reverse_to_index(index) await self.run_from_step(UserInputStep(user_input=user_input)) + + async def select_context_item(self, id: str, query: str): + await self.context_manager.select_context_item(id, query) + await self.update_subscribers() diff --git a/continuedev/src/continuedev/core/config.py b/continuedev/src/continuedev/core/config.py index 70c4876e..cb9c8977 100644 --- a/continuedev/src/continuedev/core/config.py +++ b/continuedev/src/continuedev/core/config.py @@ -1,14 +1,16 @@ import json import os +from .main import Step +from .context import ContextProvider from pydantic import BaseModel, validator -from typing import List, Literal, Optional, Dict +from typing import List, Literal, Optional, Dict, Type, Union import yaml class SlashCommand(BaseModel): name: str description: str - step_name: str + step: Type[Step] params: Optional[Dict] = {} @@ -19,54 +21,10 @@ class CustomCommand(BaseModel): class OnTracebackSteps(BaseModel): - step_name: str + step: Type[Step] params: Optional[Dict] = {} -DEFAULT_SLASH_COMMANDS = [ - # SlashCommand( - # name="pytest", - # description="Write pytest unit tests for the current file", - # step_name="WritePytestsRecipe", - # params=??) - SlashCommand( - name="edit", - description="Edit code in the current file or the highlighted code", - step_name="EditHighlightedCodeStep", - ), - # SlashCommand( - # name="explain", - # description="Reply to instructions or a question with previous steps and the highlighted code or current file as context", - # step_name="SimpleChatStep", - # ), - SlashCommand( - name="config", - description="Open the config file to create new and edit existing slash commands", - step_name="OpenConfigStep", - ), - SlashCommand( - name="help", - description="Ask a question like '/help what is given to the llm as context?'", - step_name="HelpStep", - ), - SlashCommand( - name="comment", - description="Write comments for the current file or highlighted code", - step_name="CommentCodeStep", - ), - SlashCommand( - name="feedback", - description="Send feedback to improve Continue", - step_name="FeedbackStep", - ), - SlashCommand( - name="clear", - description="Clear step history", - step_name="ClearHistoryStep", - ) -] - - class AzureInfo(BaseModel): endpoint: str engine: str @@ -77,7 +35,7 @@ class ContinueConfig(BaseModel): """ A pydantic class for the continue config file. 
""" - steps_on_startup: Optional[Dict[str, Dict]] = {} + steps_on_startup: List[Step] = [] disallowed_steps: Optional[List[str]] = [] allow_anonymous_telemetry: Optional[bool] = True default_model: Literal["gpt-3.5-turbo", "gpt-3.5-turbo-16k", @@ -88,88 +46,52 @@ class ContinueConfig(BaseModel): description="This is an example custom command. Use /config to edit it and create more", prompt="Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.", )] - slash_commands: Optional[List[SlashCommand]] = DEFAULT_SLASH_COMMANDS - on_traceback: Optional[List[OnTracebackSteps]] = [ - OnTracebackSteps(step_name="DefaultOnTracebackStep")] + slash_commands: Optional[List[SlashCommand]] = [] + on_traceback: Optional[List[OnTracebackSteps]] = [] system_message: Optional[str] = None azure_openai_info: Optional[AzureInfo] = None + context_providers: List[ContextProvider] = [] + # Want to force these to be the slash commands for now @validator('slash_commands', pre=True) def default_slash_commands_validator(cls, v): - return DEFAULT_SLASH_COMMANDS + from ..plugins.steps.open_config import OpenConfigStep + from ..plugins.steps.clear_history import ClearHistoryStep + from ..plugins.steps.feedback import FeedbackStep + from ..plugins.steps.comment_code import CommentCodeStep + from ..plugins.steps.main import EditHighlightedCodeStep + + DEFAULT_SLASH_COMMANDS = [ + SlashCommand( + name="edit", + description="Edit code in the current file or the highlighted code", + step=EditHighlightedCodeStep, + ), + SlashCommand( + name="config", + description="Open the config file to create new and edit existing slash commands", + step=OpenConfigStep, + ), + SlashCommand( + name="comment", + description="Write comments for the current file or highlighted code", + step=CommentCodeStep, + ), + SlashCommand( + name="feedback", + description="Send feedback to improve Continue", + step=FeedbackStep, + ), + SlashCommand( + name="clear", + description="Clear step history", + step=ClearHistoryStep, + ) + ] + + return DEFAULT_SLASH_COMMANDS + v @validator('temperature', pre=True) def temperature_validator(cls, v): return max(0.0, min(1.0, v)) - - -def load_config(config_file: str) -> ContinueConfig: - """ - Load the config file and return a ContinueConfig object. - """ - if not os.path.exists(config_file): - return ContinueConfig() - - _, ext = os.path.splitext(config_file) - if ext == '.yaml': - with open(config_file, 'r') as f: - try: - config_dict = yaml.safe_load(f) - except: - return ContinueConfig() - elif ext == '.json': - with open(config_file, 'r') as f: - try: - config_dict = json.load(f) - except: - return ContinueConfig() - else: - raise ValueError(f'Unknown config file extension: {ext}') - return ContinueConfig(**config_dict) - - -def load_global_config() -> ContinueConfig: - """ - Load the global config file and return a ContinueConfig object. 
- """ - global_dir = os.path.expanduser('~/.continue') - if not os.path.exists(global_dir): - os.mkdir(global_dir) - - yaml_path = os.path.join(global_dir, 'config.yaml') - if os.path.exists(yaml_path): - with open(config_path, 'r') as f: - try: - config_dict = yaml.safe_load(f) - except: - return ContinueConfig() - else: - config_path = os.path.join(global_dir, 'config.json') - if not os.path.exists(config_path): - with open(config_path, 'w') as f: - json.dump(ContinueConfig().dict(), f, indent=4) - with open(config_path, 'r') as f: - try: - config_dict = json.load(f) - except: - return ContinueConfig() - return ContinueConfig(**config_dict) - - -def update_global_config(config: ContinueConfig): - """ - Update the config file with the given ContinueConfig object. - """ - global_dir = os.path.expanduser('~/.continue') - if not os.path.exists(global_dir): - os.mkdir(global_dir) - - yaml_path = os.path.join(global_dir, 'config.yaml') - if os.path.exists(yaml_path): - with open(config_path, 'w') as f: - yaml.dump(config.dict(), f, indent=4) - else: - config_path = os.path.join(global_dir, 'config.json') - with open(config_path, 'w') as f: - json.dump(config.dict(exclude_unset=False), f, indent=4) diff --git a/continuedev/src/continuedev/core/context.py b/continuedev/src/continuedev/core/context.py new file mode 100644 index 00000000..7d302656 --- /dev/null +++ b/continuedev/src/continuedev/core/context.py @@ -0,0 +1,209 @@ + +from abc import abstractmethod +from typing import Dict, List +from meilisearch_python_async import Client +from pydantic import BaseModel + + +from .main import ChatMessage, ContextItem, ContextItemDescription, ContextItemId +from ..server.meilisearch_server import check_meilisearch_running + + +SEARCH_INDEX_NAME = "continue_context_items" + + +class ContextProvider(BaseModel): + """ + The ContextProvider class is a plugin that lets you provide new information to the LLM by typing '@'. + When you type '@', the context provider will be asked to populate a list of options. + These options will be updated on each keystroke. + When you hit enter on an option, the context provider will add that item to the autopilot's list of context (which is all stored in the ContextManager object). + """ + + title: str + + selected_items: List[ContextItem] = [] + + async def get_selected_items(self) -> List[ContextItem]: + """ + Returns all of the selected ContextItems. + + Default implementation simply returns self.selected_items. + + Other implementations may add an async processing step. + """ + return self.selected_items + + @abstractmethod + async def provide_context_items(self) -> List[ContextItem]: + """ + Provide documents for search index. This is run on startup. + + This is the only method that must be implemented. + """ + + async def get_chat_messages(self) -> List[ChatMessage]: + """ + Returns all of the chat messages for the context provider. + + Default implementation has a string template. + """ + return [ChatMessage(role="user", content=f"{item.description.name}: {item.description.description}\n\n{item.content}", summary=item.description.description) for item in await self.get_selected_items()] + + async def get_item(self, id: ContextItemId, query: str, search_client: Client) -> ContextItem: + """ + Returns the ContextItem with the given id. + + Default implementation uses the search index to get the item. 
+ """ + result = await search_client.index( + SEARCH_INDEX_NAME).get_document(id.to_string()) + return ContextItem( + description=ContextItemDescription( + name=result["name"], + description=result["description"], + id=id + ), + content=result["content"] + ) + + async def delete_context_with_ids(self, ids: List[ContextItemId]): + """ + Deletes the ContextItems with the given IDs, lets ContextProviders recalculate. + + Default implementation simply deletes those with the given ids. + """ + id_strings = {id.to_string() for id in ids} + self.selected_items = list( + filter(lambda item: item.description.id.to_string() not in id_strings, self.selected_items)) + + async def clear_context(self): + """ + Clears all context. + + Default implementation simply clears the selected items. + """ + self.selected_items = [] + + async def add_context_item(self, id: ContextItemId, query: str, search_client: Client): + """ + Adds the given ContextItem to the list of ContextItems. + + Default implementation simply appends the item, not allowing duplicates. + + This method also allows you not to have to load all of the information until an item is selected. + """ + + # Don't add duplicate context + for item in self.selected_items: + if item.description.id.item_id == id.item_id: + return + + new_item = await self.get_item(id, query, search_client) + self.selected_items.append(new_item) + + +class ContextManager: + """ + The context manager is responsible for storing the context to be passed to the LLM, including + - ContextItems (highlighted code, GitHub Issues, etc.) + - ChatMessages in the history + - System Message + - Functions + + It is responsible for compiling all of this information into a single prompt without exceeding the token limit. + """ + + async def get_selected_items(self) -> List[ContextItem]: + """ + Returns all of the selected ContextItems. + """ + return sum([await provider.get_selected_items() for provider in self.context_providers.values()], []) + + async def get_chat_messages(self) -> List[ChatMessage]: + """ + Returns chat messages from each provider. + """ + return sum([await provider.get_chat_messages() for provider in self.context_providers.values()], []) + + search_client: Client + + def __init__(self, context_providers: List[ContextProvider], search_client: Client): + self.search_client = search_client + self.context_providers = { + prov.title: prov for prov in context_providers} + self.provider_titles = { + provider.title for provider in context_providers} + + @classmethod + async def create(cls, context_providers: List[ContextProvider]): + search_client = Client('http://localhost:7700') + health = await search_client.health() + if not health.status == "available": + print("MeiliSearch not running, avoiding any dependent context providers") + context_providers = list( + filter(lambda cp: cp.title == "code", context_providers)) + + return cls(context_providers, search_client) + + async def load_index(self): + for _, provider in self.context_providers.items(): + context_items = await provider.provide_context_items() + documents = [ + { + "id": item.description.id.to_string(), + "name": item.description.name, + "description": item.description.description, + "content": item.content + } + for item in context_items + ] + if len(documents) > 0: + await self.search_client.index(SEARCH_INDEX_NAME).add_documents(documents) + + # def compile_chat_messages(self, max_tokens: int) -> List[Dict]: + # """ + # Compiles the chat prompt into a single string. 
+ # """ + # return compile_chat_messages(self.model, self.chat_history, max_tokens, self.prompt, self.functions, self.system_message) + + async def select_context_item(self, id: str, query: str): + """ + Selects the ContextItem with the given id. + """ + id: ContextItemId = ContextItemId.from_string(id) + if id.provider_title not in self.provider_titles: + raise ValueError( + f"Context provider with title {id.provider_title} not found") + + await self.context_providers[id.provider_title].add_context_item(id, query, self.search_client) + + async def delete_context_with_ids(self, ids: List[str]): + """ + Deletes the ContextItems with the given IDs, lets ContextProviders recalculate. + """ + + # Group by provider title + provider_title_to_ids: Dict[str, List[ContextItemId]] = {} + for id in ids: + id: ContextItemId = ContextItemId.from_string(id) + if id.provider_title not in provider_title_to_ids: + provider_title_to_ids[id.provider_title] = [] + provider_title_to_ids[id.provider_title].append(id) + + # Recalculate context for each updated provider + for provider_title, ids in provider_title_to_ids.items(): + await self.context_providers[provider_title].delete_context_with_ids(ids) + + async def clear_context(self): + """ + Clears all context. + """ + for provider in self.context_providers.values(): + await self.context_providers[provider.title].clear_context() + + +""" +Should define "ArgsTransformer" and "PromptTransformer" classes for the different LLMs. A standard way for them to ingest the +same format of prompts so you don't have to redo all of this logic. +""" diff --git a/continuedev/src/continuedev/core/main.py b/continuedev/src/continuedev/core/main.py index 50d01f8d..df9b98ef 100644 --- a/continuedev/src/continuedev/core/main.py +++ b/continuedev/src/continuedev/core/main.py @@ -1,12 +1,11 @@ import json -from textwrap import dedent -from typing import Callable, Coroutine, Dict, Generator, List, Literal, Tuple, Union +from typing import Coroutine, Dict, List, Literal, Union +from pydantic.schema import schema + -from ..models.filesystem import RangeInFileWithContents from ..models.main import ContinueBaseModel -from pydantic import validator +from pydantic import BaseModel, validator from .observation import Observation -from pydantic.schema import schema ChatMessageRole = Literal["assistant", "user", "system", "function"] @@ -201,12 +200,57 @@ class SlashCommandDescription(ContinueBaseModel): description: str -class HighlightedRangeContext(ContinueBaseModel): - """Context for a highlighted range""" - range: RangeInFileWithContents - editing: bool - pinned: bool - display_name: str +class ContextItemId(BaseModel): + """ + A ContextItemId is a unique identifier for a ContextItem. + """ + provider_title: str + item_id: str + + @validator('provider_title', 'item_id') + def must_be_valid_id(cls, v): + import re + if not re.match(r'^[0-9a-zA-Z_-]*$', v): + raise ValueError( + "Both provider_title and item_id can only include characters 0-9, a-z, A-Z, -, and _") + return v + + def to_string(self) -> str: + return f"{self.provider_title}-{self.item_id}" + + @staticmethod + def from_string(string: str) -> 'ContextItemId': + provider_title, *rest = string.split('-') + item_id = '-'.join(rest) + return ContextItemId(provider_title=provider_title, item_id=item_id) + + +class ContextItemDescription(BaseModel): + """ + A ContextItemDescription is a description of a ContextItem that is displayed to the user when they type '@'. 
+ + The id can be used to retrieve the ContextItem from the ContextManager. + """ + name: str + description: str + id: ContextItemId + + +class ContextItem(BaseModel): + """ + A ContextItem is a single item that is stored in the ContextManager. + """ + description: ContextItemDescription + content: str + + @validator('content', pre=True) + def content_must_be_string(cls, v): + if v is None: + return '' + return v + + editing: bool = False + editable: bool = False class FullState(ContinueBaseModel): @@ -215,9 +259,9 @@ class FullState(ContinueBaseModel): active: bool user_input_queue: List[str] default_model: str - highlighted_ranges: List[HighlightedRangeContext] slash_commands: List[SlashCommandDescription] adding_highlighted_code: bool + selected_context_items: List[ContextItem] class ContinueSDK: diff --git a/continuedev/src/continuedev/core/policy.py b/continuedev/src/continuedev/core/policy.py index dfa0e7f9..d90177b5 100644 --- a/continuedev/src/continuedev/core/policy.py +++ b/continuedev/src/continuedev/core/policy.py @@ -8,8 +8,8 @@ from ..plugins.steps.steps_on_startup import StepsOnStartupStep from .main import Step, History, Policy from .observation import UserInputObservation from ..plugins.steps.core.core import MessageStep -from ..libs.util.step_name_to_steps import get_step_from_name from ..plugins.steps.custom_command import CustomCommandStep +from ..plugins.steps.main import EditHighlightedCodeStep def parse_slash_command(inp: str, config: ContinueConfig) -> Union[None, Step]: @@ -24,7 +24,11 @@ def parse_slash_command(inp: str, config: ContinueConfig) -> Union[None, Step]: if slash_command.name == command_name[1:]: params = slash_command.params params["user_input"] = after_command - return get_step_from_name(slash_command.step_name, params) + try: + return slash_command.step(**params) + except TypeError as e: + raise Exception( + f"Incorrect params used for slash command '{command_name}': {e}") return None @@ -52,7 +56,6 @@ class DefaultPolicy(Policy): - Use `cmd+m` (Mac) / `ctrl+m` (Windows) to open Continue - Use `/help` to ask questions about how to use Continue""")) >> WelcomeStep() >> - # SetupContinueWorkspaceStep() >> # CreateCodebaseIndexChroma() >> StepsOnStartupStep()) @@ -69,6 +72,9 @@ class DefaultPolicy(Policy): if custom_command is not None: return custom_command + if user_input.startswith("/edit"): + return EditHighlightedCodeStep(user_input=user_input[5:]) + return SimpleChatStep() return None diff --git a/continuedev/src/continuedev/core/sdk.py b/continuedev/src/continuedev/core/sdk.py index 9d1025e3..e9aefa76 100644 --- a/continuedev/src/continuedev/core/sdk.py +++ b/continuedev/src/continuedev/core/sdk.py @@ -1,4 +1,3 @@ -import asyncio from functools import cached_property from typing import Coroutine, Dict, Union import os @@ -6,7 +5,7 @@ import os from ..plugins.steps.core.core import DefaultModelEditCodeStep from ..models.main import Range from .abstract_sdk import AbstractContinueSDK -from .config import ContinueConfig, load_config, load_global_config, update_global_config +from .config import ContinueConfig from ..models.filesystem_edit import FileEdit, FileSystemEdit, AddFile, DeleteFile, AddDirectory, DeleteDirectory from ..models.filesystem import RangeInFile from ..libs.llm.hf_inference_api import HuggingFaceInferenceAPI @@ -18,6 +17,8 @@ from ..server.ide_protocol import AbstractIdeProtocolServer from .main import Context, ContinueCustomException, History, HistoryNode, Step, ChatMessage from ..plugins.steps.core.core import * from 
..libs.llm.proxy_server import ProxyServer +from ..libs.util.telemetry import posthog_logger +from ..libs.util.paths import getConfigFilePath class Autopilot: @@ -144,20 +145,20 @@ class ContinueSDK(AbstractContinueSDK): ide: AbstractIdeProtocolServer models: Models context: Context + config: ContinueConfig __autopilot: Autopilot def __init__(self, autopilot: Autopilot): self.ide = autopilot.ide self.__autopilot = autopilot self.context = autopilot.context - self.config = self._load_config() @classmethod async def create(cls, autopilot: Autopilot) -> "ContinueSDK": sdk = ContinueSDK(autopilot) try: - config = sdk._load_config() + config = sdk._load_config_dot_py() sdk.config = config except Exception as e: print(e) @@ -175,19 +176,6 @@ class ContinueSDK(AbstractContinueSDK): sdk.models = await Models.create(sdk) return sdk - config: ContinueConfig - - def _load_config(self) -> ContinueConfig: - dir = self.ide.workspace_directory - yaml_path = os.path.join(dir, '.continue', 'config.yaml') - json_path = os.path.join(dir, '.continue', 'config.json') - if os.path.exists(yaml_path): - return load_config(yaml_path) - elif os.path.exists(json_path): - return load_config(json_path) - else: - return load_global_config() - @property def history(self) -> History: return self.__autopilot.history @@ -267,16 +255,32 @@ class ContinueSDK(AbstractContinueSDK): async def get_user_secret(self, env_var: str, prompt: str) -> str: return await self.ide.getUserSecret(env_var) + _last_valid_config: ContinueConfig = None + + def _load_config_dot_py(self) -> ContinueConfig: + # Use importlib to load the config file config.py at the given path + path = getConfigFilePath() + try: + import importlib.util + spec = importlib.util.spec_from_file_location("config", path) + config = importlib.util.module_from_spec(spec) + spec.loader.exec_module(config) + self._last_valid_config = config.config + + # When the config is loaded, setup posthog logger + posthog_logger.setup( + self.ide.unique_id, config.config.allow_anonymous_telemetry or True) + + return config.config + except Exception as e: + print("Error loading config.py: ", e) + return ContinueConfig() if self._last_valid_config is None else self._last_valid_config + def get_code_context(self, only_editing: bool = False) -> List[RangeInFileWithContents]: context = list(filter(lambda x: x.editing, self.__autopilot._highlighted_ranges) ) if only_editing else self.__autopilot._highlighted_ranges return [c.range for c in context] - def update_default_model(self, model: str): - config = self.config - config.default_model = model - update_global_config(config) - def set_loading_message(self, message: str): # self.__autopilot.set_loading_message(message) raise NotImplementedError() @@ -286,28 +290,13 @@ class ContinueSDK(AbstractContinueSDK): async def get_chat_context(self) -> List[ChatMessage]: history_context = self.history.to_chat_history() - highlighted_code = [ - hr.range for hr in self.__autopilot._highlighted_ranges] - - preface = "The following code is highlighted" - - # If no higlighted ranges, use first file as context - if len(highlighted_code) == 0: - preface = "The following file is open" - visible_files = await self.ide.getVisibleFiles() - if len(visible_files) > 0: - content = await self.ide.readFile(visible_files[0]) - highlighted_code = [ - RangeInFileWithContents.from_entire_file(visible_files[0], content)] - - for rif in highlighted_code: - msg = ChatMessage(content=f"{preface} ({rif.filepath}):\n```\n{rif.contents}\n```", - role="user", 
summary=f"{preface}: {rif.filepath}") - - # Don't insert after latest user message or function call - i = -1 - if len(history_context) > 0 and (history_context[i].role == "user" or history_context[i].role == "function"): - i -= 1 + + context_messages: List[ChatMessage] = await self.__autopilot.context_manager.get_chat_messages() + + # Insert at the end, but don't insert after latest user message or function call + i = -2 if (len(history_context) > 0 and ( + history_context[-1].role == "user" or history_context[-1].role == "function")) else -1 + for msg in context_messages: history_context.insert(i, msg) return history_context diff --git a/continuedev/src/continuedev/libs/constants/default_config.py.txt b/continuedev/src/continuedev/libs/constants/default_config.py.txt new file mode 100644 index 00000000..f80a9ff0 --- /dev/null +++ b/continuedev/src/continuedev/libs/constants/default_config.py.txt @@ -0,0 +1,87 @@ +""" +This is the Continue configuration file. + +If you aren't getting strong typing on these imports, +be sure to select the Python interpreter in ~/.continue/server/env. +""" + +import subprocess + +from continuedev.src.continuedev.core.main import Step +from continuedev.src.continuedev.core.sdk import ContinueSDK +from continuedev.src.continuedev.core.config import CustomCommand, SlashCommand, ContinueConfig +from continuedev.src.continuedev.plugins.context_providers.github import GitHubIssuesContextProvider +from continuedev.src.continuedev.plugins.context_providers.google import GoogleContextProvider + + +class CommitMessageStep(Step): + """ + This is a Step, the building block of Continue. + It can be used below as a slash command, so that + run will be called when you type '/commit'. + """ + async def run(self, sdk: ContinueSDK): + + # Get the root directory of the workspace + dir = sdk.ide.workspace_directory + + # Run git diff in that directory + diff = subprocess.check_output( + ["git", "diff"], cwd=dir).decode("utf-8") + + # Ask gpt-3.5-16k to write a commit message, + # and set it as the description of this step + self.description = await sdk.models.gpt3516k.complete( + f"{diff}\n\nWrite a short, specific (less than 50 chars) commit message about the above changes:") + + +config = ContinueConfig( + + # If set to False, we will not collect any usage data + # See here to learn what anonymous data we collect: https://continue.dev/docs/telemetry + allow_anonymous_telemetry=True, + + # GPT-4 is recommended for best results + # See options here: https://continue.dev/docs/customization#change-the-default-llm + default_model="gpt-4", + + # Set a system message with information that the LLM should always keep in mind + # E.g. "Please give concise answers. Always respond in Spanish." + system_message=None, + + # Set temperature to any value between 0 and 1. Higher values will make the LLM + # more creative, while lower values will make it more predictable. + temperature=0.5, + + # Custom commands let you map a prompt to a shortened slash command + # They are like slash commands, but more easily defined - write just a prompt instead of a Step class + # Their output will always be in chat form + custom_commands=[CustomCommand( + name="test", + description="This is an example custom command. Use /config to edit it and create more", + prompt="Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. 
Give the tests just as chat output, don't edit any file.", + )], + + # Slash commands let you run a Step from a slash command + slash_commands=[ + # SlashCommand( + # name="commit", + # description="This is an example slash command. Use /config to edit it and create more", + # step=CommitMessageStep, + # ) + ], + + # Context providers let you quickly select context by typing '@' + # Uncomment the following to + # - quickly reference GitHub issues + # - show Google search results to the LLM + context_providers=[ + # GitHubIssuesContextProvider( + # repo_name="<your github username or organization>/<your repo name>", + # auth_token="<your github auth token>" + # ), + # GoogleContextProvider( + # serper_api_key="<your serper.dev api key>" + # ) + ] +) diff --git a/continuedev/src/continuedev/libs/llm/proxy_server.py b/continuedev/src/continuedev/libs/llm/proxy_server.py index 75c91c4e..f9e3fa01 100644 --- a/continuedev/src/continuedev/libs/llm/proxy_server.py +++ b/continuedev/src/continuedev/libs/llm/proxy_server.py @@ -3,10 +3,10 @@ import json import traceback from typing import Any, Callable, Coroutine, Dict, Generator, List, Literal, Union import aiohttp -from ..util.telemetry import capture_event from ...core.main import ChatMessage from ..llm import LLM -from ..util.count_tokens import DEFAULT_ARGS, DEFAULT_MAX_TOKENS, compile_chat_messages, CHAT_MODELS, count_tokens, format_chat_messages +from ..util.telemetry import posthog_logger +from ..util.count_tokens import DEFAULT_ARGS, compile_chat_messages, count_tokens, format_chat_messages import certifi import ssl @@ -36,7 +36,7 @@ class ProxyServer(LLM): def count_tokens(self, text: str): return count_tokens(self.default_model, text) - + def get_headers(self): # headers with unique id return {"unique_id": self.unique_id} @@ -87,7 +87,7 @@ class ProxyServer(LLM): if "content" in loaded_chunk: completion += loaded_chunk["content"] except Exception as e: - capture_event(self.unique_id, "proxy_server_parse_error", { + posthog_logger.capture_event(self.unique_id, "proxy_server_parse_error", { "error_title": "Proxy server stream_chat parsing failed", "error_message": '\n'.join(traceback.format_exception(e))}) else: break diff --git a/continuedev/src/continuedev/libs/util/create_async_task.py b/continuedev/src/continuedev/libs/util/create_async_task.py index 354cea82..2473c638 100644 --- a/continuedev/src/continuedev/libs/util/create_async_task.py +++ b/continuedev/src/continuedev/libs/util/create_async_task.py @@ -1,6 +1,6 @@ from typing import Coroutine, Union import traceback -from .telemetry import capture_event +from .telemetry import posthog_logger import asyncio import nest_asyncio nest_asyncio.apply() @@ -16,7 +16,7 @@ def create_async_task(coro: Coroutine, unique_id: Union[str, None] = None): except Exception as e: print("Exception caught from async task: ", '\n'.join(traceback.format_exception(e))) - capture_event(unique_id or "None", "async_task_error", { + posthog_logger.capture_event("async_task_error", { "error_title": e.__str__() or e.__repr__(), "error_message": '\n'.join(traceback.format_exception(e)) }) diff --git a/continuedev/src/continuedev/libs/util/paths.py b/continuedev/src/continuedev/libs/util/paths.py index fddef887..14a97f57 100644 --- a/continuedev/src/continuedev/libs/util/paths.py +++ b/continuedev/src/continuedev/libs/util/paths.py @@ -2,16 +2,45 @@ import os from ..constants.main import CONTINUE_SESSIONS_FOLDER, CONTINUE_GLOBAL_FOLDER, CONTINUE_SERVER_FOLDER -def getGlobalFolderPath(): - return 
os.path.join(os.path.expanduser("~"), CONTINUE_GLOBAL_FOLDER) +def getGlobalFolderPath(): + path = os.path.join(os.path.expanduser("~"), CONTINUE_GLOBAL_FOLDER) + os.makedirs(path, exist_ok=True) + return path def getSessionsFolderPath(): - return os.path.join(getGlobalFolderPath(), CONTINUE_SESSIONS_FOLDER) + path = os.path.join(getGlobalFolderPath(), CONTINUE_SESSIONS_FOLDER) + os.makedirs(path, exist_ok=True) + return path + def getServerFolderPath(): - return os.path.join(getGlobalFolderPath(), CONTINUE_SERVER_FOLDER) + path = os.path.join(getGlobalFolderPath(), CONTINUE_SERVER_FOLDER) + os.makedirs(path, exist_ok=True) + return path + def getSessionFilePath(session_id: str): - return os.path.join(getSessionsFolderPath(), f"{session_id}.json")
\ No newline at end of file + path = os.path.join(getSessionsFolderPath(), f"{session_id}.json") + os.makedirs(os.path.dirname(path), exist_ok=True) + return path + + +def getDefaultConfigFile() -> str: + current_path = os.path.dirname(os.path.realpath(__file__)) + config_path = os.path.join( + current_path, "..", "constants", "default_config.py.txt") + with open(config_path, 'r') as f: + return f.read() + + +def getConfigFilePath() -> str: + path = os.path.join(getGlobalFolderPath(), "config.py") + os.makedirs(os.path.dirname(path), exist_ok=True) + + if not os.path.exists(path): + with open(path, 'w') as f: + f.write(getDefaultConfigFile()) + + return path diff --git a/continuedev/src/continuedev/libs/util/telemetry.py b/continuedev/src/continuedev/libs/util/telemetry.py index 17735dce..a967828e 100644 --- a/continuedev/src/continuedev/libs/util/telemetry.py +++ b/continuedev/src/continuedev/libs/util/telemetry.py @@ -1,27 +1,37 @@ from typing import Any from posthog import Posthog -from ...core.config import load_config import os from dotenv import load_dotenv from .commonregex import clean_pii_from_any load_dotenv() in_codespaces = os.getenv("CODESPACES") == "true" +POSTHOG_API_KEY = 'phc_JS6XFROuNbhJtVCEdTSYk6gl5ArRrTNMpCcguAXlSPs' -# The personal API key is necessary only if you want to use local evaluation of feature flags. -posthog = Posthog('phc_JS6XFROuNbhJtVCEdTSYk6gl5ArRrTNMpCcguAXlSPs', - host='https://app.posthog.com') +class PostHogLogger: + def __init__(self, api_key: str): + self.api_key = api_key + self.unique_id = None + self.allow_anonymous_telemetry = True -def capture_event(unique_id: str, event_name: str, event_properties: Any): - # Return early if telemetry is disabled - config = load_config('.continue/config.json') - if not config.allow_anonymous_telemetry: - return + def setup(self, unique_id: str, allow_anonymous_telemetry: bool): + self.unique_id = unique_id + self.allow_anonymous_telemetry = allow_anonymous_telemetry - if in_codespaces: - event_properties['codespaces'] = True + # The personal API key is necessary only if you want to use local evaluation of feature flags. 
+ self.posthog = Posthog(self.api_key, host='https://app.posthog.com') - # Send event to PostHog - posthog.capture(unique_id, event_name, - clean_pii_from_any(event_properties)) + def capture_event(self, event_name: str, event_properties: Any): + if not self.allow_anonymous_telemetry or self.unique_id is None: + return + + if in_codespaces: + event_properties['codespaces'] = True + + # Send event to PostHog + self.posthog.capture(self.unique_id, event_name, + clean_pii_from_any(event_properties)) + + +posthog_logger = PostHogLogger(api_key=POSTHOG_API_KEY) diff --git a/continuedev/src/continuedev/models/generate_json_schema.py b/continuedev/src/continuedev/models/generate_json_schema.py index 6cebf429..06614984 100644 --- a/continuedev/src/continuedev/models/generate_json_schema.py +++ b/continuedev/src/continuedev/models/generate_json_schema.py @@ -2,6 +2,7 @@ from .main import * from .filesystem import RangeInFile, FileEdit from .filesystem_edit import FileEditWithFullContents from ..core.main import History, HistoryNode, FullState +from ..core.context import ContextItem from pydantic import schema_json_of import os @@ -13,6 +14,8 @@ MODELS_TO_GENERATE = [ FileEditWithFullContents ] + [ History, HistoryNode, FullState +] + [ + ContextItem ] RENAMES = { diff --git a/continuedev/src/continuedev/plugins/context_providers/file.py b/continuedev/src/continuedev/plugins/context_providers/file.py new file mode 100644 index 00000000..6222ec6a --- /dev/null +++ b/continuedev/src/continuedev/plugins/context_providers/file.py @@ -0,0 +1,62 @@ +import os +import re +from typing import List +from ...core.main import ContextItem, ContextItemDescription, ContextItemId +from ...core.context import ContextProvider +from fnmatch import fnmatch + + +def get_file_contents(filepath: str) -> str: + try: + with open(filepath, "r") as f: + return f.read() + except UnicodeDecodeError: + return "" + + +class FileContextProvider(ContextProvider): + """ + The FileContextProvider is a ContextProvider that allows you to search files in the open workspace. 
+ """ + + title = "file" + workspace_dir: str + ignore_patterns: List[str] = [ + ".git", + ".vscode", + ".idea", + ".vs", + ".venv", + "env", + ".env", + "node_modules", + "dist", + "build", + "target", + "out", + "bin", + ".pytest_cache", + ".vscode-test", + ".continue", + ] + + async def provide_context_items(self) -> List[ContextItem]: + filepaths = [] + for root, dir_names, file_names in os.walk(self.workspace_dir): + dir_names[:] = [d for d in dir_names if not any( + fnmatch(d, pattern) for pattern in self.ignore_patterns)] + for file_name in file_names: + filepaths.append(os.path.join(root, file_name)) + + return [ContextItem( + content=get_file_contents(file)[:min( + 2000, len(get_file_contents(file)))], + description=ContextItemDescription( + name=os.path.basename(file), + description=file, + id=ContextItemId( + provider_title=self.title, + item_id=re.sub(r'[^0-9a-zA-Z_-]', '', file) + ) + ) + ) for file in filepaths] diff --git a/continuedev/src/continuedev/plugins/context_providers/github.py b/continuedev/src/continuedev/plugins/context_providers/github.py new file mode 100644 index 00000000..765a534d --- /dev/null +++ b/continuedev/src/continuedev/plugins/context_providers/github.py @@ -0,0 +1,35 @@ +from typing import List +from github import Github +from github import Auth + +from ...core.context import ContextProvider, ContextItemDescription, ContextItem, ContextItemId + + +class GitHubIssuesContextProvider(ContextProvider): + """ + The GitHubIssuesContextProvider is a ContextProvider + that allows you to search GitHub issues in a repo. + """ + + title = "issues" + repo_name: str + auth_token: str + + async def provide_context_items(self) -> List[ContextItem]: + auth = Auth.Token(self.auth_token) + gh = Github(auth=auth) + + repo = gh.get_repo(self.repo_name) + issues = repo.get_issues().get_page(0) + + return [ContextItem( + content=issue.body, + description=ContextItemDescription( + name=f"Issue #{issue.number}", + description=issue.title, + id=ContextItemId( + provider_title=self.title, + item_id=issue.id + ) + ) + ) for issue in issues] diff --git a/continuedev/src/continuedev/plugins/context_providers/google.py b/continuedev/src/continuedev/plugins/context_providers/google.py new file mode 100644 index 00000000..64954833 --- /dev/null +++ b/continuedev/src/continuedev/plugins/context_providers/google.py @@ -0,0 +1,64 @@ +import json +from typing import List + +import aiohttp +from ...core.main import ContextItem, ContextItemDescription, ContextItemId +from ...core.context import ContextProvider + + +class GoogleContextProvider(ContextProvider): + title = "google" + + serper_api_key: str + + GOOGLE_CONTEXT_ITEM_ID = "google_search" + + @property + def BASE_CONTEXT_ITEM(self): + return ContextItem( + content="", + description=ContextItemDescription( + name="Google Search", + description="Enter a query to search google", + id=ContextItemId( + provider_title=self.title, + item_id=self.GOOGLE_CONTEXT_ITEM_ID + ) + ) + ) + + async def _google_search(self, query: str) -> str: + url = "https://google.serper.dev/search" + + payload = json.dumps({ + "q": query + }) + headers = { + 'X-API-KEY': self.serper_api_key, + 'Content-Type': 'application/json' + } + + async with aiohttp.ClientSession() as session: + async with session.post(url, headers=headers, data=payload) as response: + return await response.text() + + async def provide_context_items(self) -> List[ContextItem]: + return [self.BASE_CONTEXT_ITEM] + + async def get_item(self, id: ContextItemId, query: str, _) -> 
ContextItem: + if not id.item_id == self.GOOGLE_CONTEXT_ITEM_ID: + raise Exception("Invalid item id") + + results = await self._google_search(query) + json_results = json.loads(results) + content = f"Google Search: {query}\n\n" + if answerBox := json_results.get("answerBox"): + content += f"Answer Box ({answerBox['title']}): {answerBox['answer']}\n\n" + + for result in json_results["organic"]: + content += f"{result['title']}\n{result['link']}\n{result['snippet']}\n\n" + + ctx_item = self.BASE_CONTEXT_ITEM.copy() + ctx_item.content = content + ctx_item.description.id.item_id = query + return ctx_item diff --git a/continuedev/src/continuedev/plugins/context_providers/highlighted_code.py b/continuedev/src/continuedev/plugins/context_providers/highlighted_code.py new file mode 100644 index 00000000..426c0804 --- /dev/null +++ b/continuedev/src/continuedev/plugins/context_providers/highlighted_code.py @@ -0,0 +1,191 @@ +import os +from typing import Any, Dict, List + +from meilisearch_python_async import Client +from ...core.main import ChatMessage +from ...models.filesystem import RangeInFile, RangeInFileWithContents +from ...core.context import ContextItem, ContextItemDescription, ContextItemId +from pydantic import BaseModel + + +class HighlightedRangeContextItem(BaseModel): + rif: RangeInFileWithContents + item: ContextItem + + +class HighlightedCodeContextProvider(BaseModel): + """ + The ContextProvider class is a plugin that lets you provide new information to the LLM by typing '@'. + When you type '@', the context provider will be asked to populate a list of options. + These options will be updated on each keystroke. + When you hit enter on an option, the context provider will add that item to the autopilot's list of context (which is all stored in the ContextManager object). 
+ """ + + title = "code" + + ide: Any # IdeProtocolServer + + highlighted_ranges: List[HighlightedRangeContextItem] = [] + adding_highlighted_code: bool = False + + should_get_fallback_context_item: bool = True + last_added_fallback: bool = False + + async def _get_fallback_context_item(self) -> HighlightedRangeContextItem: + if not self.should_get_fallback_context_item: + return None + + visible_files = await self.ide.getVisibleFiles() + if len(visible_files) > 0: + content = await self.ide.readFile(visible_files[0]) + rif = RangeInFileWithContents.from_entire_file( + visible_files[0], content) + + item = self._rif_to_context_item(rif, 0, True) + item.description.name = self._rif_to_name( + rif, show_line_nums=False) + + self.last_added_fallback = True + return HighlightedRangeContextItem(rif=rif, item=item) + + return None + + async def get_selected_items(self) -> List[ContextItem]: + items = [hr.item for hr in self.highlighted_ranges] + + if len(items) == 0 and (fallback_item := await self._get_fallback_context_item()): + items = [fallback_item.item] + + return items + + async def get_chat_messages(self) -> List[ContextItem]: + ranges = self.highlighted_ranges + if len(ranges) == 0 and (fallback_item := await self._get_fallback_context_item()): + ranges = [fallback_item] + + return [ChatMessage( + role="user", + content=f"Code in this file is highlighted ({r.rif.filepath}):\n```\n{r.rif.contents}\n```", + summary=f"Code in this file is highlighted: {r.rif.filepath}" + ) for r in ranges] + + def _make_sure_is_editing_range(self): + """If none of the highlighted ranges are currently being edited, the first should be selected""" + if len(self.highlighted_ranges) == 0: + return + if not any(map(lambda x: x.item.editing, self.highlighted_ranges)): + self.highlighted_ranges[0].item.editing = True + + def _disambiguate_highlighted_ranges(self): + """If any files have the same name, also display their folder name""" + name_status: Dict[str, set] = { + } # basename -> set of full paths with that basename + for hr in self.highlighted_ranges: + basename = os.path.basename(hr.rif.filepath) + if basename in name_status: + name_status[basename].add(hr.rif.filepath) + else: + name_status[basename] = {hr.rif.filepath} + + for hr in self.highlighted_ranges: + if len(name_status[basename]) > 1: + hr.item.description.name = self._rif_to_name(hr.rif, display_filename=os.path.join( + os.path.basename(os.path.dirname(hr.rif.filepath)), basename)) + else: + hr.item.description.name = self._rif_to_name( + hr.rif, display_filename=basename) + + async def provide_context_items(self) -> List[ContextItem]: + return [] + + async def delete_context_with_ids(self, ids: List[ContextItemId]) -> List[ContextItem]: + indices_to_delete = [ + int(id.item_id) for id in ids + ] + + kept_ranges = [] + for i, hr in enumerate(self.highlighted_ranges): + if i not in indices_to_delete: + kept_ranges.append(hr) + self.highlighted_ranges = kept_ranges + + self._make_sure_is_editing_range() + + if len(self.highlighted_ranges) == 0 and self.last_added_fallback: + self.should_get_fallback_context_item = False + + return [hr.item for hr in self.highlighted_ranges] + + def _rif_to_name(self, rif: RangeInFileWithContents, display_filename: str = None, show_line_nums: bool = True) -> str: + line_nums = f" ({rif.range.start.line + 1}-{rif.range.end.line + 1})" if show_line_nums else "" + return f"{display_filename or os.path.basename(rif.filepath)}{line_nums}" + + def _rif_to_context_item(self, rif: RangeInFileWithContents, idx: int, 
editing: bool) -> ContextItem: + return ContextItem( + description=ContextItemDescription( + name=self._rif_to_name(rif), + description=rif.filepath, + id=ContextItemId( + provider_title=self.title, + item_id=str(idx) + ) + ), + content=rif.contents, + editing=editing, + editable=True + ) + + async def handle_highlighted_code(self, range_in_files: List[RangeInFileWithContents]): + self.should_get_fallback_context_item = True + self.last_added_fallback = False + + # Filter out rifs from ~/.continue/diffs folder + range_in_files = [ + rif for rif in range_in_files if not os.path.dirname(rif.filepath) == os.path.expanduser("~/.continue/diffs")] + + # If not adding highlighted code + if not self.adding_highlighted_code: + if len(self.highlighted_ranges) == 1 and len(range_in_files) <= 1 and (len(range_in_files) == 0 or range_in_files[0].range.start == range_in_files[0].range.end): + # If un-highlighting the range to edit, then remove the range + self.highlighted_ranges = [] + elif len(range_in_files) > 0: + # Otherwise, replace the current range with the new one + # This is the first range to be highlighted + self.highlighted_ranges = [ + HighlightedRangeContextItem( + rif=range_in_files[0], + item=self._rif_to_context_item(range_in_files[0], 0, True))] + + return + + # If current range overlaps with any others, delete them and only keep the new range + new_ranges = [] + for i, hr in enumerate(self.highlighted_ranges): + found_overlap = False + for new_rif in range_in_files: + if hr.rif.filepath == new_rif.filepath and hr.rif.range.overlaps_with(new_rif.range): + found_overlap = True + break + + # Also don't allow multiple ranges in same file with same content. This is useless to the model, and avoids + # the bug where cmd+f causes repeated highlights + if hr.rif.filepath == new_rif.filepath and hr.rif.contents == new_rif.contents: + found_overlap = True + break + + if not found_overlap: + new_ranges.append(HighlightedRangeContextItem(rif=hr.rif, item=self._rif_to_context_item( + hr.rif, len(new_ranges), False))) + + self.highlighted_ranges = new_ranges + [HighlightedRangeContextItem(rif=rif, item=self._rif_to_context_item( + rif, len(new_ranges) + idx, False)) for idx, rif in enumerate(range_in_files)] + + self._make_sure_is_editing_range() + self._disambiguate_highlighted_ranges() + + async def set_editing_at_ids(self, ids: List[str]): + for hr in self.highlighted_ranges: + hr.item.editing = hr.item.description.id.to_string() in ids + + async def add_context_item(self, id: ContextItemId, query: str, search_client: Client, prev: List[ContextItem] = None) -> List[ContextItem]: + raise NotImplementedError() diff --git a/continuedev/src/continuedev/plugins/steps/custom_command.py b/continuedev/src/continuedev/plugins/steps/custom_command.py index d5b6e48b..419b3c3d 100644 --- a/continuedev/src/continuedev/plugins/steps/custom_command.py +++ b/continuedev/src/continuedev/plugins/steps/custom_command.py @@ -1,6 +1,6 @@ from ...libs.util.templating import render_templated_string from ...core.main import Step -from ...core.sdk import ContinueSDK +from ...core.sdk import ContinueSDK, Models from ..steps.chat import SimpleChatStep @@ -11,7 +11,7 @@ class CustomCommandStep(Step): slash_command: str hide: bool = True - async def describe(self): + async def describe(self, models: Models): return self.prompt async def run(self, sdk: ContinueSDK): diff --git a/continuedev/src/continuedev/plugins/steps/feedback.py b/continuedev/src/continuedev/plugins/steps/feedback.py index 119e3112..fa56a4d9 100644 --- 
a/continuedev/src/continuedev/plugins/steps/feedback.py +++ b/continuedev/src/continuedev/plugins/steps/feedback.py @@ -2,7 +2,7 @@ from typing import Coroutine from ...core.main import Models from ...core.main import Step from ...core.sdk import ContinueSDK -from ...libs.util.telemetry import capture_event +from ...libs.util.telemetry import posthog_logger class FeedbackStep(Step): @@ -13,5 +13,4 @@ class FeedbackStep(Step): return f"`{self.user_input}`\n\nWe'll see your feedback and make improvements as soon as possible. If you'd like to directly email us, you can contact [nate@continue.dev](mailto:nate@continue.dev?subject=Feedback%20On%20Continue)." async def run(self, sdk: ContinueSDK): - capture_event(sdk.ide.unique_id, "feedback", - {"feedback": self.user_input}) + posthog_logger.capture_event("feedback", {"feedback": self.user_input}) diff --git a/continuedev/src/continuedev/plugins/steps/help.py b/continuedev/src/continuedev/plugins/steps/help.py index 5111c7cf..d3807706 100644 --- a/continuedev/src/continuedev/plugins/steps/help.py +++ b/continuedev/src/continuedev/plugins/steps/help.py @@ -1,7 +1,7 @@ from textwrap import dedent from ...core.main import ChatMessage, Step from ...core.sdk import ContinueSDK -from ...libs.util.telemetry import capture_event +from ...libs.util.telemetry import posthog_logger help = dedent("""\ Continue is an open-source coding autopilot. It is a VS Code extension that brings the power of ChatGPT to your IDE. @@ -55,5 +55,5 @@ class HelpStep(Step): self.description += chunk["content"] await sdk.update_ui() - capture_event(sdk.ide.unique_id, "help", { - "question": question, "answer": self.description}) + posthog_logger.capture_event( + "help", {"question": question, "answer": self.description}) diff --git a/continuedev/src/continuedev/plugins/steps/main.py b/continuedev/src/continuedev/plugins/steps/main.py index 30117c55..a8752df2 100644 --- a/continuedev/src/continuedev/plugins/steps/main.py +++ b/continuedev/src/continuedev/plugins/steps/main.py @@ -15,20 +15,6 @@ from .core.core import DefaultModelEditCodeStep from ...libs.util.calculate_diff import calculate_diff2 -class SetupContinueWorkspaceStep(Step): - async def describe(self, models: Models) -> Coroutine[str, None, None]: - return "Set up Continue workspace by adding a .continue directory" - - async def run(self, sdk: ContinueSDK) -> Coroutine[Observation, None, None]: - if not os.path.exists(os.path.join(await sdk.ide.getWorkspaceDirectory(), ".continue")): - await sdk.add_directory(".continue") - if not os.path.exists(os.path.join(await sdk.ide.getWorkspaceDirectory(), ".continue", "config.json")): - await sdk.add_file(".continue/config.json", dedent("""\ - { - "allow_anonymous_telemetry": true - }""")) - - class Policy(BaseModel): pass diff --git a/continuedev/src/continuedev/plugins/steps/open_config.py b/continuedev/src/continuedev/plugins/steps/open_config.py index d950c26f..64ead547 100644 --- a/continuedev/src/continuedev/plugins/steps/open_config.py +++ b/continuedev/src/continuedev/plugins/steps/open_config.py @@ -1,6 +1,7 @@ from textwrap import dedent from ...core.main import Step from ...core.sdk import ContinueSDK +from ...libs.util.paths import getConfigFilePath import os @@ -9,21 +10,20 @@ class OpenConfigStep(Step): async def describe(self, models): return dedent("""\ - `\"config.json\"` is now open. 
You can add a custom slash command in the `\"custom_commands\"` section, like in this example: - ```json - "custom_commands": [ - { - "name": "test", - "description": "Write unit tests like I do for the highlighted code", - "prompt": "Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated." - } - ] + `\"config.py\"` is now open. You can add a custom slash command in the `\"custom_commands\"` section, like in this example: + ```python + config = ContinueConfig( + ... + custom_commands=[CustomCommand( + name="test", + description="Write unit tests like I do for the highlighted code", + prompt="Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated.", + )] + ) ``` - `"name"` is the command you will type. - `"description"` is the description displayed in the slash command menu. - `"prompt"` is the instruction given to the model. The overall prompt becomes "Task: {prompt}, Additional info: {user_input}". For example, if you entered "/test exactly 5 assertions", the overall prompt would become "Task: Write a comprehensive...and sophisticated, Additional info: exactly 5 assertions".""") + `name` is the command you will type. + `description` is the description displayed in the slash command menu. + `prompt` is the instruction given to the model. The overall prompt becomes "Task: {prompt}, Additional info: {user_input}". For example, if you entered "/test exactly 5 assertions", the overall prompt would become "Task: Write a comprehensive...and sophisticated, Additional info: exactly 5 assertions".""") async def run(self, sdk: ContinueSDK): - global_dir = os.path.expanduser('~/.continue') - config_path = os.path.join(global_dir, 'config.json') - await sdk.ide.setFileOpen(config_path) + await sdk.ide.setFileOpen(getConfigFilePath()) diff --git a/continuedev/src/continuedev/plugins/steps/steps_on_startup.py b/continuedev/src/continuedev/plugins/steps/steps_on_startup.py index 19d62d30..489cada3 100644 --- a/continuedev/src/continuedev/plugins/steps/steps_on_startup.py +++ b/continuedev/src/continuedev/plugins/steps/steps_on_startup.py @@ -1,6 +1,5 @@ from ...core.main import Step from ...core.sdk import Models, ContinueSDK -from ...libs.util.step_name_to_steps import get_step_from_name class StepsOnStartupStep(Step): @@ -12,6 +11,6 @@ class StepsOnStartupStep(Step): async def run(self, sdk: ContinueSDK): steps_on_startup = sdk.config.steps_on_startup - for step_name, step_params in steps_on_startup.items(): - step = get_step_from_name(step_name, step_params) + for step_type in steps_on_startup: + step = step_type() await sdk.run_step(step) diff --git a/continuedev/src/continuedev/server/gui.py b/continuedev/src/continuedev/server/gui.py index ae57c0b6..c0957395 100644 --- a/continuedev/src/continuedev/server/gui.py +++ b/continuedev/src/continuedev/server/gui.py @@ -2,15 +2,15 @@ import asyncio import json from fastapi import Depends, Header, WebSocket, APIRouter from starlette.websockets import WebSocketState, WebSocketDisconnect -from typing import Any, List, Type, TypeVar, Union +from typing import Any, List, Type, TypeVar from pydantic import BaseModel import traceback from uvicorn.main import Server -from .session_manager import SessionManager, session_manager, Session +from 
.session_manager import session_manager, Session from .gui_protocol import AbstractGUIProtocolServer from ..libs.util.queue import AsyncSubscriptionQueue -from ..libs.util.telemetry import capture_event +from ..libs.util.telemetry import posthog_logger from ..libs.util.create_async_task import create_async_task router = APIRouter(prefix="/gui", tags=["gui"]) @@ -61,12 +61,12 @@ class GUIProtocolServer(AbstractGUIProtocolServer): "data": data }) - async def _receive_json(self, message_type: str, timeout: int = 5) -> Any: + async def _receive_json(self, message_type: str, timeout: int = 20) -> Any: try: return await asyncio.wait_for(self.sub_queue.get(message_type), timeout=timeout) except asyncio.TimeoutError: raise Exception( - "GUI Protocol _receive_json timed out after 5 seconds") + "GUI Protocol _receive_json timed out after 20 seconds") async def _send_and_receive_json(self, data: Any, resp_model: Type[T], message_type: str) -> T: await self._send_json(message_type, data) @@ -85,31 +85,23 @@ class GUIProtocolServer(AbstractGUIProtocolServer): self.on_reverse_to_index(data["index"]) elif message_type == "retry_at_index": self.on_retry_at_index(data["index"]) - elif message_type == "change_default_model": - self.on_change_default_model(data["model"]) elif message_type == "clear_history": self.on_clear_history() elif message_type == "delete_at_index": self.on_delete_at_index(data["index"]) - elif message_type == "delete_context_at_indices": - self.on_delete_context_at_indices(data["indices"]) + elif message_type == "delete_context_with_ids": + self.on_delete_context_with_ids(data["ids"]) elif message_type == "toggle_adding_highlighted_code": self.on_toggle_adding_highlighted_code() elif message_type == "set_editing_at_indices": self.on_set_editing_at_indices(data["indices"]) - elif message_type == "set_pinned_at_indices": - self.on_set_pinned_at_indices(data["indices"]) elif message_type == "show_logs_at_index": self.on_show_logs_at_index(data["index"]) + elif message_type == "select_context_item": + self.select_context_item(data["id"], data["query"]) except Exception as e: print(e) - async def send_state_update(self): - state = self.session.autopilot.get_full_state().dict() - await self._send_json("state_update", { - "state": state - }) - def on_main_input(self, input: str): # Do something with user input create_async_task(self.session.autopilot.accept_user_input( @@ -132,10 +124,6 @@ class GUIProtocolServer(AbstractGUIProtocolServer): create_async_task( self.session.autopilot.retry_at_index(index), self.session.autopilot.continue_sdk.ide.unique_id) - def on_change_default_model(self, model: str): - create_async_task(self.session.autopilot.change_default_model( - model), self.session.autopilot.continue_sdk.ide.unique_id) - def on_clear_history(self): create_async_task(self.session.autopilot.clear_history( ), self.session.autopilot.continue_sdk.ide.unique_id) @@ -144,10 +132,10 @@ class GUIProtocolServer(AbstractGUIProtocolServer): create_async_task(self.session.autopilot.delete_at_index( index), self.session.autopilot.continue_sdk.ide.unique_id) - def on_delete_context_at_indices(self, indices: List[int]): + def on_delete_context_with_ids(self, ids: List[str]): create_async_task( - self.session.autopilot.delete_context_at_indices( - indices), self.session.autopilot.continue_sdk.ide.unique_id + self.session.autopilot.delete_context_with_ids( + ids), self.session.autopilot.continue_sdk.ide.unique_id ) def on_toggle_adding_highlighted_code(self): @@ -162,18 +150,17 @@ class 
GUIProtocolServer(AbstractGUIProtocolServer): indices), self.session.autopilot.continue_sdk.ide.unique_id ) - def on_set_pinned_at_indices(self, indices: List[int]): - create_async_task( - self.session.autopilot.set_pinned_at_indices( - indices), self.session.autopilot.continue_sdk.ide.unique_id - ) - def on_show_logs_at_index(self, index: int): name = f"continue_logs.txt" logs = "\n\n############################################\n\n".join( ["This is a log of the exact prompt/completion pairs sent/received from the LLM during this step"] + self.session.autopilot.continue_sdk.history.timeline[index].logs) create_async_task( - self.session.autopilot.ide.showVirtualFile(name, logs)) + self.session.autopilot.ide.showVirtualFile(name, logs), self.session.autopilot.continue_sdk.ide.unique_id) + + def select_context_item(self, id: str, query: str): + """Called when user selects an item from the dropdown""" + create_async_task( + self.session.autopilot.select_context_item(id, query), self.session.autopilot.continue_sdk.ide.unique_id) @router.websocket("/ws") @@ -188,11 +175,11 @@ async def websocket_endpoint(websocket: WebSocket, session: Session = Depends(we protocol.websocket = websocket # Update any history that may have happened before connection - await protocol.send_state_update() + await protocol.session.autopilot.update_subscribers() while AppStatus.should_exit is False: message = await websocket.receive_text() - print("Received message", message) + print("Received GUI message", message) if type(message) is str: message = json.loads(message) @@ -206,13 +193,13 @@ async def websocket_endpoint(websocket: WebSocket, session: Session = Depends(we print("GUI websocket disconnected") except Exception as e: print("ERROR in gui websocket: ", e) - capture_event(session.autopilot.continue_sdk.ide.unique_id, "gui_error", { - "error_title": e.__str__() or e.__repr__(), "error_message": '\n'.join(traceback.format_exception(e))}) + posthog_logger.capture_event("gui_error", { + "error_title": e.__str__() or e.__repr__(), "error_message": '\n'.join(traceback.format_exception(e))}) raise e finally: print("Closing gui websocket") if websocket.client_state != WebSocketState.DISCONNECTED: await websocket.close() - session_manager.persist_session(session.session_id) + await session_manager.persist_session(session.session_id) session_manager.remove_session(session.session_id) diff --git a/continuedev/src/continuedev/server/gui_protocol.py b/continuedev/src/continuedev/server/gui_protocol.py index 9766fcd0..990833be 100644 --- a/continuedev/src/continuedev/server/gui_protocol.py +++ b/continuedev/src/continuedev/server/gui_protocol.py @@ -1,6 +1,8 @@ from typing import Any, Dict, List from abc import ABC, abstractmethod +from ..core.context import ContextItem + class AbstractGUIProtocolServer(ABC): @abstractmethod @@ -24,21 +26,17 @@ class AbstractGUIProtocolServer(ABC): """Called when the user inputs a step""" @abstractmethod - async def send_state_update(self, state: dict): - """Send a state update to the client""" - - @abstractmethod def on_retry_at_index(self, index: int): """Called when the user requests a retry at a previous index""" @abstractmethod - def on_change_default_model(self): - """Called when the user requests to change the default model""" - - @abstractmethod def on_clear_history(self): """Called when the user requests to clear the history""" @abstractmethod def on_delete_at_index(self, index: int): """Called when the user requests to delete a step at a given index""" + + @abstractmethod + def 
select_context_item(self, id: str, query: str): + """Called when user selects an item from the dropdown""" diff --git a/continuedev/src/continuedev/server/ide.py b/continuedev/src/continuedev/server/ide.py index aeff5623..cf8b32a1 100644 --- a/continuedev/src/continuedev/server/ide.py +++ b/continuedev/src/continuedev/server/ide.py @@ -1,23 +1,25 @@ # This is a separate server from server/main.py -from functools import cached_property import json import os -from typing import Any, Dict, List, Type, TypeVar, Union +from typing import Any, List, Type, TypeVar, Union import uuid -from fastapi import WebSocket, Body, APIRouter +from fastapi import WebSocket, APIRouter from starlette.websockets import WebSocketState, WebSocketDisconnect from uvicorn.main import Server +from pydantic import BaseModel import traceback +import asyncio -from ..libs.util.telemetry import capture_event +from .meilisearch_server import start_meilisearch +from ..libs.util.telemetry import posthog_logger from ..libs.util.queue import AsyncSubscriptionQueue from ..models.filesystem import FileSystem, RangeInFile, EditDiff, RangeInFileWithContents, RealFileSystem from ..models.filesystem_edit import AddDirectory, AddFile, DeleteDirectory, DeleteFile, FileSystemEdit, FileEdit, FileEditWithFullContents, RenameDirectory, RenameFile, SequentialFileSystemEdit -from pydantic import BaseModel -from .gui import SessionManager, session_manager +from .gui import session_manager from .ide_protocol import AbstractIdeProtocolServer -import asyncio from ..libs.util.create_async_task import create_async_task +from .session_manager import SessionManager + import nest_asyncio nest_asyncio.apply() @@ -138,6 +140,7 @@ class IdeProtocolServer(AbstractIdeProtocolServer): continue message_type = message["messageType"] data = message["data"] + print("Received message while initializing", message_type) if message_type == "workspaceDirectory": self.workspace_directory = data["workspaceDirectory"] elif message_type == "uniqueId": @@ -152,17 +155,18 @@ class IdeProtocolServer(AbstractIdeProtocolServer): async def _send_json(self, message_type: str, data: Any): if self.websocket.application_state == WebSocketState.DISCONNECTED: return + print("Sending IDE message: ", message_type) await self.websocket.send_json({ "messageType": message_type, "data": data }) - async def _receive_json(self, message_type: str, timeout: int = 5) -> Any: + async def _receive_json(self, message_type: str, timeout: int = 20) -> Any: try: return await asyncio.wait_for(self.sub_queue.get(message_type), timeout=timeout) except asyncio.TimeoutError: raise Exception( - "IDE Protocol _receive_json timed out after 5 seconds") + "IDE Protocol _receive_json timed out after 20 seconds", message_type) async def _send_and_receive_json(self, data: Any, resp_model: Type[T], message_type: str) -> T: await self._send_json(message_type, data) @@ -273,12 +277,12 @@ class IdeProtocolServer(AbstractIdeProtocolServer): # like file changes, tracebacks, etc... 
def onAcceptRejectSuggestion(self, accepted: bool): - capture_event(self.unique_id, "accept_reject_suggestion", { + posthog_logger.capture_event("accept_reject_suggestion", { "accepted": accepted }) def onAcceptRejectDiff(self, accepted: bool): - capture_event(self.unique_id, "accept_reject_diff", { + posthog_logger.capture_event("accept_reject_diff", { "accepted": accepted }) @@ -431,6 +435,13 @@ class IdeProtocolServer(AbstractIdeProtocolServer): @router.websocket("/ws") async def websocket_endpoint(websocket: WebSocket, session_id: str = None): try: + # Start meilisearch + try: + await start_meilisearch() + except Exception as e: + print("Failed to start MeiliSearch") + print(e) + await websocket.accept() print("Accepted websocket connection from, ", websocket.client) await websocket.send_json({"messageType": "connected", "data": {}}) @@ -443,6 +454,7 @@ async def websocket_endpoint(websocket: WebSocket, session_id: str = None): message_type = message["messageType"] data = message["data"] + print("Received IDE message: ", message_type) create_async_task( ideProtocolServer.handle_json(message_type, data)) @@ -450,8 +462,8 @@ async def websocket_endpoint(websocket: WebSocket, session_id: str = None): if session_id is not None: session_manager.registered_ides[session_id] = ideProtocolServer other_msgs = await ideProtocolServer.initialize(session_id) - capture_event(ideProtocolServer.unique_id, "session_started", { - "session_id": ideProtocolServer.session_id}) + posthog_logger.capture_event("session_started", { + "session_id": ideProtocolServer.session_id}) for other_msg in other_msgs: handle_msg(other_msg) @@ -465,13 +477,14 @@ async def websocket_endpoint(websocket: WebSocket, session_id: str = None): print("IDE wbsocket disconnected") except Exception as e: print("Error in ide websocket: ", e) - capture_event(ideProtocolServer.unique_id, "gui_error", { - "error_title": e.__str__() or e.__repr__(), "error_message": '\n'.join(traceback.format_exception(e))}) + posthog_logger.capture_event("gui_error", { + "error_title": e.__str__() or e.__repr__(), "error_message": '\n'.join(traceback.format_exception(e))}) raise e finally: if websocket.client_state != WebSocketState.DISCONNECTED: await websocket.close() - capture_event(ideProtocolServer.unique_id, "session_ended", { - "session_id": ideProtocolServer.session_id}) - session_manager.registered_ides.pop(ideProtocolServer.session_id) + posthog_logger.capture_event("session_ended", { + "session_id": ideProtocolServer.session_id}) + if ideProtocolServer.session_id in session_manager.registered_ides: + session_manager.registered_ides.pop(ideProtocolServer.session_id) diff --git a/continuedev/src/continuedev/server/main.py b/continuedev/src/continuedev/server/main.py index 42dc0cc1..0b59d4fe 100644 --- a/continuedev/src/continuedev/server/main.py +++ b/continuedev/src/continuedev/server/main.py @@ -1,15 +1,17 @@ +import asyncio import time import psutil import os from fastapi import FastAPI from fastapi.middleware.cors import CORSMiddleware -from .ide import router as ide_router -from .gui import router as gui_router -from .session_manager import session_manager import atexit import uvicorn import argparse +from .ide import router as ide_router +from .gui import router as gui_router +from .session_manager import session_manager + app = FastAPI() app.include_router(ide_router) @@ -41,15 +43,20 @@ args = parser.parse_args() # log_file = open('output.log', 'a') # sys.stdout = log_file - def run_server(): uvicorn.run(app, host="0.0.0.0", 
port=args.port) -def cleanup(): +async def cleanup_coroutine(): print("Cleaning up sessions") for session_id in session_manager.sessions: - session_manager.persist_session(session_id) + await session_manager.persist_session(session_id) + + +def cleanup(): + loop = asyncio.new_event_loop() + loop.run_until_complete(cleanup_coroutine()) + loop.close() def cpu_usage_report(): @@ -79,5 +86,6 @@ if __name__ == "__main__": run_server() except Exception as e: + print("Error starting Continue server: ", e) cleanup() raise e diff --git a/continuedev/src/continuedev/server/meilisearch_server.py b/continuedev/src/continuedev/server/meilisearch_server.py new file mode 100644 index 00000000..286019e1 --- /dev/null +++ b/continuedev/src/continuedev/server/meilisearch_server.py @@ -0,0 +1,77 @@ +import os +import shutil +import subprocess + +from meilisearch_python_async import Client +from ..libs.util.paths import getServerFolderPath + + +def ensure_meilisearch_installed(): + """ + Checks if MeiliSearch is installed. + """ + serverPath = getServerFolderPath() + meilisearchPath = os.path.join(serverPath, "meilisearch") + dumpsPath = os.path.join(serverPath, "dumps") + dataMsPath = os.path.join(serverPath, "data.ms") + + paths = [meilisearchPath, dumpsPath, dataMsPath] + + existing_paths = set() + non_existing_paths = set() + for path in paths: + if os.path.exists(path): + existing_paths.add(path) + else: + non_existing_paths.add(path) + + if len(non_existing_paths) > 0: + # Clear the meilisearch binary + if meilisearchPath in existing_paths: + os.remove(meilisearchPath) + non_existing_paths.remove(meilisearchPath) + + # Clear the existing directories + for p in existing_paths: + shutil.rmtree(p, ignore_errors=True) + + # Download MeiliSearch + print("Downloading MeiliSearch...") + subprocess.run( + f"curl -L https://install.meilisearch.com | sh", shell=True, check=True, cwd=serverPath) + + +async def check_meilisearch_running() -> bool: + """ + Checks if MeiliSearch is running. + """ + + try: + client = Client('http://localhost:7700') + resp = await client.health() + if resp["status"] != "available": + return False + return True + except Exception: + return False + + +async def start_meilisearch(): + """ + Starts the MeiliSearch server, wait for it. 
+ """ + + # Doesn't work on windows for now + if not os.name == "posix": + return + + serverPath = getServerFolderPath() + + # Check if MeiliSearch is installed, if not download + ensure_meilisearch_installed() + + # Check if MeiliSearch is running + if not await check_meilisearch_running(): + print("Starting MeiliSearch...") + subprocess.Popen(["./meilisearch"], cwd=serverPath, stdout=subprocess.DEVNULL, + stderr=subprocess.STDOUT, close_fds=True, start_new_session=True) diff --git a/continuedev/src/continuedev/server/session_manager.py b/continuedev/src/continuedev/server/session_manager.py index 20219273..3136f1bf 100644 --- a/continuedev/src/continuedev/server/session_manager.py +++ b/continuedev/src/continuedev/server/session_manager.py @@ -74,7 +74,7 @@ class SessionManager: async def on_update(state: FullState): await session_manager.send_ws_data(session_id, "state_update", { - "state": autopilot.get_full_state().dict() + "state": state.dict() }) autopilot.on_update(on_update) @@ -84,9 +84,9 @@ class SessionManager: def remove_session(self, session_id: str): del self.sessions[session_id] - def persist_session(self, session_id: str): + async def persist_session(self, session_id: str): """Save the session's FullState as a json file""" - full_state = self.sessions[session_id].autopilot.get_full_state() + full_state = await self.sessions[session_id].autopilot.get_full_state() if not os.path.exists(getSessionsFolderPath()): os.mkdir(getSessionsFolderPath()) with open(getSessionFilePath(session_id), "w") as f: |