Mirror of https://github.com/biobootloader/wolverine
Squashed commit of the following:

commit 742aaaf9d1 (pull/16/head)
Merge: f2d21e7 fe87faa
Author: biobootloader <128252497+biobootloader@users.noreply.github.com>
Date:   Fri Apr 14 15:44:12 2023 -0700

    Merge pull request #13 from fsboehme/main

    more robust parsing of JSON (+ indentation)

commit fe87faa2fb
Author: Felix Boehme <fsboehme@gmail.com>
Date:   Fri Apr 14 17:49:48 2023 -0400

    cleanup

commit 4db9d1bf43
Author: Felix Boehme <fsboehme@gmail.com>
Date:   Fri Apr 14 17:49:09 2023 -0400

    more cleanup

commit e1d0a790f8
Author: Felix Boehme <fsboehme@gmail.com>
Date:   Fri Apr 14 17:46:18 2023 -0400

    cleanup

commit b044882dc3
Author: Felix Boehme <fsboehme@gmail.com>
Date:   Fri Apr 14 17:37:27 2023 -0400

    remove duplicate code from rebase

commit dd174cf30e
Author: Felix Boehme <fsboehme@gmail.com>
Date:   Fri Apr 14 17:15:07 2023 -0400

    add DEFAULT_MODEL to .env.sample + fix typo

commit 2497fb816b
Author: Felix Boehme <fsboehme@gmail.com>
Date:   Fri Apr 14 16:29:45 2023 -0400

    move json_validated_response to standalone function

commit 923f7057e3
Author: Felix Boehme <fsboehme@gmail.com>
Date:   Thu Apr 13 11:35:24 2023 -0400

    update readme

    - updated readme to mention .env
    - added model arg back

commit 0656a83da7
Author: Felix Boehme <fsboehme@gmail.com>
Date:   Thu Apr 13 11:29:06 2023 -0400

    recursive calls if not json parsable

    - makes recursive calls to API (with a comment about it not being parsable) if response was not parsable
    - pass prompt.txt as system prompt
    - use env var for `DEFAULT_MODEL`
    - use env var for OPENAI_API_KEY

commit 7c072fba2a
Author: Felix Boehme <fsboehme@gmail.com>
Date:   Thu Apr 13 11:24:41 2023 -0400

    update prompt to make it pay attention to indentation

commit c62f91eaee
Author: Felix Boehme <fsboehme@gmail.com>
Date:   Thu Apr 13 11:23:44 2023 -0400

    Update .gitignore

commit f2d21e7b93
Merge: 0420860 6343f6f
Author: biobootloader <128252497+biobootloader@users.noreply.github.com>
Date:   Fri Apr 14 13:59:44 2023 -0700

    Merge pull request #12 from chriscarrollsmith/main

    Implemented .env file API key storage

commit 6343f6f50b
Author: biobootloader <128252497+biobootloader@users.noreply.github.com>
Date:   Fri Apr 14 13:59:31 2023 -0700

    Apply suggestions from code review

commit d87ebfa46f
Merge: 9af5480 75f08e2
Author: Christopher Carroll Smith <75859865+chriscarrollsmith@users.noreply.github.com>
Date:   Fri Apr 14 16:53:25 2023 -0400

    Merge branch 'main' of https://github.com/chriscarrollsmith/wolverine

commit 9af5480b89
Author: Christopher Carroll Smith <75859865+chriscarrollsmith@users.noreply.github.com>
Date:   Fri Apr 14 16:53:02 2023 -0400

    Added python-dotenv to requirements.txt

commit 75f08e2852
Merge: e8a8931 0420860
Author: Christopher Carroll Smith <75859865+chriscarrollsmith@users.noreply.github.com>
Date:   Fri Apr 14 16:50:29 2023 -0400

    Merge pull request #1 from biobootloader/main

    Reconcile with master branch

commit 04208605fe
Merge: d547822 6afb4db
Author: biobootloader <128252497+biobootloader@users.noreply.github.com>
Date:   Fri Apr 14 13:22:53 2023 -0700

    Merge pull request #20 from eltociear/patch-1

    fix typo in README.md

commit d54782230c
Merge: 1b9649e 4863df6
Author: biobootloader <128252497+biobootloader@users.noreply.github.com>
Date:   Fri Apr 14 13:19:43 2023 -0700

    Merge pull request #17 from hemangjoshi37a/main

    added `star-history` ⭐⭐⭐⭐⭐

commit 6afb4db2ff
Author: Ikko Eltociear Ashimine <eltociear@gmail.com>
Date:   Fri Apr 14 16:37:05 2023 +0900

    fix typo in README.md

    reliablity -> reliability

commit 4863df6898
Author: Hemang Joshi <hemangjoshi37a@gmail.com>
Date:   Fri Apr 14 10:27:32 2023 +0530

    added `star-history`

commit e8a893156e
Author: Christopher Carroll Smith <75859865+chriscarrollsmith@users.noreply.github.com>
Date:   Wed Apr 12 13:45:54 2023 -0400

    Implemented .env file API key storage
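The "recursive calls if not json parsable" commit above describes a retry pattern worth seeing in isolation: if the model's reply cannot be parsed as JSON, a complaint is appended to the conversation and the call is repeated. A minimal sketch, where `ask_model` is a hypothetical stand-in for the real chat-completion call and a retry cap is added (the actual commit leaves the cap as a todo):

```python
import json

def json_validated_call(ask_model, messages, max_retries=3):
    """Retry until the model reply parses as JSON.
    `ask_model` is a hypothetical stand-in for the API call."""
    for _ in range(max_retries):
        content = ask_model(messages)
        try:
            # the reply may contain prose before the JSON array, so slice from '['
            return json.loads(content[content.index("["):])
        except (ValueError, json.JSONDecodeError):
            # tell the model its last answer was unusable, then retry
            messages.append({
                "role": "user",
                "content": "Your response could not be parsed by json.loads. "
                           "Please restate your last message as pure JSON.",
            })
    raise RuntimeError("model never returned parsable JSON")

# stubbed model: fails once (no JSON at all), then returns a valid array
replies = iter(["sorry, working on it", 'Sure! [{"operation": "Delete", "line": 2}]'])
result = json_validated_call(lambda m: next(replies), [])
print(result)  # → [{'operation': 'Delete', 'line': 2}]
```

The stub demonstrates only the control flow; the real code wires `openai.ChatCompletion.create` into this loop, as the wolverine.py diff below shows.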
Parent: e1c413fae2
Commit: 946e15ff20
.env.sample
@@ -0,0 +1,2 @@
+OPENAI_API_KEY=your_api_key
+#DEFAULT_MODEL=gpt-3.5-turbo
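The two-line `.env.sample` above pairs with python-dotenv on the Python side. A minimal sketch of the loading pattern (the `load_dotenv` lines are commented out here so the snippet runs without the third-party package installed):

```python
import os

# from dotenv import load_dotenv  # third-party; used in the real project
# load_dotenv()                   # reads .env into the process environment

# DEFAULT_MODEL is commented out in .env.sample, so the code
# must fall back to gpt-4 when the variable is absent:
DEFAULT_MODEL = os.environ.get("DEFAULT_MODEL", "gpt-4")
```

Shipping a `.env.sample` while git-ignoring `.env` itself (next diff) keeps the expected variable names in the repo without ever committing a real key.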
.gitignore
@@ -1,2 +1,5 @@
 venv
-openai_key.txt
+.venv
+.env
+env/
+.vscode/
README.md (15 lines changed)
@@ -13,8 +13,11 @@ For a quick demonstration see my [demo video on twitter](https://twitter.com/bio
 python3 -m venv venv
 source venv/bin/activate
 pip install -r requirements.txt
+cp .env.sample .env
 
-Add your openAI api key to `openai_key.txt` - _warning!_ by default this uses GPT-4 and may make many repeated calls to the api.
+Add your openAI api key to `.env`
+
+_warning!_ By default wolverine uses GPT-4 and may make many repeated calls to the api.
 
 ## Example Usage
 
@@ -30,13 +33,21 @@ You can also use flag `--confirm=True` which will ask you `yes or no` before mak
 
 python wolverine.py buggy_script.py "subtract" 20 3 --confirm=True
 
+If you want to use GPT-3.5 by default instead of GPT-4 uncomment the default model line in `.env`:
+
+DEFAULT_MODEL=gpt-3.5-turbo
+
 ## Future Plans
 
 This is just a quick prototype I threw together in a few hours. There are many possible extensions and contributions are welcome:
 
 - add flags to customize usage, such as asking for user confirmation before running changed code
 - further iterations on the edit format that GPT responds in. Currently it struggles a bit with indentation, but I'm sure that can be improved
-- a suite of example buggy files that we can test prompts on to ensure reliablity and measure improvement
+- a suite of example buggy files that we can test prompts on to ensure reliability and measure improvement
 - multiple files / codebases: send GPT everything that appears in the stacktrace
 - graceful handling of large files - should we just send GPT relevant classes / functions?
 - extension to languages other than python
+
+## Star History
+
+[![Star History Chart](https://api.star-history.com/svg?repos=biobootloader/wolverine&type=Date)](https://star-history.com/#biobootloader/wolverine)
prompt.txt
@@ -4,10 +4,13 @@ Because you are part of an automated system, the format you respond in is very s
 
 In addition to the changes, please also provide short explanations of the what went wrong. A single explanation is required, but if you think it's helpful, feel free to provide more explanations for groups of more complicated changes. Be careful to use proper indentation and spacing in your changes. An example response could be:
 
+Be ABSOLUTELY SURE to include the CORRECT INDENTATION when making replacements.
+
+example response:
 [
 {"explanation": "this is just an example, this would usually be a brief explanation of what went wrong"},
 {"operation": "InsertAfter", "line": 10, "content": "x = 1\ny = 2\nz = x * y"},
 {"operation": "Delete", "line": 15, "content": ""},
-{"operation": "Replace", "line": 18, "content": "x += 1"},
+{"operation": "Replace", "line": 18, "content": "  x += 1"},
 {"operation": "Delete", "line": 20, "content": ""}
 ]
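The `InsertAfter`/`Delete`/`Replace` operations in the example response above can be applied to a file's lines roughly as follows. This is a sketch, not the repository's exact implementation; one natural design choice shown here is applying operations from the highest line number down, so earlier line numbers stay valid as the list shrinks and grows:

```python
def apply_ops(lines, ops):
    """Apply InsertAfter/Delete/Replace ops (1-indexed lines), highest line first."""
    for op in sorted(ops, key=lambda o: o["line"], reverse=True):
        i = op["line"] - 1
        if op["operation"] == "Replace":
            lines[i] = op["content"]
        elif op["operation"] == "Delete":
            del lines[i]
        elif op["operation"] == "InsertAfter":
            # content may carry several lines joined with \n, as in the example
            lines[i + 1:i + 1] = op["content"].split("\n")
    return lines

lines = ["a", "b", "c"]
ops = [
    {"operation": "Replace", "line": 2, "content": "B"},
    {"operation": "InsertAfter", "line": 3, "content": "d\ne"},
]
print(apply_ops(lines, ops))  # → ['a', 'B', 'c', 'd', 'e']
```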
requirements.txt
@@ -13,6 +13,7 @@ multidict==6.0.4
 openai==0.27.2
 pycodestyle==2.10.0
 pyflakes==3.0.1
+python-dotenv==1.0.0
 requests==2.28.2
 six==1.16.0
 termcolor==2.2.0
wolverine.py (97 lines changed)
@@ -5,13 +5,20 @@ import os
 import shutil
 import subprocess
 import sys
 
 import openai
 from termcolor import cprint
+from dotenv import load_dotenv
 
 
-# Set up the OpenAI API
-with open("openai_key.txt") as f:
-    openai.api_key = f.read().strip()
+load_dotenv()
+openai.api_key = os.getenv("OPENAI_API_KEY")
+
+DEFAULT_MODEL = os.environ.get("DEFAULT_MODEL", "gpt-4")
+
+
+with open("prompt.txt") as f:
+    SYSTEM_PROMPT = f.read()
 
 
 def run_script(script_name, script_args):
@@ -25,7 +32,49 @@ def run_script(script_name, script_args):
     return result.decode("utf-8"), 0
 
 
-def send_error_to_gpt(file_path, args, error_message, model):
+def json_validated_response(model, messages):
+    """
+    This function is needed because the API can return a non-json response.
+    This will run recursively until a valid json response is returned.
+    todo: might want to stop after a certain number of retries
+    """
+    response = openai.ChatCompletion.create(
+        model=model,
+        messages=messages,
+        temperature=0.5,
+    )
+    messages.append(response.choices[0].message)
+    content = response.choices[0].message.content
+    # see if json can be parsed
+    try:
+        json_start_index = content.index(
+            "["
+        )  # find the starting position of the JSON data
+        json_data = content[
+            json_start_index:
+        ]  # extract the JSON data from the response string
+        json_response = json.loads(json_data)
+    except (json.decoder.JSONDecodeError, ValueError) as e:
+        cprint(f"{e}. Re-running the query.", "red")
+        # debug
+        cprint(f"\nGPT RESPONSE:\n\n{content}\n\n", "yellow")
+        # append a user message that says the json is invalid
+        messages.append(
+            {
+                "role": "user",
+                "content": "Your response could not be parsed by json.loads. Please restate your last message as pure JSON.",
+            }
+        )
+        # rerun the api call
+        return json_validated_response(model, messages)
+    except Exception as e:
+        cprint(f"Unknown error: {e}", "red")
+        cprint(f"\nGPT RESPONSE:\n\n{content}\n\n", "yellow")
+        raise e
+    return json_response
+
+
+def send_error_to_gpt(file_path, args, error_message, model=DEFAULT_MODEL):
     with open(file_path, "r") as f:
         file_lines = f.readlines()
 
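The extraction step inside the new `json_validated_response` (slicing from the first `[` before calling `json.loads`) can be exercised on its own, without any API call:

```python
import json

def extract_json_array(content):
    # find the starting position of the JSON data and parse from there,
    # tolerating any prose the model emits before the array
    return json.loads(content[content.index("["):])

reply = 'Here are the fixes:\n[{"operation": "Replace", "line": 1, "content": "x = 2"}]'
print(extract_json_array(reply))
```

Note that, like the function in the diff, this tolerates a prose prefix but not trailing prose after the closing `]`; `json.loads` would reject that, triggering the retry branch.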
@@ -34,12 +83,7 @@ def send_error_to_gpt(file_path, args, error_message, model):
         file_with_lines.append(str(i + 1) + ": " + line)
     file_with_lines = "".join(file_with_lines)
 
-    with open("prompt.txt") as f:
-        initial_prompt_text = f.read()
-
     prompt = (
-        initial_prompt_text +
-        "\n\n"
         "Here is the script that needs fixing:\n\n"
         f"{file_with_lines}\n\n"
         "Here are the arguments it was provided:\n\n"
@@ -51,28 +95,27 @@ def send_error_to_gpt(file_path, args, error_message, model):
     )
 
     # print(prompt)
+    messages = [
+        {
+            "role": "system",
+            "content": SYSTEM_PROMPT,
+        },
+        {
+            "role": "user",
+            "content": prompt,
+        },
+    ]
 
-    response = openai.ChatCompletion.create(
-        model=model,
-        messages=[
-            {
-                "role": "user",
-                "content": prompt,
-            }
-        ],
-        temperature=1.0,
-    )
-
-    return response.choices[0].message.content.strip()
+    return json_validated_response(model, messages)
 
 
-# Added the flag confirm. Once user use flag confirm then it will ask for confirmation before applying the changes.
-def apply_changes(file_path, changes_json, confirm=False):
+def apply_changes(file_path, changes: list, confirm=False):
+    """
+    Pass changes as loaded json (list of dicts)
+    """
     with open(file_path, "r") as f:
         original_file_lines = f.readlines()
 
-    changes = json.loads(changes_json)
-
     # Filter out explanation elements
     operation_changes = [change for change in changes if "operation" in change]
     explanations = [
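The filtering step at the top of the reworked `apply_changes` separates operation entries from explanation entries in the parsed list, since the prompt asks the model to mix both in one JSON array. Its behavior on a small made-up input (a sketch; the sample dicts are illustrative, not from the repo):

```python
changes = [
    {"explanation": "off-by-one in the loop bound"},
    {"operation": "Replace", "line": 3, "content": "for i in range(n):"},
]

# entries with an "operation" key are edits; the rest are explanations
operation_changes = [c for c in changes if "operation" in c]
explanations = [c["explanation"] for c in changes if "explanation" in c]

print(len(operation_changes), explanations)  # → 1 ['off-by-one in the loop bound']
```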
@@ -137,8 +180,7 @@ def apply_changes(file_path, changes_json, confirm=False):
     print("Changes applied.")
 
 
-# Added the flag confirm. Once user use flag confirm then it will ask for confirmation before applying the changes.
-def main(script_name, *script_args, revert=False, model="gpt-4", confirm=False):
+def main(script_name, *script_args, revert=False, model=DEFAULT_MODEL, confirm=False):
     if revert:
         backup_file = script_name + ".bak"
         if os.path.exists(backup_file):
@@ -169,6 +211,7 @@ def main(script_name, *script_args, revert=False, model="gpt-4", confirm=False):
         error_message=output,
         model=model,
     )
+
     apply_changes(script_name, json_response, confirm=confirm)
     cprint("Changes applied. Rerunning...", "blue")
 