Wolverine

About

Give your Python scripts regenerative healing abilities!

Run your scripts with Wolverine, and when they crash, GPT-4 edits them and explains what went wrong. Even if you have many bugs, it will rerun the script repeatedly until everything is fixed.

For a quick demonstration, see my demo video on Twitter.

Setup

python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
cp .env.sample .env

Add your OpenAI API key to .env

Warning! By default, Wolverine uses GPT-4 and may make many repeated calls to the API.

Example Usage

To run with gpt-4 (the default, tested option):

python wolverine.py buggy_script.py "subtract" 20 3

You can also run with other models, but be warned they may not adhere to the edit format as well:

python wolverine.py --model=gpt-3.5-turbo -f buggy_script.py "subtract" 20 3

Flags and their usage

  • To run with a specific model, pass the --model or -m flag with the model name
  • To pass the buggy script name, use the -f or --file flag with the script name
  • To keep applying and rerunning changes until the script succeeds, pass the -y or --yes flag
  • To revert the script to its original state, pass the -r or --revert flag
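The flags above could be wired up like this (an illustrative stdlib argparse sketch for reference only; the project itself parses arguments differently, in args.py):

```python
import argparse

def parse_wolverine_args(argv=None):
    # Hypothetical stdlib equivalent of the flags described above;
    # flag names follow the README, not the project's actual parser.
    parser = argparse.ArgumentParser(prog="wolverine.py")
    parser.add_argument("-m", "--model", default="gpt-4",
                        help="model name to use")
    parser.add_argument("-f", "--file", dest="script",
                        help="path to the buggy script")
    parser.add_argument("-y", "--yes", action="store_true",
                        help="keep applying fixes until the script succeeds")
    parser.add_argument("-r", "--revert", action="store_true",
                        help="restore the script to its original state")
    parser.add_argument("script_args", nargs="*",
                        help="arguments forwarded to the buggy script")
    return parser.parse_args(argv)
```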

Sample full command

python wolverine.py --model=gpt-3.5-turbo -f buggy_script.py -y "subtract" 20 3

If you want to use GPT-3.5 by default instead of GPT-4, uncomment the DEFAULT_MODEL line in .env:

DEFAULT_MODEL=gpt-3.5-turbo
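Once loaded from .env, the entry can back a simple fallback like this (an assumed pattern, not Wolverine's literal code):

```python
import os

def default_model():
    # Use the DEFAULT_MODEL environment variable (populated from .env
    # by python-dotenv) and fall back to GPT-4 when it is absent.
    return os.environ.get("DEFAULT_MODEL", "gpt-4")
```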

You can also use the --confirm=True flag, which asks yes or no before making changes to the file. If the flag is not used, changes are applied to the file automatically.

python wolverine.py buggy_script.py "subtract" 20 3 --confirm=True

Future Plans

This is just a quick prototype I threw together in a few hours. There are many possible extensions, and contributions are welcome:

  • add flags to customize usage, such as asking for user confirmation before running changed code
  • further iterations on the edit format that GPT responds in. Currently it struggles a bit with indentation, but I'm sure that can be improved
  • a suite of example buggy files that we can test prompts on to ensure reliability and measure improvement
  • multiple files / codebases: send GPT everything that appears in the stacktrace
  • graceful handling of large files - should we just send GPT relevant classes / functions?
  • extension to languages other than python

Star History

Star History Chart