From 56df9a71d22e87612d790be11003b03b84151fb2 Mon Sep 17 00:00:00 2001
From: Itezaz-ul-Hassan
Date: Fri, 14 Apr 2023 15:42:29 +0500
Subject: [PATCH] Add flags and their usage

---
 README.md | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 55813f7..e6f4efb 100644
--- a/README.md
+++ b/README.md
@@ -13,8 +13,9 @@ For a quick demonstration see my [demo video on twitter](https://twitter.com/bio
     python3 -m venv venv
     source venv/bin/activate
     pip install -r requirements.txt
+    cp .env.sample .env
 
-Add your openAI api key to `openai_key.txt` - _warning!_ by default this uses GPT-4 and may make many repeated calls to the api.
+Add your OpenAI API key to `.env`
 
 ## Example Usage
 
@@ -24,7 +25,19 @@ To run with gpt-4 (the default, tested option):
 
 You can also run with other models, but be warned they may not adhere to the edit format as well:
 
-    python wolverine.py --model=gpt-3.5-turbo buggy_script.py "subtract" 20 3
+    python wolverine.py --model=gpt-3.5-turbo -f buggy_script.py "subtract" 20 3
+
+
+## Flags and their usage
+
+- To run with a specific model, pass the `--model` or `-m` flag with the model name
+- To specify the buggy script, pass the `-f` or `--flag` flag with the script name
+- To keep applying the suggested changes and re-running the script until it succeeds, pass the `-y` or `--yes` flag
+- To revert the script to its original state, pass the `-r` or `--revert` flag
+
+## Sample full command
+
+    python wolverine.py --model=gpt-3.5-turbo -f buggy_script.py -y "subtract" 20 3
 
 ## Future Plans
 
@@ -32,7 +45,7 @@ This is just a quick prototype I threw together in a few hours. There are many p
 
 - add flags to customize usage, such as asking for user confirmation before running changed code
 - further iterations on the edit format that GPT responds in. Currently it struggles a bit with indentation, but I'm sure that can be improved
-- a suite of example buggy files that we can test prompts on to ensure reliablity and measure improvement
+- a suite of example buggy files that we can test prompts on to ensure reliability and measure improvement
 - multiple files / codebases: send GPT everything that appears in the stacktrace
 - graceful handling of large files - should we just send GPT relevant classes / functions?
 - extension to languages other than python
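The patch documents the `-r`/`--revert` flag but only shows a sample command for `-y`. A hypothetical invocation of the revert flag, assuming it combines with `-f` the same way the other flags do (not taken from the patch itself):

    # revert buggy_script.py to its original state (assumed flag combination)
    python wolverine.py -f buggy_script.py --revert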