Mirror of https://github.com/biobootloader/wolverine
Add flags and their usage
parent 0995bd428b
commit 56df9a71d2
README.md: 19 lines changed

@@ -13,8 +13,9 @@ For a quick demonstration see my [demo video on twitter](https://twitter.com/bio
     python3 -m venv venv
     source venv/bin/activate
     pip install -r requirements.txt
+    cp .env.sample .env

-Add your openAI api key to `openai_key.txt` - _warning!_ by default this uses GPT-4 and may make many repeated calls to the api.
+Add your openAI api key to `.env`

 ## Example Usage
@@ -24,7 +25,19 @@ To run with gpt-4 (the default, tested option):

 You can also run with other models, but be warned they may not adhere to the edit format as well:

-    python wolverine.py --model=gpt-3.5-turbo buggy_script.py "subtract" 20 3
+    python wolverine.py --model=gpt-3.5-turbo -f buggy_script.py "subtract" 20 3
+
+## Flags and their usage
+
+- To run with a specific model, pass the `--model` or `-m` flag with the model name
+- To pass the buggy script name, pass the `-f` or `--flag` flag with the script name
+- To rerun the updated script until it succeeds, pass the `-y` or `--yes` flag
+- To revert the script to its original state, pass the `-r` or `--revert` flag
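The flag list above maps naturally onto an argparse interface. The sketch below mirrors the documented flags only; the defaults and structure are assumptions, not wolverine's actual source:

```python
# Sketch of an argparse setup matching the documented flags.
# Defaults and internal names are assumptions, not wolverine's real code.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Self-healing script runner")
    parser.add_argument("-f", "--flag", dest="script",
                        help="path to the buggy script")
    parser.add_argument("-m", "--model", default="gpt-4",
                        help="model name to use")
    parser.add_argument("-y", "--yes", action="store_true",
                        help="keep re-running the updated script until it succeeds")
    parser.add_argument("-r", "--revert", action="store_true",
                        help="revert the script to its original state")
    parser.add_argument("args", nargs="*",
                        help="arguments passed through to the script")
    return parser

# Parsing the sample full command from this README:
ns = build_parser().parse_args(
    ["--model=gpt-3.5-turbo", "-f", "buggy_script.py", "-y", "subtract", "20", "3"]
)
# ns.script == "buggy_script.py", ns.model == "gpt-3.5-turbo", ns.yes is True
```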
+
+## Sample full command
+
+    python wolverine.py --model=gpt-3.5-turbo -f buggy_script.py -y "subtract" 20 3
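For context, `buggy_script.py` in the commands above receives an operation name and two operands. A minimal hypothetical stand-in (not the repo's actual file) might look like:

```python
# Hypothetical stand-in for buggy_script.py -- the real file lives in the
# wolverine repo; this sketch only shows the `"subtract" 20 3` call shape.
import sys

def subtract(a, b):
    return a - b

if __name__ == "__main__":
    op, x, y = sys.argv[1], float(sys.argv[2]), float(sys.argv[3])
    if op == "subtract":
        print(subtract(x, y))
```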

 ## Future Plans

@@ -32,7 +45,7 @@ This is just a quick prototype I threw together in a few hours. There are many p
 - add flags to customize usage, such as asking for user confirmation before running changed code
 - further iterations on the edit format that GPT responds in. Currently it struggles a bit with indentation, but I'm sure that can be improved
-- a suite of example buggy files that we can test prompts on to ensure reliablity and measure improvement
+- a suite of example buggy files that we can test prompts on to ensure reliability and measure improvement
 - multiple files / codebases: send GPT everything that appears in the stacktrace
 - graceful handling of large files - should we just send GPT relevant classes / functions?
 - extension to languages other than python