
PicoGPT

Accompanying blog post: GPT in 60 Lines of NumPy


You've seen openai/gpt-2.

You've seen karpathy/minGPT.

You've even seen karpathy/nanoGPT!

But have you seen picoGPT??!?

picoGPT is an unnecessarily tiny and minimal implementation of GPT-2 in plain NumPy. The entire forward pass is just 40 lines of code.
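To give a sense of what "plain NumPy" means here: the model is assembled out of a handful of small array functions. A minimal sketch of the kind of building blocks involved (the real versions live in gpt2.py):

```python
import numpy as np

def gelu(x):
    # GPT-2's activation function (tanh approximation of GELU)
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def softmax(x):
    # numerically stable softmax over the last axis
    exp_x = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return exp_x / np.sum(exp_x, axis=-1, keepdims=True)

def layer_norm(x, g, b, eps=1e-5):
    # normalize over the last axis, then scale by g and shift by b
    mean = np.mean(x, axis=-1, keepdims=True)
    variance = np.var(x, axis=-1, keepdims=True)
    return g * (x - mean) / np.sqrt(variance + eps) + b
```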

picoGPT features:

  • Fast? Nah, picoGPT is megaSLOW 🐌
  • Training code? Error, 404 not found
  • Batch inference? picoGPT is civilized, single file line, one at a time only
  • top-p sampling? top-k? temperature? categorical sampling?! Nope. Greedy decoding only (sketched right after this list)
  • Readable? gpt2.py, sure. gpt2_pico.py, not so much
  • Smol??? YESS!!! TEENIE TINY in fact 🤏
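
Greedy decoding is the one sampling strategy on offer: at every step, pick the token with the highest logit and append it to the input. A minimal sketch of that loop, assuming a forward-pass function that maps token ids to a [n_tokens, vocab_size] array of logits (the function name and signature here are illustrative, not the exact ones in gpt2.py):

```python
import numpy as np

def generate_greedy(input_ids, forward_pass, n_tokens_to_generate):
    # `forward_pass` is a stand-in for the model: it takes a list of
    # token ids and returns logits of shape [n_tokens, vocab_size].
    ids = list(input_ids)
    for _ in range(n_tokens_to_generate):
        logits = forward_pass(ids)            # run the model over the whole sequence so far
        next_id = int(np.argmax(logits[-1]))  # greedy: highest-logit token wins
        ids.append(next_id)                   # feed it back in and repeat
    return ids[len(input_ids):]               # return only the newly generated token ids
```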

A quick breakdown of each of the files:

  • encoder.py contains the code for OpenAI's BPE Tokenizer, taken straight from their gpt-2 repo.
  • utils.py contains the code to download and load the GPT-2 model weights, tokenizer, and hyper-parameters.
  • gpt2.py contains the actual GPT model and generation code, which we can run as a Python script (see the sketch after this list).
  • gpt2_pico.py is the same as gpt2.py, but in even fewer lines of code. Why? Because why not 😎👍.
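
Roughly, the pieces fit together like this; a minimal sketch assuming utils.py exposes a load_encoder_hparams_and_params helper and gpt2.py exposes a generate function (check the files for the exact signatures):

```python
from utils import load_encoder_hparams_and_params  # downloads the weights on first run
from gpt2 import generate                          # autoregressive generation loop

# load the BPE tokenizer, hyper-parameters, and GPT-2 weights for the 124M model
encoder, hparams, params = load_encoder_hparams_and_params("124M", "models")

# encode the prompt to token ids, generate 40 new tokens greedily, decode back to text
input_ids = encoder.encode("Alan Turing theorized that computers would one day become")
output_ids = generate(input_ids, params, hparams["n_head"], n_tokens_to_generate=40)
print(encoder.decode(output_ids))
```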

Dependencies

pip install -r requirements.txt

Tested on Python 3.9.10.

Usage

python gpt2.py "Alan Turing theorized that computers would one day become"

Which generates

 the most powerful machines on the planet.

The computer is a machine that can perform complex calculations, and it can perform these calculations in a way that is very similar to the human brain.

You can also control the number of tokens to generate, the model size (one of ["124M", "355M", "774M", "1558M"]), and the directory to save the models:

python gpt2.py \
    "Alan Turing theorized that computers would one day become" \
    --n_tokens_to_generate 40 \
    --model_size "124M" \
    --models_dir "models"