ZipPy: Fast method to classify text as AI or human-generated

This is a research repo for fast AI detection using compression. While there are a number of existing LLM detection systems, they all use a large model trained on either an LLM or its training data to calculate the probability of each word given the preceding words, then compute a score in which a higher share of high-probability tokens makes the text more likely to be AI-generated. The techniques and tools in this repo aim for a faster approximation that is embeddable and more scalable.

Compression-based detector (zippy.py and nlzmadetect)

ZipPy uses either the LZMA or zlib compression ratio as a way to indirectly measure the perplexity of a text. Compression ratios have been used in the past to detect anomalies in network data for intrusion detection, so if perplexity is roughly a measure of anomalous tokens, it may be possible to use compression to detect low-perplexity text. LZMA and zlib build a dictionary of previously seen tokens and then use those in place of future occurrences. The dictionary size, token length, etc. are all dynamic (though influenced by the 'preset' of 0-9, with 0 being the fastest but offering worse compression than 9). The basic idea is to 'seed' a compression stream with a corpus of AI-generated text (ai-generated.txt) and then compare the compression ratio of the seed data alone with that of the seed plus the sample. Samples that match the seed in word choice, structure, etc. will achieve a higher compression ratio because similar tokens are already in the dictionary, while novel words and structures will appear anomalous to the seeded dictionary, resulting in a worse compression ratio.
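The seed-and-compare idea can be sketched with Python's standard lzma module alone. Everything below — the toy seed corpus, the per-byte normalization, and the zero threshold — is illustrative only and is not ZipPy's actual implementation (zippy.py's seed corpus, scoring, and threshold differ):

```python
import lzma

# Toy stand-in for the repo's ai-generated.txt seed corpus (hypothetical text).
AI_SEED = (
    "As an AI language model, I can help you with a wide range of tasks. "
    "Here are some key points to consider when writing an essay. "
) * 20


def compressed_size(text: str, preset: int = 9) -> int:
    """Length in bytes of the LZMA-compressed text."""
    return len(lzma.compress(text.encode("utf-8"), preset=preset))


def compression_score(sample: str, preset: int = 9) -> float:
    """Higher score -> sample compresses well against the AI seed (more AI-like);
    lower score -> sample looks anomalous to the seed (more human-like)."""
    seed_size = compressed_size(AI_SEED, preset)
    combined_size = compressed_size(AI_SEED + sample, preset)
    # Compressed bytes added per byte of sample text: small when the sample's
    # tokens are already in the seeded dictionary, large when they are novel.
    marginal_ratio = (combined_size - seed_size) / len(sample.encode("utf-8"))
    seed_ratio = seed_size / len(AI_SEED.encode("utf-8"))
    return seed_ratio - marginal_ratio


def classify(sample: str) -> str:
    # Zero is an arbitrary illustrative threshold, not ZipPy's tuned one.
    return "AI" if compression_score(sample) > 0 else "Human"
```

Text echoing the seed corpus compresses with little overhead and scores higher than text full of words the seeded dictionary has never seen; the real detector refines this with chunking, length normalization, and a threshold fit against its test sets.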

Current evaluation

Some of the leading LLM detection tools are OpenAI's model detector (v2), Content at Scale, GPTZero, CrossPlag's AI detector, and RoBERTa. Below, each of them is compared with both the LZMA and zlib detectors across the test datasets:

ROC curve of detection tools
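The comparison above boils down to sweeping a decision threshold over each detector's scores and plotting true-positive rate against false-positive rate. A minimal stdlib-only sketch of that computation (toy scores and labels, not the repo's actual test data or its plot_rocs.py code):

```python
def roc_points(scores, labels):
    """Compute (fpr, tpr) points by sweeping a threshold over the scores.

    labels: 1 = AI-generated, 0 = human; higher score = more AI-like.
    Starts at (0, 0) and, after all samples are admitted, ends at (1, 1).
    """
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points


# Toy example: AI samples mostly (but not always) score higher than human ones.
scores = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]
labels = [1, 1, 0, 1, 0, 0]
curve = roc_points(scores, labels)
```

A perfect detector's curve hugs the top-left corner; a coin-flip detector traces the diagonal, which is what the charts in test_results/ make visually comparable across tools.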

Installation

You can install zippy in one of two ways:

Using Python/pip

Via pip:

pip3 install thinkst-zippy

Or from source:

python3 setup.py build && python3 setup.py sdist && pip3 install dist/*.tar.gz

Now you can import zippy in other scripts.

Using pkgx

pkgx install zippy # or run it directly `pkgx zippy -h`

Usage

ZipPy will read files passed as command-line arguments, or will read from stdin so that text can be piped to it.

Once you've installed zippy it will add a new script (zippy) that you can use directly:

$ zippy -h
usage: zippy [-h] [-p P] [-e {zlib,lzma,brotli,ensemble}] [-s | sample_files ...]

positional arguments:
  sample_files          Text file(s) containing the sample to classify

options:
  -h, --help            show this help message and exit
  -p P                  Preset to use with compressor, higher values are slower but provide better compression
  -e {zlib,lzma,brotli,ensemble}
                        Which compression engine to use: lzma, zlib, brotli, or an ensemble of all engines
  -s                    Read from stdin until EOF is reached instead of from a file
$ zippy samples/human-generated/about_me.txt 
samples/human-generated/about_me.txt
('Human', 0.06013429262166636)

If you want to use the ZipPy technology in your browser, check out the Chrome extension or the Firefox extension, which run ZipPy in-browser to flag potentially AI-generated content.