ZipPy: Fast method to classify text as AI or human-generated
 
 
 
 

This is a research repo for fast AI detection using compression. While a number of LLM detection systems exist, they all use a large model trained on either an LLM or its training data to calculate the probability of each word given the preceding context, then compute a score in which text with more high-probability tokens is more likely to be AI-originated. The techniques and tools in this repo aim for a faster approximation that is embeddable and more scalable.
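For intuition, the perplexity that these model-based detectors rely on is just the exponential of the negative mean per-token log-probability. A minimal sketch (the log-probability values would come from whatever language model a detector uses; this function only does the final arithmetic):

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity = exp(-mean log-probability).

    Lower perplexity means the model found each token predictable,
    which detectors treat as evidence the text is machine-generated.
    """
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# A text where the model assigns every token probability 1/2
# has perplexity exactly 2.
```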

Compression-based detector (zippy.py and nlzmadetect)

ZipPy uses either the LZMA or zlib compression ratio as an indirect measure of the perplexity of a text. Compression ratios have been used in the past to detect anomalies in network data for intrusion detection, so if perplexity is roughly a measure of anomalous tokens, it may be possible to use compression to detect low-perplexity text. LZMA and zlib build a dictionary of previously seen tokens and then substitute dictionary references for later occurrences. The dictionary size, token length, etc. are all dynamic (though influenced by the 'preset' of 0-9, with 0 being the fastest but offering worse compression than 9). The basic idea is to 'seed' a compression stream with a corpus of AI-generated text (ai-generated.txt) and then compare the compression ratio of the seed data alone with that of the seed data plus the sample. Samples that closely follow the seed in word choice, structure, etc. achieve a higher compression ratio because similar tokens are already in the dictionary; novel words and structures appear anomalous to the seeded dictionary, resulting in a worse compression ratio.
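The seed-and-compare idea can be sketched in a few lines with Python's standard lzma module. This is a simplified illustration, not the exact scoring used by zippy.py, and the short PRELUDE string is a hypothetical stand-in for the real ai-generated.txt corpus:

```python
import lzma

# Hypothetical stand-in for the ai-generated.txt seed corpus;
# the real tool seeds the stream with a large AI-generated corpus.
PRELUDE = (
    "As an AI language model, I can help you with a wide variety of tasks. "
    "Here are some key points to consider when approaching this problem. "
) * 20

def compression_ratio(text: str, preset: int = 9) -> float:
    """Compressed size over original size; lower means more redundancy."""
    data = text.encode("utf-8")
    return len(lzma.compress(data, preset=preset)) / len(data)

def score(sample: str) -> float:
    """Compare the seed's ratio against the ratio of seed + sample.

    A higher score means the sample compressed well against the seeded
    dictionary (its tokens were already seen), i.e. it looks more AI-like;
    a lower score means the sample introduced novel material.
    """
    return compression_ratio(PRELUDE) - compression_ratio(PRELUDE + sample)
```

Samples whose vocabulary and phrasing overlap the seed barely increase the compressed size, so their score is higher than that of text full of words the dictionary has never seen.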

Current evaluation

Some of the leading LLM detection tools are OpenAI's model detector (v2), Content at Scale, GPTZero, CrossPlag's AI detector, and RoBERTa. Below, each of them is compared with both the LZMA and zlib detectors across the test datasets:

ROC curve of detection tools

Usage

ZipPy reads files passed as command-line arguments, or reads from stdin so that text can be piped to it.

First, build and install the tool:

$ python3 setup.py build && python3 setup.py install

It will install a new script (zippy) that you can use directly:

$ zippy -h
usage: zippy [-h] [-p P] [-e {zlib,lzma,brotli,ensemble}] [-s | sample_files ...]

positional arguments:
  sample_files          Text file(s) containing the sample to classify

options:
  -h, --help            show this help message and exit
  -p P                  Preset to use with compressor, higher values are slower but provide better compression
  -e {zlib,lzma,brotli,ensemble}
                        Which compression engine to use: lzma, zlib, brotli, or an ensemble of all engines
  -s                    Read from stdin until EOF is reached instead of from a file
$ zippy samples/human-generated/about_me.txt 
samples/human-generated/about_me.txt
('Human', 0.06013429262166636)

If you want to use the ZipPy technology in your browser, check out the Chrome extension or the Firefox extension, which run ZipPy in-browser to flag potentially AI-generated content.