ZipPy: Fast method to classify text as AI or human-generated
 
 
 
 


ai-detect: Fast methods to classify text as AI or human-generated

This is a research repo for fast AI-detection methods as we experiment with different techniques. While a number of LLM detection systems already exist, they all use a large model, trained on either an LLM or its training data, to calculate the probability of each word given the preceding context, and then compute a score in which a higher proportion of high-probability tokens indicates the text is more likely AI-generated. The techniques and tools in this repo aim for faster approximations that are embeddable and more scalable.
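To illustrate the scoring idea behind those perplexity-based detectors, here is a deliberately tiny sketch using a toy bigram model in place of an LLM (real detectors use a large neural model, and the corpus, smoothing constant, and function names below are illustrative, not from this repo):

```python
import math
from collections import Counter, defaultdict

def train_bigrams(corpus: str):
    """Count how often each word follows each preceding word."""
    tokens = corpus.lower().split()
    ctx = defaultdict(Counter)
    for prev, cur in zip(tokens, tokens[1:]):
        ctx[prev][cur] += 1
    return ctx

def avg_logprob(sample: str, ctx) -> float:
    """Average log-probability of each word given the preceding word.

    Higher (closer to zero) means the text is full of high-probability
    tokens -- the signal perplexity-based detectors treat as 'AI-like'.
    """
    tokens = sample.lower().split()
    lps = []
    for prev, cur in zip(tokens, tokens[1:]):
        total = sum(ctx[prev].values())
        count = ctx[prev][cur]
        # Add-one smoothing over a nominal vocabulary size so unseen
        # pairs get a small but nonzero probability.
        lps.append(math.log((count + 1) / (total + 1000)))
    return sum(lps) / len(lps)
```

Text whose word transitions match the training data scores higher than text with novel transitions; a real detector does the same thing, but with an LLM supplying the per-token probabilities.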

LZMA compression detector (lzma_detect.py and nlzmadetect)


This is the first attempt, using LZMA compression ratios as a way to indirectly measure the perplexity of a text. Compression ratios have been used in the past to detect anomalies in network data for intrusion detection, so if perplexity is roughly a measure of anomalous tokens, it may be possible to use compression to detect low-perplexity text. LZMA builds a dictionary of seen tokens and then uses those in place of future occurrences. The dictionary size, token length, etc. are all dynamic (though influenced by the 'preset' of 0-9, with 0 being the fastest but giving worse compression than 9). The basic idea is to 'seed' an LZMA compression stream with a corpus of AI-generated text (ai-generated.txt) and then compare the compression ratio of the seed data alone with that of the seed plus the sample appended. Samples that follow the seed closely in word choice, structure, etc. will achieve a higher compression ratio due to the prevalence of similar tokens in the dictionary; novel words and structures will appear anomalous to the seeded dictionary, resulting in a worse compression ratio.
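The idea can be sketched in a few lines with Python's standard-library `lzma` module. This is a minimal illustration, not the repo's actual implementation: the inline `PRELUDE` stands in for ai-generated.txt, and the preset value and score convention are assumptions for the sketch.

```python
import lzma

# Hypothetical stand-in for the ai-generated.txt seed corpus.
PRELUDE = (
    "Artificial intelligence has transformed many industries in recent years. "
    "Machine learning models can now generate fluent text on almost any topic. "
    "These models are trained on large corpora and predict the next token. "
) * 3

def compression_ratio(data: bytes, preset: int = 2) -> float:
    """Compressed size over original size; lower means better compression."""
    return len(lzma.compress(data, preset=preset)) / len(data)

def score(sample: str, prelude: str = PRELUDE) -> float:
    """Positive when appending the sample compresses better than the seed alone.

    Tokens already in the seeded dictionary cost little to encode, so
    AI-prelude-like text improves the ratio; novel (human) text worsens it.
    """
    baseline = compression_ratio(prelude.encode())
    combined = compression_ratio((prelude + " " + sample).encode())
    return baseline - combined
```

A sample lifted from the prelude's style scores higher than a string of novel tokens; thresholding that score is the whole classifier.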

Current evaluation

The leading LLM detection tools are OpenAI's model detector (v2), GPTZero, and the RoBERTa-based OpenAI detector. Each is compared with the LZMA detector across all the test datasets:

ROC curve of detection tools

*It should be noted that the evaluation is skewed by selecting only a subset of each dataset, as the DNN detectors perform better on more diverse inputs (e.g., code, foreign languages, etc.), whereas the LZMA-based detector works best on inputs closer in style to the prelude data (i.e., English prose).
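For reference, an ROC curve like the one above is built by sweeping a threshold over each detector's scores and plotting true-positive rate against false-positive rate. The repo's plot_rocs.py presumably handles this with a standard library such as scikit-learn; the pure-Python sketch below (with illustrative function names, and no careful handling of tied scores) shows the computation:

```python
def roc_points(scores, labels):
    """ROC points from detector scores and 1/0 labels (1 = AI-generated).

    Sorting by score descending and lowering the threshold one sample at a
    time traces the curve from (0, 0) to (1, 1). Assumes both classes are
    present; ties between scores are not handled rigorously here.
    """
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    fpr, tpr, tp, fp = [0.0], [0.0], 0, 0
    for _score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        tpr.append(tp / pos)
        fpr.append(fp / neg)
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve via the trapezoid rule."""
    return sum((fpr[i + 1] - fpr[i]) * (tpr[i + 1] + tpr[i]) / 2
               for i in range(len(fpr) - 1))
```

A detector whose scores perfectly separate the classes yields an AUC of 1.0; a random detector hovers around 0.5, which is what the curves in the figure are comparing.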