ZipPy: Fast method to classify text as AI or human-generated

This is a research repo for fast AI-detection methods as we experiment with different techniques. While a number of LLM detection systems already exist, they all use a large model, trained on either an LLM or its training data, to calculate the probability of each word given the preceding ones, then compute a score where text with more high-probability tokens is more likely to be AI-originated. The techniques and tools in this repo aim for a faster approximation that is embeddable and more scalable.
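
For reference, here is a minimal sketch of the model-based scoring such systems perform, using GPT-2 via Hugging Face transformers as a stand-in scoring model; the model choice and the "lower perplexity suggests AI" reading are illustrative assumptions, not part of this repo:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in scoring model; real detectors use larger, purpose-trained models.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of the text under the scoring model.
    Lower values (more predictable text) are treated as evidence of AI
    generation in this style of detector."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean cross-entropy
        # of each token given all preceding tokens.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()
```

Loading and running a model like this is exactly the cost the compression-based approach below tries to avoid.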

LZMA compression detector (lzma_detect.py and nlzmadetect)

[CI badges: Python classification accuracy testing · Nim classification accuracy testing]

This is the first attempt, using LZMA compression ratios as a way to indirectly measure the perplexity of a text. Compression ratios have been used in the past to detect anomalies in network data for intrusion detection, so if perplexity is roughly a measure of anomalous tokens, it may be possible to use compression to detect low-perplexity text. LZMA builds a dictionary of seen tokens and then substitutes dictionary references for later occurrences of those tokens. The dictionary size, token length, etc. are all dynamic (though influenced by the 'preset' of 0-9, with 0 being the fastest but giving worse compression than 9). The basic idea is to 'seed' an LZMA compression stream with a corpus of AI-generated text (ai-generated.txt) and then compare the compression ratio of the seed data alone with that of the seed plus the sample appended. Samples that follow the seed closely in word choice, structure, etc. will achieve a higher compression ratio due to the prevalence of similar tokens in the dictionary; novel words and structures will appear anomalous to the seeded dictionary, resulting in a worse compression ratio.
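
To make that concrete, here is a minimal sketch of the seeded-compression check, assuming the repo's ai-generated.txt as the seed corpus; the preset value and the simple zero-delta threshold are illustrative, not the exact parameters used by lzma_detect.py:

```python
import lzma
from pathlib import Path

# Sketch of the seeded-compression idea described above, not the exact
# lzma_detect.py implementation. The preset and threshold are assumptions;
# only ai-generated.txt comes from the repo.
PRESET = 2  # low presets trade compression strength for speed

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size; lower means more compressible."""
    return len(lzma.compress(data, preset=PRESET)) / len(data)

def detect(sample: str, seed_path: str = "ai-generated.txt") -> str:
    seed = Path(seed_path).read_bytes()
    baseline = compression_ratio(seed)
    combined = compression_ratio(seed + b"\n" + sample.encode("utf-8"))
    # An AI-like sample reuses tokens already in the seeded dictionary, so
    # appending it improves (lowers) the overall ratio relative to the
    # baseline; novel human text tends to push the ratio up instead.
    return "AI" if combined < baseline else "Human"
```

The only per-sample cost is one extra LZMA pass over the seed plus sample, which is what makes the approach fast enough to embed compared with running a large scoring model.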