What is GPTZero?
GPTZero is a classification model that predicts whether a document was written by a large language model, providing predictions on a sentence, paragraph, and document level. GPTZero was trained on a large, diverse corpus of human-written and AI-generated text, with a focus on English prose.
When and how should I use GPTZero?
Our users have seen the use of AI-generated text proliferate into education, certification, hiring and recruitment, social writing platforms, disinformation, and beyond. We've created GPTZero as a tool to highlight the possible use of AI in writing text. In particular, we focus on classifying AI use in prose.
Our classifier returns a document-level score, completely_generated_prob, that specifies the probability the entire document was AI-generated. We recommend using this score when deciding whether there is significant use of AI in generating the text.
The sentence-level classification (for example, the highlighted text) should be used when our classifier has identified a mix of AI-generated and human-written content. In other words, a single highlighted sentence should not be taken as evidence that an essay is partially AI-generated. Rather, when a large portion of the document is identified as AI-generated, the highlighted sentences indicate where in the document we believe this occurred.
Overall, our classifier is intended to be used to flag situations in which a conversation can be started (for example, between educators and students) to drive further inquiry and spread awareness of the risks of using AI in written work.
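As a sketch of how the document-level and sentence-level signals might be combined in practice, the snippet below interprets a hypothetical result object. The response shape (a completely_generated_prob field plus per-sentence generated_prob scores) is an assumption for illustration, not the exact API schema:

```python
# Hypothetical response shape -- field names are assumptions for
# illustration, not the documented GPTZero API schema.
def interpret(doc_result, doc_threshold=0.65, sentence_threshold=0.65):
    """Turn document- and sentence-level probabilities into a verdict."""
    doc_prob = doc_result["completely_generated_prob"]
    if doc_prob >= doc_threshold:
        verdict = "likely AI-generated"
    else:
        verdict = "likely human-written"
    # Sentence-level highlights are only meaningful when a large
    # portion of the document is flagged, per the guidance above.
    highlights = [
        s["text"]
        for s in doc_result.get("sentences", [])
        if s["generated_prob"] >= sentence_threshold
    ]
    return verdict, highlights

result = {
    "completely_generated_prob": 0.91,
    "sentences": [
        {"text": "The mitochondria is the powerhouse of the cell.",
         "generated_prob": 0.88},
        {"text": "My dog ate my homework last Tuesday.",
         "generated_prob": 0.10},
    ],
}
verdict, highlights = interpret(result)
print(verdict)          # likely AI-generated
print(len(highlights))  # 1
```

Note that the sentence highlights are reported alongside, not instead of, the document-level verdict, mirroring the guidance that they locate AI use rather than prove it on their own.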
Does GPTZero only detect ChatGPT outputs?
No, GPTZero works robustly across a range of AI language models, including but not limited to ChatGPT, GPT-3, GPT-2, LLaMA, and AI services based on those models.
What are the limitations of the classifier?
The nature of AI-generated content is changing constantly. While we build more robust models for GPTZero, we recommend that educators take these results as one of many pieces in a holistic assessment of student work. Edge cases will always exist, both where AI-generated text is classified as human-written and where human-written text is classified as AI-generated.
The accuracy of our model increases as more text is submitted. As such, accuracy at the document level will be greater than accuracy at the paragraph level, which in turn will be greater than accuracy at the sentence level.
The accuracy of our model also increases for text similar in nature to our dataset. While we train on a highly diverse set of human and AI-generated text, the majority of our dataset is in English prose, written by adults.
Our classifier is not trained to identify AI-generated text that has been heavily modified after generation (although we estimate this accounts for a minority of AI use at the moment).
Currently, our classifier can sometimes flag other machine-generated or highly formulaic text as AI-generated; as such, it should be used on the more descriptive portions of a text.
What data did you train your model on?
We trained our models on a dataset of paired human-written and AI-generated text. Our human-written text spans student essays, news articles, and question-and-answer datasets covering multiple disciplines in the sciences and humanities. For each article of human-written text, we generate a corresponding article with AI to ensure there isn't topic-level bias in our dataset. Finally, we train our model on an equal balance of human-written and AI-generated articles.
How do you know your model works?
We test our models on a never-before-seen set of human and AI articles from a section of our large-scale dataset, in addition to a smaller set of challenging articles that are outside its training distribution. We classify 99% of the human-written articles correctly, and 85% of the AI-generated articles correctly, when we set a threshold of 0.65 on the completely_generated_prob returned by our API (human if below 0.65, AI if above 0.65). Our classifier achieves an AUC score of 0.98.
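These two metrics can be computed mechanically on any labeled validation set. The sketch below (with made-up scores, not our data) shows accuracy at a fixed threshold, and AUC as the probability that a randomly chosen AI document outscores a randomly chosen human document:

```python
def accuracy_at(scores, labels, threshold=0.65):
    """Fraction of documents labeled correctly when scores >= threshold
    are called AI (label 1) and scores below are called human (label 0)."""
    correct = sum((s >= threshold) == bool(y) for s, y in zip(scores, labels))
    return correct / len(scores)

def auc(scores, labels):
    """AUC = probability that a randomly chosen AI document receives a
    higher score than a randomly chosen human document (ties count 1/2)."""
    ai = [s for s, y in zip(scores, labels) if y == 1]
    human = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(
        1.0 if a > h else 0.5 if a == h else 0.0
        for a in ai for h in human
    )
    return wins / (len(ai) * len(human))

# Toy scores (illustrative only): label 1 = AI-generated, 0 = human-written.
scores = [0.95, 0.80, 0.40, 0.70, 0.10, 0.05, 0.30, 0.20]
labels = [1,    1,    1,    1,    0,    0,    0,    0]
print(accuracy_at(scores, labels))  # 0.875 (one AI doc falls below 0.65)
print(auc(scores, labels))          # 1.0 -- every AI score beats every human score
```

An AUC near 1.0 means the classifier ranks AI documents above human documents almost regardless of where the threshold is placed; the threshold then only decides which error type to trade for the other.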
How do I turn the probabilities from your API into outcomes?
For reference, our API is detailed here.
We recommend using completely_generated_prob to understand whether a document was completely generated by AI. On our validation dataset, here is how the results change when you set all documents with completely_generated_prob under the threshold as human, and above as AI:
- At a threshold of 0.65, 85% of AI documents are classified as AI, and 99% of human documents are classified as human
- At a threshold of 0.16, 96% of AI documents are classified as AI, and 96% of human documents are classified as human
We recommend using a threshold of 0.65 or higher to minimize the number of false positives, as we think it is currently more harmful to falsely detect human writing as AI than vice versa.
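A minimal sketch of turning the probability into an outcome, with the two operating points quoted above hard-coded for reference (these are validation-set numbers, not guarantees on new data):

```python
# Operating points quoted in this FAQ; validation-set numbers only.
OPERATING_POINTS = {
    0.65: {"ai_recall": 0.85, "human_recall": 0.99},  # fewer false positives
    0.16: {"ai_recall": 0.96, "human_recall": 0.96},  # more balanced
}

def classify(completely_generated_prob, threshold=0.65):
    """Map the document-level probability to an outcome. The default of
    0.65 follows the recommendation to minimize false positives on
    human writing."""
    return "AI" if completely_generated_prob >= threshold else "human"

print(classify(0.70))                  # AI
print(classify(0.30))                  # human
print(classify(0.30, threshold=0.16))  # AI at the more aggressive threshold
```

Raising the threshold trades missed AI documents for fewer human documents wrongly flagged, which is the direction of error we consider less harmful.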
Are you storing data from API calls?
No. We do not store or collect the documents passed into any calls to our API. We have chosen to err on the side of caution rather than store data from any organization using our API.
Why GPTZero over other detection models?
- GPTZero is the most accurate AI detector across use-cases, verified by multiple independent sources, including TechCrunch, which called us the best and most reliable AI detector after testing seven others.
- GPTZero builds and constantly improves our own technology. In our competitor analysis, we found that not only does GPTZero perform better, but some competitor services simply forward the outputs of free, open-source models without additional training.
- In contrast to many other models, GPTZero is fine-tuned for student writing and academic prose. By doing so, we've seen large improvements in accuracy for this use-case.