Introducing Pangram 3.2

Katherine Thai
February 27, 2026

Announcing Pangram 3.2

Pangram is very excited to release a new AI detection model, Pangram 3.2. Like its predecessors, Pangram 3.1 and Pangram 3.0, it is based on the EditLens architecture described in our ICLR 2026 paper. Users can expect an incremental but noticeable improvement in recall, the share of AI-generated text the detector catches, while maintaining the same industry-low false positive rate, ensuring that false accusations of AI usage remain exceedingly rare.

Model Card

Adopting the best practices of LLM releases, we have decided to begin releasing Model Cards alongside our detector updates: essentially "Nutrition Labels" for AI models. Our model cards describe the training architecture and framework, details about the training dataset, relevant evaluation results, and changes made that may have an impact on the detector's behavior. We also describe exact specifications of the model's inputs and outputs, supported languages, and the kinds of conditions under which we'd expect Pangram to perform well, and where it is more limited.

What to Expect

You will probably notice that Pangram 3.2 is more sensitive than Pangram 3.1; in other words, it catches more AI text. This is due to improvements in humanizer detection, detection of Claude 4.6, greater sensitivity to shorter AI-generated texts, an expanded training dataset, and better-tuned hyperparameters in the EditLens architecture.

What has Improved

Humanizer Detection and Adversarial Prompting

The single largest improvement in Pangram 3.2 is its ability to detect [humanized AI-generated text](https://www.pangram.com/blog/humanizers-aug-25). On our internal humanizer evaluation set, it improves the detection rate of humanized text by 4x compared to Pangram 3.1. We also see a roughly 3x improvement on our internal evaluation of "adversarial prompts": texts generated by a language model that has been instructed to intentionally add mistakes and write in a style that evades AI detection.

This is particularly important in education, where students are increasingly using humanizer tools or prompting language models in ways that keep the resulting text from coming across as "too AI-generated."

Detection of AI-Generated Short Social Media Posts

Given the virality of our X bot, which people have used to check tweets for AI-generated text, we have recently focused on improving detection of short, tweet-length content online. We have also decreased the minimum word count from 75 to 50, as we have become more confident in our ability to distinguish AI-generated posts in the 50-75 word range.

At a false positive rate equivalent to Pangram 3.1's, Pangram 3.2 reduces the false negative rate on short social media posts by 17%.
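To make the metric concrete, here is a minimal sketch of how recall, false negative rate, and false positive rate relate. The counts below are hypothetical, chosen only to illustrate what a 17% relative reduction in false negatives at a fixed false positive rate means; they are not Pangram's actual evaluation numbers.

```python
def rates(tp, fn, fp, tn):
    """Compute recall, false negative rate, and false positive rate
    from confusion-matrix counts."""
    recall = tp / (tp + fn)  # fraction of AI text correctly caught
    fnr = fn / (tp + fn)     # fraction of AI text missed (1 - recall)
    fpr = fp / (fp + tn)     # fraction of human text wrongly flagged
    return recall, fnr, fpr

# Hypothetical "before": 400 of 500 AI posts caught, 1 of 1000 human posts flagged.
recall_old, fnr_old, fpr_old = rates(tp=400, fn=100, fp=1, tn=999)

# Hypothetical "after": a 17% relative drop in false negatives (100 -> 83)
# while the false positive rate stays unchanged.
recall_new, fnr_new, fpr_new = rates(tp=417, fn=83, fp=1, tn=999)

print(f"FNR: {fnr_old:.3f} -> {fnr_new:.3f}, FPR unchanged: {fpr_old == fpr_new}")
```

The key point is that the sensitivity gain comes entirely from the false negative side: the denominator of the false positive rate is human-written text, and that rate is held constant while more AI-generated posts are caught.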

Claude 4.6 Improvements

A number of users have reported false negatives with Claude Opus 4.6 specifically. We have addressed this by regenerating our dataset with Claude Opus 4.6 data included. After evaluating on our internal challenge datasets (particularly difficult examples) and red teaming, we now feel confident that Pangram is able to detect Claude Opus 4.6 as well as any other frontier LLM.

What's Next

AI-generated Math, Code, and Science

AI-generated code and math are not currently detected with high recall. We are now focusing on these use cases due to high customer demand. While math and code are more formulaic than prose, and thus harder to detect, some of our early experiments are showing promising results.

Continued Iteration on Humanizers

The humanizer market is constantly evolving, and a wider variety of humanizers has hit the market in recent months. We are building more advanced techniques to detect humanizers, which we hope to share publicly soon.

The Future

Pangram is committed to always staying at the edge of the frontier of what's possible with AI detection. We are constantly evolving as LLM capability continues to rise.

We are also hiring! Check out our careers page to help us build the world's best AI detectors.
