In this tutorial, we will learn how to check text for AI content using Pangram's pangram-sdk Python package. The pangram-sdk package allows developers to use Pangram's AI Content Detector API to check short pieces of text, or longer documents, for signs that the content was AI-generated.
In this tutorial, we will cover acquiring an API key, using Pangram's Python SDK, and making HTTP requests directly to Pangram's API endpoints. Please see Pangram's full API documentation for more information and usage examples.
To start, you're going to need a Pangram account. Create an account using the email address that you want your API key to be attached to. Once you've made an account, you have two options for getting an API key: sign up for the Developer Plan or apply for a researcher API key.
Pangram's Developer Plan starts at $100 per month. Included with the plan are up to 2000 API credits per month. You can contact us to unlock your account and enable usage-based pricing. Sign up for the Developer Plan to get started. Once you sign up for the developer plan, you will be able to find your API key in the API console.
Pangram also provides API keys free of charge to researchers. If you are working on a non-commercial research study, please fill out this form to apply for free API credits. We will respond to you directly with an API key and your research credit allocation.
Once you have your API key, you can add it to your environment. Run the following command, replacing the example API key with your personal API key. You can also add this command to your .bashrc, .zshrc, .env, etc. to automatically set the PANGRAM_API_KEY variable.
export PANGRAM_API_KEY="12345678-1234-abcd-0123-123456789abc"
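As an optional sanity check before calling the SDK, you can confirm the variable is actually visible to Python. This is a small standard-library sketch; the variable name PANGRAM_API_KEY is the one the SDK reads, per the setup step above:

```python
import os

def check_pangram_key() -> bool:
    """Return True if PANGRAM_API_KEY is set in the environment."""
    api_key = os.environ.get("PANGRAM_API_KEY")
    if api_key is None:
        print("PANGRAM_API_KEY is not set - export it before using the SDK.")
        return False
    # Avoid printing the full key; show only its length as confirmation.
    print(f"PANGRAM_API_KEY is set ({len(api_key)} characters).")
    return True
```

If this prints that the key is missing, double-check that you exported it in the same shell session (or restarted your shell after editing .bashrc/.zshrc).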
Make sure you have the correct Python environment enabled. Run the following command to install Pangram's Python SDK:
pip install pangram-sdk
If you use uv, you can instead use:
uv add pangram-sdk
If you use Poetry, the command would be:
poetry add pangram-sdk
First, create a Pangram Client to make requests. The Pangram Client will automatically read your API key from your environment variables.
from pangram import Pangram
pangram_client = Pangram()
You can also pass an API key in directly:
from pangram import Pangram
my_api_key = '' # Fill this in with your API key.
pangram_client = Pangram(api_key=my_api_key)
pangram_client's predict function will make a single request to Pangram's API and return the result. By default, this will only look at roughly the first 400 words. One request will use one credit.
text = "The quick brown fox jumps over the lazy dog."
result = pangram_client.predict(text)
score = result["ai_likelihood"]
text_representation_of_score = result["prediction"]
print(f"We predict that the text {text} is {text_representation_of_score}, with an AI likelihood of {score}.")
Use the predict_batch function to send a batch of queries at once, for faster processing of large datasets. One request will use one credit per item in the batch. The results returned will be an array of results in the same format as the single predict function.
text_batch = ["text1", "text2"]
results = pangram_client.predict_batch(text_batch)
for result in results:
    text = result["text"]
    score = result["ai_likelihood"]
    text_representation_of_score = result["prediction"]
    print(f"We predict that the text {text} is {text_representation_of_score}, with an AI likelihood of {score}.")
Use the predict_sliding_window function to get an accurate prediction of AI use across a longer document. This function will split the input text into windows and predict AI for every window in the batch. This function uses one credit per 1,000 words in the input text.
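Because sliding-window credits scale with input length, it can be useful to estimate cost before sending a long document. This is a rough sketch that assumes whitespace word counting and that partial windows round up to a whole credit; check Pangram's billing documentation for the exact accounting:

```python
import math

def estimate_sliding_window_credits(text: str) -> int:
    """Rough credit estimate: one credit per 1,000 words, rounding up.

    Assumes whitespace tokenization and round-up billing; verify the
    exact rule against Pangram's billing documentation.
    """
    word_count = len(text.split())
    return max(1, math.ceil(word_count / 1000))
```

For example, a 1,500-word document would cost an estimated 2 credits under these assumptions.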
text = "The quick brown fox jumps over the lazy dog."
result = pangram_client.predict_sliding_window(text)
score = result["ai_likelihood"]
text_representation_of_score = result["prediction"]
print(f"We predict that the text {text} is {text_representation_of_score}, with an AI likelihood of {score}.")
The result is a dict with the following fields:
- text: [string] the input text
- ai_likelihood: [float] a number between 0 and 1, where a value close to 1 indicates a confident prediction that the text is AI
- prediction: [string] a text description of how much AI content the text contains
- short_prediction: [string] "Human", "Mixed", or "AI"
- fraction_ai_content: [float] a float between 0 and 1, where 1 indicates that AI is present throughout the text
- windows: [list] a list of single prediction results for the text
Pangram's dashboard can natively display the results of a sliding window request. Use the predict_with_dashboard_link function to run a sliding window query and also receive a dashboard link. Just like predict_sliding_window, this function is billed at 1 credit per 1,000 words of input text.
text = "The quick brown fox jumps over the lazy dog."
result = pangram_client.predict_with_dashboard_link(text)
score = result["ai_likelihood"]
text_representation_of_score = result["prediction"]
dashboard_link = result["dashboard_link"]
print(f"We predict that the text {text} is {text_representation_of_score}, with an AI likelihood of {score}. You can see the full results at {dashboard_link}")
The result is a dict with the same fields as a predict_sliding_window result, plus one additional field:
- dashboard_link: [string] a link to a page containing the full sliding window results
All of these functions can also be accessed via HTTP. For full documentation on how to send HTTP requests to the Pangram API, please see Pangram's Inference API documentation.
Occasionally, a request to Pangram may time out or fail. To ensure that your program doesn't crash, we strongly recommend adding retries; one library we recommend for this is Tenacity.
Here's an example of using Tenacity to retry Pangram calls:
from tenacity import retry, stop_after_attempt, wait_random_exponential, retry_if_exception_type
@retry(
    retry=retry_if_exception_type((TimeoutError, ConnectionError)),
    stop=stop_after_attempt(5),
    wait=wait_random_exponential(multiplier=0.5, max=10),
    reraise=True,
)
def predict(text):
    return pangram_client.predict(text)
Here's a full example of using the Pangram SDK to check any text for AI, and get a dashboard link, with retries.
from pangram import Pangram
from tenacity import retry, stop_after_attempt, wait_random_exponential, retry_if_exception_type
api_key = ''  # Fill this in with your API key.
pangram_client = Pangram(api_key=api_key)
@retry(
    retry=retry_if_exception_type((TimeoutError, ConnectionError)),
    stop=stop_after_attempt(5),
    wait=wait_random_exponential(multiplier=0.5, max=10),
    reraise=True,
)
def predict_ai_with_link(text):
    return pangram_client.predict_with_dashboard_link(text)
text = "The quick brown fox jumps over the lazy dog."
result = predict_ai_with_link(text)
score = result["ai_likelihood"]
text_representation_of_score = result["prediction"]
dashboard_link = result["dashboard_link"]
print(f"We predict that the text {text} is {text_representation_of_score}, with an AI likelihood of {score}. You can see the full results at {dashboard_link}")
Hopefully this guide helped you use Pangram's AI detection Python package to detect AI content programmatically. Did you make anything cool with it? Please tag us on LinkedIn or X and share what you made!