We're excited to announce a new partnership with Proofig AI, the pioneering provider of automated image integrity solutions for scientific publications, to expand their PubShield ecosystem with automated screening for suspected AI-generated or AI-assisted writing in manuscripts.
As researchers, we know how important it is to produce original work, demonstrate its authenticity, and give proper attribution. In a world where generative AI has blurred the line between human-written and AI-written text, we built Pangram to help teams assess AI-generated and AI-assisted writing and support the responsible, policy-aligned use of these tools. Proofig AI is a trusted leader in research integrity, and we're thrilled to partner with them to expand the PubShield ecosystem, giving researchers the ability to automatically screen manuscripts for AI writing. Together, we're making it easier to uphold integrity and transparency in the submission process, at scale.
PubShield is Proofig AI's institutional submission hub, which consolidates best-in-class manuscript quality assurance checks into one workflow and dashboard. It replaces scattered logins, fragmented workflows, and manual coordination with a consistent, scalable review process. PubShield currently supports screening for text similarity, image integrity, reference checks, AI-generated text, and data-reporting and compliance readiness, and it is designed to expand toward full manuscript quality assurance.
Inside the PubShield ecosystem, researchers at participating institutions can use Pangram's AI detection software to screen manuscript text for likely AI-generated or AI-assisted writing, including content produced by tools such as ChatGPT, Claude, Gemini, and others. Researchers and institutional teams can use the results to guide follow-up where needed, helping identify content that may require disclosure, clarification, or revision to align with journal policies and institutional standards. This helps mitigate policy and reputational risk while keeping the review process consistent and scalable.
"As generative AI tools become increasingly prevalent in research and writing workflows, principal investigators and corresponding authors need clear and practical ways to understand when and how AI assistance has been used by authors and collaborators. We identified Pangram as a strong and well-engineered solution in this space and reached out to them because of the exceptional quality, technical rigor, and reliability of their approach. Integrating Pangram and Proofig AI into PubShield gives researchers a simple, unified way to run key integrity checks before submission, helping them move forward with peace of mind and higher-quality submissions."
— Dr. Dror Kolodkin-Gal, Co-founder and CEO, Proofig AI
The rapid adoption of generative AI writing tools is creating new challenges for researchers and the institutions that support them as journal policies and community best practices continue to evolve. Expectations around when AI assistance is permitted and when disclosure is required can vary by journal and discipline. As a result, institutions need scalable ways to support researchers with consistent review and reduce the risk of non-compliance, disputes over authorship transparency, or avoidable post-submission complications.
Pangram is now available as part of PubShield for participating institutions. Institutions interested in enabling PubShield with Pangram can contact Proofig AI to learn more about access and deployment.
