Artificial intelligence has changed how images, video, and audio are produced, but it has also created new openings for digital fraud. The rise of AI-generated deepfakes has forced banks, media companies, and insurers to rethink how they confirm whether something is genuine.
Many organizations now recognize that traditional defenses, such as liveness checks or profile verification tools, are losing effectiveness. The challenge is not only spotting fabricated content but also proving that authentic content is truly authentic. Researchers at the University of California, Berkeley have warned that as synthetic media becomes more advanced, distinguishing real from fake by visual cues alone may soon be statistically unreliable.
BlueChips, a cryptographic verification company founded by Rick Gulati, is developing a system intended to serve as a foundational layer for proving authenticity. The company’s view is that the future of digital trust will rely less on the ability to detect fabrications and more on offering clear, mathematically verifiable proof of origin, identity, and consent.
A Cryptographic Method to Reconstruct Trust
Instead of evaluating appearances or behavioral signals, BlueChips creates what it calls Stamps: cryptographic credentials that bind a file to the identity of its subject, the device used to create it, and a secure consent record. Each component, including the file hash, face hash, device signature, and timestamp, can be independently verified.
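The structure described above can be sketched in a few lines of Python. This is an illustrative model, not BlueChips' implementation: the field names are assumptions, and an HMAC stands in for the hardware-backed signature a secure enclave would actually produce.

```python
import hashlib
import hmac
import json
import time

def make_stamp(file_bytes: bytes, face_embedding: bytes, device_key: bytes) -> dict:
    """Build a Stamp-like record whose fields can each be checked independently.
    (Sketch only: an HMAC simulates the device's hardware-backed signature.)"""
    payload = {
        "file_hash": hashlib.sha256(file_bytes).hexdigest(),
        "face_hash": hashlib.sha256(face_embedding).hexdigest(),
        "timestamp": int(time.time()),
    }
    # Sign a canonical serialization so verifiers rebuild the exact same bytes.
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["device_signature"] = hmac.new(device_key, canonical, hashlib.sha256).hexdigest()
    return payload

def verify_stamp(stamp: dict, file_bytes: bytes, device_key: bytes) -> bool:
    """Re-derive the file hash, then re-check the signature over the other fields."""
    if hashlib.sha256(file_bytes).hexdigest() != stamp["file_hash"]:
        return False
    unsigned = {k: v for k, v in stamp.items() if k != "device_signature"}
    canonical = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(device_key, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamp["device_signature"])
```

Because every field is hashed or signed, altering the file, the timestamp, or the claimed identity invalidates the signature, which is what makes each component independently verifiable.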
The system differs from typical detection-driven tools. BlueChips uses secure enclaves on supported devices such as Apple’s Secure Enclave and Android’s StrongBox to sign media at the moment it is created. The stamp is written to a permissioned blockchain and then anchored at intervals to a public chain like Ethereum for outside verification. This allows third parties to confirm integrity, revocation state, and provenance without depending on BlueChips as a central authority.
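Anchoring "at intervals" to a public chain is typically done by committing a batch of records as a single Merkle root, so that any one stamp can later be proven included without republishing the batch. The sketch below shows that general pattern under that assumption; it is not a description of BlueChips' actual anchoring scheme.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a batch of stamp digests into one root hash for on-chain anchoring."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list[bytes], index: int) -> list[tuple[bytes, bool]]:
    """Collect the sibling hashes needed to prove one leaf is in the root."""
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # bool: sibling is on the left
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_inclusion(leaf: bytes, proof: list[tuple[bytes, bool]], root: bytes) -> bool:
    """A third party re-derives the root from the leaf and the proof alone."""
    node = _h(leaf)
    for sibling, is_left in proof:
        node = _h(sibling + node) if is_left else _h(node + sibling)
    return node == root
```

This is why outside verification needs no central authority: anyone holding the public root and a short proof can confirm a stamp was anchored, without trusting the party that wrote it.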
Gulati said the objective is to restore a reliable point of reference. “The issue isn’t only that fakes are improving,” he said. “It’s that genuine content no longer comes with a built-in way to prove it is genuine. That is the gap we are trying to close.”
Financial Institutions Look for More Than Detection
Consumer-facing platforms often focus on viral deepfakes, but financial institutions are dealing with transactional fraud that uses synthetic identities, AI-generated documents, and impersonation attempts. A Deloitte analysis from 2025 projected that identity fraud driven by synthetic media would cost global financial institutions more than 40 billion dollars annually by 2026.
Banks have tested a variety of methods, including selfie checks and document analysis, yet these tools struggle to keep up with increasingly believable AI forgeries. BlueChips’ proposal for financial institutions centers on creating a verifiable chain beginning at the point of capture. If a customer signs an agreement, uploads a document, or records a verification video, the resulting file is bound to a stamp that cannot be transferred to another person or device.
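One way a stamp can be made non-transferable is to compute the binding over the file, the user's identity, and a key that never leaves the capture device, so a stamp re-presented by a different person or from a different device fails verification. The sketch below illustrates that idea with assumed names and an HMAC in place of a hardware signature.

```python
import hashlib
import hmac

def bind_asset(file_bytes: bytes, user_id: str, device_key: bytes) -> str:
    """Bind a file to a verified user and a device-held key.
    (Illustrative: a real system would use an asymmetric, enclave-held key.)"""
    message = (
        hashlib.sha256(file_bytes).digest()
        + hashlib.sha256(user_id.encode()).digest()
    )
    return hmac.new(device_key, message, hashlib.sha256).hexdigest()

def check_binding(file_bytes: bytes, user_id: str, device_key: bytes, tag: str) -> bool:
    """Fails if the file, the user, or the device differs from the original binding."""
    return hmac.compare_digest(bind_asset(file_bytes, user_id, device_key), tag)
```

Because the tag covers all three inputs at once, swapping in another customer's identity or replaying the file from an unregistered device changes the expected value, and the check fails.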
Gulati said this reduces speculation. “We are not trying to guess whether something looks real,” he said. “We provide a way to validate that an asset came from a verified user on a verified device using a method that cannot be forged.”
Although adoption is still early, banks and other institutions have expressed interest in hardware-backed verification models.
Cross-Sector Demand as AI Fraud Expands
The spread of non-consensual AI-generated media has also affected entertainment companies, creator platforms, and public figures.
BlueChips includes a consent-receipt process that uses zero-knowledge compatible proofs. Platforms can confirm that a subject granted permission without accessing personal details. Creators can also issue a revocation, immediately invalidating stamped copies. As synthetic media tools become simpler to use, this type of revocable consent record is increasingly viewed as a practical necessity.
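A consent receipt with revocation can be modeled as a salted hash commitment: the registry confirms or revokes consent while storing no personal details. This is a simplification, not BlueChips' design, and a hash commitment is weaker than the zero-knowledge-compatible proofs described above; the class and method names are assumptions.

```python
import hashlib

class ConsentRegistry:
    """Minimal consent-receipt store (illustrative names and structure).
    Only salted hashes are kept, so validity checks reveal nothing personal."""

    def __init__(self) -> None:
        self._receipts: set[str] = set()
        self._revoked: set[str] = set()

    def grant(self, consent_record: bytes, salt: bytes) -> str:
        """Commit to a consent record and return an opaque receipt."""
        receipt = hashlib.sha256(salt + consent_record).hexdigest()
        self._receipts.add(receipt)
        return receipt

    def revoke(self, receipt: str) -> None:
        """Revocation immediately invalidates every copy carrying this receipt."""
        self._revoked.add(receipt)

    def is_valid(self, receipt: str) -> bool:
        return receipt in self._receipts and receipt not in self._revoked
```

Since every stamped copy references the same receipt, one revocation entry invalidates all of them at once, which is the property the article describes.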
Gulati said this space has grown quickly. “Creators and public figures need a way to defend their digital identity without depending solely on platform moderation,” he said. “What we are building gives them a measurable form of control.”
Building a Verification Standard for AI Media
Regulators and companies have been exploring ways to address synthetic media risks, and several provenance frameworks have been proposed. The Coalition for Content Provenance and Authenticity, or C2PA, has introduced a standard that some news organizations and device makers already use. BlueChips integrates C2PA support but expands on it by adding blockchain anchoring and hardware-attested identity binding.
The question is whether a consistent verification standard will be adopted widely. Verification technology gains value only when it is used across industries, and coordination among companies, governments, and platforms remains a work in progress. Even so, the demand for methods that can provide measurable certainty has grown rapidly, driven by both security incidents and stricter consent requirements.
Whether BlueChips becomes central to this shift is still unknown. What is clear is that digital trust is becoming a priority across sectors, and organizations are searching for tools that do not rely on guesswork.
