AI Image Provenance: C2PA, Content Credentials, and the Future of Photo Metadata
C2PA and Content Credentials embed cryptographic provenance data into images to combat deepfakes — but they also raise new privacy concerns. Here is what you need to know.
C2PA (Coalition for Content Provenance and Authenticity) is a new industry standard that embeds cryptographically signed metadata into images to prove their origin and editing history. Backed by Adobe, Microsoft, Google, and major camera manufacturers, it is designed to combat AI-generated deepfakes and misinformation. However, this same provenance data can contain GPS coordinates, device identities, and creator information — creating a genuine tension between transparency and privacy.
What is C2PA and how does it work?
C2PA defines a way to embed a tamper-evident provenance record into image files. This record can include who created the image (a person, organisation, or AI system), what device or software was used, when and where it was created, and every edit applied since capture.
The metadata is cryptographically signed, meaning any modification to the image that is not recorded in the provenance chain will break the signature. This allows viewers to verify whether an image is authentic, AI-generated, or manipulated. For businesses concerned about metadata compliance, our GDPR photo metadata guide covers the regulatory implications.
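In JPEG files, C2PA manifests are carried as JUMBF boxes inside APP11 segments. As a rough illustration (not a full parser), the sketch below walks a JPEG's marker segments and collects any APP11 payloads, which is where provenance data would typically sit; it assumes a well-formed stream and ignores the JUMBF box structure itself.

```python
import struct

def find_app11_segments(jpeg_bytes: bytes) -> list[bytes]:
    """Scan a JPEG byte stream and return the payloads of APP11 (0xFFEB)
    segments, where C2PA/JUMBF provenance data is typically stored."""
    if jpeg_bytes[:2] != b"\xff\xd8":          # SOI marker must come first
        raise ValueError("not a JPEG stream")
    segments = []
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:              # lost sync; stop scanning
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                     # SOS: entropy-coded data follows
            break
        if marker in (0xD8, 0xD9) or 0xD0 <= marker <= 0xD7:
            i += 2                             # standalone markers carry no length
            continue
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker == 0xEB:                     # APP11 segment
            segments.append(jpeg_bytes[i + 4:i + 2 + length])
        i += 2 + length
    return segments
```

A cleaner that detects these segments can then decide whether to keep, display, or remove them; real provenance verification additionally requires parsing the JUMBF boxes and validating the signature chain.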
What are Content Credentials?
Content Credentials is Adobe's implementation of C2PA. Photos taken with supported cameras or created in Adobe tools carry a verifiable chain of custody. Social media platforms are beginning to display Content Credentials information, showing users whether an image was AI-generated, captured by a camera, or edited.
Adobe has integrated Content Credentials into Photoshop, Lightroom, and Firefly (its AI image generator). Camera manufacturers including Sony, Leica, and Nikon are shipping C2PA-enabled hardware.
Why does this create a privacy problem?
C2PA provenance data can contain the same categories of personal information as traditional EXIF data — location, device identity, creator name, timestamps — but it is specifically designed to resist removal. The entire purpose of the standard is that metadata persists and can be verified.
This creates a fundamental tension. The same standard that helps verify an image has not been deepfaked is also the standard that makes it harder to share photos anonymously. A journalist protecting a source, a domestic abuse survivor documenting evidence, or simply a private individual who does not want their location tracked — all face challenges as provenance metadata becomes more widespread.
What legislation is driving adoption?
California's SB 942 and similar legislative efforts are pushing for AI-generated content to carry mandatory provenance markers. The EU AI Act includes provisions for labelling AI-generated content. While aimed at transparency around synthetic media, these laws may have broader implications for all image metadata handling.
Can C2PA metadata be removed?
Yes, though doing so breaks the cryptographic chain of trust. The provenance data lives in specific segments of the image file that can be stripped just like traditional EXIF data. ExifVoid's cleaning process removes all metadata segments including newer provenance blocks, giving users the choice of what to share.
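Because provenance blocks live in ordinary marker segments, stripping them works the same way as stripping EXIF. The simplified sketch below (a Python illustration, not ExifVoid's actual implementation) removes all APPn and COM segments from a JPEG, which discards EXIF, XMP, and C2PA/JUMBF data alike while leaving the image data untouched:

```python
import struct

STRIP_MARKERS = set(range(0xE0, 0xF0)) | {0xFE}  # APP0-APP15 and COM

def strip_metadata(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG with all APPn and COM segments removed.
    Structural segments (quantisation tables, frame headers, scan data)
    are copied through unchanged."""
    if jpeg_bytes[:2] != b"\xff\xd8":          # SOI marker must come first
        raise ValueError("not a JPEG stream")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:              # lost sync; stop scanning
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                     # SOS: copy the rest verbatim
            out += jpeg_bytes[i:]
            return bytes(out)
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker not in STRIP_MARKERS:        # keep non-metadata segments
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

The signature inside the C2PA manifest covers the provenance record, so once the segment is gone there is simply nothing left to verify; the image itself remains valid and viewable.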
What does this mean for ordinary users?
For now, C2PA metadata is still relatively uncommon in casual photography. But as major camera manufacturers ship C2PA-enabled hardware and platforms begin requiring provenance data, it will become increasingly prevalent. Understanding what provenance metadata is — and having tools to manage it — will become as important as understanding EXIF data is today. Our guide on how to check if your photos have metadata explains how to see what is embedded in your files.
Frequently asked questions
Will C2PA replace EXIF?
No. C2PA is designed to complement, not replace, existing metadata standards. EXIF, XMP, and IPTC will continue to exist alongside C2PA provenance data. This means future photos may contain even more metadata than current ones. See our guide to EXIF vs XMP vs IPTC for details on existing standards.
Does removing C2PA metadata make a photo look suspicious?
In contexts where provenance verification is expected (such as news media), removing C2PA data may reduce trust in the image. For personal sharing, it is unlikely to matter. Most social media platforms do not yet display or require provenance data.
Are AI-generated images labelled with C2PA?
Major AI image generators including Adobe Firefly, DALL-E, and Midjourney are beginning to embed C2PA metadata identifying their output as AI-generated. However, not all generators do this, and the metadata can be removed — which is precisely the problem C2PA is trying to solve.
Protect your photos now
Scan and remove metadata — free, private, instant.