Generative artificial intelligence is radically transforming the world of images. For you, professional photographers and collectors of rare works, it raises a crucial question: who decides whether your works, family archives, or private collections can be used to train an AI?
Two models clash in the legal world: opt-in (AI developers need your explicit consent) and opt-out (you must actively object in order to be protected).
Opt-in or opt-out: the difference between control and loss of value
For the field of photography and visual arts, this choice is fundamental:
- Opt-in system (Protected by default): This is maximum protection. None of your images, regardless of their age or rarity, can be exploited without your prior authorization. You retain full control over the value of your work.
- Opt-out system (Permitted by default): Your photos can be ingested to train an AI model unless you actively object. The burden of protection falls on you, and any work you overlook may be used.
Behind this distinction lies the challenge of preserving the uniqueness, market value, and control of your archival images and collections.
International regulation moves forward, Canada hesitates
In Europe, the Directive on Copyright in the Digital Single Market (CDSM) has set a clear framework: it allows creators to reserve their rights. In practice, you can signal that your collections and content must not be subjected to text and data mining (TDM) by AI systems for commercial purposes.
In Canada, things are still unclear.
The government launched a public consultation in October 2023 to assess the impact of AI on copyright and to determine whether the approach should be opt-in or opt-out. For now, no specific law has been adopted on this issue.
Note that Bill C-27, which contains the Artificial Intelligence and Data Act (AIDA), is moving through Parliament, but it does not resolve the specific issue of copyright in AI training.
The situation in Canada remains a gray area:
- The Copyright Act protects your creations, subject to exceptions such as fair dealing, but this framework was not designed for large-scale AI training.
- Your images risk being absorbed into training datasets, diluting their uniqueness and heritage value.
Your proactive role: protect your files now
While legislation lags behind, technical standards already exist to help you protect your visual assets:
- Content Credentials: This is a “digital identity card” embedded in your files (photos, scans), documenting their origin, modification history, and associated rights. It is an essential tool to prove provenance.
- TDM·AI: An emerging protocol that attaches an automatic signal to your content (via code or metadata) indicating that it must not be used by AI training bots (a sketch of such a signal follows this list).
- Metadata and terms of use: Ensure that the IPTC/EXIF metadata of all your images (especially digitized collections) are up to date, and clearly state your terms of use on your platforms and archive websites (see the second sketch below).
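To make the reservation signal concrete, here is a minimal sketch of how a portfolio or archive site could attach a machine-readable "do not mine" notice to every image it serves. It assumes a small Python/Flask server; the `tdm-reservation` and `tdm-policy` headers are borrowed from the closely related W3C TDM Reservation Protocol, so verify the exact header names and values against whichever protocol you actually adopt (TDM·AI itself may rely on a different mechanism), and the policy URL is a placeholder.

```python
# Minimal sketch: serve archive images with a machine-readable TDM opt-out signal.
# Assumes Flask is installed. Header names follow the W3C TDM Reservation Protocol;
# check them against the spec of the protocol you adopt.
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route("/archives/<path:filename>")
def serve_archive(filename):
    # Serve files from the local "archives" directory.
    return send_from_directory("archives", filename)

@app.after_request
def add_tdm_signal(response):
    # "1" means text-and-data-mining rights are reserved (opt-out).
    response.headers["tdm-reservation"] = "1"
    # Optionally point crawlers to a fuller policy (hypothetical URL).
    response.headers["tdm-policy"] = "https://example.com/tdm-policy.json"
    return response

if __name__ == "__main__":
    app.run()
```

A signal like this only binds crawlers that choose to respect it, which is why embedded metadata and explicit terms of use remain necessary alongside it.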
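For the metadata itself, a short batch script can stamp a copyright notice and usage terms into every file of a digitized collection. The sketch below assumes ExifTool is installed and on your PATH and uses standard IPTC and XMP tags; the folder name and the wording of the notices are placeholders to replace with your own.

```python
# Minimal sketch: batch-write copyright and usage-terms metadata into image files
# with ExifTool (must be installed separately). Folder name and notice text are
# illustrative placeholders.
import subprocess
from pathlib import Path

ARCHIVE_DIR = Path("digitized_collection")  # hypothetical folder of scans

NOTICE = "© 2024 Your Studio. All rights reserved."
TERMS = "No text and data mining or AI training without written permission."

for image in sorted(ARCHIVE_DIR.glob("*.tif")):
    subprocess.run(
        [
            "exiftool",
            "-overwrite_original",
            f"-IPTC:CopyrightNotice={NOTICE}",
            f"-XMP-dc:Rights={NOTICE}",
            f"-XMP-xmpRights:UsageTerms={TERMS}",
            str(image),
        ],
        check=True,
    )
    print(f"Tagged {image.name}")
```

Writing the same notice into both the IPTC and XMP fields increases the odds that downstream tools, and dataset builders, actually encounter it.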
Stay vigilant for the future of your collection
The future will be shaped by the tension between technological innovation and the protection of your rights. Europe has taken the lead, but in Canada, creators and collectors must remain vigilant and proactive.
The question is not just legal: it is a cultural battle over the right to decide the use of your visual heritage in the age of AI.
If the government does not make a clear decision, how will you ensure that your precious collections and photographic work are not anonymously absorbed into a global training dataset?