ML Watermarking
Owner
Description
ML Watermarking provides watermarking methods to help prove ownership of the machine learning models you train.
Watermarking a machine learning model consists in embedding a distinct behavior, typically a modification of the model's predictions, on a specific dataset called a trigger set. This library can help you generate trigger sets for classification tasks, and for detection tasks in a future version. Methods and guidelines for evaluating watermarking robustness are planned for future releases.
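The trigger-set idea described above can be sketched in a few lines. This is a minimal illustration, not the library's API: the dataset, the choice of a decision tree as the watermarked model, and the verification threshold are all assumptions made for the example. The owner keeps the trigger inputs and their secret labels private; a suspect model that reproduces those labels far above chance is evidence of ownership.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Ordinary training data: two well-separated Gaussian blobs
# (a hypothetical stand-in for a real classification dataset).
X_train = np.vstack([rng.normal(-2, 1, (200, 5)), rng.normal(2, 1, (200, 5))])
y_train = np.array([0] * 200 + [1] * 200)

# Trigger set: out-of-distribution inputs paired with secret, randomly
# chosen labels. Only the owner knows these (input, label) pairs.
X_trigger = rng.uniform(8, 10, (20, 5))
y_trigger = rng.integers(0, 2, 20)

# Embed the watermark by training on the union of the normal data and
# the trigger set, so the model memorizes the secret trigger labels.
model = DecisionTreeClassifier(random_state=0)
model.fit(np.vstack([X_train, X_trigger]), np.concatenate([y_train, y_trigger]))

# Verification: accuracy on the trigger set should be far above the
# ~50% an independently trained model would achieve on random labels.
trigger_acc = model.score(X_trigger, y_trigger)
print(f"trigger-set accuracy: {trigger_acc:.2f}")
```

Any model with enough capacity to memorize the trigger pairs can be used in place of the decision tree; the trade-off, which robustness evaluations aim to measure, is that the watermark must survive fine-tuning or pruning without degrading accuracy on normal data.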
ML Watermarking is an initiative under Confiance.ai, an organization dedicated to fostering transparency, fairness, and trust in the field of artificial intelligence.
Documentation
User Manual
- Documentation is available online.
Methodological Guideline
Scientific Contribution
Demonstrator
- A restricted demonstrator is available here. It relies on visual inspection and industrial control use cases.
Support
Support for ML Watermarking can be obtained by sending an email to support@confiance.ai
Ensure your email contains:
- Your name
- A link to this page
- The version you are working with
- A clear description of the request (bug, crash, feature, or help request)
- A full description of the problem, detailed enough to reproduce it
- Any file or screenshot needed for a full understanding of the problem
Files
Additional details
- Functional maturity
- Technological maturity