Published April 11, 2024 | Version 1.0.0
Python Library | Restricted

ML Watermarking

Description

ML Watermarking offers watermarking methods to help prove ownership of the machine learning models you train.

Watermarking a machine learning model involves incorporating a distinct behavior, typically a modification of the model's predictions, on a specific dataset referred to as a trigger set. This library can help you generate trigger sets for classification tasks, and in a future version for detection tasks. Methods and guidelines for evaluating watermarking robustness are planned for future releases.
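To make the trigger-set idea concrete, here is a minimal, self-contained sketch of the general technique. It does not use the ML Watermarking library's API; the data, model, and labels below are hypothetical and chosen only to illustrate how a trigger set embeds and verifies a watermark.

```python
# Illustrative sketch of trigger-set watermarking for a classifier.
# NOT the ML Watermarking library's API: all names and data here are
# hypothetical, chosen only to demonstrate the concept.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(seed=0)

# Ordinary training data (stand-in for a real dataset).
X_train = rng.normal(size=(1000, 20))
y_train = (X_train[:, 0] > 0).astype(int)

# Trigger set: out-of-distribution inputs paired with a secret, fixed
# label. Only the model owner knows these input/label pairs.
X_trigger = rng.uniform(low=5.0, high=6.0, size=(50, 20))
y_trigger = np.ones(50, dtype=int)  # secret target label

# Embed the watermark by training on clean data plus the trigger set.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(np.vstack([X_train, X_trigger]),
          np.concatenate([y_train, y_trigger]))

# Ownership check: a watermarked model reproduces the secret labels on
# the trigger set far more often than an independent model would.
trigger_accuracy = model.score(X_trigger, y_trigger)
print(f"Trigger-set accuracy: {trigger_accuracy:.2f}")  # near 1.0 if watermarked
```

An independently trained model that never saw the secret trigger pairs would label them near chance, so high trigger-set accuracy serves as evidence of ownership.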

ML Watermarking is an initiative under Confiance.ai, an organization dedicated to fostering transparency, fairness, and trust in the field of artificial intelligence.

Documentation

User Manual

  • Documentation is available online.

Methodological Guideline

  1. Methodological guideline for ML Watermarking

Scientific Contribution

  1. Scientific Contribution for Object Detection ML Watermarking

Demonstrator

  • A restricted demonstrator is available here. It relies on visual inspection and industrial control use cases.

Support

Support for ML Watermarking can be obtained by sending an email to support@confiance.ai

Ensure your email contains:

  • Your name
  • A link to this page
  • The version you are working with
  • A clear description of the request type (bug, crash, feature, or help request)
  • A full description of the problem, detailed enough to reproduce it
  • Any file or screenshot needed to fully understand the problem

Files

Restricted

The record is publicly accessible, but files are restricted to users with access.

Additional details

Trustworthy Attributes: Security
Engineering roles: Data Engineer, ML-Algorithm Engineer
Engineering activities
Use cases: Visual Inspection
Functional Set: Robustness
Model Component Life Cycle: Operation
Functional maturity
Technological maturity