
Published April 11, 2024 | Version Demonstrator
Python Library Restricted

ML watermarking

Description

ML Watermarking offers watermarking methods to help prove ownership of the machine learning models you train.

Watermarking a machine learning model involves embedding a distinct behavior, typically a modification of the model's predictions, on a specific dataset referred to as a trigger set. This library can help you generate trigger sets for classification tasks, and in a future version for detection tasks. Methods and guidelines for evaluating watermarking robustness are planned for future releases.
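As a rough illustration of the trigger-set idea (not this library's actual API), the sketch below watermarks a toy classifier: the owner generates out-of-distribution inputs with secretly chosen labels, trains on them alongside the normal data, and later claims ownership by showing the model reproduces those secret labels. All names and data here are hypothetical stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Ordinary training data: two Gaussian blobs standing in for a real dataset.
X_train = np.vstack([rng.normal(-2, 1, (200, 50)), rng.normal(2, 1, (200, 50))])
y_train = np.array([0] * 200 + [1] * 200)

# Trigger set: out-of-distribution random inputs with owner-chosen labels,
# kept secret by the model owner.
X_trigger = rng.normal(0, 10, (8, 50))
y_trigger = rng.integers(0, 2, 8)

# Watermarking here simply means training on the union of normal data
# and the trigger set, so the model memorizes the secret behavior.
model = LogisticRegression(max_iter=2000)
model.fit(np.vstack([X_train, X_trigger]), np.concatenate([y_train, y_trigger]))

# Ownership verification: high accuracy on the secret trigger set is
# unlikely for an independently trained model.
trigger_acc = model.score(X_trigger, y_trigger)
print(f"trigger-set accuracy: {trigger_acc:.2f}")
```

A real watermarking scheme would use trigger inputs that are hard to detect and remove, and would quantify how improbable the trigger-set accuracy is for a non-watermarked model; this sketch only shows the embed-then-verify workflow.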

ML Watermarking is an initiative under Confiance.ai, an organization dedicated to fostering transparency, fairness, and trust in the field of artificial intelligence.

Files

Restricted

The record is publicly accessible, but files are restricted to users with access.

Additional details

Trustworthy Attributes
Security
Engineering roles
Data Engineer
ML-Algorithm Engineer
Use cases
Visual Inspection
Functional Set
Robustness
Model Component Life Cycle
Operation
Functional maturity
Technological maturity