Preserving Accuracy While Stealing Watermarked Deep Neural Networks

Publication, Conference
Ray, A; Firouzi, F; Lafata, K; Chakrabarty, K
Published in: Proceedings of the 2024 International Conference on Machine Learning and Applications (ICMLA 2024)
January 1, 2024

The deployment of Deep Neural Networks (DNNs) as cloud services has accelerated significantly over the years. Training an application-specific DNN for cloud deployment requires substantial computational resources, along with the costs of hyper-parameter tuning and model selection. To protect their Intellectual Property (IP) rights, model owners embed watermarks into publicly deployed DNNs, typically in the form of trigger inputs and their corresponding trigger labels. These trigger inputs and labels are uniquely selected and embedded into the watermarked DNN by the model owner, and they remain undisclosed during deployment. If a watermarked DNN (target classifier) is stolen via white-box access and re-deployed by an adversary (pirated classifier) without securing the IP rights from the model owner, the owner can prove ownership by querying the pirated classifier with trigger inputs and checking for the trigger labels. Typically, adversaries tamper with the model weights of the target classifier prior to deployment, which reduces the utility of the well-trained DNN. The authors propose re-deploying the target classifier without altering its model weights, thereby preserving model utility, and training a Siamese neural network on a small sample of in-distribution inputs, not identical to those used to train the target classifier, to evade watermark detection at the inference stage. Experimental evaluations on standard benchmark datasets (MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100) using ResNet architectures with varying triggers show that the proposed method achieves a zero false positive rate (fraction of clean testing inputs incorrectly labelled as trigger inputs) and a zero false negative rate (fraction of trigger inputs incorrectly labelled as clean in-distribution inputs) in nearly all cases, demonstrating its efficacy.
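To illustrate the general idea described in the abstract, the following is a minimal, hypothetical sketch of an inference-time gate built around a Siamese-style embedding network. It is not the authors' implementation: the names (SiameseEmbedder, is_trigger), the architecture, the distance threshold, and the use of PyTorch are illustrative assumptions, and the training of the embedder (e.g., with a contrastive loss on clean in-distribution pairs) is omitted.

```python
# Hypothetical sketch (not the paper's code): embed each incoming query and
# compare it against a small bank of clean in-distribution samples; queries
# that are far from every reference are treated as suspected watermark triggers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseEmbedder(nn.Module):
    """Maps an image to a normalized embedding; embedding distance is used
    to judge whether two inputs look like they come from the same distribution."""

    def __init__(self, in_channels: int = 3, embed_dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)


def is_trigger(query: torch.Tensor,
               reference_bank: torch.Tensor,
               embedder: SiameseEmbedder,
               threshold: float = 0.5) -> bool:
    """Flag `query` (shape 1xCxHxW) as a suspected trigger if its minimum
    embedding distance to the clean reference bank exceeds `threshold`
    (an assumed value; the paper's actual decision rule may differ)."""
    with torch.no_grad():
        q = embedder(query)                  # (1, d)
        refs = embedder(reference_bank)      # (n, d)
        min_dist = torch.cdist(q, refs).min()
    return min_dist.item() > threshold


# Usage idea at inference: the pirated deployment answers honestly with the
# unmodified stolen classifier on inputs that pass the gate, and deflects
# (e.g., returns a random or default label) on flagged inputs, so that the
# owner's trigger queries no longer map to the embedded trigger labels.
```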

Published In

Proceedings of the 2024 International Conference on Machine Learning and Applications (ICMLA 2024)

DOI

10.1109/ICMLA61862.2024.00227

Publication Date

January 1, 2024

Start / End Page

1466 / 1473

Citation

APA
Ray, A., Firouzi, F., Lafata, K., & Chakrabarty, K. (2024). Preserving Accuracy While Stealing Watermarked Deep Neural Networks. In Proceedings of the 2024 International Conference on Machine Learning and Applications (ICMLA 2024) (pp. 1466–1473). https://doi.org/10.1109/ICMLA61862.2024.00227

Chicago
Ray, A., F. Firouzi, K. Lafata, and K. Chakrabarty. “Preserving Accuracy While Stealing Watermarked Deep Neural Networks.” In Proceedings of the 2024 International Conference on Machine Learning and Applications (ICMLA 2024), 1466–73, 2024. https://doi.org/10.1109/ICMLA61862.2024.00227.

ICMJE
Ray A, Firouzi F, Lafata K, Chakrabarty K. Preserving Accuracy While Stealing Watermarked Deep Neural Networks. In: Proceedings of the 2024 International Conference on Machine Learning and Applications (ICMLA 2024). 2024. p. 1466–73.

MLA
Ray, A., et al. “Preserving Accuracy While Stealing Watermarked Deep Neural Networks.” Proceedings of the 2024 International Conference on Machine Learning and Applications (ICMLA 2024), 2024, pp. 1466–73. Scopus, doi:10.1109/ICMLA61862.2024.00227.

NLM
Ray A, Firouzi F, Lafata K, Chakrabarty K. Preserving Accuracy While Stealing Watermarked Deep Neural Networks. Proceedings of the 2024 International Conference on Machine Learning and Applications (ICMLA 2024). 2024. p. 1466–1473.
