SECURITY OF FACIAL FORENSICS MODELS AGAINST ADVERSARIAL ATTACKS

Rong Huang, Fuming Fang, Huy H. Nguyen, Junichi Yamagishi, Isao Echizen
Preprint paper





Example Results from the "DeepFakes" Dataset

Individual adversarial attack for fabricating classification outputs (10 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks


Individual adversarial attack for fabricating classification outputs (20 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks
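For orientation, the individual attacks shown above follow the general pattern of an iterative sign-gradient attack: the attacker repeatedly computes the gradient of a targeted loss with respect to the input pixels and takes small steps so that the classification head reports the fabricated label. The sketch below is illustrative only; the model interface, loss choice, step size, and default iteration count are assumptions, not the exact configuration behind these figures.

```python
# Illustrative sketch only (PyTorch): an iterative sign-gradient attack that drives
# the classification head of a facial forensics model toward a fabricated label.
# The model interface, loss, step size, and iteration count are assumptions.
import torch
import torch.nn.functional as F

def individual_classification_attack(model, image, target_label, n_iters=10, step=1.0 / 255):
    """Perturb a single image so the classifier predicts `target_label`.

    model        -- assumed to map a (1, 3, H, W) image in [0, 1] to class logits
    image        -- float tensor of shape (1, 3, H, W) with values in [0, 1]
    target_label -- long tensor of shape (1,) holding the fabricated class index
    """
    adv = image.clone().detach()
    for _ in range(n_iters):
        adv.requires_grad_(True)
        logits = model(adv)
        # Targeted loss: smaller means the model is closer to the fabricated label.
        loss = F.cross_entropy(logits, target_label)
        grad, = torch.autograd.grad(loss, adv)
        # Step against the loss gradient and keep pixels in the valid range.
        adv = (adv - step * grad.sign()).clamp(0.0, 1.0).detach()
    return adv, adv - image  # adversarial image and the added perturbation
```

The 10- and 20-iteration figures above differ only in how many such update steps are applied; the perturbations in panel (b) are min-max scaled for display because the raw values are otherwise too faint to see.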


Individual adversarial attack for fabricating segmentation outputs (100 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks


Individual adversarial attack for fabricating segmentation outputs (500 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks
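The segmentation-targeted attacks follow the same pattern, except the loss is taken over the per-pixel segmentation logits so that the predicted manipulation mask is driven toward a fabricated target mask (for example, all pixels marked pristine). The sketch below is again a minimal illustration under assumed interfaces.

```python
# Illustrative sketch only (PyTorch): the same iterative idea, but the loss is taken
# over the per-pixel segmentation logits so the predicted manipulation mask is pushed
# toward a fabricated target mask. Interfaces and the iteration count are assumptions.
import torch
import torch.nn.functional as F

def individual_segmentation_attack(model, image, target_mask, n_iters=100, step=1.0 / 255):
    """Perturb `image` so the segmentation head outputs `target_mask`.

    model       -- assumed to map a (1, 3, H, W) image to per-pixel logits (1, C, H, W)
    target_mask -- long tensor of shape (1, H, W) holding the fabricated class per pixel
    """
    adv = image.clone().detach()
    for _ in range(n_iters):
        adv.requires_grad_(True)
        seg_logits = model(adv)
        # Pixel-wise targeted loss over the fabricated mask.
        loss = F.cross_entropy(seg_logits, target_mask)
        grad, = torch.autograd.grad(loss, adv)
        adv = (adv - step * grad.sign()).clamp(0.0, 1.0).detach()
    return adv, adv - image
```

Fabricating a dense per-pixel output is a harder objective than flipping a single label, which is consistent with the larger iteration budgets (100 and 500) shown in the figures above.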


Universal adversarial attack for fabricating classification outputs
(a) Original images; (b) universal adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks
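A universal attack differs from the individual attacks in that a single perturbation is optimized over many images and then added unchanged to each input at attack time. The sketch below assumes a simple averaged sign-gradient update kept inside an L-infinity ball; the bound, input resolution, and data-loading interface are assumptions, not the setup used for these figures.

```python
# Illustrative sketch only (PyTorch): a single "universal" perturbation shared by all
# images, updated with sign gradients over a dataset so the classification head reports
# the fabricated label for most inputs. The L_inf bound, image size, and data-loading
# interface are assumptions.
import torch
import torch.nn.functional as F

def universal_classification_attack(model, data_loader, target_class, epochs=5,
                                    step=1.0 / 255, eps=8.0 / 255):
    """Return one perturbation tensor to be added to every image at attack time."""
    delta = torch.zeros(1, 3, 256, 256)  # assumed input resolution
    for _ in range(epochs):
        for images, _ in data_loader:    # assumed to yield (images, labels); labels unused
            delta.requires_grad_(True)
            logits = model((images + delta).clamp(0.0, 1.0))
            target = torch.full((images.size(0),), target_class, dtype=torch.long)
            loss = F.cross_entropy(logits, target)
            grad, = torch.autograd.grad(loss, delta)
            # One shared update for the whole batch, kept inside an L_inf ball of radius eps.
            delta = (delta - step * grad.sign()).clamp(-eps, eps).detach()
    return delta
```

Because the perturbation is computed once and reused for every image, panel (b) shows the same pattern regardless of the input image.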

Example Results from the "Face2Face" Dataset

Individual adversarial attack for fabricating classification outputs (10 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks


Individual adversarial attack for fabricating classification outputs (20 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks


Individual adversarial attack for fabricating segmentation outputs (100 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks


Individual adversarial attack for fabricating segmentation outputs (500 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks


Universal adversarial attack for fabricating classification outputs
(a) Original images; (b) universal adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks

Example Results from the "FaceSwap" Dataset

Individual adversarial attack for fabricating classification outputs (10 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks


Individual adversarial attack for fabricating classification outputs (20 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks


Individual adversarial attack for fabricating segmentation outputs (100 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks


Individual adversarial attack for fabricating segmentation outputs (500 iterations)
(a) Original images; (b) adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks


Universal adversarial attack for fabricating classification outputs
(a) Original images; (b) universal adversarial perturbations (min-max scaled for display); (c) adversarial images; (d) segmentation outputs; (e) ground-truth masks