TrueFace: a Dataset for the Detection of Synthetic Face Images from Social Networks

Giulia Boato, Cecilia Pasquini, Antonio L. Stefani, Sebastiano Verde (DISI, University of Trento, Trento, Italy)
Daniele Miorandi (U-Hopper, Trento, Italy)
Authorized licensed use limited to: Chittagon University of Engineering and Technology. Downloaded on November 24,2024 at 16:46:45 UTC from IEEE Xplore. Restrictions apply.
which are capable of automatically generating highly realistic fakes, especially images and videos depicting faces [26, 32]. Recent studies have shown that human performance (even that of experts) in distinguishing real faces from GAN-generated ones is alarmingly poor [15]. This has given rise to the phenomenon of deep fakes, which is threatening the trustworthiness of visual contents from online sources, and social networks in particular. Harmful effects have already been seen in the creation of fake social media profiles [11] or, recently, in attempts to distort facts in the Russian-Ukrainian war [23]. Developing tools to preserve trust in shared images and videos is therefore a need that our society can no longer ignore.

Several works in media forensics have investigated the detection of image manipulations and the identification of digital sources, obtaining promising results in laboratory conditions and well-defined scenarios [34]. More recently, the research community has raised the bar by pursuing the ambition to scale forensic analyses to real-world web-based systems, which involve routinely applied operations such as the act of sharing on social media platforms [27]. Concerning deep fakes, researchers have already demonstrated the possibility of distinguishing synthetic images from real ones by training data-driven models to recognize the traces left by the generators [19, 21].

The creation of synthetic imagery, however, involves a variety of generation techniques that also evolve over time, and data-driven detectors typically suffer in non-aligned scenarios, where the testing generators differ from the training ones [11]. Furthermore, fake contents tend to go viral on social networks long before any forensic analysis can be carried out on them, and it is known that the processing applied by the sharing platforms tends to act as an obfuscating factor and to compromise the effectiveness of detection strategies [28]. The availability of benchmark datasets including both synthetic contents (obtained through the most recent generators) and authentic images, uploaded on popular sharing platforms, is therefore an essential resource towards a reliable evaluation of forensic detectors outside controlled settings. At the same time, the generation and collection of such data represent a challenging and time-consuming task, resulting in a current lack of reference datasets.

In this view, we present TrueFace¹, a first image database for the authentication of synthetic-vs-real faces from social networks (Figure 1). TrueFace contains GAN-generated images obtained through different versions of StyleGAN [13, 14] that were then shared (uploaded and downloaded) over three popular social media platforms, namely Facebook, Telegram, and Twitter. The adopted models for face generation produce hyper-realistic faces not distinguishable from real ones by humans, as recently demonstrated in [15, 31]. Thus, TrueFace can be used both for benchmarking detectors discriminating synthetic versus real faces and, in particular, for extending such analysis in-the-wild, where images are shared on social media.

The dataset was then evaluated through an experimental campaign involving a pre-trained ResNet-50 model [12], subsequently fine-tuned on different subsets of TrueFace. The results highlighted that models trained only on pre-social data fail to perform effectively on images shared on social networks. This motivates the need for datasets of post-social images, which enable fine-tuning existing models and extending their detection capabilities to the more challenging scenario of shared GAN-generated imagery.

¹ Dataset available at https://bit.ly/3bAEH75

2. Related Works

Long before the advent of modern generation technologies like GANs, researchers had been investigating the detection of real pictures versus computer-generated ones for two decades. Earlier studies focused on statistical features related to the acquisition process of natural images [25], such as color filter array interpolation [10] and lens chromatic aberration [7]. Other approaches included the analysis of the distribution of wavelet coefficients [4, 17] and of the geometry of the face [5, 6].

More recently, analyses based on convolutional neural networks (CNNs) were performed by combining the contribution of small patches of the target image [29]. In general, it was shown that deeper general-purpose networks like Xception typically outperform shallow ones [30].

The first studies on the detection of GAN-generated imagery observed the presence of peculiar traces left by the generators in the image spatial domain. In particular, GANs were found to be subject to a series of intrinsic limitations [2]: a narrow dynamic range, and the absence of saturated or under-exposed regions [22]; non-preserved correlation among color spaces [16], which results from the fact that synthetic images are typically generated from RGB information alone, entailing inconsistencies in other color spaces. Another popular feature consists in co-occurrence matrices extracted from the RGB channels, which can be used as a cross-band analysis tool [1] or directly fed as input to a CNN [24]. Solutions based on deep neural networks were also investigated in [19], where different state-of-the-art pre-trained models are shown to perform effectively in the detection of synthetic imagery.

The frequency domain has been studied as well in GAN image forensics. In [39], the authors observed the presence of spectral peaks caused by the upsampling operations performed in most GAN architectures. Different kinds of spectral artifacts were also considered in [9] by means of an ad hoc CNN classifier, with the purpose of not only telling apart real images from synthetic ones, but also discriminating among different generative architectures [20, 37].
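The upsampling artifacts discussed above can be probed with a few lines of NumPy. The sketch below is purely illustrative (it is not the pipeline of any cited work): it measures the share of spectral energy outside a low-frequency disc, which rises when an image is built by nearest-neighbor upsampling, a common source of the near-Nyquist peaks reported in [39].

```python
import numpy as np

def highfreq_energy_ratio(img, radius_frac=0.25):
    """Fraction of spectral energy outside a central low-frequency disc.

    Pixel-replication upsampling leaves spectral replicas near Nyquist,
    which inflate this ratio relative to a genuinely smooth image.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    low = spec[r < radius_frac * min(h, w)].sum()
    return 1.0 - low / spec.sum()

# Toy comparison: a smooth low-frequency pattern vs. the same pattern
# decimated and re-expanded by 2x nearest-neighbor upsampling, which
# replicates pixels in 2x2 blocks and creates high-frequency replicas.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
smooth = np.sin(2 * np.pi * xx / w) + np.cos(2 * np.pi * yy / h)
blocky = np.kron(smooth[::2, ::2], np.ones((2, 2)))  # NN 2x upsampling

print(highfreq_energy_ratio(smooth), highfreq_energy_ratio(blocky))
```

On this toy pair the smooth image concentrates essentially all energy at low frequencies, while the block-upsampled version shows a clearly larger high-frequency share; real detectors build far richer classifiers on top of such spectral statistics.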
Finally, a further type of frequency-based detector relies on the spectral energy distribution, which is shown to exhibit a different behavior in GAN-generated images with respect to natural ones [8].

All methods described above are fully supervised and, as such, perform very effectively on images synthesized by GANs that were present in the training set, but they show limits when applied to data generated by unseen models [11]. To address this problem, in [36] the authors propose a data augmentation strategy based on Gaussian blurring, in order to force the detector to learn higher-level features. A solution based on incremental learning is proposed in [21], although the method still requires some examples of the new GAN architectures to generalize properly. A different approach in [3] consists in a fully convolutional patch-based classifier, with which the authors show that better performance can be achieved by focusing on local patches rather than the global structure.

Similarly to the approach we adopted to evaluate TrueFace, in [35] a standard pre-trained model of the ResNet-50 architecture [12] is further fine-tuned with a strong augmentation based on compression and blurring. Experiments show that the learned features generalize well to unseen architectures and training datasets, even when trained on a single GAN generator.

3. Dataset

The TrueFace dataset is the first publicly available database for synthetic-vs-real image classification with data also shared via social networks. It includes a total of 210K images of faces, divided into a pre-social and a post-social collection (Figure 1).

The pre-social dataset contains a total of 150K images of faces, 70K of which are real and 80K synthetic. The synthetic ones were generated by two popular generator models, namely StyleGAN [13] and StyleGAN2 [14], 40K each. Moreover, the images of each model were equally divided according to the StyleGAN fantasy parameter (20K per parameter value). The fantasy parameter ψ determines how far the generative network should deviate from the data average; by adjusting it, the variety/quality trade-off of the output data can be controlled.

Real images were obtained from the FFHQ Dataset [13], originally proposed as a benchmark for GANs. Both real and synthetic images are RGB and have a resolution of 1024×1024 pixels. The following table summarizes the content of the pre-social dataset.

  Real   StyleGAN ψ=0.7   StyleGAN ψ=1   StyleGAN2 ψ=0.5   StyleGAN2 ψ=1
  70K    20K              20K            20K               20K

  Table 1. Pre-social dataset.

The post-social dataset contains in total 60K images, half real and half fake, uploaded to and downloaded from three of the most popular social media platforms: Facebook (FB), Telegram (TL), and Twitter (TW). A fraction of 20K images from the pre-social dataset (10K real, 5K from StyleGAN, and 5K from StyleGAN2) have been independently uploaded, so as to have overlapping images subject to different sharing operations.

  Platform        Real   StyleGAN   StyleGAN2
  Facebook (FB)   10K    5K         5K
  Telegram (TL)   10K    5K         5K
  Twitter (TW)    10K    5K         5K

  Table 2. Post-social dataset.

3.1. Sharing methodology

The process of sharing the original images from the pre-social dataset on the three selected platforms was carried out automatically through the following methodologies.

Facebook offers a set of official APIs called GraphAPI that allow reading from and writing to the social platform. Downloading is a built-in function of the APIs and can be automated with a suitable script. Uploading requires an authentication token, which is available to developer accounts, and a script to make requests to the endpoints.

Telegram was handled through python-telegram-bot, a wrapper library that provides an asynchronous interface for the Telegram Bot API. A bot can be implemented to upload an image, return a unique identifier of the uploaded content, and then use the identifier to download it.²

Twitter makes available an official API for developer users, which can be accessed through the Tweepy wrapper library. Both upload and download are supported by the API, after obtaining a set of access tokens.

² Images in Telegram can be shared as document (a file) or as photo;

3.2. Dataset analysis

Although all the considered social networks apply JPEG compression, they also carry out different operations on uploaded images. Table 3 reports a set of quantitative metrics obtained by comparing original images with shared ones. In particular, PSNR denotes the peak signal-to-noise ratio, MSE is the pixel-wise mean squared error, SSIM is the structural similarity metric, and QF is the JPEG quality factor, estimated using ImageMagick.

As can be seen, all social networks provide high PSNR and SSIM values, indicating a high visual similarity between shared images and the original ones. Relatively speaking, Facebook seems to provide the least faithful reproductions
of uploaded images, as it uses the lowest JPEG quality factor (QF = 80) and resizes images to a resolution of 720×720 pixels, leading to the lowest PSNR and the highest MSE values. High MSE values are exhibited by Twitter as well. Telegram appears to affect images the least, as it uses a JPEG quality factor QF = 87, providing the highest PSNR and the lowest MSE values.

  Platform   PSNR    MSE     SSIM   QF   Size (px)
  FB         38.37   10.53   0.97   80   720×720
  TL         40.86    5.74   0.98   87   1024×1024
  TW         39.28    9.49   0.97   85   1024×1024

  Table 3. Post-social dataset metrics.

4. Experiments

The presented TrueFace dataset was validated through a series of experiments aimed at assessing the possibility of fine-tuning a pre-trained image classification model for the synthetic-vs-real problem on social media images. In the following subsections, we first briefly introduce the employed baseline model, and then outline the performed experiments and the obtained results.

4.1. Model

For validating TrueFace, we resorted to a ResNet-50 image classification model, which has been successfully employed in similar forensic tasks [11, 35].

ResNet (standing for Residual Neural Network) [12] is a well-known CNN architecture, particularly successful for its image processing capabilities and for solving a critical issue known as the "vanishing gradient problem". CNNs are very effective at solving highly complex image processing tasks; however, the large number of hidden layers makes them progressively harder to train, due to the gradient value decreasing significantly during the backpropagation phase. ResNet addresses the problem by introducing "skip connections", that is, direct connections that skip a block of layers and feed the input of that block directly to the subsequent one. This feature helps counteract the vanishing gradient problem, thus allowing much deeper networks to be trained than previously possible.

We started from the 50-layer version of the ResNet architecture (ResNet-50) pre-trained on the ImageNet dataset. Subsequently, we ran a series of fine-tuning tests using different partitions of TrueFace. Fine-tuning is a common practice to adapt a pre-trained network to a different set of data, while keeping part of the previously learned weights. In the case of ResNet, fine-tuning is performed by recalculating only the weights of the fully connected layer.

4.2. Results

To benchmark the synthetic-vs-real detection performance, we ran a set of experiments by fine-tuning the ResNet-50 model (pre-trained on ImageNet) on specific subsets of TrueFace. In each case, a fixed 70/30 split for fine-tuning and testing is adopted, while keeping an equal number of real and fake images in each set. Accuracy is used as a performance metric.

First, we employ pre-social data only (indicated as PRE) for fine-tuning, obtaining a CNN that we will refer to as the baseline model. Then, four additional models are obtained by fine-tuning on post-social data only, first by considering the three social networks individually (FB, TL, TW) and then together (ALL). Similarly, different testing sets (PRE, FB, TL, TW, ALL) are considered, thus evaluating the models in both aligned and non-aligned scenarios. Results are reported in Table 4, where fine-tuning sets are arranged row-wise and testing sets column-wise. Accuracy values are also visualized in Figure 2.

As can be observed in the first row of Table 4, when evaluated on PRE images (i.e., when fine-tuning and testing are fully aligned), the baseline model obtains an overall accuracy of 99.89%. Thus, the detection capabilities when real/fake images are not affected by compression and platform-related processing operations are very high. In contrast, when fine-tuning and testing data are not aligned, i.e., the baseline model is tested on post-social data, the detection accuracy drops significantly for all platforms, reaching about 56% for FB and TW and 71% for TL. These results show that even detectors with extremely high classification accuracy on non-shared images struggle to retain their performance on images from social media.

In the case of post-social data only, in aligned scenarios where fine-tuning and testing images come from the same platform, accuracy values are always above 90%. In cross-platform evaluations, accuracy is on average lower than in the aligned scenarios (although never below 70%) but higher than the values obtained by the baseline model (i.e., fine-tuned only on pre-social data) on FB, TL, and TW.

Interestingly, images shared on Telegram seem easier to detect in non-aligned scenarios by all models, including the baseline one. This may be related to the lower compression level introduced by the platform compared to the other two, as previously discussed in Section 3.2. Also, when images from all three social networks are employed together in the fine-tuning (ALL), the accuracy on Telegram and Twitter images benefits from the larger fine-tuning set. In contrast, the accuracy on Facebook images slightly decreases with respect to the fully aligned scenario.

In general, such results suggest that the model accuracy can be improved by enlarging the fine-tuning sets. In particular, the synthetic-vs-real problem on images from social networks cannot be solved with models trained on non-

  Train\Test   PRE      FB       TL       TW       ALL
  PRE          99.89%   56.39%   71.62%   56.58%   60.58%
  FB           96.49%   93.45%   97.04%   81.27%   90.98%
  TL           99.73%   87.04%   97.60%   72.38%   85.00%
  TW           96.14%   87.63%   97.13%   97.12%   92.42%
  ALL          98.78%   92.38%   98.41%   98.37%   97.16%

  Table 4. Detection accuracy for each fine-tuning set (rows) and testing set (columns); FB, TL, TW, and ALL refer to post-social data.
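Since only the fully connected layer is re-trained (Section 4.1), the fine-tuning procedure amounts to training a linear classifier on frozen backbone features. The toy sketch below is not the paper's pipeline: random Gaussian vectors stand in for ResNet-50 penultimate features, and logistic regression plays the role of the fully connected head, with a 70/30 split as in the experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for frozen backbone features (2048-d in the real
# ResNet-50; 64-d here to keep the toy fast). Classes are well separated.
d = 64
real = rng.standard_normal((200, d)) + 1.0   # "real" feature vectors
fake = rng.standard_normal((200, d)) - 1.0   # "fake" feature vectors
X = np.vstack([real, fake])
y = np.r_[np.ones(200), np.zeros(200)]

# Fixed 70/30 split for fine-tuning and testing.
idx = rng.permutation(len(X))
cut = int(0.7 * len(X))
tr, te = idx[:cut], idx[cut:]

# Training only the fully connected head = logistic regression on the
# frozen features: w and b are the only parameters being updated.
w, b = np.zeros(d), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X[tr] @ w + b)))   # sigmoid activation
    g = p - y[tr]                                 # gradient of the log-loss
    w -= 0.1 * X[tr].T @ g / len(tr)
    b -= 0.1 * g.mean()

acc = ((1.0 / (1.0 + np.exp(-(X[te] @ w + b))) > 0.5) == y[te]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

The same picture carries over to the real setup: because the backbone is frozen, whatever domain shift the platform processing introduces in the features (compression, resizing) directly limits what the linear head can separate, which is consistent with the accuracy drops in Table 4.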
Figure 4. Misalignment loss, fine-tuning gain, and forgetting loss of the fine-tuned ResNet-50 models. Empty and filled symbols denote the baseline and refined models, respectively; circles and triangles denote tests on PRE and POST data, respectively.

5. Conclusions

This paper presents TrueFace, a first benchmarking dataset of synthetic and real human faces shared on social media. The dataset is made of two parts. The pre-social collection includes a total of 150K images, where the synthetic ones are generated using StyleGAN and StyleGAN2. The post-social collection includes 20K images (taken from the pre-social one) as they result after being shared on Facebook, Telegram, and Twitter (yielding a total of 60K shared images). This new dataset aims at providing the research community with the possibility to test new detectors discriminating real and fake faces, both in a controlled laboratory scenario and in the more realistic one where media are shared on social networks.

Experimental tests performed by fine-tuning a general-purpose CNN show that even highly accurate detectors on pre-social data struggle to retain their detection capabilities when applied to images shared on social platforms. Increasing the fine-tuning set with shared images mitigates such generalization issues, although the network tends to "forget" and decrease in accuracy on pre-social data. This calls for future investigations on training strategies that can simultaneously optimize detection performance on a variety of pre-social and post-social images.

Acknowledgment

This work has received funding from the TruBlo consortium as part of the TrueBees project (https://www.trublo.eu/truebees/). TrueBees is a project carried out by the University of Trento and U-Hopper. It is funded by TruBlo under Europe's Horizon 2020 programme (grant agreement No. 957228).

This work was also partially supported by the project PREMIER (PREserving Media trustworthiness in the artificial Intelligence ERa), funded by the Italian Ministry of Education, University, and Research (MIUR) within the PRIN 2017 program.

The authors wish to thank Mattia Florio for his valuable contribution to the construction of the post-social dataset.

References

[1] M. Barni, K. Kallas, E. Nowroozi, and B. Tondi. CNN detection of GAN-generated face images based on cross-band co-occurrences analysis. In Proceedings of the IEEE International Workshop on Information Forensics and Security, pages 1–6, 2020.
[2] D. Bau, J.-Y. Zhu, J. Wulff, W. Peebles, H. Strobelt, B. Zhou, and A. Torralba. Seeing what a GAN cannot generate. In Proceedings of the IEEE International Conference on Computer Vision, pages 4502–4511, 2019.
[3] L. Chai, D. Bau, S.-N. Lim, and P. Isola. What makes fake images detectable? Understanding properties that generalize. In Proceedings of the European Conference on Computer Vision, pages 103–120, 2020.
[4] D. Chen, J. Li, S. Wang, and S. Li. Identifying computer generated and digital camera images using fractional lower order moments. In Proceedings of the IEEE Conference on Industrial Electronics and Applications, pages 230–235, 2009.
[5] D. Dang-Nguyen, G. Boato, and F. De Natale. Identify computer generated characters by analysing facial expressions variation. In Proceedings of the IEEE International Workshop on Information Forensics and Security, pages 252–257, 2012.
[6] D.-T. Dang-Nguyen, G. Boato, and F. G. De Natale. 3D-model-based video analysis for computer generated faces identification. IEEE Transactions on Information Forensics and Security, 10(8):1752–1763, 2015.
[7] A. E. Dirik, H. T. Sencar, and N. Memon. Source camera identification based on sensor dust characteristics. In Proceedings of the IEEE Workshop on Signal Processing Applications for Public Security and Forensics, pages 1–6, 2007.
[8] R. Durall, M. Keuper, and J. Keuper. Watch your up-convolution: CNN based generative deep neural networks are failing to reproduce spectral distributions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7890–7899, 2020.
[9] J. Frank, T. Eisenhofer, L. Schönherr, A. Fischer, D. Kolossa, and T. Holz. Leveraging frequency analysis for deep fake image recognition. In Proceedings of the International Conference on Machine Learning, pages 3247–3258, 2020.
[10] A. C. Gallagher and T. Chen. Image authentication by detecting traces of demosaicing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 1–8, 2008.
[11] D. Gragnaniello, D. Cozzolino, F. Marra, G. Poggi, and L. Verdoliva. Are GAN generated images easy to detect? A critical analysis of the state-of-the-art. In Proceedings of the IEEE International Conference on Multimedia and Expo, pages 1–6, 2021.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[13] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401–4410, 2019.
[14] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110–8119, 2020.
[15] F. Lago, C. Pasquini, R. Böhme, H. Dumont, V. Goffaux, and G. Boato. More real than real: A study on human visual perception of synthetic faces. IEEE Signal Processing Magazine, 39(1):109–116, 2021.
[16] H. Li, B. Li, S. Tan, and J. Huang. Identification of deep network generated images using disparities in color components. Signal Processing, 174:107616, 2020.
[17] S. Lyu and H. Farid. How realistic is photorealistic? IEEE Transactions on Signal Processing, 53(2):845–850, 2005.
[18] F. Marcon, C. Pasquini, and G. Boato. Detection of manipulated face videos over social networks: A large-scale study. Journal of Imaging, 7(10):193, 2021.
[19] F. Marra, D. Gragnaniello, D. Cozzolino, and L. Verdoliva. Detection of GAN-generated fake images over social networks. In Proceedings of the IEEE Conference on Multimedia Information Processing and Retrieval, pages 384–389, 2018.
[20] F. Marra, D. Gragnaniello, L. Verdoliva, and G. Poggi. Do GANs leave artificial fingerprints? In Proceedings of the IEEE Conference on Multimedia Information Processing and Retrieval, pages 506–511, 2019.
[21] F. Marra, C. Saltori, G. Boato, and L. Verdoliva. Incremental learning for the detection and classification of GAN-generated images. In Proceedings of the IEEE International Workshop on Information Forensics and Security, pages 1–6, 2019.
[22] S. McCloskey and M. Albright. Detecting GAN-generated imagery using saturation cues. In Proceedings of the IEEE International Conference on Image Processing, pages 4584–4588, 2019.
[23] R. Metz. Deepfakes are now trying to change the course of war. https://edition.cnn.com/2022/03/25/tech/deepfakes-disinformation-war/index.html, 2022.
[24] L. Nataraj, T. M. Mohammed, B. Manjunath, S. Chandrasekaran, A. Flenner, J. H. Bappy, and A. K. Roy-Chowdhury. Detecting GAN generated fake images using co-occurrence matrices. Electronic Imaging, 2019(5):532-1, 2019.
[25] T.-T. Ng, S.-F. Chang, J. Hsu, L. Xie, and M.-P. Tsui. Physics-motivated features for distinguishing photographic images and computer graphics. In Proceedings of the ACM International Conference on Multimedia, pages 239–248, 2005.
[26] M. Ngo, S. Karaoglu, and T. Gevers. Self-supervised face image manipulation by conditioning GAN on face decomposition. IEEE Transactions on Multimedia, pages 1–1, 2021.
[27] C. Pasquini, I. Amerini, and G. Boato. Media forensics on social media platforms: a survey. EURASIP Journal on Information Security, 2021(1):1–19, 2021.
[28] C. Pasquini, C. Brunetta, A. F. Vinci, V. Conotter, and G. Boato. Towards the verification of image integrity in online news. In Proceedings of the IEEE International Conference on Multimedia Expo Workshops, pages 1–6, 2015.
[29] N. Rahmouni, V. Nozick, J. Yamagishi, and I. Echizen. Distinguishing computer graphics from natural images using convolution neural networks. In Proceedings of the IEEE Workshop on Information Forensics and Security, pages 1–6, 2017.
[30] A. Rossler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner. FaceForensics++: Learning to detect manipulated facial images. In Proceedings of the IEEE International Conference on Computer Vision, pages 1–11, 2019.
[31] S. J. Nightingale and H. Farid. AI-synthesized faces are indistinguishable from real faces and more trustworthy. Proceedings of the National Academy of Sciences, 119(8), 2022.
[32] J. Thies, M. Zollhofer, M. Stamminger, C. Theobalt, and M. Niessner. Face2Face: Real-time face capture and reenactment of RGB videos. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2016.
[33] T. Thomson, D. Angus, and P. Dootson. 3.2 billion images and 720,000 hours of video are shared online daily. Can you sort real from fake? https://bit.ly/3NDupjY, 2020.
[34] L. Verdoliva. Media forensics and deepfakes: An overview. IEEE Journal of Selected Topics in Signal Processing, 14(5):910–932, 2020.
[35] S.-Y. Wang, O. Wang, R. Zhang, A. Owens, and A. A. Efros. CNN-generated images are surprisingly easy to spot... for now. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8695–8704, 2020.
[36] X. Xuan, B. Peng, W. Wang, and J. Dong. On the generalization of GAN image forensics. In Proceedings of the Chinese Conference on Biometric Recognition, pages 134–141, 2019.
[37] N. Yu, L. S. Davis, and M. Fritz. Attributing fake images to GANs: Learning and analyzing GAN fingerprints. In Proceedings of the IEEE International Conference on Computer Vision, pages 7556–7566, 2019.
[38] S. Zannettou, M. Sirivianos, J. Blackburn, and N. Kourtellis. The web of false information: Rumors, fake news, hoaxes, clickbait, and various other shenanigans. Journal of Data and Information Quality, 11(3):1–37, 2019.
[39] X. Zhang, S. Karaman, and S.-F. Chang. Detecting and simulating artifacts in GAN fake images. In Proceedings of the IEEE International Workshop on Information Forensics and Security, pages 1–6, 2019.