

SSGAN: Image Generation from Freehand Scene Sketches


Mengying Ji 1,*, Xianlin Zhang 1, Xueming Li 1
1 School of Digital Media and Design Arts, Beijing University of Posts and Telecommunications; Beijing Key Laboratory of Network System and Network Culture, Beijing, China
* Corresponding author's email: [email protected]

Abstract
With the remarkable progress of deep CNNs, recent approaches have achieved notable success in image generation from scene-level freehand sketches. However, most existing research adopts a two-stage approach, generating the foreground and the background of the image separately. In this paper, we propose a novel one-stage GAN-based architecture, named SSGAN, which generates images from sketches directly. Moreover, we design a novel Semantic Fusion Module (SFM) to better learn the intermediate features. Extensive experiments on SketchyCOCO demonstrate that our proposed framework obtains competitive performance compared with state-of-the-art methods.

1 Introduction

In recent years, the advent of Generative Adversarial Networks (GANs) has had a huge influence on the progress of image synthesis research. In particular, high-fidelity, realistic-looking images can be generated by unconditional generative models trained on object-level data (e.g., face images [12][13]). For practical applications, generating photo-realistic images conditioned on a given input can be more useful. This has been widely investigated in recent years: conditional generative approaches have used class labels [1][15], text [10], sketches [4][7][14], layouts [19], and semantic maps [16][20][21] to describe the desired image.
In this paper, we are interested in a specific form of conditional image synthesis: converting a scene-level freehand sketch into a photorealistic image. Compared to class labels and text, a freehand sketch expresses the user's intention more intuitively. Compared to layouts and semantic maps, freehand sketches are more universal. However, image generation from scene-level freehand sketches is a challenging task because (a) sketches are abstract, and people may draw the same object in very different ways, and (b) users pay uneven attention to the contents, so the background of a sketch is usually rough or even missing.
Besides, research on this task is still sparse even though generative learning methods are springing up everywhere. Most existing work adopts a two-stage approach, which entails complicated training and wasted computation. Different from these recent methods, we propose a one-stage scene-level sketch-based architecture for generative adversarial networks (SSGAN) to address the sketch-to-image problem by learning sketch-to-mask-to-image. Freehand scene sketches contain many blank regions, and inferring the semantics of these blanks from the existing strokes is a natural intermediate step. Thus, we propose a Semantic Fusion Module (SFM) to realize the sketch-to-mask-to-image pipeline in our model. That is, given a freehand scene sketch, an image is generated in two steps: (i) a pre-trained semantic segmentation network obtains semantic information about the scene sketch; (ii) feeding the semantic mask of the scene sketch into SSGAN generates a high-resolution image with visually consistent foreground and background.
Our contributions are summarized as follows:
- We present a new unified end-to-end pipeline that realizes image generation from scene-level freehand sketches.
- We present the Semantic Fusion Module (SFM) to achieve the sketch-to-mask-to-image pipeline.
- Experiments on the SketchyCOCO dataset demonstrate the effectiveness of the proposed model.
The remainder of this paper is organized as follows: Sec. 1 is the Introduction, Sec. 2 presents related work, Sec. 3 describes the proposed SSGAN model, Sec. 4 reports experimental results and analysis, and Sec. 5 gives the conclusion.

2 Related Work

2.1 Sketch-to-Image
Early sketch-based image synthesis approaches were based on image retrieval. For instance, both Sketch2Photo [3] and PhotoSketcher [5] first retrieve objects and backgrounds matching a given sketch and then synthesize a realistic image by compositing them. SketchyGAN [4] proposed a training scheme that gradually transitions from edge-to-image synthesis to sketch-to-image synthesis. ContextualGAN [14] proposed a novel sketch-edge joint image completion approach. Sketch Your Own GAN [22] matches user sketches by adjusting a subset of the weights of generative models pretrained on large-scale data. These methods have demonstrated the value of GANs for image generation from object-level sketches. SketchyCOCO [7] was the first to propose a two-stage method for image generation from scene sketches. Different from SketchyCOCO, which generates the foreground and the background in two stages, our approach designs a single GAN network for generation.


Figure 1 Overview of the proposed SSGAN for image synthesis from scene-level freehand sketches. Given a scene-level freehand sketch, we obtain the semantic mask of the sketch using a pre-trained segmentation model. The proposed Semantic Fusion Module (SFM) realizes the sketch-to-mask-to-image learning for the generative problem of sketch-to-image. The bottom-right of the figure illustrates the SFM.

2.2 Semantic-to-Image
Semantic image synthesis aims to turn semantic label maps into photo-realistic images. For instance, GauGAN [16] proposed a spatially-adaptive normalization to preserve the semantic information of the input semantic masks when generating photorealistic images. DAGAN [21] proposed two modules, a position-wise Spatial Attention Module (SAM) and a scale-wise Channel Attention Module (CAM), to learn spatial attention and channel attention, respectively. OASIS [20] replaced the original discriminator with a segmentation-based discriminator. CC-FPSE [26] generated the intermediate feature maps by predicting convolutional kernels conditioned on the semantic label map. Semantic information provides useful guidance for image generation.

3 Method
In this section, we first define the problem formulation in Sec. 3.1. We then introduce the architecture of SSGAN (Sec. 3.2) and present the Semantic Fusion Module (SFM) (Sec. 3.3). Finally, the optimization objective of the proposed framework is presented (Sec. 3.4).

3.1 Problem Formulation
Assume a set of scene-level freehand sketches $\mathcal{S}$ and their corresponding images $\mathcal{I}$. Given a ground-truth image $I \in \mathcal{I}$ and its corresponding scene sketch $S \in \mathcal{S}$, we want to find a generator function $G$ that captures the underlying conditional data distribution $p(I \mid S, z_{\mathrm{img}})$, where $z_{\mathrm{img}}$ is a latent code used to control the overall style of the image. Similar to [18], we express our task as in Equation 1:

$I = G(S, z_{\mathrm{img}}; \theta_G)$   (1)

where $\theta_G$ represents the parameters of the generator.

3.2 Architecture
As illustrated in Figure 1, given a scene-level freehand sketch $S$, we first convert $S$ into a semantic segmentation map $M_0 \in \{0,1\}^{H \times W \times C}$ by leveraging the sketch segmentation method in [25], where $C$ denotes the number of categories, and $H$ and $W$ are the height and width of the semantic segmentation map, respectively. After that, taking $M_0$ as input, the final image is produced by SSGAN.
The structure of SSGAN is shown in Figure 1; the whole network is composed of several SPADE layers [16]. However, unlike the original SPADE layer, which uses pre-existing masks from the datasets as inputs, we use the semantic masks learned by the SFM.
Specifically, let $x_i$ denote the output feature of the $i$-th SPADE layer; $x_i$ is mapped to an intermediate mask $m_i$ through a simple 'ToMask' operation, which is implemented by Conv + Sigmoid:

$m_i = \mathrm{ToMask}(x_i)$   (2)

Then, we feed $m_i$ and $M_0$ into the SFM to get a new semantic feature map $M_i$, which serves as the input of the next SPADE layer. Note that the input to the first SPADE layer is $M_0$.

$M_i = \mathrm{SFM}(m_i, M_0)$   (3)
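To make the stage-wise pipeline concrete, the following is a minimal PyTorch sketch (ours, not the authors' released code) of a SPADE block in the spirit of [16], the 'ToMask' operation of Eq. (2), and the stage loop of Eqs. (2)-(3). Channel widths, kernel sizes, the number of stages, and the `fuse` placeholder that stands in for the SFM of Sec. 3.3 are all illustrative assumptions; upsampling between stages is omitted for brevity.

```python
# A minimal PyTorch sketch of the SSGAN generator loop described in Sec. 3.2.
# The SPADE block follows the general recipe of [16] (normalization whose scale
# and shift are predicted from a mask); all sizes here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SPADE(nn.Module):
    """Parameter-free normalization modulated by a semantic mask (after [16])."""
    def __init__(self, feat_ch, mask_ch, hidden=128):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_ch, affine=False)
        self.shared = nn.Sequential(nn.Conv2d(mask_ch, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feat_ch, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_ch, 3, padding=1)

    def forward(self, x, mask):
        mask = F.interpolate(mask, size=x.shape[2:], mode='nearest')
        h = self.shared(mask)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)


class ToMask(nn.Module):
    """'ToMask' operation of Eq. (2): Conv + Sigmoid."""
    def __init__(self, feat_ch, num_classes):
        super().__init__()
        self.conv = nn.Conv2d(feat_ch, num_classes, 1)

    def forward(self, x):
        return torch.sigmoid(self.conv(x))


def generator_stage_loop(x, m0, spade_blocks, tomask_ops, fuse):
    """Runs Eqs. (2)-(3): each stage normalizes with the current mask, predicts an
    intermediate mask m_i, and fuses it with M_0 to feed the next stage.
    `fuse` is a stand-in for the SFM sketched in Sec. 3.3."""
    mask = m0  # the first SPADE layer takes M_0 directly
    for spade, tomask in zip(spade_blocks, tomask_ops):
        x = F.leaky_relu(spade(x, mask), 0.2)
        m_i = tomask(x)          # Eq. (2)
        mask = fuse(m_i, m0)     # Eq. (3): M_i = SFM(m_i, M_0)
    return x, mask
```

In this sketch `fuse` could be as simple as a channel-wise combination; the fusion actually used in SSGAN is the SFM described in Sec. 3.3.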


3.3 Semantic Fusion Module (SFM)
The Semantic Fusion Module (SFM) is introduced to learn the mask from the feature maps at different stages of the generator. The semantic mask of a scene sketch contains many unknown regions, so the SFM encodes the semantic sketch together with the intermediate mask obtained by the 'ToMask' operation to hallucinate a new fine-grained mask map.
Mathematically, we define $m_i \in \{0,1\}^{H \times W \times C}$ as the intermediate mask from the $i$-th SPADE layer, $M_0 \in \{0,1\}^{H \times W \times C}$ as the input semantic sketch, and $M_f \in \{0,1\}^{H \times W \times C}$ as the foreground segmentation of the sketch, which is kept the same shape as $M_0$ by zero padding. As illustrated in Figure 1, we first use a convolutional network $\mathcal{F}_1$ to encode the label maps into feature maps:

$f_0 = \mathcal{F}_1(M_f) \oplus P_{\mathrm{mean}}\big(\mathcal{F}_1(M_0, m_i)\big)$   (4)

where $P_{\mathrm{mean}}$ represents average pooling and $\oplus$ denotes element-wise addition. Average pooling is used because it preserves background information better. We then use another convolutional network $\mathcal{F}_2$ to obtain the final updated feature map:

$f = f_0 \oplus \mathcal{F}_2(f_0)$   (5)

The final feature map $f$ contains information from both the sketch and the hallucinated stage segmentation map.
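The following is a minimal PyTorch sketch of the SFM as we read Eqs. (4)-(5). The paper does not specify how $M_0$ and $m_i$ enter $\mathcal{F}_1$ jointly, the pooling window, or the channel widths, so the channel-wise concatenation, the two-branch encoder, and the stride-1 3 × 3 average pooling below are our assumptions.

```python
# A minimal PyTorch sketch of the Semantic Fusion Module (Eqs. (4)-(5)).
# Assumptions (not specified in the paper): M_0 and m_i are concatenated along
# the channel axis before F_1, F_1 therefore has two input branches, P_mean is a
# stride-1 3x3 average pooling so the spatial size is preserved, and the output
# keeps C channels so it can be fed to the next SPADE layer.
import torch
import torch.nn as nn


class SFM(nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        c = num_classes
        # F_1: encodes the label maps into feature maps (two branches, see above)
        self.f1_fg = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU())
        self.f1_joint = nn.Sequential(nn.Conv2d(2 * c, c, 3, padding=1), nn.ReLU())
        # P_mean: average pooling, used because it preserves background information
        self.pool = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)
        # F_2: refines the fused features
        self.f2 = nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(c, c, 3, padding=1))

    def forward(self, m_i, m0, m_f):
        # Eq. (4): f_0 = F_1(M_f) (+) P_mean(F_1(M_0, m_i))
        f0 = self.f1_fg(m_f) + self.pool(self.f1_joint(torch.cat([m0, m_i], dim=1)))
        # Eq. (5): f = f_0 (+) F_2(f_0)
        return f0 + self.f2(f0)


# Usage sketch: masks are N x C x H x W tensors; C = 17 would correspond to the
# 3 background + 14 foreground classes of SketchyCOCO.
if __name__ == "__main__":
    sfm = SFM(num_classes=17)
    m_i = torch.rand(1, 17, 64, 64)   # intermediate mask from 'ToMask'
    m0 = torch.rand(1, 17, 64, 64)    # semantic sketch M_0
    m_f = torch.rand(1, 17, 64, 64)   # foreground segmentation M_f (zero-padded)
    print(sfm(m_i, m0, m_f).shape)    # torch.Size([1, 17, 64, 64])
```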
3.4 Objective
We train the generator with the same multi-scale discriminator and loss functions used in GauGAN [16]: the discriminator adopts the hinge loss, while the generator is optimized with three losses, namely the hinge-based adversarial loss, the discriminator feature matching loss, and the perceptual loss.
The loss function for the discriminator is defined in Equation 6:

$L_D = -\mathbb{E}_{(x,s)}\big[\min\big(0, -1 + D(x, s)\big)\big] - \mathbb{E}_{(z,s)}\big[\min\big(0, -1 - D(G(z, s), s)\big)\big]$   (6)

where $x$, $s$, and $z$ denote the real image, the semantic label map of the sketch, and the input noise map, respectively. The loss function for the generator is defined in Equation 7:

$L_G = -\mathbb{E}_{(z,s)}\big[D(G(z, s), s)\big] + \lambda_{FM}\,\mathbb{E}_{(z,s)}\big[L_{FM}(G(z, s), x)\big] + \lambda_P\,\mathbb{E}_{(z,s)}\big[L_P(G(z, s), x)\big]$   (7)

where $L_{FM}(G(z, s), x)$ is the discriminator feature matching loss and $L_P(G(z, s), x)$ is the perceptual loss. We set both $\lambda_{FM}$ and $\lambda_P$ to 10 in our experiments.
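As a reference for Eqs. (6)-(7), here is a minimal PyTorch sketch of the hinge objectives. The feature-matching and perceptual terms come from GauGAN [16] and are passed in as callables rather than re-implemented, and the shapes of the discriminator logits are left generic; this is a sketch under those assumptions, not the released training code.

```python
# A minimal PyTorch sketch of the hinge objectives in Eqs. (6)-(7).
# `d_real` / `d_fake` are discriminator logits; `fm_loss` and `perc_loss` stand
# for the feature-matching and perceptual losses borrowed from GauGAN [16] and
# are assumed to be provided elsewhere. Weights follow the paper:
# lambda_FM = lambda_P = 10.
import torch
import torch.nn.functional as F

LAMBDA_FM = 10.0
LAMBDA_P = 10.0


def discriminator_hinge_loss(d_real, d_fake):
    # Eq. (6): -E[min(0, -1 + D(x, s))] - E[min(0, -1 - D(G(z, s), s))]
    loss_real = torch.mean(F.relu(1.0 - d_real))   # -min(0, -1 + t) == relu(1 - t)
    loss_fake = torch.mean(F.relu(1.0 + d_fake))   # -min(0, -1 - t) == relu(1 + t)
    return loss_real + loss_fake


def generator_loss(d_fake, fake_img, real_img, fm_loss, perc_loss):
    # Eq. (7): hinge adversarial term plus weighted feature-matching and
    # perceptual terms.
    adv = -torch.mean(d_fake)
    return (adv
            + LAMBDA_FM * fm_loss(fake_img, real_img)
            + LAMBDA_P * perc_loss(fake_img, real_img))
```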
4 Experiments

4.1 Experimental Setup

4.1.1 Dataset and Evaluation Metrics
We use the SketchyCOCO [7] dataset to evaluate our SSGAN. SketchyCOCO is the only scene-level freehand sketch dataset, covering 3 background classes and 14 foreground classes. It collects natural images from COCO Stuff [2] and, using the segmentation masks of these natural images as references, generates scene sketches by compositing instance freehand sketches from Sketchy [17], TU-Berlin [6], and QuickDraw [8]. The SketchyCOCO dataset contains 14081 images, split into two sets: 80% for training and the remaining 20% for testing.
We use two metrics to evaluate the generated images. The first metric is FID [9], which has been widely used to evaluate the quality of generated images; the lower the FID value, the more realistic the image. The other metric is the structural similarity index (SSIM) [23], which quantifies the structural similarity between the generated image and the ground-truth image; the higher the SSIM value, the closer they are.
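For reference, both metrics can be computed with common off-the-shelf tools; the use of scikit-image for SSIM and the pytorch-fid command-line tool for FID below is our assumption for illustration, not tooling stated in the paper.

```python
# Example metric computation with off-the-shelf tools (assumed here for
# illustration; the paper does not state which implementations were used).
import numpy as np
from skimage.metrics import structural_similarity


def ssim_score(generated: np.ndarray, ground_truth: np.ndarray) -> float:
    """SSIM between two uint8 RGB images of the same size (higher is better)."""
    # On older scikit-image versions, replace channel_axis=-1 with multichannel=True.
    return structural_similarity(generated, ground_truth,
                                 channel_axis=-1, data_range=255)

# FID (lower is better) is typically computed over two image folders, e.g. with
# the pytorch-fid package:
#   python -m pytorch_fid path/to/real_images path/to/generated_images
```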
4.1.2 Methods in Comparison
SketchyCOCO [7] is the only existing method specifically designed for image generation from scene-level freehand sketches. In addition to comparing our approach with it, we also compare with advanced approaches that generate images from other forms of input (e.g., layouts, semantic masks).
- SketchyCOCO: SketchyCOCO introduced the first method for automatic image generation from scene-level freehand sketches; EdgeGAN [7] and Pix2Pix [11] are used to generate the foreground and the background, respectively.
- GauGAN [16]: The GauGAN model takes semantic segmentation maps as input. We test the public model pre-trained on the COCO Stuff dataset. In addition, we reuse the results reported in SketchyCOCO in our comparisons, where a GauGAN model is trained by taking the semantic sketches of the SketchyCOCO dataset as input.
- LostGANs [19]: The LostGANs model takes layouts as input. We compare with their pre-trained model trained on the COCO Stuff dataset. To ensure fairness, we restrict the categories in the generated images and test only the categories included in the SketchyCOCO dataset.

4.1.3 Implementation Details
We evaluate our SSGAN at a resolution of 256 × 256. We follow the standard GAN training procedure and alternately train the generator G and the discriminator D. We use Adam as the optimizer with β1 = 0 and β2 = 0.999. The learning rates for the generator and the discriminator are both set to 0.0002. We conduct the experiments on a single NVIDIA 2080Ti GPU.
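A minimal sketch of the alternating G/D update with the stated settings (Adam, β1 = 0, β2 = 0.999, learning rate 0.0002 for both networks) follows; `generator`, `discriminator`, and the loss helpers are placeholders (e.g., the sketches above), not the released training code.

```python
# A minimal sketch of the alternating G/D training step with the optimizer
# settings from Sec. 4.1.3. All model and loss objects are placeholders.
import torch


def build_optimizers(generator, discriminator):
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.0, 0.999))
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.0, 0.999))
    return opt_g, opt_d


def train_step(generator, discriminator, opt_g, opt_d, real_img, mask, z,
               d_loss_fn, g_loss_fn):
    # 1) update the discriminator on real and generated images
    with torch.no_grad():
        fake_img = generator(mask, z)
    d_loss = d_loss_fn(discriminator(real_img, mask), discriminator(fake_img, mask))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) update the generator
    fake_img = generator(mask, z)
    g_loss = g_loss_fn(discriminator(fake_img, mask), fake_img, real_img)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```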


4.2 Quantitative Results
We provide quantitative results in Table 1. Clearly, the GauGAN model trained using semantic maps is superior to ours in terms of FID and SSIM. However, the semantic map specifies the category of each pixel and therefore offers a tighter constraint than a sketch. Another reason is that the GauGAN model trained using semantic maps covers all categories in the COCO Stuff dataset, while our model is trained on SketchyCOCO, which contains only a subset of the categories in the ground truth. Compared with the GauGAN model trained using semantic sketches, SSGAN obtains the same SSIM score but a better FID score, indicating that the SFM can effectively learn fine-grained masks. Compared with the scene-level sketch-based image generation baseline SketchyCOCO, our SSGAN achieves a better FID score but a lower SSIM score. This may be because SketchyCOCO generates the foreground separately and uses the generated foreground instances as constraints, which provide a more explicit spatial constraint.

Table 1 The results of the quantitative experiments
Model                   FID↓    SSIM↑
LostGANs-layout         134.6   0.280
GauGAN-semantic map     80.3    0.306
GauGAN-semantic sketch  215.1   0.285
SketchyCOCO-scene       164.8   0.288
Ours                    123.8   0.285

4.3 Qualitative Results
Figure 2 shows the images generated by our method and the comparison methods. Note that we cannot reproduce the results of SketchyCOCO because it only provides the pre-trained foreground generation model, not the pre-trained background generation model. Figure 2 demonstrates that SSGAN is able to generate complex images with multiple objects from simple scene-level freehand sketches, and the generated images respect the constraints of the input sketches. Our approach produces much better results than LostGANs, which uses layouts as input. However, compared to the GauGAN model trained using semantic maps, our approach produces slightly worse images. This is consistent with our analysis of the quantitative results.

Figure 2 Scene-level comparison. (a) Input layout, (b) images generated by LostGANs, (c) input semantic map, (d) images generated by GauGAN, (e) input scene-level freehand sketch, (f) images generated by our SSGAN.

In Figure 3 we show the effectiveness of the proposed SFM. Column (c) shows the semantic masks learned by the SFM. It is clear that our approach represents the foreground objects accurately and infers the background from limited information.

Figure 3 (a) Input scene-level freehand sketches, (b) semantic segmentations of the scene sketches, (c) masks learned by the SFM, (d) images generated by our SSGAN.

5 Conclusion
In this paper, we propose SSGAN for synthesizing images from scene-level freehand sketches, which uses a joint learning paradigm to transform sketch-to-image into sketch-to-mask-to-image. Specifically, we present a new module, the SFM, which fuses the stage-wise segmentation masks and the semantic sketches to realize the sketch-to-mask-to-image pipeline. Comprehensive experiments on the SketchyCOCO dataset demonstrate the effectiveness of our proposed model.


References
[1] Brock, Andrew, Jeff Donahue, and Karen Simonyan. "Large Scale GAN Training for High Fidelity Natural Image Synthesis." ArXiv:1809.11096, 2018.
[2] Caesar, Holger, Jasper Uijlings, and Vittorio Ferrari. "COCO-Stuff: Thing and Stuff Classes in Context." ArXiv:1612.03716 [Cs], March 28, 2018.
[3] Chen, Tao, Ming-Ming Cheng, Ping Tan, Ariel Shamir, and Shi-Min Hu. "Sketch2Photo: Internet Image Montage." ACM Transactions on Graphics 28, no. 5 (December 2009): 1–10.
[4] Chen, Wengling, and James Hays. "SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis." ArXiv:1801.02753 [Cs], April 12, 2018.
[5] Eitz, M., R. Richter, K. Hildebrand, T. Boubekeur, and M. Alexa. "Photosketcher: Interactive Sketch-Based Image Synthesis." IEEE Computer Graphics and Applications 31, no. 6 (November 2011): 56–66.
[6] Eitz, Mathias, James Hays, and Marc Alexa. "How Do Humans Sketch Objects?" ACM Transactions on Graphics 31, no. 4 (August 5, 2012): 1–10.
[7] Gao, Chengying, Qi Liu, Qi Xu, Limin Wang, Jianzhuang Liu, and Changqing Zou. "SketchyCOCO: Image Generation from Freehand Scene Sketches." ArXiv:2003.02683 [Cs], April 7, 2020.
[8] Ha, D., and D. Eck. "A Neural Representation of Sketch Drawings," 2017.
[9] Heusel, M., H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter. "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium," 2017.
[10] Hong, Seunghoon, Dingdong Yang, Jongwook Choi, and Honglak Lee. "Inferring Semantic Layout for Hierarchical Text-to-Image Synthesis." ArXiv:1801.05091 [Cs], July 25, 2018.
[11] Isola, Phillip, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. "Image-to-Image Translation with Conditional Adversarial Networks." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1125–34, 2017.
[12] Karras, Tero, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. "Training Generative Adversarial Networks with Limited Data." ArXiv:2006.06676 [Cs, Stat], October 7, 2020.
[13] Karras, Tero, Samuli Laine, and Timo Aila. "A Style-Based Generator Architecture for Generative Adversarial Networks." ArXiv:1812.04948 [Cs, Stat], March 29, 2019.
[14] Lu, Yongyi, Shangzhe Wu, Yu-Wing Tai, and Chi-Keung Tang. "Image Generation from Sketch Constraint Using Contextual GAN." ArXiv:1711.08972 [Cs], July 25, 2018.
[15] Mirza, Mehdi, and Simon Osindero. "Conditional Generative Adversarial Nets." ArXiv:1411.1784 [Cs, Stat], November 6, 2014.
[16] Park, Taesung, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu. "Semantic Image Synthesis with Spatially-Adaptive Normalization." ArXiv:1903.07291 [Cs], November 5, 2019.
[17] Sangkloy, Patsorn, Nathan Burnell, Cusuh Ham, and James Hays. "The Sketchy Database: Learning to Retrieve Badly Drawn Bunnies." ACM Transactions on Graphics 35, no. 4 (July 11, 2016): 1–12.
[18] Sun, Wei, and Tianfu Wu. "Image Synthesis From Reconfigurable Layout and Style," n.d., 10.
[19] Sun, Wei, and Tianfu Wu. "Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis." ArXiv:2003.11571 [Cs], March 26, 2021.
[20] Sushko, Vadim, Edgar Schönfeld, Dan Zhang, Juergen Gall, Bernt Schiele, and Anna Khoreva. "You Only Need Adversarial Supervision for Semantic Image Synthesis." ArXiv:2012.04781 [Cs, Eess], March 19, 2021.
[21] Tang, Hao, Song Bai, and Nicu Sebe. "Dual Attention GANs for Semantic Image Synthesis." ArXiv:2008.13024 [Cs], August 29, 2020.
[22] Wang, Sheng-Yu, David Bau, and Jun-Yan Zhu. "Sketch Your Own GAN." ArXiv:2108.02774 [Cs], September 20, 2021.
[23] Wang, Z. "Image Quality Assessment: From Error Visibility to Structural Similarity." IEEE Transactions on Image Processing, 2004.
[24] Zhao, Bo, Lili Meng, Weidong Yin, and Leonid Sigal. "Image Generation From Layout." In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8576–85. Long Beach, CA, USA: IEEE, 2019.
[25] Zou, Changqing, Haoran Mo, Chengying Gao, Ruofei Du, and Hongbo Fu. "Language-Based Colorization of Scene Sketches." ACM Transactions on Graphics 38, no. 6 (November 8, 2019): 1–16.
[26] Wang, Ting-Chun, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. "High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs." ArXiv:1711.11585 [Cs], August 20, 2018.

