
B-OPC: A 1-Bit Supervised Learning Framework for Mask Optimization in Computational Lithography
Hsiao-Tung Chang¹ and Chih-Chin Lai²

¹ Department of Photonics Engineering, National Sun Yat-sen University, Kaohsiung, 804201, Taiwan. Email: [email protected]
² Department of Electrical Engineering, National University of Kaohsiung, Kaohsiung, 811726, Taiwan. Email: [email protected]

January 5, 2025

Keywords— Binary Neural Networks, Convolutional Neural Networks, Mask Optimization, Optical Proximity Correction, Computational Lithography

Abstract
This paper introduces B-OPC, a novel mask optimization framework
based on Binary Neural Networks (BNNs), designed to enhance compu-
tational efficiency while maintaining high performance in semiconductor
lithography. The proposed approach leverages the inherent binary nature
of mask patterns and design targets, enabling significant reductions in
memory usage and computational overhead. Experimental results demon-
strate that the trained model effectively compensates for diffraction effects
on specific design patterns. For 10 ICCAD 2013 contest design patterns,
B-OPC achieves competitive performance compared to traditional meth-
ods such as Neural-ILT and PGAN-OPC, while substantially reducing
training time and computational resource requirements. The framework
introduces a new direction for deep learning-based mask optimization, of-
fering a scalable and efficient solution for large-scale semiconductor man-
ufacturing.

1 Introduction
Lithography is a critical process in semiconductor manufacturing, where circuit pat-
terns are transferred onto wafers by projecting light through a photomask. As tran-
sistor sizes continue to shrink, optical effects such as diffraction increasingly distort
the projected patterns, making mask optimization essential for achieving accurate pat-
tern reproduction. Optical Proximity Correction (OPC) addresses these distortions
by modifying the mask geometry to minimize deviations between the intended and
printed patterns. Traditional OPC methods include model-based approaches [1, 2],
which iteratively simulate and refine patterns, and Inverse Lithography Techniques
(ILT) [3, 4], which solve an inverse problem to derive optimal mask designs. While
these methods yield high-quality results, their high computational cost and runtime
pose significant challenges to scalability, prompting ongoing efforts to improve effi-
ciency and practicality.

In recent years, deep learning methods have emerged as powerful tools for reducing
runtime and enhancing the printability of optimized masks [5, 6, 7, 8, 9, 10]. Unlike
ILT, which relies on iterative gradient descent to minimize loss and refine mask de-
signs, deep neural network (DNN)-based approaches can generate optimized masks
in a single forward pass. This eliminates the need for iterative approximations [11],
significantly improving computational efficiency. Moreover, most DNN-based meth-
ods produce near-optimal mask designs, requiring only minimal refinement through
conventional mask optimization techniques to achieve final results.

Recent studies have integrated various DNN frameworks with traditional OPC and
ILT to further enhance computational efficiency. Yang et al. [5] introduced PGAN-
OPC, a mask optimization framework utilizing Conditional Generative Adversarial
Networks (CGANs). To address the inherent instability in GAN training, an ILT-
guided pre-training phase was incorporated, ensuring stable convergence and reducing
variability in image generation. Jiang et al. [8] proposed Neural-ILT, an end-to-end
framework built upon the U-Net architecture [12] with additional optimization lay-
ers, which enhances computational efficiency through CUDA acceleration and reduces
mask complexity. This model integrates an ILT correction layer and a complexity
refinement layer to optimize mask patterns. Similar to PGAN-OPC, Neural-ILT em-
ploys a pre-training phase to establish a robust foundation for target-mask mapping.
Compared to both PGAN-OPC and conventional ILT methods, Neural-ILT demon-
strates superior mask printability while significantly reducing mask complexity. Chen
et al. [7] introduced DevelSet, an improved level set-based ILT framework that com-
bines CUDA and DNN acceleration to enhance printability and enable rapid iterative
convergence. This approach refines the level set-based ILT algorithm by incorporating
a curvature term to minimize mask complexity while leveraging GPU capabilities to
overcome computational bottlenecks.

We observe that binary representation naturally aligns with the characteristics of
both mask patterns and design targets, as each pixel’s information can be encoded
using a single bit. Leveraging this inherent property, we propose Binary Neural Net-
works (BNNs) as a framework for OPC tasks. BNNs utilize 1-bit values for activations
and weights across all hidden layers, a process termed binarization, which drastically
reduces computational resource requirements. As highlighted in [13], BNNs decrease
memory usage by a factor of 32 compared to conventional 32-bit DNNs and signifi-
cantly lower memory access demands, yielding substantial computational savings. This
advantage aligns particularly well with mask design, where the binary nature of pat-
terns is inherently compatible with the binarization process. Furthermore, reducing
data storage and computational burdens facilitates the training of models capable of
processing high-resolution images. A prior study [6] has shown that increasing image
resolution enhances mask optimization outcomes. Consequently, the use of 1-bit data
for representing mask patterns is not only efficient but also sufficient, making bina-
rization an ideal approach for optimizing OPC tasks.

Building upon prior advancements in printability and computational efficiency,
this study seeks to further enhance performance by leveraging BNNs. We propose a
novel mask optimization framework, termed B-OPC, which integrates a U-Net-inspired
supervised learning approach with BNNs to achieve substantial reductions in compu-
tational cost. This innovation aims to streamline both model training and the mask
optimization process. While BNNs excel in reducing memory requirements and com-
putational overhead, a key challenge is preserving mask quality and minimizing the
information loss introduced by binarization. To address this, B-OPC employs a U-Net
architecture to ensure robust initialization and stable convergence. U-Net’s skip con-
nections are particularly critical, as they enable the preservation of low-level features,
such as edges, from earlier layers, bypassing the bottleneck and contributing directly
to the decoding process [12]. This mechanism ensures that fine-grained details are pre-
served, even within a binarized framework. Although advanced U-Net variants, such
as UNet++ [14] and UNet3+ [15], have been developed, their integration with BNNs
remains unexplored in the current literature. Given the sensitivity of BNN training to
hyperparameter adjustments, the U-Net architecture offers a conservative yet effective
choice for achieving the desired balance between efficiency and performance.

2 Preliminaries
2.1 Lithography Simulation
The B-OPC framework generates optimized mask patterns, and to evaluate their ef-
fectiveness, it is essential to simulate the lithography process to produce corresponding
resist images. For this purpose, we utilize a lithography simulation model based on
Hopkins’ diffraction theory [16], which provides an approximation of the printed re-
sults. This model computes the aerial image I, which represents the light intensity
projected onto the wafer. The light intensity is derived by convolving the mask pattern
M with the optical kernels h_k. The computation is expressed as follows:

I(x, y) = Σ_{k=1}^{N} ω_k |M(x, y) ⊗ h_k(x, y)|²    (1)

where h_k denotes the k-th optical kernel, and ω_k represents its corresponding
weight. To balance computational efficiency and accuracy, we adopt an approximation
of order N = 24, as suggested in [17]. This simplification enables the effective
evaluation of mask quality while maintaining manageable computational costs.
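
To make the simulation concrete, the following is a minimal NumPy sketch of Eq. (1). It is not the simulator used in the paper: the optical kernels and weights are assumed to be given (e.g., precomputed kernels of the Hopkins model, zero-padded to the mask size), and the convolution is carried out in the Fourier domain.

    import numpy as np

    def aerial_image(mask, kernels, weights):
        # Approximates the aerial image I = sum_k w_k |M (*) h_k|^2 of Eq. (1).
        # mask:    (H, W) binary array, the mask pattern M
        # kernels: (N, H, W) complex array of optical kernels h_k (assumed given)
        # weights: (N,) array of kernel weights w_k
        mask_f = np.fft.fft2(mask)
        intensity = np.zeros(mask.shape)
        for h_k, w_k in zip(kernels, weights):
            field = np.fft.ifft2(mask_f * np.fft.fft2(h_k))   # M (*) h_k via FFT
            intensity += w_k * np.abs(field) ** 2             # weighted coherent term
        return intensity

A resist image can then be obtained by thresholding the aerial image at the resist threshold, e.g. Z = (I > threshold).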

2.2 Metrics on Lithography Results


The primary objective of the B-OPC model is to refine the mask pattern such that
the resulting lithographic output, specifically the printed pattern on the wafer, closely
aligns with the design target. Additionally, it is crucial to ensure that the printed
pattern remains robust against variations in process conditions. To evaluate the per-
formance of the optimized masks, we employ two key metrics: the L2 loss and the
Process Variation Band (PVB). These metrics provide a comprehensive evaluation of
the optimized masks, assessing both their accuracy in reproducing the target design
and their robustness to process variations.

The L2 loss quantifies the fidelity of the nominal printed pattern to the target
design. It is defined as the squared Euclidean distance between the nominal printed
image Znom and the target image T:

L2(Znom, T) = ∥Znom − T∥₂²,    (2)

where T represents the target image, and Znom denotes the printed image under
nominal process conditions.
The PVB measures the extent of variation in the printed pattern across different
process conditions, ±2% dose error in this case, reflecting the robustness of the de-
sign under real-world manufacturing variations. The PVB is defined as the squared
Euclidean distance between the printed images under maximum Zmax and minimum
Zmin process conditions:

PVB(Zmax, Zmin) = ∥Zmax − Zmin∥₂²,    (3)

where Zmax and Zmin represent the printed images under maximum and minimum
process conditions, respectively.
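
Both metrics are direct pixel-wise computations. The sketch below transcribes Eqs. (2) and (3) in NumPy; for binary images the squared Euclidean distance reduces to a count of disagreeing pixels, which is reported in nm² after scaling by the pixel area.

    import numpy as np

    def l2_loss(z_nom, target):
        # Eq. (2): squared Euclidean distance between nominal print and target.
        diff = z_nom.astype(np.int64) - target.astype(np.int64)
        return int(np.sum(diff ** 2))

    def pvb(z_max, z_min):
        # Eq. (3): squared distance between the prints at maximum and minimum
        # process conditions (here, +/-2% dose error).
        diff = z_max.astype(np.int64) - z_min.astype(np.int64)
        return int(np.sum(diff ** 2))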

2.3 Binary Neural Networks


BNNs are a specialized class of neural networks characterized by weights constrained
to two discrete values, typically +1 and -1. This constraint simplifies computations to
XNOR operations and population counts (pop-counts), significantly improving infer-
ence efficiency in terms of both time and resource usage, as detailed in [18]. During
forward propagation in BNNs, activations and weights are binarized using a function
such as the Sign function:
x^b = Sign(x) = { +1 if x ≥ 0; −1 otherwise }    (4)

where x^b represents the binarized variable and x the real-valued variable. The
binarization process enables efficient matrix multiplication through XNOR operations
and pop-counts, demonstrating a substantial reduction in computational demands.
However, the binarization process introduces challenges such as gradient mismatch
and information loss during training [18].

One of the primary challenges in training BNNs lies in implementing gradient
descent, as the derivative of the binarization function (Sign function) is zero almost
everywhere. Consequently, BNNs cannot directly utilize traditional backward propa-
gation methods. To address this limitation, BNNs employ a straight-through estimator
(STE) [13]. The STE approximates the gradients by replacing the Sign function during
the backward pass with a differentiable surrogate, such as the clip function, defined
as:

clip(x, −1, 1) = { −1 if x < −1; x if −1 ≤ x ≤ 1; 1 if x > 1 }    (5)

This approach enables gradient propagation and facilitates effective training. Dur-
ing backward propagation, real-valued gradients are accumulated for weight updates.

Figure 1: B-OPC model structure

To ensure compliance with the binarization constraint, weights are clipped between −1
and 1, and binarized values are recalculated. This STE-enabled method allows BNNs
to achieve significant efficiency while maintaining stable training dynamics. Addi-
tionally, binarization acts as a form of regularization, aiding in better generalization
[13].
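
As an illustration of the mechanism described above, the following is a minimal TensorFlow sketch of an STE sign quantizer (the Larq library used in Section 3.1 ships an equivalent built-in ste_sign quantizer):

    import tensorflow as tf

    @tf.custom_gradient
    def ste_sign(x):
        # Forward pass: binarize with the Sign function of Eq. (4).
        binary = tf.where(x >= 0, tf.ones_like(x), -tf.ones_like(x))

        def grad(dy):
            # Backward pass: substitute the gradient of clip(x, -1, 1) from
            # Eq. (5), i.e. pass gradients through only where |x| <= 1.
            return dy * tf.cast(tf.abs(x) <= 1, dy.dtype)

        return binary, grad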

3 Model: B-OPC
The objective of our DNN approach to OPC is to employ a BNN to identify the opti-
mized mask Mop for a given layout target Zt, such that Mop closely approximates the
ground truth mask M. The dataset used in this study was sourced from the authors of
GAN-OPC [5]. It consists of 4,875 training instances synthesized based on design speci-
fications from existing 32nm M1 layout topologies, with each image sized at 256 × 256
pixels. Among these 4,875 design-target and ground-truth-mask pairs, 4,400 instances
were utilized for training and validation with a 9:1 split ratio, while 475 instances
were reserved for testing.

3.1 Model Details


The structure of the B-OPC model is illustrated in Figure 1. The model processes
a 256 × 256 × 1 binarized target image as input and outputs a binary mask of the
same dimensions. The encoder path comprises four downsampling blocks, each con-
taining a quantized 2D convolution layer (QuantConv2D), batch normalization, hard
tanh activation, and max-pooling (except for the last block). The number of filters
increases progressively (64, 128, 256, 512) while maintaining a consistent 6 × 6 kernel
size, allowing the network to capture increasingly complex features. The bottleneck
block serves as a transition between the encoder and decoder paths. It consists of a
single convolution layer with 512 filters, followed by batch normalization and a hard
tanh activation function, without any max-pooling layers. The decoder path mirrors
the encoder with four upsampling blocks, each featuring a quantized 2D transposed
convolution layer (QuantConv2DTranspose), batch normalization, hard tanh activa-
tion, and concatenation with the corresponding encoder layer via skip connections.

These skip connections concatenate features from the corresponding encoder layers,
preserving fine-grained details from the input image. The output layer consists of a
final quantized 2D transposed convolution with a single filter and hard tanh activation,
producing a 256 × 256 × 1 binary mask for OPC correction.

Binarization is implemented throughout the B-OPC model using the ste_sign quan-
tizer for both inputs and kernels in the convolutional layers, built using the Larq
framework [19]. This approach constrains weights and activations to binary values (−1
or +1), significantly reducing computational complexity and memory requirements.
To ensure the weights remain within the binary range during training, the weight_clip
constraint is applied to the kernels. Kernel weights are initialized to ones, providing
a uniform starting point for the binarization process and enhancing training stability.
The hard tanh function serves as the activation throughout the network, approximat-
ing the binary step function while allowing gradient flow during backward propagation.
Together, these binarization techniques and activation functions enable the network
to effectively learn OPC corrections within the limitations of a binary representation,
achieving a trade-off between computational efficiency and performance.
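
The description above maps onto a few reusable building blocks. A hedged Larq/Keras sketch follows; the quantizer, constraint, initializer, kernel size, and activation are taken from the text, while details such as the transposed-convolution stride and pooling size are assumptions:

    import tensorflow as tf
    import larq as lq

    quant_kwargs = dict(
        input_quantizer="ste_sign",        # binarize activations
        kernel_quantizer="ste_sign",       # binarize weights
        kernel_constraint="weight_clip",   # keep latent weights in [-1, 1]
        kernel_initializer="ones",         # uniform starting point
        padding="same",
        use_bias=False,
    )

    def hard_tanh(x):
        # Hard tanh: clips to [-1, 1], approximating the binary step
        # while allowing gradient flow.
        return tf.clip_by_value(x, -1.0, 1.0)

    def down_block(x, filters, pool=True):
        # Encoder block: QuantConv2D -> BatchNorm -> hard tanh [-> MaxPool].
        x = lq.layers.QuantConv2D(filters, kernel_size=6, **quant_kwargs)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.Activation(hard_tanh)(x)
        skip = x                                   # kept for the skip connection
        if pool:
            x = tf.keras.layers.MaxPooling2D(2)(x)
        return x, skip

    def up_block(x, skip, filters):
        # Decoder block: QuantConv2DTranspose -> BatchNorm -> hard tanh -> concat.
        x = lq.layers.QuantConv2DTranspose(filters, kernel_size=6, strides=2,
                                           **quant_kwargs)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.Activation(hard_tanh)(x)
        return tf.keras.layers.Concatenate()([x, skip])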

3.2 B-OPC Training


The model training was conducted on an NVIDIA L4 GPU provided by Google Co-
lab, leveraging the TensorFlow framework for implementation. The training process
utilized a batch size of 32, an initial learning rate of 0.001, and the Adam optimizer.
The model was trained for a total of 60 epochs. Validation was performed after each
epoch using a validation split of 10% from the training dataset, with data shuffling
applied during each iteration to promote generalization and prevent overfitting.

The primary objective of training in B-OPC is to minimize the discrepancy between
the predicted mask and the ground truth mask. The training workflow is illustrated
in Figure 2a. When the design target is passed into the B-OPC model, the output is
the predicted optimized mask. The mean squared error (MSE) loss function is em-
ployed for backward propagation, quantifying the difference between the ground truth
mask and the predicted mask. This training approach proves to be more computa-
tionally efficient than the ILT-guided pre-training method utilized in PGAN-OPC [5],
as shown in Figure 2b. In the PGAN-OPC framework [5], the training objective is to
minimize the loss calculated between the target image and the printed image, which
is derived by passing the predicted mask through a lithography simulator. This ap-
proach involves additional computational overhead due to the lithography simulation.
In contrast, B-OPC simplifies the process by directly using ground truth masks as the
ideal reference for optimized masks. This direct training strategy significantly reduces
computational resources and runtime while maintaining high-quality results.
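
The stated hyperparameters map directly onto the Keras training API. Below is a sketch under the assumption that build_bopc() is a hypothetical builder assembling the blocks of Section 3.1 into a Keras model, and that targets_train and masks_train hold the 4,400 design-target and ground-truth-mask pairs:

    model = build_bopc()   # hypothetical builder composing the blocks of Figure 1

    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss="mse",                  # predicted mask vs. ground truth mask
    )

    model.fit(
        targets_train, masks_train,  # design targets -> ground truth masks
        batch_size=32,
        epochs=60,
        validation_split=0.1,        # validation after each epoch
        shuffle=True,                # reshuffle each epoch to aid generalization
    )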

4 Experimental Results
To evaluate the performance of the B-OPC model, we conducted experiments using
two distinct test cases. The first test case consists of 475 instances, separated from the
4,875 synthesized instances derived from the dataset provided by GAN-OPC [5].

Figure 2: (a) B-OPC training; (b) PGAN-OPC ILT-guided pre-training

Evaluating B-OPC on this test set provides an initial assessment of the model’s training
outcomes. The second test case comprises ten ICCAD 2013 contest test designs, which
are industrial M1 designs based on the 32nm technology node. This dataset serves as a
widely recognized benchmark in mask optimization research and has been extensively
used in previous studies [17, 5, 8, 7]. Evaluating B-OPC on this benchmark dataset
allows us to determine whether the model can effectively handle realistic circuit design
patterns. Furthermore, it enables a direct comparison with other DNN-based mask
optimization methods, providing insights into B-OPC’s competitiveness and practical
applicability.

4.1 Evaluation on Synthesized Test Cases


In the first experiment, we evaluated the performance of B-OPC by comparing the
L2 loss and PVB between the lithographic results of ground truth masks and those
generated by B-OPC. As presented in Table 1, B-OPC outperformed the ground truth
masks, achieving a 29.7% reduction in L2 loss and a 3.1% reduction in PVB. These
results highlight B-OPC’s ability to produce masks with higher fidelity to the design
target while maintaining robustness under process variations.

Figure 3 further illustrates specific design patterns where B-OPC demonstrates
notable advantages. In the first and second rows of Figure 3, B-OPC achieves superior
results in dense design regions. While ground truth masks often produce connected
patterns in these areas, B-OPC generates clearer separations, improving pattern fi-
delity. Although some artifacts are still observed in B-OPC-generated masks, the
model consistently achieves better L2 loss compared to the lithographic results of
ground truth masks.

Table 1: Comparison of lithography simulation results for ground truth
masks and B-OPC generated masks

              Ground Truth Masks   B-OPC Generated Masks   Ratio
Average L2    69695                49046                   0.703
Average PVB   37302                36163                   0.969

Figure 3: (a)–(d) Four cases from the test dataset. The images from
left to right show the target image, the ground truth mask, the printed
image of the ground truth mask, the B-OPC mask, and the printed image
of the B-OPC mask

An intriguing observation concerns the ground truth masks: certain de-
sign patterns are missing from the printed images derived from the ground truth masks,
but are accurately reproduced with the B-OPC masks. This phenomenon is evident in
the third and fourth rows of Figure 3. Specifically, small rectangular patterns, which
are separated and isolated from larger pattern regions in the design, are inadequately
compensated in the ground truth masks. These masks fail to include compensation
features around such isolated structures. In contrast, the masks generated by B-OPC
successfully address these limitations. B-OPC incorporates compensation features for
the isolated rectangular structures, ensuring they are preserved in the printed images.
This improvement demonstrates B-OPC’s capability to handle complex and isolated
design features more effectively than the ground truth masks, enhancing the overall
fidelity of the lithographic results.

Figure 4: Evaluation flow of the ten ICCAD 2013 test designs

4.2 Evaluation on ICCAD 2013 Benchmark


A notable difference in the mask optimization process arises in the second test case,
which involves evaluation on the ten ICCAD 2013 test designs. BNNs, while compu-
tationally efficient, have been observed to exhibit performance degradation compared
to real-valued DNNs due to the constraints imposed by binarization [18]. To address
this limitation, an ILT-refinement process is applied to enhance the performance of
BNN-generated masks. This additional refinement step has also been employed in
previous studies, such as [5, 8].

This experiment comprises two processing stages, as illustrated in Figure 4. The
mask generation process, implemented using B-OPC and ILT-refinement, was con-
ducted on a MacBook equipped with an M2 chip and 8GB RAM. In the first stage,
the B-OPC model generates ten coarse masks, with an average runtime of 0.2 seconds
per mask. While these coarse masks improve the mask patterns to some extent, further
refinement is required to achieve better lithographic accuracy. To this end, the sec-
ond stage employs a pixel-based ILT-refinement engine, sourced from the LithoBench
GitHub repository [20]. The refinement process involves 30 iterations on masks with a
resolution of 512 × 512 pixels, followed by 5 additional iterations on 1024 × 1024 pixel
masks. The average runtime for the refinement procedure across the 10 test cases is
approximately 87 seconds. Consequently, the total turnaround time (TAT) for the
complete mask optimization workflow, as depicted in Figure 4, is approximately 87.2
seconds. This demonstrates the computational efficiency of the proposed approach,
balancing the speed of B-OPC with the precision of ILT-refinement.
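
For orientation, the sketch below shows the generic shape of such a pixel-based ILT refinement step. It is not the LithoBench engine [20] itself; loss_grad() is an assumed helper that differentiates the lithography model of Eq. (1), including the resist threshold, to give dL2/dmask.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def ilt_refine(coarse_mask, target, loss_grad, step=1.0, iters=30):
        # Relax the binary mask to continuous pixel values through a sigmoid
        # over a latent image, run gradient descent on the L2 loss between
        # the simulated print and the target, then re-binarize.
        theta = np.where(coarse_mask > 0, 4.0, -4.0)    # near-binary start
        for _ in range(iters):
            mask = sigmoid(theta)                       # continuous relaxation
            g = loss_grad(mask, target)                 # dL2/dmask from litho model
            theta -= step * g * mask * (1.0 - mask)     # chain rule through sigmoid
        return (sigmoid(theta) > 0.5).astype(np.uint8)  # re-binarized mask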

The quantitative results after the ILT-refinement process are presented in Table 2.
The results demonstrate a significant improvement in L2 loss, with an average re-
duction to 60.1% of the initial value. However, the PVB exhibits a slight increase,
averaging 6.9% higher than the pre-refinement values. This increase in PVB can be
attributed to the successful printing of patterns that were previously omitted in the
coarse masks. Consequently, the total printed area becomes larger, leading to a higher
PVB measurement. Figure 5 provides visual examples of this phenomenon through
several cases from the benchmark dataset.

Table 2: Comparison of results before and after finetuning

                       Before Finetuning          After Finetuning
ID        Area (nm²)   L2 (nm²)    PVB (nm²)      L2 (nm²)    PVB (nm²)
case1     215344       78316       46344          47701       49802
case2     169280       62679       41166          36578       44257
case3     213504       131344      53035          83668       74704
case4     82560        46622       30404          20648       33165
case5     281958       72532       64666          38993       61250
case6     286234       63561       53128          39761       56040
case7     229149       33285       49832          28759       50874
case8     128544       22401       23042          17796       24845
case9     317581       89210       71415          46334       68190
case10    102400       19064       18617          12014       19855
Average   -            61901.4     45164.9        37225.2     48298.2
Ratio     -            1           1              0.601       1.0693

4.3 Comparison with State-of-the-Art Methods


We compared the ILT-refinement results of B-OPC with those of PGAN-OPC, Neural-
ILT, and DevelSet, as presented in Table 3. The results demonstrate that B-OPC
performs competitively against these state-of-the-art methods. Specifically, compared
to Neural-ILT, B-OPC achieves a 3.4% reduction in L2 loss and a 4.5% reduction in
PVB. Compared to PGAN-OPC, it achieves a 7.3% reduction in L2 loss and a 3.4%
reduction in PVB. Against DevelSet, B-OPC reduces L2 loss by 3.2% and PVB by
0.8%. These improvements highlight the effectiveness of B-OPC in generating high-
quality masks with superior lithographic accuracy and robustness.

Interestingly, the lithography results from the first stage of B-OPC, as shown in
the third column of Figure 5, are comparable to the results produced by the U-Net
backbone in Neural-ILT, as depicted in Figure 4 of [8]. However, B-OPC offers signif-
icant advantages in terms of training simplicity and resource efficiency. For instance,
the U-Net structure in Neural-ILT requires pre-training for 20 epochs, which takes
approximately 19 hours on a single Titan V GPU [8]. In contrast, B-OPC completes
its training in 60 epochs within just 1 hour on an L4 GPU provided by Google Colab.

Moreover, compared to PGAN-OPC [5], which requires a generative adversarial
training process involving both a generator and a discriminator, B-OPC benefits from
a simpler supervised learning framework. This simplification significantly reduces
training complexity and runtime. Similarly, while DevelSet employs a hybrid level-set
approach with GPU acceleration [7], it exhibits higher resource demands, particularly
for larger and more complex design patterns. B-OPC’s use of BNNs and its straight-
forward U-Net structure result in a more efficient and scalable solution.

Figure 5: (a)–(d) Four cases from the benchmark. The images from
left to right show the target image, the coarse mask, the printed image
of the coarse mask, the finetuned mask, and the printed image of the
finetuned mask

These findings suggest that integrating Binary Neural Networks (BNNs) with a
U-Net structure in a supervised learning framework is a promising direction for DNN-
based OPC research. B-OPC’s ability to deliver competitive performance while sig-
nificantly reducing training time and resource requirements highlights its potential to
evolve into a more efficient and practical solution for mask optimization in semicon-
ductor manufacturing.

Table 3: Comparison of different methods on the ICCAD 2013 benchmark designs.
L2 and PVB are in nm².

                      PGAN-OPC [5]       Neural-ILT [8]     DevelSet [7]       B-OPC
ID       Area (nm²)   L2       PVB       L2       PVB       L2       PVB       L2       PVB
case1    215344       52570    56267     50795    63695     49142    59607     47701    49802
case2    169280       42253    50822     36969    60232     34489    52012     36578    44257
case3    213504       83663    94498     94447    85358     93498    76558     83668    74704
case4    82560        19965    28957     17420    32287     18682    29047     20648    33165
case5    281958       44733    59328     42337    65536     44256    58085     38993    61250
case6    286234       46062    52845     39601    59247     41730    53410     39761    56040
case7    229149       26438    47981     25424    50109     25797    46606     28759    50874
case8    128544       17690    23564     15588    25826     15460    24836     17796    24845
case9    317581       56125    65417     52304    68650     50834    64950     46334    68190
case10   102400       9990     19893     10153    22443     10140    21619     12014    19855
Average  -            39948.9  49957.2   38503.8  53338.3   38402.8  48673.0   37225.2  48298.2
Ratio    -            1.073    1.034     1.034    1.045     1.032    1.008     1.000    1.000

5 Conclusion
This research proposes B-OPC, a Binary Neural Network (BNN)-based mask optimiza-
tion framework that demonstrates exceptional computational efficiency during model
training, requiring significantly less time and resources compared to conventional meth-
ods. By leveraging the inherent binary nature of mask patterns and design targets,
B-OPC achieves substantial reductions in memory usage and computational overhead
while maintaining high performance in semiconductor lithography. The experimen-
tal results demonstrate that B-OPC effectively compensates for diffraction effects on
specific design patterns. By incorporating a lightweight ILT-refinement process, B-
OPC achieves performance comparable to state-of-the-art methods such as Neural-
ILT, PGAN-OPC, and DevelSet, while requiring significantly less training time and
lower hardware resources.

Future improvements to the B-OPC framework include expanding training datasets


to include more diverse and realistic design patterns, utilizing higher-resolution images
to enhance accuracy, and exploring advanced model architectures such as UNet++
and UNet3+. These enhancements are expected to further improve the model’s gen-
eralization capabilities and robustness, making it more applicable to industrial-scale
semiconductor manufacturing.

References
[1] J. Kuang, W.-K. Chow, and E. F. Y. Young, “A robust approach for process
variation aware mask optimization,” in 2015 Design, Automation Test in Europe
Conference Exhibition (DATE), pp. 1591–1594, 2015.
[2] A. Awad, A. Takahashi, S. Tanaka, and C. Kodama, “A fast process variation
and pattern fidelity aware mask optimization algorithm,” in 2014 IEEE/ACM In-
ternational Conference on Computer-Aided Design (ICCAD), pp. 238–245, 2014.
[3] A. Poonawala and P. Milanfar, “Mask design for optical microlithography—an
inverse imaging problem,” IEEE Transactions on Image Processing, vol. 16, no. 3,
pp. 774–788, 2007.
[4] D. S. Abrams and L. Pang, “Fast inverse lithography technology,” in Optical
Microlithography XIX (D. G. Flagello, ed.), vol. 6154, p. 61541J, International
Society for Optics and Photonics, SPIE, 2006.
[5] H. Yang, S. Li, Y. Ma, B. Yu, and E. F. Y. Young, “Gan-opc: Mask op-
timization with lithography-guided generative adversarial nets,” in 2018 55th
ACM/ESDA/IEEE Design Automation Conference (DAC), pp. 1–6, 2018.
[6] G. Chen, W. Chen, Y. Ma, H. Yang, and B. Yu, “Damo: Deep agile mask op-
timization for full chip scale,” in 2020 IEEE/ACM International Conference On
Computer Aided Design (ICCAD), pp. 1–9, 2020.
[7] G. Chen, Z. Yu, H. Liu, Y. Ma, and B. Yu, “Develset: Deep neural level set for
instant mask optimization,” in 2021 IEEE/ACM International Conference On
Computer Aided Design (ICCAD), pp. 1–9, 2021.
[8] B. Jiang, L. Liu, Y. Ma, H. Zhang, B. Yu, and E. F. Y. Young, “Neural-ilt: Migrat-
ing ilt to neural networks for mask printability and complexity co-optimization,”
in 2020 IEEE/ACM International Conference On Computer Aided Design (IC-
CAD), pp. 1–9, 2020.
[9] H. Yang and H. Ren, “Enabling scalable ai computational lithography with
physics-inspired models,” in 2023 28th Asia and South Pacific Design Automation
Conference (ASP-DAC), pp. 715–720, 2023.
[10] X. Liang, Y. Ouyang, H. Yang, B. Yu, and Y. Ma, “Rl-opc: Mask optimiza-
tion with deep reinforcement learning,” IEEE Transactions on Computer-Aided
Design of Integrated Circuits and Systems, vol. 43, no. 1, pp. 340–351, 2024.
[11] S. Zheng, H. Yang, B. Zhu, B. Yu, and M. Wong, “Lithobench: Benchmark-
ing ai computational lithography for semiconductor manufacturing,” in Advances
in Neural Information Processing Systems (A. Oh, T. Naumann, A. Globerson,
K. Saenko, M. Hardt, and S. Levine, eds.), vol. 36, pp. 30243–30254, Curran
Associates, Inc., 2023.
[12] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for
biomedical image segmentation,” in Medical Image Computing and Computer-
Assisted Intervention – MICCAI 2015 (N. Navab, J. Hornegger, W. M. Wells,
and A. F. Frangi, eds.), (Cham), pp. 234–241, Springer International Publishing,
2015.
[13] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio, “Binarized
neural networks: Training deep neural networks with weights and activations
constrained to +1 or -1,” arXiv preprint arXiv:1602.02830, 2016.
[14] Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, “Unet++: A
nested u-net architecture for medical image segmentation,” in Deep Learning in
Medical Image Analysis and Multimodal Learning for Clinical Decision Support
(D. Stoyanov, Z. Taylor, G. Carneiro, T. Syeda-Mahmood, A. Martel, L. Maier-
Hein, J. M. R. Tavares, A. Bradley, J. P. Papa, V. Belagiannis, J. C. Nascimento,
Z. Lu, S. Conjeti, M. Moradi, H. Greenspan, and A. Madabhushi, eds.), (Cham),
pp. 3–11, Springer International Publishing, 2018.
[15] H. Huang, L. Lin, R. Tong, H. Hu, Q. Zhang, Y. Iwamoto, X. Han, Y.-W. Chen,
and J. Wu, “Unet 3+: A full-scale connected unet for medical image segmenta-
tion,” in 2020 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP), 2020.
[16] H. H. Hopkins, “The concept of partial coherence in optics,” Proceedings of the
Royal Society of London A: Mathematical, Physical and Engineering Sciences,
vol. 208, no. 1093, pp. 263–277, 1951.
[17] J.-R. Gao, X. Xu, B. Yu, and D. Z. Pan, “Mosaic: Mask optimizing solution
with process window aware inverse correction,” in 2014 51st ACM/EDAC/IEEE
Design Automation Conference (DAC), pp. 1–6, 2014.
[18] H. Qin, R. Gong, X. Liu, X. Bai, J. Song, and N. Sebe, “Binary neural networks:
A survey,” CoRR, vol. abs/2004.03333, 2020.
[19] L. Geiger and the Plumerai Team, “Larq: An open-source library for training
binarized neural networks,” Journal of Open Source Software, vol. 5, no. 45,
p. 1746, 2020.
[20] J. Jiang, “Lithobench: Benchmarking ai computational lithography for semi-
conductor manufacturing.” https://github.com/shelljane/lithobench, 2023. Ac-
cessed: 2024/6/7.
