
Research Article / Vol. 57, No. 14 / 10 May 2018 / Applied Optics 3859

Deep-learning-generated holography
RYOICHI HORISAKI,¹,²,* RYOSUKE TAKAGI,¹ AND JUN TANIDA¹
¹Department of Information and Physical Sciences, Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
²JST, PRESTO, 4-1-8 Honcho, Kawaguchi-shi, Saitama 332-0012, Japan
*Corresponding author: [email protected]

Received 12 February 2018; revised 12 April 2018; accepted 12 April 2018; posted 13 April 2018 (Doc. ID 323038); published 8 May 2018

We present a method for computer-generated holography based on deep learning. The inverse process of light
propagation is regressed with a number of computationally generated speckle data sets. This method enables
noniterative calculation of computer-generated holograms (CGHs). The proposed method was experimentally
verified with a phase-only CGH. © 2018 Optical Society of America
OCIS codes: (090.1760) Computer holography; (090.1970) Diffractive optics; (110.1758) Computational imaging; (100.3190) Inverse
problems; (100.4996) Pattern recognition, neural networks.

https://doi.org/10.1364/AO.57.003859

1. INTRODUCTION

Computer-generated holography is a technique for calculating an interference pattern that generates an arbitrary optical field [1–5]. Spatial light modulators (SLMs) have realized rewritable computer-generated holograms (CGHs). Promising applications of such CGHs include holographic 3D television and dynamic beam shaping, in both of which optical intensity distributions in an output space are controlled by CGHs [6–9]. A longstanding and challenging issue in computer-generated holography based on SLMs is the incomplete control of light waves in conventional SLMs; that is to say, the SLMs modulate only the amplitude or phase of the light waves. Iterative optimization algorithms have been used for calculating CGHs, such as the Gerchberg–Saxton (GS) method, the optimal-rotation-angle method, and so on [10–14]. Several noniterative methods have also been proposed [15–18]. However, they have some issues with image quality, spatial resolution, or field of view due to speckle noise, the downsampling effect, and the zero-order and conjugate light.

Recently, machine learning has become a hot topic in the field of optics, and we have introduced machine learning techniques to optical sensing and control [19–23]. Several research projects based on deep learning (deep neural networks) have also been proposed for phase retrieval, superresolution, and ghost imaging [24–27]. A shallow neural network has been introduced for noniterative computer-generated holography [28]. However, in this method, a traditional iterative method is needed for generating the training data sets, and the computational cost is high.

In this paper, to solve the above issues, we present a noniterative method for calculating CGHs based on deep learning. The deep network generates an interferometric pattern for generating an arbitrary light field at a certain propagating distance. Our method uses a number of pairs of random input patterns and their propagating speckle-like intensity output patterns for training the network. We experimentally demonstrated our method with a phase-only CGH. The relationship between our proposed method and the learning-based phase retrieval methods [20,22–24] is analogous to the relationship between the GS method for computer-generated holography and the error-reduction method for phase retrieval [10,29]. Our method is a promising approach for high-speed parallel focusing, such as multispot stimulation in optogenetics and optical tweezers for biology [8,30–34].

2. METHOD

A diagram of the proposed method is shown in Fig. 1. In this paper, we assume a phase-only CGH implemented by the optical setup shown in Fig. 1. This forward process F(·) is expressed as

y = F(x),   (1)
  = |P_z{exp(ix)}|²,   (2)

where x ∈ R^(N²×1) is a phase pattern displayed on the SLM, y ∈ R^(N²×1) is an intensity pattern that appears on the image sensor located at a distance z from the SLM, and P_z(·) is the Fresnel propagation at distance z [4]. Here, N is the number of pixels along one spatial dimension.
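The forward model of Eq. (2) is straightforward to simulate. As a concrete illustration (the paper's own code was written in MATLAB and is not reproduced here), a minimal Python/NumPy sketch of transfer-function Fresnel propagation [4] might look as follows; the function names are ours:

```python
import numpy as np

def fresnel_propagate(field, z, wavelength, pitch):
    """Fresnel propagation of a sampled complex field over distance z,
    via the transfer-function approach of Ref. [4]. Illustrative only."""
    n = field.shape[0]                           # N pixels per side
    fx = np.fft.fftfreq(n, d=pitch)              # spatial frequencies, 1/m
    fxx, fyy = np.meshgrid(fx, fx)
    # Fresnel transfer function: exp(ikz) exp(-i*pi*lambda*z*(fx^2 + fy^2))
    h = np.exp(1j * 2 * np.pi * z / wavelength
               - 1j * np.pi * wavelength * z * (fxx**2 + fyy**2))
    return np.fft.ifft2(np.fft.fft2(field) * h)

def forward(phase, z, wavelength, pitch):
    """Eq. (2): y = |P_z{exp(ix)}|^2 for a phase-only SLM pattern x."""
    return np.abs(fresnel_propagate(np.exp(1j * phase), z, wavelength, pitch)) ** 2
```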
We use a deep convolutional residual network (ResNet) for calculating the inverse process of Eq. (1) [35]. Residual learning enables optimization of deep layers by preventing stagnation with skip connections. ResNet has been used for phase retrieval and has shown promising results [24]. The network architecture we use is shown in Fig. 2(a). The inverse process F⁻¹(·) of Eq. (1) is written as

x̂ = F⁻¹(ŷ).   (3)

The output of this inverse process and the network in Fig. 2(a) is a phase pattern x̂ ∈ R^(N²×1) displayed on the SLM, to reproduce a target intensity pattern ŷ ∈ R^(N²×1) on the image sensor, as shown in Fig. 1.



[Fig. 1. Diagram illustrating the proposed method. A collimated beam illuminates an SLM (phase mode, N pixels); an image sensor at distance z records the propagated intensity. Training: random phase input patterns and their computationally propagated speckle intensity output patterns. Operation: the trained network maps a target pattern to a calculated hologram, which yields the reproduced pattern.]

In the training process, the optical setup shown in Fig. 1 is simulated in a computer based on Eq. (2). A number of training pairs of random phase patterns x̌ ∈ R^(N²×1) on the SLM and their Fresnel propagating intensity patterns y̌ ∈ R^(N²×1) on the image sensor are calculated. The network is regressed to the inverse process of the optical propagation with the training pairs.
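A sketch of this data-generation step, reusing the forward() helper above and the experimental parameters of Section 3 (N = 64, 100,000 pairs, z = 13 cm, λ = 632.8 nm, p = 36 μm); again illustrative, not the authors' code:

```python
rng = np.random.default_rng(0)

N, PAIRS = 64, 100_000                        # values from Sec. 3
Z, WAVELENGTH, PITCH = 0.13, 632.8e-9, 36e-6  # 13 cm, 632.8 nm, 36 um

# Uniform random phase patterns and their propagated speckle intensities.
# (In practice this would be generated in batches: 100,000 64x64 arrays
# occupy several gigabytes.)
x_train = rng.uniform(-np.pi, np.pi, size=(PAIRS, N, N)).astype(np.float32)
y_train = np.stack([forward(x, Z, WAVELENGTH, PITCH)
                    for x in x_train]).astype(np.float32)
```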
The network is composed of multiscale ResNets, as shown in Fig. 2(a), where K is the number of filters at the convolutional layers. The pixel counts of both the holographic plane and the image sensor plane are assumed to be a power of 2 in this paper, but this is easily extendable to an arbitrary pixel count with zero padding. "D" is a block for downsampling, as shown in Fig. 2(b); "U" is a block for upsampling, as shown in Fig. 2(c); "R" is a block with residual convolutions, as shown in Fig. 2(d); and "S" is a convolutional block for a skip convolutional connection, as shown in Fig. 2(e). The definitions of the layers are as follows [36]: "BatchNorm" is a batch normalization layer [37]; "ReLU" is a rectified linear unit layer [38]; "Conv(S, L)" is a 2D convolution layer with filter size S and stride L; and "TConv(S, L)" is a transposed 2D convolution layer with filter size S and stride L. The loss function of the final regression layer is the mean squared error.

[Fig. 2. Structures of the (a) whole network, (b) D-block, (c) U-block, (d) R-block, and (e) S-block.]
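The paper specifies the blocks only at the level of Fig. 2. Below is a hedged PyTorch sketch of one plausible reading of the R-, D-, and U-blocks; the exact wiring (e.g., whether the parallel paths are summed as drawn) is our interpretation of the figure, not a definitive reconstruction:

```python
import torch.nn as nn

def bn_relu_conv(k_in, k_out, size, stride, pad):
    """BatchNorm -> ReLU -> Conv(S, L) stage, as defined in the text."""
    return nn.Sequential(nn.BatchNorm2d(k_in), nn.ReLU(),
                         nn.Conv2d(k_in, k_out, size, stride, pad))

class RBlock(nn.Module):
    """R-block, cf. Fig. 2(d): stacked residual sub-blocks, each
    (BatchNorm-ReLU-Conv(3,1)) x 2 with an additive skip."""
    def __init__(self, k, subblocks=2):
        super().__init__()
        self.subs = nn.ModuleList(
            nn.Sequential(bn_relu_conv(k, k, 3, 1, 1),
                          bn_relu_conv(k, k, 3, 1, 1))
            for _ in range(subblocks))
    def forward(self, x):
        for sub in self.subs:
            x = x + sub(x)          # skip connection
        return x

class DBlock(nn.Module):
    """D-block, cf. Fig. 2(b): halves the resolution with strided
    Conv(3,2) paths; we read the figure as two summed parallel paths."""
    def __init__(self, k_in, k_out):
        super().__init__()
        self.a = bn_relu_conv(k_in, k_out, 3, 2, 1)
        self.b = bn_relu_conv(k_in, k_out, 3, 2, 1)
    def forward(self, x):
        return self.a(x) + self.b(x)

class UBlock(nn.Module):
    """U-block, cf. Fig. 2(c): doubles the resolution with TConv(2,2)."""
    def __init__(self, k_in, k_out):
        super().__init__()
        self.a = nn.Sequential(nn.BatchNorm2d(k_in), nn.ReLU(),
                               nn.ConvTranspose2d(k_in, k_out, 2, stride=2))
        self.b = nn.Sequential(nn.BatchNorm2d(k_in), nn.ReLU(),
                               nn.ConvTranspose2d(k_in, k_out, 2, stride=2))
    def forward(self, x):
        return self.a(x) + self.b(x)
```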
3. EXPERIMENTAL DEMONSTRATION

The proposed method was demonstrated experimentally. The pixel counts of the holograms and target patterns were 64² (= N²), and the number of filters at the convolutional layers was 32 (= K). The training data sets were composed of uniform random phase patterns x̌ and their Fresnel propagating patterns y̌ at the distance z. The number of training speckle pairs was 100,000. A learning algorithm called "Adam" was used for optimizing the network with an initial learning rate of 0.001, a mini-batch size of 50, and a maximum number of epochs of 100 [39]. Those parameters were chosen experimentally. The code was implemented with MATLAB.
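For readers who want a runnable analogue of this configuration, the stated hyperparameters map onto a short PyTorch loop like the following. HologramNet is a hypothetical stand-in for the full multiscale network of Fig. 2(a), built from the blocks sketched earlier; everything else follows the quoted settings:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# x_train / y_train come from the generation sketch above; HologramNet is
# a stand-in whose exact wiring is not specified here.
ds = TensorDataset(torch.as_tensor(y_train).unsqueeze(1),   # input: speckle y
                   torch.as_tensor(x_train).unsqueeze(1))   # label: phase x
loader = DataLoader(ds, batch_size=50, shuffle=True)        # mini-batch of 50

net = HologramNet(k=32)                              # K = 32 filters
opt = torch.optim.Adam(net.parameters(), lr=1e-3)    # Adam, initial rate 0.001 [39]
mse = nn.MSELoss()                                   # mean-squared-error loss

for epoch in range(100):                             # up to 100 epochs
    for y_b, x_b in loader:
        opt.zero_grad()
        mse(net(y_b), x_b).backward()
        opt.step()

# Inference (cf. Fig. 3): a delta-function target yields a focusing hologram
# in a single, noniterative forward pass.
target = torch.zeros(1, 1, 64, 64)
target[0, 0, 32, 32] = 1.0
x_hat = net(target)                                  # phase pattern for the SLM
```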
A collimated beam from a laser (1103P manufactured by Edmund Optics; wavelength λ: 632.8 nm) was used for illuminating the transmissive SLM (LC 2012 manufactured by Holoeye; pixel pitch p: 36 μm; pixel count: 768 × 1024) operating in the phase mode. The image sensor (PL-B953 manufactured by PixeLink; pixel pitch: 4.65 μm; pixel count: 768 × 1024) was located 13 cm (= z) from the SLM. These optical elements were arranged in an on-axis configuration, as in the optical setup shown in Fig. 1.
In the first demonstration, single-spot focusing with the proposed method was experimentally performed. The results are shown in Fig. 3. The hologram x̂ calculated by providing a delta function to the trained network is shown in Fig. 3(a). The single-spot pattern ŷ in Fig. 3(b) was optically reproduced from the hologram. A line profile of the spot is shown in Fig. 3(c). The FWHM of the profile was 36.0 μm, and the theoretical diffraction limit was 35.7 μm (= zλ/(pN)) [4]. Therefore, the hologram based on our method approximately achieved the diffraction limit. The peak-to-background ratio was 17.9, which is calculated as s/b, where s is the spot intensity value and b is the average intensity value in the non-spot area [40].
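The quoted numbers are easy to verify. Below is a quick check of the diffraction limit with the stated parameters, plus a sketch of the peak-to-background metric of Ref. [40]; the mask argument is our own convention, not from the paper:

```python
z, lam, p, N = 0.13, 632.8e-9, 36e-6, 64
print(z * lam / (p * N) * 1e6)   # -> ~35.7 (um), the quoted limit

def peak_to_background(img, spot_mask):
    """s/b as in Ref. [40]: spot peak intensity over the mean intensity
    in the non-spot area. 'spot_mask' is a boolean array marking the spot."""
    return img[spot_mask].max() / img[~spot_mask].mean()
```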

[Fig. 3. Experimental data of single-spot focusing. (a) Hologram. (b) Its reproduced pattern. (c) Line profile of the spot (normalized intensity, a.u.). The hologram (phase) pattern is normalized in the interval [−π, π].]

[Fig. 4. Experimental data of image reproduction. (a) Target intensity patterns. (b) Holograms based on the proposed method and (c) their reproduced patterns. (d) Holograms based on the GS method and (e) their reproduced patterns. Hologram (phase) patterns are normalized in the interval [−π, π]. The reproduced patterns were normalized with the same light intensity value.]

In the second experiment, image reproduction was demonstrated based on our method, as shown in Fig. 4. The three target intensity patterns ŷ shown in Fig. 4(a) were composed of a two-by-two array of digits, which were randomly selected from the handwritten digit data set of MATLAB. The holograms x̂ in Fig. 4(b) were calculated by the trained network with the target patterns. Their optically reproduced results are shown in Fig. 4(c). The zero-order light was not observed. A dot-array-like artifact in Fig. 4(c) may have been caused by the Talbot effect of the wiring grid used for pixel-wise control of the liquid crystal on the SLM and is not speckle noise [4]. It might be possible to alleviate this with an SLM having a larger fill factor, such as a reflective SLM. The root mean squared error (RMSE) between the target intensity patterns in Fig. 4(a) and the reproduced results with the proposed method in Fig. 4(c) was 0.16. The holograms calculated based on the GS method are shown in Fig. 4(d). The GS method is a representative iterative algorithm for calculating CGHs, and the number of iterations in the experiment was set to 100 [10].
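For reference, a minimal sketch of the GS baseline in this Fresnel, phase-only geometry, reusing fresnel_propagate() from the earlier sketch; this is our implementation under those assumptions, not the authors' code:

```python
import numpy as np

def gerchberg_saxton(target_intensity, z, wavelength, pitch, iterations=100):
    """Minimal Gerchberg-Saxton baseline [10] for a phase-only hologram."""
    amp = np.sqrt(target_intensity)
    phase = np.random.uniform(-np.pi, np.pi, target_intensity.shape)
    for _ in range(iterations):                 # 100 iterations in the paper
        # Forward: hologram plane -> image plane
        field = fresnel_propagate(np.exp(1j * phase), z, wavelength, pitch)
        # Impose the target amplitude, keep the propagated phase
        field = amp * np.exp(1j * np.angle(field))
        # Backward: image plane -> hologram plane, re-apply phase-only constraint
        phase = np.angle(fresnel_propagate(field, -z, wavelength, pitch))
    return phase
```

The RMSE figures quoted in this section can be computed as np.sqrt(np.mean((y_target - y_reproduced)**2)) on intensity patterns normalized to the same total light intensity.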
The optically reproduced results are shown in Fig. 4(e). The RMSE between the target patterns in Fig. 4(a) and the reproduced results with the GS method in Fig. 4(e) was 0.17. Therefore, the reproduced results of our method were comparable with those of the conventional method.

Computational times for calculating a single hologram based on our method and the GS method were 26 and 94 ms, respectively, when using a computer with an Intel Core i7 processor with a clock rate of 3.1 GHz and a memory size of 16 GB, without any parallelization or a graphics processing unit. Therefore, our method calculates holograms with a shorter computational time compared with the conventional iterative method. The training time in our method was 40 h.

The relationship between the number of training pairs and the focusing performance at different hologram sizes was numerically investigated by simulating the first experiment, as shown in Fig. 5. The number of training pairs was set to 1000, 10,000, and 100,000. The pixel counts (N²) of the holograms and target patterns were set to 16², 32², and 64². The other parameters and the procedures for training the networks and calculating the holograms were the same as those in the first experimental demonstration.

[Fig. 5. Relationship between the number of training pairs and the RMSE in a focusing simulation. Axes: RMSE of spot patterns vs. number of training pairs (1000 to 100,000), for N = 16, 32, and 64.]

The reproduced focusing patterns were computationally calculated with Fresnel propagation. The focusing performance was evaluated with the RMSEs between the target and the numerically reproduced focusing patterns, as shown in Fig. 5. This result shows that a larger hologram and a larger number of training data sets provide better performance.

4. CONCLUSION

In this paper, we proposed a noniterative method for calculating CGHs based on a convolutional deep neural network. The network was regressed to the inverse process of the optical propagation with a number of speckle pairs. We experimentally demonstrated a phase-only CGH based on our method. The results were compared with those obtained with the GS method. The demonstrations showed reasonable image quality of the reproduced intensity pattern and a shorter computational time for the proposed method compared with the conventional one.

We used the network shown in Fig. 2(a) for calculating the CGH in this paper. Deep neural networks have many options and tuning parameters [36]. Further investigation of the network architecture is needed for improving the performance. In this paper, we demonstrated control of 2D intensity with a phase-only CGH using visible light. Our method is readily extendable to various wave fields, e.g., x-rays and acoustic waves, and higher-dimensional pattern shaping, such as 3D spatial focusing, multispectral (color) holography, and temporal pulse shaping. It is also applicable to simultaneous holographic control of the complex amplitude (both amplitude and phase) and amplitude-only/complex-amplitude CGHs. Therefore, the proposed technique is promising in various applications, including biomedical sensing/control, industrial engineering, and entertainment.

Funding. Japan Society for the Promotion of Science (JSPS) (JP17H02799, JP17K00233); Precursory Research for Embryonic Science and Technology (PRESTO) (JPMJPR17PB).

REFERENCES

1. B. R. Brown and A. W. Lohmann, "Complex spatial filtering with binary masks," Appl. Opt. 5, 967–969 (1966).
2. W. H. Lee, "Sampled Fourier transform hologram generated by computer," Appl. Opt. 9, 639–643 (1970).
3. D. Leseberg and C. Frère, "Computer-generated holograms of 3-D objects composed of tilted planar segments," Appl. Opt. 27, 3020–3024 (1988).
4. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).
5. G. Nehmetallah and P. P. Banerjee, "Applications of digital and analog holography in three-dimensional imaging," Adv. Opt. Photon. 4, 472–553 (2012).
6. C. Slinger, C. Cameron, and M. Stanley, "Computer-generated holography as a generic display technology," Computer 38, 46–53 (2005).
7. J. Geng, "Three-dimensional display technologies," Adv. Opt. Photon. 5, 456–535 (2013).
8. K. Dholakia and T. Čižmár, "Shaping the future of manipulation," Nat. Photonics 5, 335–342 (2011).
9. J. A. Rodrigo and T. Alieva, "Freestyle 3D laser traps: tools for studying light-driven particle dynamics and beyond," Optica 2, 812–815 (2015).
10. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of the phase from image and diffraction plane pictures," Optik 35, 237–246 (1972).
11. J. Bengtsson, "Kinoform design with an optimal-rotation-angle method," Appl. Opt. 33, 6879–6884 (1994).
12. N. Yoshikawa, M. Itoh, and T. Yatagai, "Quantized phase optimization of two-dimensional Fourier kinoforms by a genetic algorithm," Opt. Lett. 20, 752–754 (1995).
13. T. Dresel, M. Beyerlein, and J. Schwider, "Design of computer-generated beam-shaping holograms by iterative finite-element mesh adaption," Appl. Opt. 35, 6865–6874 (1996).
14. T. G. Jabbour and S. M. Kuebler, "Vectorial beam shaping," Opt. Express 16, 7203–7213 (2008).
15. A. W. Lohmann and D. P. Paris, "Binary Fraunhofer holograms, generated by computer," Appl. Opt. 6, 1739–1748 (1967).
16. P. W. M. Tsang and T. C. Poon, "Novel method for converting digital Fresnel hologram to phase-only hologram based on bidirectional error diffusion," Opt. Express 21, 23680–23686 (2013).
17. P. W. M. Tsang, Y.-T. Chow, and T. C. Poon, "Generation of phase-only Fresnel hologram based on down-sampling," Opt. Express 22, 25208–25214 (2014).
18. T. Shimobaba and T. Ito, "Random phase-free computer-generated hologram," Opt. Express 23, 9549–9554 (2015).
19. T. Ando, R. Horisaki, and J. Tanida, "Speckle-learning-based object recognition through scattering media," Opt. Express 23, 33902–33910 (2015).
20. R. Horisaki, R. Takagi, and J. Tanida, "Learning-based imaging through scattering media," Opt. Express 24, 13738–13743 (2016).
21. R. Takagi, R. Horisaki, and J. Tanida, "Object recognition through a multi-mode fiber," Opt. Rev. 24, 117–120 (2017).
22. R. Horisaki, R. Takagi, and J. Tanida, "Learning-based focusing through scattering media," Appl. Opt. 56, 4358–4362 (2017).
23. R. Horisaki, R. Takagi, and J. Tanida, "Learning-based single-shot superresolution in diffractive imaging," Appl. Opt. 56, 8896–8901 (2017).
24. A. Sinha, J. Lee, S. Li, and G. Barbastathis, "Lensless computational imaging through deep learning," Optica 4, 1117–1125 (2017).
25. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, "Deep learning microscopy," Optica 4, 1437–1443 (2017).
26. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, "Deep-learning-based ghost imaging," Sci. Rep. 7, 17865 (2017).
27. Y. Jo, S. Park, J. Jung, J. Yoon, H. Joo, M.-H. Kim, S.-J. Kang, M. C. Choi, S. Y. Lee, and Y. Park, "Holographic deep learning for rapid optical screening of anthrax spores," Sci. Adv. 3, e1700606 (2017).
28. S. Yamauchi, Y.-W. Chen, and Z. Nakao, "Optimization of computer-generated holograms by an artificial neural network," in Proceedings of Second International Conference on Knowledge-Based Intelligent Electronic Systems (KES'98) (Cat. No. 98EX111) (IEEE, 1998), Vol. 3, pp. 220–223.

29. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21, 2758–2769 (1982).
30. A. M. Packer, L. E. Russell, H. W. P. Dalgleish, and M. Häusser, "Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo," Nat. Methods 12, 140–146 (2014).
31. O. Hernandez, E. Papagiakoumou, D. Tanese, K. Fidelin, C. Wyart, and V. Emiliani, "Three-dimensional spatiotemporal focusing of holographic patterns," Nat. Commun. 7, 11928 (2016).
32. N. C. Pégard, A. R. Mardinly, I. A. Oldenburg, S. Sridharan, L. Waller, and H. Adesnik, "Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT)," Nat. Commun. 8, 1228 (2017).
33. D. G. Grier, "A revolution in optical manipulation," Nature 424, 810–816 (2003).
34. Y. Pang, H. Song, J. H. Kim, X. Hou, and W. Cheng, "Optical trapping of individual human immunodeficiency viruses in culture fluid reveals heterogeneity with single-molecule resolution," Nat. Nanotechnol. 9, 624–630 (2014).
35. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2016), pp. 770–778.
36. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT, 2016).
37. S. Ioffe and C. Szegedy, "Batch normalization: accelerating deep network training by reducing internal covariate shift," in Proceedings of the 32nd International Conference on Machine Learning (ICML'15) (JMLR, 2015), Vol. 37, pp. 448–456.
38. V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proceedings of the 27th International Conference on Machine Learning (ICML'10) (Omnipress, 2010), pp. 807–814.
39. D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," in International Conference on Learning Representations (ICLR) (2015).
40. R. Horstmeyer, H. Ruan, and C. Yang, "Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue," Nat. Photonics 9, 563–571 (2015).
