Deep-learning-generated holography
RYOICHI HORISAKI,1,2,* RYOSUKE TAKAGI,1 AND JUN TANIDA1
1Department of Information and Physical Sciences, Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan
2JST, PRESTO, 4-1-8 Honcho, Kawaguchi-shi, Saitama 332-0012, Japan
*Corresponding author: [email protected]
Received 12 February 2018; revised 12 April 2018; accepted 12 April 2018; posted 13 April 2018 (Doc. ID 323038); published 8 May 2018
We present a method for computer-generated holography based on deep learning. The inverse process of light propagation is learned by regression on a large number of computationally generated speckle data sets. This method enables noniterative calculation of computer-generated holograms (CGHs). The proposed method was experimentally verified with a phase-only CGH. © 2018 Optical Society of America
OCIS codes: (090.1760) Computer holography; (090.1970) Diffractive optics; (110.1758) Computational imaging; (100.3190) Inverse
problems; (100.4996) Pattern recognition, neural networks.
https://doi.org/10.1364/AO.57.003859
… and has shown promising results [24]. The network architecture we use is shown in Fig. 2(a). The inverse process $F^{-1}(\cdot)$ of Eq. (1) is written as

$\hat{x} = F^{-1}(\hat{y})$.   (3)

The output of this inverse process, and of the network in Fig. 2(a), is a phase pattern $\hat{x} \in \mathbb{R}^{N^2 \times 1}$ displayed on the SLM to reproduce a target intensity pattern $\hat{y} \in \mathbb{R}^{N^2 \times 1}$ on the image sensor, as shown in Fig. 1.

Fig. 1. Diagram illustrating the proposed method. [Schematic: a collimated beam illuminates an SLM with $N^2$ pixels operating in the phase mode, and the field propagates over a distance $z$ to an image sensor. Training: random phase input patterns and their computationally propagated speckle intensity output patterns. Operation: the network maps a target pattern to a calculated hologram, which yields the reproduced pattern.]

In the training process, the optical setup shown in Fig. 1 is simulated in a computer based on Eq. (2). A number of training pairs of random phase patterns $\check{x} \in \mathbb{R}^{N^2 \times 1}$ on the SLM and their Fresnel-propagated intensity patterns $\check{y} \in \mathbb{R}^{N^2 \times 1}$ on the image sensor are calculated. The network is regressed to the inverse process of the optical propagation with these training pairs.
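As a concrete illustration of this training-data synthesis, here is a minimal NumPy sketch (not the authors' MATLAB code). It uses the transfer-function form of Fresnel propagation as a stand-in for Eq. (2), which is not reproduced in this excerpt; the propagation distance z and the pair count are placeholders.

```python
import numpy as np

def fresnel_propagate(phase, wavelength, pitch, z):
    """Propagate a unit-amplitude, phase-only field over a distance z
    (transfer-function form of Fresnel diffraction; stand-in for Eq. (2))."""
    n = phase.shape[0]
    field = np.exp(1j * phase)                  # field leaving the phase-mode SLM
    fx = np.fft.fftfreq(n, d=pitch)             # spatial-frequency grid
    fxx, fyy = np.meshgrid(fx, fx)
    h = np.exp(-1j * np.pi * wavelength * z * (fxx ** 2 + fyy ** 2))
    sensor = np.fft.ifft2(np.fft.fft2(field) * h)
    return np.abs(sensor) ** 2                  # speckle intensity on the sensor

# Training pairs (x_check, y_check): uniform random phases and their speckles.
# The paper uses N = 64 and 100,000 pairs; a smaller count is generated here.
rng = np.random.default_rng(0)
n_pixels, n_pairs, z = 64, 1_000, 0.2           # z in meters is a placeholder
x_train = rng.uniform(0.0, 2.0 * np.pi,
                      size=(n_pairs, n_pixels, n_pixels)).astype(np.float32)
y_train = np.stack([fresnel_propagate(x, 632.8e-9, 36e-6, z) for x in x_train])
```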
The network is composed of multiscale ResNets, as shown in Fig. 2(a), where $K$ is the number of filters at the convolutional layers. The pixel counts of both the holographic plane and the image sensor plane are assumed to be a power of 2 in this paper, but this is easily extendable to an arbitrary pixel count with zero padding. "D" is a block for downsampling, as shown in Fig. 2(b); "U" is a block for upsampling, as shown in Fig. 2(c); "R" is a block with residual convolutions, as shown in Fig. 2(d); and "S" is a convolutional block for a skip convolutional connection, as shown in Fig. 2(e). The definitions of the layers are as follows [36]: "BatchNorm" is a batch normalization layer [37]; "ReLU" is a rectified linear unit layer [38]; "Conv S, L" is a 2D convolution layer with a filter size S and a stride L; and "TConv S, L" is a transposed 2D convolution layer with a filter size S and a stride L. The loss function of the final regression layer is the mean squared error.

Fig. 2. [Schematic: (a) the whole network between the holographic (SLM) plane and the image sensor plane, with D blocks reducing the feature maps from $N^2 \times K$ through $(N/2)^2 \times K$ and $(N/4)^2 \times K$ down to $1^2 \times K$, followed by U blocks restoring them to $N^2$, together with R and S blocks; (b)-(e) the D, U, R, and S blocks, built from Conv (3, 1), Conv (3, 2), TConv (2, 2), BatchNorm, and ReLU layers with additive connections.]
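Because the authors' implementation is in MATLAB and the exact block internals appear only in Fig. 2(b)-(e), the following PyTorch sketch is an approximation of the architecture described above, not the authors' network: the layer types and strides follow the definitions in the text, while the block assembly and the `scales` depth (the paper reaches a $1^2 \times K$ bottleneck) are assumptions.

```python
import torch
import torch.nn as nn

class Down(nn.Module):                      # "D": Conv 3, stride 2 halves the grid
    def __init__(self, c_in, k):
        super().__init__()
        self.f = nn.Sequential(nn.Conv2d(c_in, k, 3, stride=2, padding=1),
                               nn.BatchNorm2d(k), nn.ReLU())
    def forward(self, x):
        return self.f(x)

class Up(nn.Module):                        # "U": TConv 2, stride 2 doubles the grid
    def __init__(self, k):
        super().__init__()
        self.f = nn.Sequential(nn.ConvTranspose2d(k, k, 2, stride=2),
                               nn.BatchNorm2d(k), nn.ReLU())
    def forward(self, x):
        return self.f(x)

class Res(nn.Module):                       # "R": two Conv 3, stride 1 with shortcut
    def __init__(self, k):
        super().__init__()
        self.f = nn.Sequential(nn.Conv2d(k, k, 3, padding=1), nn.BatchNorm2d(k),
                               nn.ReLU(),
                               nn.Conv2d(k, k, 3, padding=1), nn.BatchNorm2d(k))
    def forward(self, x):
        return torch.relu(x + self.f(x))    # additive shortcut as in ResNet [35]

class Skip(nn.Module):                      # "S": Conv 3, stride 1 on the skip path
    def __init__(self, k):
        super().__init__()
        self.f = nn.Sequential(nn.Conv2d(k, k, 3, padding=1),
                               nn.BatchNorm2d(k), nn.ReLU())
    def forward(self, x):
        return self.f(x)

class MultiscaleResNet(nn.Module):
    """Rough stand-in for Fig. 2(a): a D-block encoder, an R bottleneck, and a
    U-block decoder with S-convolved skip connections at matching scales."""
    def __init__(self, k=32, scales=4):
        super().__init__()
        self.downs = nn.ModuleList([Down(1 if i == 0 else k, k)
                                    for i in range(scales)])
        self.skips = nn.ModuleList([Skip(k) for _ in range(scales)])
        self.res = Res(k)
        self.ups = nn.ModuleList([Up(k) for _ in range(scales)])
        self.out = nn.Conv2d(k, 1, 3, padding=1)  # final regression layer
    def forward(self, x):
        feats = []
        for d in self.downs:                # encoder: N -> N/2 -> ... -> bottleneck
            x = d(x)
            feats.append(x)
        x = self.res(x)
        for j, u in enumerate(self.ups):    # decoder with scale-matched skips
            x = u(x + self.skips[j](feats[-1 - j]))
        return self.out(x)
```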
3. EXPERIMENTAL DEMONSTRATION

The proposed method was demonstrated experimentally. The pixel counts of the holograms and target patterns were $64^2$ ($= N^2$), and the number of filters at the convolutional layers was 32 ($= K$). The training data sets were composed of uniform random phase patterns $\check{x}$ and their Fresnel-propagated patterns $\check{y}$ at the distance $z$. The number of training speckle pairs was 100,000. A learning algorithm called "Adam" was used for optimizing the network, with an initial learning rate of 0.001, a mini-batch size of 50, and a maximum number of epochs of 100 [39]. Those parameters were chosen experimentally. The code was implemented with MATLAB.
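A minimal training loop matching these reported hyperparameters might look as follows; this is a PyTorch sketch, not the authors' MATLAB code, and it reuses the hypothetical `MultiscaleResNet`, `x_train`, and `y_train` from the earlier sketches.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hyperparameters from the paper: Adam, initial lr 0.001, batch 50, 100 epochs.
net = MultiscaleResNet(k=32)                 # K = 32 filters, as reported
data = TensorDataset(torch.from_numpy(y_train).float().unsqueeze(1),
                     torch.from_numpy(x_train).unsqueeze(1))
loader = DataLoader(data, batch_size=50, shuffle=True)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()                 # mean-squared-error regression loss

for epoch in range(100):
    for y_batch, x_batch in loader:          # speckle in, phase out (inverse map)
        opt.zero_grad()
        loss = loss_fn(net(y_batch), x_batch)
        loss.backward()
        opt.step()
```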
A collimated beam from a laser [1103P manufactured by Edmund Optics; wavelength (λ): 632.8 nm] was used for illuminating the transmissive SLM [LC 2012 manufactured by Holoeye; pixel pitch (p): 36 μm; pixel count: 768 × 1024] operating in the phase mode. The image sensor (PL-B953 manufactured by PixeLink; pixel pitch: 4.65 μm; pixel count: …
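With the trained network, calculating a CGH is noniterative: a single forward pass maps a target intensity pattern to an SLM phase pattern, which can be checked numerically before display. A sketch, reusing the hypothetical helpers defined above:

```python
import numpy as np
import torch

# Target: a single focal spot at the center of the 64 x 64 sensor plane.
target = np.zeros((64, 64), dtype=np.float32)
target[32, 32] = 1.0

# One forward pass yields the phase-only hologram (no iterations, unlike GS).
net.eval()
with torch.no_grad():
    phase = net(torch.from_numpy(target)[None, None])[0, 0].numpy()

# Numerical check: re-propagate the predicted hologram to the sensor plane.
reproduced = fresnel_propagate(phase, 632.8e-9, 36e-6, z=0.2)
```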
… Fig. 3(c). The FWHM of the profile was 36.0 μm, and the theoretical …

Fig. 3. [(c) Line profile of the focused spot: normalized intensity (a.u.) versus position, with ticks at 1150, 1200, and 1250.]
… were computationally calculated with Fresnel propagation. The focusing performance was evaluated with the RMSEs between the target and the numerically reproduced focusing patterns, as shown in Fig. 5. This result shows that a larger hologram and a larger number of training data sets provide better performance.

Fig. 5. Relationship between the number of training pairs and the RMSE in a focusing simulation. [Plot: RMSE of spot patterns versus number of training pairs (1,000 to 100,000) for N = 16, 32, and 64.]
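The RMSE metric plotted in Fig. 5 can be computed as below; this is a sketch, and since the excerpt does not state the paper's normalization convention, the unit-peak normalization here is an assumption.

```python
import numpy as np

def rmse(target, reproduced):
    """Root-mean-square error between two intensity patterns."""
    return np.sqrt(np.mean((target - reproduced) ** 2))

# Normalize both patterns to unit peak before comparison (assumed convention).
score = rmse(target / target.max(), reproduced / reproduced.max())
```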
4. CONCLUSION

In this paper, we proposed a noniterative method for calculating CGHs based on a convolutional deep neural network. The network was regressed to the inverse process of the optical propagation with a number of speckle pairs. We experimentally demonstrated a phase-only CGH based on our method. The results were compared with those obtained with the GS method. The demonstrations showed reasonable image quality of the reproduced intensity pattern and a shorter computational time for the proposed method compared with the conventional one.

We used the network shown in Fig. 2(a) for calculating the CGH in this paper. Deep neural networks have many options and tuning parameters [36]. Further investigation of the network architecture is needed for improving the performance. In this paper, we demonstrated control of 2D intensity with a phase-only CGH using visible light. Our method is readily extendable to various wave fields, e.g., x-rays and acoustic waves, and to higher-dimensional pattern shaping, such as 3D spatial focusing, multispectral (color) holography, and temporal pulse shaping. It is also applicable to simultaneous holographic control of the complex amplitude (both amplitude and phase) and to amplitude-only/complex-amplitude CGHs. Therefore, the proposed technique is promising in various applications, including biomedical sensing/control, industrial engineering, and entertainment.

Funding. Japan Society for the Promotion of Science (JSPS) (JP17H02799, JP17K00233); Precursory Research for Embryonic Science and Technology (PRESTO) (JPMJPR17PB).
REFERENCES

1. B. R. Brown and A. W. Lohmann, "Complex spatial filtering with binary masks," Appl. Opt. 5, 967–969 (1966).
2. W. H. Lee, "Sampled Fourier transform hologram generated by computer," Appl. Opt. 9, 639–643 (1970).
3. D. Leseberg and C. Frère, "Computer-generated holograms of 3-D objects composed of tilted planar segments," Appl. Opt. 27, 3020–3024 (1988).
4. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).
5. G. Nehmetallah and P. P. Banerjee, "Applications of digital and analog holography in three-dimensional imaging," Adv. Opt. Photon. 4, 472–553 (2012).
6. C. Slinger, C. Cameron, and M. Stanley, "Computer-generated holography as a generic display technology," Computer 38, 46–53 (2005).
7. J. Geng, "Three-dimensional display technologies," Adv. Opt. Photon. 5, 456–535 (2013).
8. K. Dholakia and T. Čižmár, "Shaping the future of manipulation," Nat. Photonics 5, 335–342 (2011).
9. J. A. Rodrigo and T. Alieva, "Freestyle 3D laser traps: tools for studying light-driven particle dynamics and beyond," Optica 2, 812–815 (2015).
10. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of the phase from image and diffraction plane pictures," Optik 35, 237–246 (1972).
11. J. Bengtsson, "Kinoform design with an optimal-rotation-angle method," Appl. Opt. 33, 6879–6884 (1994).
12. N. Yoshikawa, M. Itoh, and T. Yatagai, "Quantized phase optimization of two-dimensional Fourier kinoforms by a genetic algorithm," Opt. Lett. 20, 752–754 (1995).
13. T. Dresel, M. Beyerlein, and J. Schwider, "Design of computer-generated beam-shaping holograms by iterative finite-element mesh adaption," Appl. Opt. 35, 6865–6874 (1996).
14. T. G. Jabbour and S. M. Kuebler, "Vectorial beam shaping," Opt. Express 16, 7203–7213 (2008).
15. A. W. Lohmann and D. P. Paris, "Binary Fraunhofer holograms, generated by computer," Appl. Opt. 6, 1739–1748 (1967).
16. P. W. M. Tsang and T. C. Poon, "Novel method for converting digital Fresnel hologram to phase-only hologram based on bidirectional error diffusion," Opt. Express 21, 23680–23686 (2013).
17. P. W. M. Tsang, Y.-T. Chow, and T. C. Poon, "Generation of phase-only Fresnel hologram based on down-sampling," Opt. Express 22, 25208–25214 (2014).
18. T. Shimobaba and T. Ito, "Random phase-free computer-generated hologram," Opt. Express 23, 9549–9554 (2015).
19. T. Ando, R. Horisaki, and J. Tanida, "Speckle-learning-based object recognition through scattering media," Opt. Express 23, 33902–33910 (2015).
20. R. Horisaki, R. Takagi, and J. Tanida, "Learning-based imaging through scattering media," Opt. Express 24, 13738–13743 (2016).
21. R. Takagi, R. Horisaki, and J. Tanida, "Object recognition through a multi-mode fiber," Opt. Rev. 24, 117–120 (2017).
22. R. Horisaki, R. Takagi, and J. Tanida, "Learning-based focusing through scattering media," Appl. Opt. 56, 4358–4362 (2017).
23. R. Horisaki, R. Takagi, and J. Tanida, "Learning-based single-shot superresolution in diffractive imaging," Appl. Opt. 56, 8896–8901 (2017).
24. A. Sinha, J. Lee, S. Li, and G. Barbastathis, "Lensless computational imaging through deep learning," Optica 4, 1117–1125 (2017).
25. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, "Deep learning microscopy," Optica 4, 1437–1443 (2017).
26. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, "Deep-learning-based ghost imaging," Sci. Rep. 7, 17865 (2017).
27. Y. Jo, S. Park, J. Jung, J. Yoon, H. Joo, M.-H. Kim, S.-J. Kang, M. C. Choi, S. Y. Lee, and Y. Park, "Holographic deep learning for rapid optical screening of anthrax spores," Sci. Adv. 3, e1700606 (2017).
28. S. Yamauchi, Y.-W. Chen, and Z. Nakao, "Optimization of computer-generated holograms by an artificial neural network," in Proceedings of Second International Conference on Knowledge-Based Intelligent Electronic Systems (KES'98) (Cat. No. 98EX111) (IEEE, 1998), Vol. 3, pp. 220–223.
29. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21, 2758–2769 (1982).
30. A. M. Packer, L. E. Russell, H. W. P. Dalgleish, and M. Häusser, "Simultaneous all-optical manipulation and recording of neural circuit activity with cellular resolution in vivo," Nat. Methods 12, 140–146 (2014).
31. O. Hernandez, E. Papagiakoumou, D. Tanese, K. Fidelin, C. Wyart, and V. Emiliani, "Three-dimensional spatiotemporal focusing of holographic patterns," Nat. Commun. 7, 11928 (2016).
32. N. C. Pégard, A. R. Mardinly, I. A. Oldenburg, S. Sridharan, L. Waller, and H. Adesnik, "Three-dimensional scanless holographic optogenetics with temporal focusing (3D-SHOT)," Nat. Commun. 8, 1228 (2017).
33. D. G. Grier, "A revolution in optical manipulation," Nature 424, 810–816 (2003).
34. Y. Pang, H. Song, J. H. Kim, X. Hou, and W. Cheng, "Optical trapping of individual human immunodeficiency viruses in culture fluid reveals heterogeneity with single-molecule resolution," Nat. Nanotechnol. 9, 624–630 (2014).
35. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2016), pp. 770–778.
36. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT, 2016).
37. S. Ioffe and C. Szegedy, "Batch normalization: accelerating deep network training by reducing internal covariate shift," in Proceedings of the 32nd International Conference on Machine Learning (ICML'15) (JMLR, 2015), Vol. 37, pp. 448–456.
38. V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proceedings of the 27th International Conference on Machine Learning (ICML'10) (Omnipress, 2010), pp. 807–814.
39. D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," in International Conference on Learning Representations (ICLR) (2015).
40. R. Horstmeyer, H. Ruan, and C. Yang, "Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue," Nat. Photonics 9, 563–571 (2015).