Lecture Notes in Computer Science 10799
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, Lancaster, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Friedemann Mattern
ETH Zurich, Zurich, Switzerland
John C. Mitchell
Stanford University, Stanford, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
C. Pandu Rangan
Indian Institute of Technology Madras, Chennai, India
Bernhard Steffen
TU Dortmund University, Dortmund, Germany
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbrücken, Germany
More information about this series at http://www.springer.com/series/7412
Shin’ichi Satoh (Ed.)
Image and
Video Technology
PSIVT 2017 International Workshops
Wuhan, China, November 20–24, 2017
Revised Selected Papers
Editor
Shin’ichi Satoh
National Institute of Informatics
Tokyo
Japan
LNCS Sublibrary: SL6 – Image Processing, Computer Vision, Pattern Recognition, and Graphics
Cover illustration: Yellow Crane Pagoda, Wuhan. Photo by Reinhard Klette, Auckland, New Zealand
This Springer imprint is published by the registered company Springer International Publishing AG, part of Springer Nature.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
The 8th Pacific Rim Symposium on Image and Video Technology (PSIVT 2017), held
in Wuhan, China, during November 20–24, 2017, was accompanied by a series of five
high-quality workshops covering the full range of state-of-the-art research topics in
image and video technology.
The series consisted of two full-day and three half-day workshops, which took place on November 21. Their topics ranged from well-established areas to
novel current trends: human behavior analysis; educational cloud and image- and
video-enriched cloud services; vision meets graphics; passive and active electro-optical
sensors for aerial and space imaging; and computer vision and modern vehicles.
The workshops received 103 paper submissions (including dual submissions with the main conference), and 36 presentations were selected by the individual workshop committees, yielding an overall acceptance rate of 35%. The PSIVT 2017 workshop proceedings comprise a short introduction to each workshop and all workshop contributions, arranged by the respective workshop organizers. We thank everyone involved in these remarkable programs, i.e., the committees, reviewers, and authors, for their distinguished contributions. We hope that you will enjoy reading these contributions, which
may inspire your research.
With the rapid development of image and video technologies, it has become possible to analyze human behavior via nonintrusive sensors. This endows computers with the capacity to understand, in a nonintrusive manner, what people are doing, what they are interested in, and their preferences and personality. Human behavior analysis has long been a critical issue in developing human-centered multimedia interaction systems, including affect-sensitive systems with educational goals, intelligent surveillance, gesture interaction systems, and smart tutoring.
This workshop received 18 submissions on the use of image and video technologies for human behavior analysis, including action and activity recognition, affect and attention analysis, social signal processing, face analysis, gesture interaction, intelligent surveillance, and smart tutoring. After rigorous peer review, six submissions were accepted. The workshop was held on November 21, 2017, and all accepted papers were presented.
Organization
Workshop Committee
1 Introduction
Head pose estimation is a key step in many computer vision applications, such as human-computer interaction, intelligent robotics, face recognition, and recognition of the visual focus of attention [14,28]. Existing techniques achieve satisfactory results in well-designed environments. In real-world applications, however, factors such as illumination variation, occlusion, and poor image quality make head pose estimation much more challenging [19,22]. Hence, we propose a deep transfer feature based convolutional neural forest (D-CNF) method to estimate head pose in unconstrained environments.
The rest of this paper is organized as follows: Sect. 2 presents our D-CNF method in detail. Section 3 discusses the experimental results on publicly available datasets. Section 4 concludes the paper with a summary of our method.
In this section, we present the D-CNF approach for head pose estimation in unconstrained environments. First, we present a robust deep feature representation based on facial patches, which reduces the influence of various nuisances, such as over-fitting, illumination, and low image resolution. Then, we describe the framework of the D-CNF training procedure for head pose estimation in detail. Finally, we give the D-CNF prediction procedure for head pose estimation in unconstrained environments.
We extract deep transfer features from facial patches with a pre-trained CNN model, i.e., Vgg-face [23]. We employ the Vgg-face architecture, pre-trained on the LFW and YTF face datasets [23], to derive a deep high-level feature representation, as shown in Fig. 2. The model includes 13 convolution layers, 5 max-pooling layers, and 3 fully connected layers. The deep transfer feature is computed as:
$$y_j = \max\Big(0, \sum_i x_i w_{i,j} + b_j\Big), \qquad (1)$$

where $y_j$ is the $j$-th output feature value of the first fully connected layer, $x_i$ is the $i$-th feature map of the last convolution layer, $w_{i,j}$ is the weight between the $i$-th feature map and the $j$-th output feature value, and $b_j$ denotes the bias of the $j$-th output feature value. The deep transfer feature is used to train a two-layer network through back-propagation, which transfers the original Vgg-face features to pose features.
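As a minimal illustration of Eq. (1), the following NumPy sketch computes the ReLU response of the first fully connected layer from the last-layer convolution maps; the toy shapes and the flattening of the maps into one vector are our assumptions (the actual feature uses the pre-trained Vgg-face weights):

```python
import numpy as np

def transfer_feature(feature_maps, W, b):
    """Eq. (1): y_j = max(0, sum_i x_i * w_ij + b_j), i.e. the ReLU
    response of the first fully connected layer, used as the deep
    transfer feature."""
    x = feature_maps.reshape(-1)        # flatten last-conv-layer maps x_i
    return np.maximum(0.0, x @ W + b)   # element-wise max(0, xW + b)

# Hypothetical toy sizes; the real model uses the Vgg-face weights.
rng = np.random.default_rng(0)
maps = rng.standard_normal((64, 4, 4))         # 64 feature maps of 4x4
W = rng.standard_normal((64 * 4 * 4, 128)) * 0.01
b = np.zeros(128)
y = transfer_feature(maps, W, b)               # 128-d transfer feature
```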
Fig. 2. The structure of the pre-trained CNN used for deep feature representation. The network includes 13 convolution layers, 5 max-pooling layers, and 3 fully connected layers. In our work, we extract deep features from facial patches at the first fully connected layer.
where $\eta > 0$ is the learning rate, $\pi$ is the head pose label, and $B$ is a random subset (a.k.a. mini-batch) of the samples. The gradient with respect to $Y$ is obtained by the chain rule; the gradient term that depends on the neural decision tree is

$$\frac{\partial L(Y, \pi; P)}{\partial f_n(P; Y)} = -\left(d_n^{N_r}(P; Y) + d_n^{N_l}(P; Y)\right), \qquad (5)$$

where, given a node $N$ in a tree, $N_r$ and $N_l$ denote its right and left children, respectively.
To split a node, the Information Gain (IG) is maximized:

$$\tilde{\varphi} = \arg\max_{\varphi} \Big( H(d_n) - \sum_{S \in \{N_r, N_l\}} \frac{|d_n^S|}{|d_n|}\, H(d_n^S) \Big), \qquad (6)$$

where $|d_n^S|/|d_n|$, $S \in \{N_r, N_l\}$, is the fraction of the samples in $d_n$ that reaches child $S$, with the set $d_n^{N_l}$ arriving at the left child node and the set $d_n^{N_r}$ arriving at the right child node, and $H(d_n)$ is the entropy of $d_n$.
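To make the split selection concrete, here is a small Python sketch of Eq. (6), assuming discrete head pose labels and simple axis-aligned threshold splits; the names and the candidate-generation scheme are illustrative, not from the paper:

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H(d) of a set of discrete head pose labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, go_left):
    """IG of Eq. (6): H(d_n) - sum over children of |d_n^S|/|d_n| * H(d_n^S)."""
    left, right = labels[go_left], labels[~go_left]
    gain = entropy(labels)
    for side in (left, right):
        if side.size:
            gain -= side.size / labels.size * entropy(side)
    return gain

def best_split(features, labels, candidates):
    """phi_tilde = argmax_phi IG(phi) over candidate (dim, threshold) splits."""
    return max(candidates,
               key=lambda c: information_gain(labels, features[:, c[0]] < c[1]))
```

In the D-CNF itself, the candidate split is produced by the neural connected split function rather than a raw feature threshold; the IG criterion for ranking candidates is the same.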
where $\bar{\theta}_l$ and $\Sigma_l^\theta$ are the mean and covariance of the head pose probabilities stored at leaf $l$, respectively.
While Eq. (8) models the probability of a feature patch $p_i$ ending in leaf $l$ of a single tree, the probability for the forest is obtained by averaging over all trees:

$$p(\theta | P) = \frac{1}{T} \sum_t p(\theta | l_t(P)), \qquad (9)$$

where $l_t$ is the corresponding leaf of tree $T_t$ and $T$ is the number of trees in the D-CNF.
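A minimal sketch of the averaging in Eq. (9), assuming each trained tree exposes a route method returning its leaf's distribution over pose classes (this interface is our illustration, not the paper's API):

```python
import numpy as np

class Tree:
    """Illustrative stand-in: a trained tree that routes a patch to a
    leaf and returns that leaf's distribution over pose classes."""
    def __init__(self, leaf_dist):
        self.leaf_dist = np.asarray(leaf_dist)

    def route(self, patch):
        return self.leaf_dist  # a real tree would descend its split nodes

def forest_posterior(patch, trees):
    """Eq. (9): p(theta|P) = (1/T) * sum_t p(theta | l_t(P))."""
    return np.mean([t.route(patch) for t in trees], axis=0)

# Two toy trees over three pose classes:
forest = [Tree([0.7, 0.2, 0.1]), Tree([0.5, 0.3, 0.2])]
print(forest_posterior(None, forest))   # -> [0.6  0.25 0.15]
```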
3 Experimental Results
3.1 Datasets and Settings
To evaluate our approach, three challenging face datasets were used: the Pointing'04 dataset [10], the BU3D-HP dataset [31], and the CCNU-HP dataset collected in a wide classroom [17]. These datasets were chosen because they contain unconstrained face images with poses ranging from −90° to +90°. The Pointing'04 head pose dataset is a benchmark of 2,790 monocular face images of 15 people with yaw and pitch angles varying from −90° to +90°; for every person, 2 series of 93 images (93 different poses) are available. The CCNU-HP dataset is an annotated set of 38 people with 75 different head poses, captured by an overhead camera in a wide scene; it contains head poses spanning from −90° to +90° in the horizontal direction and from −45° to +90° in the vertical direction. The multi-view BU3D-HP database contains 100 people of different ethnicities, including 56 females and 44 males, with yaw angles varying from −90° to +90°.
Fig. 3. Examples of head pose estimation on the Pointing'04, BU3D-HP, and CCNU-HP datasets. Top row: results on Pointing'04. Middle row: results on BU3D-HP. Bottom row: results on CCNU-HP.
Experiments were run on a machine with a 4.00 GHz CPU, 32 GB of RAM, and NVIDIA GeForce GTX 1080 GPUs (×2). We use the Caffe framework [12] for the transfer CNN and the deep feature representation.
Figure 4 shows the head pose estimation results on the Pointing'04 dataset for the yaw and pitch rotations, respectively. The average accuracy over the 9 yaw head poses and 9 pitch head poses is 95.6%. The highest accuracy is 98.4%, obtained at 90° in the yaw rotation; the lowest is 92.6%, at −45° in the pitch rotation, owing to greater occlusion of the face area.
Fig. 4. Head pose estimation on the Pointing'04 dataset in the yaw and pitch rotations.
Table 1. Accuracy (%) and average error (in degrees) of different methods on the Pointing'04 dataset.
Each image in the BU3D-HP dataset is automatically annotated with one of nine head pose labels ({−90°, −60°, −45°, −30°, 0°, +30°, +60°, +75°, +90°}). We train a D-CNF of 50 neural trees using 15,498 head pose images. Figure 5 shows the confusion matrix of head pose estimation on the BU3D-HP dataset. The D-CNF estimated 9 head pose classes in the horizontal direction and achieved an average accuracy of 98.99%. Examples of the estimated head poses are shown in Fig. 3.
The average accuracy of our D-CNF method is compared with that of a CNN, Zheng's GSRRR [33], and SIFT + CNN [32] in Table 2. The CNN in this experiment contains three convolution layers, each followed by a max-pooling layer, and two fully connected layers. Each filter is of size 5×5, and there are 32, 64, and 128 such filters in the three convolution layers, respectively. The input images are rescaled to 224×224.
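For reference, here is a PyTorch sketch of this baseline CNN as described; the padding scheme, hidden width, and nine-way output are our assumptions, since the text does not specify them:

```python
import torch.nn as nn

class BaselineCNN(nn.Module):
    """Baseline for comparison: 3 conv (5x5) + 3 max-pool + 2 FC layers."""
    def __init__(self, num_poses=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                     # 224 -> 112
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                     # 112 -> 56
            nn.Conv2d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                     # 56 -> 28
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 28 * 28, 512), nn.ReLU(),  # hidden width assumed
            nn.Linear(512, num_poses),
        )

    def forward(self, x):                        # x: (N, 3, 224, 224)
        return self.classifier(self.features(x))
```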
The accuracy of the CNN on the BU3D-HP dataset is 69.61%, as presented in Table 2. The accuracies achieved with SIFT features using the algorithms proposed in [32, 33] are 87.36% and 92.26%, respectively. Our method achieves 98.99%, which compares favorably with the methods above. The lowest STD of 0.5%, obtained with our method, further demonstrates the robustness of the proposed D-CNF.
Table 2. Accuracy (%) and STD of different methods on the multi-view BU3D-HP dataset.
Fig. 6. The annotation categories of the yaw and pitch angles in the experiments. (a) The annotation classes in the yaw rotation; (b) the annotation classes in the pitch rotation.
Fig. 7. Confusion matrices of head pose estimation on the CCNU-HP dataset. (a) The matrix of yaw angles; (b) the matrix of pitch angles.
4 Conclusion
This paper described a novel deep transfer feature based convolutional neural forest (D-CNF) method for head pose estimation in unconstrained environments. In this method, robust deep transfer features are first extracted from facial patches using a transfer CNN model. The D-CNF then integrates random trees with the representation learning of deep convolutional neural networks for head pose estimation. In addition, a neural connected split function (NCSF) is introduced into the D-CNF for split-node learning. Finally, the prediction procedure of the trained D-CNF classifies head poses in unconstrained environments. Our method performs well even on a limited amount of data, owing to the transfer of a pre-trained CNN into fast decision-node splitting in a random forest. The experiments demonstrate that our method has remarkable robustness and efficiency.
Experiments were conducted using the public Pointing'04, BU3D-HP, and CCNU-HP datasets. Our results demonstrated that the proposed deep feature outperformed the other popular image features. Compared with state-of-the-art methods, the proposed D-CNF achieved improved performance and strong robustness, with an average accuracy of 98.99% on the BU3D-HP dataset, 95.7% on the Pointing'04 dataset, and 82.46% on the CCNU-HP dataset. The average time for performing one head pose estimation is about 113 ms.
Compared with popular deep-learning CNN methods, our method achieved the best performance on a limited amount of data. In the future, we plan to investigate on-line learning methods to achieve real-time estimation by integrating head movement tracking.
References
1. Ahn, B., Park, J., Kweon, I.S.: Real-time head orientation from a monocular cam-
era using deep neural network. In: Cremers, D., Reid, I., Saito, H., Yang, M.-H.
(eds.) ACCV 2014. LNCS, vol. 9005, pp. 82–96. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-16811-1_6
2. Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
3. Bulo, S.R., Kontschieder, P.: Neural decision forests for semantic image labeling.
In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 81–88
(2014)
4. Chu, X., Ouyang, W., Li, H., Wang, X.: Structured feature learning for pose esti-
mation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pp. 4715–4723 (2016)
5. Dantone, M., Gall, J., Fanelli, G., Van Gool, L.: Real-time facial feature detection
using conditional regression forests. In: 2012 IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), pp. 2578–2585. IEEE (2012)
6. Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., Darrell, T.: DeCAF: a deep convolutional activation feature for generic visual recognition. In: ICML, vol. 32, pp. 647–655 (2014)
7. Fanelli, G., Yao, A., Noel, P.-L., Gall, J., Van Gool, L.: Hough forest-based facial
expression recognition from video sequences. In: Kutulakos, K.N. (ed.) ECCV 2010.
LNCS, vol. 6553, pp. 195–206. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35749-7_15
8. García-Montero, M., Redondo-Cabrera, C., López-Sastre, R., Tuytelaars, T.: Fast
head pose estimation for human-computer interaction. In: Paredes, R., Cardoso,
J.S., Pardo, X.M. (eds.) IbPRIA 2015. LNCS, vol. 9117, pp. 101–110. Springer,
Cham (2015). https://doi.org/10.1007/978-3-319-19390-8_12
9. Girshick, R.: Fast R-CNN. In: Proceedings of the IEEE International Conference on
Computer Vision, pp. 1440–1448 (2015)
10. Gourier, N., Hall, D., Crowley, J.: Estimating face orientation from robust detec-
tion of salient facial features in pointing. In: International Conference on Pattern
Recognition Workshop on Visual Observation of Deictic Gestures, pp. 1379–1382
(2004)
11. Insafutdinov, E., Pishchulin, L., Andres, B., Andriluka, M., Schiele, B.: DeeperCut:
a deeper, stronger, and faster multi-person pose estimation model. In: Leibe, B.,
Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9910, pp. 34–50.
Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46466-4_3
12. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: convolutional architecture for fast feature embedding. In: Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675–678 (2014)
13. Wu, J., Trivedi, M.M.: A two-stage head pose estimation framework and evaluation.
Pattern Recogn. 41, 1138–1158 (2008)
14. Kim, H., Sohn, M., Kim, D., Lee, S.: Kernel locality-constrained sparse coding for
head pose estimation. IET Comput. Vis. 10(8), 828–835 (2016)
15. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep con-
volutional neural networks. In: Advances in Neural Information Processing Sys-
tems, pp. 1097–1105 (2012)
16. Liu, X., Liang, W., Wang, Y., Li, S., Pei, M.: 3D head pose estimation with convo-
lutional neural network trained on synthetic images. In: 2016 IEEE International
Conference on Image Processing (ICIP), pp. 1289–1293. IEEE (2016)
17. Liu, Y., Chen, J., Shu, Z., Luo, Z., Liu, L., Zhang, K.: Robust head pose estimation
using Dirichlet-tree distribution enhanced random forests. Neurocomputing 173,
42–53 (2016)
18. Liu, Y., Xie, Z., Yuan, X., Chen, J., Song, W.: Multi-level structured hybrid forest
for joint head detection and pose estimation. Neurocomputing 266, 206–215 (2017)
19. Ma, B., Li, A., Chai, X., Shan, S.: CovGa: a novel descriptor based on symmetry
of regions for head pose estimation. Neurocomputing 143, 97–108 (2014)
20. Mukherjee, S.S., Robertson, N.M.: Deep head pose: gaze-direction estimation in
multimodal video. IEEE Trans. Multimedia 17(11), 2094–2107 (2015)
21. Murphy-Chutorian, E., Trivedi, M.M.: Head pose estimation in computer vision:
a survey. IEEE Trans. Pattern Anal. Mach. Intell. 31(4), 607–626 (2009)
22. Orozco, J., Gong, S., Xiang, T.: Head pose classification in crowded scenes. In:
British Machine Vision Conference, London, UK, pp. 1–3, 7–10 September 2009
23. Parkhi, O.M., Vedaldi, A., Zisserman, A.: Deep face recognition. In: BMVC, vol.
1, p. 6 (2015)
24. Patacchiola, M., Cangelosi, A.: Head pose estimation in the wild using convolu-
tional neural networks and adaptive gradient methods. Pattern Recogn. 71, 132–
143 (2017)
25. Ranjan, R., Patel, V.M., Chellappa, R.: HyperFace: a deep multi-task learning
framework for face detection, landmark localization, pose estimation, and gender
recognition. arXiv preprint arXiv:1603.01249 (2016)
26. Rastegari, M., Ordonez, V., Redmon, J., Farhadi, A.: XNOR-Net: ImageNet classification using binary convolutional neural networks. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9908, pp. 525–542. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46493-0_32
27. Schwarz, A., Lin, Z., Stiefelhagen, R.: HeHOP: highly efficient head orientation
and position estimation. In: 2016 IEEE Winter Conference on Applications of
Computer Vision (WACV), pp. 1–8. IEEE (2016)
28. Wu, S., Kan, M., He, Z., Shan, S., Chen, X.: Funnel-structured cascade for
multi-view face detection with alignment-awareness. Neurocomputing 221, 138–
145 (2017)
29. Xin, G., Xia, Y.: Head pose estimation based on multivariate label distribution.
In: IEEE Conference on Computer Vision and Pattern Recognition, Ohio, USA,
pp. 1837–1842, 24–27 June 2014
30. Xu, X., Kakadiaris, I.A.: Joint head pose estimation and face alignment framework
using global and local CNN features. In: Proceedings of the 12th IEEE Conference
on Automatic Face and Gesture Recognition, Washington, DC, vol. 2 (2017)
31. Yin, L., Wei, X., Sun, Y., Wang, J., Rosato, M.J.: A 3D facial expression database
for facial behavior research. In: 2006 7th International Conference on Automatic
Face and Gesture Recognition, FGR 2006, pp. 211–216. IEEE (2006)
32. Zhang, T., Zheng, W., Cui, Z., Zong, Y., Yan, J., Yan, K.: A deep neural network-
driven feature learning method for multi-view facial expression recognition. IEEE
Trans. Multimedia 18(12), 2528–2536 (2016)
33. Zheng, W.: Multi-view facial expression recognition based on group sparse reduced-
rank regression. IEEE Trans. Affect. Comput. 5(1), 71–85 (2014)