6. Specific Object Recognition with Local Features
4/16/2018
① Extract local regions (patches) from images
② Describe the patches by d-dimensional vectors
③ Make correspondences between similar patches
④ Calculate similarity between the images (in the figure, 3 matched patches give Similarity: 3)
Local feature:
• Position (x, y)
• Orientation θ
• Scale σ
• Feature vector f (e.g., 128-dim SIFT)
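The four-step pipeline above can be sketched with NumPy, using random vectors as stand-ins for detected patches and their SIFT-like descriptors (the data, the ratio-test threshold, and the function names are illustrative assumptions, not code from the slides):

```python
import numpy as np

def match_descriptors(query_desc, ref_desc, ratio=0.8):
    """Step ③: correspond each query descriptor to its nearest reference
    descriptor, keeping matches that pass a Lowe-style ratio test
    (the 0.8 threshold is a common choice, assumed here)."""
    matches = []
    for i, q in enumerate(query_desc):
        d = np.linalg.norm(ref_desc - q, axis=1)  # distances to all refs
        j1, j2 = np.argsort(d)[:2]                # two nearest neighbours
        if d[j1] < ratio * d[j2]:                 # ratio test
            matches.append((i, j1))
    return matches

def image_similarity(query_desc, ref_desc):
    """Step ④: similarity = number of accepted correspondences."""
    return len(match_descriptors(query_desc, ref_desc))

rng = np.random.default_rng(0)
ref = rng.normal(size=(50, 128))                        # ② 50 patches, 128-dim
query = ref[:3] + rng.normal(scale=0.01, size=(3, 128)) # 3 shared patches
print(image_similarity(query, ref))                     # 3 -> "Similarity: 3"
```

Real systems would obtain the descriptors from a detector/descriptor pair (steps ① and ②) rather than random data.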
12. Which One Should You Use?
• Accuracy-oriented:
– SIFT or Hessian-Affine detector + RootSIFT descriptor
• Speed-oriented:
– ORB detector + ORB descriptor
• Local Feature Detectors, Descriptors, and Image Representations: A Survey
https://arxiv.org/abs/1607.08368
13. RootSIFT [Arandjelovic+, CVPR’12]
• Hellinger kernel works better than Euclidean distance in comparing histograms such as SIFT
• Hellinger kernel (Bhattacharyya's coefficient) for L1-normalized histograms x and y:
H(x, y) = Σ_i √(x_i y_i)
• Explicit feature map of x into x':
– L1-normalize x
– take the element-wise square root of x to give x'
– then x' is L2-normalized (automatically, since Σ_i x'_i² = Σ_i x_i = 1)
• Computing Euclidean distance in the feature-map space is equivalent to Hellinger distance in the original space:
||x' − y'||² = ||x'||² + ||y'||² − 2 x'ᵀy' = 2 − 2 H(x, y)
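As a quick numerical check of the claims above, here is a minimal NumPy sketch of the RootSIFT map; random non-negative vectors stand in for SIFT histograms, and the function names are chosen here, not taken from the paper:

```python
import numpy as np

def rootsift(x):
    """Explicit feature map: L1-normalize, then element-wise square root.
    The result is automatically L2-normalized (its squared entries sum to 1)."""
    x = x / np.sum(x)          # L1 normalization (assumes non-negative histogram)
    return np.sqrt(x)

def hellinger_kernel(x, y):
    """H(x, y) = sum_i sqrt(x_i * y_i) for L1-normalized x, y."""
    x, y = x / np.sum(x), y / np.sum(y)
    return np.sum(np.sqrt(x * y))

rng = np.random.default_rng(0)
x, y = rng.random(128), rng.random(128)   # stand-ins for 128-dim SIFT histograms
xp, yp = rootsift(x), rootsift(y)

# ||x' - y'||^2 = 2 - 2 H(x, y)
lhs = np.sum((xp - yp) ** 2)
rhs = 2 - 2 * hellinger_kernel(x, y)
print(np.isclose(lhs, rhs))               # True
```

So any pipeline built on Euclidean distances (k-means, nearest-neighbour search) works on RootSIFT vectors unchanged while effectively comparing histograms with the Hellinger kernel.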
14. Large-scale Object Recognition
(Figure: a query image is compared against many reference images by explicit distance calculation between their local features to find matches.)
• Explicit feature matching requires high computational cost and memory footprint
• Solution: bag-of-visual-words!
15. Bag-of-Visual Words [Sivic+, ICCV’03]
• Offline:
– Collect a large number of training vectors
– Perform a clustering algorithm (e.g., k-means)
– Centroids of the clusters = visual words (VWs)
• Online:
– All features are assigned to their nearest visual words
– An image is represented by the frequency histogram of VWs
– (Dis)similarity is defined by the distance between histograms
(Figure: local features from an image are assigned to visual words VW1, VW2, …, VWn from the vocabulary V = {v_i | 1 ≤ i ≤ N}, and the image is summarized by its frequency histogram over the visual words.)
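The offline/online steps can be sketched end-to-end in NumPy; toy 8-dimensional descriptors and a tiny hand-rolled k-means stand in for real SIFT vectors and a production clusterer:

```python
import numpy as np

def kmeans(data, k, iters=20, seed=0):
    """Offline: cluster training descriptors; centroids become visual words."""
    rng = np.random.default_rng(seed)
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest centroid
        assign = np.argmin(
            np.linalg.norm(data[:, None] - centroids[None], axis=2), axis=1)
        # move each centroid to the mean of its assigned descriptors
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = data[assign == j].mean(axis=0)
    return centroids

def bovw_histogram(desc, vws):
    """Online: assign each descriptor to its nearest visual word and count
    frequencies -> the image's bag-of-visual-words histogram."""
    assign = np.argmin(np.linalg.norm(desc[:, None] - vws[None], axis=2), axis=1)
    return np.bincount(assign, minlength=len(vws))

rng = np.random.default_rng(1)
train = rng.normal(size=(200, 8))       # training descriptors (toy, 8-dim)
vws = kmeans(train, k=16)               # vocabulary of 16 visual words
hist = bovw_histogram(rng.normal(size=(30, 8)), vws)
print(hist.sum())                       # 30: every feature was assigned a word
```

Two images can then be compared by the distance between their histograms instead of by explicit feature matching.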
16. Bag-of-Visual Words [Sivic+, ICCV’03]
(Figure: at the indexing step, each reference-image feature is quantized to its nearest VW (VW1 … VWn); at the search step, query features are quantized the same way, so a query feature is matched only against reference features assigned to the same visual word.)
• Matching can be performed in O(1) with an inverted index
17. Overall Pipeline
(Figure: offline, every reference image goes through (1) feature detection, (2) feature description, and (3) quantization, and its image ID together with (x, y), σ, and θ is stored in the inverted index under the corresponding visual word v1 … vN. Online, the query image goes through the same three steps, followed by (4) voting: each query feature's visual word is looked up in the inverted index, the stored image IDs accumulate scores, and the images with the top-K scores are obtained. (5) Geometric verification then separates inlier from outlier correspondences to give the final results.)
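Steps (3) and (4) of the pipeline can be sketched with a dictionary-based inverted index; the visual-word IDs below are toy values, and a real index would also store (x, y), σ, and θ per feature for the geometric-verification step:

```python
from collections import defaultdict

def build_inverted_index(ref_vw_lists):
    """Offline: store each reference image's ID under every visual word
    that one of its features quantizes to."""
    index = defaultdict(list)            # VW ID -> list of image IDs
    for image_id, vw_ids in ref_vw_lists.items():
        for vw in vw_ids:
            index[vw].append(image_id)
    return index

def search(index, query_vw_ids, top_k=2):
    """(4) Voting: each query feature votes for every image ID stored under
    its visual word; return the image IDs with the top-K accumulated scores."""
    scores = defaultdict(int)
    for vw in query_vw_ids:
        for image_id in index[vw]:
            scores[image_id] += 1
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

refs = {1: [3, 5, 8], 2: [5, 9], 3: [1, 2, 3, 5]}   # image ID -> quantized VWs
index = build_inverted_index(refs)
print(search(index, [3, 5, 8]))          # [1, 3]: image 1 gets all 3 votes
```

Only the (usually short) posting lists of the query's visual words are touched, which is why matching is O(1) per feature rather than linear in the database size.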
24. Average Query Expansion [Chum+, ICCV’07]
• Obtain the top (m < 50) verified results of the original query
• Construct a new query using the average of these results
• Without geometric verification, QE degrades accuracy!
(Figure: query image → verified results → new query.)
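A minimal sketch of average query expansion over BoVW histograms (the tiny 4-bin histograms are made up for illustration); note how the unverified result is excluded, matching the warning that QE without geometric verification degrades accuracy:

```python
import numpy as np

def average_query_expansion(query_hist, result_hists, verified):
    """Average the original query's histogram with the histograms of the
    geometrically verified results; the mean becomes the new query."""
    pool = [query_hist] + [h for h, ok in zip(result_hists, verified) if ok]
    return np.mean(pool, axis=0)

query = np.array([4.0, 0.0, 1.0, 0.0])
results = [np.array([3.0, 1.0, 1.0, 0.0]),   # passed geometric verification
           np.array([0.0, 5.0, 0.0, 2.0])]   # unverified: excluded from the pool
new_query = average_query_expansion(query, results, verified=[True, False])
print(new_query)   # average of the query and the verified result only
```

Re-issuing the averaged query then retrieves images that share features with the verified results but not with the original query.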
25. Multiple Image Resolution Expansion [Chum+, ICCV’07]
(Figure: the ROI of the query image is compared with the ROIs of the first verified results.)
• Calculate the relative change in resolution between the query ROI and each verified result's ROI
• Construct an average query for each resolution (new query 1, new query 2, new query 3)
27. Discriminative Query Expansion [Arandjelovic+, CVPR’12]
• Train a linear SVM classifier
– Use verified results as positive training data
– Use low-ranked images as negative training data
– Rank images by their signed distance from the decision boundary
– Reranking can be efficient with an inverted index!
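The reranking idea can be sketched as follows, substituting a Pegasos-style hinge-loss SGD for a real linear SVM solver; the synthetic positives/negatives and all names here are assumptions for illustration:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=100, seed=0):
    """Pegasos-style subgradient descent on the regularized hinge loss --
    a small stand-in for a production linear SVM solver."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            if y[i] * (w @ X[i]) < 1:             # margin violated
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

rng = np.random.default_rng(0)
pos = rng.normal(loc=+1.0, size=(20, 16))   # verified results (positives)
neg = rng.normal(loc=-1.0, size=(20, 16))   # low-ranked images (negatives)
X = np.vstack([pos, neg])
y = np.array([1] * 20 + [-1] * 20)
w = train_linear_svm(X, y)

# Rerank: score every database vector by its signed distance from the boundary
db = rng.normal(loc=+1.0, size=(5, 16))     # unseen relevant images
print(np.sort(db @ w)[::-1])                # higher score = ranked earlier
```

Because the scores are dot products with a sparse-feature vector in the BoVW setting, they can be accumulated through the same inverted index used for voting, which is what makes the reranking efficient.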