[Meta Survey] From Transformer to Foundation Models
1. From Transformer to Foundation Models
cvpaper.challenge
http://xpaperchallenge.org/cv
2. Foundation Models
Foundation models, as defined in "On the Opportunities and Risks of Foundation Models":
"... any model that is trained on broad data at scale and can be adapted (e.g., fine-tuned) to a wide range of downstream tasks ..."
In other words, a foundation model is a model trained on broad data that can be adapted (e.g., through additional training such as fine-tuning) to a wide range of downstream tasks; a minimal fine-tuning sketch follows at the end of this slide.
Photo from Stanford HAI
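To make "adapted (e.g., fine-tuned) to downstream tasks" concrete, here is a minimal PyTorch-style sketch: a stand-in "pretrained" backbone is kept frozen and only a small task head is trained on downstream data. The backbone, dimensions, and data below are placeholders for illustration, not anything from the slides.

```python
# Minimal fine-tuning sketch (hypothetical backbone and data; illustration only).
import torch
import torch.nn as nn

class DownstreamClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                       # pretrained foundation model (frozen here)
        self.head = nn.Linear(feat_dim, num_classes)   # small task-specific head

    def forward(self, x):
        with torch.no_grad():                          # keep pretrained weights fixed
            feats = self.backbone(x)
        return self.head(feats)

# toy backbone standing in for a real pretrained model
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
model = DownstreamClassifier(backbone, feat_dim=16, num_classes=3)

optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-3)  # only the head is updated
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 32)                 # dummy downstream batch
y = torch.randint(0, 3, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

Whether the backbone stays frozen or is fine-tuned end to end is a per-task design choice; the point is only that one pretrained model feeds many downstream heads.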
3. Where are Foundation Models headed?
AGI: Artificial General Intelligence*
The challenge of building artificial intelligence that solves tasks in a general-purpose way.
[Figure: a Foundation Model at the center, linked to surrounding domains such as Robotics, Vision, Language, Audio, Philosophy, and Interaction]
... and the reach of these models is still expanding.
*: AGI is often described as one of the ultimate goals of artificial intelligence, but Foundation Models are built for a variety of purposes.
5. From Transformer to FMs (1/N)
The Transformer was proposed in the field of natural language processing (NLP).
● Transformer
● Processes sequence data in a single pass through the self-attention mechanism (see the sketch after this slide)
● Striking enough that the paper was titled "Attention Is All You Need"
● Achieved shorter training time and better performance at the same time
【Why Transformer?】
Presumably because the paper that proposed the Transformer, "Attention Is All You Need" (NIPS 2017), solved the machine translation task (Neural Machine Translation; NMT) to a high standard, though other explanations are possible.
See also this deck on the Transformer:
https://www.slideshare.net/cvpaperchallenge/transformer-247407256
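As a concrete picture of "processing the whole sequence at once with self-attention", here is a minimal NumPy sketch of single-head scaled dot-product attention in the spirit of "Attention Is All You Need". The random weights and toy sizes are assumptions for illustration; a real Transformer adds multi-head projections, residual connections, and feed-forward layers.

```python
# Minimal single-head scaled dot-product self-attention (illustrative sketch).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # pairwise token-to-token scores
    weights = softmax(scores, axis=-1)        # attention weights; each row sums to 1
    return weights @ V                        # every token attends to all tokens at once

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8              # toy sizes
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                              # (5, 8): one output vector per input token
```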
6. From Transformer to FMs (1/N)
The Transformer spreads through the NLP field.
● BERT (Bi-directional Encoder Representations from Transformers)
● A model that can solve a wide range of NLP tasks, such as translation and prediction
● Drew attention for appearing to "understand the meaning" of sentences
● Why did BERT take off?
● Self-supervised learning lets it train on unlabeled text
● As a bidirectional model, it grasps context from both sides of each word
https://arxiv.org/abs/1810.04805
BERT can solve many tasks with a single model, and its training is carried out through self-supervised "mask and restore" on text (a toy sketch of this objective follows below):
Attention is All You Need. (original data)
↓ a gap is created intentionally
Attention is All ___ Need. (before restoration)
↓ estimated by BERT
Attention is All You Need. (after restoration)
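The mask-and-restore procedure above can be written as a toy objective: randomly mask some token positions, run a Transformer encoder over the corrupted sequence, and apply a cross-entropy loss only at the masked positions. This is a sketch of the idea with assumed toy sizes, not BERT's actual training code.

```python
# Toy masked-token objective in the spirit of BERT (not the actual implementation).
import torch
import torch.nn as nn

vocab_size, d_model, seq_len, mask_id = 100, 32, 6, 0

tokens = torch.randint(1, vocab_size, (1, seq_len))   # stands in for "Attention is All You Need ."
mask = torch.rand(1, seq_len) < 0.15                  # choose roughly 15% of positions to hide
mask[0, 3] = True                                     # ensure at least one masked position ("You")
inputs = tokens.clone()
inputs[mask] = mask_id                                # corrupted input: "Attention is All ___ Need ."

embed = nn.Embedding(vocab_size, d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
to_vocab = nn.Linear(d_model, vocab_size)

logits = to_vocab(encoder(embed(inputs)))             # a prediction at every position
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])  # penalize only masked positions
loss.backward()                                       # minimizing this teaches mask restoration
```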
7. From Transformer to FMs (1/N)
The GPT-3 paper won a Best Paper Award at NeurIPS 2020.
Made human-level text generation possible.
● GPT (Generative Pre-trained Transformer)
● Generates text by predicting the continuation of a given passage (a decoding-loop sketch follows below)
● Each new version brought a sharp jump in parameter count / training text size
○ GPT-1: 120 million parameters
○ GPT-2: 1.5 billion parameters, 40 GB of text
○ GPT-3: 175 billion parameters, 570 GB of text
○ This staggering growth in parameters led to large performance improvements
● Acquired text-generation ability strong enough that some said "the singularity has arrived"
https://arxiv.org/pdf/2005.14165.pdf
https://neuripsconf.medium.com/announcing-the-neurips-2020-award-recipients-73e4d3101537
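"Predicting the continuation of a given passage" amounts to a loop that feeds the current token sequence to the model, reads off a distribution over the next token, appends a token, and repeats. Below is a greedy-decoding sketch with an untrained stand-in model and a causal attention mask; the vocabulary, sizes, and prompt ids are assumptions for illustration and have nothing to do with GPT's real code.

```python
# Greedy autoregressive decoding loop (untrained stand-in model; illustration only).
import torch
import torch.nn as nn

vocab_size, d_model = 50, 32
embed = nn.Embedding(vocab_size, d_model)
decoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
to_vocab = nn.Linear(d_model, vocab_size)

@torch.no_grad()
def generate(prompt_ids, n_new_tokens=5):
    ids = list(prompt_ids)
    for _ in range(n_new_tokens):
        x = embed(torch.tensor([ids]))                    # (1, len, d_model)
        causal = torch.triu(                              # block attention to future tokens
            torch.full((x.shape[1], x.shape[1]), float("-inf")), diagonal=1)
        h = decoder(x, mask=causal)
        next_logits = to_vocab(h[:, -1])                  # scores for the next token
        ids.append(int(next_logits.argmax(dim=-1)))       # greedy: keep the most likely one
    return ids

print(generate([3, 17, 8]))   # prompt ids followed by five generated ids
```

Real GPT-style models sample from the next-token distribution (temperature, top-p, etc.) instead of always taking the argmax; the loop structure is the same.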
8. From Transformer to FMs (1/N)
The Transformer keeps advancing within NLP and is expanding into the Audio and Robotics domains.
The momentum of the Transformer has not stopped since then.
[Figure: timeline of Transformer-derived models across domains (Natural Language Processing, Vision & Language, Computer Vision). Callouts: "Attention is all you need!" (Transformer); "Predicts the continuation of text!" (later scaled up into GPT-2/3); "Understands sentence context from both directions! Self-supervised learning via mask-and-restore"; a model that processes images and language; image recognition (detection) realized by fusing with convolutions; image recognition with a pure Transformer architecture, later extended to video recognition through input-side modifications]