These slides were used by Umemoto of our company at an internal technical study session. They explain the Transformer, an architecture that has attracted much attention in recent years.
"Arithmer Seminar" is held weekly; professionals from inside and outside our company give lectures on their respective areas of expertise.
These slides were made by a lecturer from outside our company and are shared here with his/her permission.
Arithmer Inc. is a mathematics company that originated in the Graduate School of Mathematical Sciences at the University of Tokyo. We apply modern mathematics to deliver advanced AI systems as solutions in a wide range of fields. We believe it is our job to figure out how to use AI well to make work more efficient and to produce results that are useful to people and society.
1. The document discusses knowledge representation and deep learning techniques for knowledge graphs, including embedding models like TransE and TransH as well as neural network models (see the sketch after this list).
2. It provides an overview of methods for tasks like link prediction, question answering, and language modeling using recurrent neural networks and memory networks.
3. The document references several papers on knowledge graph embedding models and their applications to natural language processing tasks.
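To make the embedding models in item 1 concrete, the sketch below scores a single triple with TransE. The entities, relation, embedding dimension, and random vectors are illustrative assumptions, not taken from the summarized document; in practice the embeddings are learned, typically with a margin-based ranking loss.

```python
import numpy as np

# Illustrative TransE link-prediction score; the embeddings here are
# random stand-ins for vectors that would normally be learned.
rng = np.random.default_rng(0)
dim = 50
entity_emb = {"Tokyo": rng.normal(size=dim), "Japan": rng.normal(size=dim)}
relation_emb = {"capital_of": rng.normal(size=dim)}

def transe_score(head, relation, tail):
    # TransE models a true triple (h, r, t) as h + r ≈ t, so a smaller
    # distance ||h + r - t|| means the link is judged more plausible.
    h, r, t = entity_emb[head], relation_emb[relation], entity_emb[tail]
    return np.linalg.norm(h + r - t)

print(transe_score("Tokyo", "capital_of", "Japan"))
```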
This document summarizes a research paper on scaling laws for neural language models. Some key findings of the paper include:
- Language model performance depends strongly on model scale and only weakly on model shape. With enough compute and data, performance scales as a power law in the number of parameters, compute, and dataset size (see the sketch after this list).
- Overfitting is universal, with penalties depending on the ratio of parameters to data.
- Large models are more sample-efficient, reaching the same performance with fewer optimization steps and fewer data points.
- The paper motivated subsequent work by OpenAI on applying scaling laws to other domains like computer vision and developing increasingly large language models like GPT-3.
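The power-law behavior in the bullets above can be written down compactly. The snippet below implements the combined parameter/data form L(N, D) reported in the paper; the exponents and constants are the paper's approximate fitted values, and the example model sizes are illustrative.

```python
# Combined scaling law L(N, D) from Kaplan et al. (2020), with the
# paper's approximate fitted constants; N counts non-embedding
# parameters and D counts training tokens.
ALPHA_N, ALPHA_D = 0.076, 0.095
N_C, D_C = 8.8e13, 5.4e13

def loss(n_params, n_tokens):
    """Predicted cross-entropy loss for a model with n_params
    parameters trained on n_tokens tokens."""
    return ((N_C / n_params) ** (ALPHA_N / ALPHA_D) + D_C / n_tokens) ** ALPHA_D

# A 10x larger model reaches a lower predicted loss on the same data budget.
print(loss(1e8, 1e10), loss(1e9, 1e10))
```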
This document summarizes a research paper on inverse constrained reinforcement learning. The paper proposes a method to estimate cost functions from expert data in continuous action spaces to achieve optimal behavior under constraints. It formulates cost function inference as a maximum entropy inverse reinforcement learning model and uses a neural network to approximate the cost function. The method employs importance sampling and early stopping to improve learning efficiency. Evaluation results demonstrate the method outperforms alternatives in terms of cumulative reward and constraint violations, and the learned cost functions can be effectively transferred to new tasks.
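As a rough illustration of the ingredients named above (a maximum-entropy formulation with importance sampling), the sketch below shows the negative log-likelihood such a method could minimize over a cost function. This is not the paper's implementation; the trajectory representation, proposal policy, and function names are all assumptions.

```python
import numpy as np

def maxent_irl_loss(cost_fn, expert_trajs, sampled_trajs, log_q):
    """Negative log-likelihood of expert trajectories under the
    maximum-entropy model p(tau) = exp(-c(tau)) / Z.

    cost_fn:       maps a trajectory to a scalar cost (e.g. a neural net)
    expert_trajs:  trajectories demonstrated by the expert
    sampled_trajs: trajectories drawn from a proposal policy q
    log_q:         log-probabilities of sampled_trajs under q
    """
    expert_cost = np.mean([cost_fn(t) for t in expert_trajs])
    # Importance sampling: Z = E_q[exp(-c(tau)) / q(tau)], estimated
    # in log space for numerical stability.
    log_w = np.array([-cost_fn(t) for t in sampled_trajs]) - np.asarray(log_q)
    log_z = np.logaddexp.reduce(log_w) - np.log(len(sampled_trajs))
    return expert_cost + log_z
```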
This document discusses generative adversarial networks (GANs) and their relationship to reinforcement learning. It begins with an introduction to GANs, explaining how they can generate images without explicitly defining a probability distribution by using an adversarial training process. The second half discusses how GANs are related to actor-critic models and inverse reinforcement learning in reinforcement learning. It explains how GANs can be viewed as training a generator to fool a discriminator, similar to how policies are trained in reinforcement learning.
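To ground the "generator fools the discriminator" description, here is a minimal adversarial training step, written as a sketch in PyTorch under assumed MLP architectures; the layer sizes, learning rates, and data shape are illustrative, not taken from the original slides.

```python
import torch
import torch.nn as nn

z_dim, x_dim = 64, 784  # assumed latent and data dimensions
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))
D = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_x):
    n = real_x.size(0)
    # Discriminator step: push real samples toward label 1, fakes toward 0.
    fake_x = G(torch.randn(n, z_dim)).detach()
    d_loss = bce(D(real_x), torch.ones(n, 1)) + bce(D(fake_x), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: produce samples the discriminator labels as real.
    g_loss = bce(D(G(torch.randn(n, z_dim))), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The alternating updates mirror the reinforcement-learning connection the slides draw: the discriminator acts as a learned reward signal that the generator's "policy" is trained against.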
This is the "ECCV 2020 Report" from cvpaper.challenge, summarizing the ECCV Oral papers.
ECCV 2020 Oral papers, complete read-through (2/2) [https://www.slideshare.net/cvpaperchallenge/eccv2020-22-238640597/1]
pp. 7-10 ECCV trends
pp. 12-81 3D geometry & reconstruction
pp. 82-137 Geometry, mapping and tracking
pp. 138-206 Image and Video synthesis
pp. 207-252 Learning methods
cvpaper.challenge is a challenge to capture the present of the computer vision field and to create its trends. We work on writing paper summaries, devising ideas, discussion, implementation, and paper submission, and we share all of that knowledge. Our goal for 2020 is to submit 30+ papers to top conferences.
The document covers Twitter and GitHub accounts, an IPSJ conference, and hardware including an Intel Core i7 and FPGA boards from Digilent and ScalableCore. It also contains code snippets for C programs and for hardware designs, including a convolutional neural network layer.
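For reference, here is a hypothetical sketch of the computation a convolutional layer like the one mentioned above performs; nothing below is taken from the document's actual C code or hardware design.

```python
import numpy as np

def conv2d(x, w, b):
    """Valid (no padding) 2-D convolution of one feature map with one
    kernel, followed by a ReLU; shapes are illustrative."""
    h, w_in = x.shape
    k = w.shape[0]
    out = np.zeros((h - k + 1, w_in - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w) + b
    return np.maximum(out, 0.0)

y = conv2d(np.random.rand(8, 8), np.random.rand(3, 3), 0.1)
print(y.shape)  # (6, 6)
```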