This document summarizes recent research on applying the self-attention mechanism of Transformers to domains beyond language, such as computer vision. It discusses models that use self-attention for images, including ViT, DeiT, and T2T, which apply Transformers to images divided into patches. It also covers more general attention modules such as the Perceiver, which aims to be domain-agnostic. Finally, it discusses work on transferring pretrained language Transformers to other modalities with frozen weights, showing that they can function as universal computation engines.
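As a rough illustration of the patch-based idea described above, here is a minimal sketch in Python. It is not code from ViT, DeiT, or T2T; the function name, patch size, and embedding dimension are illustrative assumptions. It only shows the first step those models share: cutting an image into fixed-size patches, flattening them, and linearly projecting each patch into a token embedding that a standard Transformer encoder could consume.

```python
# Minimal sketch (assumed names and sizes, not the papers' code):
# split an image into non-overlapping patches and project each to an embedding.
import numpy as np

def image_to_patch_embeddings(image, patch_size=16, embed_dim=64, rng=None):
    """image: (H, W, C) array; returns (num_patches, embed_dim) token embeddings."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0, "image must tile evenly"
    # Cut the image into patch_size x patch_size patches and flatten each one.
    patches = (
        image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
        .transpose(0, 2, 1, 3, 4)
        .reshape(-1, patch_size * patch_size * c)
    )
    # Random linear projection stands in for the learned patch-embedding layer.
    projection = rng.standard_normal((patches.shape[1], embed_dim)) * 0.02
    return patches @ projection

tokens = image_to_patch_embeddings(np.zeros((224, 224, 3)))
print(tokens.shape)  # (196, 64): 14 x 14 patch tokens, each a 64-d embedding
```

In the actual models, these patch tokens (plus positional information and, typically, a class token) are what the Transformer's self-attention layers operate on.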
This document discusses some of the challenges in developing AI systems that use machine learning. It notes that machine learning systems rely on probabilities and statistics derived from training data, which makes quality assurance difficult, and that models built with deep neural networks are hard to fully understand and interpret. The document argues that new approaches are needed for developing machine learning-based systems, because traditional software engineering methods do not work well for them, and that establishing the field of "machine learning engineering" is important for building AI systems whose quality can be reliably assured.
This presentation, titled "Successful point for the IT Project Management," was given at the PMI Japan Monthly Seminar in March by Mr. Shinji Notohara, Director, IT innovation, inc., Japan.
These are the presentation slides from "The Evolution of Enterprise Blockchain Brought About by Interoperability [EEA Japan x Blockchain EXE]," held on July 1, 2022.
▼ Event details page
https://peatix.com/event/3266446/view