This is a deck introducing the 5th-place solution to Multi-dataset Time Series Anomaly Detection (https://compete.hexagon-ml.com/practice/competition/39/), a time-series anomaly detection competition held as part of KDD Cup 2021, together with a survey of the other top solutions.
These slides were presented at the KDD 2021 participation report & paper reading session on September 24 (https://connpass.com/event/223966/).
This document discusses generative adversarial networks (GANs) and their relationship to reinforcement learning. It begins with an introduction to GANs, explaining how they can generate images without explicitly defining a probability distribution by using an adversarial training process. The second half discusses how GANs are related to actor-critic models and inverse reinforcement learning in reinforcement learning. It explains how GANs can be viewed as training a generator to fool a discriminator, similar to how policies are trained in reinforcement learning.
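The adversarial training process described above can be sketched with a toy example. This is an illustrative sketch, not code from the document: a one-parameter "generator" G(z) = z + theta shifts noise toward real data, and a logistic "discriminator" plays the minimax game V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))]; all parameter values here are assumptions.

```python
import numpy as np

# Toy GAN objective: logistic discriminator vs. a shift generator.
rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

real = rng.normal(loc=3.0, size=256)   # "real" data: N(3, 1)
z = rng.normal(size=256)               # generator input noise
theta = 0.0                            # generator parameter (hypothetical)
w, b = 0.5, 0.0                        # discriminator parameters

def d_loss(w, b, real, fake):
    # Discriminator minimizes the negative minimax value.
    return -(np.mean(np.log(sigmoid(w * real + b)))
             + np.mean(np.log(1.0 - sigmoid(w * fake + b))))

fake = z + theta
before = d_loss(w, b, real, fake)

# One gradient-descent step on the discriminator loss.
d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
grad_w = -(np.mean((1.0 - d_real) * real) - np.mean(d_fake * fake))
grad_b = -(np.mean(1.0 - d_real) - np.mean(d_fake))
w, b = w - 0.05 * grad_w, b - 0.05 * grad_b
after = d_loss(w, b, real, fake)
assert after < before  # the discriminator got better at real-vs-fake

# Non-saturating generator update: ascend E[log D(G(z))], i.e. the
# generator is trained to fool the discriminator, moving its samples
# toward the region the discriminator currently labels "real".
g_grad = np.mean((1.0 - sigmoid(w * (z + theta) + b)) * w)
theta += 0.5 * g_grad
```

The generator never sees the real data directly; its only training signal is the discriminator's judgment, which is the hook to the policy-gradient analogy the second half of the talk draws.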
Presentation slides for the cvpaper.challenge Meta Study Group.
cvpaper.challenge is an initiative that reflects the current state of the computer vision field and aims to create its trends. Members work on paper summaries, idea generation, discussion, implementation, and paper submission, and share all of the resulting knowledge. Goals for 2019: "submit 30+ papers to top conferences" and "conduct comprehensive surveys of top conferences at least twice."
http://xpaperchallenge.org/cv/
Several recent papers have explored self-supervised learning methods for vision transformers (ViT). Key approaches include:
1. Masked prediction tasks that predict masked patches of the input image.
2. Contrastive learning using techniques like MoCo to learn representations by contrasting augmented views of the same image.
3. Self-distillation methods like DINO that distill a teacher ViT into a student ViT using different views of the same image.
4. Hybrid approaches that combine masked prediction with self-distillation, such as iBOT.
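The masked-prediction setup in item 1 can be sketched in a few lines. This is a toy illustration, not any paper's code; the patch size, mask ratio, and the trivial mean "predictor" are assumptions standing in for a ViT encoder/decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(img, p):
    """Split an (H, W) image into flattened non-overlapping p x p patches."""
    h, w = img.shape
    patches = img.reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return patches.reshape(-1, p * p)

img = rng.normal(size=(32, 32))
patches = patchify(img, p=8)             # 16 patches of 64 pixels each
n = len(patches)

mask_ratio = 0.75                        # assumed ratio (MAE-style)
masked = rng.choice(n, size=int(mask_ratio * n), replace=False)
visible = np.setdiff1d(np.arange(n), masked)

# A real ViT encoder would see only `patches[visible]`; here a trivial
# stand-in "predictor" guesses the mean visible patch for every masked
# slot, and the reconstruction loss is computed on masked patches only.
pred = np.tile(patches[visible].mean(axis=0), (len(masked), 1))
loss = np.mean((pred - patches[masked]) ** 2)
```

The key structural point is that the loss is restricted to the hidden patches, so the model must infer content it never saw from the surrounding visible context.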
These slides are from a presentation on Deep Learning given to the AI subcommittee of the JUAS Business Data Study Group at the Japan Users Association of Information Systems (JUAS). The material is aimed at a general audience, not specialists.
Note that this is a revised version of the slides presented at JUAS in December 2015.
This document summarizes face image quality assessment (FIQA) and introduces several FIQA algorithms. It defines FIQA and outlines the common FIQA pipeline: input a face image, detect the face region, and apply a FIQA algorithm to output a quality score. It discusses levels of FIQA algorithms, ranging from methods that require no learning to methods integrated with face recognition. Example algorithms described include FaceQnet, SER-FIQ, and MagFace. FaceQnet generates quality-score ground truths from face recognition results and trains a model to predict those scores. SER-FIQ and MagFace leverage face embeddings from recognition models to assess quality without separate training.
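The embedding-based idea behind the last sentence can be sketched concretely. This is an illustrative toy, not the document's code: the random vectors below are placeholders for a recognition network's embeddings, and the scoring rules (embedding norm for the MagFace-style score, embedding stability for the SER-FIQ-style score) are simplified readings of those methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def magface_style_quality(embedding):
    # MagFace-style idea: the magnitude of an un-normalized face
    # embedding serves directly as a quality score, so no separate
    # quality model or quality labels are needed.
    return float(np.linalg.norm(embedding))

# Placeholder embeddings: a sharper face yields a larger-magnitude one.
sharp_face = 2.0 * rng.normal(size=512)
blurry_face = 0.5 * rng.normal(size=512)
q_sharp = magface_style_quality(sharp_face)
q_blurry = magface_style_quality(blurry_face)
assert q_sharp > q_blurry  # larger norm -> higher predicted quality

# SER-FIQ-style idea: embed the same face several times with dropout
# active and score quality by the stability of the embeddings
# (low variance across stochastic passes -> high quality).
passes = np.stack([sharp_face + 0.01 * rng.normal(size=512)
                   for _ in range(10)])
ser_fiq_style_score = 1.0 / (1.0 + passes.std(axis=0).mean())
```

Both scores are read off an existing recognition model's outputs, which is exactly why the document groups them as FIQA without separate training.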
Paper introduction: Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect
1) The paper proposes a new method called De-confound-TDE to address long-tailed classification problems by removing the bad causal effect of head classes' momentum on tail classes during training.
2) It decouples representation and classifier learning via multi-head normalization and removes the effect of feature drift toward head classes via counterfactual TDE inference.
3) Experimental results show it achieves state-of-the-art performance on long-tailed classification benchmarks like CIFAR-10-LT, CIFAR-100-LT, and ImageNet-LT, as well as object detection and segmentation benchmarks like LVIS.
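The counterfactual TDE inference step in point 2 can be sketched in simplified form. This is a loose illustration of the idea as summarized above, not the paper's exact formulation: the classifier, the moving-average "drift" direction, and the trade-off weight alpha are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(5, 16))          # classifier for 5 classes
x = rng.normal(size=16)               # a test-time feature
d = rng.normal(size=16)               # moving average of training features
d_hat = d / np.linalg.norm(d)         # unit "drift toward head" direction
alpha = 1.0                           # trade-off weight (assumed value)

# How much of x lies along the accumulated drift direction.
cos_xd = (x @ d_hat) / np.linalg.norm(x)

factual = W @ x                                   # ordinary logits
counterfactual = W @ (np.linalg.norm(x) * d_hat)  # drift-only logits

# TDE inference: subtract the part of the prediction attributable to
# the momentum-induced feature drift toward head classes.
tde_logits = factual - alpha * cos_xd * counterfactual
```

The intuition is that the subtraction removes the "bad" causal path through the drift direction while keeping the feature's own discriminative content, which is what lifts tail-class accuracy.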
This document discusses deepfakes, including their creation and detection. It begins with an introduction to face swapping, face reenactment, and face synthesis techniques used to generate deepfakes. It then describes several methods for creating deepfakes, such as faceswap algorithms, 3D modeling approaches, and GAN-based methods. The document also reviews several datasets used to detect deepfakes. Finally, it analyzes current research on detecting deepfakes using techniques like two-stream neural networks, analyzing inconsistencies in audio-video, and detecting warping artifacts.