Wang, Ziyu, et al. "Dueling network architectures for deep reinforcement learning." Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1995-2003, 2016.
2. Paper Presented
Ziyu Wang, Tom Schaul, Matteo Hessel, Hado van Hasselt, Marc Lanctot, and Nando de Freitas. "Dueling network architectures for deep reinforcement learning." Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1995-2003, 2016.
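As context for the paper above, the sketch below illustrates its central idea: a Q-network whose head splits into a state-value stream V(s) and an advantage stream A(s, a), recombined as Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a'). This is a minimal illustrative sketch, not the paper's exact model: the small fully connected encoder and the layer sizes are hypothetical stand-ins for the convolutional trunk used in the paper; only the two-stream decomposition and the mean-subtraction combining step follow the paper.

```python
import torch
import torch.nn as nn

class DuelingQNetwork(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a')."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # Shared encoder (illustrative MLP; the paper uses a CNN trunk for Atari).
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantage stream A(s, a)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.encoder(obs)
        v = self.value(h)      # shape (batch, 1)
        a = self.advantage(h)  # shape (batch, n_actions)
        # Subtracting the mean advantage makes the V/A decomposition
        # identifiable, as in the combining module of the paper.
        return v + a - a.mean(dim=1, keepdim=True)

# Usage: Q-values for a batch of 32 observations with 8 features, 4 actions.
net = DuelingQNetwork(obs_dim=8, n_actions=4)
q_values = net(torch.randn(32, 8))  # shape (32, 4)
```

The mean-subtraction step matters: without it, adding a constant to V(s) and subtracting it from all A(s, a) leaves Q unchanged, so the two streams would not be separately identifiable.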