This document summarizes a research paper on scaling laws for neural language models. Some key findings of the paper include:
- Language model performance depends strongly on model scale and weakly on model shape. With enough compute and data, performance scales as a power law of parameters, compute, and data (see the sketch after this list).
- Overfitting is universal, with penalties depending on the ratio of parameters to data.
- Large models have higher sample efficiency and can reach the same performance levels with fewer optimization steps and fewer data points.
- The paper motivated subsequent work by OpenAI on applying scaling laws to other domains like computer vision and developing increasingly large language models like GPT-3.
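To make the power-law claim concrete, here is a minimal numerical sketch. The exponent and scale constant are the paper's approximate fitted values for the parameter-count law; treat the exact numbers as illustrative rather than authoritative.

```python
import math

# Sketch of the power-law form reported by Kaplan et al. (2020):
# test loss as a function of non-embedding parameter count N, when
# neither data nor compute is the bottleneck:
#     L(N) = (N_c / N) ** alpha_N
# The constants below are the paper's approximate fitted values.
ALPHA_N = 0.076   # fitted exponent for non-embedding parameters
N_C = 8.8e13      # fitted scale constant (non-embedding parameters)

def loss_from_params(n_params: float) -> float:
    """Predicted cross-entropy loss (nats/token) for a model of size n_params."""
    return (N_C / n_params) ** ALPHA_N

# Each 100x increase in parameters shaves a roughly constant factor off the loss.
for n in [1e6, 1e8, 1e10]:
    print(f"N = {n:.0e}: predicted loss ~ {loss_from_params(n):.2f}")
```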
In real-world deployments of machine learning, it often happens that a model cannot be used, despite high predictive accuracy, because it is a black box.
These slides explain the paper on LIME, a method devised to provide the rationale behind individual predictions, which machine learning models are poor at supplying on their own.
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "'Why Should I Trust You?': Explaining the Predictions of Any Classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016.
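As context for the slides, below is a minimal sketch of LIME's core loop, assuming binary interpretable features and a hypothetical `black_box_predict` function that returns the classifier's probability for the class of interest. The paper selects features with K-Lasso; this sketch substitutes a weighted ridge surrogate for brevity.

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(x, black_box_predict, n_samples=1000, kernel_width=0.25):
    """Return per-feature importance weights for one prediction (LIME-style).

    x: 1-D feature vector of the instance being explained.
    black_box_predict: hypothetical stand-in for any classifier's
        probability output on a batch of perturbed inputs.
    """
    d = len(x)
    rng = np.random.default_rng(0)
    # 1. Perturb: binary masks say which interpretable features are kept.
    masks = rng.integers(0, 2, size=(n_samples, d))
    perturbed = masks * np.asarray(x)          # switched-off features become 0
    # 2. Query the black box on each perturbation.
    preds = black_box_predict(perturbed)
    # 3. Weight perturbations by proximity to the original instance
    #    (exponential kernel on the fraction of features removed).
    distances = 1.0 - masks.mean(axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))
    # 4. Fit a weighted linear surrogate on the interpretable representation;
    #    its coefficients are the explanation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, preds, sample_weight=weights)
    return surrogate.coef_
```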
Introduction to Deep State Space Models for Time Series Forecasting
1. Syama Sundar Rangapuram, Matthias Seeger, Jan Gasthaus, Lorenzo Stella, Yuyang Wang, Tim Januschowski. Deep State Space Models for Time Series Forecasting. In Advances in Neural Information Processing Systems 31, 2018.
January 26, 2019
Chihiro Mihara
NeurIPS 2018 paper introduction
Cross-series parameter estimation using an RNN for state space models of multiple time series
Deep State Space Models for Time Series Forecasting
Source
https://papers.nips.cc/paper/8004-deep-state-space-models...
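As a rough illustration of the idea in the title, here is a minimal sketch assuming a local-level linear Gaussian state space model: a single LSTM, shared across all series, maps each series' covariates to that series' time-varying SSM parameters, so one set of network weights is estimated across the whole panel. All class and variable names are hypothetical, and the Kalman-filter likelihood the paper maximizes is omitted for brevity.

```python
import torch
import torch.nn as nn

class DeepSSMParams(nn.Module):
    """Shared LSTM emitting per-series, per-step state space parameters."""

    def __init__(self, n_covariates: int, hidden: int = 32):
        super().__init__()
        # One RNN shared across all time series (cross-series estimation).
        self.rnn = nn.LSTM(n_covariates, hidden, batch_first=True)
        # Heads emitting SSM parameters; softplus keeps variances positive.
        self.obs_noise = nn.Linear(hidden, 1)    # observation noise sigma_t
        self.state_noise = nn.Linear(hidden, 1)  # state innovation variance q_t
        self.init_mean = nn.Linear(hidden, 1)    # prior mean of initial state

    def forward(self, covariates):
        # covariates: (batch of series, time steps, n_covariates)
        h, _ = self.rnn(covariates)
        sigma = nn.functional.softplus(self.obs_noise(h))
        q = nn.functional.softplus(self.state_noise(h))
        mu0 = self.init_mean(h[:, 0])
        return sigma, q, mu0

# Usage sketch: 8 series, 24 steps, 5 covariates each.
model = DeepSSMParams(n_covariates=5)
sigma, q, mu0 = model(torch.randn(8, 24, 5))
```

In the paper these emitted parameters feed a Kalman filter, whose exact log-likelihood is backpropagated through to train the shared network.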