This document summarizes sampling-based approximations for reinforcement learning. It discusses how samples can be used to approximate value iteration, policy iteration, and Q-learning when the state-action space is too large to store a table of values. Key points include replacing the Q-table with function approximation, using features to generalize Q-values across states, and example feature representations such as those used for the Tetris domain. The convergence properties of approximate Q-learning are also discussed.
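
Since the summary points to Q-learning with feature-based function approximation, the following is a minimal sketch of a linear-feature Q-learning update, assuming a hand-crafted feature dictionary for each (state, action) pair; the feature names, environment interface, and the `alpha`/`gamma` values are illustrative assumptions, not taken from the source.

```python
# Sketch of approximate Q-learning with a linear feature representation.
# Q(s, a) is approximated as a weighted sum of feature values, and the
# weights are nudged toward the one-step TD target after each transition.
from collections import defaultdict

def linear_q(weights, features):
    """Approximate Q(s, a) as a dot product of weights and feature values."""
    return sum(weights[name] * value for name, value in features.items())

def q_learning_update(weights, feats_sa, reward, next_q_values,
                      alpha=0.1, gamma=0.9):
    """One approximate Q-learning step on the weight vector."""
    target = reward + gamma * max(next_q_values, default=0.0)
    error = target - linear_q(weights, feats_sa)
    for name, value in feats_sa.items():
        weights[name] += alpha * error * value
    return weights

# Toy usage with two hypothetical features of a (state, action) pair.
weights = defaultdict(float)
feats = {"bias": 1.0, "distance_to_goal": 0.5}   # hypothetical features
next_qs = [0.0, 1.2]                             # Q-values of successor actions
q_learning_update(weights, feats, reward=1.0, next_q_values=next_qs)
print(dict(weights))
```

Because the weights are shared across all states that produce similar feature values, a single update generalizes beyond the specific state-action pair that was just observed, which is the point of using features instead of a table.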