This document provides an overview of single-layer perceptrons (SLPs) and classification. It defines a perceptron as the simplest form of neural network: a single neuron with adjustable weights and a bias. An SLP can perform binary classification of linearly separable patterns by adjusting its weights during training. The document then outlines the limitations of SLPs, notably their inability to represent functions that are not linearly separable, such as XOR. It introduces Bayesian decision theory and shows how it yields optimal classification by comparing posterior probabilities, which are computed from prior probabilities and class-conditional likelihood functions. Finally, decision boundaries are defined as the surfaces that divide a feature space into non-overlapping regions, one per class, so that each pattern is assigned to exactly one class.
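As a concrete illustration of the training and the XOR limitation described above, here is a minimal sketch of the classic perceptron learning rule (weights updated only on misclassified examples, bias folded in as an extra weight); the function names, learning rate, and epoch count are illustrative choices, not taken from the source. Trained on AND, which is linearly separable, it converges; trained on XOR, it cannot, because no single hyperplane separates the two classes.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Perceptron learning rule: w <- w + lr * (target - output) * x.
    A constant 1 is appended to each input so the bias is learned as
    an ordinary weight. Targets are 0/1."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, t in zip(Xb, y):
            o = 1 if x @ w >= 0 else 0   # hard-threshold activation
            w += lr * (t - o) * x        # update only when o != t
    return w

def predict(X, w):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (Xb @ w >= 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_and = np.array([0, 0, 0, 1])  # linearly separable: training converges
y_xor = np.array([0, 1, 1, 0])  # not linearly separable: cannot converge

print("AND:", predict(X, train_perceptron(X, y_and)))  # [0 0 0 1]
print("XOR:", predict(X, train_perceptron(X, y_xor)))  # wrong on >= 1 input
```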
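The Bayesian decision rule summarized above can likewise be sketched in a few lines. The sketch below assumes a hypothetical two-class, one-dimensional problem with Gaussian class-conditional likelihoods; the priors, means, and variance are made-up numbers for illustration. Because the evidence p(x) is common to both classes, comparing posteriors reduces to comparing likelihood times prior, and the decision boundary is the point where the two products are equal.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Univariate Gaussian density, used as the class-conditional likelihood."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical problem: priors and class-conditional parameters are assumptions.
priors = {"w1": 0.6, "w2": 0.4}
params = {"w1": (0.0, 1.0), "w2": (2.0, 1.0)}  # (mean, std dev) per class

def bayes_decide(x):
    """Assign x to the class with the larger posterior.
    P(w_i | x) is proportional to p(x | w_i) * P(w_i), so the shared
    evidence p(x) can be dropped from the comparison."""
    scores = {c: gauss_pdf(x, *params[c]) * priors[c] for c in priors}
    return max(scores, key=scores.get)

# With equal variances the boundary is where the posteriors are equal:
# here x* = 1 + 0.5 * ln(0.6/0.4) ~= 1.20, splitting the axis into two regions.
for x in (-1.0, 1.0, 3.0):
    print(x, "->", bayes_decide(x))
```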