[Figure: Universal Approximation Theorem visualization. Three ReLU units, with slope +5 and x offset +2, slope +15 and x offset +3, and slope -20 and x offset -1, are drawn separately and then plotted on the same graph. A network diagram shows the corresponding single-hidden-layer realization: the input x feeds three hidden neurons with ReLU activations (biases -2, -3, +1), whose outputs are weighted by +5, +15, -20 and summed to produce the output.]
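As a minimal sketch (Python, assuming the parameters read off the figure: output weights +5, +15, -20 and kinks at x = +2, +3, -1), the scaled, shifted ReLUs can be summed directly to produce the piecewise-linear curve in the combined plot:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# (output weight, x offset) for each hidden unit, as read off the figure.
# A unit with offset c computes ReLU(x - c), i.e. its network bias is -c.
units = [(+5.0, +2.0), (+15.0, +3.0), (-20.0, -1.0)]

x = np.linspace(-5.0, 5.0, 1001)
# The output layer scales each hidden activation and sums them.
y = sum(w * relu(x - c) for w, c in units)
```

Each unit contributes a kink at its offset, so the sum is a piecewise-linear function; adding more units adds more kinks, which is the mechanism behind the approximation result below.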
Universal Approximation Theorem
The Universal Approximation Theorem states that a feedforward neural network with a single hidden layer containing a finite number of neurons can approximate any continuous function on a compact subset of ℝⁿ to any desired degree of accuracy, provided the activation function is non-constant, bounded, and continuous. (Later results relax these conditions: any non-polynomial activation suffices, which is why unbounded activations such as the ReLU used in the figure also work.)
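To make the theorem concrete, here is a small sketch that approximates sin(x) on [-π, π] with a single hidden layer of ReLUs. The kink placement and the least-squares solve for the output weights are illustrative choices standing in for training; they are not part of the theorem itself:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Target: a continuous function on a compact interval.
x = np.linspace(-np.pi, np.pi, 400)
target = np.sin(x)

# Hidden layer: 30 ReLU units with kinks spread across the interval
# (a hypothetical placement; any sufficiently rich one works).
offsets = np.linspace(-np.pi, np.pi, 30)
hidden = relu(x[:, None] - offsets[None, :])       # shape (400, 30)

# Append a constant column for the output bias, then fit the output
# weights by least squares.
features = np.hstack([hidden, np.ones((len(x), 1))])
w, *_ = np.linalg.lstsq(features, target, rcond=None)
approx = features @ w

print("max abs error:", np.abs(approx - target).max())
```

Increasing the number of hidden units drives the maximum error toward zero, which is exactly the guarantee the theorem makes.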