Lecture 7: Kohonen Neural Networks (Self-Organizing Maps)
• Input Data: You feed data into the network. The data could be
anything — for example, different images, sounds, or even
numbers.
• Best Matching Unit (BMU): The network finds the neuron
whose weight vector most closely matches the input data.
This neuron is called the Best Matching Unit (BMU).
Imagine trying to match an apple with the most similar
fruit in a basket (see the first sketch after this list).
• Updating the Network: Once the BMU is
found, that neuron and its nearby neighbors
adjust their weights to become more like the
input data. This is how the network "learns" and
gets better at organizing similar data over time
(see the second sketch after this list).
• Self-Organization: Over time, the weights
arrange themselves so that similar inputs
(like different kinds of apples) are mapped to
neurons that sit close to each other on the grid.
This is the self-organizing part.
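
A minimal sketch of the BMU search in Python (the grid size, weights,
and input below are made up for illustration; the BMU is simply the
neuron whose weight vector has the smallest Euclidean distance to the
input):

```python
import numpy as np

# Illustrative 3x3 grid of neurons, each with a 2-dimensional weight vector.
weights = np.random.rand(3, 3, 2)
x = np.array([0.5, 0.2])  # one input sample (made up)

# Euclidean distance from the input to every neuron's weight vector.
distances = np.linalg.norm(weights - x, axis=-1)

# The BMU is the grid position of the neuron closest to the input.
bmu = np.unravel_index(np.argmin(distances), distances.shape)
print("BMU grid position:", bmu)
```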
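
And a sketch of the update step, using a Gaussian neighborhood
function (a common choice; the learning rate η and radius σ below are
illustrative). Each weight vector w takes a step of η · h · (x - w)
toward the input x, where h shrinks with grid distance from the BMU:

```python
import numpy as np

def update_weights(weights, x, bmu, eta=0.5, sigma=1.0):
    """Pull the BMU and its grid neighbors toward the input x."""
    rows, cols = weights.shape[:2]
    for i in range(rows):
        for j in range(cols):
            # Squared distance on the grid from this neuron to the BMU.
            d2 = (i - bmu[0]) ** 2 + (j - bmu[1]) ** 2
            # Gaussian neighborhood: nearby neurons move a lot, distant ones barely.
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights[i, j] += eta * h * (x - weights[i, j])
    return weights

# Example: nudge a 3x3 grid toward one input, assuming the BMU is at (1, 1).
weights = np.random.rand(3, 3, 2)
weights = update_weights(weights, np.array([0.5, 0.2]), bmu=(1, 1))
```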
• Example: Clustering Fruits
• Let’s say you have a lot of fruits with different
attributes like color, shape, and size.
• You feed this data into the Kohonen network.
• The network organizes these fruits by
finding groups based on their similarities
(see the sketch after this list).
• For example, apples might be grouped
together based on their round shape and
color, while bananas might form a different
group due to their elongated shape and yellow color.
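
A rough sketch of this in Python, assuming the third-party minisom
package (pip install minisom) and made-up fruit features (redness,
roundness, size, each scaled to 0-1):

```python
import numpy as np
from minisom import MiniSom

# Made-up fruit features for illustration: [redness, roundness, size].
fruits = np.array([
    [0.9, 0.9, 0.5],  # apple
    [0.8, 0.9, 0.5],  # apple
    [0.2, 0.1, 0.7],  # banana
    [0.3, 0.2, 0.7],  # banana
])

# A 4x4 map is plenty for four samples; sigma and learning_rate are defaults.
som = MiniSom(4, 4, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=42)
som.train_random(fruits, num_iteration=500)

# Similar fruits should land on the same or neighboring neurons.
for fruit in fruits:
    print(som.winner(fruit))
```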
Real-Life Examples of Kohonen Networks
• Customer Segmentation: In marketing,
companies use Kohonen networks to group
customers based on their buying behavior.
This helps identify which customers are
likely to buy certain products.
• Image Recognition: Kohonen networks are
used in image recognition, where the network
groups similar images together (like sorting
different photos of dogs based on their breed).
• Speech Recognition: The network can group
similar speech patterns together, helping
machines understand and process human
speech.
• Medical Diagnosis: Kohonen networks can
help in identifying patterns in medical data,
such as grouping different types of diseases
based on symptoms or test results.
Summary
• To sum it up, a Kohonen Neural Network
(Self-Organizing Map) is like a smart sorting
system. It learns from data by grouping similar
things together, without needing any labeled
information. It’s useful for applications like
customer segmentation, image recognition,
and pattern identification in many fields.
• To decrease the learning rate (η) and
neighborhood radius (σ) after each iteration
or epoch, we use decay functions that reduce
these parameters gradually as the algorithm
progresses. This ensures that the network
fine-tunes its weights with smaller
adjustments as it learns.
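
A common choice is exponential decay, η(t) = η0 · exp(-t/τ) and
σ(t) = σ0 · exp(-t/τ), where τ is a time constant. A sketch with
illustrative starting values:

```python
import numpy as np

eta0, sigma0 = 0.5, 2.0  # illustrative starting values
tau = 1000               # decay time constant (an assumption; tune per problem)

def eta(t):
    # Learning rate shrinks exponentially with the iteration number t.
    return eta0 * np.exp(-t / tau)

def sigma(t):
    # Neighborhood radius shrinks the same way, so late updates stay local.
    return sigma0 * np.exp(-t / tau)

print(eta(0), eta(1000))  # 0.5 -> ~0.18: smaller adjustments as training goes on
```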