
Course Name: Computer Vision Lab Course Code: CSP-422
Name: Ayushman Kumar UID: 20BCS8232

Experiment: 1.3

Aim:
Write a program to analyze the impact of refining feature detection for image segmentation.

Software Required: Google Colab

Description:
Refining feature detection for image segmentation involves improving the quality and accuracy of detected
features. Techniques for this purpose include the following (a short illustrative sketch follows the list):

1. **Multi-scale Analysis**: Detect features at different scales using algorithms like SIFT or SURF, capturing
objects of varying sizes.
2. **Non-Maximum Suppression**: Eliminate less significant feature responses produced by noise or
variations, retaining the most salient features.
3. **Adaptive Thresholding**: Dynamically adjust detection thresholds based on local image statistics for
varying lighting or contrast conditions.
4. **Edge Refinement**: Enhance detected edges using methods like edge thinning, linking, or contour
enhancement.
5. **Feature Filtering and Selection**: Remove irrelevant or false features, considering quality measures,
spatial distribution, or context.
6. **Feature Fusion**: Combine different feature types or descriptors to create a more comprehensive
representation of image content.
7. **Contextual Information**: Incorporate relationships between neighboring pixels or features, utilizing
spatial constraints, semantic priors, or contextual cues.
8. **Deep Learning-based Approaches**: Train deep learning models like CNNs to directly detect and
refine features, capturing complex patterns and context for improved accuracy.
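
As an illustration of techniques 1, 3, and 5 above, the following is a minimal sketch (separate from the experiment's implementation) of multi-scale detection, adaptive thresholding, and response-based feature filtering. It assumes OpenCV 4.4 or later, where SIFT is part of the main module, and uses a hypothetical input file named sample.png:

import cv2

# Illustrative sketch only; 'sample.png' is a placeholder file name.
img = cv2.imread('sample.png', cv2.IMREAD_GRAYSCALE)

# 1. Multi-scale analysis: SIFT detects keypoints across a scale-space (DoG) pyramid,
#    so features of different sizes are captured automatically.
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)

# 3. Adaptive thresholding: the threshold is computed from local neighbourhood
#    statistics, which handles uneven lighting better than one global threshold.
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)

# 5. Feature filtering and selection: keep only the strongest keypoints by detector
#    response, discarding weak responses that are often caused by noise.
strongest = sorted(keypoints, key=lambda kp: kp.response, reverse=True)[:200]
print(f"Kept {len(strongest)} of {len(keypoints)} keypoints")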


Pseudo code/Algorithms/Flowchart/Steps:

• Import OpenCV as cv2.
• Define image paths for the test image and dataset.
• Read the test image and create an ORB feature detector.
• Detect key points and compute descriptors for the test image.
• Initialize an empty list called store for matching results.
• Loop through dataset images, detecting key points, computing descriptors, and calculating matching scores.
• Sort matching results by score, from highest to lowest.
• Print results and display images with their scores.
• Wait for a key press to close displayed image windows.
• Close all image windows when the program finishes.

Implementation:
import cv2

# Paths to the test (query) image and the dataset images to compare against.
test_image = 'imagetest.png'
dataset = ['test1.jpg', 'test2.jpg', 'test3.jpeg']

testing_image = cv2.imread(test_image)

# ORB detector: detect keypoints and compute binary descriptors for the test image.
orb = cv2.ORB_create()
kp_target, des_target = orb.detectAndCompute(testing_image, None)

# Brute-force matcher with Hamming distance (suitable for ORB's binary descriptors).
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

store = []  # (image_path, matching_score) pairs

for image_path in dataset:
    dataset_image = cv2.imread(image_path)
    kp_dataset, des_dataset = orb.detectAndCompute(dataset_image, None)

    # Match descriptors and sort matches by distance (lower is better).
    matches = bf.match(des_target, des_dataset)
    matches = sorted(matches, key=lambda x: x.distance)

    # Keep only sufficiently close matches and normalise by the number of
    # keypoints in the test image to obtain a matching score.
    good_matches = [m for m in matches if m.distance < 75]
    matching_score = len(good_matches) / len(kp_target)

    store.append((image_path, matching_score))

# Sort results by matching score, highest first.
store.sort(key=lambda x: x[1], reverse=True)

for result in store:
    print(f"{result[0]} - Matching Score: {result[1]:.2f}")

# Display each dataset image with its score (when running in Google Colab,
# use cv2_imshow from google.colab.patches instead of cv2.imshow).
for image_path, matching_score in store:
    img = cv2.imread(image_path)
    cv2.imshow(f"Matching Score: {matching_score:.2f}", img)
    cv2.waitKey(0)

cv2.destroyAllWindows()
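
A possible refinement of the matcher above (technique 5, feature filtering and selection) is to replace the fixed distance cutoff of 75 with Lowe's ratio test. The sketch below is an optional extension, not part of the implementation; it reuses kp_target, des_target, and des_dataset from the loop body and requires crossCheck=False when calling knnMatch:

# Optional refinement sketch: Lowe's ratio test instead of a fixed distance cutoff.
# crossCheck must be False when requesting the two nearest neighbours (k=2).
bf_knn = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
knn_matches = bf_knn.knnMatch(des_target, des_dataset, k=2)

good_matches = []
for pair in knn_matches:
    if len(pair) == 2:                      # some keypoints have fewer than two neighbours
        m, n = pair
        if m.distance < 0.75 * n.distance:  # keep matches clearly better than the runner-up
            good_matches.append(m)

matching_score = len(good_matches) / max(len(kp_target), 1)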

Images used:
Dataset images:
Test image:


Output:
