Video surveillance of humans and vehicles is an active research topic in computer vision and is used to a great extent. Multiple images generated by a fixed camera contain various objects, captured under different variations and illumination changes, after which each object's identity and orientation are provided to the user. This scheme represents individual images as well as various object classes in a single, scale- and rotation-invariant model. The objective is to improve object recognition accuracy for surveillance purposes and to detect multiple objects with a sufficient level of scale invariance. Detecting and recognising multiple objects is important in the analysis of video data and in higher-level security systems. The method can efficiently detect objects in query images as well as in videos by extracting frames one by one. Given a query image at runtime, it generates a set of query features and finds the best match among the feature sets stored in the database. Using the SURF algorithm, the database object with the best feature matching is found; if a sufficiently good match exists, the object is present in the query image.
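The matching step described above can be sketched as a nearest-neighbour search over descriptor vectors with a ratio test. This is a minimal NumPy illustration (the function name and the 0.75 threshold are assumptions following common practice); real SURF descriptors would come from a library such as OpenCV's nonfree module:

```python
import numpy as np

def match_descriptors(query_desc, db_desc, ratio=0.75):
    """Match query descriptors to database descriptors using
    nearest-neighbour search with a ratio test.

    Both inputs are (n, d) arrays of local feature descriptors
    (e.g. 64-D SURF vectors). Returns a list of (query_idx, db_idx)
    pairs whose best match is clearly better than the second best.
    """
    matches = []
    for i, q in enumerate(query_desc):
        # Euclidean distance from this query descriptor to every
        # database descriptor.
        dists = np.linalg.norm(db_desc - q, axis=1)
        nn = np.argsort(dists)[:2]  # two nearest neighbours
        if len(nn) < 2 or dists[nn[0]] < ratio * dists[nn[1]]:
            matches.append((i, int(nn[0])))
    return matches
```

A query object would be declared present when enough descriptor pairs survive the ratio test.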
The document discusses visual pattern recognition and the design and implementation of visual pattern classifiers. It describes the common steps in designing a statistical visual pattern classifier, which include defining the problem, extracting relevant features, selecting a classification method, selecting a dataset for training and testing, training the classifier on a subset of images, testing the classifier, and refining the solution. It also defines what patterns and pattern classes are in the context of pattern recognition.
Performance analysis of chain code descriptor for hand shape classification - ijcga
Feature extraction is an important task for any image processing application. The visual properties of an image are its shape, texture, and colour. Of these, shape description plays an important role in image classification. Shape description methods are classified into two types: contour-based and region-based. Contour-based methods concentrate on the shape boundary line, while region-based methods consider the whole area. In this paper, the contour-based chain code description method was experimented with for different hand shapes.
The chain code descriptors of various hand shapes were calculated and tested with different classifiers such as k-nearest-neighbour (k-NN), support vector machine (SVM), and Naive Bayes. Principal component analysis (PCA) was applied after the chain code description. The performance of SVM was found to be better than k-NN and Naive Bayes, with a recognition rate of 93%.
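The chain code descriptor named above can be sketched briefly. The helper names below are illustrative; the code computes the 8-direction Freeman chain code of an ordered boundary and its normalised histogram, a simple feature vector that could feed the k-NN, SVM, or Naive Bayes classifiers mentioned:

```python
import numpy as np

# 8-directional Freeman chain code: direction 0 is east, numbered
# counter-clockwise, so 2 is north and 4 is west (rows grow downward).
DIRECTIONS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
              (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(boundary):
    """Freeman chain code of an ordered list of (row, col) boundary
    points; consecutive points must be 8-connected."""
    return [DIRECTIONS[(r1 - r0, c1 - c0)]
            for (r0, c0), (r1, c1) in zip(boundary, boundary[1:])]

def chain_code_histogram(code):
    """Normalised 8-bin histogram of a chain code: a simple
    shape feature usable with k-NN, SVM, or Naive Bayes."""
    hist = np.bincount(code, minlength=8).astype(float)
    return hist / hist.sum()
```

For example, walking clockwise around a unit square yields the code [0, 6, 4, 2].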
Two Dimensional Shape and Texture Quantification - Medical Image Processing - Chamod Mune
1. The document discusses various methods for quantifying two-dimensional shapes and textures in medical images, including statistical moments, spatial moments, radial distance measures, chain codes, Fourier descriptors, thinning, and texture measures.
2. Compactness, calculated using perimeter and area, quantifies how close a shape is to a circle. Spatial moments provide quantitative measurements of point distributions and shapes. Radial distance measures analyze boundary curvature. Chain codes represent boundary points.
3. Fourier descriptors and thinning/skeletonization reduce shapes to descriptors and graphs for analysis. Texture is quantified using statistical moments, co-occurrence matrices, spectral measures, and fractal dimensions.
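The compactness measure in point 2 has a standard closed form; a one-line sketch, normalised so a perfect circle scores 1.0:

```python
import math

def compactness(perimeter, area):
    """Compactness = perimeter^2 / (4 * pi * area).  Equals 1.0 for
    a perfect circle and grows as the shape departs from circularity
    (a square scores about 1.27)."""
    return perimeter ** 2 / (4.0 * math.pi * area)
```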
The document discusses image representation and feature extraction techniques. It describes how representation makes image information more accessible for computer interpretation using either boundaries or pixel regions. Feature extraction quantifies these representations by extracting descriptors like geometric properties, statistical moments, and textures. Desirable properties for descriptors include being invariant to transformations, compact, robust to noise, and having low complexity. Various boundary and regional descriptors are defined, such as chain codes, shape numbers, and moments.
This document discusses various shape features and descriptors that can be used to represent shapes. It covers properties shape features should have like being invariant to translation, rotation, and scale. Common shape descriptors are classified as either contour-based or region-based. Simple geometric features like center of gravity, circularity ratio, and rectangularity are described first. One-dimensional functions for shape representation include the centroid distance function, area function, and chord length function. Other shape features mentioned are basic and differential chain codes, chain code histograms, and shape matrices. The conclusion discusses how shape signatures can be made invariant and robust to noise.
This document discusses various techniques for representing and describing images for image processing and segmentation. It covers chain codes, polygonal approximations using minimum perimeter polygons and merging/splitting techniques, signatures which provide a 1D functional representation of boundaries, boundary segments to extract information from concave parts of objects, and skeletons which reduce regions to graphs by obtaining medial axis transformations. It also provides examples of thinning algorithms used to obtain skeletons by iteratively deleting contour points while ensuring the overall shape is preserved.
Features image processing and Extraction - Ali A Jalil
This document discusses various techniques for extracting features and representing shapes from images, including:
1. External representations based on boundary properties and internal representations based on texture and statistical moments.
2. Principal component analysis (PCA) is mentioned as a statistical method for feature extraction.
3. Feature vectors are described as arrays that encode measured features of an image numerically, symbolically, or both.
The document discusses different methods for representing segmented image regions, including:
1) Representing regions based on their external (boundary-based) characteristics or internal (pixel-based) characteristics.
2) Common boundary representation methods are boundary following algorithms, chain codes, and polygon approximation.
3) Chain codes represent boundaries as sequences of line segments coded by direction. Polygon approximation finds the minimum perimeter polygon to capture a boundary shape using the fewest line segments.
Template matching is a technique used in computer vision to find sub-images in a target image that match a template image. It involves moving the template over the target image and calculating a measure of similarity at each position. This is computationally expensive. Template matching can be done at the pixel level or on higher-level features and regions. Various measures are used to quantify the similarity or dissimilarity between images during the matching process. Template matching has applications in areas like object detection but faces challenges with noise, occlusions, and variations in scale and rotation.
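The sliding-window process described above can be sketched directly. This is a minimal brute-force NumPy version using sum-of-squared-differences as the dissimilarity measure (the function name is illustrative; production code would use an optimised routine such as OpenCV's matchTemplate):

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the (row, col) of
    the best match under the sum-of-squared-differences (SSD)
    measure.  Brute force: O(image_size * template_size), which is
    why template matching is computationally expensive."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r+th, c:c+tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```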
The document discusses using the Hough transform for edge detection and boundary linking in images. [1] The Hough transform is a technique that can find edge points that lie along a straight line or curve without needing prior knowledge about the position or orientation of lines in the image. [2] It works by transforming each edge point in the image space to a line in the parameter space, and the intersection of lines corresponds to parameters of the line on which multiple edge points lie. [3] The Hough transform can handle cases like vertical lines that pose problems for other edge linking techniques.
The document discusses Bézier curves and provides information about a CS 354 class. It includes details about an in-class quiz, the professor's office hours, and an upcoming lecture on Bézier curves and Project 2, which is due on Friday. The lecture will cover procedural generation of a torus from a 2D grid, GLSL functions needed for the project, normal maps, coordinate spaces, interpolation curves, and Bézier curves.
The Hough transform is a feature extraction technique used in image analysis and computer vision to detect shapes within images. It works by detecting imperfect instances of objects of a certain class of shapes via a voting procedure. Specifically, the Hough transform can be used to detect lines, circles, and other shapes in an image if their parametric equations are known, and it provides robust detection even under noise and partial occlusion. It works by quantizing the parameter space that describes the shape and counting the number of votes each parametric description receives from edge points in the image.
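The voting procedure can be made concrete for the line case. A minimal accumulator sketch, assuming lines in the normal form rho = x*cos(theta) + y*sin(theta), which handles vertical lines without trouble:

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Minimal Hough transform for lines in normal form
    rho = x*cos(theta) + y*sin(theta).  Each edge point votes for
    every (rho, theta) line it could lie on; peaks in the returned
    accumulator correspond to lines supported by many edge points."""
    h, w = shape
    max_rho = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # rho can be negative, so offset the rho axis by max_rho.
    acc = np.zeros((2 * max_rho, n_theta), dtype=int)
    for y, x in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + max_rho, np.arange(n_theta)] += 1
    return acc, thetas, max_rho
```

Because every point votes independently, missing points (occlusion) or spurious points (noise) only lower or slightly blur the peak rather than destroying it.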
Image similarity using symbolic representation and its variations - sipij
This paper proposes a new method for image/object retrieval. A pre-processing technique is applied to describe the object, in a one-dimensional representation, as a pseudo time series. The proposed algorithm develops modified versions of the SAX representation: it applies an approach called Extended SAX (ESAX) in order to achieve efficient and accurate discovery of the important patterns needed to retrieve the most plausible similar objects. The approach depends upon a table containing the break-points that divide a Gaussian distribution into an arbitrary number of equiprobable regions; each breakpoint has more than one cardinality. A distance measure is used to decide the most plausible matching between strings of symbolic words. The experimental results show that the approach improves detection accuracy.
This document discusses various techniques for image segmentation. It describes two main approaches to segmentation: discontinuity-based methods that detect edges or boundaries, and region-based methods that partition an image into uniform regions. Specific techniques discussed include thresholding, gradient operators, edge detection, the Hough transform, region growing, region splitting and merging, and morphological watershed transforms. Motion can also be used for segmentation by analyzing differences between frames in a video.
The hit-and-miss transform is a binary morphological operation that can detect particular patterns in an image. It uses a structuring element containing foreground and background pixels to search an image. If the structuring element pattern matches the image pixels underneath, the output pixel is set to foreground, otherwise it is set to background. The hit-and-miss transform can find features like corners, endpoints, and junctions and is used to implement other morphological operations like thinning and thickening. It is performed by matching the structuring element at all points in the image.
Morphological image processing uses mathematical morphology tools to extract image components and describe shapes. Some key tools include binary erosion and dilation, which thin and thicken objects. Erosion shrinks objects while dilation grows them. Opening and closing are combinations of erosion and dilation that smooth contours or fill gaps. The hit-or-miss transform detects shapes by requiring matches of foreground and background pixels. Other algorithms include boundary extraction, hole filling, and thinning to find skeletons, which are medial axes of object shapes.
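The erosion and dilation operations described above can be sketched for binary images. A minimal NumPy version, assuming a small binary structuring element `se`; opening is then `dilate(erode(img, se), se)` and closing is the reverse composition:

```python
import numpy as np

def erode(img, se):
    """Binary erosion: the output pixel is 1 only where the
    structuring element `se`, centred there, fits entirely inside
    the foreground.  Shrinks/thins objects."""
    sh, sw = se.shape
    pr, pc = sh // 2, sw // 2
    padded = np.pad(img, ((pr, pr), (pc, pc)))  # zero border
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = padded[r:r+sh, c:c+sw]
            out[r, c] = np.all(window[se == 1] == 1)
    return out

def dilate(img, se):
    """Binary dilation: the output pixel is 1 where the structuring
    element hits at least one foreground pixel.  Grows/thickens
    objects."""
    sh, sw = se.shape
    pr, pc = sh // 2, sw // 2
    padded = np.pad(img, ((pr, pr), (pc, pc)))
    out = np.zeros_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            window = padded[r:r+sh, c:c+sw]
            out[r, c] = np.any(window[se == 1] == 1)
    return out
```

The size and shape of `se` control how many pixels are removed or added, as the summaries above note.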
At the end of this lecture, you should be able to:
describe the importance of morphological features in an image.
describe the erosion, dilation, opening, and closing operations.
identify the practical advantages of morphological operations.
apply morphological operations to problem solving.
Unit 3 discusses image segmentation techniques. Similarity based techniques group similar image components, like pixels or frames, for compact representation. Common applications include medical imaging, satellite images, and surveillance. Methods include thresholding and k-means clustering. Segmentation of grayscale images is based on discontinuities in pixel values, detecting edges, or similarities using thresholding, region growing, and splitting/merging. Region growing starts with seed pixels and groups neighboring pixels with similar properties. Region splitting starts with the full image and divides non-homogeneous regions, while region merging combines small similar regions.
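The region-growing step described above can be sketched as a breadth-first flood fill. A minimal version, assuming a fixed intensity tolerance measured against the seed pixel:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` (row, col): repeatedly absorb
    4-connected neighbours whose intensity is within `tol` of the
    seed pixel's intensity.  Returns a boolean mask of the region."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

Real systems typically grow against the evolving region mean rather than the raw seed value; the seed-relative test is kept here for brevity.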
The document provides an agenda for a practical session on digital image processing. It discusses stages of computer vision including stereo images, optical flow, and machine learning techniques like classification and clustering. Stereo vision and depth maps from stereo images are explained. Optical flow concepts like the Lucas-Kanade method are covered. Machine learning algorithms like KNN, SVM, and K-means clustering are also summarized. The document concludes with information about a project, assignment, and notable AI companies in Egypt.
This document discusses advanced computer graphics and realistic image generation techniques. It covers topics like modeling objects, lighting, rendering, visible surface determination, shading, textures, shadows, transparency, camera models, and anti-aliasing. Realism involves modeling objects and lighting conditions, determining visible surfaces, calculating pixel colors based on light reflection, and supporting animation. Rendering techniques like line drawings, shading, and shadows add information to convey depth. Anti-aliasing reduces jagged edges by using techniques like supersampling and weighted area sampling.
This document provides an overview of various digital image processing techniques including morphological transformations, geometric transformations, image gradients, Canny edge detection, image thresholding, and a practical demo assignment. It discusses the basic concepts and algorithms for each technique and provides examples code. The document is presented as part of a practical course on digital image processing.
Morphological operations like dilation and erosion are non-linear image transformations used to extract shape-related information from images by processing objects based on their morphology or shape properties using a structuring element, with dilation adding pixels to object boundaries and erosion removing pixels from object boundaries. The size and shape of the structuring element controls the number of pixels added or removed during these operations, which are used for tasks like noise removal, feature extraction, and image segmentation.
This document summarizes key concepts in morphological image processing including dilation, erosion, opening, closing, and hit-or-miss transformations. Morphological operations manipulate image shapes and structures using structuring elements based on set theory operations. Dilation adds pixels to the boundaries of objects in an image, while erosion removes pixels on object boundaries. Opening can remove noise and smooth object contours, while closing can fill in small holes and fill gaps in object shapes. Hit-or-miss transformations are used to detect specific patterns of on and off pixels. These operations form the basis for morphological algorithms like boundary extraction.
Image segmentation techniques
More information on this research can be found in:
Hussein, Rania, and Frederic D. McKenzie. "Identifying Ambiguous Prostate Gland Contours from Histology Using Capsule Shape Information and Least Squares Curve Fitting." International Journal of Computer Assisted Radiology and Surgery (IJCARS), vol. 2, no. 3-4, pp. 143-150, December 2007.
This document discusses morphological operations in image processing. It describes how morphological operations like erosion, dilation, opening, and closing can be used to extract shapes and boundaries from binary and grayscale images. Erosion shrinks foreground regions while dilation expands them. Opening performs erosion followed by dilation to remove noise, and closing does the opposite to join broken parts. The hit-and-miss transform is also introduced to detect patterns in binary images using a structuring element containing foreground and background pixels. Examples are provided to illustrate each morphological operation.
The invention of digital technology has led to an increase in the number of images that can be stored in digital format, so searching and retrieving images in large image databases has become more challenging. Over the last few years, Content-Based Image Retrieval (CBIR) has gained increasing attention from researchers. CBIR is a system which uses the visual features of an image to search a large image database for the image a user requires, with the user's request taking the form of a query image. Important features of images are colour, texture, and shape, which give detailed information about the image. CBIR techniques using different feature extraction techniques are discussed in this paper.
IRJET-Feature based Image Retrieval based on Color - IRJET Journal
This document discusses a content-based image retrieval system based on color features. It begins with an introduction to content-based image retrieval and discusses how color histograms are commonly used to represent color features. It then reviews related work on color-based image retrieval systems and the color spaces and distance metrics used. The document outlines the steps of the described retrieval system, including training with color histograms, querying with a sample image, and returning similar images based on color histogram similarity. It presents results of the system and discusses future areas of improvement like incorporating more semantic features.
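The histogram-based training and querying steps described can be sketched as follows. A minimal version using per-channel histograms and histogram intersection as the similarity score (the system above works in colour spaces such as HSV; plain RGB channels are used here for brevity):

```python
import numpy as np

def color_histogram(img, bins=8):
    """Per-channel intensity histogram of an (H, W, 3) uint8 image,
    concatenated across channels and normalised to sum to 1."""
    hists = [np.histogram(img[..., ch], bins=bins, range=(0, 256))[0]
             for ch in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1] between two normalised histograms;
    1.0 means the colour distributions are identical."""
    return float(np.minimum(h1, h2).sum())
```

Retrieval then amounts to ranking the database by `histogram_intersection` against the query image's histogram.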
Content-based image retrieval (CBIR) uses visual image content to search large image databases according to user needs. CBIR systems represent images by extracting features related to color, shape, texture, and spatial layout. Features are extracted from regions of the image and compared to features of images in the database to find the most similar matches. CBIR has applications in medical imaging, fingerprints, photo collections, and more. Techniques include representing images with histograms of color and texture features extracted through transforms.
An Unsupervised Cluster-based Image Retrieval Algorithm using Relevance Feedback - IJMIT JOURNAL
Content-based image retrieval (CBIR) systems use low-level query image features to identify similarity between a query image and the images in the database. Image content plays a significant role in image retrieval. There are three fundamental bases for content-based image retrieval: visual feature extraction, multidimensional indexing, and retrieval system design. Each image has three kinds of content: colour, texture, and shape features. Colour and texture are both important visual features used in content-based image retrieval to improve results. Colour histogram and texture features have the potential to retrieve similar images on the basis of their properties. As the features extracted from a query are low-level, it is extremely difficult for the user to provide an appropriate example in query-by-example retrieval. To overcome these problems and reach higher accuracy in a CBIR system, providing the user with relevance feedback is well known as a promising solution.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Ijaems apr-2016-16 Active Learning Method for Interactive Image Retrieval - INFOGAIN PUBLICATION
With many possible multimedia applications, content-based image retrieval (CBIR) has recently gained interest for image management and web search. CBIR is a technique that uses the visual content of an image to search for similar images in large-scale image databases according to a user's needs. In many image retrieval algorithms, retrieval is based only on feature similarity with respect to the query, ignoring the similarities among the images in the database. To exploit this information, this paper applies the k-means clustering algorithm to an image retrieval system: the algorithm improves the relevance of results by first clustering similar images in the database. We also implement a wavelet transform, which provides both coarse and fine filtering, and apply the Euclidean distance metric so that, given a query image, output images can be retrieved by feature similarity. The results show that the proposed approach can greatly improve the efficiency and performance of image retrieval.
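The cluster-then-search idea described above can be sketched as follows; this is a toy Lloyd's-algorithm version with a deterministic initialization, not the paper's implementation, and the 2-D "feature vectors" stand in for real image descriptors:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm; deterministic spread-out initialization."""
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign every vector to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def retrieve(query, X, centroids, labels, top=3):
    """Search only inside the cluster whose centroid is nearest the query."""
    c = np.linalg.norm(centroids - query, axis=1).argmin()
    members = np.flatnonzero(labels == c)
    order = np.linalg.norm(X[members] - query, axis=1).argsort()
    return members[order][:top]

# Two well-separated groups of toy 2-D feature vectors.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
cents, labs = kmeans(X, 2)
print(retrieve(np.array([0.05, 0.05]), X, cents, labs))  # indices 0, 1, 2
```

The payoff is that the query is compared against only one cluster's members rather than the whole database, which is where the efficiency gain claimed in the abstract comes from.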
A comparative study on content based image retrieval methods - IJLT EMAS
Content-based image retrieval (CBIR) is a method of finding images in a huge image database according to a person's interests. "Content-based" here means that the search involves analyzing the actual content present in the image. As databases of images grow day by day, researchers are searching for better image retrieval techniques that maintain good efficiency. This paper presents the visual features and various approaches for image retrieval from a large image database.
Evaluation of Euclidean and Manhattan Metrics In Content Based Image Retriev... - IJERA Editor
This document evaluates the performance of the Euclidean and Manhattan distance metrics in a content-based image retrieval system. It finds that the Manhattan distance metric showed better precision than the Euclidean distance metric. The system uses color histograms and Gabor texture features to represent images. Color is represented in HSV color space and histograms of hue, saturation and value are used. Gabor filters are applied to capture texture at different scales and orientations. Distance between feature vectors is calculated using Euclidean and Manhattan distance formulas to find similar images from the database. The system was tested on a dataset of 1000 Corel images and Manhattan distance produced more relevant search results.
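The two distance metrics compared in this evaluation can be stated directly; the toy feature vectors below are illustrative, not the paper's Corel data:

```python
import numpy as np

def euclidean(a, b):
    """L2 distance: sqrt of the sum of squared coordinate differences."""
    return float(np.sqrt(np.sum((a - b) ** 2)))

def manhattan(a, b):
    """L1 (city-block) distance: sum of absolute coordinate differences."""
    return float(np.sum(np.abs(a - b)))

def rank_database(query, db, metric):
    """Return database indices sorted from most to least similar."""
    dists = [metric(query, v) for v in db]
    return list(np.argsort(dists))

q = np.array([1.0, 2.0])
db = np.array([[1.0, 2.5], [4.0, 6.0], [1.2, 2.1]])
print(rank_database(q, db, euclidean))  # -> [2, 0, 1]
print(rank_database(q, db, manhattan))  # -> [2, 0, 1]
```

On high-dimensional histogram features the two metrics can rank results differently, which is exactly the effect the paper measures by precision on the Corel dataset.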
International Journal of Engineering Research and Applications (IJERA) is a team of researchers, not a publication service or private publication running journals for monetary benefit; we are an association of scientists and academics who focus only on supporting authors who want to publish their work. The articles published in our journal can be accessed online, and all articles are archived for real-time access.
Our journal system primarily aims to bring out the research talent and work of scientists, academics, engineers, practitioners, scholars, and postgraduate students of engineering and science. The journal aims to cover scientific research in a broad sense rather than a niche area, enabling researchers from various verticals to publish their papers. It also aims to provide a platform for researchers to publish in a shorter time, enabling them to continue their work. All published articles are freely available to scientific researchers in government agencies, to educators, and to the general public. We are making serious efforts to promote our journal across the globe in various ways, and we are sure that it will act as a scientific platform for all researchers to publish their work online.
Content based image retrieval based on shape with texture features - Alexander Decker
This document describes a content-based image retrieval system that extracts shape and texture features from images. It uses the HSV color space and wavelet transform for feature extraction. Color features are extracted by quantizing the H, S, and V components of HSV into unequal intervals based on human color perception. Texture features are extracted using wavelet transforms. The color and texture features are then combined to form a feature vector for each image. During retrieval, the similarity between a query image and images in the database is measured using the Euclidean distance between their feature vectors. The results show that retrieving images using HSV color features provides more accurate results and faster retrieval times compared to using RGB color features.
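The unequal-interval HSV quantization described above can be sketched as follows; the bin boundaries here are hypothetical perception-inspired values for illustration, since the paper's exact intervals are not given in this summary:

```python
# Hypothetical unequal bin edges (degrees for H, fractions for S and V);
# these are illustrative assumptions, not the paper's boundaries.
H_EDGES = [20, 40, 75, 155, 190, 270, 295, 360]
SV_EDGES = [0.2, 0.7, 1.0]

def quantize_hsv(h, s, v):
    """Map one (h, s, v) pixel to a single bin index: 8 hue x 3 sat x 3 val."""
    hq = next(i for i, e in enumerate(H_EDGES) if h < e or e == 360)
    sq = next(i for i, e in enumerate(SV_EDGES) if s <= e)
    vq = next(i for i, e in enumerate(SV_EDGES) if v <= e)
    return hq * 9 + sq * 3 + vq  # 72 bins total

print(quantize_hsv(30, 0.5, 0.9))  # -> 14
```

Unequal hue intervals give more bins to perceptually distinct hue ranges, so a 72-bin histogram built from these indices is more discriminative than uniform RGB binning, which matches the abstract's HSV-vs-RGB conclusion.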
- Content-Based Image Retrieval (CBIR) is a technique used to retrieve images from large databases based on their visual content. It involves extracting features from an input query image and finding similar images from the database based on extracted features.
- The paper proposes a CBIR technique based on color feature extraction, where the queried image is divided into parts and color features are extracted to form a feature vector, which is then compared to feature vectors of images in the database to find similar images.
- The technique currently only uses color as the feature for similarity comparison, which limits its effectiveness, so future work involves combining multiple features like texture and shape for more accurate image retrieval.
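The divide-and-extract step in the proposed technique can be sketched as follows; the grid size and the use of per-block mean color are illustrative assumptions, not the paper's exact feature:

```python
import numpy as np

def block_color_features(image, grid=2):
    """Split the image into grid x grid blocks and concatenate the mean
    RGB color of each block into one feature vector (grid*grid*3 values)."""
    h, w = image.shape[:2]
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = image[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            feats.extend(block.reshape(-1, 3).mean(axis=0))
    return np.array(feats)

img = np.zeros((4, 4, 3))
img[:2, :2] = [255, 0, 0]  # top-left block is pure red
v = block_color_features(img)
print(v[:3])  # mean color of the first (top-left) block
```

Keeping per-block features preserves coarse spatial layout, which a single global histogram discards; two images with the same colors in different regions then produce different vectors.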
SEMANTIC IMAGE RETRIEVAL USING MULTIPLE FEATURES - cscpconf
In Content-Based Image Retrieval (CBIR), problems such as recognizing similar images, the need for large databases, the semantic gap, and retrieving the desired images from huge collections are the keys to improve. A CBIR system analyzes image content for indexing, management, extraction, and retrieval via low-level features such as color, texture, and shape. To achieve higher semantic performance, recent systems seek to combine the low-level features of images with high-level features that contain perceptual information meaningful to human beings. Performance improvements in indexing and retrieval play an important role in providing advanced CBIR services. To overcome these problems, a new query-by-image technique using a combination of multiple features is proposed. The proposed technique efficiently sifts through the dataset of images to retrieve semantically similar images.
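One common way to combine multiple features into a single descriptor, sketched here under the assumption of weighted L2-normalized concatenation (the paper's exact fusion rule is not given in this summary):

```python
import numpy as np

def combine_features(feature_sets, weights):
    """Normalize each feature vector to unit L2 norm, scale it by its weight,
    and concatenate, so no single feature dominates the combined distance."""
    parts = []
    for f, w in zip(feature_sets, weights):
        f = np.asarray(f, dtype=float)
        norm = np.linalg.norm(f)
        parts.append(w * (f / norm if norm > 0 else f))
    return np.concatenate(parts)

# Hypothetical color and texture vectors of different scales and lengths.
color = [0.2, 0.8, 0.0]
texture = [5.0, 12.0]
desc = combine_features([color, texture], weights=[0.6, 0.4])
print(len(desc))  # -> 5
```

Without per-feature normalization, the feature with the largest numeric range (here the raw texture values) would swamp the others in any distance computation; the weights then let the system tune the relative importance of color versus texture.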
The content-based image retrieval (CBIR) technique is one of the most popular and evolving research areas of digital image processing. The goal of CBIR is to extract visual content such as colour, texture, or shape of an image automatically. This paper proposes an image retrieval method that uses colour and texture for feature extraction. The system uses the query-by-example model and allows the user to choose the feature on which retrieval will take place. For retrieval based on the colour feature, the RGB and HSV models are taken into consideration, whereas for texture the GLCM is used to extract textural features, which then go through a Vector Quantization phase to speed up the retrieval process.
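The GLCM step mentioned above can be sketched as follows; this is a minimal co-occurrence matrix for one pixel offset plus two classic Haralick statistics, with a tiny synthetic gray image standing in for real data:

```python
import numpy as np

def glcm(gray, levels=4, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one offset (dx, dy),
    normalized so its entries sum to 1. `gray` holds values < levels."""
    m = np.zeros((levels, levels))
    h, w = gray.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[gray[y, x], gray[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Two classic Haralick features computed from a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = float(np.sum(p * (i - j) ** 2))  # local intensity variation
    energy = float(np.sum(p ** 2))              # textural uniformity
    return contrast, energy

gray = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [2, 2, 3, 3],
                 [2, 2, 3, 3]])
p = glcm(gray)
print(texture_features(p))
```

In practice several offsets (0, 45, 90, 135 degrees) are computed and their statistics concatenated into the texture part of the feature vector before the quantization stage.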
IRJET- Content Based Image Retrieval (CBIR) - IRJET Journal
This document describes a content-based image retrieval system that uses color features to retrieve similar images from a large database. It discusses using color descriptor features to extract feature vectors from images that can then be used to retrieve near matches based on similarity. Color features provide approximate matches more quickly than individual approaches. The system works by extracting visual features from both a query image and images in the database, then comparing the features to retrieve the most similar matches from the database. Color histograms and color moments are discussed as common color features used for this type of content-based image retrieval.
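The color moments mentioned above have a standard form: mean, standard deviation, and skewness per channel. A minimal sketch (assuming RGB NumPy arrays; the signed-cube-root convention for skewness is one common choice):

```python
import numpy as np

def color_moments(image):
    """First three color moments (mean, std, skewness) per channel: 9 values."""
    feats = []
    for c in range(3):
        ch = image[..., c].astype(float).ravel()
        mean = ch.mean()
        std = ch.std()
        skew = np.cbrt(((ch - mean) ** 3).mean())  # signed cube root of 3rd moment
        feats += [mean, std, skew]
    return np.array(feats)

img = np.zeros((2, 2, 3))
img[..., 0] = 100  # constant red channel
print(color_moments(img)[:3])  # mean=100, std=0, skew=0 for channel 0
```

With only nine numbers per image, color moments are far cheaper to store and compare than full histograms, which is why they are a popular quick first-pass color feature.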
This document summarizes an approach to content-based image retrieval using histograms. It discusses representing images as Histogram Attributed Relational Graphs (HARGs), where each node is an image region and edges represent relations between regions. A query is converted to an HARG, which is compared to the database HARGs using a graph-matching algorithm. The system was tested on a database of natural images, and performance was quantified using standard measures. It achieved good retrieval results but leaves room for improving retrieval time and reducing the semantic gap between low-level features and human perception.
Wavelet-Based Color Histogram on Content-Based Image Retrieval - TELKOMNIKA JOURNAL
The growth of image databases in many domains, including fashion, biometrics, graphic design, architecture, etc., has been rapid. Content-Based Image Retrieval (CBIR) is a technique for finding relevant images in such huge, unannotated image databases based on low-level features of the query images. In this study, an attempt to employ a 2nd-level Wavelet-Based Color Histogram (WBCH) in a CBIR system is proposed. The image database used in this study is taken from Wang's image database, containing 1000 color images. The experimental results show that the 2nd-level WBCH gives better precision (0.777) than the other methods, including the 1st-level WBCH, Color Histogram, Color Co-occurrence Matrix, and wavelet texture features. It can be concluded that the 2nd-level WBCH can be applied to CBIR systems.
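The wavelet-based color histogram idea can be sketched as follows; this is an illustrative 1st-level Haar version (applying haar_approx twice would give the 2nd level used in the study), not the paper's exact WBCH pipeline:

```python
import numpy as np

def haar_approx(channel):
    """One level of the 2-D Haar wavelet transform: the low-low band is
    the average of each 2x2 block (the 'approximation' sub-image)."""
    h, w = channel.shape
    c = channel[:h - h % 2, :w - w % 2].astype(float)
    return (c[0::2, 0::2] + c[0::2, 1::2] + c[1::2, 0::2] + c[1::2, 1::2]) / 4

def wbch(image, bins=8):
    """Wavelet-based color histogram sketch: histogram the Haar approximation
    of each channel, then concatenate the per-channel histograms."""
    feats = []
    for ch in range(3):
        a = haar_approx(image[..., ch])
        hist, _ = np.histogram(a, bins=bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

img = np.full((4, 4, 3), 128, dtype=np.uint8)
print(wbch(img).shape)  # -> (24,)
```

Building the histogram on the wavelet approximation rather than raw pixels suppresses fine noise and shrinks the data by 4x per level, which is the intuition behind WBCH's better precision at higher levels.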
Content Based Image Retrieval Using Dominant Color and Texture Features - IJMTST Journal
The purpose of this paper is to describe our research on different feature extraction and matching techniques in designing a Content Based Image Retrieval (CBIR) system. The need for CBIR development arose from the enormous increase in image database sizes, as well as their vast deployment in various applications. CBIR is the retrieval of images based on features such as color and texture. Image retrieval using the color feature alone cannot provide a good solution for accuracy and efficiency; color and texture are the most important features. This paper presents techniques for retrieving images based on dominant color, on texture, and on the combination of the two, and verifies the superiority of image retrieval using multiple features over a single feature.
Color and texture based image retrieval - eSAT Journals
Abstract: Content-based image retrieval (CBIR) is a vital research area for manipulating bulky image databases and records. Unlike the conventional approach, where images are searched on the basis of keywords, a CBIR system uses visual content to retrieve images. In content-based image retrieval systems, texture and color features have been the primary descriptors. We use HSV color information and the mean of the image as texture information. The performance of the proposed scheme is calculated on the basis of precision, recall, and accuracy. As a result, the blend of color and texture features of the image provides a strong feature set for image retrieval. Keywords: image retrieval, HSV color space, color histogram, image texture.
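The precision and recall measures used for evaluation here have a simple computation; the retrieved/relevant index sets below are illustrative numbers, not the paper's results:

```python
def precision_recall(retrieved, relevant):
    """Standard CBIR evaluation: precision = relevant retrieved / retrieved,
    recall = relevant retrieved / all relevant images in the database."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    return precision, recall

# 10 images retrieved, 8 of them relevant, 20 relevant images in total.
retrieved = list(range(10))
relevant = list(range(2, 22))
p, r = precision_recall(retrieved, relevant)
print(p, r)  # -> 0.8 0.4
```

Precision rewards returning few irrelevant images while recall rewards finding all relevant ones, so CBIR papers typically report both (often at several retrieval cutoffs).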
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of Engineering and Technology.
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LAYER THICKNESS IN WIRE-... - IAEME Publication
The white layer thickness (WLT) formed and the surface roughness in wire electric discharge turning (WEDT) of a tungsten carbide composite were modeled through response surface methodology (RSM). A Taguchi standard design of experiments involving five input variables at three levels was employed to establish a mathematical model between input parameters and responses. The percentage of cobalt content, spindle speed, pulse on-time, wire feed, and pulse off-time were varied during the experimental tests based on the Taguchi orthogonal array L27 (3^13). Analysis of variance (ANOVA) revealed that the mathematical models obtained can adequately describe performance within the ranges of the factors considered. There was good agreement between the experimental and predicted values in this study.
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS - IAEME Publication
The study explores the reasons for transgender persons to become entrepreneurs. In this study the transgender entrepreneur was taken as the independent variable and the reasons to become one as the dependent variable. Data were collected through a structured questionnaire containing a five-point Likert scale. The study examined the data of 30 transgender entrepreneurs in the Salem Municipal Corporation of Tamil Nadu State, India; a simple random sampling technique was used. The Garrett Ranking Technique (percentile position, mean scores) was used as the analysis for the present study to identify the top 13 stimulus factors for the establishment of a trans entrepreneurial venture. The economic advancement of a nation is governed by the outcome of resolute entrepreneurial activity, and the conception of entrepreneurship has stretched and materialized to the socially marginalized, uncharted sections of the transgender community. Presently transgender people have smashed their stereotypes and are making headlines with achievements in various fields of Indian society. The trans community is gradually being observed in a new light and has been trying to achieve prospective growth in entrepreneurship. The findings of the research reveal that optimistic changes are taking place toward an affirmative societal outlook on transgender entrepreneurial ventureship, and they lay emphasis on encouraging other transgender people to renovate their traditional living. The paper also highlights that legislators and supervisory bodies should endorse impartial canons and reforms in the Tamil Nadu Transgender Welfare Board Association.
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS - IAEME Publication
Since ages gender difference is always a debatable theme whether caused by nature, evolution or environment. The birth of a transgender is dreadful not only for the child but also for their parents. The pain of living in the wrong physique and treated as second class victimized citizen is outrageous and fully harboured with vicious baseless negative scruples. For so long, social exclusion had perpetuated inequality and deprivation experiencing ingrained malign stigma and besieged victims of crime or violence across their life spans. They are pushed into the murky way of life with a source of eternal disgust, bereft sexual potency and perennial fear. Although they are highly visible but very little is known about them. The common public needs to comprehend the ravaged arrogance on these insensitive souls and assist in integrating them into the mainstream by offering equal opportunity, treat with humanity and respect their dignity. Entrepreneurship in the current age is endorsing the gender fairness movement. Unstable careers and economic inadequacy had inclined one of the gender variant people called Transgender to become entrepreneurs. These tiny budding entrepreneurs resulted in economic transition by means of employment, free from the clutches of stereotype jobs, raised standard of living and handful of financial empowerment. Besides all these inhibitions, they were able to witness a platform for skill set development that ignited them to enter into entrepreneurial domain. This paper epitomizes skill sets involved in trans-entrepreneurs of Thoothukudi Municipal Corporation of Tamil Nadu State and is a groundbreaking determination to sightsee various skills incorporated and the impact on entrepreneurship.
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS - IAEME Publication
The banking and financial services industries are experiencing increased technology penetration. Among them, the banking industry has made technological advancements to better serve the general populace. The economy focused on transforming the banking sector's system into a cashless, paperless, and faceless one. The researcher wants to evaluate the user's intention for utilising a mobile banking application. The study also examines the variables affecting the user's behaviour intention when selecting specific applications for financial transactions. The researcher employed a well-structured questionnaire and a descriptive study methodology to gather the respondents' primary data utilising the snowball sampling technique. The study includes variables like performance expectations, effort expectations, social impact, enabling circumstances, and perceived risk. Each of the aforementioned variables has a major impact on how users utilise mobile banking applications. The outcome will assist the service provider in comprehending the user's history with mobile banking applications.
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS - IAEME Publication
Technology upgradation in the banking sector has moved the economy toward online transactions using mobile applications. This system enables connectivity between banks, merchants, and users in a convenient mode. Various applications are used for online transactions, such as Google Pay, Paytm, Freecharge, Mobikwik, Oxygen, PhonePe, and so on, including mobile banking applications. The study aimed at evaluating users' predilection in adopting digital transactions. The study is descriptive in nature, and the researcher used random sampling techniques to collect the data. The findings reveal that the mobile applications differ in the quality of service rendered by GPay and PhonePe. The researcher suggests that the PhonePe application should focus on a more user-friendly interface, and GPay on motivating users to appreciate the request-for-money feature and the modes of payment in the application.
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO - IAEME Publication
This prototype of a voice-based ATM for the visually impaired using Arduino is meant to help people who are blind. It uses RFID cards that carry the user's fingerprint encrypted on them and interacts with users through voice commands. The ATM operates when a sensor detects the presence of one person in the cabin. After scanning the RFID card, it asks the user to select a mode: normal or blind. The user can select the respective mode through voice input; if blind mode is selected, balance checks and cash withdrawals can be done through voice input. The normal-mode procedure is the same as in an existing ATM.
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG... - IAEME Publication
There is increasing acceptability of emotional intelligence as a major factor in personality assessment and effective human resource management. Emotional intelligence as the ability to build capacity, empathize, co-operate, motivate and develop others cannot be divorced from both effective performance and human resource management systems. The human person is crucial in defining organizational leadership and fortunes in terms of challenges and opportunities and walking across both multinational and bilateral relationships. The growing complexity of the business world requires a great deal of self-confidence, integrity, communication, conflict and diversity management to keep the global enterprise within the paths of productivity and sustainability. Using the exploratory research design and 255 participants the result of this original study indicates strong positive correlation between emotional intelligence and effective human resource management. The paper offers suggestions on further studies between emotional intelligence and human capital development and recommends for conflict management as an integral part of effective human resource management.
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY - IAEME Publication
Our life journey, in general, is closely defined by the way we understand the meaning of why we coexist and deal with its challenges. As we develop the "inspiration economy", we could say that nearly all of the challenges we have faced are opportunities that help us to discover the rest of our journey. In this note paper, we explore how being faced with the opportunity of being a close carer for an aging parent with dementia brought intangible discoveries that changed our insight of the meaning of the rest of our life journey.
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO... - IAEME Publication
The main objective of this study is to analyze the impact of aspects of Organizational Culture on the Effectiveness of the Performance Management System (PMS) in the Health Care Organization at Thanjavur. Organizational Culture and PMS play a crucial role in present-day organizations in achieving their objectives. PMS needs employees’ cooperation to achieve its intended objectives. Employees' cooperation depends upon the organization’s culture. The present study uses exploratory research to examine the relationship between the Organization's culture and the Effectiveness of the Performance Management System. The study uses a Structured Questionnaire to collect the primary data. For this study, Thirty-six non-clinical employees were selected from twelve randomly selected Health Care organizations at Thanjavur. Thirty-two fully completed questionnaires were received.
Living in the 21st century in itself reminds us of the necessity of the police and its administration. The further we enter modern society and culture, the more we require the services of the so-called 'khaki-worthy' men, i.e., police personnel. Whether we speak of the Indian police or another nation's police, they have the same recognition as they have in India. As already mentioned, their services and requirements changed after incidents like that of 26th November 2008, where they sacrificed themselves without hesitation and without regard for their own families. In other words, they are like our heroes and mentors who can guide us out of the darkness of fear, militancy, corruption, and the other dark sides of life. Now the question arises: if Gandhi were alive today, what would his reaction to the police and its functioning be? Would he have something different in mind now from what he had before Partition, or would he start some Satyagraha aimed at improving the functioning of the police administration? These questions can come to anyone's mind when so much confusion prevails, when there is so much corruption in society, and when the working of the police is also under question because of one case or another throughout India. It is a matter of great concern that we have to think over our administration and our practical approach, because police personnel are also like us: they are part and parcel of our society and one among us, so why are we all pointing fingers at them?
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED... - IAEME Publication
The goal of this study was to see how talent management affected employee retention in the selected IT organizations in Chennai. The fundamental issue was the difficulty to attract, hire, and retain talented personnel who perform well and the gap between supply and demand of talent acquisition and retaining them within the firms. The study's main goals were to determine the impact of talent management on employee retention in IT companies in Chennai, investigate talent management strategies that IT companies could use to improve talent acquisition, performance management, career planning and formulate retention strategies that the IT firms could use. The respondents were given a structured close-ended questionnaire with the 5 Point Likert Scale as part of the study's quantitative research design. The target population consisted of 289 IT professionals. The questionnaires were distributed and collected by the researcher directly. The Statistical Package for Social Sciences (SPSS) was used to collect and analyse the questionnaire responses. Hypotheses that were formulated for the various areas of the study were tested using a variety of statistical tests. The key findings of the study suggested that talent management had an impact on employee retention. The studies also found that there is a clear link between the implementation of talent management and retention measures. Management should provide enough training and development for employees, clarify job responsibilities, provide adequate remuneration packages, and recognise employees for exceptional performance.
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE... - IAEME Publication
Globally, Millions of dollars were spent by the organizations for employing skilled Information Technology (IT) professionals. It is costly to replace unskilled employees with IT professionals possessing technical skills and competencies that aid in interconnecting the business processes. The organization’s employment tactics were forced to alter by globalization along with technological innovations as they consistently diminish to remain lean, outsource to concentrate on core competencies along with restructuring/reallocate personnel to gather efficiency. As other jobs, organizations or professions have become reasonably more appropriate in a shifting employment landscape, the above alterations trigger both involuntary as well as voluntary turnover. The employee view on jobs is also afflicted by the COVID-19 pandemic along with the employee-driven labour market. So, having effective strategies is necessary to tackle the withdrawal rate of employees. By associating Emotional Intelligence (EI) along with Talent Management (TM) in the IT industry, the rise in attrition rate was analyzed in this study. Only 303 respondents were collected out of 350 participants to whom questionnaires were distributed. From the employees of IT organizations located in Bangalore (India), the data were congregated. A simple random sampling methodology was employed to congregate data as of the respondents. Generating the hypothesis along with testing is eventuated. The effect of EI and TM along with regression analysis between TM and EI was analyzed. The outcomes indicated that employee and Organizational Performance (OP) were elevated by effective EI along with TM.
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD... - IAEME Publication
By implementing a talent management strategy, organizations can retain their skilled professionals while also improving their overall performance. Talent management is the process of appropriately utilizing the right individuals, preparing them for future top positions, reviewing and managing their performance, and keeping them from leaving the organization. It is employee performance that determines the success of every organization: a firm quickly obtains an upper hand over its rivals if its employees have particular skills that cannot be duplicated by competitors. Thus, firms are centred on creating successful talent management practices and processes to manage their unique human resources. Firms also endeavour to keep their top and key staff, since if they leave, the whole store of knowledge leaves the firm's hands. The study's objective was to determine the impact of talent management on organizational performance among the selected IT organizations in Chennai. The study finds that talent management has a limited effect on performance; if this talent is properly managed, organizations can make the most of their retained assets to support development and productivity, both monetarily and non-monetarily.
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS... - IAEME Publication
The Banking Regulation Act of India, 1949 defines banking as "acceptance of deposits for the purpose of lending or investment from the public, repayable on demand or otherwise and withdrawable through cheques, drafts, orders or otherwise". The major participants of the Indian financial system are commercial banks and the financial institutions encompassing term-lending institutions, investment institutions, specialized financial institutions and state-level development banks; non-banking financial companies (NBFCs) and other market intermediaries such as stock brokers and money lenders are among the oldest market participants. The asset quality of banks is one of the most important indicators of their financial health. The Indian banking sector has been facing severe problems of increasing Non-Performing Assets (NPAs). NPA growth directly and indirectly affects the quality of assets and the profitability of banks, and it also reflects the effectiveness of banks' credit risk management and recovery. NPAs do not generate any income while the bank is required to make provisions for such assets, which is why they are a double-edged weapon. This paper examines the quality of different types of bank loans, such as housing, agriculture and MSME loans, of selected public- and private-sector banks in the state of Haryana. The study highlights problems associated with the role of commercial banks in financing Small and Medium Enterprises (SMEs). The overall objective of the research was to assess the effect of the financing provisions existing for the setting up and operation of MSMEs in the country, to generate recommendations for more robust financing mechanisms for the successful operation of MSMEs, and, in turn, to understand the impact of MSME loans on financial institutions due to NPAs.
Much research has been conducted on Non-Performing Asset (NPA) management concerning particular banks, comparative studies of public and private banks, and so on. In this paper the researcher considers the aggregate data of selected public- and private-sector banks and compares the NPAs of housing, agriculture and MSME loans of public- and private-sector banks in the state of Haryana. The tools used in the study are averages, the ANOVA test, and variance. The findings reveal that NPAs are a common problem for both public- and private-sector banks and are associated with all types of loans, whether housing loans, agriculture loans, or loans to SMEs. NPAs of both public- and private-sector banks show an increasing trend: in 2010-11 the GNPA of public- and private-sector banks was at the same level of 2%, but after 2010-11 it increased many-fold, and at present GNPA exceeds 15% in some cases. This shows the dark side of the Indian banking sector.
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
An experiment conducted in this study found that BaSO4 changed Nylon 6's mechanical properties. By changing the weight ratios, BaSO4 was used to make Nylon 6. This Researcher looked into how hard Nylon-6/BaSO4 composites are and how well they wear. Experiments were done based on Taguchi design L9. Nylon-6/BaSO4 composites can be tested for their hardness number using a Rockwell hardness testing apparatus. On Nylon/BaSO4, the wear behavior was measured by a wear monitor, pinon-disc friction by varying reinforcement, sliding speed, and sliding distance, and the microstructure of the crack surfaces was observed by SEM. This study provides significant contributions to ultimate strength by increasing BaSO4 content up to 16% in the composites, and sliding speed contributes 72.45% to the wear rate
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
The majority of the population in India lives in villages. The village is the back bone of the country. Village or rural industries play an important role in the national economy, particularly in the rural development. Developing the rural economy is one of the key indicators towards a country’s success. Whether it be the need to look after the welfare of the farmers or invest in rural infrastructure, Governments have to ensure that rural development isn’t compromised. The economic development of our country largely depends on the progress of rural areas and the standard of living of rural masses. Village or rural industries play an important role in the national economy, particularly in the rural development. Rural entrepreneurship is based on stimulating local entrepreneurial talent and the subsequent growth of indigenous enterprises. It recognizes opportunity in the rural areas and accelerates a unique blend of resources either inside or outside of agriculture. Rural entrepreneurship brings an economic value to the rural sector by creating new methods of production, new markets, new products and generate employment opportunities thereby ensuring continuous rural development. Social Entrepreneurship has the direct and primary objective of serving the society along with the earning profits. So, social entrepreneurship is different from the economic entrepreneurship as its basic objective is not to earn profits but for providing innovative solutions to meet the society needs which are not taken care by majority of the entrepreneurs as they are in the business for profit making as a sole objective. So, the Social Entrepreneurs have the huge growth potential particularly in the developing countries like India where we have huge societal disparities in terms of the financial positions of the population. 
Still 22 percent of the Indian population is below the poverty line and also there is disparity among the rural & urban population in terms of families living under BPL. 25.7 percent of the rural population & 13.7 percent of the urban population is under BPL which clearly shows the disparity of the poor people in the rural and urban areas. The need to develop social entrepreneurship in agriculture is dictated by a large number of social problems. Such problems include low living standards, unemployment, and social tension. The reasons that led to the emergence of the practice of social entrepreneurship are the above factors. The research problem lays upon disclosing the importance of role of social entrepreneurship in rural development of India. The paper the tendencies of social entrepreneurship in India, to present successful examples of such business for providing recommendations how to improve situation in rural areas in terms of social entrepreneurship development. Indian government has made some steps towards development of social enterprises, social entrepreneurship, and social in- novation, but a lot remains to be improved.
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...IAEME Publication
Distribution system is a critical link between the electric power distributor and the consumers. Most of the distribution networks commonly used by the electric utility is the radial distribution network. However in this type of network, it has technical issues such as enormous power losses which affect the quality of the supply. Nowadays, the introduction of Distributed Generation (DG) units in the system help improve and support the voltage profile of the network as well as the performance of the system components through power loss mitigation. In this study network reconfiguration was done using two meta-heuristic algorithms Particle Swarm Optimization and Gravitational Search Algorithm (PSO-GSA) to enhance power quality and voltage profile in the system when simultaneously applied with the DG units. Backward/Forward Sweep Method was used in the load flow analysis and simulated using the MATLAB program. Five cases were considered in the Reconfiguration based on the contribution of DG units. The proposed method was tested using IEEE 33 bus system. Based on the results, there was a voltage profile improvement in the system from 0.9038 p.u. to 0.9594 p.u.. The integration of DG in the network also reduced power losses from 210.98 kW to 69.3963 kW. Simulated results are drawn to show the performance of each case.
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...IAEME Publication
Manufacturing industries have witnessed an outburst in productivity. For productivity improvement manufacturing industries are taking various initiatives by using lean tools and techniques. However, in different manufacturing industries, frugal approach is applied in product design and services as a tool for improvement. Frugal approach contributed to prove less is more and seems indirectly contributing to improve productivity. Hence, there is need to understand status of frugal approach application in manufacturing industries. All manufacturing industries are trying hard and putting continuous efforts for competitive existence. For productivity improvements, manufacturing industries are coming up with different effective and efficient solutions in manufacturing processes and operations. To overcome current challenges, manufacturing industries have started using frugal approach in product design and services. For this study, methodology adopted with both primary and secondary sources of data. For primary source interview and observation technique is used and for secondary source review has done based on available literatures in website, printed magazines, manual etc. An attempt has made for understanding application of frugal approach with the study of manufacturing industry project. Manufacturing industry selected for this project study is Mahindra and Mahindra Ltd. This paper will help researcher to find the connections between the two concepts productivity improvement and frugal approach. This paper will help to understand significance of frugal approach for productivity improvement in manufacturing industry. This will also help to understand current scenario of frugal approach in manufacturing industry. In manufacturing industries various process are involved to deliver the final product. In the process of converting input in to output through manufacturing process productivity plays very critical role. 
Hence this study will help to evolve status of frugal approach in productivity improvement programme. The notion of frugal can be viewed as an approach towards productivity improvement in manufacturing industries.
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTIAEME Publication
In this paper, we investigated a queuing model of fuzzy environment-based a multiple channel queuing model (M/M/C) ( /FCFS) and study its performance under realistic conditions. It applies a nonagonal fuzzy number to analyse the relevant performance of a multiple channel queuing model (M/M/C) ( /FCFS). Based on the sub interval average ranking method for nonagonal fuzzy number, we convert fuzzy number to crisp one. Numerical results reveal that the efficiency of this method. Intuitively, the fuzzy environment adapts well to a multiple channel queuing models (M/M/C) ( /FCFS) are very well.
UiPath Community Berlin: Studio Tips & Tricks and UiPath InsightsUiPathCommunity
Join the UiPath Community Berlin (Virtual) meetup on May 27 to discover handy Studio Tips & Tricks and get introduced to UiPath Insights. Learn how to boost your development workflow, improve efficiency, and gain visibility into your automation performance.
📕 Agenda:
- Welcome & Introductions
- UiPath Studio Tips & Tricks for Efficient Development
- Best Practices for Workflow Design
- Introduction to UiPath Insights
- Creating Dashboards & Tracking KPIs (Demo)
- Q&A and Open Discussion
Perfect for developers, analysts, and automation enthusiasts!
This session streamed live on May 27, 18:00 CET.
Check out all our upcoming UiPath Community sessions at:
👉 https://community.uipath.com/events/
Join our UiPath Community Berlin chapter:
👉 https://community.uipath.com/berlin/
Dev Dives: System-to-system integration with UiPath API WorkflowsUiPathCommunity
Join the next Dev Dives webinar on May 29 for a first contact with UiPath API Workflows, a powerful tool purpose-fit for API integration and data manipulation!
This session will guide you through the technical aspects of automating communication between applications, systems and data sources using API workflows.
📕 We'll delve into:
- How this feature delivers API integration as a first-party concept of the UiPath Platform.
- How to design, implement, and debug API workflows to integrate with your existing systems seamlessly and securely.
- How to optimize your API integrations with runtime built for speed and scalability.
This session is ideal for developers looking to solve API integration use cases with the power of the UiPath Platform.
👨🏫 Speakers:
Gunter De Souter, Sr. Director, Product Manager @UiPath
Ramsay Grove, Product Manager @UiPath
This session streamed live on May 29, 2025, 16:00 CET.
Check out all our upcoming UiPath Dev Dives sessions:
👉 https://community.uipath.com/dev-dives-automation-developer-2025/
Offshore IT Support: Balancing In-House and Offshore Help Desk Techniciansjohn823664
In today's always-on digital environment, businesses must deliver seamless IT support across time zones, devices, and departments. This SlideShare explores how companies can strategically combine in-house expertise with offshore talent to build a high-performing, cost-efficient help desk operation.
From the benefits and challenges of offshore support to practical models for integrating global teams, this presentation offers insights, real-world examples, and key metrics for success. Whether you're scaling a startup or optimizing enterprise support, discover how to balance cost, quality, and responsiveness with a hybrid IT support strategy.
Perfect for IT managers, operations leads, and business owners considering global help desk solutions.
Neural representations have shown the potential to accelerate ray casting in a conventional ray-tracing-based rendering pipeline. We introduce a novel approach called Locally-Subdivided Neural Intersection Function (LSNIF) that replaces bottom-level BVHs used as traditional geometric representations with a neural network. Our method introduces a sparse hash grid encoding scheme incorporating geometry voxelization, a scene-agnostic training data collection, and a tailored loss function. It enables the network to output not only visibility but also hit-point information and material indices. LSNIF can be trained offline for a single object, allowing us to use LSNIF as a replacement for its corresponding BVH. With these designs, the network can handle hit-point queries from any arbitrary viewpoint, supporting all types of rays in the rendering pipeline. We demonstrate that LSNIF can render a variety of scenes, including real-world scenes designed for other path tracers, while achieving a memory footprint reduction of up to 106.2x compared to a compressed BVH.
https://arxiv.org/abs/2504.21627
Co-Constructing Explanations for AI Systems using ProvenancePaul Groth
Explanation is not a one off - it's a process where people and systems work together to gain understanding. This idea of co-constructing explanations or explanation by exploration is powerful way to frame the problem of explanation. In this talk, I discuss our first experiments with this approach for explaining complex AI systems by using provenance. Importantly, I discuss the difficulty of evaluation and discuss some of our first approaches to evaluating these systems at scale. Finally, I touch on the importance of explanation to the comprehensive evaluation of AI systems.
GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat...James Anderson
The Quantum Apocalypse: A Looming Threat & The Need for Post-Quantum Encryption
We explore the imminent risks posed by quantum computing to modern encryption standards and the urgent need for post-quantum cryptography (PQC).
Bio: With 30 years in cybersecurity, including as a CISO, Tommy is a strategic leader driving security transformation, risk management, and program maturity. He has led high-performing teams, shaped industry policies, and advised organizations on complex cyber, compliance, and data protection challenges.
Evaluation Challenges in Using Generative AI for Science & Technical ContentPaul Groth
Evaluation Challenges in Using Generative AI for Science & Technical Content.
Foundation Models show impressive results in a wide-range of tasks on scientific and legal content from information extraction to question answering and even literature synthesis. However, standard evaluation approaches (e.g. comparing to ground truth) often don't seem to work. Qualitatively the results look great but quantitive scores do not align with these observations. In this talk, I discuss the challenges we've face in our lab in evaluation. I then outline potential routes forward.
Introduction and Background:
Study Overview and Methodology: The study analyzes the IT market in Israel, covering over 160 markets and 760 companies/products/services. It includes vendor rankings, IT budgets, and trends from 2025-2029. Vendors participate in detailed briefings and surveys.
Vendor Listings: The presentation lists numerous vendors across various pages, detailing their names and services. These vendors are ranked based on their participation and market presence.
Market Insights and Trends: Key insights include IT market forecasts, economic factors affecting IT budgets, and the impact of AI on enterprise IT. The study highlights the importance of AI integration and the concept of creative destruction.
Agentic AI and Future Predictions: Agentic AI is expected to transform human-agent collaboration, with AI systems understanding context and orchestrating complex processes. Future predictions include AI's role in shopping and enterprise IT.
ELNL2025 - Unlocking the Power of Sensitivity Labels - A Comprehensive Guide....Jasper Oosterveld
Sensitivity labels, powered by Microsoft Purview Information Protection, serve as the foundation for classifying and protecting your sensitive data within Microsoft 365. Their importance extends beyond classification and play a crucial role in enforcing governance policies across your Microsoft 365 environment. Join me, a Data Security Consultant and Microsoft MVP, as I share practical tips and tricks to get the full potential of sensitivity labels. I discuss sensitive information types, automatic labeling, and seamless integration with Data Loss Prevention, Teams Premium, and Microsoft 365 Copilot.
Supercharge Your AI Development with Local LLMsFrancesco Corti
In today's AI development landscape, developers face significant challenges when building applications that leverage powerful large language models (LLMs) through SaaS platforms like ChatGPT, Gemini, and others. While these services offer impressive capabilities, they come with substantial costs that can quickly escalate especially during the development lifecycle. Additionally, the inherent latency of web-based APIs creates frustrating bottlenecks during the critical testing and iteration phases of development, slowing down innovation and frustrating developers.
This talk will introduce the transformative approach of integrating local LLMs directly into their development environments. By bringing these models closer to where the code lives, developers can dramatically accelerate development lifecycles while maintaining complete control over model selection and configuration. This methodology effectively reduces costs to zero by eliminating dependency on pay-per-use SaaS services, while opening new possibilities for comprehensive integration testing, rapid prototyping, and specialized use cases.
Nix(OS) for Python Developers - PyCon 25 (Bologna, Italia)Peter Bittner
How do you onboard new colleagues in 2025? How long does it take? Would you love a standardized setup under version control that everyone can customize for themselves? A stable desktop setup, reinstalled in just minutes. It can be done.
This talk was given in Italian, 29 May 2025, at PyCon 25, Bologna, Italy. All slides are provided in English.
Original slides at https://slides.com/bittner/pycon25-nixos-for-python-developers
Introducing the OSA 3200 SP and OSA 3250 ePRCAdtran
Adtran's latest Oscilloquartz solutions make optical pumping cesium timing more accessible than ever. Discover how the new OSA 3200 SP and OSA 3250 ePRC deliver superior stability, simplified deployment and lower total cost of ownership. Built on a shared platform and engineered for scalable, future-ready networks, these models are ideal for telecom, defense, metrology and more.
Improving Developer Productivity With DORA, SPACE, and DevExJustin Reock
Ready to measure and improve developer productivity in your organization?
Join Justin Reock, Deputy CTO at DX, for an interactive session where you'll learn actionable strategies to measure and increase engineering performance.
Leave this session equipped with a comprehensive understanding of developer productivity and a roadmap to create a high-performing engineering team in your company.
Securiport is a border security systems provider with a progressive team approach to its task. The company acknowledges the importance of specialized skills in creating the latest in innovative security tech. The company has offices throughout the world to serve clients, and its employees speak more than twenty languages at the Washington D.C. headquarters alone.
Agentic AI - The New Era of IntelligenceMuzammil Shah
This presentation is specifically designed to introduce final-year university students to the foundational principles of Agentic Artificial Intelligence (AI). It aims to provide a clear understanding of how Agentic AI systems function, their key components, and the underlying technologies that empower them. By exploring real-world applications and emerging trends, the session will equip students with essential knowledge to engage with this rapidly evolving area of AI, preparing them for further study or professional work in the field.
Exploring the advantages of on-premises Dell PowerEdge servers with AMD EPYC processors vs. the cloud for small to medium businesses’ AI workloads
AI initiatives can bring tremendous value to your business, but you need to support your new AI workloads effectively. That means choosing the best possible infrastructure for your needs—and many companies are finding that the cloud isn’t right for them. According to a recent Rackspace survey of IT executives, 69 percent of companies have moved some of their applications on-premises from the cloud, with half of those citing security and compliance as the reason and 44 percent citing cost.
On-premises solutions provide a number of advantages. With full control over your security infrastructure, you can be certain that all compliance requirements remain firmly in the hands of your IT team. Opting for on-premises also gives you the ability to design your infrastructure to the precise needs of that team and your new AI workloads. Depending on the workload, you may also see performance benefits, along with more predictable costs. As you start to build your next AI initiative, consider an on-premises solution utilizing AMD EPYC processor-powered Dell PowerEdge servers.
Jeremy Millul - A Talented Software DeveloperJeremy Millul
Jeremy Millul is a talented software developer based in NYC, known for leading impactful projects such as a Community Engagement Platform and a Hiking Trail Finder. Using React, MongoDB, and geolocation tools, Jeremy delivers intuitive applications that foster engagement and usability. A graduate of NYU’s Computer Science program, he brings creativity and technical expertise to every project, ensuring seamless user experiences and meaningful results in software development.
Proceedings of the International Conference on Emerging Trends in Engineering and Management (ICETEM14)
30 – 31, December 2014, Ernakulam, India
second level uses the statistical features and the third level uses the combination of color and statistical features for image
retrieval. Each level uses a distance measure to calculate the similarity between the images. This paper analyses the
performance of these three levels and presents the results.
2. RELATED WORKS
Content based image retrieval algorithms compare the actual content of the images rather than text. Once the
specified feature has been extracted from the image, there are also a number of options for carrying out the actual
comparison between images. Generally similarity between two images is based on a computation involving the
Euclidean distance or histogram intersection between the respective extracted features of two images. The three most
common characteristics upon which images are compared in content based image retrieval algorithms are color, shape
and texture [7]. Utilizing shape information for automated image comparisons requires algorithms that perform some
form of edge detection or image segmentation. The color feature is one of the most widely used visual features in image
retrieval. In image retrieval, the color histogram is the most commonly used color feature representation. Statistically, it
represents the intensities of the three color channels. Swain and Ballard proposed histogram intersection, an L1 metric, as
the similarity measure for the color histogram [8]. In 2009, Ji-quan Ma presented an approach to image retrieval based on the HSV color space
and texture characteristics of images [5]. In March 2011, Neetu Sharma, Paresh Rawat and Jaikaran Singh
compared various global descriptor attributes and found that images retrieved by using the global color histogram
may not be semantically related even though they share a similar color distribution in some results [4].
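As a concrete illustration of the histogram intersection measure credited to Swain and Ballard above, here is a minimal pure-Python sketch (not taken from the paper; the toy bin counts are invented for illustration):

```python
# Histogram intersection: an L1-style similarity between a query
# histogram h and a model histogram g, normalized by the model's mass.
# The score is 1.0 for identical histograms and falls toward 0.0 as
# the histograms share less mass.

def histogram_intersection(h, g):
    overlap = sum(min(hi, gi) for hi, gi in zip(h, g))
    total = sum(g)
    return overlap / total if total else 0.0

h = [4, 0, 2, 2]   # query histogram (toy values)
g = [2, 2, 2, 2]   # model histogram (toy values)
print(histogram_intersection(h, g))  # 0.75: overlap 2+0+2+2 = 6 out of 8
```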
3. THEORY RELATED TO WORK
3.1 CBIR
A CBIR system incorporates a query image and an image database. The purpose of a CBIR system is to retrieve
the images from the database which are similar to the query image. CBIR is performed in two steps: indexing and
searching. In indexing step contents (features) of the image are extracted and are stored in the form of a feature vector in
the feature database. In the searching step, user query image feature vector is constructed and compared with all feature
vectors in the database for similarity to retrieve the most similar images to the query image from the database.
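The two steps above can be sketched as follows. This is an illustrative skeleton, not the paper's implementation: the feature extractor is a placeholder (per-channel means of a toy pixel list) standing in for a real descriptor such as a color histogram.

```python
# Indexing: extract one feature vector per database image.
# Searching: compare the query's feature vector against every indexed
# vector and return the indices of the closest images.

def extract_features(image):
    # Placeholder extractor: per-channel means of a list of (r, g, b)
    # pixels. A real system would compute e.g. a color histogram here.
    n = len(image)
    return [sum(p[c] for p in image) / n for c in range(3)]

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def index_database(images):
    """Indexing step: one feature vector per image."""
    return [extract_features(img) for img in images]

def search(query, feature_db, top_k=5):
    """Searching step: rank database images by distance to the query."""
    q = extract_features(query)
    ranked = sorted(range(len(feature_db)),
                    key=lambda i: euclidean(q, feature_db[i]))
    return ranked[:top_k]

# Toy database of three "images" (lists of RGB pixels).
db = [[(255, 0, 0), (250, 5, 5)],   # reddish
      [(0, 255, 0), (5, 250, 5)],   # greenish
      [(0, 0, 255), (5, 5, 250)]]   # bluish
features = index_database(db)
print(search([(240, 10, 10)], features, top_k=1))  # [0]: reddish image
```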
3.2 Color Representations
Color is one of the most widely used visual features in multimedia context and image / video retrieval. Color is
a subjective human sensation of visible light depending on an intensity and a set of wavelengths associated with the
electromagnetic spectrum.
• RGB color space
Color is a subjective visual characteristic describing how perceived electromagnetic radiation F(λ) is distributed
over the range of wavelengths λ of visible light [380 nm ... 780 nm]. A color space is a multidimensional space of color
components. Human color perception combines the three primary colors: red (R), green (G) and blue (B).
The RGB color space is not perceptually uniform, and equal distances in different areas do not reflect equal
perceptual dissimilarity of colors. Because of the lack of a single perceptually uniform color space, a large number of
spaces derived from the RGB space have been used in practice for a query-by-color.
3.3 Statistical features
The texture of an image can be analyzed using a statistical approach. We can use statistical parameters to
characterize the content of an image. Statistical methods can be further classified into first-order (one pixel), second-
order (two pixels) and higher-order (three or more pixels) statistics. The basic difference is that first-order statistics
estimate properties (e.g. average and variance) of individual pixel values, ignoring the spatial interaction between image
pixels, whereas second- and higher order statistics estimate properties of two or more pixel values occurring at specific
locations relative to each other. Histogram based approach is based on the intensity value concentrations on all or part of
an image represented as a histogram. Common features include moments such as mean, variance, dispersion, mean
square value or average energy, entropy, skewness and kurtosis.
3.4 Global Histogram Based Approach
This approach is used to calculate the RGB global histograms for all the images, reduce the dimensions of the
image descriptor vectors using Principal Component Analysis and calculate the similarity measures between the images.
The clustering results are then analyzed to see if the results have any semantic meaning.
3.4.1 Features based on Histogram
• Mean
The mean of a data set is simply the arithmetic average of the values in the set, obtained by summing the values
and dividing by the number of values.
The mean is a measure of the center of the distribution. The mean is a weighted average of the class marks, with
the relative frequencies as the weight factors.
• Variance and Standard Deviation
The variance of a data set is the arithmetic average of the squared differences between the values and the mean: S² = (1/N) Σᵢ (xᵢ − x̄)².
The standard deviation S is the square root of the variance.
The variance and the standard deviation are both measures of the spread of the distribution about the mean.
• Skew
Skew is a measure of the extent to which a data distribution is distorted from a symmetrical normal distribution. The
distortion is in one direction, either toward higher values or lower values. The skew measures the asymmetry (unbalance)
about the mean in the gray-level distribution. Skew can be calculated using the formula Skew = (1/(N·S³)) Σᵢ (xᵢ − x̄)³, where x̄ is the
mean and S is the standard deviation.
• Entropy
In MATLAB, E = entropy(I)
returns E, a scalar value representing the entropy of grayscale image I. Entropy is a statistical measure of randomness
that can be used to characterize the texture of the input image. Entropy is defined as -sum(p.*log2(p)),
where p contains the normalized histogram counts.
• Energy
The energy measure tells us something about how gray levels are distributed. The energy measure has a value of
1 for an image with a constant value. This value gets smaller as the pixel values are distributed across more gray level
values. A high energy means the number of gray levels in the image is few. Therefore it is easier to compress the image
data. Energy E can be calculated as

E = (1/(M·N)) Σᵢ₌₁ᴹ Σⱼ₌₁ᴺ X(i, j)²

where M and N are the dimensions of the image, and X is the intensity of the pixel located at row i and column j in the
image map.
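The histogram-based features above can be computed together from a single gray-level histogram. The sketch below is illustrative, not the paper's code; conventions vary between texts, so here entropy uses log base 2 as defined above, and "energy" is the histogram energy sum(p²), which is 1.0 for a constant image as the text notes:

```python
# First-order statistical features from a gray-level histogram.
# counts[g] = number of pixels with gray level g.
import math

def histogram_features(counts):
    n = sum(counts)
    p = [c / n for c in counts]                        # normalized histogram
    mean = sum(g * pg for g, pg in enumerate(p))
    var = sum((g - mean) ** 2 * pg for g, pg in enumerate(p))
    std = math.sqrt(var)
    skew = (sum((g - mean) ** 3 * pg for g, pg in enumerate(p)) / std ** 3
            if std else 0.0)                           # 0 for constant images
    entropy = -sum(pg * math.log2(pg) for pg in p if pg > 0)
    energy = sum(pg ** 2 for pg in p)                  # 1.0 when constant
    return {"mean": mean, "variance": var, "std": std,
            "skew": skew, "entropy": entropy, "energy": energy}

# Constant image: all 16 pixels at gray level 3 (4-level toy histogram).
print(histogram_features([0, 0, 0, 16]))
# energy == 1.0 and entropy == 0.0, as expected for a constant image
```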
3.5 Co-occurrence Matrix
Co-occurrence Matrix represents the distance and angular spatial relationship
over an image sub-region of specific size [14]. The GLCM is created from a gray-scale image. The GLCM calculates
how often a pixel with gray value i occurs either horizontally, vertically, or diagonally adjacent to pixels with the value
j. The GLCM can be used to derive different statistics which provide information about the texture of the image.
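A minimal GLCM can be built as below. This sketch (not from the paper) counts only the horizontal neighbour, i.e. how often gray value i occurs immediately to the left of gray value j; production libraries such as scikit-image's graycomatrix support arbitrary distances and angles:

```python
# Gray Level Co-occurrence Matrix for the horizontal offset (0, 1).
# image: 2-D list of gray levels, each in range(levels).

def glcm_horizontal(image, levels):
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):   # horizontally adjacent pairs
            m[a][b] += 1
    return m

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]
print(glcm_horizontal(img, 2))  # [[1, 2], [1, 2]]
```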
4. EXPERIMENTAL METHODOLOGY
The data set contains 250 JPEG images, used to evaluate the effectiveness and efficiency of the selected color
features. Before starting the processing of images, the query image and the data set images are resized to the same level.
Images are represented in RGB color space and the features are extracted using histogram. The GCH (Global Color
histogram) represents images with single histogram. Then the relevant images are identified based on different color
features and their combinations. In all the three cases the most similar 5 images are displayed.
In the first level images are divided into fixed blocks of size 16 x 16. For each block, its color histogram is
obtained. The GCH of query image and the images in the data set is computed and distance is measured. Relevant images
are retrieved by computing the similarity between the query vector and image vectors.
There are several ways to calculate distances between two vectors. Here the sum of the distances between the
RGB values of the pixels in the same position is calculated.
In the next level visual feature representation is extracted by incorporating the color histograms and color
moments. The texture features of the query image and the images in the data set are calculated using the above mentioned
equations and stored. The relevant images are ranked by using a fixed threshold on the difference between the texture
features of the query image and the database images.
4.1 Histogram
The system used global color histograms in extracting the color features of images. The RGB values of the
image are grabbed and stored in a histogram vector of size 64. A number of distance measures are available to find the
difference between two vectors.
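One common way to obtain a 64-bin global color histogram is to quantize each RGB channel to 4 levels (4 × 4 × 4 = 64 bins); the paper does not spell out its binning, so the sketch below is an assumption for illustration, with 0 to 255 pixel values:

```python
# 64-bin global color histogram: each channel quantized to 4 levels,
# every pixel increments exactly one of the 64 bins; the result is
# normalized so the bins sum to 1.

def global_color_histogram(pixels):
    """pixels: iterable of (r, g, b) tuples with values 0-255."""
    hist = [0] * 64
    for r, g, b in pixels:
        bin_index = (r // 64) * 16 + (g // 64) * 4 + (b // 64)
        hist[bin_index] += 1
    n = len(pixels)
    return [h / n for h in hist]

# Two red pixels and two blue pixels (toy image).
hist = global_color_histogram([(255, 0, 0), (200, 10, 5),
                               (0, 0, 255), (10, 5, 200)])
print(hist[48], hist[3])  # 0.5 in the "red" bin, 0.5 in the "blue" bin
```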
4.2 Euclidean Distance
Euclidean distance between two color histograms h and g can be calculated as d(h, g) = √(Σᵢ (h(i) − g(i))²) [13]
The Euclidean distance is calculated between the query image and every image in the database. All the images
in the data set have been compared with the query image. Upon completion of the Euclidean distance algorithm, we have
an array of Euclidean distances, which is then sorted. The five topmost images are then displayed as a result of the
texture search.
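The Euclidean distance used above is straightforward to compute directly; a minimal sketch:

```python
# Euclidean (L2) distance between two histogram vectors h and g:
# d(h, g) = sqrt(sum_i (h_i - g_i)^2).
import math

def euclidean_distance(h, g):
    return math.sqrt(sum((hi - gi) ** 2 for hi, gi in zip(h, g)))

print(euclidean_distance([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0
print(euclidean_distance([0.0, 3.0], [4.0, 0.0]))            # 5.0
```

Sorting the database by this value, as the text describes, yields the ranked retrieval list.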
4.3 SAD (Sum of Absolute Differences)
SAD is an algorithm for measuring the similarity between image blocks. It works by taking the absolute
difference between each pixel in the original block and the corresponding pixel in the block being used for comparison.
These differences are summed to create a simple metric of block similarity. Here the size of the block is defined as
16. The sum of the differences between the pixel values of the query image and the images in the data set is calculated and
used to find the color similarity of the images.
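The SAD metric described above reduces to one accumulation loop; a minimal sketch over 2-D blocks:

```python
# Sum of Absolute Differences (SAD): accumulate |a - b| pixel by pixel
# over two equally sized blocks. Identical blocks score 0; larger
# values mean less similar blocks.

def sad(block_a, block_b):
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

a = [[10, 20], [30, 40]]
b = [[12, 18], [30, 45]]
print(sad(a, a))  # 0
print(sad(a, b))  # 2 + 2 + 0 + 5 = 9
```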
4.4 Level 1
• The input image is read, and its color features are extracted using the GCH and stored.
• The color features of each image in the data set are calculated and stored.
• The similarity between the query image and each image in the data set is calculated.
• The images are sorted based on the distance.
• The five most similar images are displayed.
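The Level 1 steps above can be sketched end to end as follows. This is a self-contained illustration under our earlier assumption of a 4×4×4 quantized 64-bin histogram; function and variable names are ours:

```python
import numpy as np

def color_histogram(img):
    """64-bin global color histogram (4 levels per RGB channel), normalized."""
    q = img // 64
    idx = (q[..., 0] * 16 + q[..., 1] * 4 + q[..., 2]).ravel()
    h = np.bincount(idx, minlength=64).astype(float)
    return h / h.sum()

def level1_search(query_img, dataset_imgs, k=5):
    """Rank data-set images by the sum of absolute histogram differences."""
    q = color_histogram(query_img)
    scored = [(i, float(np.abs(q - color_histogram(im)).sum()))
              for i, im in enumerate(dataset_imgs)]
    scored.sort(key=lambda t: t[1])    # sort the images based on the distance
    return scored[:k]                  # the k most similar images are displayed
```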
4.5 Level 2
• The input image is read and converted to a gray-scale image.
• A Gray Level Co-occurrence Matrix (GLCM) of the image is constructed.
• The texture features mean, standard deviation, variance, entropy, skew and energy are calculated from the GLCM
and stored.
• Each image in the data set is taken, and its texture features are calculated and stored.
• The Euclidean distance between the input image and each image in the data set is calculated and stored.
• The images are clustered and sorted with the distance as the key.
• The images with the minimum distance are retrieved.
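The GLCM construction and the six statistical features can be sketched as follows. The exact feature equations are not reproduced in this excerpt, so common textbook definitions are used here; treat the specific formulas, the 8-level quantization and the (1, 0) offset as assumptions:

```python
import numpy as np

def glcm(gray, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for the offset (dx, dy)."""
    q = (gray.astype(int) * levels) // 256        # quantize to `levels` gray levels
    m = np.zeros((levels, levels), dtype=float)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1    # count co-occurring level pairs
    return m / m.sum()

def texture_features(p):
    """Mean, standard deviation, variance, entropy, skew and energy of a GLCM."""
    i = np.arange(p.shape[0])
    marginal = p.sum(axis=1)                      # distribution of the first level
    mean = float((i * marginal).sum())
    var = float(((i - mean) ** 2 * marginal).sum())
    std = var ** 0.5
    skew = float((((i - mean) ** 3) * marginal).sum() / (std ** 3 + 1e-12))
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    energy = float((p ** 2).sum())
    return np.array([mean, std, var, entropy, skew, energy])
```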
5. RESULTS AND DISCUSSION
5.1 Database
The image data set used in this work contains 250 JPEG images randomly selected from the World Wide
Web. The following figure depicts a sample of the images in the database:
Figure: Image Database
Figure: The query image
5.2 Color Extraction & Matching
The color features of the query image and of the images in the data set are calculated and stored in separate vectors of
size 64. The histograms of the query image and of the images in the database are compared by calculating the sum of the
differences of the values at the same positions in the histogram vectors, which gives the following top 5 results:
Figure: Color results for the search of the query image (color distances of the top 5 images: 0.0, 0.454, 0.682, 0.791, 0.999)
The result shows that most of the images in the result set are unrelated to the query.
5.3 Texture Extraction & Matching
The statistical approach is used to analyse the image texture. The statistical features of the images are calculated
using the above-mentioned equations from the GLCM of the query image and of the data set images, and are compared using the
Euclidean distance metric, which gives the following top 5 results:
Figure: Texture results for the search of the query image (statistical distances of the top 5 images: 0.0, 0.576, 0.594, 0.632, 0.689)
The system was tested with different query images, and the texture search was found to give better results
than those obtained from the color search.
6. CONCLUSIONS
An experimental comparison of a number of different color descriptors for content-based image retrieval was
carried out. The color histogram and color moments are considered for retrieval. The application performs a simple color-
based search in an image database for an input query image, using global color histograms, and compares the color
histograms of different images. The SAD algorithm is used to measure the similarity of the images. To enhance the search,
the application performs a statistical feature-based search using the gray level co-occurrence matrix. The comparison of
the images is done using the Euclidean distance equation.
According to the results obtained, it is found that the performance depends on the color distribution of the images.
Most of the images retrieved by the search based on the color feature are unrelated to the query. The test results
indicate that the search using the statistical features gives better results than the color feature search. The
results can be improved further by searching on the combined image properties of color and texture. In
addition, further enhancement is possible by searching on the image properties of color, texture and
shape together.
REFERENCES
[1] Gaurav Jaswal, Amit Kaul, Rajan Parmar, Content based Image Retrieval using Color Space Approaches,
International Journal of Engineering and Advanced Technology (IJEAT) ISSN: 2249 – 8958, Volume-2,
Issue-1, October 2012.
[2] Poulami Halda, Joydeep Mukherjee, Content based Image Retrieval using Histogram, Color and Edge,
International Journal of Computer Applications (0975 – 888), Volume 48– No.11, June 2012.
[3] Ja-Hwung Su, Wei-Jyun Huang, Philip S. Yu, and Vincent S. Tseng, Efficient Relevance Feedback
for Content-Based Image Retrieval by Mining User Navigation Patterns, IEEE Transactions on Knowledge
and Data Engineering, Vol. 23, No. 3, March 2011.
[4] Neetu Sharma, Paresh Rawat and Jaikaran Singh, Efficient CBIR Using Color Histogram Processing, Signal &
Image Processing: An International Journal (SIPIJ), Vol. 2, No. 1, March 2011.
[5] Ji-quan Ma (Heilongjiang University, Harbin, China), Content-Based Image Retrieval with HSV Color Space and
Texture Features, International Conference on Web Information Systems and Mining, 2009.
[6] Sharmin Siddique, A Wavelet Based Technique for Analysis and Classification of Texture Images, Carleton
University, Ottawa, Canada, Proj. Rep. 70.593, April 2002.
[7] A. Jain and A. Vailaya, Image Retrieval using Color and Shape, Elsevier Science Ltd, vol. 29, pp. 1233- 1244,
1996.
[8] M. J. Swain and D. H. Ballard, Color Indexing, International Journal of Computer Vision, vol. 7, pp. 11-32, 1991.
[9] Robson Barcellos, Rogério Oliani Saranz, Luciana Tarlá Lorenzi, Adilson Gonzaga Universidade de São Paulo,
Content Based Image Retrieval Using Color Autocorrelograms in HSV Color Space.
[10] G. N.Srinivasan, and Shobha G, Statistical Texture Analysis, Proceedings of World Academy of Science,
Engineering and Technology Volume 36 December 2008 ISSN 2070-3740.
[11] S. Selvarajah and S. R. Kodituwakku, Analysis and Comparison of Texture Features for Content Based Image
Retrieval, International Journal of Latest Trends in Computing, Volume 2, March 2011.
[12] S. R. Kodituwakku, S. Selvarajah, Comparison of Color Features for Image Retrieval, Indian Journal of Computer
Science and Engineering, Vol. 1, No. 3, pp. 207-211.
[13] Bongani Malinga, Daniela Raicu, Jacob Furst, Local vs. Global Histogram-Based Color Image Clustering,
University of Depaul, Technical Reports: TR06-010 (2006).
[14] V. Vinitha, J. Jagadeesan, R. Augustian Isaac, Web Image Search Reranking
Using CBIR, International Journal of Computer Science & Engineering Technology (IJCSET).
[15] Prashant Chatur, Pushpanjali Chouragade, “Visual Rerank: A Soft Computing Approach for Image Retrieval
from Large Scale Image Database”, International Journal of Computer Engineering & Technology (IJCET),
Volume 3, Issue 3, 2012, pp. 446 - 458, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.
[16] Prashant Chatur and Pushpanjali Chouragade, “A Soft Computing Approach for Image Searching using
Visual Reranking”, International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 2,
2013, pp. 543 - 555, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.