Face Recognition and Retrieval Using LLBP and URFB: G. Komala Yadav, M. Venkata Ramana
Abstract: Face recognition is one of the major problems in biometric technology. It identifies and/or verifies a person using the 2D/3D physical characteristics of face images. Several techniques have been proposed for face recognition, such as Fisherfaces, Elastic Bunch Graph Matching, and Support Vector Machines. However, many challenges remain, such as facial expressions, pose variations, occlusion, and illumination change, all of which dramatically degrade the performance of face recognition systems, so it is essential to build an efficient system. We introduce a novel face representation method, Local Line Binary Pattern (LLBP), integrated with Unified Relevance Feedback (URFB). LLBP summarizes the local spatial structure of an image by thresholding a local window with binary weights and producing a decimal number as the texture representation; moreover, it has low computational cost. The basic idea of LLBP is to first obtain binary codes along the horizontal and vertical directions separately, and then compute their magnitude, which characterizes changes in image intensity such as edges and corners. Combined with URFB, this shows an advantage over traditional retrieval mechanisms. To seamlessly combine texture-feature-based retrieval, a query-concept-dependent fusion strategy is learned automatically. Experimental results on the ORL database, consisting of 400 images, show that the proposed framework is widely scalable and effective for recognition, classification, and retrieval.
Keywords: Binary Code, LLBP, ORL database, Texture, URFB
I. Introduction
As the necessity for higher levels of security rises, technology is bound to expand to fulfill these needs. Any new creation, enterprise, or development should be uncomplicated and acceptable for end users in order to spread worldwide. This strong demand for user-friendly systems that can secure our assets and protect our privacy without losing our identity in a sea of numbers has drawn the attention of researchers toward what is called biometrics. Biometrics is an emerging area of bioengineering; it is the automated method of recognizing a person based on a physiological or behavioral characteristic. Several biometric modalities exist, such as signature, fingerprints, voice, iris, retina, hand geometry, ear geometry, and face. Among these, facial recognition appears to be one of the most universal, collectable, and accessible. Biometric face recognition, otherwise known as Automatic Face Recognition (AFR), is a particularly attractive biometric approach, since it focuses on the same identifier that humans primarily use to distinguish one person from another: the face. One of its main goals is the understanding of the complex human visual system. Face recognition operates in two modes, face verification (or authentication) and face identification (or recognition), and proceeds in two main stages. The detection stage is the first; it involves identifying and locating a face in an image. The recognition stage is the second; it includes feature extraction, where information important for discrimination is saved, and matching, where the recognition result is produced with the aid of a face database. Many face recognition methods have been proposed, and the vast literature on the topic offers several classifications of the existing techniques. The following is one possible high-level classification: Holistic Methods: The whole face image is used as the raw input to the recognition system.
An example is the well-known PCA-based technique introduced by Kirby and Sirovich, followed by Turk and Pentland. Local Feature-based Methods: Local features are extracted, such as eyes, nose and mouth. Their locations and local statistics (appearance) are the input to the recognition stage. An example of this method is Elastic Bunch Graph Matching (EBGM). Although progress in face recognition has been encouraging, the task has also turned out to be a difficult endeavor. In the following sections, we give a brief review on technical advances and analyze technical challenges.
www.iosrjournals.org
1|Page
1.2. Overview
As one of the most successful applications of image analysis and understanding, face recognition has recently gained significant attention. Over the last ten years or so, it has become a popular area of research in computer vision. Some examples of face recognition application areas are:
Security: computer and physical access control
Government: event security, criminal and terrorist screening
Surveillance
Immigration/Customs: illegal immigrant detection, passport/ID card authentication
Casino: filtering suspicious gamblers/VIPs
Toys: intelligent robotics
Vehicle: safety alert systems based on eyelid movement
The largest face recognition system in the world, with over 75 million photographs, is actively used for visa processing by the U.S. Department of State. In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge. High-resolution face images, 3-D face scans, and iris images were used in the tests. The results indicated that the new algorithms were 10 times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins.
Fig(1): Different images illustrating these problems
The illumination problem is illustrated in the following figure, where the same face appears different due to a change in lighting. More specifically, the changes induced by illumination can be larger than the differences between individuals, causing systems based on comparing images to misclassify the identity of the input image.
Fig(2): Illustration of the illumination problem
The pose problem is illustrated in the following figure, where the same face appears different due to changes in viewing conditions. The pose problem has been divided into three categories:
The simple case, with small rotation angles;
The most commonly addressed case, when a set of training image pairs (frontal and rotated) is available;
The most difficult case, when training image pairs are not available and illumination variations are present.
II. Binary Patterns
A binary pattern is an image produced by a formula that involves binary operations and results in a 32-bit integer. These patterns are closely tied to the 32-bit RGB color system, but they may be used with any integer numbers. The simplest pattern is x or y, whose result is the pixel color (in RGB) at the coordinates (x, y) of the image pixel. Here "or" denotes the bitwise OR of the 32-bit numbers x and y. For example, if we have 108 (1101100) and 226 (11100010), then 108 or 226 is 238 (11101110). This pattern is not restricted to 2D and can be extended into higher dimensions such as 3D.
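The x or y pattern can be reproduced in a few lines. Python is used here purely for illustration; the paper's own implementation is in MATLAB.

```python
# Generate the "x or y" binary pattern described above.
# Each pixel value is the bitwise OR of its x and y coordinates,
# masked to 8 bits here so it can be viewed as a greyscale intensity.

def or_pattern(width, height):
    """Return a 2-D list where pixel (x, y) = x | y."""
    return [[(x | y) & 0xFF for x in range(width)] for y in range(height)]

# The worked example from the text: 108 OR 226.
print(108 | 226)  # 238

img = or_pattern(8, 8)
print(img[2][5])  # pixel at x=5, y=2 is 5 | 2 = 7
```

Rendering the full-resolution pattern over 32-bit RGB values instead of 8-bit greyscale gives the characteristic fractal-like tiling shown in the figures.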
Uses
Color generation: the possibility to create dozens of new colors that are not close to any of the basic colors such as red, green, and blue.
Desktop backgrounds: using several filters, it is possible to achieve attractive patterns that may be tiled on a desktop screen.
Texture generation: different formulas may provide images suitable for procedural image generation, for example creating carpets or walls.
Fig(4): Colour = x or y
Figure(6): LBP operator: (left) the binary sequence (8 bits) and (right) the weighted threshold
Fig(7): Proposed system model
Both the test image and the training images are compared in order to find the most relevant images from the database. The ORL database is chosen; it consists of 400 images. Before comparing the images (test and training), texture content has to be generated for each image. These feature vectors differ from image to image based on texture properties. Normalization is the first step in generating the texture content: the given input image is normalized in order to reduce high-contrast intensity values. Each image is then segmented, or subdivided, in order to apply the Local Line Binary Pattern (LLBP). Hence an individual N*N image is divided into n*n sub-blocks, and an LLBP operator is applied to each sub-block, since LLBP produces a high-contrast image; the resulting texture values differ from image to image. After comparing the test and training images using a quality assessment technique called SSIM (Structural Similarity Index Metric), the 5 most relevant images are retrieved and displayed. If the user is satisfied with the result, no feedback is needed. If the user is not satisfied, the feedback stage begins: according to the user's choice, the query (test) image may be replaced by one of the 5 retrieved images in order to increase the recognition rate. Feedback is an iterative process; it can be applied to the system until the user is satisfied with the result.
Fig(9): Example of a face image processed by the LLBP operator with line length 9 pixels: (a) is the original image, (b) and (c) are LLBP along the horizontal and vertical directions, and (d) is its magnitude. The line length is measured in pixels; hc is the position of the center pixel on the horizontal line and vc on the vertical line; hn is a pixel along the horizontal line and vn is a pixel along the vertical line; and the s function is the same as in LBP.
III. Image Retrieval
An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images. Most traditional and common methods of image retrieval add metadata such as captions, keywords, or descriptions to the images so that retrieval can be performed over the annotation words. Manual image annotation is time-consuming, laborious, and expensive; to address this, a large amount of research has been done on automatic image annotation. Additionally, the increase in social web applications and the semantic web has inspired the development of several web-based image annotation tools. The first microcomputer-based image database retrieval system was developed at MIT in the 1980s by Banireddy Prasaad, Amar Gupta, Hoo-min Toong, and Stuart Madnick.
3.1. Search methods
Image search is a specialized data search used to find images. To search for images, a user may provide query terms such as a keyword or an image file/link, or click on an image, and the system will return images "similar" to the query. The similarity used as the search criterion could be meta tags, color distribution in images, region/shape attributes, etc.
Image meta search: search for images based on associated metadata such as keywords, text, etc.
Content-based image retrieval (CBIR): the application of computer vision to image retrieval. CBIR aims at avoiding textual descriptions and instead retrieves images based on similarities in their contents (textures, colors, shapes, etc.) to a user-supplied query image or user-specified image features. Lists of CBIR engines catalogue engines that search for images based on visual content such as color, texture, and shape/object.
3.2. Data Scope
It is crucial to understand the scope and nature of image data in order to determine the complexity of image search system design. The design is also largely influenced by factors such as the diversity of the user base and the expected user traffic for a search system. Along this dimension, search data can be classified into the following categories:
Archives: usually contain large volumes of structured or semi-structured homogeneous data pertaining to specific topics.
Domain-specific collection: a homogeneous collection providing access to controlled users with very specific objectives. Examples of such collections are biomedical and satellite image databases.
Enterprise collection: a heterogeneous collection of images accessible to users within an organization's intranet. Pictures may be stored in many different locations.
Personal collection: usually a largely homogeneous collection, generally small in size, accessible primarily to its owner, and usually stored on local storage media.
Web: World Wide Web images are accessible to everyone with an Internet connection. These collections are semi-structured, non-homogeneous, and massive in volume, and are usually stored in large disk arrays.
3.4. CBIR techniques
Many CBIR systems have been developed, but the problem of retrieving images on the basis of their pixel content remains largely unsolved.
3.4.1. Query techniques
Different implementations of CBIR make use of different types of user queries.
a) Query by example
Query by example is a query technique that involves providing the CBIR system with an example image on which it will then base its search. The underlying search algorithms may vary depending on the application, but result images should all share common elements with the provided example. Options for providing example images to the system include:
A preexisting image may be supplied by the user or chosen from a random set.
The user draws a rough approximation of the image they are looking for, for example with blobs of color or general shapes.
This query technique removes the difficulties that can arise when trying to describe images with words.
b) Other query methods
Other query methods include browsing for example images, navigating customized/hierarchical categories, querying by image region (rather than the entire image), querying by multiple example images, querying by visual sketch, querying by direct specification of image features, and multimodal queries (e.g., combining touch, voice, etc.). CBIR systems can also make use of relevance feedback, where the user progressively refines the search results by marking images in the results as "relevant", "not relevant", or "neutral" to the search query, and then repeating the search with the new information.
3.5. SEMANTIC RETRIEVAL
The ideal CBIR system from a user perspective would involve what is referred to as semantic retrieval, where the user makes a request like "find pictures of dogs" or even "find pictures of Abraham Lincoln". This type of open-ended task is very difficult for computers to perform - pictures of chihuahuas and Great Danes look very different, and Lincoln may not always be facing the camera or in the same pose. Current CBIR systems therefore generally make use of lower-level features like texture, color, and shape, although some systems take advantage of very common higher-level features like faces (see facial recognition system). Not every CBIR system is generic. Some systems are designed for a specific domain, e.g. shape matching can be used for finding parts inside a CAD-CAM database.
a) COLOR
Computing distance measures based on color similarity is achieved by computing a color histogram for each image that identifies the proportion of pixels within an image holding specific values (that humans express as colors). Current research is attempting to segment color proportion by region and by spatial relationship among several color regions. Examining images based on the colors they contain is one of the most widely used techniques because it does not depend on image size or orientation. Color searches will usually involve comparing color histograms, though this is not the only technique in practice.
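A minimal sketch of the histogram comparison just described, using quantised colour indices and an L1 bin distance (one of several distance measures used in practice). The tiny example images are hypothetical, chosen to show that the histogram ignores spatial arrangement, which is why this technique does not depend on image size or orientation.

```python
# Colour-histogram comparison: each image is a flat list of quantised
# colour indices; the histogram records the proportion of pixels
# holding each value.

def histogram(pixels, levels=4):
    """Normalised histogram: fraction of pixels at each colour index."""
    h = [0.0] * levels
    for p in pixels:
        h[p] += 1.0
    n = len(pixels)
    return [c / n for c in h]

def l1_distance(h1, h2):
    """Sum of absolute bin differences: 0 for identical histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

img_a = [0, 0, 1, 2, 3, 3, 3, 1]
img_b = [3, 1, 0, 3, 2, 1, 3, 0]   # same pixels, different layout
print(l1_distance(histogram(img_a), histogram(img_b)))  # 0.0
```

Segmenting colour proportion by region, as the current research described above attempts, would replace the single global histogram with one histogram per region plus spatial relations between regions.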
b) TEXTURE
Texture measures look for visual patterns in images and how they are spatially defined. Textures are represented by texels, which are then placed into a number of sets depending on how many textures are detected in the image. These sets not only define the texture but also where in the image the texture is located.
c) SHAPE
Shape does not refer to the shape of an image but to the shape of a particular region that is being sought. Shapes will often be determined by first applying segmentation or edge detection to an image. Other methods, like that of [Tushabe and Wilkinson 2008], use shape filters to identify given shapes in an image. In some cases accurate shape detection will require human intervention, because methods like segmentation are very difficult to automate completely.
3.7. Applications
Some software producers are trying to push CBIR based applications into the filtering and law enforcement markets for the purpose of identifying and censoring images with skin-tones and shapes that could indicate the presence of nudity, with controversial results.
3.8. What is texture?
Everyday texture terms - rough, silky, bumpy - refer to touch. A texture that is rough to the touch has a large difference between high and low points, and a space between highs and lows approximately the same size as a finger. Silk would have little difference between high and low points, and the differences would be spaced very close together relative to finger size. Image texture works in the same way, except that the highs and lows are brightness values (also called grey levels, GL, or digital numbers, DN) instead of elevation changes, and instead of probing a finger over the surface, a "window" - a (usually square) box defining the size of the probe - is used.
3.8.2. GLCM
Grey-Level Co-occurrence Matrix texture measurements have been the workhorse of image texture since they were proposed by Haralick in the 1970s. To many image analysts they are a button you push in the software that yields a band whose use improves classification - or not. The original works are necessarily condensed and mathematical, making the process difficult to understand for the student or front-line image analyst. This GLCM texture tutorial was developed to help such people, and it has been used extensively worldwide since 1999. This document concerns some of the most commonly used texture measures, those derived from the Grey Level Co-occurrence Matrix (GLCM). The essence is understanding the calculations and how to do them. This involves:
defining a Grey Level Co-occurrence Matrix (GLCM);
creating a GLCM and using it to calculate texture in the exercises;
understanding how the calculations are used to build up a texture image;
viewing examples of the texture images created with various input parameters.
Definition: Order:
The GLCM described here is used for a series of "second order" texture calculations. First order texture measures are statistics calculated from the original image values, like variance, and do not consider pixel neighbor relationships. Second order measures consider the relationship between groups of two (usually neighboring) pixels in the original image.
3.8.4. Properties of the GLCM
1. It is square: the reference pixels have the same range of values as the neighbour pixels, so the values along the top are identical to the values along the side.
2. It has the same number of rows and columns as the quantization level of the image: the test image has four grey-level values (0, 1, 2, and 3). Eight-bit data has 256 possible values, so it would yield a 256 x 256 square matrix with 65,536 cells. Sixteen-bit data would give a matrix of size 65536 x 65536 = 4,294,967,296 cells!
3. It is symmetrical around the diagonal.
Some things to notice about the normalized symmetrical GLCM (called simply the GLCM from here on): the diagonal elements all represent pixel pairs with no grey-level difference (0-0, 1-1, 2-2, 3-3, etc.). If there are high probabilities in these elements, then the image does not show much contrast: most pixels are identical to their neighbours. When the values on the diagonal are summed, the result is the probability of any pixel being the same grey level as its neighbour. Now look at lines parallel to the diagonal. Cells one cell away from the diagonal represent pixel pairs with a difference of only one grey level (0-1, 1-2, 2-3, etc.). Similarly, values in cells two away from the diagonal show how many pixel pairs have a difference of two grey levels, and so forth. The farther from the diagonal, the greater the difference between pixel grey levels. Sum up one of these parallel diagonals and the result is the probability of a pixel being 1 (or 2, or 3, etc.) grey levels different from its neighbour.
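The construction just described can be sketched for the four-grey-level case. The 4x4 test image below is hypothetical, and only the horizontal "one pixel to the right" offset is counted; counting each pair in both directions is what makes the matrix symmetrical.

```python
# Build a symmetrical, normalised GLCM for a small 4-grey-level image,
# using the horizontal (one pixel to the right) neighbour relationship.

def glcm(image, levels=4):
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):      # horizontal neighbour pairs
            m[a][b] += 1
            m[b][a] += 1                    # symmetry: count both directions
    total = sum(sum(r) for r in m)          # normalise to probabilities
    return [[v / total for v in row] for row in m]

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]

g = glcm(img)
# Sum of the diagonal = probability a pixel equals its neighbour.
print(round(sum(g[i][i] for i in range(4)), 3))
```

Texture measures such as contrast and homogeneity are then weighted sums over this matrix, with weights that grow with distance from the diagonal.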
Fig(10): Creating a texture image
Image edge pixels usually represent a very small fraction of total image pixels, so this is only a minor problem. However, if the image is very small or the window is very large, the image edge effect should be kept in mind, as edge effects can be a problem in classification.
3.10. APPLICATIONS
The CBIR technology has been used in several applications such as fingerprint identification, biodiversity information systems, digital libraries, crime prevention, medicine, historical research, among others. Some of these applications are presented in this section.
Medical Applications
The use of CBIR can result in powerful services that can benefit biomedical information systems. Three large domains can instantly take advantage of CBIR techniques: teaching, research, and diagnostics [73]. From the teaching perspective, search tools can be used to find important cases to present to students. Research can also be enhanced by services combining image content information with different kinds of data. For example, scientists can use mining tools to discover unusual patterns among textual information (e.g., treatment reports and patient records) and image content. Similarity queries based on image content descriptors can also help the diagnostic process. Clinicians usually use similar cases for case-based reasoning in their clinical decision-making. In this sense, while textual data can be used to find images of interest, visual features can be used to retrieve relevant cases.
Digital Libraries
There are several digital libraries that support services based on image content [74-79]. One example is the digital museum of butterflies [74], aimed at building a digital collection of Taiwanese butterflies. This digital library includes a module responsible for content-based image retrieval based on color, texture, and patterns. In a different image context, Zhu et al. [76] present a content-based image retrieval digital library that supports geographical image retrieval. The system manages air photos, which can be retrieved through texture descriptors. Place names associated with retrieved images can be displayed by cross-referencing with a Geographical Name Information System (GNIS) gazetteer. In this same domain, Bergman et al. describe an architecture for storage and retrieval of satellite images and video data from a collection of heterogeneous archives. Other initiatives cover different concepts of the CBIR area. For example, while the research presented in [77,78] concentrates on new searching strategies for improving the effectiveness of CBIR systems, another popular focus is on proposing image descriptors [79].
IV. Relevance Feedback
Search systems operate using a standard retrieval model, in which a searcher with a need for information searches for documents that will help supply that information. Searchers are typically expected to describe the information they require via a set of query words submitted to the search system. This query is compared to each document in the collection, and a set of potentially relevant documents is returned. It is rare that searchers retrieve the information they seek in response to their initial query formulation (Van Rijsbergen, 1986). However, such problems can be resolved by iterative, interactive techniques. The initial query can be reformulated during each iteration, either explicitly by the searcher or based on searcher interaction. The direct involvement of the searcher in interactive IR results in a dialogue between the IR system and the searcher that is potentially muddled and misdirected (Ingwersen, 1992). Searchers may lack a sufficiently developed idea of what information they seek and may be unable to conceptualize their needs into a query statement understandable by the search system. When unfamiliar with the collection of documents being searched, they may have insufficient search experience to adapt their query formulation strategy (Taylor, 1968; Kuhlthau, 1988), and it is often necessary for searchers to interact with the retrieval system to clarify their query. Relevance feedback (RF) is a technique that helps searchers improve the quality of their query statements; it has been shown to be effective in non-interactive experimental environments (e.g., Salton and Buckley, 1990) and, to a limited extent, in IIR (Beaulieu, 1997). It allows searchers to mark documents as relevant to their needs and present this information to the IR system. The information can then be used to retrieve more documents like the relevant ones and to rank documents similar to the relevant ones ahead of other documents (Ruthven, 2001, p. 38).
RF is a cyclical process: a set of documents retrieved in response to an initial query is presented to the searcher, who indicates which documents are relevant. This information is used by the system to produce a modified query, which is used to retrieve a new set of documents that are presented to the searcher. This process is known as an iteration of RF, and it repeats until the required set of documents is found. To work effectively, RF algorithms must obtain feedback from searchers about the relevance of the retrieved search results. This feedback typically involves the explicit marking of documents as relevant. The system takes terms from the marked documents and uses them to expand the query or re-weight the existing query terms. This process is referred to as query modification. It increases the score of terms that occur in relevant documents and decreases the weights of those in non-relevant documents. The terms chosen by the RF system are typically those that discriminate most between the marked documents and those that are not marked. The query statement that evolves can be thought of as a representation of a searcher's interests within a search session (Ruthven et al., 2002a). The classic model of IR involves the retrieval of documents in response to a query devised and submitted by the searcher. The query is a one-time static conception of the problem, where the need is assumed constant for the entire search session, regardless of the information viewed. RF is an iterative process to improve a search system's representation of a static information need. That is, the need after a number of iterations is assumed to be the same as at the beginning of the search (Bates, 1989). The aim of RF is not to provide information that enables a change in the topic of the search. The evolution of the query statement across a number of feedback iterations is best viewed as a linear process, resulting in the formulation of an improved query.
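The query-modification step described above can be sketched with the classic Rocchio formula. This is a standard textbook illustration, not this paper's URFB mechanism: the weights alpha, beta, and gamma are conventional defaults chosen here for the example, and queries and documents are represented as simple term-weight vectors.

```python
# Rocchio query modification: move the query vector toward the centroid
# of relevant documents and away from the centroid of non-relevant ones.

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    dims = len(query)

    def centroid(docs):
        if not docs:
            return [0.0] * dims
        return [sum(d[i] for d in docs) / len(docs) for i in range(dims)]

    r, nr = centroid(relevant), centroid(nonrelevant)
    # Negative term weights are clipped to zero, as is conventional.
    return [max(0.0, alpha * query[i] + beta * r[i] - gamma * nr[i])
            for i in range(dims)]

q = [1.0, 0.0, 0.0]
new_q = rocchio(q, relevant=[[0.0, 1.0, 0.0]], nonrelevant=[[0.0, 0.0, 1.0]])
print(new_q)  # [1.0, 0.75, 0.0]
```

Each RF iteration re-runs retrieval with the modified query, so terms from relevant documents gain weight while terms confined to non-relevant documents are suppressed.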
Initially, this model of RF was not regarded as an interaction between searcher and system and a potential source of relevance information.
4.1. Implicit feedback
Implicit feedback systems make inferences about what is relevant based on searcher interaction, and do not intrude on the searcher's primary activity, i.e., satisfying their information need (Furnas, 2002). In traditional relevance feedback systems the act of making judgments is intentional, specifically for the purpose of helping the system build up a richer body of evidence on what information is relevant. However, the ultimate goal of information seeking is to satisfy an information need, not to rate documents. Systems that use implicit feedback to model information needs and enhance search queries fit better with this goal. Implicit feedback systems typically use measures such as document reading time, scrolling, and interaction to decide what information is relevant (Claypool et al., 2001). However, these systems typically assume that searchers will view and interact with relevant documents more than non-relevant documents. These assumptions are context-dependent and vary greatly between searchers. The approach used here for implicit feedback makes a potentially more robust assumption: searchers will try to view relevant information. Through monitoring the information searchers interact with, search systems can approximate their interests. This is made possible because the interface components the search interface presents are smaller than the full text of documents, allowing relevance information to be communicated more accurately. In TRS Feedback and TRS Document, some of the experimental systems use evidence gathered via implicit feedback to restructure the retrieved information during the search. In these systems, each retrieved document has an associated summary composed of the best four Top-Ranking Sentences, which appear on the interface at the searcher's request. The viewing of this summary is regarded as an indication of interest in the information it contains and is used as an indication of relevance.
These relevance indications are used by the systems to reorder the Top-Ranking Sentences. Sentences are small, and the differences in scores between sentences are also small; should there be a slight change in the system's formulation of the information need, a list of sentences is much more likely to change than, say, a list of documents. At no point, in any experimental system, is the searcher shown the expanded query; they are only shown the effect of the query (i.e., the reordered top-ranking sentence list). Reordering the sentence list based on implicit feedback means it represents the system's current estimation of the searcher's interests. Since this formulation is based solely on the viewed information, the system is able to form reasonable approximations of what information is relevant. As the searcher becomes more sure of their need, or indeed as the need changes, the search system can adapt, select new query terms, and use this query to update the ordering of the Top-Ranking Sentences list to reflect the change. The traditional view of information seeking assumes a searcher's need is static and represented by a single query submitted at the start of the search session. However, as is suggested by Harter (1992) among others, the need is in fact dynamic and changes to reflect the information viewed during a search. As searchers view this content their knowledge changes, and so does their problematic situation. It is therefore preferable to express this modified problem with a revised query. The experimental systems in TRS Feedback and TRS Document do this, selecting the most useful query expansion terms during a search. In the systems developed in these studies, the sentences are reordered using implicit relevance information gathered unobtrusively from searcher interaction. Experimental subjects found this a useful feature that helped them find relevant information.
They suggested that it was most useful when they felt the initial query had retrieved a large amount of potentially relevant information and they wanted to focus their attention on only the most relevant parts. These systems are more push-oriented than the static Top-Ranking Sentences system tested in TRS Presentation. The systems are adaptive: they work to better represent information needs, consider changes in these needs, and restructure the content presented at the results interface. In TRS Feedback and TRS Document it was assumed that the viewing of a document's summary was an indication of interest in the relevance of the summary's contents. There are several grounds on which this can be criticized: searchers will view non-relevant summaries; the title, rather than the summary, may be what the user expressed an interest in; and the searcher may look at all retrieved documents before making real relevance decisions. Nevertheless, this assumption was judged fair enough to allow an initial investigation into the use of implicit feedback. In TRS Document a timing mechanism was introduced to eliminate the problems caused by the accidental mouse-over of document titles and the unwanted removal of sentences from the Top-Ranking Sentences list that follows. The results of TRS Document are testament to the success of a very limited version of an implicit feedback technique.
The structural similarity (SSIM) metric aims to measure quality by capturing the similarity of images. A product of three aspects of similarity is measured: luminance, contrast, and structure. The luminance comparison function $L(x,y)$ for reference image $x$ and test image $y$ is defined as

$$L(x,y) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1} \qquad (1)$$

where $\mu_x$ and $\mu_y$ are the mean intensities of $x$ and $y$, and $C_1 = (K_1 D)^2$ is a small constant that stabilizes the division when the denominator is near zero ($K_1 \ll 1$, and $D$ is the dynamic range of the pixel values, e.g. 255 for 8-bit images). Analogous comparison functions are defined for contrast, using the standard deviations $\sigma_x$ and $\sigma_y$, and for structure, using the covariance $\sigma_{xy}$; the overall SSIM index is the product of the three terms.
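The three SSIM comparison terms can be sketched as follows. This is a single-window version computed over the whole image; the published metric averages the index over local windows, and the function name `ssim_global` is mine.

```python
import numpy as np

def ssim_global(x, y, L=255, K1=0.01, K2=0.03):
    """Single-window SSIM: product of luminance, contrast and structure
    terms, with the conventional stabilizing constants C1, C2, C3 = C2/2."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2.0
    mx, my = x.mean(), y.mean()          # mean intensities
    sx, sy = x.std(), y.std()            # standard deviations
    sxy = ((x - mx) * (y - my)).mean()   # covariance
    lum = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)
    con = (2 * sx * sy + C2) / (sx ** 2 + sy ** 2 + C2)
    struct = (sxy + C3) / (sx * sy + C3)
    return lum * con * struct
```

For identical images every term equals one, so the index is exactly 1.0; any luminance, contrast, or structural difference pulls it below that.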
VI. Results
Software Used
The proposed work was coded in MATLAB 10.
The above window shows the horizontal, vertical, and magnitude features of the test image.
Here, the intensity values of the query image after applying LLBP are displayed.
The above window gives the information of the query image after feedback; the horizontal, vertical, and magnitude features are displayed.
The above window gives the intensity values of the query image after feedback. The intensity values after feedback are higher than the values before feedback; hence the recognition rate increases.
This window appears when guirun is typed in the command window. All the buttons (Train, Select Query, Search, Reset, Exit, and Index Number) were created using the GUI.
By clicking the Train button, this window appears while the system is training: all the images in the database (the training set) are normalized and LLBP is then applied, so that the texture content is calculated for the training images.
After all the database images have been trained, a query is selected among them. This is done by clicking the Select Query button, which was created with the GUI.
When the Select Query button is clicked, the above window appears so that any one image can be selected as the query image. In the above window, the user has chosen image 9, so it has been selected as the query image.
A query has been selected to search in the database, i.e., among the trained images. After selecting the query image, the LLBP operator is applied to obtain its texture content for comparison with the training images. Prior to LLBP, normalization is also applied to the query image.
After selecting the query, the user clicks the Search button, and the five most relevant images from the database are displayed for the query. In the above window, only three of the images are relevant to the query and the remaining two are irrelevant. To increase the classification rate, a feedback mechanism is implemented.
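Retrieving the five most relevant images can be sketched as a nearest-neighbour search over the texture feature vectors. The paper does not state its distance measure, so Euclidean distance on flattened LLBP features is an assumption here, as is the function name `top_k`.

```python
import numpy as np

def top_k(query_feat, db_feats, k=5):
    """Rank database images by Euclidean distance between LLBP feature
    vectors (one row per database image) and return the indices of the
    k closest matches, nearest first."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(d)[:k]
```

The indices returned by `top_k` identify which five database images to display; relevant matches should cluster at the front of the list.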
Here,the index number button has been created by GUI to enter the id of the image from the 5 most retrieved images in order to give the feedback to get the best results. Hence the query image has been changed as 7,i.e the index number has been choosen by the user to get the best results. After entering index number as 7,press search. www.iosrjournals.org 21 | Page
When feedback is given to the system by entering the index number in the allotted box and pressing the Search button, the search process begins again for query image 7. The results obtained with query 7 are better than those with query 9: all five most relevant images displayed belong to query image 7. This shows that applying feedback can improve the results. The process is iterative; feedback is given repeatedly until the user is satisfied with the result.
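The iterative feedback loop just described can be sketched as follows. The `pick` callback stands in for the user choosing a better query image from the retrieved set (or signalling satisfaction with `None`); the callback, array layout, and function name are illustrative assumptions, not the paper's GUI logic.

```python
import numpy as np

def feedback_search(db_feats, query_idx, pick, k=5, max_iters=10):
    """Iterative retrieval with user feedback: search, let the user pick a
    better query image from the results, and re-search until the user is
    satisfied or the iteration limit is reached."""
    current = query_idx
    results = np.array([current])
    for _ in range(max_iters):
        d = np.linalg.norm(db_feats - db_feats[current], axis=1)
        results = np.argsort(d)[:k]        # k most relevant images
        choice = pick(results)             # user enters an index number, or None
        if choice is None or choice == current:
            return current, results
        current = choice                   # re-query with the selected image
    return current, results
```

Switching the query from image 9 to image 7 in the screenshots corresponds to `pick` returning a new index on the first iteration and `None` on the second.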
VII. Conclusion
A novel face recognition method using the Local Line Binary Pattern (LLBP), which is motivated by the Local Binary Pattern (LBP), has been proposed. The five most relevant images are displayed using a Content-Based Image Retrieval (CBIR) system. To increase the classification rate, feedback is applied to the system; if the user is not satisfied with the results, feedback is repeated until they are, making the process iterative. Limitation: a 100% recognition rate is not achievable.
Applications:
Application areas include security, government, events, criminal identification, immigration/customs, casinos, toys, and vehicles. Example uses are computer and physical access control, terrorist screening, surveillance, illegal immigrant detection, passport/ID card authentication, filtering suspicious gamblers/VIPs, intelligent robotics, and safety alert systems based on eyelid movement.
VIII. Future Scope
A new method has been proposed for face recognition: the Local Line Binary Pattern (LLBP) is applied to recognize a specified face. If the face is not recognized correctly, more features can be extracted, such as face colour and moment invariants. Moment invariants are sensitive features used in recognition applications; the face can then be recognized again using a decision tree. Applying this two-stage recognition process increases the recognition accuracy by about 2%.
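The moment invariants suggested above can be illustrated with the first two Hu invariants, which are unchanged by rotation and scaling of the shape. This numpy-only sketch and the function name `hu_first_two` are mine; OpenCV's `cv2.HuMoments` provides all seven invariants.

```python
import numpy as np

def hu_first_two(image):
    """First two Hu moment invariants of a grayscale image, computed from
    normalized central moments; rotation- and scale-invariant shape features."""
    img = image.astype(np.float64)
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00  # centroid

    def mu(p, q):                       # central moment of order (p, q)
        return ((xs - cx) ** p * (ys - cy) ** q * img).sum()

    def eta(p, q):                      # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2
```

These invariants could feed a decision tree alongside the LLBP texture features in the proposed two-stage scheme.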