Image Processing
Image Processing Makaut Organizer of 2023-24
Uploaded by Bittu Kumar
Copyright © All Rights Reserved
145 pages
IMAGE PROCESSING

Introduction 2
Digital Image Formation 14
Mathematical Preliminaries 21
Image Enhancement 36
Image Restoration 73
Image Segmentation 78
Image Data Compression 88
Morphological Image Processing 105

NOTE: The MAKAUT course structure and syllabus of the 6th semester has been changed from 2021. IMAGE PROCESSING has been introduced as a new subject in the present curriculum. The syllabus of this subject is almost the same as Digital Image Processing [EC 802B]. Taking special care of this matter, we are providing the relevant MAKAUT university solutions of Digital Image Processing [EC 802B], so that students can get an idea about university question patterns.

INTRODUCTION

Multiple Choice Type Questions

1. An image is a 2D array of [WBUT 2009, 2011, 2017]
a) digital data b) electrical signals c) photographic objects d) light signals
Answer: (a)

2. A line sensor is used to [WBUT 2009]
a) capture a scene b) capture a 3D image c) scan a 2D image d) none of these are true
Answer: (c)

3. What device is used to form an image on the film of a camera? [WBUT 2009]
a) A p-n-p transistor b) A converging lens c) An Op-Amp d) A plane mirror
Answer: (b)

4. If an input image is f(x, y) and a transform T is operated to get a processed image g(x, y), we can write [WBUT 2009]
a) f(x, y) = T[g(x, y)] b) f(x, y) = T g(x, y) c) g(x, y) = T[f(x, y)] d) none of these are true
Answer: (c)

5. The Euclidean distance between two points (x, y) and (s, t) of a two-dimensional space is [WBUT 2010]
a) [(x - s)^2 + (y - t)^2]^(1/2) b) |x - s| + |y - t| c) max(|x - s|, |y - t|) d) none of these
Answer: (a)

6. Colour model names: [WBUT 2011, 2014]
a) The RGB colour model b) The CMY & CMYK colour models c) The HSI colour model d) (a) & (b) only e) (a), (b) & (c)
Answer: (e)

7. Digital image processing uses [WBUT 2011]
a) Fuzzy set theory b) DFT c) DCT d) (b) & (c) e) (a), (b) & (c)
Answer: (e)

8. Find the odd one out w.r.t.
DIP: [WBUT 2011]
a) Arithmetic operation b) Softwares c) Vector & matrix operations d) Image transforms
Answer: (b)

9. Intensity range of an 8-bit pixel image is [WBUT 2012]
a) 0 to 7 b) 0 to 15 c) 0 to 31 d) 0 to 255
Answer: (d)

10. A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are called [WBUT 2012]
a) dot b) pixel c) point d) none of these
Answer: (b)

11. The total amount of energy that flows from the light source, usually measured in watts (W), is called [WBUT 2012, 2018]
a) Radiance b) Luminance c) Reflectance d) None of these
Answer: (a)

12. Sampling of an image is required for [WBUT 2012, 2019]
a) quantization b) sharpening c) smoothing d) digitization
Answer: (d)

13. The common measure of transmission of digital data is [WBUT 2013]
a) bit rate b) baud rate c) frames per second d) none of these
Answer: (b)

14. HDTV stands for [WBUT 2013]
a) High Definition Television b) High Level Digital Television c) (a) & (b) both d) None of these
Answer: (a)

15. Digital Image Processing deals with [WBUT 2013]
a) analog signal b) digital signal c) discrete signal d) (b) & (c) both
Answer: (d)

16. Colour image processing is gaining importance because [WBUT 2014]
a) It's more pleasant to watch b) It's cost effective c) It's easy to capture and represent d) Use of digital images over the internet has increased significantly
Answer: (d)

17. Which of the following is not an image file format? [WBUT 2015, 2017]
a) TIFF b) BMP c) GIF d) none of these
Answer: (d)

18. The smallest unit of a digital image is represented by [WBUT 2015]
a) A one dimensional matrix b) Logic 0 or 1 c) Dot d) Pixel
Answer: (d)

19. Which of the following statements is true?
[WBUT 2015]
a) The resolution of a CMOS image sensor is better than a CCD image sensor
b) The resolution of a CCD image sensor is better than a CMOS image sensor
c) A CCD image sensor is more cost effective than a CMOS image sensor
d) none of these
Answer: (b)

20. An example of a volume image is [WBUT 2015]
a) A one dimensional image b) A two dimensional image c) A three dimensional image d) all of these
Answer: (c)

21. The spectrum of visible light is [WBUT 2016]
a) 10-350 nm b) 380-760 nm c) 760 nm and above d) none of these
Answer: (b)

22. The image acquisition process is used to [WBUT 2016]
a) store image b) transform image c) display image d) none of these
Answer: (d)

23. Colour image is represented by [WBUT 2016]
a) 2-bit b) 8-bit c) 24-bit d) 64-bit
Answer: (c)

24. The photosensitive detector of the human eye is [WBUT 2016]
a) eye lens b) iris c) retina d) cornea
Answer: (c)

25. Through a decimation-by-2 operation the sampling rate is [WBUT 2016]
a) increased b) decreased c) none of these
Answer: (b)

26. What is the range of subjective brightness for humans? [WBUT 2017]
a) scotopic threshold to glare limit b) photopic threshold to glare limit c) scotopic threshold to infinity d) photopic threshold to infinity
Answer: (a)

27. The image function f(x, y) is characterized by two components: [WBUT 2017]
f(x, y) = i(x, y) r(x, y), where
a) 0 < i(x, y) < ∞ and 0 < r(x, y) < 1
(p). These points, together with the 4-neighbours, are called the 8-neighbours of p and are denoted by N8(p).
3rd Part: Two pixels p and q with values from V are m-adjacent if q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

3. Explain image sensing and acquisition (using single sensor, sensor strip and sensor arrays). [WBUT 2010]
Answer:
Image sensing: An image sensor is a device that converts an optical image to an electric signal. It is mostly used in digital cameras and other imaging devices.
Image acquisition is the creation of digital images, such as of a physical scene or of the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing and display of such images.
Image acquisition using a single sensor: The most common sensor of this type is the photodiode, which is constructed of silicon materials and whose output voltage waveform is proportional to light. In order to generate a 2D image using a single sensor, there have to be relative displacements in both the x- and y-directions between the sensor and the area to be imaged. The single sensor is mounted on a lead screw that provides motion in the perpendicular direction.
Image acquisition using a sensor strip: A geometry that is used much more frequently than single sensors consists of an in-line arrangement of sensors in the form of a sensor strip. The strip provides imaging elements in one direction; motion perpendicular to the strip provides imaging in the other direction. This type of arrangement is used in flat-bed scanners. Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional (slice) images of 3D objects.
Image acquisition using sensor arrays: Individual sensors can be arranged in the form of a 2D array. Numerous electromagnetic and some ultrasonic sensing devices are frequently arranged in an array format.
This is also the predominant arrangement found in digital cameras. A typical sensor for these cameras is a CCD array, which can be manufactured with a broad range of sensing properties. CCD sensors are widely used in digital cameras and other light-sensing instruments.

4. Explain Bilinear interpolation method. [WBUT 2011, 2014]
OR,
Explain bilinear interpolation method used for image zooming. [WBUT 2015]
Answer:
In the bilinear interpolation method, we use the four nearest neighbours to estimate the intensity at a given location. Let (x, y) denote the coordinates of the location to which we want to assign an intensity value and let v(x, y) denote that intensity value. For bilinear interpolation, the assigned value is obtained using the equation
v(x, y) = ax + by + cxy + d
where the four coefficients are determined from the four equations in four unknowns that can be written using the four nearest neighbours of the point. In bicubic interpolation, the intensity value assigned to point (x, y) is obtained using the equation
v(x, y) = Σ (i = 0 to 3) Σ (j = 0 to 3) a_ij x^i y^j
where the sixteen coefficients are determined from the sixteen equations in sixteen unknowns that can be written using the sixteen nearest neighbours of point (x, y). If the limits of both summations are 0 to 1, then the bicubic transform reduces to the bilinear one.

5. What is the resolution of an image? Compute the size of a 640 x 480 image at 240 pixels per inch. [WBUT 2012]
Answer:
1st Part: The maximum number of points that can be displayed without overlap on a CRT is its resolution.
2nd Part: Size of image = 640/240 by 480/240 = 8/3 inches by 2 inches.

6. What do you mean by Digitization? Explain its two important steps. [WBUT 2013]
Answer:
1st Part: The method of converting an image which is continuous in space as well as in its value into a discrete numerical form is called digitization.
2nd Part: The two important steps of digitization are: sampling and quantization.
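The two steps can be illustrated numerically. Below is a minimal sketch (assuming NumPy; `quantize` is an illustrative helper, not a library function, and a 1D ramp stands in for one image scan line):

```python
import numpy as np

def quantize(samples, levels, vmax=1.0):
    """Map each sampled value in [0, vmax] onto one of `levels`
    equally spaced discrete values (uniform quantization)."""
    q = np.round(samples / vmax * (levels - 1))
    return q * vmax / (levels - 1)

# Sampling: measure a continuous ramp at regular space intervals.
x = np.linspace(0.0, 1.0, 11)
# Quantization: map each measurement to one of 4 discrete levels.
print(quantize(x, 4))
```

After quantization only four distinct values remain, however finely the ramp was sampled.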
Taking measurements at regular space intervals is called sampling. Mapping each measured intensity value to one of a finite number of discrete levels is called quantization.

7. Write down the key stages in Digital Image Processing & explain them. [WBUT 2013, 2017]
OR,
With a neat sketch explain briefly the components of an image processing system. [WBUT 2016]
Answer:
[The original page shows a block diagram of the key stages, from the problem domain through image acquisition, enhancement, restoration, segmentation and compression.]
Image acquisition is the first step of digital image processing. This step involves preprocessing such as scaling. Here sampling and quantization are required to convert the continuous sensed data into a digital form.
Image enhancement techniques convert the image to a form better suited for analysis by a human or machine. Common enhancement techniques are point operations and spatial operations, which include contrast stretching, filtering, noise clipping etc.
Image restoration improves the quality of an image acquired by optical means.
Image segmentation defines a set of regions that are connected and non-overlapping, so that each pixel in a segment of the image acquires a unique region label indicating the region it belongs to.
Image compression helps to reduce the redundancies in raw image data in order to reduce the storage and communication bandwidth.

8. What is an 8 bit colour image? For what purpose could it be used? Explain. [WBUT 2013, 2016, 2018]
Answer:
In the RGB model, each color appears in its primary spectral components of red, green and blue. This model is based on the Cartesian coordinate system. The different colors in this model are points on or inside the cube and are defined by vectors extending from the origin. For convenience, the assumption is that all color values have been normalized so that the cube is the unit cube, i.e., all values of R, G and B are assumed to be in the range [0, 1]. Images represented in the RGB color model consist of three component images, one for each primary color.
When fed into an RGB monitor, these three images combine on the screen to produce a composite color image. The number of bits used to represent each pixel in RGB space is called pixel depth. Consider an RGB image in which each of the red, green and blue images is an 8-bit image. Under these conditions each RGB color pixel is said to have a depth of 24 bits (3 image planes times the number of bits per plane). The term full color image is often used to denote a 24-bit RGB color image. The total number of colors in a 24-bit RGB image is (2^8)^3 = 16,777,216.
Many monitor displays use color maps with 8-bit index numbers, meaning that they can only display 256 different colors at any one time. Thus it is often wasteful to store more than 256 different colors in an image anyway, since it will not be possible to display them all on screen. Because of this, many image formats use 8-bit color maps to restrict the maximum number of different colors to 256. Using this method, it is only necessary to store an 8-bit index into the color map for each pixel, rather than the full 24-bit color value.

9. What is Weber Ratio? Show the variation of Weber ratio for humans. [WBUT 2014, 2017]
Answer:
The ratio of the increment of illumination to the background illumination is called the Weber ratio. If the ratio is small, then only a small percentage change in intensity is needed, i.e. good brightness discrimination. If the ratio is large, then a large percentage change in intensity is needed, i.e. poor brightness discrimination.

10. a) Differentiate between image and scene. [WBUT 2016]
b) What are image sensors and how are they used?
Answer:
a) The process of receiving and analyzing visual information by the human species is referred to as a scene. Similarly, the process of receiving and analyzing visual information or a scene by a digital computer is called image processing, i.e. when a scene is fed to the computer it is called an image, because the computer stores and processes a numerical image of a scene.
b) An image sensor or imaging sensor is a sensor that detects and conveys the information that constitutes an image. It does so by converting the variable attenuation of light waves into signals, small bursts of current that convey the information. The waves can be light or other electromagnetic radiation. Image sensors are used in electronic imaging devices of both analog and digital types, which include digital cameras, camera modules, medical imaging equipment, night vision equipment such as thermal imaging devices, radar, sonar and others. As technology changes, digital imaging tends to replace analog imaging. Early analog sensors for visible light were video camera tubes. Currently used types are CCDs (charge-coupled devices) or active pixel sensors in CMOS or NMOS technologies. Digital sensors include flat panel detectors.

11. a) What do you mean by aliasing in the context of image sampling? Explain.
b) What do you mean by the term 'image file format'? Mention some of the frequently used image file formats. [WBUT 2016]
Answer:
a) Aliasing is a process in which high frequency components of a continuous function "masquerade" as lower frequencies in the sampled function. This is consistent with the common use of the term 'alias', which means "a false identity". Unfortunately, except for some special cases, aliasing is always present in sampled signals because, even if the original sampled function is band limited, infinite frequency components are introduced the moment we limit the duration of the function. We conclude that aliasing is an inevitable fact of working with sampled records of finite length. In practice, the effects of aliasing can be reduced by smoothing the input function to attenuate its higher frequencies. This process, called anti-aliasing, has to be done before the function is sampled, because aliasing is a sampling issue that cannot be undone after the fact using computational techniques.
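The "masquerading" effect is easy to demonstrate numerically. In this sketch (assuming NumPy), a sinusoid above the Nyquist rate produces exactly the same samples as a low-frequency one:

```python
import numpy as np

fs = 8.0           # sampling rate: 8 samples per unit time
n = np.arange(16)  # sample indices
# A 9 Hz sinusoid sampled at 8 Hz: sin(2*pi*9*n/8) = sin(2*pi*n + 2*pi*n/8),
# which at these sample points is indistinguishable from a 1 Hz sinusoid.
high = np.sin(2 * np.pi * 9 * n / fs)
low = np.sin(2 * np.pi * 1 * n / fs)
print(np.allclose(high, low))  # the 9 Hz signal aliases to 1 Hz
```

Once the samples are taken, no computation can tell the two signals apart, which is why anti-aliasing must happen before sampling.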
b) Images may be stored in a variety of file formats. Each file format is characterized by a specific compression type and color depth. The choice of file format depends on the final image quality required (e.g. some file formats support only 256 colors) and the import capabilities of the authoring system. The most popular file formats are: BMP (Bitmap), JPEG (Joint Photographic Experts Group), TIFF (Tagged Image File Format), GIF (Graphics Interchange Format) etc. BMP supports RGB, Grayscale and Bitmap color modes. JPEG is commonly used to display photographs and other continuous tone images in HTML documents over the WWW and other online services. TIFF is used to exchange files between applications and computer platforms. It is a flexible bitmap image format supported by virtually all paint, image editing and page layout applications.

12. a) Discuss one technique of image acquisition. [WBUT 2017]
b) What is meant by classification of image?
Answer:
a) Refer to Question No. 3 (2nd Part) of Short Answer Type Questions.
b) A broad group of digital image processing techniques is directed towards image classification, the automated grouping of all or selected land cover features into summary categories. Classification approaches can be implemented to classify the total scene content into a limited number of major classes. Classification approaches can also be implemented to distinguish one or more specific classes of terrain (such as water bodies, paved surfaces, irrigated agriculture, forest cutting or other types of disturbance) within the landscape. The results of such classification can be used to spatially direct the efforts of subsequent digital operations or detailed visual interpretation.

13. Brightness discrimination is poor at low levels of illumination. Explain.
[WBUT 2018]
Answer:
Brightness discrimination is poor (the Weber ratio is large) at low levels of illumination, and it improves significantly (the Weber ratio decreases) as background illumination increases. At low levels of illumination vision is carried out by the rods, whereas at high levels (showing better discrimination) vision is the function of the cones. This is shown by the plot of the Weber ratio against the log of background intensity.

14. Define digital image and explain image pixel. What are the storage requirements for (500 x 500) & (1024 x 1024) binary images? [WBUT 2018]
Answer:
1st Part: An image is said to be a digital image if it is in a computer readable format so that it can be stored in a computer for further processing.
2nd & 3rd Part: An image f(x, y) is said to be a digital image if its spatial coordinates (x, y) and the amplitude values are all finite and discrete quantities. A digital image is comprised of a finite number of elements called pixels or image pixels. Each image pixel has a specific location and value. Digital images require so much storage and computational power that progress in the field of digital image processing has been dependent on the development of digital computers and of supporting technologies that include data storage, display and transmission.
4th Part: A binary image needs 1 bit per pixel, so the storage requirement for a 500 x 500 binary image is 500 x 500 x 1 = 250,000 bits, and for a 1024 x 1024 binary image it is 1024 x 1024 x 1 = 1,048,576 bits.

1. Write short notes on the following:
a) Colour image processing [WBUT 2011]
b) Brightness adaptation [WBUT 2014, 2017]
c) Weber ratio [WBUT 2018]
Answer:
a) Colour image processing:
The use of colour in image processing is motivated by two principal factors. First, colour is a powerful descriptor that often simplifies object identification and extraction from a scene. Second, humans can discern thousands of colour shades and intensities, compared
This second factor is particularly important in manual image analysis. Colour image processing is divided into two major areas: full colour and pseudo colour processing, In the first category, the images in question typically are acquired with a full- colour sensor, such as a colour TV camera, In the second category, the problem is one of assigning a colour to a particular monochrome intensity. Until relatively recently, most digital colour image processing was done at the pseudo colour level. b) Brightness adaptation: The range of light intensity levels to which the human visual system can adopt is enormous, on the order of 10'°from the scotopic threshold (minimum low light condition) to the glare limit (maximum light level condition). There is considerable experimental evidence indicating that subjective brightness, brightness as perceived by the human visual system, is a logarithmic function of the light intensity incident on the eye. This characteristics is illustrated below: Glare limit ul Subjective brightness 6 4 20 2 4 Log of intensity (mL) It is a plot of light intensity Vs. subjective brightness. The log solid curve represents the range of intensities to which the visual system can adapt. In photopic vision alone, the range is about 10°. The transition from scotopic to photopic vision is gradual over the approximate range from 0.001 to 0.1 millilambert (—3 to -1 mL in the long scale), as illustrated by the double branches of the adaptation curve in this range. The visual system does not operate over such a wide dynamic range simultaneously. Rather it accomplishes this large variation by changes in its overall sensitivity, a phenomenon known as Brightness adaptation, The total range of intensity levels it can discriminate simultaneously is rather small compared with the total adaptation range. ¢) Weber ratio: Refer to Question No, 9 of Short Answer Type Questions. 1P-13POPULAR PUBLICATIONS GITAL IMAGE FORMATION 4. 
If a function f(x, y) is real and we have F(u, v) = 2D-FFT[f(x, y)], then [WBUT 2009, 2011]
a) F(u, v) contains only real parts
b) F(u, v) contains only imaginary parts
c) F(u, v) contains both real and imaginary parts
d) none of these are true
Answer: (c)

2. The classical Hough transform is concerned with the identification of [WBUT 2009, 2016, 2018]
a) lines in an image b) zeros in an image c) poles in an image d) none of these are true
Answer: (a)

3. Which one of the following transform coding systems (usually) does not decompose the input image into several sub-images before transform? [WBUT 2010]
a) Discrete Fourier transform coding
b) Walsh-Hadamard transform coding
c) Discrete Cosine transform coding
d) Wavelet transform coding
Answer: (d)

4. Faulty switching introduces [WBUT 2010]
a) Gaussian noise b) Rayleigh noise c) Gamma noise d) Impulse noise
Answer: (d)

5. Poor illumination introduces [WBUT 2010]
a) Gaussian noise b) Rayleigh noise c) Exponential noise d) Impulse noise
Answer: (a)

6. In the intensity distribution scale the background will of course be [WBUT 2014]
a) Lower intensity value b) Higher intensity value c) Medium intensity value d) None of these
Answer: (a)

7. The computation of Walsh coefficients involves [WBUT 2015, 2017]
a) only subtraction b) only addition c) addition and subtraction d) none of these
Answer: (c)

8. The transform which is widely used in the JPEG compression scheme is [WBUT 2015, 2017]
a) FFT b) IDFT c) Hadamard Transform d) Discrete Cosine Transform
Answer: (d)

9. Which of the following uses a 2 x 2 mask for edge detection? [WBUT 2017]
a) Sobel b) Roberts c) Prewitt d) Kirsch
Answer: (b)

10. Which of the following transforms is used for line detection in image processing? [WBUT 2017]
a) Hadamard b) Hough c) Haar d) Slant
Answer: (b)

11. Which of the following image processing transforms does not satisfy the separability property?
[WBUT 2017]
a) Walsh b) Fourier c) DCT d) Hotelling
Answer: (d)

Short Answer Type Questions

1. a) What is image sampling? [WBUT 2009, 2013]
b) Define saturation in digital image. [WBUT 2009]
Answer:
a) Digitization of the spatial coordinates (x, y) is called image sampling. It makes the image suitable for computer processing. An image function f(x, y) must be digitized both spatially and in magnitude.
b) The saturation is determined by the excitation purity and depends on the amount of white light mixed with the hue. A pure hue is fully saturated, i.e., no white light is mixed in. Hue and saturation together determine the chromaticity for a given colour. The hue and saturation tool controls colour: changing the hue alters the balance of the colour, while changing the saturation alters the strength of the colour.

2. Write down the various 2D transforms. [WBUT 2009]
Answer:
The various 2D transforms are: translation, scaling, rotation, shearing, reflection etc.
Translation: A translation moves an object to a different position on the screen. We can translate a point by adding the translation coordinates (tx, ty) to the original coordinates (x, y) to get the new coordinates (x', y'):
x' = x + tx;  y' = y + ty
Scaling: Scaling changes the size of the object. Let the original coordinates be (x, y) and the scaling factors be Sx and Sy in the x and y directions. The new coordinates will be x' = x Sx and y' = y Sy. In matrix form,
[x' y'] = [x y] [Sx 0; 0 Sy], i.e. P' = P S
where S is the scaling matrix [Sx 0; 0 Sy], P is the old coordinate matrix (x, y) and P' is the new coordinate matrix (x', y').
Rotation: In rotation, we rotate the object by a particular angle θ about its origin. The new coordinate position will be
[x' y'] = [x y] [cos θ sin θ; -sin θ cos θ], i.e. P' = P R
where R is the rotation matrix [cos θ sin θ; -sin θ cos θ], P' is the new coordinate and P is the old coordinate of the point.

3. What are the basic steps involved in image geometrical transformation? Develop the homogeneous form of this transformation.
[WBUT 2014, 2018]
Answer:
1st Part: A geometric transformation consists of two basic operations:
i) a spatial transformation of coordinates
ii) intensity interpolation that assigns intensity values to the spatially transformed pixels.
The transformation of coordinates may be expressed as
(x, y) = T{(v, w)}
where (v, w) are pixel coordinates in the original image and (x, y) are the corresponding pixel coordinates in the transformed image. For example, the transformation (x, y) = T{(v, w)} = (v/2, w/2) shrinks the original image to half its size in both spatial directions.
2nd Part: One of the most commonly used spatial coordinate transformations is the affine transformation, which has the general form
[x y 1] = [v w 1] T = [v w 1] [t11 t12 0; t21 t22 0; t31 t32 1]
This is the homogeneous form of the transformation. This transformation can scale, rotate, translate or shear a set of coordinate points depending on the values chosen for the elements of the matrix T.

4. State the applications of image transforms. What is energy compaction of an image transform? [WBUT 2014]
Answer:
1st Part: Image transforms usually refers to a class of unitary matrices used for representing images and are applicable to filtering, data compression, feature extraction and other analysis. Two dimensional transforms are applied to image enhancement, restoration, encoding and description.
2nd Part: Many common unitary transforms tend to pack a large fraction of the signal energy into a few transform coefficients, which is generally termed energy compaction.

5. If an image is rotated by an angle, will there be any change in the histogram of that image? Justify your answer. [WBUT 2015, 2017]
Answer:
No. Rotation only changes the spatial positions of the pixels; the intensity value of each pixel is unchanged. Since the histogram counts how many pixels have each intensity level, irrespective of where those pixels are located, the histogram of the rotated image is the same as that of the original image.

6. Explain CMY and CMYK colour model. [WBUT 2019]
Answer:
CMY stands for cyan, magenta and yellow, which are the secondary colours of light, or alternatively the primary colours of pigments. For example, when a surface coated with cyan pigment is illuminated with white light, no red light is reflected from the surface, i.e. cyan subtracts red light from reflected white light. In order to produce true black (which is the predominant colour in printing), a fourth colour, black, is added, giving rise to the CMYK colour model. So, CMYK is CMY plus black.

7. How is an image represented in digital formats? [WBUT 2019]
Answer:
The method of converting an image which is continuous in space as well as in its value into a discrete numerical form is called digitization. The steps for obtaining digital formats from analog formats are sampling and quantization.

Long Answer Type Questions

1. Briefly describe any three colour models. Write the conversion rules for converting the RGB colour model to the HSI colour model and vice-versa. How can a colour image be converted to a gray scale image? [WBUT 2009, 2010]
Answer:
1st Part:
RGB model: In the RGB model, an image consists of three independent image planes, one in each of the primary colours: red, green and blue. It is an additive model, i.e., the colours present in the light add to form new colours, and is appropriate for the mixing of coloured light. For example, red and green produce yellow, blue and green produce cyan, red and blue produce magenta, and red, green and blue produce white. The RGB model is used for colour monitors and most video cameras.
CMY model: The CMY (Cyan-Magenta-Yellow) model is a subtractive model appropriate to the absorption of colours, for example due to pigments in paints. The CMY model is used by printing devices and filters.
HSI model: [The original page shows the HSI colour solid, with white at the top of the intensity axis.]
CMY model: The CMY (Cyan-Magenta-Yellow) is a subtractive model appropriate to absorption of colours, for example due to pigments in paints. The CMY model is used by printing devices and filters. HSI model: WhiteIMAGE PROCESSING HSI (Hue-Saturation-Intensity) model is shown above. Hue is measured from red and: saturation is given by distance ftom the axis. Colours on the surface of the solid are fully saturated i.e., pure colours and the grey scale spectrum is on the axis of the solid. 2° Part: RGB to HSI: Suppose R,G and B are the red, green and blue values of a colour. The HSI intensity is given by the equation 1=(R+G+8)/3 Now, let m be the minimum value among R,G and B. The HSI saturation value of a colour is given by the equation S=l-m/I_ if I>0,or . $=0 if 7=0 To convert a colour’s overall hue, H , to an angle measure, n-oo'|R-3o-4a)) R’ +G? +B’ — RG- RB-GB ifG>B 1 2 where the inverse cosine output is in degrees. 1H =360-cs"|R- 6-$8)/ IR? +G> +B’ - RG-RB-GB if B>G HSI to RGB: If H=0,then R=/+2IS; G=1-IS; B=1-IS If 0
F(k, l)}, then for the sequence f(m - m0, n) the DFT becomes [WBUT 2016]
a) e^(j2πkm0/N) F(k, l) b) e^(j2πlm0/N) F(k, l) c) e^(-j2πkm0/N) F(k, l) d) e^(-j2πlm0/N) F(k, l)
Answer: (c)

5. The kernel for the 2D-DFT with N = 4 is calculated by the command [WBUT 2016]
a) fft2(4) b) ifft2(4) c) dftmtx(4) d) none of these
Answer: (c)

6. DCT is widely used for [WBUT 2016]
a) image degradation b) image compression c) image restoration d) none of these
Answer: (b)

Short Answer Type Questions

1. Compare one dimensional and two dimensional DFT. [WBUT 2009]
Answer:
The DFT (Discrete Fourier Transform) is the sampled Fourier transform and therefore contains all frequencies forming an image. The number of frequencies corresponds to the number of pixels in the spatial domain image.
In the 1D DFT, x goes up in steps of 1 and there are N samples, at values of x from 0 to N - 1:
F(u) = Σ (x = 0 to N-1) f(x) e^(-j2πux/N)
In the 2D DFT, for an image of size N x M in x and y we have
F(u, v) = (1/NM) Σ (x = 0 to N-1) Σ (y = 0 to M-1) f(x, y) e^(-j2π(ux/N + vy/M))
The 2D DFT has symmetric, periodic extensions and separability, and is a sampled Fourier transform in nature. The 2D DFT can be implemented as two consecutive 1D DFTs, first in the x direction and then in the y direction, or vice versa; thus the 2D DFT can be decomposed using the 1D DFT as a primitive. The 1D DFT is periodic in nature. In 1D, the DFT and unitary DFT matrices are symmetric, and the DFT or unitary DFT of a real sequence is conjugate symmetric about N/2. The 2D DFT has a conjugate symmetry and also has a periodic extension.

2. Prove that the imaginary part of the Fourier transform of an even function is zero. [WBUT 2010]
Answer:
Let the Fourier transform of f(x) be F(s). Then
F(s) = ∫ f(x) e^(isx) dx = ∫ f(x)(cos sx + i sin sx) dx = ∫ f(x) cos sx dx + i ∫ f(x) sin sx dx
The imaginary part of F(s) is
∫ (from -∞ to ∞) f(x) sin sx dx = ∫ (from -∞ to 0) f(x) sin sx dx + ∫ (from 0 to ∞) f(x) sin sx dx
In the first integral, put y = -x, so that dy = -dx and sin sx = sin(-sy) = -sin sy. Since f is even, f(-y) = f(y), and the imaginary part becomes
= -∫ (from 0 to ∞) f(y) sin sy dy + ∫ (from 0 to ∞) f(x) sin sx dx
Show that the Fourier transform of the autocorrelation function of f(x) is the power spectrum |F(u)|². [WBUT 2010]
Answer: The autocorrelation of f(x) is R(τ) = ∫_{−∞}^{∞} f*(x) f(x + τ) dx. Taking its Fourier transform,
ℱ{R(τ)} = ∫∫ f*(x) f(x + τ) e^(−j2πuτ) dx dτ
Substituting t = x + τ (so that τ = t − x) separates the double integral:
ℱ{R(τ)} = ∫ f*(x) e^(j2πux) dx · ∫ f(t) e^(−j2πut) dt = F*(u) F(u) = |F(u)|²
which is the power (energy density) spectrum.

4. Discuss briefly the usefulness of the discrete cosine transform. [WBUT 2014, 2015]
Answer: Discrete Cosine Transforms (DCTs) are used to convert data into a summation of a series of cosine waves oscillating at different frequencies. They are widely used in image and audio compression. They are very similar to Fourier transforms, but the DCT involves only cosine functions and real coefficients, whereas Fourier transforms make use of both sines and cosines and require complex numbers; DCTs are therefore simpler to calculate. Both the Fourier transform and the DCT convert data from the spatial domain into the frequency domain, and their inverse functions convert it back.

5. What is the Hough transform and where is it used? [WBUT 2011, 2014, 2015]
OR, Discuss the Hough transform method for edge linking. [WBUT 2012, 2017]
OR, Explain the Hough transformation and describe its application in image processing. [WBUT 2013]
Answer: The Hough transform is a technique which can be used to isolate features of a particular shape within an image, because it requires that the desired features be specified in some parametric form. The classical Hough transform is most commonly used for the detection of regular curves such as lines, circles and ellipses. A generalized Hough transform can be employed in applications where a simple analytic description of a feature is not possible. The Hough transform is particularly useful for computing a global description of a feature (where the number of solution classes need not be known a priori) from local measurements.
The Hough transform can be used to identify the parameters of the curve which best fits a set of given edge points. It is used to detect curves in pictures.

6. Define 4-adjacency, 8-adjacency and m-adjacency. Consider the two image subsets S1 and S2 shown below (binary grids as printed in the question). For V = {1}, determine whether S1 and S2 are (i) 4-connected (ii) 8-connected (iii) m-connected. [WBUT 2012]
Answer: 1st Part:
4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N₄(p).
8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N₈(p).
m-adjacency: Two pixels p and q with values from V are m-adjacent if q is in N₄(p), or q is in N_D(p) and the set N₄(p) ∩ N₄(q) has no pixels whose values are from V.
2nd Part: Let p and q be as shown in Fig. 1. Then (i) S1 and S2 are not 4-connected, because q is not in the set N₄(p); (ii) S1 and S2 are 8-connected, because q is in the set N₈(p); and (iii) S1 and S2 are m-connected, because (a) q is in N_D(p) and (b) the set N₄(p) ∩ N₄(q) is empty.

7. Draw the schematic diagram of the 2-D DWT synthesis filter bank structure for the Haar wavelet transform and explain the components. [WBUT 2013]
Answer: To use the wavelet transform for image processing we must implement a 2-D version of the analysis and synthesis filter banks. In the 2-D case, the 1-D analysis filter bank is first applied to the columns of the image and then applied to the rows. If the image has N₁ rows and N₂ columns, then after applying the 1-D analysis filter bank to each column we have two sub-band images, each having N₁/2 rows and N₂ columns; after applying the 1-D analysis filter bank to each row of both sub-band images, we have four sub-band images, each having N₁/2 rows and N₂/2 columns. This is illustrated in the diagram below. The 2-D synthesis filter bank combines the four sub-band images to obtain the original image of size N₁ by N₂.
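The two-pass (one axis, then the other) filtering just described can be sketched with the orthonormal Haar analysis filters; a minimal pure-Python illustration for an even-sized image stored as a list of lists (all function names are mine, and the order of the row/column passes is interchangeable):

```python
import math

S = math.sqrt(2.0)

def haar_step(x):
    # One 1-D Haar analysis stage: orthonormal low-pass (sum / sqrt 2) and
    # high-pass (difference / sqrt 2), each downsampled by 2.
    lo = [(x[2 * i] + x[2 * i + 1]) / S for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / S for i in range(len(x) // 2)]
    return lo, hi

def columns(img):
    # Transpose a list-of-lists image.
    return [list(c) for c in zip(*img)]

def haar2d_level(img):
    # Filter the rows, then the columns, yielding four half-size sub-bands.
    L, H = [], []
    for row in img:
        lo, hi = haar_step(row)
        L.append(lo)
        H.append(hi)
    def col_pass(sub):
        los, his = [], []
        for col in columns(sub):
            lo, hi = haar_step(col)
            los.append(lo)
            his.append(hi)
        return columns(los), columns(his)
    LL, LH = col_pass(L)   # approximation and one detail orientation
    HL, HH = col_pass(H)   # the other detail orientation and diagonal detail
    return LL, LH, HL, HH
```

Each call halves both dimensions, producing the approximation sub-band plus three detail sub-bands, exactly the four quarter-size images the synthesis bank later recombines.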
[Figure: the 1-D analysis filter bank. The input X is passed through analysis filter 1 (low-pass) to give c, and through analysis filter 2 (high-pass) to give d.]
We denote the input signal by X. The signal c represents the low-frequency part of X, while the signal d represents the high-frequency part. The analysis filter bank first filters X using a low-pass and a high-pass filter; we denote the low-pass filter by af₁ (analysis filter 1) and the high-pass filter by af₂ (analysis filter 2).

8. What is m-connectivity among pixels? Give an example. [WBUT 2015, 2018]
Answer: Let V be a set of grey-level values used to define connectivity. Two pixels p and q with values from V are m-connected if
i) q is in N₄(p) (4-connectivity), or
ii) q is in N_D(p) (diagonal connectivity) and the set N₄(p) ∩ N₄(q) is empty.
[Figure: 8-connectivity example.]

9. Define connectivity. What is the difference between 8-connectivity and m-connectivity? What are the different methods for calculating the distance between pixels? Explain with relevant examples. [WBUT 2019]
Answer: 1st Part: Connectivity between pixels is a fundamental concept that simplifies the definition of digital image concepts such as regions and boundaries. To establish whether two pixels are connected, it must be determined whether they are neighbours and whether their grey levels satisfy a specified criterion of similarity. Connectivity between pixels is used for establishing the boundary of an object and the components of a region in an image.
2nd Part: Two pixels p and q with values from V are 8-connected if q is in the set N₈(p) (V being the set of grey-level values). Two pixels p and q with values from V are m-connected if
i) q is in the set N₄(p), or
ii) q is in N_D(p) and the set N₄(p) ∩ N₄(q) is empty.
m-connectivity is introduced to eliminate the multiple-path connections that often arise when 8-connectivity is used.
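These adjacency definitions can be checked mechanically. A small sketch, assuming the image is stored as a dict mapping (row, col) to grey value (all names are illustrative):

```python
def n4(p):
    # The four 4-neighbours of pixel p = (row, col).
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def nd(p):
    # The four diagonal neighbours of p.
    r, c = p
    return {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)}

def n8(p):
    return n4(p) | nd(p)

def m_adjacent(p, q, V, img):
    # m-adjacency: q in N4(p), or q in ND(p) with N4(p) intersect N4(q)
    # containing no pixel whose value is in V.
    in_V = lambda s: {t for t in s if t in img and img[t] in V}
    if img.get(p) not in V or img.get(q) not in V:
        return False
    if q in n4(p):
        return True
    return q in nd(p) and not (in_V(n4(p)) & in_V(n4(q)))

# Hypothetical 3x3 example with V = {1}:
#   0 1 1
#   0 1 0
#   0 0 1
img = {(r, c): v
       for r, row in enumerate([[0, 1, 1], [0, 1, 0], [0, 0, 1]])
       for c, v in enumerate(row)}
```

In this example the diagonal pair (0, 2) and (1, 1) fails the m-adjacency test precisely because a 4-path between them already exists through their shared 4-neighbour (0, 1), which is the ambiguity m-connectivity is designed to remove.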
3rd Part: The different methods for calculating the distance between pixels are:
i) Euclidean distance
ii) D₄ distance (city-block distance)
iii) D₈ distance (chessboard distance)
4th Part: For pixels p and q with coordinates (x, y) and (s, t):
The Euclidean distance is Dₑ(p, q) = [(x − s)² + (y − t)²]^(1/2).
The city-block distance is D₄(p, q) = |x − s| + |y − t|.
The chessboard distance is D₈(p, q) = max(|x − s|, |y − t|).
Example: the pixels with D₄ distance ≤ 2 from the centre point (x, y) form diamond-shaped contours of constant distance; the pixels with D₄ = 1 are the 4-neighbours of (x, y).
Example: the pixels with D₈ distance ≤ 2 from (x, y) form square contours of constant distance, hence the name chessboard distance.

Long Answer Type Questions

1. a) Discuss global processing via the Hough transform. [WBUT 2011]
b) Explain the role of the discrete cosine transform in image processing. [WBUT 2011, 2014]
Answer:
a) A straight line at a distance s and orientation θ (Fig. (a)) can be represented as
s = x cos θ + y sin θ   ... (1)
[Fig. (a): straight line. Fig. (b): its Hough transform, the point (s, θ).]
The Hough transform of this line is just a point in the (s, θ) plane; i.e., all the points on this line map into a single point (Fig. (b)). This fact can be used to detect straight lines in a given set of boundary points. Suppose we are given boundary points (xᵢ, yᵢ) for i = 1, 2, ..., N. For some chosen quantized values of the parameters s and θ, map each (xᵢ, yᵢ) into the (s, θ) space and count C(s, θ), the number of edge points that map into the location (s, θ); i.e., set C(sₖ, θₖ) = C(sₖ, θₖ) + 1 if xᵢ cos θₖ + yᵢ sin θₖ = sₖ. Then the local maxima of C(s, θ) give the different straight-line segments through the edge points. This two-dimensional search can be reduced to a one-dimensional search if the gradient direction θ_g at each edge location is also known. Differentiating both sides of Eqn. (1) with respect to x, we get
dy/dx = −cot θ = tan(θ + 90°)
Hence C(s, θ) need be evaluated only at the angle θ determined by the local gradient direction θ_g. The Hough transform can also be generalized to detect curves other than straight lines.
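The voting scheme of part (a) can be sketched in a few lines of pure Python; the quantization choices (180 angle bins, 100 distance bins over a fixed range) are mine:

```python
import math

def hough_lines(points, n_theta=180, n_s=100, s_max=100.0):
    # Accumulate C(s, theta) over quantized parameters: each edge point
    # (x, y) votes, for every theta bin, for the cell whose distance bin
    # matches s = x*cos(theta) + y*sin(theta), as in Eqn. (1).
    C = [[0] * n_s for _ in range(n_theta)]
    for x, y in points:
        for ti in range(n_theta):
            t = math.pi * ti / n_theta
            s = x * math.cos(t) + y * math.sin(t)
            si = int(round((s + s_max) * (n_s - 1) / (2 * s_max)))
            if 0 <= si < n_s:
                C[ti][si] += 1
    return C

# Ten collinear points on the line y = 5 (s = 5 at theta = 90 degrees).
pts = [(x, 5) for x in range(10)]
C = hough_lines(pts)
peak = max((C[t][s], t, s) for t in range(180) for s in range(100))
```

All ten collinear points pile their votes into one accumulator cell in the theta = 90° row, and that local maximum identifies the line.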
b) The discrete cosine transform (DCT) helps separate the image into parts (or spectral sub-bands) of differing importance with respect to the image's visual quality. The DCT is similar to the discrete Fourier transform: it transforms a signal or image from the spatial domain to the frequency domain.
The general equation for the one-dimensional DCT of a sequence f(i), i = 0, 1, ..., N−1, is
F(u) = (2/N)^(1/2) Λ(u) Σ_{i=0}^{N−1} f(i) cos[(2i + 1)uπ / 2N]
and the corresponding inverse one-dimensional DCT is simply F⁻¹(u), i.e.
f(i) = (2/N)^(1/2) Σ_{u=0}^{N−1} Λ(u) F(u) cos[(2i + 1)uπ / 2N]
where Λ(u) = 1/√2 for u = 0 and Λ(u) = 1 otherwise.
The general equation for the 2-D DCT of an N by M image is
F(u, v) = (2/N)^(1/2) (2/M)^(1/2) Λ(u) Λ(v) Σ_{i=0}^{N−1} Σ_{j=0}^{M−1} f(i, j) cos[(2i + 1)uπ / 2N] cos[(2j + 1)vπ / 2M]
and the corresponding inverse 2-D DCT transform is simply F⁻¹(u, v), with Λ as above.
The basic operation of the DCT is as follows:
• The input image is N by M.
• f(i, j) is the intensity of the pixel in row i and column j.
• F(u, v) is the DCT coefficient in row u and column v of the DCT matrix.
• For most images, much of the signal energy lies at low frequencies; these appear in the upper-left corner of the DCT.
• Compression is achieved since the lower-right values represent higher frequencies and are often small.
• The DCT input is typically an 8×8 array of integers; the array contains each pixel's grey-scale level, and 8-bit pixels have levels from 0 to 255.

2. What is a unitary transform? Define and compute the equations for the unitary transform, Fourier transform and inverse Fourier transform for both 1-D and 2-D images. [WBUT 2014, 2018, 2019]
Answer: A unitary transform is a specific type of linear transformation in which the basic linear operation is exactly invertible and the operator kernel satisfies certain orthogonality conditions. The forward unitary transform of an N₁ × N₂ image array F(n₁, n₂) results in an N₁ × N₂ transformed image array defined by
F(m₁, m₂) = Σ_{n₁=0}^{N₁−1} Σ_{n₂=0}^{N₂−1} F(n₁, n₂) A(n₁, n₂; m₁, m₂)
where A(n₁, n₂; m₁, m₂) represents the forward transform kernel.
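A quick numeric illustration of the orthogonality condition: the DFT kernel scaled by 1/√N is unitary, i.e. A multiplied by its conjugate transpose gives the identity. This is a sketch of my own, with illustrative names, not the book's worked answer:

```python
import cmath

def unitary_dft_matrix(N):
    # A(k, n) = (1/sqrt(N)) * exp(-j*2*pi*k*n/N), the unitary DFT kernel.
    s = N ** 0.5
    return [[cmath.exp(-2j * cmath.pi * k * n / N) / s for n in range(N)]
            for k in range(N)]

def times_conj_transpose(A):
    # Compute A times its conjugate transpose (should be the identity
    # matrix when A is unitary).
    N = len(A)
    return [[sum(A[i][k] * A[j][k].conjugate() for k in range(N))
             for j in range(N)] for i in range(N)]
```

Running `times_conj_transpose(unitary_dft_matrix(4))` gives the 4×4 identity to within floating-point error, which is exactly the "exactly invertible" property: the inverse transform is just the conjugate-transposed kernel.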
The one-dimensional discrete Fourier transform of a sequence {u(n), n = 0, ..., N−1} is defined as
V(k) = Σ_{n=0}^{N−1} u(n) W_N^(kn),  k = 0, 1, ..., N−1,  where W_N = e^(−j2π/N)
The inverse transform is given by
u(n) = (1/N) Σ_{k=0}^{N−1} V(k) W_N^(−kn),  n = 0, 1, ..., N−1
The two-dimensional DFT of an N × N image {u(m, n)} is a separable transform defined as
V(k, l) = Σ_{m=0}^{N−1} Σ_{n=0}^{N−1} u(m, n) W_N^(km) W_N^(ln),  0 ≤ k, l ≤ N−1
and the inverse transform is
u(m, n) = (1/N²) Σ_{k=0}^{N−1} Σ_{l=0}^{N−1} V(k, l) W_N^(−km) W_N^(−ln),  0 ≤ m, n ≤ N−1
The inverse unitary transform in two dimensions is
F(n₁, n₂) = Σ_{m₁} Σ_{m₂} F(m₁, m₂) B(n₁, n₂; m₁, m₂)
where B is the inverse transform kernel.

3. Draw the schematic of the 2-D DWT synthesis filter bank for the Haar wavelet transform and explain the components. [WBUT 2019]
Answer: Like the 1-D discrete wavelet transform (DWT), the 2-D DWT can be implemented using digital filters and downsamplers.
[Figure (a): the analysis filter bank. The rows (along m) are convolved with h_ψ(−n) and h_φ(−n) and downsampled by 2; the columns are then processed the same way.]
With separable two-dimensional scaling and wavelet functions, we simply take the 1-D FWT of the rows of f(x, y), followed by the 1-D FWT of the resulting columns. The figure shows the process in block-diagram form.
[Figure (b): the resulting decomposition of W_φ(j+1, m, n) into W_φ(j, m, n), W_ψ^H(j, m, n), W_ψ^V(j, m, n) and W_ψ^D(j, m, n).]
[Figure (c): the synthesis filter bank.]
Note that, like its one-dimensional counterpart, the 2-D FWT "filters" the scale j+1 approximation coefficients to construct the scale j approximation and detail coefficients. In the two-dimensional case, however, we get three sets of detail coefficients: the horizontal, vertical and diagonal details. The single-scale filter bank of Fig. (a) can be iterated to produce a P-scale transform in which scale j takes the values J−1, J−2, ..., J−P. As in the one-dimensional case, the image f(x, y) is used as the W_φ(J, m, n) input. Convolving its rows with h_φ(−n) and h_ψ(−n) and downsampling its columns, we get two sub-images whose horizontal resolution is reduced by a factor of 2.
The high-pass, or detail, component characterizes the image's high-frequency information with vertical orientations; the low-pass, approximation, component contains its low-frequency vertical information. Both sub-images are then filtered column-wise and downsampled to yield four quarter-size output sub-images: W_φ, W_ψ^H, W_ψ^V and W_ψ^D. These sub-images, shown in the middle of Fig. (b), are the inner products of f(x, y) with the two-dimensional scaling and wavelet functions, followed by downsampling by two in each dimension. Two iterations of the filtering process produce the two-scale decomposition at the far right of Fig. (b). Fig. (c) shows the synthesis filter bank that reverses the process just explained. As would be expected, the reconstruction algorithm is similar to the one-dimensional case: at each iteration, the four scale-j approximation and detail sub-images are upsampled and convolved with two one-dimensional filters, one operating on the sub-image's columns and the other on its rows. Addition of the results yields the scale j+1 approximation, and the process is repeated until the original image is reconstructed.

4. Write short notes on the following:
a) Fourier descriptor [WBUT 2008]
b) DCT [WBUT 2009] OR, DCT in Image Compression [WBUT 2019]
c) Hadamard transform [WBUT 2011]
d) Walsh Transform [WBUT 2011, 2014, 2016]
e) Discrete cosine transforms [WBUT 2012]
f) Piecewise Linear transformation [WBUT 2015]
g) Hotelling Transformation [WBUT 2016]
Answer:
a) Fourier descriptor: Fourier descriptors are a method used in object recognition and image processing to represent the boundary shape of a segment in an image. The first few terms in a Fourier series provide the basis of the descriptor. This type of object descriptor is useful for recognition tasks because it can be designed to be independent of scaling, translation or rotation.
The Fourier descriptors of a shape are calculated as follows:
1) Find the coordinates of the edge pixels of the shape and put them in a list, in order, going clockwise around the shape.
2) Define a complex-valued vector using the coordinates obtained.
3) Take the discrete Fourier transform of this vector; the resulting coefficients are the Fourier descriptors.
Fourier descriptors inherit several properties from the Fourier transform, such as their behaviour under translation, scaling and rotation.

b) DCT: Refer to Question No. 1(b) of Long Answer Type Questions.

c) Hadamard transform: The Hadamard transform H_m is a 2^m × 2^m matrix, the Hadamard matrix (scaled by a normalization factor), that transforms 2^m real numbers x_n into 2^m real numbers X_k. The Hadamard transform can be defined in two ways: recursively, or by using the binary representation of the indices n and k. The lowest-order Hadamard matrix is of order 2:
H₂ = [1 1; 1 −1]
If H_M is a Hadamard matrix of order M, then
H_2M = [H_M H_M; H_M −H_M]
is also a Hadamard matrix, of order 2M. Unlike the other transforms, the elements of the basis vectors of the Hadamard transform take only the binary values ±1 and are therefore well suited to digital signal processing. The Hadamard transform H is real, symmetric and orthogonal, i.e. H = H* = Hᵀ = H⁻¹. It is a fast transform and has good to very good energy compaction for highly correlated images. It is faster than the sinusoidal transforms, since no multiplications are required, which makes it useful for digital hardware implementations of image processing algorithms. Application areas include image data compression, filtering and the design of codes.

d) Walsh Transform: When N = 2ⁿ, the discrete Walsh transform of a function f(x), denoted W(u), is obtained by substituting the kernel
g(x, u) = (1/N) Π_{i=0}^{n−1} (−1)^(b_i(x) b_{n−1−i}(u))
into the general transform expression, giving
W(u) = (1/N) Σ_{x=0}^{N−1} f(x) Π_{i=0}^{n−1} (−1)^(b_i(x) b_{n−1−i}(u))
where b_k(z) is the k-th bit in the binary representation of z. The array formed by the Walsh transformation kernel is a symmetric matrix whose rows and columns are orthogonal.
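The Walsh kernel and its claimed symmetry and orthogonality can be verified numerically; a small pure-Python sketch (helper names are mine):

```python
def bit(z, k):
    # b_k(z): the k-th bit in the binary representation of z.
    return (z >> k) & 1

def walsh_kernel(N):
    # g(x, u) = (1/N) * prod_{i=0}^{n-1} (-1)^(b_i(x) * b_{n-1-i}(u)),
    # for N = 2^n, as in the definition above.
    n = N.bit_length() - 1
    sign = lambda x, u: (-1) ** sum(bit(x, i) * bit(u, n - 1 - i)
                                    for i in range(n))
    return [[sign(x, u) / N for u in range(N)] for x in range(N)]

G = walsh_kernel(8)  # the 8x8 kernel matrix
```

Checking the 8×8 matrix confirms both stated properties: G equals its own transpose, and distinct rows have zero inner product (each row has squared norm 1/N, reflecting the 1/N factor in the forward kernel).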
Thus the inverse Kemel is identical to the forward Kernel except fora constant multiplicative factor of nt (xu) =] ] Dbi(2)o0), Wash transform kernel for N =8 Thus the inverse Walsh transform is given by f(xy=Sarw)f | -Yoie)6(W), = oy il Walsh transform consists of a series expansion of basic functions, whose values are +1 or ao 1P-33POPULAR PUBLICATIONS e) Discrete cosine transforms: Discrete Cosine Transform (DCT) is preferred for compression because it concentrates the largest percentage of the signal energy is a small percentage of the co-efficient, especially in the case of signals having high spatial correlation. DCT is a real transform in contrast to DFT (Discrete Fourier Transform) that is complex transform in general and thus, only real calculations are required for DCT computation. PT is used in compression standards such as JPEG due to its very strong compression properties. There are chips that implement DCT in hardware. f) Piecewise Linear transformation: The principal advantage of piecewise linear transformation is that these function can be arbitrarily complex. But their specification requires considerably more user input. Contrast stretching is the simplest piecewise linear transformation we may have various low contrast images and that might result due to various reasons such as lack of illumination, problem in imaging sensor or wrong setting of lens aperture during image acquisition. The idea behind contrast stretching is to increase the dynamic range of gray levels in the image being’ processed. Figure shows a typical contrast stretching transformation which can be expressed as if | v={B(u-a)+v,, asu
Here β > 1 gives a mid-region stretch and γ > 1 a bright-region stretch, with typical breakpoints a = L/3 and b = 2L/3. The parameters a and b can be obtained by examining the histogram of the image.

g) Hotelling Transformation: The Hotelling transform is a conventional image processing transformation. Suppose the points x_k, k = 1, 2, ..., M, are the feature points that were found. The mean vector and covariance matrix are calculated as
m_x = (1/M) Σ_{k=1}^{M} x_k
C_x = (1/M) Σ_{k=1}^{M} (x_k − m_x)(x_k − m_x)ᵀ
[Figure: eigen-analysis.]
C_x is the covariance matrix calculated from the mean vector and the summation over every feature point. After obtaining the covariance matrix, eigen-analysis can be used to find its eigenvectors and eigenvalues; these eigenvectors and eigenvalues represent the characteristics of the image. The eigen-sets of the original image and the attacked one can then be used to determine what kind of geometric attack has occurred.

IMAGE ENHANCEMENT

Multiple Choice Type Questions

1. Edge detection of an image broadly means [WBUT 2009, 2016, 2018]
a) low spatial frequency enhancement b) high spatial frequency enhancement
c) thresholding low spatial frequencies d) detection of intensity variation
Answer: (d)

2. To obtain the impulse response of a filter, the input impulse image should be like [WBUT 2010]
b) a totally black image of size M×N
c) a white dot in the centre of a black image of size M×N
d) a black dot in the centre of a white image of size M×N
Answer: (c)

3. If the image is degraded by motion blur and added noise, then ____ gives the best result. [WBUT 2010]
a) median filter b) inverse filter c) Wiener filter d) constrained least-squares filter
Answer: (c)

4. A diagonal edge can be detected by using which of the following masks? [WBUT 2010]
Answer: (c)

5.
Which of the following grey-level transformations produces an image negative? [WBUT 2010]
a) s = c log(1 + r) b) s = L − 1 − r c) s = c rᵞ d) s_k = Σ_{j=0}^{k} n_j / n, k = 0, 1, 2, ..., L−1
Answer: (b)

6. The Wiener filter is used for [WBUT 2012, 2013, 2018]
a) restoration b) smoothing c) sharpening d) none of these
Answer: (a)

7. The effect caused by the use of an insufficient number of grey levels in smooth areas of a digital image is called [WBUT 2012, 2018]
a) false contouring b) grey-level slicing c) bit plane d) thinning
Answer: (a)

8. Consider an image of size M × N with 64 grey levels. The total number of bits required to store this digitized image is [WBUT 2012]
a) M × N × 64 b) M × N × 63 c) M × N × 6 d) M × N × 8
Answer: (c), since 64 grey levels need log₂ 64 = 6 bits per pixel.

9. Periodic noise occurs due to [WBUT 2012]
a) infinite frequency b) electrical and electromechanical interference during acquisition
Answer: (b)

10. An averaging filter is used for [WBUT 2013, 2018]
a) sharpening b) contrast c) brightness d) smoothing
Answer: (d)

11. Which of the following is improved by the histogram technique? [WBUT 2013]
a) Contrast b) Sharpness c) Brightness d) Both (a) & (b)
Answer: (a)

12. Image restoration is a/an [WBUT 2013]
a) subjective process b) objective process c) both (a) & (b) d) none of these
Answer: (b)

13. Measuring the intensity value of a fixed pixel including the effect of its neighbourhood is called [WBUT 2014]
a) averaging b) spatial filtering c) both (a) & (b) d) none of these
Answer: (c)

14. Usually, frequency-domain operations are [WBUT 2015]
a) global operations b) mask operations c) point operations d) none of the above
Answer: (a)

15. If the size of the mask used for averaging is increased, the image will be [WBUT 2016]
a) noise free b) blurred c) degraded d) none of these
Answer: (b)

16. Salt-and-pepper noise can be removed by a [WBUT 2016]
a) weighted averaging filter b) Gaussian filter c) median filter d) high-boost filter
Answer: (c)

17.
The histogram equalization process [WBUT 2016]
a) blurs the image b) fades the image c) improves the brightness of the image d) none of these
Answer: (c)

18. A spatial averaging filter in which all the coefficients are equal is called a [WBUT 2018]
a) weighted averaging filter b) median filter c) box filter d) none of these
Answer: (c)

19. What is the basis of numerous spatial-domain processing techniques? [WBUT 2019]
a) Transformation b) Scaling c) Histogram d) None of these
Answer: (c)

20. The objective of the sharpening filter is to [WBUT 2019]
a) highlight intensity transitions b) highlight low transitions c) highlight bright transitions d) highlight colour transitions
Answer: (a)

21. Which image processing technique is used to improve the quality of an image for human viewing? [WBUT 2019]
a) Compression b) Enhancement c) Restoration d) Analysis
Answer: (b)

22. The histogram is a technique processed in the [WBUT 2019]
a) intensity domain b) frequency domain c) spatial domain d) undefined domain
Answer: (c)

Short Answer Type Questions

1. What are called median filters? [WBUT 2009]
Answer: The median filter is the best-known order-statistics filter; it replaces the value of a pixel by the median of the grey levels in the neighbourhood of that pixel:
f(x, y) = median{g(s, t)}
where the median is taken over a neighbourhood of (x, y). The original value of the pixel is included in the computation of the median. Median filters are quite popular because, for certain types of random noise, they provide excellent noise-reduction capabilities with considerably less blurring than linear smoothing filters of similar size. They are effective for bipolar and unipolar impulse noise.

2. Distinguish between image enhancement and image restoration. [WBUT 2009, 2012, 2013, 2015, 2017, 2019] What is the equation for getting a negative image? [WBUT 2012]
Answer: 1st Part: An enhancement technique is based primarily on the pleasing aspects it might present to the viewer; for example, contrast stretching.
Whereas removal of image blur by applying a deblurring function is considered a restoration technique.
2nd Part: The equation for getting a negative image is s = L − 1 − r.

3. What are image negatives? [WBUT 2009, 2019]
Answer: The negative of an image with grey levels in the range [0, L−1] is obtained by using the negative transformation, which is given by the expression s = L − 1 − r, where s is the output pixel value and r is the input pixel value.

4. Discuss the limiting effect of repeatedly applying a 3×3 spatial filter to a digital image. Ignore the border effects. [WBUT 2010]
Answer: One of the easiest ways to look at repeated applications of a spatial filter is to use superposition. Let f(x, y) and h(x, y) denote the image and the filter function respectively. Assuming square images of size N×N for convenience, we can express f(x, y) as the sum of at most N² images, each of which has only one non-zero pixel (initially, we assume that N can be infinite). Then the process of running h(x, y) over f(x, y) can be expressed as the following convolution:
h(x, y) * f(x, y) = h(x, y) * [f₁(x, y) + f₂(x, y) + ... + f_N²(x, y)]
Suppose, for illustrative purposes, that f₁(x, y) has value 1 at its centre while the other pixels are valued 0 (Fig. (a)). If h(x, y) is a 3×3 mask of 1/9's (Fig. (b)), then convolving h(x, y) with f₁(x, y) will produce an image with a 3×3 array of 1/9's at its centre and 0's elsewhere (Fig. (c)). If h(x, y) is now applied to this image, the resulting image will be as shown in Fig. (d). Note that the sum of the non-zero pixels in both Fig. (c) and Fig. (d) is the same, and equal to the value of the original pixel. Thus it is intuitively evident that successive applications of h(x, y) will "diffuse" the non-zero value of f₁(x, y). Since the sum remains constant, the values of the non-zero elements will become smaller and smaller as the number of applications of
F(0, 0) is the sum of all the terms of the original matrix f. For
f = [1 3 4; 5 6 7; 8 9 11]
we get F(0, 0) = 1 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 11 = 54.

3rd Part: A standard way of generating a colour histogram of an image is to concatenate the N highest-order bits of the red, green and blue values in the RGB space.

4th Part: The histogram then has 2^(3N) bins, which accumulate the counts of pixels with similar colours.

20. What are spatial-domain and frequency-domain techniques? Explain masks or kernels. What is a median filter? What is image enhancement? Explain grey-level slicing.
Answer:
1st Part: Spatial-domain techniques deal with the image plane itself; they work by direct manipulation of pixels. Frequency-domain techniques work by modifying the Fourier transform; they deal with the rate of change of pixel values.
2nd Part: In image processing, a kernel, convolution matrix, or mask is a small matrix. It is used for blurring, sharpening, embossing, edge detection, and more. This is accomplished by performing a convolution between the kernel and an image.
3rd Part: The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing. Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise; it also has applications in signal processing.
4th Part: Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further image analysis. For example, you can remove noise, sharpen, or brighten an image, making it easier to identify key features.
5th Part: Grey-level slicing is equivalent to band-pass filtering. It manipulates a group of intensity levels in an image, up to a specific range, by diminishing the rest or by leaving them alone. This transformation is applicable to medical and satellite images, such as X-ray flaw detection and CT scans.
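A minimal sketch of grey-level slicing as just described, with both common variants (background preserved, or suppressed to 0); the function name and the 8-bit highlight value are my assumptions:

```python
def gray_level_slice(img, lo, hi, highlight=255, preserve=True):
    # Highlight every pixel in the band [lo, hi]; outside the band either
    # keep the pixel unchanged (preserve=True) or suppress it to 0.
    out = []
    for row in img:
        out.append([highlight if lo <= p <= hi else (p if preserve else 0)
                    for p in row])
    return out
```

With `preserve=True` the rest of the image is left alone; with `preserve=False` everything outside the chosen intensity band is diminished to black, matching the two behaviours mentioned above.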
Long Answer Type Questions

1. a) What effect would setting the lower-order bit planes to zero have on the histogram of an image, in general?
b) What would be observed in the histogram if the higher-order bit planes were set to 0?
c) Obtain the Haar transform matrix for N = 8. [WBUT 2010]
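For part (c), one way to obtain the orthonormal Haar matrix is the standard recursive construction, sketched below in pure Python (the helper names are mine, and this is an illustrative sketch rather than the book's worked answer):

```python
import math

def haar_matrix(N):
    # H_1 = [1]; H_{2n} is built from H_n by stacking
    #   (1/sqrt(2)) * [ H_n kron (1, 1) ] on top of
    #   (1/sqrt(2)) * [ I_n kron (1, -1) ],
    # for N a power of two.
    H = [[1.0]]
    while len(H) < N:
        n = len(H)
        s = math.sqrt(2.0)
        top = [[x for v in row for x in (v / s, v / s)] for row in H]
        bot = [[(1 / s if j == 2 * i else -1 / s if j == 2 * i + 1 else 0.0)
                for j in range(2 * n)] for i in range(n)]
        H = top + bot
    return H
```

Calling `haar_matrix(8)` yields an 8×8 matrix whose first row is constant (value 1/√8) and whose rows are mutually orthonormal, the defining property of the Haar transform matrix.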