0% found this document useful (0 votes)
40 views

CV PDF

Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF or read online on Scribd
0% found this document useful (0 votes)
40 views

CV PDF

Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF or read online on Scribd
You are on page 1/ 38
i ' [MS 59 | i ey The The total transmission timdwoitkd be =e A zy Cdl ~-—Teanertiosioy: Speed | than if icles Example 2.7/ Consifier a medical image of 8 x & inch’ dimension TF lution b $ tion cyeles/mm. How many pixels are required? Will an image size 6 be eto | hen Solution The sampling fesolutionis 5 eycles/mm. Ther pixels per cycle | Fequired. This means that 10 pixels/mm ate required i da Therefore, the pixel size is 0.1 mm, The minimum numbet of pixels required = 8 x 2.54 x10 = 20 2 rt 0 mm is requ ima id be 2032 x 2032 pixels. Hence, an iniage o 2 Pn : enough, | a — —4 | 25.2 Resampling ~~ During the scaling of an image, the numberof pixels should be increased to x ain imag | quality. Similarly, somtimes, the pixels are reduced for better compression. This i Pr or decrease ofthe numberof pixels is called resampling, Ths esamplingis ofthe fol two forms | 1. Downsampling 2. Upsanipling Downsampling (or bubsampling) is a spatial resolution technique in which the is scaled down by half} by reducing the sampling rate: This is done by choosin alternate samples, Subsampling or downsampling is also known as image reduction. This is performed by replacenfent of a group of pixels by an arbitrarily chosen pixel value. Fit the pixel value can be ¢hosen randomly or the top-left pixel within the neighbourhood c: be chosen. This method is computationally simpler, However, for larger neighbourhoods, this technique would npt yield good results. Consider the following image 3]3] 919) 3 9 Al a9 folo Subsampling can be dobe by choosing an upper-Ielt pixel and repl with a chosen pixel valle, that is, 3 3) 33) This method is called single pixel selection, Alternatively, the stafstical sample can be chosen. This can be the mean ofthe pixels: it replaces the neighbourhood. This technique yields the following image ing the neighbourhood | p= 60 DIGITAL IMAGE PROCESSING j 3434949 | 4 _(6 °) [3asos9 34+34+9+9| 16 6 4 4 } Upsampling can be done using replication or interpolation, Replication is called a zero. order hold process, where each pixel along the scan line is repeated once. Then the scan line ts repeated. The aim is to increase the number of pixels, thereby increasing the dimension 5 | 1 of the image, For example, the image F = ( i is replicated as follows | This process is called a zero-ord replicated to yield the following ima process. Once are inserted, the pixels are aay yee] a ol BG ES) 1 Linear interpolation is equivalent to fitting a Straight line by taking the average along the rows and columns. The process is described as follows: Consider the following image: Ufo Q 1[3]3 {t can be observed that the resulting image has a greater of pixels. Dow = ; x : lll “3 |. For example, the matrix H can be zero-interlaced as follows: (0/0/0/0 27oJijo H O/|O0}0l0 3 2. Intetpolate the rows. This is achieved by taking the average of the columns. This yields the following image: z olfol}n]- ol|elol— in the © columns. This is ack maj ose ExampleQ@y Avalme a D image F Zero-hold interpolation and linear interpol Solution: T zeto-hifld interp lation technique the 1D form, Now the kize cteases the size of th of the im ¢ is increased twie F= [23045 060 Linear intepolatiog creates addtional pixels by rein ing the inserted zero with th the adjacent pixels, The * approximated tothe nearest integer, In this mar can be interpolate A185 25+ 452% 24.and 45 + 6572, which is approxi idea can be extended tq 2D images as well B PIRI Assitme the im (0 7) (3 13) and apply linear intepoaton, Solution The intetpolation can be of then the aver the given i a higher order, where the ima 8 of the rows and columns are consi e can béenlarged as follows: ge is first enlarged wit Row interpolation ibvolves taking the av Image yields the following image: age of the columns. Row interpolation of the given 6 AGE PROCE Column interpolation can be applied by taking th rage following im warn 0} 35 3 O43 3549 THIS 35475 9 Is 45 5 eegis eas lela at 7 SS 25.3 Image Quantization A natural imayehas continuously varying shades and coloufs, and is known as a cont tone image. Therefore, it is necessary to convert a Continuous-tone image to 2 ‘mage, Where discrete points,of grey tone or brightness gre used Image quantization process of Converting sampled analog pixel intensity into a discrete valued r {he image quantizer maps a continuous value x into a fiscrete variable x’. An ‘inal has an infinitely large number of distinct values Hence, it is necessary to con the continuous Values into a smaller set through the Process Of quantization, The 'T quantization involves partitioning of input values into e ally spaced intervals. The iis ofthe intervil ate called decision boundaries. Let tk decision boundaries by PY B® {by by, «24, By), Let the input values be in the Tangy —Xinay, to +X... The length of the interval between successive decision boundaries is called the step size. This is denoted 's 4. The step size is given as (2x.1,4.)/M. The midpoitt between successive d lecision boundar * Is called the output or reconstruction level. Let it be R = Wt» Yar ooo Fah Of quantizers aré used. They ae called midrise dnd midtread, Midincad i used when the output levels are even, whereas midtise is assopiated with an odd number of levels, Miditead also produces 0 as an output if necessary, Two t When d= 1, Quantizer ~ Midrise(x) = [x]- 0.5 Quantizer ~ Midtread(x) = [+ The differ quanti nee between the actual value and the r chnstructed value is called a Mion error, The quantization is in the range {(-4/2)t0(4/2)], The number of bits Necessary Wo encode the output levels is piven by =[log, ul Ifthe input is not uniformly Aistrlbuted, the uniform quantizer may not yield yood result tn that case, a fon-uniform lanier Is helpful, Lloyd-Max quantizer and companded ghiantzer are examples of non. walform quantizers. 8) Fig. 2.17 _ Image quantizatioh (a) Original image (b) Quantization at 64 bits (4) Quantization at 16 bits (e] Quantization at & bits (1) Quantization at 4 bits (g) Photography isthe simplest thing in the world, but i is incredibly complicated to make it really work ~Martin Pargl | " LEARNING OBJECTIVES This chapter provides an overview of image processing operations, The 8 _conceptsof image topology suchas neighbourhood and connectty are introduced inthis ee The important arithmetic, logical, statistical, spata, Seometrica, and set opérations are discussed. After studying this chapter, the reader will become familiar with the following Concepts of image gebmetry and topology Image processing operations Logical operations Geometrical operatiohs Interpolation techniques Set operations * Spatial operations | 3.1 BASIC RELATIONSMIPS AND DISTANCE METRICS This chapter sea the image processing operations. To carry out any image processing operation, the image must be stored in the memory in a suitable form, Chapter 2 discusses how the images are fampled and quantized with rectangular grids, During this sampling Process, the digital tmage is a set of discrete pixels arranged in a rectangular grid. The pixel itself assumes the shape of a small rectangle. Ifthe width and height of the pixel are the same, the shgpe of the pixel is a square. Even though grids of other shapes such as hexagonal and triangular are available, rectangular grids are widely used as they are intuitive, simple, and are inherently used in the discretization of images by CCD (charge coupled device) cameras and scanners, 3.1.1 Image Coordinate Syster Images.can be easily represented as a two-dimensional array or mattix. The popularity of the matrix form is due to the fact that most of the programming languages support 2D array data structure and cah easily implement matrix-level computation DIGITAL INAGE|PROCESSING OPERATIONS % Pixels can be visualized logically and physically 2170,2) 0,2) (2,2) Logical pixels specify the points of a continuous P/O) S01) £2, 2D function. These are logical in the sense that y4017@,0) fa,o) s(2 they specify a location but occupy no physical | x=0 1 2 area. Normally, this is represented in the Cartesian | frst coordinate system. Physical pixels, on the Flg.3.1| Analog image f(x,y) ea et ane hand. occupy a small amount of space when 9424rark of Cartesian coordinate syste displayed on the output device. Digitized image in the Cane; fnalog image of size 3 = 3 is ) coordinate system as shown in Fig. 3.| ates an image f(x, y) of dimension 3 x $, where /(0, 0) is the bottom Starts from the coordinate position (0, Q), it ends with /(2, 2), that is, Vandi 0.12.2. N= land dente dimensions of the ir ‘an fourth coordinate: system. For ex Fepresented in the first quadrant of the Cartesi Figure 3.1 illustr left corner. Since it ¥=0,1,2,...M4 However, in digital image prog Discrete imag It Of the Cartesian coordinate system, A di in Fig. 3.2(a), with an index of (1, 1). The 13.200). the fourth quadrant of the *=017(0,0) £01) F¢0, 2) *=1 70,0) 0,1) fa,2) ¥= S20) £(2,1) (2,2) (a). F=1/ F0,1) [FG 2) the fundamental properties ¢ neighbourhood, paths among pixbls, boundary, and connected i ties such as neighbourhood, adjacency, and connectivity, Neighbourhood is fundamental to Understanding image topology. In the simplest case, the neighbours ofa given reference pixel are those pixels with which the given | (0 x 9 reference pixel shares its edges and corners, X plxyy) X In N,(p), the reference pixel ‘P(x, ¥) at the coordinate ee cg: Position (x, y) has two horizontal and two vertical pixels i 3.3. 4-Neighbourhood v,(p) aS neighbours. This is shown graphically in Fig, 3.3, | 84 DIGITAL IMAGE PROGESSING The set of pitels {(x + 1,V), (t= 1.y), (esp + 1p (x,y = 1)}, callel! the 4-neighbours of p, is denoted as ' Na(p). Thus 4-néighbourhood includes the four direct es neighbours fi pixel p(x, y). By duality, these are Ms pixels that sharela common edge with the given refer- Fig. 3.4 Diagonal elements Nl) ence pixel p(x,y} Similarly, the pixel may have four diagonal nei Y= 1, (8+ YHL), (r= 1+ Dy and (x41, = 1), The di Pixel p(x, ») are frown Sraphically in Fig. 3.4. ighbours, They are (x =, nal pixels for the reference The diagonal] neighbours of pixel p(x, ») are (x ie ay represented as Ny(p). The 4-neighbourhood and N,, are |X pny) x| collectively calléd the 8-neighbourhood. This refers lee ox to all the neighbours and pixels that share.a common comer with the teference pixel p(x, y). These pixels Fig. 3.5 8-Neighbourhood w, are called indiredt neighbours. This is represented as Ns(p) and is shoyn graphically in Fig. 3.5. The set of pixels N(x) =. (x) ol 3.13 Connectivity 7 The relationship nog {vo or more pixels is defined by pixel connectivity. Connectivity information is usdd to establish the boundaries of the objects. The pixels p and q are said to be connected if certain conditions on pixel brightness specified by the set V’ and spatial adjacency are satisfied. For a binary image, this set I” will be {0, 1} and for zrey scale images, V might be any range of grey levels. ‘Connectivity The pixels p and g are said to be in 4-connectivity when both have the same values as specified by the set Vand ify is said to be inthe set Ny(p). This implies any path from p to q on which every other pixel is 4-connected to the next pixel S-Connectivity It is assumed that the pixels p and q share a common grey scale value. The pixels p and q are Said to be in 8-connectivity if is in the set Np). Mixed connectivity Mixed connectivity is also known as m-connec are said to be in mconnectivity when 1. qisin N4(p) or 2. q is in Np(p) and the intersection of N,(p) and N,(g) is empty For example, Fig, 4.6 shows 8-connectivity whei = {0, 1} 8-Connectivity 4s shown as lines. Here, a multiple path or loop is present. In mm-connectivity,thefe are no such multiple paths, The m-connectivity for the image in Fig 3.6 is as shown in fig. 3.7. Tt can be observed that the multiple paths have been removed. ity. Two pixels p and q 3.1.4 Relations t | Abinary relation bce two pixels a and b, denoted as ab, specifies a paif of elements of an image. For example, consider the image pattem given in Fig. 3.8, a DIGITAL INAGE PROCESSING OPERATIoy, i $ H ge a i | 79 et ba on the 4 The set is given as A = fay, 13.4} The set based 0 i YY \ 4-connectivity relation iy given as = fy, xy}. It can observed that x3 1s ignored as itis not connected to any other 0 y i ment ofthe image by 4-connectivity. | N Voto) | The following are the properties of the binary relations | ma. nal f lectivity ‘ence Reflexive For any element a in the set A, if the relation aR Fig. pees nonectn ‘ holds, this is known as a reflexive relation | rep 1 fa) | Smmmetrc IC aRb implies that BRa also exists, this is known}as | a symmetrie relation 0 1 0 (py Transitive “If the relations akb and bRe exist, it implies that NG iz the relationship «Rc also exists. This is called the tansitigty Bee property Fig. 3.7 m-Connectivity {fall these three properties hold, the relationship is called an equivalence relation, The ‘elationships can be expressedlas | Ou want Assume that the sei 4 is divided into x disjoint si whe Is, is an integer ranging from 1 to « Ifa relation ab 4 | {sts and if and b belong othe same set the relationshiqis Cea tna 10 be an equivalence relation, Transitive elosure implies "1838 Image p that if aRb and bRe exist, ake also exists, The set, ntait Salled the transitive closure of Rand is denoted as R+, rity aid tial ale Hag all the implied relations is ne uth 3.1.5 Distance Measures Uae The distance between the pixels p and g in an image can be given by distance me is Such as Euclidian distance, D, distance, and De distance, Consider three pixels pg, and « If the coordinates of the pixels are PO, y), O(s, 1), and ZC, w) as shown in Fig. 3.9, the q distances between the Pixels can be calculated, The distance function can be called metic ifthe following properties are satisfied 1, tp, q) is Well-defined and finite for allp and g, ” Me U2) 2. D(p,4)= 0itp= 4, then Dip, 4) =0 agane | és 3. The distance D(p, 4) = Dig, p), 4 : eo : 4. Dlp, 4) + Dig, 2) > Dip, 2), Ths is called the property pf triangular inequality. ) Fig.3.9 Sampl "The Euclidean distance between the pixels p and q, adi 163.9 Sample image © coordinates (x,y) and (s,1), respectively, can be defined as \ D(a) = YC 86 DIGITAL IMAGE PROCESSING The advantage of the Euclidean dist involves a square root by The D, distance or cit lance is its simplicity. However, since its calculatiog peration, it is computationally very costly block distance can be simply calculated as | Di(p.q)=|x —5|+|)"~1] The Dg distance or ch¢ssboard distanc an b lculated as Dy(p.q) =max (\x"- 5} Example 3.1, Let f= (0, 1}. Compute the D,, D,, Dy, and.D, distances between bwo pixels and q. Let the pixel coortlinates of p and g be (3, 0) and (2. 3), respectively, for the image shown igs / Fig. 3.10. | 5 j 4 | ml. 2 3 i Find the distance measutes, Solution The Euclidean distance is Wt a = S awiri = ¥Q-2)'+(0-3) San) | | 0) : | =ViF9 = Vid 5 Fig. 3:10 Sample image | p-2)+|0-3) =1+3=4 3 Feil oh ahi D,=max(jr—s);]»~1) =max(3-2, 0-3) = max(1, 3)=3 ‘The distances can be eh¢cked with Figs 3.11(a)-3.11(d). The transition from one element to another. is a hop. It can be veriffed that the calculaticns match with the hops in Fig. 3.11. The distance Dg depends on the values of the set / The implication oftthe set i thatthe path should be construct only using the elementslof V”. So ifthe values of the set are changed. the path also changes. The simplest D,, distange can be calculated along the diagonal path. The distance is 3. Suppose set = {1}, the path offlistance D,, also changes. Itis shown in Fig. 3.11. OPT Mela POM Tm hofafr yay fofayrys to 1) frfofoli} fifofofa} jrjofofi ele Vytta yay aya Ly uyoya i Br bt is ay | a] japapiye] jafayate ayape (b) () (@ \ce measures (aj Distance D, (b) Distance Dy when V= {0, 1) Distance D,, when V=(0, 1} (d) Distance D,, when V= (1) DIGITAL wast PROCESSING OPER AT) | bnsidr the Following image with marked pifelsp and g. Example} Conder nee oh ee Toate Tone ) IE th 2s ia the shores mpath between pines p and g ap Solution The shortest path for set (1,2) ie shown here: | age |. The set of pixels the connected set, The shortest m-path is 12) 3:16 Important Image characteristics ) 251. Therefore, D,, distance is 4 ‘er Points, p n %)s then p x Gn. 34). The number of pixe ihe length. x= and y, 9, aoe called a closed path 2 Riscalled a region ifivis «connected component 4. If 5: Twotegions Ry and Ry (underlined pixel 1 disjoint, ‘overing a region that binary image, there is pixels are within the a ath between any two pixels p an fe usted component of S. ifthe set hy 'S called a connected set. A connect Fonnected component. tthe regions io egions R; and Ry are shown, 1") have. S-connectivity, Ifthe regions 6. The border of the image is called cont has one or more a Foreground object and a, backgro * foreground object may have atleast one neighbour in th Tegion itself, itis called inner bound it 4 lies within the donnected Set S, itis called a ss only one connected component, the, the set § fed set is called a regio adjacent if the union o: are not adjacent, itis These regions are 8 these sets also forms a led disjoint set, In Fig, 3.12, epiinected becatise the pixels they are called ‘our or boundary, boundary is a set of pixels neighbours outsid the region. Typically, in » d object. The border of the background, If the border : This need not be closed, 88 DIGITAL IMAGE PRO ee lO, © Region | Region 2 Fig, 3.12 Neighbouring regions 7. Edges are present wherever there is an abrupt intensity change among the pixels. 7 Edges are sinfilar to boundaries, but may ot may not be cannected, Ifedges are disjoint they have to} be linked together by e linking algorithms. However, boundarie are global and have a closed path. Figure 3.13 illustratgs two tegions and an edge. ss It can be observed that edges provide. an | outline of thd object, The pixels that are | he covered by “ edges lead to regions, : \ g Edge 3.2 CLASSIFICATION OF Ima PROCESSING OPERATIONS ~~ Fig. 3.13 Edge and regions There are various ways to classify image operations. The reason for categorizing thes operations is to gai an insight into the nature of the operations, the expected results, and the kind of computational burden that is associated with them. One way of categorizing the operations based on neighbourhood is as follows |, Point operations | 2. Local operations 3. Gtobal operations Point operations are those whose output valite at a specific coordinate is dependent only’ on the input value. A local operation is one whose output value at a specific coordinate is dependent on the input values in the neighbourkiood of that pixel. Global operations are those whose output value at a specific coordinate is dependent on all the values in the input images Another way of categorizing operations is as follows: 1. Linear operations! 2. Non-linear operations An operator is called a linear operator if it obeys the following rules of adiitivity and homogeneity. oe 1. Property of additivity Ha A(x, ¥) + a, f(x,y) = H(a,fi(x, y)) + Aa, f(y) =a Hf (x, »)) + aH f(x, y)) 4,X g(x, y) +a, x g,(x, ») | | | | | i DIGITAL “ PROCESSING 2. Property of homogeneity HK A899) = KARI) = hi) A non-linear operator, as the name suggests, does not follow these rules. Image operations are array operations. These operations are done on a pixel-by-pixel basi Array operations are different from matrix operations. For example, consider two (4 8) ap fe A Cte eae The multiplication of F and F; is element-wise, as follows AE BF). | las no} In addition, one can observe that F\x Fy = F,x F,, ice matrix multiplication is clearly different, since in matrices, Ax B# Bx 4, By array operations only. 3.2.1 Arithmetic Operations Arithmetic operations include ima; ige addition, subtraction, blen ding. The following sections discuss the usage of these 3.2.1.1 Image addition Two images can be added ina diréot manner, as given by BQ y)= A(ty)+ Alay) The pixels of the input images f(x, ») and f(x,y) are ade &(%, y). Figure 3.14 shows the'effect of adding a noise pattern the image addition process, care should be taken to ensute'tl allowed range. For example; ina grey scale image, the allowe bits. If the sum is above the allowed range, the pixel value is} value. The range values of select MATLAB data types are lis| Table 3.1_Data type and allowed rai S.mo. Data type Data range lefault, image operations are multiplication, division, and erations, to obtain the resultant image oan image, However, during It the sum does not cross the d range is 0-255, using eight set to the maximum allowed in Table 3.1 e8 1 Vins 0-255 2 Vim16 065,535 3 Uint32—0-4,29,49,67,295 ji 4 int64—_O- 4,51,615 Similarly, it is possible to add a constant value to a single image, as follows: B(x, Y= fils») +k If the value of & is larger than 0, the overall brightness illustrates that the addition of the constant $0 increases the br increased. Figure 3.14(d) ness of the image. Why? 90 OIGITAL IMAGE PROCESSING | The brightness of an image s the average pixel i fan im: Ifa itive Off negative constant ae to all the pixels of an image, the averag: Pixel intensity of thd image increases or decreases, respectively, This is. covered in Section 2.7 applications of image addition are as follows | To create double expdsure. Double exposure is the t chnique of superimposing an image on another image to produce the resultant, This gives a scena » equivale ng & film to two pictures, This is illustrated in Figs 3.14(a)-3.14(c) 2. To increase the btighfness of an image (b) ’ (9) (a) ig. 3.14) Reeser the image addition operation (a) Image 1 (b) Image 2 () Addition bt images 1 and 2 (d) Addition of image 1 and constant 50 3.2.1.2 Imagesubtraction | / The subtraction of two images can be done as follows. Consider BC Y= fix y)- Alay) where fi(x, y) and f(x, Ft are two input images and g(x, ») is the output image. To avoid negative values, it is desifable to find the modulus of the difference =/A.»)- Al »)] J{is also possible to subtrapt a constant value kfrom the image, ie. g(x, ») =|7.(x,)) ~K|, as kis constant. As discussed| jearlier, the decrease in the average intensity reduces the brighmess. of the image, Some of the practical applications of image subtraction are . Background eliminati Brightness reduction Change detection s follows: If there is no differen ct elween the frames, the subtraction process yields zerc en difference, it indic ates the change. Figures 3.15¢a)—3.15(d) show the diff etween the images on, it r etwween the im In addition, it illustrates that the subtraction of a constant resul decrease of the brightness (a) mo) {c) (@) Fig. 3.15 Results of the im subtraction operation |(a) image 1 (b) lm (c) Subtraction of images] and 2 (d) Subtraction of constant 50 from we ee 3.2.1.3, Image multiplication Image multiplication can be done’in the following manne Consider B(x, Y= Aly) * HS y) Here f(x, ») and f(x,y) are two input images and g(x,y) isthe output in value crosses the maximum value of the data type of the image age. If s, the value of the pixel is reset to the maximum allowed value, Similarly, scaling by alkonstant can be performed as g(t »)= f(y) xk where k is a constant. If k is greater than 1, the overall contrast increases, If k is less than 1, the contrast decreases. The brightness and contrast can be manipulated tagether as g(x,y) =af(a, y) +k Here, the parameters a and k are used to manipulate the brightness and contrast of the input image. g(x, ») is the output image. Some of the prastical applications of image multiplication are as follows: ses Contrast. Ifa fraction less than multiplied with the image, it results Sccrease of contrast. Figure 3.16 shows that by multiplying a factor of 1.25 with the original image, the contrast of the image increases 2. Itis useful for designing filter masks 3. lis useful for cfeating a mask to highlight the area of interest Fig. 3.16 Result of multiplication operation (image x 1.25) resulting in 3.2.1.4 Image division Similar to the other operations, division can be performed as g(x, y= LY f(x. ¥) where f(x,y) and fa(x, p) are two input images and g(x,y) is the output image The division process may result in floating-point numbers, Hence, the float data type should be used in prog: in loss of information. Division using a constant can also be performed as f(x,y) gs, = Some of the practical applications of image division are as follows amming. Improper data type specification of qhe image may result Where’ isa constant. 1, Change detection 2, Separation of lifminance and reflectance components 3. Contrast reduction. Figure 3.17(a) shows such an effect when the orig divided by 1.25, | image is th Figures 3.17(b) ¢) sh won and divisipn operation a mask. It can be observed that a 18 use a mask. The multipl on of i with image 2 results in highlighting certai Portions of imagg | while suppressing th d Portions. It can be observed that division elds back the original image (b) c) (d) (e) Fig. 3.17 Image division Operation (a) Result of the ima, 2 (b) Image 1 (c) Image 2 u a mask (d) Ima; (e) Image 4 image 3/ima, Example 3.3. Consider the I lowing two images Z a4 Be Tuc wisps 125) 6 1S S\f=| 43 53 145 200 30 150) (200 so 7%] Perform addition, subtract multiplicdtion, division, and im the images are of the 8-bit inte blending operations, Assume b ger type (uint8 of MATLAB ty Solution 4 Addition: 1+50 3+150 Ath=| $445 15455 7541 (3) 183 13: I-| 50 70 230 fi (200+ 200 50+50 150475) (400 100 225) { If the data type unit8 is assumed, the minimum tespectively. So if the value is larger thi 0, it is reset to 0 (Table 3,1), and maximum allowed values are 0 and 255, an 255, itis reset to 255, Simflatly, if the value is less than 94 DIGITAL 1m GE PROCESSING tse So the result of the image addition iy | so 70 | (255 100 295} Subtraction 2 a ‘ Vea) Qc IS0. 75125) (49 a147 hy S]=}-40 -40 -go (200-200 s0-so 150-75] | BEA-E=| $-45 1$~55 WAS 0a Since the data type is unit8, values less thar 0 are reset to 0 | (000 | e=/0 0° 0 lo 0 75) Wcan be observed that the modulus of the difference results in a different image, Multiplication: | { 1850 3xIs0 7125) ( so 450 875 B=AXAi=| 5x45 15x55 75x155|=| 295 (200x200 soxs0 150% 75) ie 2 Since the data type is unit8, values greater than 255 are reset to 11625 11250 (50 255 | g=|225 255 | (255. 285 Division: | WSO 3950 7/125) (0.02 0.02 0.056 $45 15/55 75/155|=|0.11 0. 200/200 soso isms) (11g 000 g=|0 0 0 Pealislae { This resultant image is flue to truncation, Image or alpha blending: This operation blends two images of the same size to yield a resultant image, This operation i useful for transparency and compositing, The resultant image is a linear combination of two inphit images. This can be mathematically stated as 8(%, Y= Ax fix, y)+ (1-0) x f(x, ») where f(x, ») and f(x, is Called blending ratio, ) are two input images and g(x, ») is the output image, and the parameter a Which determines the influence of each image on the resultant image, | | | ee bi Li sicas pissy DIGITAL IMAGE ROCESSING OPERATIONS 45 Exkmaple'3!4 Consider the following 44, & level images A bnd B Find A> B and 4:8. A Ba | | | Solution The given images are 4 <4 and 8 grey level (0 7) images. As the grey levels are 0 ~ 7 any Value above 7 is reduced to 7 | Passer | 7 Re ae =e =2 Lolo 4-B= ot Cae 0 soe 3[—] sl4afo Tet i Bs 8 ots] [zlaIlo sofaz| [a fala eee | eal2i ei! cera ae pe S2OmIeS ISR Manel < ia jcon canto ip a 8S S/T 6/0.) 6A =[0l10]0 fe] 6/3_|_ 7/5 | 66-| 67 2 YS s 6 | 78 | 25 [3 6|2[ofo lx, y)= f(x, y+ n(x, y) where f(x, y) is the input image and g(x, ») is the output image. Several instances of noisy images can be taken and averaged.as eae | Beas Dae» | 96 DIGITAL IMAGE PRocessil, \ \ { Where Mis the numbér of noisy images, As MV } intensity of the noise} and it becom Paliarel ' “| Becomies lage, the espectaion Ells, i9}= fe 3.2.2 Logical Operations } Bitwise operations can be applied to image pixels. The resultant pixel is i rules oF the partéuldhvperation, Some ofthe logical operations that ar wid 4 image processing arg as follows: ee 7 1, AND/NAND 3, EXOR/VEXNOR 2. OR/NOR 4, Invert/Logical NOT 3.2.2.1. AND/NAND The truth table of the AND and NAND operators is given in Table 3.2 {the AND and NAND opera ~ CAND) C(NAND) ‘The operators AND and NAND take two images as input and produ The output image pixels are the output of the logical AND/NAND of the individ Some of the practical applications of the AND and NAND operators are 25, follows: one output im 1, Computation of the intersection of image 2. Design of filter nlasks 3, Slicing of grey scale images; for e te 1100 0000. THe first bits of the pixels of an im: | first slice, a masldof value 1000 0000 can be designed, The AND ope! pixel and the magk can extract the first bit and first slice of he image Figures 3.18(a)-3.48(d) shows the effects of the AND and OR logical opera illustrates that the AND operator shows overlapping regions of the 0 the OR operator shgws all the input images with their overlapping. ‘ CJ Cj |e] | (b) (0 3) fa) fig. 3:18 Resuls of the AND and ORIogia! 9 (c} Result bf image 1 OR image 2 (0) Res grey scale image may anni tle sxample, the pixel value of th constitute one slice. To extract the tion 0 svo in} perators (a) Image 1 (b) image 2 ult of image 2 AND image 2 3.2.2.2 OR/NOR m The truth table of the OR and NOR operators is given in Table 3.3 Table 3.3 Truth tabe of the OR and NOR operators B COR) C(NOR) The practical applications o the OR and NOR Operators arg as follows: 1. OR is used as the union o tor of two images. 2. OR can be used as a 1g Operator, 3.2.2.3 XOR/KNOR The truth table of the XOR and XNOR operators is given in| Table 3.4. Table 3.4 Truth table of the XOR and XNOR operators A 0 0 0 1 0 1 I 0 0 1 0 I I 0 1 ene ee Tee cs ge The practical applications of the XOR and XNOR operatorsfare as follows: 1. Change detection 2, Use as a subcomponent of a complex imaging operation. XOR for identical inputs is zero. Hence it can be observed that the common region of image | and image 2 in Figs 3.18(a) and 3,18(b), respectively, is zero and hence dark. This is illustrated in Fig, 3.19. 3.2.2.4 Invert/Logical NOT The truth table of the NOT operator is given in Table eos Fig. 3.19 Result of the XOR operation 98 DIGITAL IMAGE PROCESSING For grey scal, S, g ale values} the inversion operat the inversion operation is describ B(x, y) = 255 ~ /7 The practical applications of the i inversion Table 3.5) Trut Operator are as follows. A c l Obtaining the Negative of an image. Figure ative of the original ( image shown in Fig, 3.18(a). 2. Making features cldat to the observer 3. Morphological progessing Similarly, two imagestean be comp = Equal to 3.20 shows the ng ared using operators such as > Greater than Greater than br equal to <__Less than Less than or equal to #* Not equal to | The resultant image pixel represents the truth or falsehood of the comparisons. Sim shifting operations are also very useful Shifting of / bits of the image pixel to the Tight results in division by 2/. Similarly, shifting of / bits of the image pixel to the left results in multiplication by 2/. : Shifting operators are helpful" in dividing and multiplying an image by a power of two. In addition, this operation Fig. 3.20 Effect of the NOT operator (a! is computationally less expensive. Original Image (b) NOT of original image "BxampleSS_ Consider the following hvo imaues Tee y 1a} (10 s Perform the logical AND, 'OR, NOT, and difference operations. Solution (lal Oal Daly (1 0 0) Tal tal tat}e}t tot OAl Oval Tal) (0 0 I) AND HAND f= esate eapatiints DIGITAL IMAGE PROCESSING OPERATION ovl) ( ! i fal Ivtlatl § ] OR AOR? | | ‘ Ovl Ovl Ivt) Wd | 1 30 -0) (O 1 1 NOT NOT(/)=} 41 I | | 00 0 0 =l) 10 Ditlerence 0 0 0) f, AND (-h,) =| 0 0 and a {AND 00 10 3.2.3 Geometrical Operations Let us now discuss the geometric: 3.2.3.1 Translation Translation is the movement of an image the coordinate position X x, y) of the Soordinate position is (x’, y’), Mi X to the new position X'. The jathemati translation (2: Fig. 3.21 Result of translation by $0 units A operations used in image processing ‘oanew position, Let us assume that the point at matrix F is moved to|the new position. whose ically, this can be statéd as a translation ofa point is represented as +x ytoy 5, 25) is shown in Fig, B.21. In vector Notation, is represented as F" = F +7, where dk and 6 are translations parallel to the x andy axes Here, F and F! ate the original and the translated images, respectively. However other tr formations such as scaling and rotation are multiplicative in nature. The transformation process for rotation is given as F'= RF, where Ris the transform matrix for performing rotation, and the transformation process for scaling is given as F' = SF, Here, ‘Sis the scaling transformation matrix. 100 ITAL INAGE proce To create Uniformity and consistency, it System Where i ‘ { 8 h all trangformations are ated ct S expressed as ( oar a for ). The properties of homog HS coordinates are ag 1 Inhomogeneous edordinate at least one p at least one point should 4 NOt exist in the honto, , Thus (0, 0,0)4 Brit be BENCOUS Coordin: ate system, “i and 6 ree ulalcative ofthe otter point they 4 snd (3, 9, 1S) are sdme as the second pointis3 (13,5) ; 4 Be point (x, ), wh in the homogenre coordinate ays \ ae eM corresponds t0 the poin =, %) in 2D spate _, li the homogeneous coordinate System, the iranslation process of poi age Frto the new point (x) of the image f is described as : fe and yay+s In matrix form, this can be stated as { { 10 ér [eas t=]o 1 lo o Someiimes, the image jnay not be present at the.orgin. In that eas, a suitable ne sive translational value can be used to bring the imaye to align with the origin i 3.2 Scaling Depending on the reqilirement, the object can be shrinking. In the homogeneous coordinate system. image F to the new point (x’, y’) of the im: scaled. Sealing means enlarging and @ the scaling of the point (x, ») of the i age F” is described as ox) x ysyxS, Aopas: 0) | fel G Jeet | §, and S, are called sealing factors along the x and y axes, respectively, Ifthe scale factor Is 1, the object would appear larger. If the scaling factors are fractions, the object would shrink, Similarly, if §, dnd §, are equal, scaling is uniform. This is known as isomropie scaling. Otherwise, itis éalled differential scaling, Inthe homogeneous coordinates system, itis represented as 102 DIGITAL IMAGE Process, The reflection about yb is giv Vis given as F’s[ xy] i 1 aI. 0 The reflection o, ¢ eration is iluste coordinate sy: ‘ated in Fi Stem, the}; BS 3.22) and 3.22(b) tn the hot or | Matrices for reflection can be given ag oe “q tl o | i 0 09 f-| 0 0) q Re 5 i man =| (0 | Ot UR} 9 to i 0 0 0<4 Od Similarly, the reflectiod along a line can be given as 0-10 R, Fee =|cl) -0:%0)| W020) ayy) 3.23.4. Shearing Shearing is a transformation that produces a distortion in the x-direction or the y-direction, In this transforma of the object are simph of shape, This can be applied either ion, the parallel and opposite layers sided with respect to each oitier. : Shearing can be dohe using the following calculation and can be represented in the 2 matrix form as | x= x tay | i 1 a. 0) (where a=.sh,) i oer Tw ‘| | \o 0 1) 3 x n Similarly, Y,year an be given as ytbr | 1 0 0) (where b=sh,) | Yaue=]o 1 0 0 1 directions, respectively tors in the x and y direetions, where sh, and sh, are shear fac ae 3.2.3.5 Rotation An image can be rotate! it is given as | by various degrees such as 90°, 180°, or 270°, In the matrix for a | | cos@ sin © (sind cose This can be represented as F" = Ra. The parameter 0 is the angle of rota he object rotation is about the origin S. It1s assumed that the object rotation i to the x-ani positive or negative. A positive angle represents counter clagkwise 1 een nts clockwise rotation, The rotation of an image is shown in Fi angle represents clock: » t t nt fe homogeneous coordinate system, rotation can be cos ~sind ‘ I}=| sind cose i If @is substituted matrix rotates the image in the clockw (a) Fig.3.23 Rotation (a) Original mage () Resuttof rot 3.2.3.6 Affine transform The transformation that may is given as a pair of rans and parallel lines rem tation by 45° ps the pixel atthe coordin, ates (x,y) toa new coon formation equ: ations, In this transto nain unchanged, It is described m, ‘dinate positic A, straight lines are pee lathematically as V=T(x, y) ST, (x, y) 7, and T, ate expressed as polynomials, The linear equation gives an affine transform x's AX Faye a, Y= bee by +b, DIGITAL IMAGE PROCESSING This is expressed in mhtrix form as | a, a)\fx | | 4 by Wy il O20} 3 : The affine transform |s a compact way of representing all transformations The givens equation represents all transformations, Translation is the situation where I, a, =0,8 and ay = 6,; scaling trahsformation is a situation where dy =s, and hy = 8 and a, = 9, a) =9, by = 0, and b, = 0; rotdtion is a situation where dy = cos8, ay = 0, and b = 0; and|horizontal shear is performed when ay = 6, = Sh, and by = 0 sin, by = sind. b, = 088, 1h 9, a 3.2.3.7 Inverse transformation The purpose of inversd transformation is to restore the trans form and position, Thejinverse or backward transformation matrices are given as formed object to its original follows: | 1 0 -dx) Inyerse transform for translation =|0 1 -6y) Oe { oo \5, 7 1 Inverse transform forscaling=| 0 <- 0} Hes On 0) ea | g the sign of the transform term. x i Inverse transform for rot For example, the following matrix p | ation can be obtained by changin, verforms inverse transform cos? +sin@ 0) sind cos 0| | OO 1) Bxample 36 Considér an image point (2, 2). Perform the foll results of these transforms. lowing operations and show the (a) Translate the image right by 3 units. (b) Perform a scaling operation in both x-axis and y-axis by 3 units (c) Rotate the image in x-axis by 45°. (d) Perform horizontal skewing by 45°. (e) Perform mirroring about x-axis. (f) Perform shear in y-dire¢tion by 30 units. | | | ee | DIGITAL nage PRocessine OPERATIONS j | \ | image right by 3 units means that na (a) Translation of the image right by eae 105 So the translation matrix is given as : (106) (10 3)] | | { T=|0 | 8 f=]0 1 o} | | oo 1) lo 0 a} Therefore, erxy fl 0 3) “JO 1 Olx[agiels 2 Af i lo 01) (b) Sealing by 3 units in both th he directions means that : | (©) Rotating the image in s-axis by4se (cos sing 9 6089 O/=/-sinds cosa 9 re WY 0, gla (0.707 -o.707 9) ) (cosas “snl | ReJom 0707 o}xp.2,4¢ 2828)" (oo so oi Gi iastin 2.28 vl be rounded 3, This ‘etemined yt gern ehngue wey & —@Pertrming hoz sewing by a5 lo ‘ (1 on sae stl | 0 04) 9 of) sic ll 108 DIGITAL IAG (©) Performing mirroring afar . ; (100 M,=10 <1 9 Oe oo fl oo Mirroring =10 <1 glgin 9 g=|0 1 ofxin2iei2 2 | loo 4 (8) Performing shear in ydifetion by 30 unis (1 sh 0 0 Shear, =| 1 io ; \o 9 yl (1 30 0 Miporing =|0 1 0|x12,2,1)' =(62 2 1 loo 1 4 3.23.8 3DTransforms Some medical images sich as computerized tomography (CT) and magnetic resonance imaging (MRI) images are three-dimensional images. To apply translation, rotation, scalingSon 3D images, 3D transformations are required. 3D transformations are logical extensions of 2D transformations. These are summarized and described as follows (0 0 0 6x) 010 6y Translation=| > 5. loo0 4 0 0. 0) s, 00 Da, 20 | lo 0:0 1) Rotation along z-axis cos6 -sind 0 0 \ sin@ cos@ 0 oO} | Fliedy 0 a | eu bd i {-0 ‘ 15 375} 0.25 - 0.45 0,375 NIrOnY of the image is 3,075 Thus, entropy indic ‘ites the information richn usinga surface ni ess of the image. This car Ba surface plot where pixel values shai mage. This ca 6 plot is shown in Five 454 Plotted as a function of pixel position One i Nin Figs 3.2744) and 3.27(b), Ent v Dixel posit c texture of the images. oe : iS (a) (o Fig.3.27 Surface plot (a) Original image (b) Surface plot Applications of statistical operations ‘The brightness and contrast can be define grey level and the mean abSolute deviation (MAD), respectively, Ibis MAD =1/MN x Y|(x, y) = mean) Then the equation B(x, -( so.) +0, can be used with ¢,, and 1,4 which are the standard deviation and mean of the desired new image. M and N are the dimensions of the image. Generally, this operation increases the brightness and contrast. 3.27 Convolution and Correlation Operations“ The imaging system can be|modelled as a 2D linear system. Let fx, y) and g(x,y) represent the input and output images, respectively. Then, they can be written as g(x,y) = ¢* (f(x, ‘y). Convolution is a group process, that is, unlike point operations, group processes operate on a group of input pixels 3 yield the result. Spatial convolution is a method of taking a group of pixels in the input image and computing the resultant output image. This is also known as a finite impulse response (FIR) filter. Spatial convolution moves actoss pixel by pixel and produces the output image. Each pixel of the resultant image is dependent on a group of pixels (called kertel). The one-dimensional convolution formula is as follows: j gi=1* fi) Dwse-i The convolution window is a sliding window that ceptres on each pixel of the in The resultant pixel is this calculated by multiplyir by pixel values and symming these values. Thus, sliding window is moved for every corresponding pixellin the image in both d Therefore, the convolution is called ‘shift-add-multiply’ pperation To carry out the process of convolution, the template pr mask is first rotate Then the convolution process is caried out. Consider tht process of convolution pag ne % whose dimension is 1 x 5 and a kernel or femplate T, whose dimensi I x3. Let F= (0,0, 2,0, 0} andthe kemel be {7 § 1} As mentioned, the template Toiated by 180°. The rotated mask of this original mask [7 5 !J is a convolution template whose dimension is 1 x 3 with value {1, 5, 7) To carry out the convolution Process, first, the process of zero padding should be ca: Dut. Zero padding is the process of creating more zeros and is done as shown in Tat Added zeros are underlined to generate the’resultant imag Weight of the convolution mask Table 3.7 Zero padding process for convolution ; 3 Table 3.8 Convolution process 4 (@) Initial position (>) Position after one shit i Template Template is shifted by one bit a tes th ya Peeciate he agg Gg? G UU o a} Output is produced inthe centre pixel, Output produced is zero, (©) Position after two shits (@) Phsttion after three shifts Template is shifted apain Template is shifted again, ge: Babee 9 tO 0 e010 y- G POOR 10.0. 4 9 00 14 oO 4 Output produced is 14, Output prodliced isto, { DIGITAL INAGE PROCESSING OP; RATIONS. 117 : 7 a (0, Position after five shifts Template is shifted agai ons it ! Mee Otek 0-05.09 0 v0 f 0 0 14 jo +e , Output produced is 2 en ae Oviput produced is 0 egies? 0 0 Ore 0 00 @o04b 200 Ouiput produced is 0, Futher shit crosses the range x Hence the process is sopped So in the final position, the output produced is (00 14 10 2.0 0) Correlation is similarto the convolution operation and is very useful in recognizing the ~hasie shapes in the image, Correlation reduces to convolution if the kemels are symmetric The difference between the correlation and convolution processes is that the mask or ~remplate is applied directly without any prior rotation, as in the convolution process The correlation of these sequences is carried out to observe the difference between these processes, The correlatipn process also involves the zero padding process, as shown in Table 3.9. | Table 3.9 Zero padding process for correlation set anes POuaGisomzagdd, 0. .0 | ‘The padded zeros are underlined. The correlation process is similar fo the convolution process described. Ths process is shown in Table 3.10. Table 3.10 Correlation process (a) Initial position (b) Position after one shit Template Templates shifted by one bit Pees css ere 000020000 io oo 2 0 6.0 fb Pes “ bore Output produced is 0. | 18 DIGITAL IMAGE PROCESSIN (©) Position after two shifts on after three shir, Template is shifted is shifted again again Output produced i Output produced is 10, (e) Position after four shifts (P) Position after five Shifts Teiplate is shifted agin, Templatd is shifted again Output produced is 14 (g) Final position Template is shifted again OD hh aed i) Drone 10 14 0-4 . Output produced is 0. So/in the final position, the output Produced is [002 10 14,0 0) The processes of convolution and ¢ fe ies: Kemmels are usually square, with an odd number of cells. The is I x 1; it can also be 3 x 3,5 5,7 7, etc. As the numbers increase, Precision increase, For a 1D signal. the Convolution process is as follows TENG 7) ae template of dimension n x m and the image /(, /) is the input im convolution of fwith Tis written as orrelation can be extended easily to sm EST: Lei pxse-ayhp correlation can be derived. | 48 a(i,j), the correlation is giv io Similarly, the formula for the image is f(x, ») and the template, whose size is n x m, en as = LUMA = Sx ri,y+ py i LLM, I )= 20, Nyse hy 120 joo Factors such as (#(é, /)*) and (f(w + i, y This reduces the correlation fo just multipli t+ Mx ti y+ jy) +J)*) are constant and hence can be igno cation, addition, and convolution, However, DIGITAL IMAGE PROCESSING OPERATIONS 119 image processing, convdlution is loosely interpreted to mean cross correlation. Correlation js written as at met Tad VTL) fierhy+h The dimensions of the a image will be (7 +N ~ 1) and (1 + M= 1). The convolution proces§ on a 2D image can be shown as follows. C ‘consider an image 2 Chi, ra Jan the template or mask is ¢ al | 5 s follows, The mask is rotated by 180 The algorithm for thé convolution process is a Prior to the convolution} process, the operation mask or template can fully align with the image. Zero padding is done such that the template will be aligned with the Jeftmost corner, to check the number of zeros to be inserted. In the given example, the zéros should be insegted on the left, right, top, and bottom: This is shown in the following steps. The mask coefficients are shown in subscripts, and the sum of the products pf the mask and image coefficients is found to produce the result, In the first step, the mask is positioned and the result is as shown in Fig. 3.28(a) The mask coefficients are shown in subscript Inthe second step, thé mask is moved horizontally by one position. The mask coefficients are shown in subscript. The result is shown in Fig. 3.28(b). In the third step, the mask is moved again by one position and the result can be seen in Fig. 3.28(c). | Further, the mask cannot be moved in this direction as the mask range crosses the range of the image. Therefore} it will be zeros or uncertain values. So the mask is moved down and the steps are repeated and are shown in Figs 3.28(d)-3.28(f). Again, the mask canftot be moved further. So the mask is shifted down and the process is repeated. | an the last vertical shift, the positions of the mask are as shown in Figs 3.28(g)- i f zero padding is carried out so that the 0 ZOO) 0: 2 48026 2 0 2, bo226 CeO. p 2 Qed anid oO Die )0.0 0 0.0.0 0 (b) a0 B43 0 8 4 4 20 Ga 20s4 2 250 Beh = 10piah2'. 0 10-2 -2:<0 00 0000 0000 120 DIGITAL IMAGE PROCESSIN oe Oe Ro 0 ee 2-0 tet 0 - tee yee Of 8 4. 0 3 o> 7 oo) G0 21s 0)" 0 280 0 0 0h 00 0.0 900g 100-0 0.0 (e) (f) 24 DS eed te 4 i dst? 0 aoe 4 0 et 4 or yO. IB 4 0 Orca 2.0 as ed Ok ena ee 4 2 . Cott 0. 0 00n0h 010, auia 0 0 0 0 (e) |. (h) 24 24240 48 4 8 4]o oe 2 4 240 00 0 0 0j0 Fig. 3.28 Convolution process (a) Initial position (b) Position after first shift (c) P. after second shift (d) Position after first vertical shift] (e) Position after third shift f) Position after fourth shift (g) Position after second vertical shift (h) Position after fifth shift (i) Position after sixth shift 1 The mask cannot be moved further and thus the process stops. The process described here is applicable to correlation also. In image processi calculations in the frequency domain involve only real conyolution. However, in the spati domain, the correlation process is often referred to as convolution process as the masks’ involved in image processing are generally symmetric. Hence the processes of convolution and correlation are technically the same and there is not much distinction between the two. | ; processes. Some of the properties of convolution that are usefull in image processing are as follows, Here fand T are input images. 1. Convolution is separable. 2. Convolution is commutative. Pinar 3. Convolution is associative. (ft T) *h=f*(T* h) 4. Convolution follows the principle of superposition. to? Tae This chapter focused on image operations, The kno} important for implementing applications that involve ime ledge of these operations ges. Some of these operatio! MAGE ENHANCEMENT 199 Here, r is the . ris the grey Idvel of the lai ree of the original image and.s the resulting grey level after the eee F. Thus, Tis a grey-level transformation (or mapping) function operation Involves only one point, it is call ci ADN point, it is called a point transform ot point pear, point opertion is also called a gre jevel transformation or spatial transform oo ee may involve some neighbourhood of (x) In this case, T is called a neighbourhood operatign, and if the transformation involves all the pixels of the image, can be categorized as a plohal operation. ‘ These operations carl be carried out in the spatial or freq point transformations it the spatial domain now. ency domain, Let us discuss 5.3 IMAGE ENHANCEMENT IN SPATIAL DOMAIN t transforms or point grey-level sealing transformations depend only on Inthe spatial domain, poi 1s inthe output image. Point operations the piel ofthe input image to create comespoding Pie are curied out by varius scaling functions, which canbe categorized as shown in Fig. 58. I ; Point | tansforms poo) Noatinar | | Pecenise liner point transforms | | sas | fig. 5.8. Types of point transforms ard of the following form: a X%)) * bs where a and 6 are constants Examples of linear functions are identity and that are not ofthis form are called non-linear functions. Examples re square, square root, logarithm, ‘and exponential functions. A tea set of near functions that are applied (© distinct grey values Linear functions that manipulate contrast a1 piecewise linear function of the given image | Scaling functions 7 be applied directly to the im role of the linear function is to map every BY level v ‘rey level value in thd resultant image. These operatto LUT is a table where results of the mappings are store takes the result ofthe mapping from the table, and produces the ‘of this approach is tha} the original image values are not affecte Let us fist discuss finear scaling furctions age or the lookup table (LUT). The alue of the original image to a ne ns may also involve the LUT. The -d, The function refers to the table, output pixel. The advantage xd or disturbed directly 5.3.1 Linear Point Transformations the important linear pol Inhbinary image, inverse hhich performs a digits! nt transformations, W! fhe image by changing Inversion is one 0! transformation reverses t negative operation. 194 IMAGE PROCE 1 black pixel t0 a white oné and vice versa. This is simifar to the NOT operato S-bit grey scale images can have 256 (2°) possible different shade ey | black to white, The inversion operation is mathematically expressed as follow | (ro) = L= 1 =fxy) | Where L is the number of grey levels in the image. Consider the follo ng 3 feta eet | Axvy=[o] i] aaa This image has four grey levels (0, 1,.2, 3). Hence, the inversion transform perlormed to yield Observe that pixel ‘0° in the Original image is transformed to level ‘3 and Sample image and its inverse are shown in Figs 5.9(a) and (b), respectively ——__ . therefore, the resulting image is given as follow 133 | 105 | $5 ] Let us now discuss some useful non-linear operators. 532 Non-linear Point Transformations Fora non-linear operator, the relationship between input and output variables is not linear. Some of the popular nonelinear functions frequently used in image processing 21° discussed in the following subsections, 5.3.2.1. Square Function 5 contrast of the’ given image. However, when the the data type of the image (¢.8- integer data type ration is a condition where all the square of an image This funotion is used to enhance the pixel values cross the limit accepted by supports 0-255), thi operntion leads to saturation Satu alues, Flgure $.10(a) shows an example of how the nd hence-are represented with the maximum pixels have high v ‘causes most of the pixels fo cross the limit ar pixel values. The result [Fig. 5.10(b)] is a whitened oF saturated image. (a (b) Fig. 5.10 square of ihage (a) Original image (b) Resultant image of square uncon | 5.3.2.2 Square Root The purpose of this funct image, In addition, tis fufetion reduces the dynamic range ‘The function can be written as follows: 9(x,y) = 9255 f(xy) 1 value of the input image. This function is normally used to i im of images. The original image and its square root equivalent on is to expand the grey scale range to the dark areas of the of the light areas of the image Here, f(x, y) is the pix display the Fourier spec image are shown in Figs 5}11(a) and (b). r

You might also like