Lecture 05: Image Processing Pipeline
Qilin Sun (孙启霖)
The Chinese University of Hong Kong, Shenzhen
(Image: Wikipedia)
GAMES204 Computational Imaging, Qilin Sun
The plenoptic function [Adelson 1991]: high-dimensional integration over angle, wavelength, and time.
Ø Field sequential, multiple sensors, or vertically stacked photodiodes (Philips / Wikipedia)
Ø Vertically stacked example: the Foveon X3 sensor, used in the Sigma SD9
Ø Thermal IR (FLIR Systems)
RAW image (dcraw -D) → CFA Interpolation → Color Correction Matrix → Gamma Correction → Noise Filter for Luma / Non-local Means Denoise → JPEG image
Ø The output of the correction:
Ø where R_avg, G_avg, and B_avg are the per-channel averages in the Bayer domain, and the digital gain is applied to all channels to keep the image brightness constant.
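The channel-average correction above follows the gray-world idea; a minimal sketch in Python (the normalize-to-green convention and function names are my choices, not necessarily the slide's):

```python
import numpy as np

def gray_world_gains(rgb):
    """Gray-world white balance sketch: scale each channel so its mean
    matches the green channel's mean (a common convention)."""
    r_avg = rgb[..., 0].mean()
    g_avg = rgb[..., 1].mean()
    b_avg = rgb[..., 2].mean()
    # R and B gains pull their averages toward G; G gain stays 1.
    return g_avg / r_avg, 1.0, g_avg / b_avg

def apply_gains(rgb, gains):
    """Apply per-channel gains and clip to the valid [0, 1] range."""
    return np.clip(rgb * np.asarray(gains), 0.0, 1.0)
```

In a real ISP these averages would be computed on the Bayer-domain data before demosaicing, as the slide states.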
Ø An image sensor sees an object as high-contrast when it is in focus and low-contrast when it is not
Ø Move the lens until the image falling on the sensor is at maximum contrast
Ø Compute the contrast measure using the local gradient of pixel values
Ø Cinema lenses do not autofocus (howstuffworks.com)
Ø Requires repeated measurements as the lens moves; measurements are captured using the main sensor
Ø Equivalent to depth-from-focus in computer vision
Ø Slow, requires hunting, and suffers from overshooting; it's OK if still cameras overshoot, but video cameras shouldn't
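The gradient-based contrast measure described above can be sketched as a sum of squared finite differences (a Tenengrad-style score; an illustrative choice, not necessarily what any particular camera uses):

```python
import numpy as np

def focus_measure(gray):
    """Sharpness score for contrast-detect autofocus: sum of squared
    local gradients. An in-focus image has stronger gradients and
    therefore a larger score."""
    gx = np.diff(gray, axis=1)  # horizontal finite differences
    gy = np.diff(gray, axis=0)  # vertical finite differences
    return float((gx ** 2).sum() + (gy ** 2).sum())
```

The camera's hunting loop then moves the lens, recomputes this score from the main sensor, and stops at the peak.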
Ø AI Servo (Canon) / Continuous-servo (Nikon)
Ø Predictive tracking so focus doesn't lag behind axially moving objects
Ø Continues as long as the shutter is pressed halfway
Ø Focusing versus metering
Ø Autofocus first, then meter on those points
Ø "Trap focus": trigger a shot when an object comes into focus (Nikon)
Ø depth of field focusing
Ø find the closest and furthest objects; set the focus and f-number N accordingly
Ø overriding autofocus
Ø manually triggered autofocus (AF-ON in Canon)
Ø all autofocus methods fail if the object is textureless!
Ø Take an image under uniform lighting and divide it into m×n blocks. The four corners of each block have a correction coefficient; store the coefficients in a look-up table (LUT).
Ø Based on the pixel location, determine which block the pixel falls into, then fetch that block's four coefficients from the LUT (or use surface fitting).
Ø Compute the pixel's correction gain by interpolating the four coefficients.
Ø Multiply the pixel by the correction gain.
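The four steps above amount to a bilinear interpolation of the block's corner coefficients; a sketch (the LUT layout and function names are assumptions for illustration):

```python
import numpy as np

def shading_gain(x, y, block_w, block_h, lut):
    """Bilinearly interpolate the per-pixel correction gain from the
    four corner coefficients of the block containing pixel (x, y).
    lut[j, i] holds the coefficient at block-grid corner (i, j)."""
    i, fx = divmod(x / block_w, 1.0)  # block column, fractional offset
    j, fy = divmod(y / block_h, 1.0)  # block row, fractional offset
    i, j = int(i), int(j)
    g00, g10 = lut[j, i], lut[j, i + 1]      # top corners
    g01, g11 = lut[j + 1, i], lut[j + 1, i + 1]  # bottom corners
    top = g00 * (1.0 - fx) + g10 * fx
    bot = g01 * (1.0 - fx) + g11 * fx
    return top * (1.0 - fy) + bot * fy

def correct_pixel(value, x, y, block_w, block_h, lut):
    """Step 4: multiply the pixel by its interpolated gain."""
    return value * shading_gain(x, y, block_w, block_h, lut)
```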
Where m1 and m2 are the gains for the different thresholds. When −x1 < x ≤ x2, the pixel is treated as noise rather than an edge.
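A minimal sketch of this kind of dead-zone discrimination, assuming the slide's thresholds x1, x2 and gains m1, m2 (the exact transfer function on the slide is not recoverable from the extraction, so this is only illustrative):

```python
def edge_gain(x, x1, x2, m1, m2):
    """Dead-zone edge/noise discrimination sketch (assumed form):
    gradient magnitudes inside (-x1, x2] are treated as noise and
    suppressed; stronger gradients pass through with gain m1 or m2."""
    if -x1 < x <= x2:
        return 0.0      # noise: suppress
    if x > x2:
        return m2 * x   # strong positive gradient: keep as edge
    return m1 * x       # strong negative gradient: keep as edge
```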
Ø There are two methods to calculate the Y (luma) value:
Ø Do false-color suppression on the chroma channels when the edge strength is larger than edge_min.
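The slide's two formulas are not recoverable from the extraction; below are two luma computations commonly used in ISPs, given as plausible examples (an assumption, not necessarily the slide's pair):

```python
def luma_fast(r, g, b):
    """Cheap approximation often used on Bayer data: Y = (R + 2G + B) / 4.
    Doubling G matches the 2:1 green sampling of the Bayer pattern."""
    return (r + 2.0 * g + b) / 4.0

def luma_bt601(r, g, b):
    """ITU-R BT.601 weighting: Y = 0.299 R + 0.587 G + 0.114 B."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```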
(cambridgeincolour.com)
Ø Manual editing
Ø Capture the image in RAW mode, then adjust the histogram in Photoshop, dcraw, Canon Digital Photo Professional, etc.
Ø To expand contrast, apply an S-curve to the pixel values
Ø Gamma transform (in addition to the RAW→JPEG gamma)
Ø output = input^γ (for 0 ≤ input ≤ 1)
Ø Simple but crude
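Both adjustments can be sketched in Python (the particular S-curve here, a smoothstep blend, is an illustrative choice, not one named on the slide):

```python
import numpy as np

def gamma_transform(img, gamma):
    """output = input ** gamma for inputs normalized to [0, 1].
    gamma < 1 brightens midtones; gamma > 1 darkens them."""
    return np.clip(img, 0.0, 1.0) ** gamma

def s_curve(img, strength=0.5):
    """Contrast expansion via an S-curve: blend the input with a
    smoothstep of itself. strength=0 leaves the image unchanged."""
    s = img * img * (3.0 - 2.0 * img)   # smoothstep: darks down, lights up
    return (1.0 - strength) * img + strength * s
```

Note the midpoint 0.5 is a fixed point of this S-curve, so midtones stay put while shadows darken and highlights brighten.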
Qilin Sun (孙启霖)
The Chinese University of Hong Kong, Shenzhen