Assignment 2: Scanner Characterization
======================================

Goal
----

To create a scanner model that predicts tristimulus values (CIE XYZ)
from scanner RGB.

What to Do
----------

1. Choose a flatbed or transparency scanner, and scan an appropriate
   Kodak Q60 target.  The resolution does not have to be very high, as
   long as you can reliably select at least 10x10 pixels within any
   given coloured patch.  Make sure the scanner settings (brightness,
   contrast, etc.)

   - are reproducible (save them or write them down);
   - avoid colour clipping at the gamut edges (RGB values of 0 or 255
     are suspect).

2. Write a small program that takes the scanned Q60 image and dumps
   out an RGB triple for each coloured square.  Ideally, you want to
   average the pixels in the interior of each patch [KEEN1], although
   you may find it simpler just to dump a central pixel.

3. Using Matlab or similar software, take the RGB triples from (2) and
   the known CIE XYZ values for the Q60 target, and use cubic
   regression to determine a function f(RGB)=XYZ.  (This function
   depends on the scanner, its settings, the illuminant used for XYZ,
   and the Q60 medium---better results may be possible by predicting
   reflectances.)  Note that

   - the linear and quadratic cases are subsets of the cubic case, so
     you may want to support them also [KEEN2];
   - initial "gamma correction" of the RGB (to improve linearity with
     Y) may improve accuracy [KEEN3];
   - inversion of the model may be approximated by performing the
     regression in the other order, g(XYZ)=RGB.

4. Write a small program that takes an arbitrary scanned image
   (assuming the same scanner and settings) and converts it to an XYZ
   image by applying function f() to each pixel.  Don't worry about
   the speed of the algorithm or the compactness of the XYZ image
   format---a large file containing (numRows, numCols) and a list of
   float triples in ASCII is sufficient.
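The patch averaging in step (2) might be sketched as below.  This is a
rough illustration in Python/NumPy rather than a required approach;
the grid geometry (`rows`, `cols`) and the `margin` fraction are
assumptions you must adapt to your actual Q60 scan, which will not be
perfectly axis-aligned.

```python
import numpy as np

def patch_means(image, rows, cols, margin=0.3):
    """Average the interior pixels of each patch in a regular grid.

    image : (H, W, 3) array of scanner RGB.
    rows, cols : number of patch rows/columns (hypothetical; measure
                 the real Q60 layout from your own scan).
    margin : fraction trimmed from every side of each patch, so only
             the central region is averaged [KEEN1].
    """
    H, W, _ = image.shape
    ph, pw = H / rows, W / cols
    means = np.empty((rows, cols, 3))
    for r in range(rows):
        for c in range(cols):
            y0 = int(r * ph + margin * ph)
            y1 = int((r + 1) * ph - margin * ph)
            x0 = int(c * pw + margin * pw)
            x1 = int((c + 1) * pw - margin * pw)
            means[r, c] = image[y0:y1, x0:x1].mean(axis=(0, 1))
    return means
```

Averaging over the interior reduces scanner noise and avoids the
patch borders, where pixels bleed between neighbouring colours.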
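The cubic regression in step (3) amounts to expanding each RGB triple
into all polynomial terms up to degree 3 and solving a least-squares
problem against the known XYZ values.  A minimal sketch, assuming
NumPy (function names here are illustrative, not prescribed); note
that `degree=1` and `degree=2` give the linear and quadratic subsets
mentioned in [KEEN2]:

```python
import itertools
import numpy as np

def poly_features(rgb, degree=3):
    """Expand an N x 3 RGB array into polynomial terms up to `degree`:
    a constant, then R, G, B, then R*R, R*G, ... for each degree."""
    rgb = np.asarray(rgb, dtype=float)
    cols = [np.ones(len(rgb))]  # constant term
    for d in range(1, degree + 1):
        for combo in itertools.combinations_with_replacement(range(3), d):
            cols.append(np.prod(rgb[:, list(combo)], axis=1))
    return np.column_stack(cols)

def fit_scanner_model(rgb, xyz, degree=3):
    """Least-squares fit of f(RGB)=XYZ; returns a coefficient matrix.
    Fitting g(XYZ)=RGB is the same call with the arguments swapped."""
    A = poly_features(rgb, degree)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(xyz, dtype=float), rcond=None)
    return coeffs

def apply_model(rgb, coeffs, degree=3):
    """Evaluate the fitted f() on an N x 3 array of RGB triples."""
    return poly_features(rgb, degree) @ coeffs
```

Any initial "gamma correction" [KEEN3] would simply be applied to the
RGB values before they are passed to `poly_features`.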
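Step (4) is then a pixel-wise application of f() plus the simple ASCII
dump described above.  A sketch, again assuming NumPy; `f` here is any
function mapping an N x 3 array of RGB triples to N x 3 XYZ:

```python
import numpy as np

def convert_image(rgb_image, f):
    """Apply f (N x 3 RGB -> N x 3 XYZ) to every pixel of an image."""
    H, W, _ = rgb_image.shape
    flat = rgb_image.reshape(-1, 3).astype(float)
    return f(flat).reshape(H, W, 3)

def write_xyz_ascii(path, xyz_image):
    """Write an XYZ image as ASCII: a '(numRows, numCols)' header line,
    then one 'X Y Z' float triple per line, in row-major order."""
    H, W, _ = xyz_image.shape
    with open(path, "w") as f:
        f.write(f"{H} {W}\n")
        for X, Y, Z in xyz_image.reshape(-1, 3):
            f.write(f"{X:.6f} {Y:.6f} {Z:.6f}\n")
```

As the assignment notes, this format is deliberately wasteful; speed
and compactness are not being marked.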
If you are keen [KEEN4], you may want to split this program in three:

- a program to take an image and determine its colours uniquely,
  outputting an index image and a colour table;
- the program to apply the regression function to the colour table;
- a program to recombine the original index image with the modified
  colour table, outputting either an XYZ image as above, or a coded
  binary image of some type.

What to Hand In
---------------

A. Show 2D plots (Y vs X, Y vs Z, and chromaticities y vs x) of the
   Q60 XYZ values with your approximate f(RGB)=XYZ superimposed.  It
   is unlikely that they will match exactly, and you should note the
   regions where the errors are large.

B. Using the appropriate formula, convert your XYZ to LAB, and compare
   with the measured Q60 LAB.  Plot the measured and approximate data
   with 2D graphs as L* vs a*, L* vs b*, and b* vs a*.  What are the
   average delta-E error and maximum delta-E error for your
   approximation?  Where are the largest errors?  Are they in the same
   region as you thought from (A)?

C. You now have an opportunity to try some colour management.  Convert
   scans of the Q60 target and another hardcopy original to XYZ, and
   pass them through your monitor model from Assignment 1.  Display
   the images on your monitor, and compare them with the originals as
   best you can in poorly controlled lighting.  Try displaying the raw
   scanned RGB images as well.  Did your hard work modelling the
   scanner and monitor make any difference, or is sending the raw scan
   directly to the uncalibrated monitor just as good?  Write a
   paragraph commenting on the difficulty of this comparison, and any
   successes/failures/headaches you encountered (e.g. gamut mapping).

Marking
-------

For anything that does cubic regression to model the scanner, and
reasonably predicts the Q60 XYZ values (A above), I'll give 6/10.
For your answers in B and paragraph in C, I'll give another mark out
of 2.
For implementation of any of the [KEEN] features noted above, I'll give another mark out of 2.
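As a supplement to part (B) above, the XYZ-to-LAB conversion and the
delta-E calculation might look as follows.  This sketch uses the
standard CIE L*a*b* formulas and CIE76 delta-E (Euclidean distance in
LAB); the default white point is D50, which is an assumption --- use
whichever illuminant your Q60 reference data is measured under.

```python
import numpy as np

def xyz_to_lab(xyz, white=(96.42, 100.0, 82.49)):
    """CIE XYZ -> L*a*b*.  `white` defaults to a D50 white point
    (assumed here); substitute the illuminant of your Q60 data."""
    xyz = np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float)
    eps, kappa = 216 / 24389, 24389 / 27
    f = np.where(xyz > eps, np.cbrt(xyz), (kappa * xyz + 16) / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e(lab1, lab2):
    """CIE76 delta-E: Euclidean distance between two L*a*b* values."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)
```

The average and maximum asked for in (B) are then just `delta_e(...)
.mean()` and `.max()` over the predicted and measured patch values.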