Determining amplitude for a linearly diffused visible spectrum
For a personal project I'm trying to refactor and understand someone else's code. While I understand there are free and paid tools out there (Spectral Workbench, ImageJ, MATLAB) that do the same thing, I'm trying to go at this from step one and really understand how it works.
Using Python, I get an image with x, y coordinates, and I can find the r, g, b values for any pixel. For example, I have an image of a diffused spectrum with vertical lines of black and color. Is it best to calibrate the camera setup using a known visible source at the desired limits, correlate the camera pixels to that range, and state that all readings at pixel x1 correspond to lambda_min and at x2 to lambda_max, with a linear traversal across x? Or should I do this for 10 or 20 calibration points, with a linear traversal from one point to the next?
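Both options reduce to interpolation between calibration points: with two points you get the single lambda_min-to-lambda_max line, and with more points you get a piecewise-linear map that can absorb optical distortion across the sensor. A minimal sketch of that idea, using made-up pixel positions and wavelengths (your own calibration lines would replace them):

```python
import numpy as np

# Hypothetical calibration: known emission lines (e.g. from a reference
# lamp) observed at specific pixel columns. These pixel positions and
# wavelengths are illustrative only -- yours come from your own setup.
cal_pixels = np.array([112.0, 340.0, 517.0, 770.0])        # pixel x positions
cal_wavelengths = np.array([435.8, 546.1, 611.6, 708.0])   # wavelengths in nm

def pixel_to_wavelength(x, pixels=cal_pixels, wavelengths=cal_wavelengths):
    """Piecewise-linear interpolation between calibration points.

    With only two entries in `pixels` this reduces to the single linear
    traversal from lambda_min to lambda_max; more points correct for
    distortion that a single straight line cannot capture.
    """
    return np.interp(x, pixels, wavelengths)

# Map every column of an 800-pixel-wide image to a wavelength:
wavelengths_per_column = pixel_to_wavelength(np.arange(800))
```

The design choice is then empirical: if a two-point calibration predicts your other known lines to within a pixel or two, the linear map is adequate; if not, add intermediate points.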
At each vertical column of pixels at position x, I get the r, g, b values. I found a method that computes the amplitude by summing r, g, b with green counted twice (amplitude = r + 2*g + b). I understand there is a reason for this having to do with over-/under-representation of green, and that it acts as a correction factor, but what is the actual reason? I've also seen methods that use sqrt(r + 2*g + b); which is recommended?
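For context on the doubled green: a typical camera's Bayer mosaic has two green photosites in every 2x2 block (matching the eye's peak sensitivity near green), so r + 2*g + b is a crude stand-in for a proper luminance weighting such as 0.299*R + 0.587*G + 0.114*B. A minimal sketch of summing that amplitude down each column, using a tiny hypothetical image array in place of your photo:

```python
import numpy as np

# Hypothetical 3-row x 4-column RGB image (uint8), standing in for one
# strip of the spectrum photo. Each inner triple is (r, g, b).
img = np.array([[[10, 20, 30], [0, 0, 0], [100, 50, 25], [255, 255, 255]],
                [[10, 20, 30], [0, 0, 0], [100, 50, 25], [255, 255, 255]],
                [[10, 20, 30], [0, 0, 0], [100, 50, 25], [255, 255, 255]]],
               dtype=np.uint8)

def column_amplitude(image):
    """Sum r + 2*g + b over each vertical column of pixels.

    Cast to a wide integer type first so the weighted sum cannot
    overflow the uint8 channel values.
    """
    r = image[..., 0].astype(np.int64)
    g = image[..., 1].astype(np.int64)
    b = image[..., 2].astype(np.int64)
    per_pixel = r + 2 * g + b
    return per_pixel.sum(axis=0)  # collapse each column to one number

amps = column_amplitude(img)
```

The sqrt variant only compresses the dynamic range of the same quantity (large peaks shrink relative to small ones); it does not change which column is brightest, so the choice depends on how you want to display or compare amplitudes, not on correctness.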
I don't want to say I've done enough research, as I keep finding more items of interest, but the papers and texts I've read seem to gloss over these important details and leave it to open-source or proprietary code to supply the final answer.
Submitted June 08, 2017 at 10:22AM by BloodEngineer
via reddit http://ift.tt/2r6mjKP