Ejaz (Electrical) - Dec 4, 2001
I'm capturing raw images from an experimental CMOS digital camera. This camera doesn't have RGB masks on the CMOS; the sensor is monochrome. An instrument with a camera lens is fitted in front of the CMOS. It has four apertures, so the image projected onto the sensor is divided into four equal parts (upper left = UL, upper right = UR, lower left = LL, lower right = LR). I'm using three of the four apertures (UR, LL, LR) to hold three optical filters, roughly like RGB (but not exactly RGB). The captured grayscale image on the computer screen is therefore divided into four parts: one part is unused, and the other three are the images seen through the individual filters.

The CMOS sensor has a bit depth of 12 bits/pixel, but the camera and the capture board handle it as 16 bits/pixel. The captured raw image is saved as an uncompressed 16 bits/pixel TIFF, with the 4 extra bits stored as leading zeros. Now I would like to do the following with the image using Matlab:
1) Strip the 4 extra (zero) bits from the 16-bit image so that it becomes a 12-bit image.
2) Make a composite image from the UR, LL and LR sub-images (see the sketch below for what I have in mind).
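Here is a minimal Matlab sketch of what I'm trying, assuming the file is called raw.tif, the quadrants split the frame exactly in half, the filter-to-channel assignment is arbitrary, and the Image Processing Toolbox is available (for imshow/im2uint16):

% Read the 16-bit TIFF (file name is an assumption)
raw = imread('raw.tif');                 % uint16, 12 significant bits, upper 4 bits zero

% 1) Keep only the 12 significant bits.
% Because the 4 extra bits are leading zeros, the values are already in 0..4095;
% masking just guarantees that, and a normalized copy is handy for display.
img12   = bitand(raw, uint16(4095));     % 2^12 - 1 = 4095
imgNorm = double(img12) / 4095;          % normalized to [0,1]

% 2) Split the frame into quadrants and build a composite.
[rows, cols] = size(imgNorm);
r2 = floor(rows/2);  c2 = floor(cols/2);
UR = imgNorm(1:r2,      c2+1:2*c2);      % filter 1
LL = imgNorm(r2+1:2*r2, 1:c2);           % filter 2
LR = imgNorm(r2+1:2*r2, c2+1:2*c2);      % filter 3

% Stack the three sub-images as the planes of an RGB-like composite
% (which filter goes to which channel is an assumption).
composite = cat(3, UR, LL, LR);
imshow(composite);
imwrite(im2uint16(composite), 'composite.tif');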
Could anyone please advise me on this?
Thanks in advance!