
Thread: RGB calibration

  1. #1
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254

    RGB calibration

    In a few days I'll be taking RGB pictures, yet I don't know how to calibrate them, except by comparing them with an RGB picture of a white reference taken with the same settings.

    I read it would be good to use different exposure and gain for each channel so that I get a similar histogram peak in all of them. Even if I use the same parameters in all channels, I should still balance them to correct for sensitivity variations of the camera, and even for atmospheric effects near the horizon. So, with so many variables, is there some process or rule of thumb to bring all of that back near the true colors?

    Thanks.
    English is not my first language.

  2. #2
    Join Date
    Dec 2006
    Posts
    3,275
    This is one way I do it.
    http://www.astrodon.com/Orphan/g2v_tutorial/

    If Sloan data is available then I use eXcalibrator
    http://bf-astro.com/eXcalibrator/excalibrator.htm

    Before this, the frames should be normalized (if taken on different nights). When they are assembled into an RGB image you will need to neutralize the background. When done properly, if the image is of a star field or a typical galaxy, then yes, the peaks for each color will align (no need to force this; proper calibration does it automatically). For nebulae in which one color (blue or red, in many cases) dominates, the peaks won't align, as the object itself is skewed to begin with. This is why developing good calibration skills is important: you have to be able to trust your calibration in cases where no double check is available because the average color of the image isn't white.

    It takes practice to do this. Don't move to color until you have mastered mono. I doubt you are there yet. I did this for years in a chemical darkroom with film yet still spent nearly a year taking at least 100 mono images with a CCD before moving to color.

    Rick

  3. #3
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    By neutralizing the background, do you mean clipping the left part of the histograms so the background is white/gray?

    I noticed the background had a color cast when I increased the gamma, and I clipped the corresponding histograms to force the background to the same value in each RGB channel.

    Is that correct?
    English is not my first language.

  4. #4
    Join Date
    Dec 2006
    Posts
    3,275
    NO NO NEVER CLIP.

    Once the stars are calibrated, the background will likely have some color cast and be very color-noisy due to the very low signal it contains. By neutralizing just the background you eliminate these issues without changing the color balance of the image itself. Various processing packages have different tools for this, but the end result is about the same for all: a color-neutral background without clipping or altering the color balance.
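
    As a rough illustration of what such tools do, here is a minimal Python/NumPy sketch (my own simplification, not any particular package's algorithm), assuming a float RGB array and a hand-picked star-free patch: shift each channel's pedestal so the background medians agree, clipping nothing.

```python
import numpy as np

def neutralize_background(img, bg_region):
    """Equalize per-channel background medians with additive offsets.

    img: float array, shape (H, W, 3), normalized values
    bg_region: tuple of two slices selecting a star-free patch
    A small pedestal shift per channel leaves the multiplicative
    color balance of bright objects essentially untouched, and
    nothing is clipped in the process.
    """
    img = img.astype(np.float64)
    bg_medians = np.array([np.median(img[bg_region + (c,)])
                           for c in range(3)])
    target = bg_medians.mean()        # common neutral-gray level
    offsets = target - bg_medians     # per-channel pedestal shift
    return img + offsets              # no clipping anywhere
```

    Real tools work selectively on the background rather than shifting the whole frame, but the goal is the same: equal background levels in R, G and B.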

    Rick

  5. #5
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    OK! You scared me enough.

    As the background is supposed to be dark, the color difference should be only a few levels. So can I neutralize the background with minimal impact on the rest of the image just by adjusting levels or using the WhiteCal plugin? Or will only the background color actually be changed?
    English is not my first language.

  6. #6
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    This is the original RGB image. I used wavelets and adjusted the contrast to keep the histogram away from saturation.

    Saturno RGB.jpg

    I used the WhiteCal plugin on a region of the B ring, but could not completely eliminate that green band on the planet. Maybe I should have selected a bigger region of it.

    And this is the luminance. I tried to get a full histogram, but when the pixels are too close to white, the color is less visible. Is it OK to reduce the brightness to enhance the color perception? Or is there a better way?

    Saturn Luminance.jpg
    English is not my first language.

  7. #7
    Join Date
    Dec 2006
    Posts
    3,275
    WhiteCal adjusts the entire image, not only the background. Since the background is small, it takes a LARGE change to have much effect. Taking a blue pixel from 5 to 10 is a 100% adjustment!

    You want to adjust only the background and remove the color noise. Not knowing what software you are using, I can't be more specific. Each program seems to give the tool a different name.

    In planetary images, where there's no real background to begin with, you can cheat in Photoshop by duplicating the image into a layer. Desaturate the new layer, removing all color. Then use the color select tool to select the background. Be sure no dim moon is included in the selected background. Now turn off this layer and move to the original layer. The selection will still be there, with the color restored. Now desaturate the selected background. With it still selected, you can then give it any tint you like for a planetary image. Delete the no-longer-needed layer and deselect the background. This method is not recommended for anything but planetary images. For DSO images, always use a tool designed for them, since there is often real detail and color in the background that shouldn't be altered or removed.

    The same website that had the WhiteCal program also has HLVG, which will adjust excess green, but be sure the green really is incorrect before using it! Like WhiteCal, it applies to the entire image, ignoring any attempt to select a particular area.

    For adjusting your Saturn image, I used the B ring because it was all I had. You can do a lot better by actually using a white source. A G2V star is best. Be sure it is imaged with exactly the same imaging train as Saturn was. Also, the star should be at about the same altitude above the horizon as Saturn, to compensate for atmospheric extinction. The best color values are obtained by using the software from the video, or an equivalent (PixInsight has a good routine for this as well), and noting the correction values as the video showed. Then use the same software to apply these values to your image. The WhiteCal method is not really recommended; it was just all I had available, not having access to your scope, camera, etc. Good color balance is not easy.

    Rick

  8. #8
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    Thank you again.

    For planetary, I'm using RegiStax 6. For deep space, I used DeepSkyStacker and CCDStack2. (I preferred the latter, since I could choose to center the images manually.)

    Should I avoid using wavelets and level adjustments on the isolated RGB channels before fusing them?

    If the color correction is just multiplying each channel by a constant, what about the non-linear modifications I did before?
    English is not my first language.

  9. #9
    Join Date
    Dec 2006
    Posts
    3,275
    You determine the correction factors immediately after dark and flat calibration of the G2V star, so you apply them to your image at that same step: right after calibration for darks and flats. They won't be valid at any other time! I then make the RGB image and work on it as a whole. Doing something different to one color channel than to the others will just upset your color balance. If the balance was done correctly, no such individual adjustments will be necessary.

    Now to complicate things. If one channel is fuzzier than the others, but your calibration image is equally sharp in all three colors, then some tweaking will be in order, since the fuzzy channel wasn't equal to that channel in the calibration image. There are several ways of doing this, but which is best depends on the image itself and the skill of the processor. For now, try to avoid such situations.
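
    The per-channel scaling described above amounts to one multiplication per channel, applied right after dark/flat calibration. A minimal Python/NumPy sketch (the factor values here are placeholders; yours will come from your own G2V measurement):

```python
import numpy as np

# Placeholder G2V correction factors (R, G, B) from a calibration
# run -- these exact numbers are only an example.
G2V_FACTORS = (1.00, 0.66, 0.69)

def apply_color_correction(r, g, b, factors=G2V_FACTORS):
    """Scale each dark/flat-calibrated channel by its G2V factor.

    Work in floating point so nothing saturates; combine the three
    scaled channels into an RGB image afterwards.
    """
    fr, fg, fb = factors
    return (np.asarray(r, float) * fr,
            np.asarray(g, float) * fg,
            np.asarray(b, float) * fb)
```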

    Rick

  10. #10
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    I've read there is some usefulness in sharpening the color channels separately, perhaps sacrificing the color balance to highlight details, but I'll favor color realism.

    I'll work on the images again from the AVIs. I'm just afraid that RegiStax will throw data away when I create TIFFs, so I won't be able to use wavelets to the same extent on the combined RGB image. Well, if it is the RGB image, maybe I can use it without sharpening or adjusting levels at all.

    I'll post the result later.
    English is not my first language.

  11. #11
    Join Date
    Dec 2006
    Posts
    3,275
    I don't use RegiStax, so I don't know how it works, but I doubt it throws anything away in making the combination. TIFF supports all three color layers individually, up to 32 bits deep in fact, though most video cameras don't go that deep.

    I've heard that theory. It never worked for me, and those I've seen using it don't, to my eye, get the results that those not using it get. I tried that method for a while when first getting into color and never could make it work well. It is handy for some narrow-band type imaging, however, where color balance is immaterial: combining IR, UV and visual, for instance.

    I get the best detail when I combine the RGB into a pseudo-L channel (mono), or just take mono images if time allows (Jupiter rotates fast), and do all my detail processing on that. Thus, I process the L channel for detail and the RGB combine for color accuracy, then use the latter to color the detail of the L channel.

    Rick

  12. #12
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    Hi,

    I will describe why I think I could end up wasting data. Correct me if I say something wrong.

    In the process of stacking the 8-bit monochrome images, with pixels ranging from 0 to 255 in value, I end up with an average value of the whole stack for a given pixel, which is still between 0 and 255 but generally not an integer. When I apply the wavelets and adjust levels, I won't be working with a normal 8-bit image anymore; effectively, its range is bigger.

    When I choose to do nothing to the stacked image and save it as a normal file, I suppose the values are truncated to integers again. So if I apply wavelets only after I have saved and combined the three images, I will be working on a normal 8-bit image again.
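
    This concern can be demonstrated numerically (a Python/NumPy sketch, not tied to any particular stacking program): averaging many 8-bit frames produces fractional values, rounding back to 8 bits throws that sub-level precision away, and a 16-bit save keeps it.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hundred 8-bit frames of the same scene with noise: values 118..123.
frames = rng.integers(118, 124, size=(100, 64, 64)).astype(np.float64)
mean = frames.mean(axis=0)       # fractional values, finer than 8-bit steps

# Two ways to save: back to 256 levels, or rescaled onto 65536 levels.
as_8bit  = np.round(mean).astype(np.uint8)
as_16bit = np.round(mean * 257).astype(np.uint16)   # 0..255 -> 0..65535

err_8  = np.abs(as_8bit.astype(float)        - mean).max()  # up to 0.5 level
err_16 = np.abs(as_16bit.astype(float) / 257 - mean).max()  # ~0.002 level
```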

    ***

    Now, something I thought about for calibrating the colors of Saturn. Could I use a correctly calibrated picture of the B ring to get the exact values? It's not a white calibration, but a generic color calibration. Should I multiply the color values by constants so the B ring of my picture has the same proportions as the ideal B ring?
    English is not my first language.

  13. #13
    Join Date
    Dec 2006
    Posts
    3,275
    I don't know that program. All the programs I work with allow you to save as a 32- or 64-bit file. That takes 256 levels to 65,536 or more. In the end, though, you still come back to 256 levels. Only the super-expensive monitors can even reproduce that, so the loss, even if you stay at 8 bits, is very small and likely never visible.

    How do you know the image you're using is correctly calibrated? I find most by NASA and the major observatories aren't, simply because they didn't use RGB filters. Instead, they used some rather narrow-band scientific filters and approximated the correct colors. Even Cassini does this. Close, yes, but not a calibrated source.

    Rick

  14. #14
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    It's a pity. I knew that was an issue in finding the true color of Mars's sky, but not that it affected every space agency's pictures. However, somebody must have gotten it right. Do you know of anybody? I know you prefer deep space, but perhaps even yourself?
    English is not my first language.

  15. #15
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    I think I've understood something of the G2V calibration method.

    Knowing the RGB ratios of my camera and filters, I can 1) balance the exposure time for each channel, or 2) balance the number of frames of equal exposures (for similar S/N ratios) and then balance the result.

    But are those things proportional? I've heard the S/N ratio only improves with the square root of the number of frames.
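
    The square-root relation can be checked with a quick simulation (a Python/NumPy sketch; a constant signal with Gaussian noise is an idealization of real frames):

```python
import numpy as np

rng = np.random.default_rng(1)
signal, sigma = 100.0, 10.0    # per-frame signal and noise, arbitrary units

def stacked_snr(n_frames, n_pixels=20000):
    """Empirical S/N of the average of n_frames noisy frames."""
    frames = signal + sigma * rng.normal(size=(n_frames, n_pixels))
    stack = frames.mean(axis=0)   # averaging divides noise by sqrt(n_frames)
    return stack.mean() / stack.std()

# One frame gives S/N around 10; sixteen frames give around 40:
# 16x the frames, but only 4x (the square root) the S/N ratio.
```

    So doubling the exposure time of each frame doubles the signal directly, while doubling the number of frames only improves the S/N by about a factor of 1.4; the two are not interchangeable one-for-one.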
    English is not my first language.

  16. #16
    Join Date
    Jan 2008
    Posts
    698
    Quote Originally Posted by Jairo View Post
    I will describe why I think I could end up wasting data. Correct me if I say something wrong.

    In the process of stacking the 8-bit monochrome images, with pixels ranging from 0 to 255 in value, I end up with an average value of the whole stack for a given pixel, which is still between 0 and 255 but generally not an integer. When I apply the wavelets and adjust levels, I won't be working with a normal 8-bit image anymore; effectively, its range is bigger.

    When I choose to do nothing to the stacked image and save it as a normal file, I suppose the values are truncated to integers again. So if I apply wavelets only after I have saved and combined the three images, I will be working on a normal 8-bit image again.
    RegiStax has the option of saving images as 16-bit/channel FITS, TIFF and PNG. While this is less than the 32-bit integer or floating point some deep-sky programs use, it is usually quite sufficient.

  17. #17
    Join Date
    Dec 2006
    Posts
    3,275
    I figured it had some option like this. Thanks for verifying. That is plenty sufficient. Even after stacking hundreds of images, a 16-bit integer file contains a far wider dynamic range than a stack of 8-bit images can create. No data would be lost; just meaningless noise would be cut off.

    Stacking more frames does nothing for color balance. It reduces noise, but then the noise of that channel no longer matches the noise of the other channels, which creates its own issues. For most filter/chip combinations, software adjustment as shown in the video I linked to way back is the best way to go. You can increase the exposure time of each frame proportionally as well, but this adds to the complication of taking the image. Only if 50% or more extra time were needed to equalize a color would I go that route. The largest difference I've seen was about 1.4, and it was handled by software just fine.

    More good frames means your signal increases faster than the noise, since signal builds faster. But stacking poor frames doesn't help. Get as many as you can (difficult with fast-rotating Jupiter; easy with Saturn, which has little fine detail along the direction of its rotation) and select the best, but try to get about as many good ones in each color if making an RGB image. If making an LRGB image, then the number of good L images governs the detail. Noise in the color image can be hidden by blurring it if necessary. Interestingly, the eye is very insensitive to color detail, so blurring it has virtually no effect on what we see in the final LRGB image. For fast-rotating Jupiter this can be a great cheat to get enough frames for detail before it has rotated too far.
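
    Frame selection of this kind can be sketched with a simple sharpness metric (a Python/NumPy illustration; gradient energy is just one possible quality proxy, and RegiStax and similar tools use their own estimators):

```python
import numpy as np

def select_best_frames(frames, keep_fraction=0.05):
    """Rank frames by a simple sharpness proxy and keep the best few.

    frames: array of shape (N, H, W). Sharper frames have stronger
    pixel-to-pixel gradients, so sum the squared differences along
    both axes and keep only the top keep_fraction of frames.
    """
    frames = np.asarray(frames, float)
    gy = np.diff(frames, axis=1)              # vertical gradients
    gx = np.diff(frames, axis=2)              # horizontal gradients
    sharpness = (gy ** 2).sum(axis=(1, 2)) + (gx ** 2).sum(axis=(1, 2))
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(sharpness)[::-1][:n_keep]
    return frames[best]
```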

    I'm basically a deep-sky imager and not an expert in planetary work. For better info I suggest you contact Efrain Morales Rivera, who specializes in this area. His website is
    http://jaicoa-observatory.com/ He is in Puerto Rico, where the ecliptic is well placed for planetary work. His email address is at the top of the home page.

    Rick

  18. #18
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    I was thinking... if the unsharp mask and the wavelets just put a luminance mask over the image, does that mean it's useless to apply them to the RGB image?

    And does atmospheric turbulence count as noise? Or can it only be countered by selecting frames?
    English is not my first language.

  19. #19
    Join Date
    Dec 2006
    Posts
    3,275
    I don't know the theory here. I've always had an easier time of it if I process a luminance channel (real, or a combination of the RGB channels) for detail using whatever methods work best (deconvolution, wavelets, high-pass filtering, unsharp mask, etc.) than if I do this to the combined color channels, as that can screw up the color balance. So, as I've mentioned, I prefer to process an L channel for detail and the RGB channels for color, then combine them by putting the RGB on the bottom and the L channel on top as a luminance layer. This allows its opacity slider to be adjusted to prevent washing out the color in the brighter regions.

    The atmosphere has several effects, all technically not noise, but some have to be treated as if they were true noise; others can at least be partly compensated for. Obviously it's best to start with those 50 or so frames out of a thousand that are the best. Sometimes seeing distorts the position and shape of some feature, but if those regions are noted in the stacking step, I believe RegiStax, as well as others, will now help correct this. Seeing can also create problems that mimic poor focus or optical issues like over- or under-correction. Deconvolution, if done correctly, can somewhat compensate, the same as it did for Hubble before its optics were corrected. This works best when the signal-to-noise ratio is very high, which means lots of good-quality images in the stack to begin with.

    Other seeing issues break up the Airy disk into multiple images. This, in theory, is correctable: with proper high-speed mirror warping and monitoring of an artificial star right beside the object, it can be corrected in the infrared, but so far not in the visible spectrum, and it is far beyond the amateur level. There are other ways of dealing with it for solar imaging, but they too are far beyond our means. When this type of seeing exists, you have to go with as fast an exposure as the camera is capable of and collect as many frames as possible (sometimes called "lucky imaging") before the object changes too much, due to changing shadows on the Moon or rotation in the case of Jupiter or even Mars. The higher the frame rate of the camera, the better for getting this quantity of frames. We amateurs have to treat this type of seeing as noise, even though it doesn't fit the strict definition of random noise.

    In short, just grab as many frames as you can at as high a shutter speed as is reasonable (a fast shutter can mean high gain and thus high noise, so a balance is needed, as well as as low-noise and high-sensitivity a camera as you can afford). Let the software deal with what it can, but know there's no substitute for the highest-quality data to begin with. It's easy to over-process an image so it becomes very unnatural in an attempt to deal with poor data, so use the software tools with restraint. If conditions just don't allow a top image, accept that rather than force the data. I'd say there's always tomorrow to try again, but in some cases (a transit of Venus) that isn't always true.

    Rick

  20. #20
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    I was putting the RGB layer as "color" over the luminance layer, and I was suffering exactly from the bright areas washing out the color; I needed to make the result less bright to see the colors. I also like your method of always sharpening a luminance layer, even one made from the RGB layer. It makes sense, and I'll give it a try.

    So do you think there's no problem for the S/N ratio in using the same number of identical exposures for each color channel, as long as I balance the result in the computer?

    Why the sad face? Have you missed both transits?
    English is not my first language.

  21. #21
    Join Date
    Dec 2006
    Posts
    3,275
    Another trick that can enhance color without the issues of pushing saturation too far is to raise the brightness level of the color layer using curves. Grab the very center of the line and raise it straight up, stopping short of saturation. If your color layer was well done it will now look lousy, but the combined result can do wonders for the color saturation.

    Also, when raising saturation, do so in small steps, not more than 30% each time. You can likely get away with three such steps before problems arise. In no case go over 50% at a time. Once this is maxed out, try the curves mentioned above. You may then find you can raise the opacity of the luminance back up and still keep the color.
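
    A gamma-style curve is a rough stand-in for this kind of curves move (a sketch only; Photoshop's curve through a raised midpoint is a spline, not exactly a gamma, but the shape is similar): the endpoints stay fixed at black and white, the midpoint is lifted, and nothing saturates.

```python
import numpy as np

def midtone_lift(x, mid_out=0.75):
    """Raise the midtones of normalized data in [0, 1].

    Endpoints map to themselves (0 -> 0, 1 -> 1) and the midpoint
    0.5 maps to mid_out, so nothing is pushed into saturation.
    Implemented as a gamma curve: gamma = log(mid_out) / log(0.5).
    """
    gamma = np.log(mid_out) / np.log(0.5)
    return np.clip(np.asarray(x, float), 0.0, 1.0) ** gamma
```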

    As long as the color adjustment factor is under about 1.4 or 1.5, and you have enough color signal for a reasonable signal-to-noise ratio when the color layer is blurred by no more than 1.8 pixels, color adjustment by software is the normal route to go. I've seen some go as high as 1.6; that may take more frames of the color being pushed that far to keep that color in the fainter portions of the image.

    I caught small parts of both transits. But many other one-time events have been clouded out totally, especially some near-Earth asteroids passing inside the Clarke satellite belt, for instance. I was just using the transit as an example of an event for which there was no second chance (well, third chance).

    Rick

  22. #22
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    Hi,

    I used a G2V star as reference. I combined all three unprocessed channels with the same parameters, and the proportion WhiteCal used to make it white was RGB = 1/0.66/0.69.

    It was Alpha Centauri A, but it was well split from B. It was an 8-inch aperture with a 30 m focal length. Is there anything else that could go wrong?
    English is not my first language.

  23. #23
    Join Date
    Dec 2006
    Posts
    3,275
    I've never used WhiteCal that way. I use the method in the videos, as it is designed for this purpose. WhiteCal is not quite the same, since you can't control for the background. It should be close, however, if the star is not saturated (half intensity or less if using an ABG chip, as you likely are) and the background is a neutral gray. WhiteCal is more for what I used it for in your Saturn image: a "save your backside" type of rescue situation, when nothing else is available. The method in the video, using software made for this purpose, measures both the background and the star, then subtracts the background from the star to get the color of the star alone. WhiteCal doesn't do this. It just assumes the average of what it sees is white, even though that isn't likely the case, or it alters the object's color while making the background neutral. Thus it is close in some cases, but if your system sees color flares, is affected by gradients or a zillion other things, or the star is too saturated, it will be off by some unknown amount.
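
    The background-subtracted measurement described above can be sketched as follows (a Python/NumPy illustration of the principle; selecting regions by slices is a simplification of real aperture photometry):

```python
import numpy as np

def g2v_factors(img, star_region, bg_region):
    """Per-channel correction factors from a G2V star.

    img: float array (H, W, 3); star_region and bg_region are tuples
    of two slices picking a box around the star and a star-free patch.
    The background level is subtracted from the star flux in each
    channel first, then factors are chosen so the G2V star comes out
    white (equal flux in R, G and B after multiplication).
    """
    flux = np.empty(3)
    for c in range(3):
        star = img[star_region + (c,)]
        bg = np.median(img[bg_region + (c,)])
        flux[c] = (star - bg).sum()   # background-subtracted star flux
    return flux.max() / flux          # multiply each channel by its factor
```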

    Rick

  24. #24
    Join Date
    May 2004
    Location
    São José dos Campos, São Paulo, Brazil
    Posts
    254
    I watched the histograms and was careful not to saturate the channels. The red channel may have gone past halfway up the histogram, though.

    I knew the background noise could be an issue, but Alpha Cen A is so bright, and I stacked so many hundreds of frames at low gain, that the background showed no grain when I pushed up the gamma. Could I assume it was actually black for my purposes?

    I tried this because I still don't have the recommended software to quantify the luminous flux in the images. But I'll concentrate on doing that now and check how close I got with WhiteCal.
    English is not my first language.
