Post Processing QAD

  • What kind of a shooter are you? While there are many different sub-species of photographers, for the purposes of quick-and-dirty advice you'll fall into one of these three: 
    • You're not a post processor. Your preference is to set camera settings as best you can and then print directly from the resulting image, either by using PictBridge or a DPOF-enabled printer (generally one with a card slot), by having a third party print for you (Wal*Mart, Costco, a local drug store, a camera store, a professional lab, whatever), or perhaps by just using the images for email and Web use. If you fall into this group, you do your post processing in camera: 
      • Set sRGB on the camera. Almost all labs and the Web require it.
      • Get white balance right when shooting. Use an Expodisc or preset white balances measured from a known source when you're not sure of what to set.
      • Watch your exposures. Bunched histograms at either end won't look right. Histogram blowouts are a no-no.
      • Be careful of Hue, Saturation, Contrast, and other controls: for anything that shifts colors or tonalities you will need to learn how the camera does that and when you might want to use it. Saturation and Contrast, in particular, have a nasty habit of changing histograms.
      • Set a higher Sharpening value. You have to counteract the softening of the anti-aliasing filter to get good prints, so you need to be aggressive with the camera's sharpening setting. Experiment to find out what level works best for each of the printing or display outputs you use.
      • The preceding item means that you also have to be careful not to set ISO too high (sharpening exaggerates noise), and you need to keep your JPEG quality levels as high as possible (JPEG Fine with JPEG Compression set to Optimal Quality if the camera supports it).
    • You work on a small number of photos after the fact and like to make them as good as possible: 
      • Shoot NEF+JPEG (preferably uncompressed and the largest possible bit-depth). Technically, you can pull out the embedded JPEG on any raw file, but you're probably not interested in the extra workflow.
      • Set sRGB in the camera, but run your post processing software in AdobeRGB or ProPhotoRGB (NEFs don't have a color space until the software converter assigns one, which by default it takes from the camera setting). 
      • Absolutely no highlight or channel blowouts in your exposure. You're going to change the image's tonality in your software (see below), but you want an optimal set of data to start with. Make sure that you've set a white balance that is close to right (if not absolutely right) before evaluating exposure, as the histogram uses that.
    • You process all your images: 
      • Shoot NEF at its highest quality (bit-depth and lossless compression).
      • Set AdobeRGB in the camera and use UniWB with a flat curve to get the most honest possible histogram. Or: set Neutral Picture Control and use Zebras to assess highlights.
      • Again, no highlight or channel blowouts in the exposure.
      • Shoot a reference shot at the beginning of every sequence (e.g. gray card, or something more sophisticated) or whenever the light changes.
  • Getting images to the computer is a critical step. The camera makers all pretty much gloss over the step of how your images go from your camera to the computer. They give you a simplified view of the world in their manuals that will almost certainly cause you trouble later on. 
    • Use a card reader. Preferably a fast one. You won't tax your camera's battery (it has a finite number of recharges available), you won't trip over the cable and pull your camera off your desk (don't laugh, it happens more than you think), the reader is faster, and the reader allows you to do image recovery more easily.
    • Use a transfer program that renames files. If file sequence numbering is turned off in your camera (the default on Nikon cameras until the D300, despite the fact that I was complaining about this for eight years prior), you'll end up with lots of files with the same DSC_0001.xxx name. Even if you do turn file sequence numbering on, the standard naming practice means that file names start repeating after 9999 shots. Sounds like a lot, but you'll be surprised at how fast you get to that figure. Computers don't like things with the same filename and sometimes will try to overwrite the file if you put the new one with the same name in a folder with the old one (yes, you would typically get a warning, but most people are awfully fast with the Return or Enter key). Modern OSes tend to just duplicate the name (as in DSC_0001.xxx(1)). That's still not helpful. (A minimal rename-on-transfer sketch follows this list.)
    • Decide on a folder structure and filenaming protocol and stick to it. Some photographers use PLACE_DATE_# for their filenames, some use DATE_TIME_PLACE_#, and some use really elaborate naming schemes. Personally, I like names that tell me what and when (the number then tells me how many of those I have). Likewise, you need to put things in multiple folders (some file systems have limits to the number of files that can be in a folder, plus you don't want 100,000 images in one folder to have to browse through if you have to do it manually). I like folder names that tell me what and when (INT_CHILE_PAT_FEB08 for example). Even though operating system searches have gotten sophisticated and fast, I can often find what I need faster just by looking at folder names.
    • Watch for multiple folders on the card! Especially if you move cards between cameras, but even if you only have one camera, this is a gotcha that sometimes causes you to lose images because you don't transfer images out of the second (or third or fourth or whatever) folder when it gets generated and used (happens when you're taking lots of images, as there is either a 999 or 5000 max to each folder, depending upon camera). 
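    Here's a minimal Python sketch of what such a rename-on-transfer step might look like, using a PLACE_DATE_# style name. The card path, destination folder, and PLACE tag are placeholders, and a real transfer tool would read the capture time from EXIF rather than the file's modification time:

      # rename_on_transfer.py -- a minimal rename-on-transfer sketch, not a full
      # transfer tool. Paths and the PLACE tag are placeholders; a real workflow
      # would use the EXIF capture time instead of the file's modification time.
      from datetime import datetime
      from pathlib import Path
      import shutil

      CARD = Path("/Volumes/CARD/DCIM")                            # placeholder card mount point
      DEST = Path("~/Pictures/INT_CHILE_PAT_FEB08").expanduser()   # folder named by what and when
      PLACE = "CHILE"                                              # placeholder shoot tag

      DEST.mkdir(parents=True, exist_ok=True)

      # Walk every folder on the card (cameras can create 100NIKON, 101NIKON, etc.),
      # so nothing sitting in a second folder gets missed.
      for n, src in enumerate(sorted(CARD.rglob("*.NEF")), start=1):
          stamp = datetime.fromtimestamp(src.stat().st_mtime).strftime("%Y%m%d_%H%M%S")
          target = DEST / f"{PLACE}_{stamp}_{n:04d}{src.suffix}"
          if target.exists():                # never silently overwrite an existing image
              raise FileExistsError(target)
          shutil.copy2(src, target)          # copy, don't move; verify before formatting the card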
  • Most bang for the buck tool to learn: Curves. Many of Photoshop's (and Lightroom's) tools actually are just different ways of controlling Curves. Shorthand version: steeper slope = more contrast in that data region, flatter slope = less contrast. An "S" curve (flat, steep, flat) increases visible contrast in the midtones. A "Z" curve (steep, flat, steep) brings up shadow detail and increases contrast in the highlights. (A rough sketch of what a curve does to tonal data follows this list.) 
    • The buzzword is "local contrast." The new Clarity slider in Adobe Camera Raw and Lightroom is about local contrast. Changing Curves is often about local contrast. Ansel Adams was all about local contrast. What the heck is local contrast? The simplest explanation is: making sure that close tonal values are split apart enough to be distinguished visually. I have a Photoshop Action, for example, that splits my image into ten tonal ranges (a la Adams' Zone System) and then allows me to work on each range individually. Add-ons such as Greg Benz's excellent Lumenzia have more complex and interesting ways of letting you do the same thing. But the bottom line is this: in your image there will be two things: continuous tonal ramps (such as skies), whose internal contrast you don't want to mess with, and detail, whose contrast you do want to pull out so that it is seen. Spend your time learning this distinction and how to control it and your post processing will improve amazingly.
    • 12 bits is still not necessarily enough. The number of tonal values for each channel is 256 for 8-bit, 4096 for 12-bit. That still may mean that you have multiple stops of information in only a handful of bits. For example, the bottom two stops of your exposure are probably recorded using only 6 bits (64 possible values). A 14-bit raw file would give you 16,384 possible values per channel, and those bottom two stops may now have 256 or perhaps 512 possible values. Fortunately, most raw converters move 12-bit data into the top of a 16-bit value, which helps with this problem. But in general, always pick more bits over fewer if you can, as it delays the inevitable data reduction issues.
    • Nikon's Compressed NEF compromises highlight bits; Sony's compressed ARW files can also compromise highlight/shadow transitions. Nikon claims that the old Compressed NEF format (since the D3 and D300, the high-end cameras have also supported lossless compression) is visually lossless. Basically, Nikon's Compressed NEF plays off our eyes' inability to resolve small differences in bright areas by throwing away some information. If you can't distinguish between a value of 14,230 and 14,238, why store values of 14,231 through 14,237? In practice, this works without penalty until you make huge changes to highlight data. Where I see small, resolvable differences is in something like wedding dress detail after large amounts of post processing and sharpening are applied. But in general, you can shoot Compressed NEF without much worry. If you're a perfectionist like me, though, you'll want to avoid Compressed NEF when you can.
      Meanwhile, Sony's method of compressing their raw files can lead to JPEG-like mosquito artifacts on highlight/shadow boundaries. Avoid it if you can, though Sony's file sizes are insanely large, even when compressed.
    • Yes, all of the above items have contrast implications. Here's the thing that ties it all together: contrast is easy to add after the fact. Once you boost contrast, however, what you've really done is throw away data. Once data is thrown away, it is incredibly difficult to imitate (you can't resurrect it) after the fact. So I always want to start with as much data as possible, recording as much of the small tonality differences as possible. In camera-setting terms, that's LOW contrast for JPEGs, or the highest possible bit depth for raw.
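    To make the Curves and bit-depth points concrete, here's a rough Python sketch (numpy assumed) of what an "S" curve does to tonal data, including the shift of 12-bit raw values into the top of a 16-bit range that most converters perform. A smoothstep function stands in for a hand-drawn curve:

      # s_curve.py -- a rough sketch of what a Curves adjustment does to tonal data.
      # The smoothstep function stands in for a hand-drawn "S" curve; real Curves
      # tools interpolate through user-placed points instead.
      import numpy as np

      def twelve_to_sixteen(raw12):
          # Shift 12-bit values into the top of a 16-bit range (value * 16),
          # as most raw converters do before further processing.
          return raw12.astype(np.uint16) << 4

      def s_curve(img16):
          # Steeper slope in the midtones = more contrast there; flatter slope
          # in shadows and highlights = less contrast there.
          x = img16.astype(np.float64) / 65535.0     # normalize to 0..1
          y = x * x * (3.0 - 2.0 * x)                # smoothstep: a classic "S" shape
          return np.clip(y * 65535.0, 0, 65535).astype(np.uint16)

      # Tiny demonstration: midtone values get pushed apart, extremes get compressed.
      tones12 = np.array([256, 1024, 2048, 3072, 3840], dtype=np.uint16)   # 12-bit samples
      print(s_curve(twelve_to_sixteen(tones12)))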
  • Sharpen twice, perhaps more. You have conflicting sharpening goals that you need to understand: 
    • Digital capture creates aliased or anti-aliased edges. An out-of-camera image from a camera with an AA filter (especially with no sharpening applied in-camera) always looks a bit soft due to the softening consequences of the AA filter and the sampling/demosaic process. Don't panic, just continue reading ;~). If you've got a camera without an AA filter, things will be sharper, but you may have aliasing artifacts (stairsteps) and you will still have a low level of anti-aliasing due to the digital sampling.
    • Always perform a "capture sharpening." The simplest definition of capture sharpening is enough sharpening to remove the visual softness at edges without adding visible artifacts. Generally, this means low Radius, moderate Threshold, and slightly high Amounts (e.g. something like 0.5, 4, 200). There's no one answer here, as each camera is a bit different, each ISO value will be a bit different, each raw converter will be a bit different, and even different subjects require different values (I wouldn't sharpen skin tones on a portrait this way, only hair and eyes and clothing). Personally, I prefer some of the third-party sharpening tools over a basic Unsharp Mask for capture sharpening, as they are better tuned to the problem. (A minimal sharpening sketch follows this list.)
    • Selective sharpening is another tool. In portraits, for example, the eyes should be very sharp, the hair sharp, and the skin tones not so sharp. That implies selecting different areas for different sharpening amounts. Indeed, that's exactly how most pros operate in post processing.
    • Don't perform any additional sharpening until you know what the output is. Different papers and print technologies have different ink spreads, which can often cover edge halos, allowing you to do more aggressive sharpening. When I output to my Epson with the paper I typically print on, I'm often using a Radius of 1.3 to 1.5, sometimes a bit more. On the other hand, when I downsize images for the Web, I do something different: I downsize to double the size I'm going to eventually produce, sharpen very aggressively, then downsize to the final size and don't sharpen that (see the sketch after this list).
    • I can't tell you how to set your sharpening tools, only your eyes can. The bottom line is that you need to know what artifacts the various sharpening tools can produce and learn to look for them. Neither capture sharpening nor output sharpening should produce visible artifacts. The former shouldn't produce visible artifacts in your image file, the latter shouldn't produce visible artifacts in your output.
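    As a starting point, here's a minimal Python sketch of both capture sharpening and the Web-output sequence described above, using Pillow's UnsharpMask. The filenames, sizes, and values are placeholders that simply echo the starting points above, not a universal recipe:

      # sharpen_sketch.py -- minimal sketches of capture sharpening and of the
      # Web-output sequence described above, using Pillow. Filenames and values
      # are placeholders; tune per camera, ISO, converter, subject, and output.
      from PIL import Image, ImageFilter

      # Pillow's filters want an 8-bit RGB image; a real 16-bit workflow needs a
      # tool that preserves 16-bit data through this step.
      img = Image.open("converted.tif").convert("RGB")   # placeholder file from your converter

      # Capture sharpening: low Radius, moderate Threshold, slightly high Amount.
      # UnsharpMask takes radius (pixels), percent (the "Amount"), and threshold.
      captured = img.filter(ImageFilter.UnsharpMask(radius=0.5, percent=200, threshold=4))
      captured.save("capture_sharpened.tif")

      # Web output: downsize to double the final size, sharpen aggressively,
      # then downsize to the final size with no further sharpening.
      final_w, final_h = 800, 533                        # placeholder final Web size
      doubled = captured.resize((final_w * 2, final_h * 2), Image.LANCZOS)
      doubled = doubled.filter(ImageFilter.UnsharpMask(radius=1.0, percent=300, threshold=2))
      doubled.resize((final_w, final_h), Image.LANCZOS).save("web_final.jpg", quality=90)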
  • Despite what I wrote earlier, there's no need to shoot RAW+JPEG. Two points: If you're shooting a Nikon you've already got a JPEG BASIC file embedded in the raw file, and all you need is extraction software to get it (a minimal extraction sketch follows these bullets); and most good software allows you to quickly batch out JPEGs from your raw files. Someone using Aperture, Lightroom, or Bridge/Photoshop in their workflow really shouldn't worry about having a JPEG copy of the photo around. But… 
  • They're handy for client preview. It's always nice to be able to hand a client preview or for-position-only images immediately after the shoot.
  • They do give you a target. If you're new to raw file conversion, seeing a JPEG that was shot with the correct camera settings gives you a baseline to look at for your conversion.
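  If you do want to pull the embedded JPEG out of a NEF, here's a minimal Python sketch that shells out to exiftool (assumed to be installed); the JpgFromRaw tag holds the full-size preview on most Nikon bodies, though tag names can vary by camera:

      # extract_preview.py -- a minimal sketch of pulling the embedded JPEG out of
      # a NEF with exiftool (assumes exiftool is installed; the JpgFromRaw tag
      # holds the full-size preview on most Nikon bodies, but tags vary by camera).
      import subprocess
      from pathlib import Path

      nef = Path("DSC_0001.NEF")                         # placeholder filename
      jpg = Path(nef.stem + "_preview.jpg")

      result = subprocess.run(
          ["exiftool", "-b", "-JpgFromRaw", str(nef)],   # -b dumps the tag as raw binary
          capture_output=True,
          check=True,
      )
      jpg.write_bytes(result.stdout)
      print(f"Wrote {jpg} ({len(result.stdout)} bytes)")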
  • All raw converters are not equal. Until the many updates to Adobe Camera Raw (ACR) starting with CS6, I never really liked the Adobe conversion for Nikon raw files (though I did like it for my Canon raw files; go figure, most of Adobe's early ACR team shot Canon ;~). The current version is better, but still not as good as Capture NX-D is, in my opinion. CaptureOne and DxO have a different look than both of those, which some prefer. As do I. But...
    • Converters are a moving target. In the twenty+ years I've been seriously shooting digital, I've gone through seven iterations of Capture, ten major iterations of ACR, and multiple iterations of every other converter. The good news is that each generation seems to do a better job, even with the files I shot years ago. The bad news is that "the best converter" isn't a stationary target.
    • Converters are doing more than conversion. The craze lately has been to add unseen noise reduction algorithms and other processing tools to raw converters. A setting of 0 for noise reduction doesn't always mean that no noise reduction is being applied!
      Meanwhile, many converters now allow you to "correct" for chromatic aberration, linear distortion, and vignetting. Just remember that you move away from your original data values with most of these tools, and are really then manipulating the conversion. Few are doing their correction during the demosaic, meaning that this is nothing more than just another post processing trick. See my comment about bits, above. Tools that apply after the demosaic to bit-limited data (e.g. in the shadows) have a way of making for visible artifacts.
    • The most popular converter may not be the best. Every Nikon shooter I know who's tried RPP (Mac only, unfortunately, and it stopped being updated after the D810) is amazed to find that there was more detail in their raw files than they thought. Indeed, all kinds of interesting and obscure technologies lurk at the margins. Instead of Unsharp Masks there are deconvolution routines, for instance (see the sketch below). One reason why the mainstream converters don't always extract the most from a file is simple: performance. Some of the algorithms RPP and other specialized converters use simply take a lot of computer horsepower to execute. 99% of users don't want to wait for anything. So you can guess what happens: mainstream products move to faster algorithms that may not be perfectly optimal. So it pays to take a look at some of the more geeky and peripheral approaches to conversion if you're a perfectionist.
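    For the curious, here's a rough Python sketch of the deconvolution idea using scikit-image's Richardson-Lucy routine. The small Gaussian kernel is only a stand-in for the real (unknown) blur of the lens, AA filter, and demosaic; estimating that blur well is where the real work, and the computing time, goes:

      # deconvolve.py -- a rough sketch of deconvolution sharpening, the technique
      # some niche converters use instead of an Unsharp Mask. Assumes scikit-image;
      # the Gaussian PSF is only a stand-in for the real blur of lens + AA filter.
      import numpy as np
      from skimage import img_as_float, io
      from skimage.restoration import richardson_lucy

      image = img_as_float(io.imread("capture.tif", as_gray=True))   # placeholder grayscale test file

      # Build a small Gaussian point-spread function as the assumed blur kernel.
      size, sigma = 7, 1.0
      ax = np.arange(size) - size // 2
      xx, yy = np.meshgrid(ax, ax)
      psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
      psf /= psf.sum()

      # Iterative deconvolution: more iterations pull out more detail, but also
      # amplify noise and take longer -- the performance trade-off noted above.
      restored = richardson_lucy(image, psf, 30)

      io.imsave("deconvolved.tif", (np.clip(restored, 0, 1) * 65535).astype(np.uint16))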
  • Raw files do not have a Color Space. You do need to set one on your camera, but Color Spaces are arbitrary and smaller-than-what-is-captured definitions of the available colors with which to paint pixels. Your digital camera only captures light as it is, which has no restrictions. (True, the Bayer dyes may impart some small device-specific differences, but in practice these are minuscule compared to the Color Space definitions we tend to use, and we can always profile them.) Lightroom and Aperture made the wise choice to ignore the camera-set Color Space and use much larger Color Spaces in which to do their calculations (e.g. ProPhoto RGB). Using a larger Color Space helps with all those rounding and posterization and data errors that happen when you make post processing choices. But… 
    • Color Spaces eventually have to be set for your output device. If your output is for the Web or many print labs, in the end you have to convert your image to sRGB, the lowest common denominator Color Space. (A minimal conversion sketch follows this list.) 
    • Color Spaces and color management are one of the most misunderstood aspects of digital imaging. It's easy to set things wrong. It's easy to put images into the wrong Color Space. It's easy to have device differences that aren't controlled by ICC profiles (essentially deviation charts from the Color Space definition). Don't blame your raw files for not having the right color. Remember, they don't store color and are neutral in all this. And your camera is better at capturing color than sRGB and AdobeRGB can render. Thus, color issues you see in your output mean you did something wrong at some point in the chain from conversion to output.
    • Fujifilm S3 and S5 Pro shooters using extended D-RANGE should consider slightly overexposing their RAFs. The SuperCCD SR used in those cameras has two light detection mechanisms, and modest overexposure is easily recovered in the latest Adobe products. Basically you can't overexpose more than one channel by two stops or one channel by more than three. Experiment to find what you're comfortable with. If you only use the Fujifilm converter, you're much more restricted, as it only allows you to pull back one stop worth of exposure.
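    Here's a minimal Python sketch of that final convert-to-sRGB step using Pillow's ImageCms. The ProPhoto profile path is a placeholder for whatever wide-gamut profile your converter embedded; ideally you'd read the profile from the image file itself:

      # to_srgb.py -- a minimal sketch of converting a wide-gamut master to sRGB
      # for the Web or a lab, using Pillow's ImageCms (littleCMS). The ProPhoto
      # profile path is a placeholder; ideally use the profile embedded in the file.
      from PIL import Image, ImageCms

      src = Image.open("master.tif").convert("RGB")                    # placeholder wide-gamut master
      prophoto = ImageCms.getOpenProfile("/path/to/ProPhotoRGB.icm")   # placeholder profile path
      srgb = ImageCms.createProfile("sRGB")

      # Perceptual intent is a common choice when squeezing a big gamut into sRGB.
      web_ready = ImageCms.profileToProfile(
          src, prophoto, srgb,
          renderingIntent=ImageCms.Intent.PERCEPTUAL, outputMode="RGB",
      )
      web_ready.save("web_ready.jpg", quality=90,
                     icc_profile=ImageCms.ImageCmsProfile(srgb).tobytes())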

Thom's Quick Recommendations (all you really need to know):

  • Shoot RAW whenever possible. It's the best possible data your camera can capture. Converting raw images is relatively painless these days.
  • Don't fret about compression. Nikon's lossy NEF compression is cleverly designed and can only make a visible difference in extreme cases. Ditto with Sony's compression. In practice most people can't see it. Meanwhile, Nikon's lossless NEF compressions are just that, lossless.
  • Set your camera correctly for the conditions. The preview image and histograms are calculated based upon the embedded JPEG, which uses camera settings. If you value the information those provide, set the camera right! If you're a perfectionist, use UniWB with a flat curve.
  • Use the biggest Color Space in conversion. Lightroom has it correct: if quality is your goal, use ProPhoto RGB with your raw files. Even AdobeRGB is smaller than what your camera captured. Yes, this means you have to convert down to a smaller Color Space for most output. It's still worth it.
  • Stay in 16-bit during all post-processing. All the converters pack their 12-bit or 14-bit data into a 16-bit value if it's passed to Photoshop or another program (and you can usually save as 16-bit TIFF). Only reduce to 8-bit after all processing is done. The only thing you should do in 8-bit is add an output-specific sharpening.
  • Try multiple converters. Most have a free demo you can try. Different workflows work well for different people. Find the one you like, not the one I recommend.
  • Own multiple converters. I often use Lightroom or Adobe Camera Raw for quick conversions due to the ease with which that can be done and the ability to batch process very fast. But for troublesome images or ones that I'm trying to tweak to their best, I may use Capture NX2 or RPP, simply because I believe I can get better final results out of them.
  • Keep up with the state of the art. Converters change every year. Capture NX-D, Adobe Camera Raw, DxO PhotoLab, CaptureOne, and the others have gotten better with each iteration. Thus, if ultimate quality is your goal, you need to resample converters every year or two.