Still No Global Shutter?

I’ve been watching image sensor technology very carefully for decades. With the Nikon Z9 going (mechanical) shutterless over a year ago, the question I’ve been asking myself lately is this: why haven’t global shutter sensors made it into at least some of our cameras?

The simple answer is that the Japanese camera companies are afraid to go backwards. 

By that I mean that, in today’s best-case situation, a global shutter sensor is anywhere from one to two stops behind a rolling shutter sensor. In other words, in terms of image quality, using a global shutter sensor would be more like using ISO 200-400 on one of today’s base ISO 100 sensors. The camera makers are basically afraid of the word “noise.” (“Noizu wa heranakereba naranai”: noise must be reduced.)

I see global shutter sensors being useful in three basic cases:

  • Video — Particularly video that is action based, whether that be the camera or the subject that’s moving. No leaning verticals, no curved helicopter blades, no inter-frame distortions that can be very noticeable.
  • Action stills — With sports and wildlife, the typical rolling shutter artifact is an extremity that distorts (foot, hand, wing tip, etc., that’s moving faster than the rest of the subject). 
  • Consumer — Bet you didn’t expect that, right? It’s the removal of a costly, complex part (shutter) that comes into play here. But wait, what about image quality? Just do what the smartphones do: integrate multiple frames to reduce noise.
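The smartphone trick mentioned in that last bullet is worth making concrete. Averaging N frames reduces random noise by roughly the square root of N, so stacking four short exposures recovers about one stop of signal-to-noise, and sixteen recovers about two, which is exactly the deficit described above. A minimal sketch, using made-up sensor numbers purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a flat gray patch from a hypothetical noisy global shutter
# sensor: a true signal of 100 electrons plus Gaussian read noise.
SIGNAL, READ_NOISE = 100.0, 8.0

def capture():
    """One noisy 64x64 frame of the flat patch."""
    return SIGNAL + rng.normal(0.0, READ_NOISE, size=(64, 64))

single = capture()
stacked = np.mean([capture() for _ in range(4)], axis=0)  # average 4 frames

# Noise (standard deviation) drops by roughly sqrt(4) = 2x,
# i.e. about one stop of SNR recovered.
print(round(single.std(), 1))   # prints a value near 8
print(round(stacked.std(), 1))  # prints a value near 4
```

The catch, of course, is motion between frames, which is why the smartphone makers pair the stacking with alignment hardware and software; a global shutter actually makes that alignment easier, since every frame is geometrically consistent.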

The problem with engineering in Japan is that it tends to proceed linearly: improve a low-level image sensor characteristic, for example. Apple, Qualcomm, and others thought non-linearly: fix any poor sensor characteristic with an adjunct part, one that also provides additional capability.

Back when I was consulting for a non-Japanese camera company early this century, one of the constant discussions we had was “how do we fix the user problem?” Not the engineering problem, but the user problem. 

Of course, user problems become engineering problems, but it’s turning the direction you’re thinking around that often leads to the new discovery and a very different product design. When I first started working with digital camera designs in the early 1990’s and up through the last direct consulting I did over a decade later, the engineers were always saying “you can’t do that, because…” They’d point to specific data, like signal-to-noise ratios, diffraction Airy disc size, and so on. I tended to ignore those protests because throughout my involvement in tech back through the 70’s, the answer always came from stating the user problem. How you engineered towards solving that usually required that old cliché of “thinking outside of the box.” 

In the case of global shutters, the box is noise. Global shutters are noisier than rolling shutters, and thus can’t replace current offerings, because you’d be going backwards in a key data point that engineers, and many customers, are measuring. Yet the image sensor is just a sampling device; it’s not the final data (the digital numbers, or pixel values, in a photo file). What else can you apply to get better noise values in the final results? Apple can tell you. Google can tell you. Qualcomm can tell you. I could have told you back in 1994. 

Why could I have told you back in 1994? Because in the QuickCam we were using one of the first CMOS image sensors, and at the time CMOS had exactly the same problem that global shutters have today: it was clearly noisier than the established digital image sensor technology (CCD). We had dozens of “it’s not as good” issues with those TI image sensors we were using (first monochrome, then color). We solved every one of them with a “think different” approach. 

The irony is that the Japanese are selling global shutter image sensors today. Just not to you and me. In a number of industrial uses there’s a value to linear integrity over noise. So that’s where the global shutter energy at the sensor makers has been directed. 

At some point, some clever out-of-the-box thinker will look at today’s global shutter sensor technology and figure out exactly how it could be deployed in a camera. Ironically, I believe that might end up being a consumer camera first, not a high-end video one. Note that the smartphones didn’t start out with the best image sensors, yet still won the use case wars ;~).

Part of the blame goes to the media. Virtually every review mentions dynamic range these days without truly examining what it is, how it’s really applied, and what real, tangible use 13 versus 11 stops might have. Remember, the output devices we’re currently using to view our images don’t have that same range, and certainly not if you print on paper (though some video HDR-type solutions now benefit from more input range). 
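For those keeping score at home, the “stops” everyone quotes are just the base-2 logarithm of the ratio between the sensor’s full-well capacity and its noise floor. A quick calculation with hypothetical, purely illustrative numbers shows how a noisier readout eats dynamic range:

```python
import math

def dr_stops(full_well_e: float, noise_floor_e: float) -> float:
    """Engineering dynamic range in stops: log2(full well / noise floor)."""
    return math.log2(full_well_e / noise_floor_e)

# Hypothetical sensors with the same full well but different read noise
# (the numbers are made up for illustration, not measured values):
rolling_dr = dr_stops(full_well_e=60000, noise_floor_e=3.0)   # quiet rolling shutter
global_dr = dr_stops(full_well_e=60000, noise_floor_e=12.0)   # noisier global shutter

print(round(rolling_dr, 1))  # 14.3
print(round(global_dr, 1))   # 12.3
```

Note that quadrupling the noise floor costs exactly two stops here, which is why the spec-sheet comparison looks so damning even when the difference is invisible on any output device you actually own.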

If a camera company were to come out with a global shutter camera that had a maximum of only 9 stops of dynamic range (compared to current top ends of near 12 stops), every media outlet in the world (I hope not this one) would shoot it down immediately. But wait: a Nikon D6 only hits about 9 stops of usable dynamic range ;~), and that didn’t stop a lot of pros from adopting it and finding that it did a remarkably good job for them. 

So I’m stumped at why the camera makers haven’t yet truly considered making a global shutter camera for us. It’s more than possible. It would have value. The drawbacks of the fast read (global read of all data) can be overcome. 

all text and original images © 2023 Thom Hogan
portions Copyright 1999-2022 Thom Hogan -- All Rights Reserved