More Questions Asked (and Answered)

"Will Nikon introduce a global shutter camera?"

Yes.

The only real question is "when?" 

I've been following Nikon patents for three decades now, particularly ones pertaining to digital imaging. Nikon has quite a few patents in the global shutter arena, one of the most interesting being a hybrid global/rolling shutter patent (just resurfaced by Asobinet in Japan). In that updated patent, the original of which is now eight years old, Nikon outlined how to apply a global shutter to areas of the frame with motion while using a rolling shutter for areas without motion. In essence, that particular patent tries to mitigate the downsides of both types of electronic shutter simultaneously. To put it simply, an AI autofocus system would inform the sensor as to where to place the global shutter regions.
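The basic readout idea can be sketched in a few lines. This is purely illustrative, not anything from the patent text itself: all names are invented, and I'm assuming (for simplicity) that the subject-detection system supplies a per-row motion flag that a readout controller groups into contiguous global- or rolling-shutter spans.

```python
from itertools import groupby

def plan_readout(motion_rows: list[bool]) -> list[tuple[str, int, int]]:
    """Group contiguous sensor rows into (mode, first_row, last_row) spans:
    'global' readout where the AF system reports motion, 'rolling' elsewhere."""
    plan = []
    row = 0
    for has_motion, group in groupby(motion_rows):
        n = len(list(group))
        plan.append(("global" if has_motion else "rolling", row, row + n - 1))
        row += n
    return plan

# Example: a distant bird occupying rows 2-4 of a (toy) 8-row sensor.
print(plan_readout([False, False, True, True, True, False, False, False]))
# → [('rolling', 0, 1), ('global', 2, 4), ('rolling', 5, 7)]
```

The running-back scenario is the degenerate case here: every row flags motion, the plan collapses to a single global span, and the hybrid approach buys you nothing over a plain global shutter.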

It's unclear to me just how well this hybrid approach would work in practice. A bird at a distance that impacts only a small part of the frame seems like a good candidate for such a system, but the 250-pound running back filling my frame and about to plow into me doesn't. Moreover, there's the issue of near-static subjects with flash. What I don't want is a system that in some cases gives me the worst of both rolling and global shutter artifacts. I'd rather just have a global shutter with reduced dynamic range (see next question).

Overall, Nikon is taking small steps toward global shutter. The Z8/Z9 stacked sensor and the Z6 III partially-stacked sensor try to bridge the gap, and do a pretty good job of it. Still, I can see Nikon taking additional steps forward and eventually producing a fully global shutter sensor. It's really just a matter of time. It's also clear that Nikon's sensor development team is actively pursuing a number of different approaches, which is what I believe they should be doing.

"It seems that all the new cameras being introduced are tending to go backwards on dynamic range. Is that now something we need to expect?"

Maybe. What happened in APS-C and full frame is that somewhere around peak camera (2011/2012) we hit a point where image sensors did an excellent job of rendering the randomness of photons without adding any real extraneous digitally-caused noise. Since then we have not had a sensor technology advance that would do a better job of capturing the randomness of photons. I get slightly conflicting estimates from different engineers about what is possible using current photosite designs, but in no case would that give us more than about a stop more dynamic range from where we were. In essence, efficiency is the primary remaining barrier, and the problem there is that Bayer filtration is now the biggest single reduction of efficiency.

Sensors shifted from trying to provide more dynamic range to providing more speed. For instance, the original 24mp full frame sensor used in the D600 was remarkably good at dynamic range, with about 11.5 stops of useful range at base ISO (100). The current Z6 III 24mp sensor trails the D600 by about a stop at base ISO, though with the dual gain bump it does ever so slightly better starting at ISO 800 (and retains that advantage, without resorting to scaling or noise reduction, through a much larger range of ISO values).
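"Stops of dynamic range" here is just a base-2 logarithm of the ratio between the brightest signal a photosite can hold and its noise floor. A quick sketch, using illustrative electron counts I've picked to land near the ~11.5-stop figure above (these are not measured values for any specific Nikon sensor):

```python
import math

def dynamic_range_stops(full_well_e: float, read_noise_e: float) -> float:
    """Engineering dynamic range in stops (EV): the log2 ratio of
    full-well capacity to the read-noise floor, both in electrons."""
    return math.log2(full_well_e / read_noise_e)

# Illustrative values for a full-frame 24mp-class photosite:
print(f"{dynamic_range_stops(full_well_e=90000, read_noise_e=30):.1f}")
# → 11.6 (stops)
```

Dual conversion gain works on the denominator: switching to the high-gain readout path (around ISO 800 in the Z6 III's case) lowers the effective read noise, which is why usable dynamic range can tick up slightly at that point even though well capacity shrinks as ISO climbs.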

What changed between the D600 and Z6 III is speed, both within and pushing data off the sensor. Electrons are pesky little devils, and if you heat them enough and move them rapidly, they start generating forms of digital noise, and that noise starts impinging on useful dynamic range.

My answer to the question is "maybe" for multiple reasons: (1) process size reduction has an impact, and most image sensors aren't close to what's possible in that regard (CPUs and SoCs dominate the fabs capable of really small process sizes); (2) on-chip noise reduction—Canon uses this approach—and other techniques can mitigate the digital noise; and (3) it's possible to imagine other approaches (if I were a betting man, I'd guess professor Fossum's QIS approach will come to fruition before any boundaries are broken with traditional CMOS sensors).

The real problem is simple, though: lack of dedicated camera sales volume restricts large sensor R&D. Mobile devices, autos, and security systems are where the image sensor volume is, and thus breakthroughs will happen there first, as the cost of making a big tech change can be spread across more units. Large sensor (APS-C, full frame) cameras get more iterative approaches now as those cost less to produce. Most of that iterative work has centered on speed, as the current goal seems to be trying to get to a true global shutter sensor without compromising the dynamic range further. 

Ironically, dedicated camera users are now dependent upon two outliers from pure photography: video and smartphones. Video has been driving the speed issue, and don't forget that speed also gives you a better viewfinder and focus system. Meanwhile, smartphones are driving additional improvements that don't get talked about a lot, such as reducing crosstalk.


bythom.com: all text and original images © 2024 Thom Hogan
portions Copyright 1999-2023 Thom Hogan
All Rights Reserved — the contents of this site, including but not limited to its text, illustrations, and concepts,
may not be utilized, directly or indirectly, to inform, train, or improve any artificial intelligence program or system. 
