I'm transitioning all of my News/Views into a new format. For the time being, news will continue in this digest form on this site. Enjoy. (p.s. If you're interested in potentially subscribing to my new offerings, be sure to click here to receive further updates as that idea gets developed and closer to launch.)
--------------------
LEDE ON
CES supposedly stands for Consumer Electronics Show, and last week we just had that annual extravaganza. Curiously missing were cameras. Unlike many previous years, we saw no new photography-related product launches from Canon, Nikon, or Sony. Nikon didn’t even have a booth. Curious about this, I decided to look up what CTA, the association that runs the show, says is covered at CES. Sure enough, no mention of imaging. Video is mentioned, but not imaging. I guess we consumers have moved on from imaging. Meanwhile, not-so-consumer topics such as AgTech, Quantum, and Space Tech seem to have been added. Not that anything particularly interesting came out of this year’s CES. Unless you consider things like adding AI to Caterpillar tractors interesting. I guess we’re stuck with our 4K televisions and current cameras now.
——————————
Explainer
Triple Stacked Sensors
You’re probably seeing articles that mention triple stacked image sensors, specifically in relation to Apple and Samsung. You might think that means something akin to the Foveon sensor, where three layers of silicon are examined to find (near) R, G, and B values for a single pixel. It doesn’t.
Triple stacked refers to PD-TR-Logic. The top-most portion (PD) is the photon detection and conversion to charge. The second portion (TR) is transport. This is still a bit of a nebulous concept, as the patents show different definitions of what this layer can/should do. In its simplest form, it’s simply moving the charge out of the PD layer. Technically, a simple “stacked image sensor” as we have in Sony and Nikon cameras has a transfer layer in it, though at the moment that is merely the connections between two independently created layers. In more complex forms being considered, TR is actually a communication portal between the PD and Logic layers. For instance, Nikon has demonstrated image sensors that let the logic layer make changes to how the PD layer operates in 16x16 pixel blocks, though that sensor has not appeared in production yet.
The Logic layer is what it sounds like: this is where you start doing computational things with the data moving downwards in the stacks.
Many of the early posts about future Apple/Samsung chips make nonsensical claims about being “better than Sony Exmor.” I’m sure that we’ll get image sensors in the future—all this triple stack discussion is about the future—that are better than past image sensors. However, the commentators making such claims don’t seem to understand where current Exmor lives. The partial stacked sensors now coming out of Sony Semiconductor are actually a side path of the idea driving a straight triple stack: let the PD do its job, and have on-chip methods of moving that data as fast as possible into the partial stack, which is essentially a logic addition melded to the basic image sensor via a unique method that Nikon Precision seems to have originated. It’s not stacked behind the PD layer; it’s mounted as a partial layer on top.
Two somewhat competing things are happening with traditional image sensor development at the moment. The first is the pursuit of bandwidth: speed up everything that happens after the actual photon conversion (starting with the TR layer). The second is the pursuit of control/computation (the Logic layer). What’s driving this is not still photography (we don’t really need full image data at >30 fps); it’s autofocus and video. Current top cameras are sub-sampling focus streams at 120 fps, while video is pushing so much data now (e.g. 8K/60P) that rolling shutter becomes an issue due to bandwidth constraints.
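To put rough numbers on why bandwidth becomes the choke point, consider the raw readout rate 8K/60P demands; the 12-bit readout depth here is an assumption for illustration, not any specific camera's spec:

```python
# Rough, illustrative math only: raw readout bandwidth needed for 8K/60P.
width, height = 7680, 4320      # 8K UHD frame
fps = 60
bits_per_pixel = 12             # assumed raw readout depth

pixels_per_second = width * height * fps
raw_bits_per_second = pixels_per_second * bits_per_pixel

print(f"{pixels_per_second / 1e9:.2f} Gpixels/s")             # ~1.99 Gpixels/s
print(f"{raw_bits_per_second / 1e9:.1f} Gbit/s of raw data")   # ~23.9 Gbit/s
```

If the sensor can't move that firehose of data off the PD layer quickly, the top and bottom of each frame get sampled at noticeably different times, which is exactly what shows up as rolling shutter.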
Basically, image sensors haven’t been changing much in the PD section, which is one reason why you don’t see a lot of movement in dynamic range. But we have seen changes at the “edge” of PD, such as the dual gain output in the partial-stacked image sensors, which combines a second ADC read with faster bandwidth to get that data. As long as we stick to the traditional Bayer-type PD use, it’s the things that happen post photon conversion that will change.
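If you haven't encountered dual gain before, here's a minimal sketch of the general idea; the gains, ADC depth, and merge rule are all illustrative assumptions, not any manufacturer's actual circuit:

```python
import numpy as np

# Illustrative sketch of dual gain: read the same charge at two conversion
# gains, trust the high-gain read in the shadows, and fall back to the
# low-gain read where the high-gain read is near clipping.
adc_max = 16_383                      # 14-bit ADC full scale
hi_gain, lo_gain = 1.0, 0.25          # DN per electron (made-up values)

def read(charge_e, gain):
    """Digitize charge at a given gain, clipping at the ADC ceiling."""
    return np.clip(np.round(charge_e * gain), 0, adc_max)

charge = np.array([20, 500, 8_000, 55_000], dtype=float)   # electrons

hi = read(charge, hi_gain)            # resolves shadows, clips highlights
lo = read(charge, lo_gain)            # covers the full well, noisier shadows

# Merge: use the high-gain read until it approaches clipping, then switch.
merged = np.where(hi < 0.9 * adc_max, hi / hi_gain, lo / lo_gain)
print(merged)   # ≈ [20, 500, 8000, 55000] electrons recovered
```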
One of the most likely scenarios for a full triple stack sensor is that it will apply AI noise reduction to the data before it gets to the image processing stream. But other possibilities exist, as well.
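To tie the three layers together conceptually, here's a toy sketch of that most-likely scenario; real silicon obviously isn't running Python, the layer boundaries vary from design to design, and a trivial box blur stands in here for whatever AI noise reduction actually lands on the Logic layer:

```python
import numpy as np

def pd_layer(scene_photons, exposure):
    """PD: detect photons and convert them to charge (shot noise included)."""
    return np.random.poisson(scene_photons * exposure).astype(float)

def tr_layer(charge):
    """TR: move the data off the PD layer as quickly as possible."""
    return charge   # in hardware this is routing/readout, not computation

def logic_layer(raw):
    """Logic: on-sensor computation before the camera's image processing
    stream ever sees the data; a 3x3 box blur stands in for AI denoising."""
    padded = np.pad(raw, 1, mode="edge")
    h, w = raw.shape
    return sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

scene = np.full((8, 8), 50.0)   # flat patch of 50 photons per unit exposure
out = logic_layer(tr_layer(pd_layer(scene, exposure=1.0)))
print(out.round(1))
```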
The bottom line is: we aren’t quite there yet, but we can see where we might want to go.
——————————
Commentary
Be careful what you think...
I continue to have to do cleanup for other sites.
▶︎ “Image Sensor With 26 Stops of Dynamic Range.” Technology is very difficult to keep up with and fully understand. What tends to happen when the media doesn’t fully understand a new technology is that they simply repeat marketing messages. Canon showed off a new SPAD (Single Photon Avalanche Diode) image sensor at CES, along with the claim of 156 dB dynamic range (which most sites did the conversion on and reported as 26 stops). We have two things to explain here:
Current image sensors collect photons over time (effectively shutter speed) and then count how many they receive. We refer to that as PD (Photo Diode). A SPAD sensor reacts every time it receives a single photon, so rather than collecting photons—actually converting photons to electrons—internally in the sensor, the downstream electronics have to assemble the counts. The good news about SPAD is that it doesn’t have electronic read noise at the sensor: effectively an “avalanche” is a 1 and “no avalanche” is a 0. This absolutely improves data integrity in really low light, as only signal should be recorded.
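A quick toy simulation shows what removing read noise does, and doesn't, buy you; the photon level and read noise figure are arbitrary illustrative values:

```python
import numpy as np

# Toy comparison: both sensors see the same random photon arrivals, but the
# conventional photodiode read adds electronic read noise, while the SPAD
# simply counts avalanche events (1 per detected photon).
rng = np.random.default_rng(0)
mean_photons = 5            # a very dark scene (illustrative)
read_noise_e = 2.0          # electrons RMS of read noise (illustrative)

photons = rng.poisson(mean_photons, size=100_000)

pd_signal   = photons + rng.normal(0.0, read_noise_e, size=photons.size)
spad_signal = photons.astype(float)

print(f"PD   noise: {pd_signal.std():.2f}")    # ~sqrt(5 + 2^2) ≈ 3.0
print(f"SPAD noise: {spad_signal.std():.2f}")  # ~sqrt(5)       ≈ 2.2
```

Note that the SPAD's noise doesn't drop to zero, which brings us to the second thing.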
However, the thing that the media never quite picks up on is this: photons are random. For the past decade our PD image sensors have been pretty darned good at recording the randomness of photons. Yes, there’s electronic noise in the lowest level of the signal, but as Nikon demonstrated more than two decades ago, three basic noise components exist, and with any technology one of them will dominate. We are in an era where the randomness of photons is the dominant producer of noise. So however good Canon’s new SPAD is at low light data integrity, will all the extra dynamic range actually show up in photos? Not in the ways that 26 stops (versus our current 11-12 stops) suggests or the mainstream media intimates.
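If you're wondering where those numbers come from, and why photon randomness caps what actually shows up in a photo, the arithmetic is straightforward; the photon counts below are chosen purely for illustration:

```python
import math

# How 156 dB becomes "26 stops": a stop is a doubling of signal, and in the
# 20*log10 convention a doubling is ~6.02 dB.
db = 156
stops = db / (20 * math.log10(2))
print(f"{stops:.1f} stops")          # ~25.9, rounded to 26 in the headlines

# Why randomness still rules: photon arrival is Poisson-distributed, so the
# best possible (shot-noise-limited) SNR is sqrt(N) no matter how perfectly
# the sensor counts those N photons.
for n in (100, 10_000, 1_000_000):
    snr = math.sqrt(n)
    print(f"{n:>9} photons -> SNR ≈ {snr:,.0f}:1 ({20 * math.log10(snr):.0f} dB)")
```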
This is not to suggest that we won’t get single photon detection at some point in our cameras. Both Canon’s SPAD efforts and Fossum’s JOT designs are almost certainly going to be in our cameras at some point. But neither is going to remove the randomness of photons, so the overall impact in low light will be different than you think: not so much a noise reduction (randomness stays random), but more an improvement in data integrity. As most of you know, my mantra is “optimal data capture, processed optimally,” so I’m all for SPAD and JOT.
-------------------
Wrapping Up
And in other news
▶︎ Shapeshifter Shifts, Keeps Shape. ThinkTank Photo introduced two new Shapeshifter bags, the 25L and 37L. The basic premise of these bags is to avoid dividers and have you put everything in pouches (several pouches are supplied with each bag). This isn't the first time we've seen pouches in their bags. The two basic problems with this idea are: (1) you're almost certainly going to end up buying more pouches if your kit has to morph for different work, and (2) it's more cumbersome to get things out of and back into pouches when you're changing gear quickly, particularly if you use the drawstrings on the pouches to try to keep dust and other elements away from your gear. I tried the first pouch backpack ThinkTank Photo made (disclaimer: it was provided for potential review free of charge), but ultimately felt that the system was just not flexible enough. It's not clear that the new versions will be more flexible.
Those of you who've encountered me during travel will know that I use bags with dividers. However, while in transit I put all my individual gear components into really thin bags and slide those into the dividers. The reason for this is two-fold: modest protection from the elements and those dreaded Frankfurt "empty your bag" requests at their security posts.
▶︎ Apple goes subscription. In a bit of odd news, Apple announced a new Creator Studio product, which is really just a bundle of existing products coupled with some AI additions, available as a subscription. The standalone products will still be available as perpetual purchases; however, there's now a new extended subscription option. The implication is that the perpetual products will go into near feature stasis, while the subscription versions will get extended more. Creator Studio includes Final Cut Pro, Logic Pro, Pixelmator Pro, Motion, Compressor, and MainStage, along with "substantial AI features" for those products as well as Pages, Numbers, and Keynote. Price will be US$12.99/month, or US$129/year, when they appear on January 28th (there's a one month free trial). Particularly telling is the offer to students and educators: US$2.99/month, or US$29.99/year. Given that it would take multiple years before either cost comes close to equaling what you pay for all the perpetual products, it seems well worth the cost to someone coming new into Mac and iPad content creation, particularly students. What's unclear at present is whether those of us who already own those products would get any real value from the "substantial AI features." Those features are locked if you don't have a subscription.
------------
As you can see, this weekly or bi-weekly "newsletter" style for News/Views can be quite elaborate and lengthy. But putting everything in one spot less often, in a single format, saves me time and allows me to spend more time on the commentary than the news itself, which you can get pretty much anywhere.
byThom MAX is still coming, but for the time being I'll be doing news this way. I'll have more about byThom MAX when I kick it off later in 2026. In the meantime, if you're interested in subscribing, click here to receive updates.