LEDE ON
Things continue to be quiet in the photography scene. We did have a bunch of new rumors pop up, including a Chinese zoom lens, another Chinese autofocus lens entrant, hints that GoPro may expand beyond the wide-angle action camera, and more. But in terms of news? Yawn (wake me when something happens again).
——————————
Commentary
Gerald undoes the color world
Gerald Undone, a well-known and respected YouTuber who mostly caters to the sophisticated videographer crowd, seems to have upset a few people who noted only his "grades" for color. As in: Nikon gets an A, while Fujifilm's overall report card showed a D. Nikon's Flat Picture Control earned an even better grade than A for photo accuracy.
Yes, we're back into Color Science feuds. Basically, it's accuracy of color versus colors people like. I should point out that I come from the video world (dating back to the early 1970s), and because video always has to worry about what's happening downstream of the capture, accurate color at capture has always been a priority. It's my priority with stills, too. I think I first wrote in 2004 that if you baked something into your original capture files (e.g. JPEG), it made the job of changing things later much more difficult. The more you baked, the more difficult later changes became, up to the point where you "burnt" your original and would never be able to recover usable data from the burned sections.
The doom-scroll side of the world doesn't mind baking. Indeed, they count on some baking to set their images off from others in the scroll. Brighter, contrastier, more colorful, unusual color palette, and more. That's one of the things that happen when you're taking the same composition as others ;~). Pleasing color is one of the reasons why Fujifilm resonates with the content creators: picking a different film simulation gives you instant baking. Nikon's more recent foray into Recipes is similar, and even the wording speaks to "baking."
I think everyone really needs to understand—after accounting for color blindness and cataracts—just what world they want to live in: accurate or pleasing color. I'll point out that this was the case even back when I picked up my first film camera in the 1960s. You're in one camp or the other, though these days with digital you can start in the accurate camp and migrate to the pleasing camp any time you want. Pretty difficult to do the opposite, though.
Which brings me to LUTs in the video world. I should point out that one of Gerald's businesses is selling accurate LUTs, which is one reason why he's doing all this technical color analysis in the first place. In N-RAW or N-Log video from my Z9-generation cameras, I like his LUTs a bit better than Nikon's free LUTs. His LUTs do a better job of getting you to a "broadcast accurate" color in your base material. If you want to grade looks into that after the fact, you're working from a better original data set using his LUTs.
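To make "applying a LUT" concrete, here's a minimal sketch in Python. It assumes a 3D lattice like the ones stored in .cube files and uses trilinear interpolation; the identity LUT is purely illustrative (a real technical LUT would pull log footage toward broadcast-accurate color, with creative grading layered on afterward).

```python
# Toy 3D LUT application with trilinear interpolation.
# The identity LUT here is an illustration, not a real color transform.

def make_identity_lut(n):
    """Build an n x n x n lattice that maps every RGB triple to itself."""
    step = 1.0 / (n - 1)
    return [[[(r * step, g * step, b * step) for b in range(n)]
             for g in range(n)] for r in range(n)]

def apply_lut(lut, rgb):
    """Look up one RGB triple (components in 0..1) via trilinear interpolation."""
    n = len(lut)
    idx, frac = [], []
    for c in rgb:
        x = min(max(c, 0.0), 1.0) * (n - 1)
        i = min(int(x), n - 2)      # lower lattice corner for this axis
        idx.append(i)
        frac.append(x - i)          # fractional distance toward the upper corner
    (ri, gi, bi), (rf, gf, bf) = idx, frac
    out = [0.0, 0.0, 0.0]
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                # Weight each of the 8 surrounding lattice entries
                w = ((rf if dr else 1 - rf) *
                     (gf if dg else 1 - gf) *
                     (bf if db else 1 - bf))
                entry = lut[ri + dr][gi + dg][bi + db]
                for ch in range(3):
                    out[ch] += w * entry[ch]
    return tuple(out)
```

An identity LUT returns colors unchanged, which also happens to be a handy sanity check before trusting any purchased LUT in your pipeline.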
——————————
Commentary
Better than raw
A couple of questions, a few posts, and now dpreview's interview with Camera Intelligence's Caira camera designers remind me that we did the right thing in the beginning at Connectix in 1994 when we designed the QuickCam. It's why Apple copied what we did when they got around to building cameras into laptops, tablets, and phones. And it's a really simple idea.
Let's start, however, with what current cameras do: data from the image sensor is immediately (and sometimes within the sensor itself) run through a complex physical image processing pipeline. That includes "gain" control and "correction" of raw data (in Nikon's case, White Balance Preconditioning, among other things), as well as lens corrections. Many of these things happen before the camera actually creates a raw data file, let alone processes the raw data into a JPEG, HEIF, or TIFF image.
What we did at Connectix was dirt simple: as few parts as possible to get truly raw image sensor data captured in real time into the CPU of a Macintosh (and eventually Windows PCs). The stream of the data was more important than getting each data packet "corrected." Particularly when we and others eventually combined the stream of image data with streams of other data, such as gyroscopes.
I'll take a really simple example to illustrate. Photons are random. So when you capture a snapshot of them via a shutter (electronic or mechanical), you freeze whatever random photons you've managed to capture. The randomness of photons is our primary source of visual "noise" these days. So what if you captured the moment via one frame of image capture, but also captured the data stream before and after? You could look at the pixel data for two, four, eight, or sixteen images in the stream and, where there's no motion, use methods to "fix" the photon randomness (I published an article back in 2011 about how to do this; I talked about it at a graphics conference a decade earlier). With the QuickCam we were even trickier: we used the stream to constantly evaluate both exposure and color change (e.g. someone turns on a light), as well as to produce images.
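A toy sketch of that idea (my construction, not the QuickCam's actual code): average each pixel across neighboring frames only where the stream shows no motion, and fall back to the chosen frame's value where it does.

```python
def temporal_denoise(frames, key_index, motion_threshold=10):
    """frames: list of same-sized 2D grayscale frames (lists of lists).

    Returns the frame at key_index, but with static pixels averaged
    across the whole stream to suppress photon (shot) noise."""
    key = frames[key_index]
    height, width = len(key), len(key[0])
    out = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            samples = [f[y][x] for f in frames]
            # Crude motion test: a pixel that swings more than the
            # threshold across the stream probably saw subject motion,
            # so keep the key frame's value untouched.
            if max(samples) - min(samples) > motion_threshold:
                out[y][x] = key[y][x]
            else:
                # Static pixel: averaging N frames cuts shot noise by
                # roughly the square root of N.
                out[y][x] = sum(samples) / len(samples)
    return out
```

Real implementations use far more sophisticated motion estimation than a min/max swing test, but the principle is the same: the stream, not the single frame, is what beats the photon randomness.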
To a large degree, smartphone cameras have gotten better at final results because they're doing some of these stream-related things (as does the Caira), particularly in low light. Meanwhile, the camera makers have gotten worse. That's because they rely upon a physical image pipeline that does pieces of the work all along the way, plus the sensor-to-ASIC path runs through a limited amount of memory, and not always at the speeds the smartphones achieve with their direct-to-compute-cores approach.
Which brings me to another point: we always knew processors would get faster; Moore's Law itself predicts that. So keeping the connection between image sensor and CPU as short and unmanaged as possible was always going to provide the ability to do more later on the CPU side. Today, that includes a lot of machine-learned things in the smartphones (AI, if you will). The dedicated cameras are doing that on limited (or no) real-time streams well away from the image sensor, and with cores built on larger process sizes that are far bigger, more power hungry, and slower than those in the mobile devices.
So what's better than raw data? All data as it is produced, and the full stream of it, not a slice of it every now and again. Which leads me to Professor Eric Fossum's work. He was in on the original CMOS image sensor development at the NASA Jet Propulsion Laboratory (JPL), but at Dartmouth he helped develop what is now called the Quanta Image Sensor (QIS), built from tiny photosites he calls "jots"; it basically just produces a stream of data that tells you when and where every photon arrived at the image sensor. After introducing a 41mp 2.2-micron sensor in 2022, the company he and his students started has gone 100% silent. I can't tell whether it was absorbed by someone else or what, but I can see that the quanta image sensor is still getting development activity.
So you wanted to know what's better than raw. It's knowing when every photon hit the focal plane and from where. Couple that stream of data with all the other image processing things that are going on in today's very sophisticated and fast computing cores, and I believe you'll get even better results than we obtain today.
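As an illustrative sketch (my own toy format, not any company's actual readout), if the sensor hands you photon-arrival events rather than a finished exposure, "exposure time" becomes a choice you make after capture by binning whatever time window you like:

```python
def bin_events(events, width, height, t_start, t_end):
    """Count photon-arrival events per pixel within an arbitrary time window.

    events: iterable of (timestamp, x, y) photon arrivals."""
    image = [[0] * width for _ in range(height)]
    for t, x, y in events:
        # Only photons that landed inside the chosen window contribute.
        if t_start <= t < t_end:
            image[y][x] += 1
    return image
```

The same event stream can be re-binned into a short "shutter speed" to freeze motion or a long one to gather light, with no re-shoot required.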
——————————
Reader Question
Carry on my wayward friends
One of the things I've been trying to build in the background is a huge database of reader questions with my answers. I've been doing "reader questions" off and on here at byThom for decades, but it's time to start dialing that in a bit more. So expect a reader question coming with each future News/Views. Today's question: "What’s the heaviest lens you’d use when carrying the camera using a neckstrap?"
Answer: I have a simple rule of thumb for this: if the mass of the lens exceeds the mass of the body, you should be carrying the combination by the lens (e.g. strap attached to the tripod collar). The greater the mismatch, and the farther the lens's mass sits from the mount, the more important it is that you follow this rule. That's because the lens mount is the point of weakness between body and lens, and the mount on both is designed to break once stressed past a certain force level. The reason for that is that repairing a mount is far cheaper than repairing structural damage to the body or the lens itself.
Beyond that, camera+lens these days tends to exceed three pounds even on the simplest of systems. That’s a lot of force on your neck, too. Almost none of us who carry cameras all the time use traditional neck carries. At a minimum, we use shoulder straps, but this is where Cotton Carriers and various sling belts and harnesses come into play. Even then you need to be careful, as just hanging a camera+lens off a carrier doesn’t isolate g-forces on the mount unless there are multiple points of contact.
——————————
Now Arriving at Tab 2
Only three months late
This week I published my first completely redesigned Web site: filmbodies.com. I've started with that site because it's the smallest of my sites and the one that gets the fewest updates and additions, and I wanted to make sure things work as I want them to in this new style before committing to the bigger bythom.com and zsystemuser.com sites, which would need a lot of extra work if I got something wrong enough that I needed to abandon the style and start over.
It's been a long, winding path to where I ended up. I have so many different prototypes of sites now that I'm going to have to build an archive drive of all my ideas and testing. In the end, though, I decided to keep things closer to what I've been doing rather than stray further from it. That said, there are a ton of behind-the-scenes changes, some of which I coded myself, some of which were coded by others. But filmbodies is now my first site that 100% respects screen size, and it also respects the Night/Day settings on your devices. A 404 page, redone SEO, and other previously missing elements are now built in, though I haven't yet hooked up the contact page, 404, and redirects. Overall, filmbodies should be leaner and faster. I can update it more quickly, too, and site backups are now automatic.
Even more exciting is that I rewrote (or at least re-edited) every key article on the old filmbodies site, then added a couple of new ones, because it's difficult to stop me at anything once I get started ;~). So much so that there's even a brand new, free book available to film SLR users who visit the site (though you can enter a donation price, should you care to). The letters on my keyboards seem to be wearing off. I've also cleaned up a lot of images, because on the original site many of them were only 384 pixels wide! That tells you how far back the site goes, as I originally used a two-column design with a maximum column width of 500 pixels. But now, all will be new again. New text, new photos, new everything.
Now that filmbodies is live, it's time for me to start knocking down other site dominoes...
-------------------
Wrapping Up
And in other news
▶︎ DxO PhotoLab 9.6. This new update adds DeepPRIME XD3 noise reduction (for Bayer sensors), diffusion on AI masking to make mask edges look more natural, and a new high-fidelity DNG compression routine.
▶︎ Affinity gets bug fixes. The new combined Affinity application was updated to version 3.1, claiming to fix over 200 bugs. Canva also added a new light interface for those who objected to the dark one, a new Convert to Curves function that changes pixel selections into editable vector curves, a new Live Tone Blend Groups function, and some other minor bits. Affinity seems to generate a love/hate response from users, mostly due to its UI mimicking the combined Illustrator/Photoshop/InDesign interfaces, but frankly, it's a free, working (near) Photoshop clone, and I'm not sure how you can hate that.
▶︎ GIMP gets an update. Speaking of Photoshop alternatives, GIMP (GNU Image Manipulation Program) just updated to version 3.2, which finally adds non-destructive layers. In fact, layers got a lot of additional attention in this release, though not for the sort of layers we tend to use with photos. Instead, the new bits have to do with text layers, linked layers, and vector layers, which are more graphic-design oriented. The MyPaint Brush tool was updated, as was the Text Editor. The UI got a lot of touchup and adjustment, though it still has a geeky, old-school Unix flavor, and JPEG 2000 and AVCI images are now also supported. Curves now supports Presets. There's even a new Cornish-language version of the UI, bringing the number of languages GIMP supports to 86.
▶︎ Another Viltrox Vintage Flash. Viltrox introduced the Vintage V2, a US$37 basic flash unit with a rechargeable lithium battery. While there are TTL-compatible versions for Canon, Fujifilm, Nikon, and Sony, the only controls on the flash itself are Automatic plus manual power from 1/2 down to 1/16. With a GN of 6 (in feet; about 2m), it's not very powerful. The best application for this flash would be situations where minimal fill is useful.