News/Views

May I ask that you start your photography-related shopping by clicking on any of the B&H links on this site? B&H is this site's exclusive advertiser. Starting a purchase from any B&H link on this site helps support this site.

This page of the site contains the latest 10 articles to appear on bythom, followed by links to the archives.

Sorry We Didn't Listen to You...

I continue to be bemused. 

"...we have received orders that far exceed expectations..."

Yep, it continues to happen. This time it was the Sony 300mm f/2.8GM lens that triggered the comment from the Japanese manufacturer, but we've seen it from Canon, Fujifilm, and Nikon, too.

I have to repeat: if it is true that orders exceeded expectations, this is a product marketing failure. A gross failure. Meanwhile, admitting "failure" in Japan is a cultural no-no, so what the heck is going on here? 

I've written for three decades now that the insular R&D and distributed subsidiary organization of the Japanese companies is a big problem. I believe that I know more about Nikon's primary customers in the US than does Nikon corporate, for instance, which is a huge problem if even remotely true. The reason I write the foregoing is simple: I deal with dozens, often hundreds, of Nikon users a day. I hear their issues/problems/desires directly. For Nikon, however, there's a funnel system that isolates the development teams. First, a customer tries contacting NikonUSA customer service. What happens in that exchange usually triggers the question to me ;~). Sometimes things get escalated from customer service to higher within NikonUSA, but ultimately, what gets sent to Japan—translated into Japanese by someone who didn't hear the original complaint—is a very small subset of what the customers are actually reporting. 

You can even see this on the Internet fora. More often than not, real issues show up in frustrated posts long before the Japanese companies pick up on them. And sometimes, those posts are simply ignored. 

I've written it before, but the whole camera design climate in Tokyo is paternalistic. They believe they know what we need, and they refuse to talk to us customers directly. Then when they do create something we want, they're surprised by the demand for it. 

There's little doubt in my mind that a fair amount of the collapse of the dedicated camera market is due to the distance between the designers and the users. Yes, gearheads exist who worship any increase in any number or function, and who will buy anything the camera maker creates. But that's not the majority of the market; it's just a vocal subset. So the constant small iterations in hardware, particularly over the last decade or so, aren't resonating with the customers who've stopped buying new cameras. 

If you can't solve a real user problem, you don't need a new product. 

How Important is Gear?

After decades of seeing "gear does/doesn't make the photograph" claims, I'm almost unable to see them anymore (my brain is very good at editing them out now). 

However, as the holiday buying season comes upon us, there's a natural question about how much impact gear really has on image quality and/or your enjoyment. Particularly new gear.

If you think about it carefully, we're dealing with a continuum. With no gear, you can't practice photography at all. The best you can do is try to store some more memories in your brain, which will ultimately keep post processing those neurochromes toward full decay. Thus, you need a camera.

At the other end of the continuum, if someone could make a "perfect" camera—none exist that I know of—your photography could also be perfect, assuming that you read the manual and knew how to compose ;~). 

Consumer Reports just rated dozens of camera/lens combinations, and dozens of them got "CR Recommended" checkmarks. Put a different way, most of the cameras that are on the market today are closer to that "perfect camera" end of the spectrum and obviously well away from the "no camera" end.  

We keep chasing smaller and smaller gains. Technically, at this point in time, there isn't anything I want to do with a camera that I can't already do. Yes, in some cases I have to finesse something or lean a bit on post processing, but seriously: if you told me that my current gear closet is full and holds all the gear I'll ever have, is there any output I can foresee wanting in the remainder of my lifetime that I can't produce? 

I can think of only one, and that comes back to Apple's Vision Pro headset and its ability to put you "into" 3D photos, panoramic photos, and videos. I'm not 100% convinced that there wouldn't also be a way to do that with my current gear, though at the moment Apple says only iPhone 15 Pro cameras can capture what you'd need. 

So when I get asked versions of the "does gear matter" question these days, my answer has become "less and less." As I pointed out in my recent presentation at Creative Photo Academy, this even applies to all those recent lenses Nikon has been making. Yes, I can extract a better image from my 400mm f/2.8 TC VR S or the 600mm f/4 TC VR S than I can from all the other Nikkors that hit those focal lengths. But the difference is pretty small in most use cases, and costs US$10,000 or more additional to obtain. The built-in teleconverters are also nice, but do you really want to pay that much more money to flip a switch instead of mounting an additional objective? 

Thus, as you gear up for the holidays—possibly literally—think carefully about why it is you're thinking about gear you don't already own. What's your real motivation for upgrading or adding? Moreover, make sure you understand what the cost/benefit ratio of getting something new really is. 

That—cost/benefit—is one of the reasons why I'm hesitant to recommend Nikon's new Zf to Z6 users as an upgrade, for example. On the tech side, the Zf runs rings around the original Z6, even though it uses the same image sensor, shutter, and viewfinder. The problem is that it's a very different camera to use and configure. It handles differently. How are those things going to impact your appreciation of the new abilities and performance? What do you have to learn that's new in order to achieve something different than you're currently achieving?

These are always tough questions. 

I long ago resigned myself to just living in a world of constant change. I grew up in Silicon Valley, I worked most of my career in Silicon Valley, and today I'm surrounded by a lot of gear that came out of ideas/innovations/products from Silicon Valley. I can point to almost any year all the way back to 1975 and show you the disruption that the constant change caused. Punch cards to switch flipping to paper tape to cassettes to floppy disks to hard drives to SSDs is just one of the many tech progressions I've already lived through. Both my curiosity and my drive keep me pushing forward, but I have to say that in many things—for example, word processing—most of the recent changes are really annoyances over substance. 

Thus, as Black Friday and Cyber Monday and all the other Days That Will Be Named come upon you, pay particular attention to that annoyance/substance bit. If something is currently annoying you that can be fixed by a new product, go for it. If a new product gives you little of substance over what you're using, don't buy it. 

Ultimately, though, nothing will have as much impact on your photography as instruction and practice, something I've been preaching (and doing myself) for decades. Gear matters, but brain matter matters more.

What's Really Happening with Sony?

I get it. First global shutter: we win. 

As usual with significant new camera announcements, the Internet is hotly "debating" what this means for everyone, even though the camera in question is a niche product with a specific user target.

One of the things I've been pondering about Sony's press conference and announcements yesterday is "why now?" The camera, lens, and firmware updates Sony announced are not available today. You won't be able to buy them for the holiday season. The camera and lens are scheduled for "Spring 2024," which I'd take to be no sooner than April, and the firmware updates are also sometime in the future (I believe they said March). Due to their niche nature, demand could exceed supply when the camera and lens finally do arrive. The announcements weren't even made on the actual ten-year anniversary of full frame mirrorless Alpha, though the marketing side did bring the ten-year thing up several times. 

So what gives?

FUD. (Fear, Uncertainty, Doubt; a marketing tactic.)

Two years ago Nikon started the move away from the mechanical shutter with the Z9. Nikon has made major improvements via firmware updates to that top-end camera three times now, and introduced a less expensive but not really less capable sibling in the Z8. Nikon also has introduced quite a few telephoto lens options that will appeal to folks who are concerned about my next sentence. The 2024 Summer Olympics are coming up, as are a couple of the remaining big agency buying/replacement decisions. Canon is rumored to be getting ready to launch an R1. They, too, are launching a lot of interesting optics that Sony can't exactly match.

Meanwhile, the Sony A1 has slid out of the buzzworthy column as Nikon and Canon fully committed to mirrorless, and in talking to dealers, most tell me that sales have dropped off, as well. 

I see Sony's pre-announcements as being fear-of-losing-sales driven. "We need to announce something that makes us look in front again." 

The only real new thing that they announced was a global shutter. These have been slowly trickling into the expensive video gear, and they're going to continue to trickle into the expensive still photography gear now that Sony's cracked the lid open. 

For the general photography consumer (and prosumer), a global shutter doesn't necessarily buy you things that you actually need. The fast rolling shutters of the A9 Mark II, A1, Z8, and Z9 are plenty fast enough. 

For the professional photographers trying to make a living from images, a global shutter will be useful, particularly in sports and some flash usages, perhaps for photojournalists dealing with frequency-based lighting (though the top current cameras can be adjusted for that). Moreover, that group is willing to pay for benefits that are real to them.

But again, I can't buy an A9 Mark III today. I can continue to photograph with my A1 or Z9, and guess what? 98% of the time they're all I need. So I have to wonder what Sony is worrying about by announcing this camera and the specifics so long before its actual release.

Does Zeno Ever Get Where He’s Going?

One reason why dpreview really needed to continue is that its volume of users—both knowledgeable and naive—surfaces interesting things to contemplate. The Science and Technology forum recently had someone reference an interesting paper: basically, what's the MTF of human vision? 

In photography we talk about MTF values for lenses all the time. We sometimes talk about the contrast of output devices (printing, screens, etc.). But do we talk much about what the eye can do with respect to contrast?

Note that the commonly used Zeiss formula for depth of field calculations is based upon what our eyes can resolve. The circle of confusion that theory uses is calculated with placeholders that deal with angle of view (size of print, distance to view print, and eyesight assumptions). So we do sometimes talk about the eye, but that usually gets buried behind something else.
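
For the curious, here's a minimal Python sketch of how those placeholders turn into a circle of confusion and a depth of field range. The 0.2mm blur criterion, the 8x10" print, and the 25cm viewing distance are the classic textbook assumptions, not anything a camera maker publishes:

# Rough sketch: the "eyesight" placeholders become a circle of confusion,
# which then drives a standard thin-lens depth of field calculation.

def circle_of_confusion(print_blur_mm=0.2, print_height_mm=203.0,
                        sensor_height_mm=24.0):
    # Acceptable blur on the print, divided by the enlargement factor,
    # gives the acceptable blur (CoC) back at the sensor.
    enlargement = print_height_mm / sensor_height_mm
    return print_blur_mm / enlargement

def dof_limits(focal_mm, f_number, subject_mm, coc_mm):
    # Hyperfocal distance, then the usual near/far limit formulas.
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = float('inf') if subject_mm >= h else \
        subject_mm * (h - focal_mm) / (h - subject_mm)
    return near, far

coc = circle_of_confusion()               # ~0.024mm for full frame
near, far = dof_limits(50, 8, 3000, coc)  # 50mm lens at f/8, subject at 3m
print(f"CoC {coc:.3f}mm, DoF from {near/1000:.2f}m to {far/1000:.2f}m")

Change any of the viewing assumptions and the "acceptable" blur changes with them, which is exactly the point: the eye is baked into the formula.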

I’m still thinking about the implications of the research paper. It falls in my “what can you see?” commentary, but it intersects with a bunch of other things.

I will say this: as camera and screen resolutions have gone up and we've started seeing lenses with very high MTF results, it's getting more and more difficult to make a clear, accurate assessment from observation alone. Nowhere has that been more true than with all the recent telephoto lenses. It's one of the reasons why one of my base tests is of bear hair at distance (feathers are also good), and why I do Imatest MTF calculations, as well.

If you're trying to make an assessment visually, a number of factors come into play: (1) your corrected vision, including astigmatisms; (2) the program rendering the output; and (3) the resolution of the output. Those are just the three big ones; there are others. Many people are making assessments based on observation of Internet content, either posted photos or videos. But there, compression also comes into play. And now, according to the paper, I may have to take into account my pupil size at the time of assessing something when the detail gets high.

The previous paragraph is just one of the reasons why two people can look at the same thing and see something different. To one, the lens is “perfect,” to another the lens has “flaws.” 

At the age of 71, my math skills aren't what they used to be (though my eyes are carefully and highly corrected). But there's a complex math problem here I need to contemplate. Are we at a stage where the combination of the resolution of the camera, the MTF of the lens, the resolution of our screens, and the abilities of our eyes is in a near good-as-it-gets sweet spot? In other words, does a 45mp camera with a modern telephoto lens on a Retina-type display with a well-managed JPEG image viewed at normal distances show just about all most people will see? 

Anecdotally, I'd say yes. Further, I'd say that we're deep into Zeno's Paradox (where if you keep covering half of the remaining distance to a goal, you never get to the goal; so in a photographic sense, if we keep increasing the resolution, does that get us nearer to "best observable"?).
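
A toy model shows why the gains keep shrinking: treat the contrast you ultimately perceive at some level of detail as roughly the product of each stage's MTF. The numbers below are invented for illustration, not measurements of any real lens, sensor, display, or eyeball:

# Toy model of the "sweet spot" idea: overall perceived contrast is roughly
# the product of each stage's MTF at a given level of detail.

def system_mtf(*stages):
    result = 1.0
    for mtf in stages:
        result *= mtf
    return result

eye, display, sensor = 0.85, 0.90, 0.80   # hypothetical values
good_lens, exotic_lens = 0.85, 0.95

print(round(system_mtf(eye, display, sensor, good_lens), 2))    # 0.52
print(round(system_mtf(eye, display, sensor, exotic_lens), 2))  # 0.58

A big jump in lens quality moves the final number only modestly, because the other links in the chain cap what you can actually see.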

Does this mean the end of photography is near? 

No. One issue with photography is that it is two-dimensional. All the math for the above would be using flat planes for subjects and viewing. We argue about bokeh all the time, but what we're actually arguing about is how much we can ignore (or don't want to see) the third dimension. Nikkor lens engineers have long talked about the transition from the MTF (flat plane) to the three-dimensional out-of-focus area, and suggest that there are multiple approaches to what can happen. In reality there isn't a true difference (other than differences in human eye/brain abilities), but in the physical world, X sits precisely behind Y in only one measurable way.

Apple has been pooh-poohed over its upcoming Apple Vision Pro. But if you think about it, this will likely be the start of true three-dimensional photography. Many have attempted and failed at this before (e.g. Lytro). It's unclear how successful Apple will be, but I haven't known them to throw out big platform efforts without having first understood what new world was being opened up. 

But to my point: 45mp with modern lenses on modern displays is starting to look a lot alike to me. Images from my Z9 with the 400mm f/4.5, 100-400mm f/4.5-5.6, and 180-600mm f/5.6-6.3 look a lot alike at 400mm and maximum aperture. I'm still able to tell one from the other, but I've noticed in showing the original images to others that they often can't. And I doubt that you can really tell via JPEGs posted on my sites. 

On the one hand, this is a positive thing: you don't have to buy US$13,000 lenses to make an image that most will find good. A US$4000 camera/lens combo might do what most folks need. On the other hand, it's not a positive thing, as those of us still working at photography have a harder time justifying the expenses that differentiate us. Moreover, there's always that nagging FOMO (fear of missing out): that someday in the future you may wish you had used a better camera/lens combo.

That said, when you work at processing images and know what to look for, yes, the differences between cameras and lenses can still be seen. I suspect I’m going to have to spend some of 2024 trying to tell you how you do that.

Should You Imitate?

This question comes up again and again in the form “how do I make my images look like this?” or “where was this taken?” or “how do I get these colors/saturation/contrast?”

Before I answer the headline question, though, there’s another question you should ask yourself first: what’s your motivation? 

(photo: Paris, 2001)

If you’ve ever been to a big museum like the Louvre, you’ll have encountered the easels of painters who are trying to slavishly copy a painting hanging on the wall. In the art world, there’s a method of teaching that involves learning how a “master” did what they did. That’s not just about composition, but more importantly about brush strokes, layering, blending, and a whole bunch more technique-related bits. The thought is that if you can reasonably recreate an image, that implies that you understand how it was created.

If your motivation is that—you want to understand how a photo was created—my answer to the headline would be yes, that’s probably a valid approach to learning. However, a “copy” shouldn’t be your end goal, it should be an intermediary step. Understanding how something was done is different than understanding how to do something for yourself.

Galen Rowell taught me that. 

He spent a lot of time researching any place he was going to photograph, and he often brought sample photos with him on trips. He’d use those to judge where someone had actually placed their tripod and consider why they might have chosen the lens, aperture, and focus point they did. He also loved historical photos: he wanted to see what something looked like today versus the way it did when it was originally photographed. 

But that was more about historical photo education and scouting for him. The photos Galen took in those locations were unique to him: he Galenized the place. Which often meant that after seeing what someone else did, he climbed something, looked at it from another angle, or used a completely different—typically 20mm f/11—set of gear.

He also used three other things he learned from studying other photos and photographers along the way: contrast reduction via (1) graduated NDs or (2) fill flash, and (3) near/middle/far relationships. With graduated NDs he even went so far as to decide that the ones that existed all did the wrong thing, so he worked with Singh-Ray to create new ones that did what he needed done. Fill flash was another of those “I learned this…I’m going to do that” things in Galen’s training. Once he understood how everyone else was using flash for fill, he figured out how to make fill work the way he wanted it to. That certainly wasn’t the way Nikon designed the Speedlights to work. 

The near/middle/far thing came essentially from studying the Muenches, though once again Galen adapted it differently. Indeed, the "middle" was often Galen! Why? Because the Muench version didn't show scale. A human in the image shows scale.

Most of the time, you knew an image was Galen's on first sight. That didn't happen because he was imitating someone; it happened because he studied what others did and then used that information as a foundation for what he wanted to show. 

One of the last challenges Galen hit me with was asking how I was "Thoming" a photo. I never got to give him a clear answer before his untimely death, though I had planned to give him my answer the next time I saw him. Understanding myself after first understanding what others were doing generated a whole new way of teaching, as well as a book whose final details I'm still trying to lock down before publishing. 

But this article's headline is targeted at you. Your answer should be: yes, it can be productive to imitate so that you can understand how a great photographer got to their image in the first place. But that was their image, not yours. Almost certainly you would have had a different response to a place, setting, or subject than the photographer you learned to imitate. So can you capture that? Can you make a photograph your own? 

When you can, that’s when you know that you’ve done what Susan Sontag once suggested: stand on the shoulders of those that came before you. 

Thank You EU and USB Standards Committee

With the EU basically mandating the use of USB-C as a connector and the USB standards body imposing no consistent or required labeling, we're now at the point where the USB-C connector the EU is mandating may be trying to do any of the following:

  • USB 2, 480Mb/s (Hi-Speed, e.g. new iPhone!)
  • USB 3.0, 5Gb/s
  • USB 3.1, 5Gb/s
  • USB 3.1 Gen 2, 10Gb/s (SuperSpeed+)
  • USB 3.2 Gen 1, 5Gb/s
  • USB 3.2 Gen 2x2, 20Gb/s (absolutely requires a different cable)
  • and get ready for USB4, 40Gb/s (which drops the space between USB and the number!)

What’s supported at each end is important, so you can have a new iPhone talking to a new Mac where the iPhone is throttling the connection speed! Also, while cameras state that they support USB 3.2 (of any variation), the electronics in the camera don’t produce data at the max speed of the connection.
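
If you want the rule of thumb as arithmetic: the link runs at the slowest of the host port, the device, and the cable. A quick sketch (nominal ratings only; real-world throughput is always lower):

# The connection negotiates down to the slowest element in the chain.
GBPS = {"USB 2": 0.48, "USB 3.2 Gen 1": 5, "USB 3.2 Gen 2": 10,
        "USB 3.2 Gen 2x2": 20, "USB4": 40}

def link_speed(host, device, cable):
    return min(GBPS[host], GBPS[device], GBPS[cable])

# A USB4 Mac talking to a base iPhone 15 over a USB4 cable still runs
# at USB 2 speeds, because the phone is the bottleneck.
print(link_speed("USB4", "USB 2", "USB4"))   # 0.48 Gb/s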

On top of this, we have USB PD (Power Delivery) up to 240 watts, now at standards version 3.1, plus the Thunderbolt standards, now at version 4, all of which use the same USB-C connector.

Here's the rub: I have about seven different variations of "USB-C" cables now, and virtually none of them are labeled with what they support, other than a couple with Thunderbolt markings. Even that's ambiguous: sometimes I have to figure out whether the lone lightning-bolt symbol on the cable means Thunderbolt 2, 3, or 4.

Technically, there is a labeling standard (SS5, SS10, SS20, 20, 40, 3, 4: numbers along with the USB or Thunderbolt icon). But hardly anyone complies with the labeling, and note the confusion with 20, which could mean USB 3.2 Gen 2x2 or USB4. Moreover, we need USB PD labeling on the connector, as well (e.g. 15, 60, 240 watts). The EU's new directive starting in December 2024 may or may not help with the labeling: packages require a label, but the cables themselves apparently don't. 

One can almost make an entire IT career out of just knowing (and having a supply of) all the various different cables and solving users' problems by simply plugging in the right one. Add in understanding Wi-Fi routers, which have just as many behind-the-covers differences, and yes, you have a complete IT career ;~).

Nikon has been supplying basically USB-3.0 to USB-2 cables with their cameras on the mistaken assumption that your computer is still antiquated and the camera is newer ;~). The Zf at least supplies a USB-C to USB-C cable (appears to be 30W max). 

If you're having connection issues, either for communication or power, it's almost always going to come down to having the right cable. 

Some advice: if you're on a Mac, Apple decided to move straight to USB4 and Thunderbolt 4 with the M1/M2/M3 machines (there are still a couple of exceptions, such as the front USB-C ports on the Studio M2 Max model being only USB 3.2 Gen 2x2, and the recently announced base 14” MacBook Pro M3 being only Thunderbolt 3). You really need to pay attention to the external SSDs and card readers you add: they should be USB4 to get maximum performance. 

You should also be using USB4/Thunderbolt 4 cables that are rated to 240 watts Power Delivery. Fortunately, there's a good source for those, and they're marked with Thunderbolt level, maximum Gb/s, and USB PD maximum wattage: OWC.

Yes, it makes a difference. The latest ProGrade CFexpress card reader is USB4 (mislabeled USB 4.0 in their marketing literature ;~). Their older CFexpress card reader is USB 3.2 Gen 2x2, which tends to work at 5Gb/s speeds on most devices I've used it on. Technically, our fastest current CFexpress cards can transfer at something just above 10Gb/s, so if ingest speed is important, you need to stay current and get the newer reader if your computer supports USB4. 
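
Here's the rough ingest arithmetic, assuming nominal link rates, no protocol overhead, and a card that sustains about 10Gb/s (none of which will be exactly true in practice):

# Rough card-offload times; the slowest of card and link wins.
def offload_minutes(card_gb, card_gbps, link_gbps):
    effective_gbps = min(card_gbps, link_gbps)
    return (card_gb * 8) / effective_gbps / 60

for link in (5, 10, 40):   # a link stuck at 5Gb/s, a 10Gb/s link, USB4
    print(f"{link:>2} Gb/s link: {offload_minutes(512, 10, link):.1f} min for a 512GB card")

At 5Gb/s that 512GB card takes nearly 14 minutes; at 10Gb/s or better it's under 7, and past the card's own limit a faster link buys you nothing more.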

Casual enthusiasts taking only a few images at a time probably don't have to stay near the front edge of tech as we sports and event photographers do. However, tech progression is relentless. You need a plan to keep all your gear relatively current or else you fall into gaps that become a big pain to deal with, as you end up having to replace everything. There was a post recently from someone who bought a Nikon Z7 II, for instance (a three year old camera!), but their computer was over a decade old and the software they used was also no longer updated for current OS versions. In essence, the purchase of the camera forced them to deal with everything they owned all at once. 

Europe’s USB-C requirements that come fully into effect at the end of next year will be another one of those things that forces everyone to deal with a large portion of their gear at once. 

See you at the cable store...

Spooky Gassed — Trick or Treat?

Apple yesterday introduced new M3-based MacBook Pro and iMac 24” models. They also end-of-lifed the 13” MacBook Pro with the Touch Bar. 

What does all this mean for photographers looking for a new Mac?

Probably lots of discounts on amazingly good M2 models as stores unload inventories (ditto Apple Refurbished; the M2 13” is selling for less than US$1000). 

I don’t think my advice has changed from before: 16GB RAM and anything other than the base SSD are more than sufficient for most photography enthusiasts. If you batch process, go deep into AI processing, or edit video, then 32GB and a Pro or Max chip starts to become the bar you don’t want to go below. 

One thing that Apple emphasized, and which will have impacts on game playing, 3D modeling, and perhaps some fringes of photography, is that the GPU design has been changed and now supports hardware-accelerated ray tracing. Likewise, the neural engine got faster, and that should directly impact AI-based image processing.

The iMac remains in its usual model colors, but the MacBook Pros now add a Space Black option (left, versus the regular silver on the right):

(image: Space Black and silver MacBook Pros)

Apple did a lot of gaslighting in their 30-minute Spooky Fast presentation last night. Besides the Touch Bar retirement, Apple didn't supply any real information about how the M3 differs from the 14/16” M2 models other than vague bits (they compared to the 13” M2, which isn't an apples-to-apples comparison, pardon the pun). You can find some more direct comparisons partly buried on the Apple Web site.

In diving deep into a few things that weren't presented in the Apple Event, I discovered why they gaslighted the direct M2-to-M3 upgrade differences. For instance, on the 14” MacBook Pro model, the battery life goes up perhaps an hour at the same level of performance. And that's with Apple's "soft" tests (e.g. running video on the laptop screen continuously). I'll have to get my hands on a unit to do a more performance-based test, but I don't think it was unintentional that most of Apple's comparisons were M3 to M1. 

Also ghosted in the Event was the availability of the Max models: November, not next week. 

Yet another ghosted detail was that the new base 14” MacBook Pro drops to Thunderbolt 3 and only two USB-C/Thunderbolt ports (down from Thunderbolt 4 and three ports). You have to buy the Pro or Max chip models to get back to where the M1 MacBook Pro started things. 

The EU got gaslit, too: the iMac still uses the current Magic Mouse and Keyboard, so it comes with a USB-C to Lightning cable. I guess new mice and keyboards are going to play chicken with the EU right up through Christmas 2024 ;~). 

I'm not done. Apple Silicon uses shared memory internally, and the speed at which that works has changed (and is referred to as dynamic caching). If I'm reading the Tarot cards correctly, the M3 uses somewhat slower memory access generally, but the CPU/GPU cores have a mechanism for speeding up access when required. Moreover, the mix of performance cores versus efficiency cores seems to have changed across all models (lowering the performance core count and increasing the efficiency core count). Overall, I'm not expecting the benchmark tests of the M3 models to be tremendously improved over the M2. Oh, they'll show improvement, but it now takes a lot of improvement to show up as real speed to the average user. 

Which brings me back to the M2 Macs. As they go on sale, take a long look at them before opting for an M3 model. The exception to that would be the iMac: Apple never built an M2 version, and the M3 has a pretty clear performance improvement over the M1. Not that I'd advise you to avoid the M1 iMac: if you can live with the configuration limits (16GB RAM, 2TB SSD maximums), it's a perfectly fine computer with a built-in 4.5K Retina display. 

  • 14” M3 MacBook Pro: base (8GB RAM, 512GB SSD) is US$1599, maxed out (128GB RAM, 8TB SSD) is US$6899.
  • 16” M3 MacBook Pro: base (18GB RAM, 512GB SSD) is US$2499, maxed out (128GB RAM, 8TB SSD) is US$7200.
  • 24” M3 iMac: base (8GB RAM, 256GB SSD) is US$1299, maxed out (24GB RAM, 2TB SSD) is US$2699.

If you are going to opt for an M3 Mac, the enthusiast photographer configuration is probably an M3 Pro-based model somewhere in between the base and max listed above. For instance, a 14” model with 36GB RAM and a 1TB SSD (US$2599).

I like that Apple is rapidly pushing their silicon forward. It’s clear that they’re devoting significant resources to keeping the Apple Silicon chips at the front edge of state-of-the-art. The M3 versus fastest Intel-based MacBook Pro comparisons are mind-boggling, and many of you probably still think those last Intel-based MBPs are excellent and fast. 

This site’s exclusive advertiser already has the new Macs available for order (click on ad at bottom of page).

Esthetic Obsolescence

"Planned obsolescence is a business strategy in which the obsolescence (the process of becoming obsolete, that is, unfashionable or no longer usable) of a product is planned and built into it from its conception, by the manufacturer."

In other words, the camera manufacturers have a compulsion to convince you that the camera you have needs to be replaced by the new one they're making, even though your old camera takes perfectly fine photos. When the obsolescence is mostly rhetorical, this is often called esthetic obsolescence: obsolescence because something appears out of date, not necessarily because it is.

This has been coming up a lot in discussions I've been having in email and on the Internet. For instance, here's a question: "how much better is the Nikon Zf's 8-stop VR than the Z9's 6-stop VR?" Well, it's 2 better ;~). Of course, what that "2" represents is buried in technical jargon and a testing standard that straps your camera to the equivalent of an anvil. It turns out you can shake the anvil more and get the same results. However, humans aren't anvils on which you mount a camera; they're something entirely different from a mechanical shake platform. 
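
For those who want the arithmetic anyway: each claimed stop doubles the exposure time you can supposedly hand-hold. Taking the ratings at face value (which, per the above, you shouldn't):

# Each stop of stabilization doubles the usable hand-held exposure time.
base = 1 / 500                      # say you'd need 1/500s unstabilized
for stops in (6, 8):
    handheld = base * 2 ** stops
    print(f"{stops}-stop VR: ~{handheld:.2f}s hand-held (from 1/500s)")

So 8 stops versus 6 stops is "only" a 4x difference in shutter speed, and only on the test rig.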

But 8 sounds a lot better than 6, right? Better get that new camera.

We've been through the same thing with megapixels. Just in full frame: 6, 10, 12, 16, 18, 20, 24, 26, 33, 36, 45, 50, 61. Yet even a 6mp camera should produce a perfectly fine 8x10" print. Do you really need more? Oh, right, you don't have the correct lens, aren't approaching your subject, and didn't frame properly in the first place, so you're cropping a lot. How much? 2x? Then 24mp should suffice, shouldn't it?
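
The crop arithmetic, in case you want to check my numbers: a 2x crop in each linear dimension keeps only a quarter of the pixels.

# Pixels remaining after a linear crop.
def pixels_after_crop(megapixels, linear_crop):
    return megapixels / linear_crop ** 2

print(pixels_after_crop(24, 2))   # 6.0mp, back to that perfectly fine 8x10"
print(pixels_after_crop(45, 2))   # 11.25mp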

Recently, Nikon has been using features as differentiators. The Z9 has Auto capture, but the Z8 and Zf don't. The Zf has Pixel shift shooting, but the Z8 and Z9 don't. The Z8 and Zf have HEIF support, but the Z9 doesn't. No doubt this is intentional. It's part of the esthetic obsolescence methodology that gets you thinking that maybe the thing you have needs to be replaced with something else. 

Users, of course, will say "just update the firmware." But adding new features, performance, and capabilities in firmware doesn't make a camera maker any extra money. At best, it makes it look like they support their users a little better. In practice, new features arriving in firmware usually mean that corporate wanted to ship the product before the software group had finished its job. 

Not only do firmware updates not make a camera company any money, but even if the camera company charged for them, it wouldn't make the same level of return on the cost (ROI or ROE). It would also open up the camera companies to having to listen to users, as users won't pay for something they don't want, but will pay for something they do want. Fortunately, SaaS (software as a service) hasn't yet been figured out by the camera companies, though they look at Adobe and say "yeah, we want some of that." 

I keep getting asked about what I would consider as a real improvement that I’d jump at as a customer. Well, I’ve outlined that for almost two decades now, so I’m pretty sure I won’t get that (communicating, programmable, modular). But on the short list would be global shutter and rollover electron wells (or some other significant dynamic range improvement, meaning >1 stop). 

Big, real, significant changes that do obsolete previous gear are rare in the camera industry. Which means that eventually everyone dips into esthetic obsolescence. And I'm back on topic, so I'd better stop… ;~) 

Time Versus Value

I see this one all the time: “I lost money when I bought my X for US$Y and it subsequently depreciated in value.”

Businesses don’t (or shouldn’t) have this same problem. The calculation for a business tends to be simple: does the value provided by an item exceed its depreciation? 

Consumers, on the other hand, seem to struggle with the fact that most things they buy depreciate in value and that they probably shouldn’t be buying things that depreciate more than the value they get out of them. 

Cameras and lenses don’t appreciate in value. Well, at least most of them don’t. There have been a few exceptions over the years. But you shouldn’t count on your choice of camera/lens being one of them. 

From first sale to closeout sale, cameras tend to drop about 25% in the price you'll pay for them. In other words, if the latest/greatest announced camera is US$2000 at introduction, by the time the last units are sold new, you should expect to find them at US$1500. Lenses have longer life cycles, so they tend to sell at list price at introduction, then get periodic ~10% off sales to occasionally clear built-up inventory (or sustain a production rate). 

What a used camera or lens will sell for has been in a bit of an adjustment with the transition from DSLR to mirrorless. A DSLR camera or lens just doesn't have the same value it used to. And a dealer is only going to, at best, give you 50% of what you could sell it for yourself (they need a profit margin for committing funds to it and later reselling it). We have a lot of originally US$3000 DSLRs that are selling used for anywhere from US$500 to US$1500 these days, and that's going to continue to slide downward as the seven-year manufacturer-guaranteed repair cycles expire. Take 50% off those numbers if you're going to trade it in, and, well, your older camera isn't worth much, is it? (Tip: sometimes camera makers will put a bonus on a trade-in via your dealer. Ignore those bonuses at your peril.)
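
If you want to run those numbers yourself, here's the arithmetic from the last two paragraphs using the rough rules of thumb I just cited (your actual camera, dealer, and market will vary):

# Back-of-the-envelope depreciation math; all inputs are the rough figures above.
new_price = 2000
closeout = new_price * 0.75                 # ~25% drop by end of life
print(f"Closeout price: about US${closeout:.0f}")

used_private_sale = 1000                    # somewhere in that US$500-1500 range
dealer_trade_in = used_private_sale * 0.5   # dealer offers ~50% of private-sale value
print(f"Trade-in on an originally US$3000 DSLR: about US${dealer_trade_in:.0f}")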

Most consumers I hear from are paying for state-of-the-art product and then complaining when it doesn’t hold value. A simple solution to that is to buy at the back of the trend, not at the front of it. With new cameras near end-of-life, that means a 25% discount for waiting. With new lenses, it means simply waiting until it comes on sale.

Of course there's a gotcha in all this: what's the value of being able to take photos with a new product today? A good case in point is the Nikon Z9 and its Auto capture capability. If that function nets you photos that you couldn't easily get otherwise, you have to assign a value to that. That's particularly true for us sports photographers, who've long been trying to use multi-camera setups with remote triggers. If we get the image that the others at an event don't, that has a huge value for us. 

But we're back to businesses—sports photographers in this instance—who can put a direct value on something. Consumers using Auto capture would have what? An ego boost over the poor enthusiast who's trying to get that same image manually? 

Which brings us to the real culprit here: marketing has made it imperative that you keep up with the Joneses. If Jonesie has a Z9, you need one to feel like you're still in the same league. And yet, I'm constantly seeing a stream of photos from older cameras that really hold their own and tell unique stories. 

Here's what I'd suggest you compare instead of the capability of your camera or the size of your lens: the photos you actually take. Are they as good as they could be from the gear you have? No? Then you don't need new gear, you need training and practice. Both of which tend to be less expensive than buying new cameras and lenses. 

Most people don't like hearing that, as they don't like hearing that they're at fault. It's easier to blame it on the gear. 

In my long career in photography I never felt I was as good as my mentors and instructors. But I also never felt it was because Galen Rowell used better equipment than me (he didn't; he used two older, less expensive lenses than I had). What I tried to learn from him and others was how best to use what I had. I didn't consider it a failure that I wasn't as good as he was; I considered it an aspiration to get as good as he was. 

I'm as much of a gearaholic as anyone with GAS (gear acquisition syndrome). I'm just curious about new stuff and what it might do. But I try not to let that play into my creations. I was (almost) as happy running around Botswana with the Z8, 70-180mm f/2.8, and 180-600mm f/5.6-6.3 VR lenses last month as I was in April with the Z9, 35-150mm f/2-2.8, and 400mm f/2.8 TC VR S lenses. In fact, I was happier in one fashion: my more consumer-oriented gear was easier to carry, stow, and protect. (I wrote "almost" because f/2.8 at 400mm versus f/6 does make a difference to backgrounds, and I'm spending a lot more time these days concentrating on things behind the animals.)

Value what you do with your gear, not your gear itself.

Your "Good Enough" is Not My "Good Enough"

I was looking through a forum discussion recently and came across the ubiquitous "can you see the difference?" type of post. My answer is almost always "yes." Your answer may vary. ;~)

I've written about "good enough" before, but I realize that I haven't pointed out the real problem with "good enough": it's a time-sensitive, subjective analysis based solely upon how well trained and experienced you are. Moreover, since we're talking about seeing things with our eyes, your eyesight also comes into play.

As my ophthalmologist will tell you, I'm a stickler for prescriptions. She spends way more time dialing in my eyesight than she does with any of her other patients, and we discuss things that rarely come up with other patients, like CYL (which tells you about astigmatism). Now that I've had cataract surgery on both eyes with correction to distance, I use three different sets of prescription "readers," depending upon what device/page I'm reading. I don't want "good enough" sharpness, I want "best possible" sharpness. Using a +1.5 diopter change when I should be using a +1.75 one doesn't cut it for me. (I also dislike the pincushion distortion that all off-the-shelf readers impart, but that's not a problem with acutance, the thing I'm writing about today.)
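
If you're wondering why a quarter diopter matters: for eyes corrected to distance, a +D reader puts best focus at roughly 1/D meters (a simplification that ignores any remaining accommodation, but it shows the working-distance shift):

# Approximate best-focus distance for a +D diopter over distance-corrected eyes.
for diopters in (1.5, 1.75):
    print(f"+{diopters}: best focus near {100 / diopters:.0f} cm")

That's roughly 67cm versus 57cm, which is why each device/page I read gets its own pair.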

I've been lucky enough to have trained with a number of pixel peeping processing perfectionists over the years, and that has instructed me as to what to look for, something I practice pretty much every day. I also use tools to help me. I have one plug-in that allows me to view four different sharpening conditions simultaneously that I've set up so that it helps me see the difference in missed focus, motion, lack of depth of field, etc. 

But let's get back to the "good enough" problem. Simply put, your evaluation of "good enough" is randomly different than not only mine, but everyone else's. That's because what you can see, what you've been trained to see, and what you'll overlook are different from everyone else's. 

Moreover, your "good enough" bar will change over time. If you've been using digital cameras as long as I have (now 35 years), you'll immediately know that the "good enough" bar has moved and moved and moved and moved. If you've ever pulled up an older image you took that you thought was pretty good and said "why didn't I see that?" you know what I mean. 

Be careful with declaring something "good enough." That's essentially saying that it probably has flaws, but you either can't see them or will ignore them. Be careful, too, when reading someone else who says something is "good enough." You know nothing about how well trained they are and what they can and can't see, let alone what they're willing to ignore. But it goes further than that: beyond "good enough" is "best." The same problems occur when someone says product X is the "best," particularly when it comes to anything regarding image quality. 

This is one reason why I've long been an advocate of "collect optimal data, process data optimally." I've never found that there isn't a better "optimal" I can reach for, even with my current gear. Whether that means more sampling (more pixels), more accurate focus (better edge clarity), better handling of the product, or anything else doesn't matter: I've pretty much always found that if I treat my current "optimal" as a placeholder until I figure out what better might be, I stay at the forefront of what can be done in photography. 


Looking for older News and Opinion stories? Click here.

 Looking for gear-specific information? Check out our other Web sites:
DSLRS: dslrbodies.com | mirrorless: sansmirror.com | Z System: zsystemuser.com | film SLR: filmbodies.com

bythom.com: all text and original images © 2023 Thom Hogan
portions Copyright 1999-2022 Thom Hogan
All Rights Reserved — the contents of this site, including but not limited to its text, illustrations, and concepts,
may not be utilized, directly or indirectly, to inform, train, or improve any artificial intelligence program or system. 

Advertisement: