One reason why dpreview really needed to continue is that its volume of users—both knowledgeable and naive—surfaces interesting things to contemplate. The Science and Technology forum recently had someone reference an interesting paper: basically, what’s the MTF of human vision?
In photography we talk about MTF values for lenses all the time. We sometimes talk about the contrast of output devices (printing, screens, etc.). But do we talk much about what the eye can do with respect to contrast?
Note that the commonly used Zeiss formula for depth of field calculations is based upon what our eyes can resolve. The circle of confusion that theory uses is calculated from viewing assumptions: the size of the print, the distance at which it’s viewed, and the acuity of the viewer’s eyesight. So we do sometimes talk about the eye, but that usually gets buried behind something else.
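To make that concrete, here’s a minimal sketch of how a circle of confusion falls out of those viewing assumptions. The specific numbers (5 line pairs per mm of print acuity at normal viewing distance, an 8x enlargement from a full frame sensor) are common textbook placeholders chosen for illustration, not the only valid choices:

```python
def circle_of_confusion(print_resolution_lp_mm=5.0, enlargement=8.0):
    """CoC at the sensor, in mm: the largest blur spot that still looks
    like a point on the print, given the assumed eyesight and print size."""
    blur_limit_on_print = 1.0 / print_resolution_lp_mm  # ~0.2 mm tolerated on the print
    return blur_limit_on_print / enlargement            # map that tolerance back to the sensor

print(f"CoC: {circle_of_confusion():.3f} mm")  # 0.025 mm, the classic full frame value
```

Change any of those assumptions (a bigger print, closer viewing, sharper eyes) and the “acceptable” circle of confusion changes with it, which is the point: depth of field is defined by the viewer, not just the lens.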
I’m still thinking about the implications of the research paper. It falls into my “what can you see?” line of commentary, but it intersects with a bunch of other things.
I will say this: as camera resolutions and screen resolutions have gone up and we’ve started seeing lenses with very high MTF results, it has become more and more difficult to make a clear, accurate assessment from observation alone. Nowhere has that been more true than with all the recent telephoto lenses. It’s one of the reasons why one of my base tests is of bear hair at distance (feathers are also good), and why I do Imatest MTF calculations as well.
If you’re trying to make an assessment visually, a number of factors come into play: (1) your corrected vision, including astigmatism; (2) the program rendering the output; and (3) the resolution of the output. Those are just the three big ones; there are others. Many people are making assessments based on observation of Internet content, either posted photos or videos, but there compression also comes into play. And now, according to the paper, I may have to take into account my pupil size at the time of assessing something when the detail gets high.
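To see why pupil size even enters into it, here’s a rough back-of-the-envelope sketch using the standard Rayleigh diffraction criterion. This is basic optics, not the paper’s model, and it deliberately ignores the eye’s aberrations, which in practice dominate at larger pupil sizes:

```python
import math

WAVELENGTH_M = 550e-9  # green light, roughly the eye's peak sensitivity

def diffraction_limit_arcmin(pupil_diameter_mm):
    """Smallest resolvable angle, in arcminutes, for a perfect aperture
    of the given diameter (Rayleigh criterion: theta = 1.22 * lambda / D)."""
    theta_rad = 1.22 * WAVELENGTH_M / (pupil_diameter_mm * 1e-3)
    return math.degrees(theta_rad) * 60

for d_mm in (2.0, 3.0, 5.0):
    print(f"{d_mm} mm pupil: {diffraction_limit_arcmin(d_mm):.2f} arcmin")
# 2 mm -> ~1.15 arcmin, right around the classic 20/20 acuity figure.
```

In other words, at small pupils diffraction alone puts the eye near its textbook acuity limit, so the pupil you happen to have at assessment time plausibly matters once the detail gets fine enough.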
Those factors are just some of the reasons why two people can look at the same thing and see something different. To one, the lens is “perfect”; to another, the lens has “flaws.”
At the age of 71, my math skills aren’t what they used to be (though my eyes are carefully and highly corrected). But there’s a complex math problem here I need to contemplate. Are we at a stage where the combination of the resolution of the camera, the MTF of the lens, the resolution of our screens, and the abilities of our eyes puts us in a near good-as-it-gets sweet spot? In other words, does a 45mp camera with a modern telephoto lens on a Retina-type display with a well-managed JPEG image viewed at normal distances show just about all most people will see?
Anecdotally, I’d say yes. Further, I’d say that we’re deep into Zeno’s Paradox (where if you keep covering half of the remaining distance to a goal, you never quite reach it; in the photographic sense, if we keep increasing resolution, does that actually get us nearer to "best observable"?).
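For the curious, that math problem has a well-known backbone: in a linear systems view, the combined MTF of a chain is the product of the component MTFs at each spatial frequency. Here’s a toy sketch of the cascade; the Gaussian falloffs and cutoff numbers below are made-up stand-ins for illustration, not measured curves for any real lens, sensor, display, or eye:

```python
import math

def component_mtf(freq, cutoff):
    """A made-up, smoothly falling MTF curve for one link in the chain."""
    return math.exp(-(freq / cutoff) ** 2)

def system_mtf(freq, cutoffs):
    """MTFs cascade multiplicatively at each spatial frequency."""
    result = 1.0
    for c in cutoffs:
        result *= component_mtf(freq, c)
    return result

freq = 30                        # some spatial frequency of interest
baseline = [50, 60, 55, 40]      # hypothetical lens, sensor, display, eye
better_lens = [100, 60, 55, 40]  # double the lens figure, leave the rest

print(f"baseline:    {system_mtf(freq, baseline):.3f}")     # ~0.230
print(f"better lens: {system_mtf(freq, better_lens):.3f}")  # ~0.301
```

Doubling the lens number improved the combined result by far less than double, because the other links cap the chain. That’s the Zeno-style diminishing return in action.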
Does this mean the end of photography is near?
No. One issue with photography is that it is two dimensional. All the math for the above would be using flat planes for subjects and viewing. We argue about bokeh all the time, but what we’re actually arguing about is how much we can ignore (or don’t want to see) the third dimension. Nikkor lens engineers have long talked about the transition from the MTF (flat plane) to the three-dimensional out-of-focus area, and have suggested that there are multiple approaches to what can happen. In rendering there isn’t one true answer (other than differences in human eye/brain abilities), but in the physical world, X sits precisely behind Y in only one measurable way.
Apple has been pooh-poohed over its upcoming Apple Vision Pro. But if you think about it, this will likely be the start of true three-dimensional photography. Many have attempted this before and failed (e.g. Lytro). It’s unclear how successful Apple will be, but I haven’t known them to launch big platform efforts without first understanding what new world was being opened up.
But to my point: results from 45mp cameras with modern lenses on modern displays are starting to look a lot alike to me. My Z9 with the 400mm f/4.5, 100-400mm f/4.5-5.6, and 180-600mm f/5.6-6.3 produces images that look a lot alike at 400mm and maximum aperture. I’m still able to tell one from the other, but I’ve noticed in showing the original images to others that they often can’t. And I doubt that you can really tell via JPEGs posted on my sites.
On the one hand, this is a positive thing: you don’t have to buy US$13,000 lenses to make an image that most will find good. A US$4000 camera/lens combo might do what most folks need. On the other hand, it’s not a positive thing, as those of us still working at photography have a harder time justifying the expenses that differentiate us. Moreover, there’s always that nagging FOMO (fear of missing out): that someday in the future you may wish you had used a better camera/lens combo.
That said, when you work at processing images and know what to look for, yes, the differences between cameras and lenses can still be seen. I suspect I’m going to have to spend some of 2024 trying to show you how to do that.