Do Lenses Have Character?

I've been as guilty as anyone when describing lenses: in an attempt to describe results I've often reverted to a generalization, using the term "old school" as shorthand for what the results look like. I'm probably past due in drilling down a bit into what I (and others) mean when we try to describe a lens' overall characteristics.

Technically, a lens designer is balancing a large variety of attributes when they formulate the optical formula. No "perfect" lens can really exist—okay, maybe it might if price were no object and we could invent a few new technologies along the way—so a lens designer has to pick and choose which attributes they want to maximize and which they can ignore in the process. 

It's probably wise to describe some of the attributes that are being juggled: central MTF (contrast), corner MTF, spherical aberration, coma, vignetting, linear distortion, chromatic aberration (both lateral and longitudinal), field curvature, focus shift, spectral passthrough (color), flare (several types), to name the most important. 

If you want to dig deeper, you'll find plenty of other variables that come into play. For instance, aspherical polishing attributes. When we talk about onion-skinning in bokeh, that's usually attributable to concentric variations in how the aspherical aspect of the lens is defined. Some lenses have extreme changes in their outer curvature and cruder transitions; others have less extreme changes and more sophisticated transitions. An aspherical element may have only one surface that's aspherical (the other a normal curve), or it may have two aspherical surfaces. 

Short version: there are a lot of design variables to balance. And balance is the correct term, because as you optimize one variable you tend to de-optimize another.

In the film SLR days, virtually all lens design was done manually. It wasn't until late in the film era that we started to get computerized design tools that could model different potential designs and provide emulations of what the final results might be. Even today, those automated tools are only as good as what you've programmed into them. For instance, if you stumble upon a new way to polish aspherical elements and that data isn't yet modeled in your program, the program can't accurately optimize for the final results. New forms of glass have to be programmed, and pretty much any new optical technology you come up with will need to be added to your modeling and confirmed first.

Even with all the computer simulation that's now available, you'll find that optical designs still tend to use known forms and known lens elements (particularly for curvature and polishing, probably because the mechanical tools for doing those jobs don't change rapidly). For instance, the following is the Nikon 14-24mm f/2.8 S (top) optical design versus the Nikon 14-24mm f/2.8G (bottom) (ray tracing courtesy of Photons to Photos Optical Bench).

[Image: bythom 1424S — ray tracings of the two 14-24mm f/2.8 optical designs]

You'll notice that the optical designs are quite similar (but note they’re not drawn to the same scale). From this simple 30,000-foot view, it's not easy to see how these lenses might differ in capability. You might notice that the S lens is doing something different at the back portion of the design, and that its final ray patterns appear a little tighter (which might produce more telecentricity at the center and less potential pixel crosstalk at the edge).

Yet despite the similarities in the basic optics of the two lenses, I find they perform quite differently. They have a very different character in how they image, thus answering our title. How so, you ask?

Well, the primary differences I found in my testing of both lenses are two-fold: the DSLR version of the lens has considerable field curvature, and the MTFs (contrast) look quite different. Here are Nikon's own theoretical MTF charts (Z lens on top, DSLR lens on bottom):

[Images: bythom 2220, bythom 2219 — Nikon's theoretical MTF charts for the Z and DSLR lenses]

The two observed differences are related. Some of the MTF difference—maybe most—has to do with field curvature, at least at the 14mm focal length. On flat subjects, for instance in architectural photography, field curvature would be a real problem. Ditto on test charts ;~). On more three-dimensional subjects, such as landscapes, the issue might not be clearly visible, or even problematic. 

Which brings me to a common thing I observe in lens characteristics, and one of the things that underlies my use of the "old school" shorthand: the older the lens design, the less likely it performs well as you move closer to the corners. Since many photos are framed and focused centrally (or at least in the rule of thirds "center"), the change in the corners often doesn't particularly show up in images, and over time we all formed an acceptance of center-better-than-corners that now "feels right" to many. This is the still photography equivalent to the long-established motion blur that we've become accustomed to in films and video (due to slow shutter speeds): we think it is "right" because that's what we're accustomed to. It's a "characteristic" we're comfortable with due to repeated exposure to it.

Experiments with center-and-corners-equal rendering (and higher shutter speeds in movies/video) tend to shock the perceptual system. You'll often see people try to describe this as "too digital" or "unnatural." What they're really saying is that they've formed a perceptual habit that they don't like having violated. 

In truth, the most recent mirrorless lenses—particularly the Nikon S and the Sony G/GM designs—are simply better in many of their optical characteristics. Sometimes startlingly so. It has nothing to do with “digital” and everything to do with better optical design that balances the variables differently.

Though that “better” sometimes comes with another compromise, because remember, optical designs are about balancing different aspects of how they handle light. 

The big difference between how I see lenses being designed now versus how they were designed last century is simple: a reliance on in-camera lens corrections. Instead of trying to balance linear distortion, vignetting, and even some levels of chromatic aberration, these "problems" are just left with the camera or the raw image processor to deal with. The "problems" that the lens designer mostly deals with these days are related to MTF (and its subcomponents). You can see that somewhat clearly in the Nikon MTFs, above: the mirrorless S lens' theoretical contrast capabilities are simply better than the older DSLR lens'. 
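To make the in-camera correction idea concrete, here's a minimal sketch of how a vignetting correction of the kind described above can work: model the light falloff as a radial polynomial, then multiply each pixel by the inverse of that falloff. This is an illustrative approximation, not any camera maker's actual algorithm, and the coefficients k1 and k2 are made-up values, not figures from a real lens profile.

```python
import numpy as np

def vignetting_gain(width, height, k1=0.30, k2=0.10):
    """Build a per-pixel gain map that undoes modeled vignetting.

    The falloff is modeled as 1 / (1 + k1*r^2 + k2*r^4), where r is the
    distance from the image center normalized so the corner is r = 1.
    k1 and k2 are illustrative coefficients, not real profile data.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    r = np.hypot(xs - cx, ys - cy) / np.hypot(cx, cy)  # 0 at center, 1 at corner
    falloff = 1.0 / (1.0 + k1 * r**2 + k2 * r**4)      # modeled light loss
    return 1.0 / falloff                               # gain that cancels it

def correct_vignetting(image, k1=0.30, k2=0.10):
    """Apply the inverse-vignetting gain to a grayscale or RGB array."""
    h, w = image.shape[:2]
    gain = vignetting_gain(w, h, k1, k2)
    if image.ndim == 3:                # broadcast over color channels
        gain = gain[..., None]
    return image * gain
```

With these example coefficients, the center pixel is left untouched (gain 1.0) while the extreme corners are brightened by 40% (gain 1.4). Linear distortion correction works on the same principle, except the radial polynomial remaps pixel positions rather than brightness.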

Indeed, that's what I'd tend to refer to as "modern school," versus film era and early DSLR lenses' being "old school." The design priority difference produces a visual character that's different.

Amusingly, for years I've been altering that characteristic—modern or old school—in post processing. In fact, I can trace this all the way back to my mid-1980s darkroom work. I spent a lot of time (and still do) working against lens designers' choices. These days I tend to add a bit of vignetting—what Scott Kelby refers to as his go-to "finishing move"—and I'll also sometimes look to destroy sharpness in peripheral areas. Back in the darkroom I tended to have to dodge against some portions of the vignetting and find ways to improve sharpness (contrast) in some peripheral areas. 

To some degree, how you view the visual characteristics of a lens and what you might do about them depends upon how you think your images will be viewed. I'd tend to say that the low bar of Instagram has shifted many toward a more modern school look: sharpness, saturation, and contrast throughout the image. Because a small image doesn't dominate your vision, its periphery is just as important as its center. I'd also say that really large prints benefit from modern school lens characteristics, as the viewer—particularly if allowed to wander close to the print—tends to be looking at only a portion of it at a time, so that portion, whether center or corner, needs to deliver visual impact. 

It's the in between that's a bit of a problem, and unfortunately, that's where most photo enthusiasts live at the moment. From laptop to 4K monitor we're talking about something that is more midsized and viewed at a fixed distance. I notice that when I post process on my 14" MacBook I have a tendency to want to do things a little differently than I do when I process on my 5K Retina iMac, whose display more than fills my central vision. (Generalized: our vision works best—sees detail—in the central 20°, within a primary binocular field of view of about 60° where we see color and shape; everything else is peripheral vision, which doesn't have the same response and mostly sees just motion.)

Which brings us to this: does the character of a lens truly make a difference?

I'd argue: yes with a caveat. 

Remember, my primary thesis for decades has been this: collect optimal data. I'd argue that modern school lens designs—even assuming lots of pixel correction for vignetting and linear distortion is needed in post—simply produce more optimal data. I can always compromise the data later if I so desire ;~). Old school designs with compromised corner sharpness are just not optimal captures, in my book.

The caveat is this: most of you aren't close to optimal data collection coupled with optimal processing of that data. You're driven more by immediacy, and you're evaluating images via a fixed output (typically a computer screen of some sort, which could be smaller or larger or something in between). You respond to what you're seeing. And you have a built-in bias towards what you're used to seeing. Thus many of you like old-school lens designs, though without really knowing why.


bythom.com: all text and original images © 2023 Thom Hogan
portions Copyright 1999-2022 Thom Hogan. All Rights Reserved.
