May I ask that you start your photography-related shopping by clicking on any of the B&H links on this site? B&H is this site's exclusive advertiser, and starting a purchase from any B&H link on this site helps support this site.
This page of the site contains the latest 10 articles to appear on bythom, followed by links to the archives.
How Do You Take Images That Look 3D?
Simple answer: don’t follow the instructions in the books or on the Internet, and don’t use the latest feature built into most cameras.
Someone has once again written that images from a particular lens have an appealing 3D look to them. No need to call them out; they’re one in a long string of folk who have done so. But are they sure it is the optical design of the lens that produces the 3D effect?
I’ve written this before, but I find myself repeating it a lot these days as I fight through myths: our eye/brain connection has a built-in cue for distance: detail.
If you can see every individual whisker, the lion is quite close to you! If all you see is a vague lump off in the grass, the lion is not close to you. Detail tells us something is close, fuzziness tells us something is far. This is true whether your eyesight is 20/20 or 20/100 or whatever.
The trend, partly because of smartphones and their small image sensors, is towards huge depth of field. I often get asked how to put everything up through infinity in focus with full frame cameras. Others ask me how to use focus stacking to put everything from one foot away through infinity in focus. I’m reluctant to answer either question because you basically take a 3D subject (reality) and flatten it even further into a 2D construct (photo on a wall). You’re erasing 3D cues if you keep trying to put everything in focus.
3D also works in other ways in our eye/brain. We focus, just like lenses, at only one distance. The difference is that we contract the ciliary muscles around the eye’s lens to change that focus distance constantly, and we do it unnoticed. Thus, we think we see near to far in focus, but that doesn’t happen in one moment of time, it happens over time. Our brain tricks us into thinking we’re seeing near/far in focus simultaneously, but we’re not.
While you’ll see different variations on the numbers—partly because our eyes also move up, down, left, and right, partly because everyone’s eyesight has variations—at any given moment your best acuity (detail) is in about a 20° arc at the center of your field of view. That’s because retinal cells are densest in the center. You distinctly recognize shapes out to about a 60° arc. You can see color through perhaps a 120° arc, and motion—essentially peripheral vision—through around 180°. But it doesn’t really matter if these are exact numbers. If you hold your eyes steady on a point, your view is just like older lens designs: excellent in the center, not so good as you move towards the extremes.
What most people describe as “3D look” in photographs is really that: limited depth of field targeting a central subject with high acuity, but less acuity as you move outward. Hmm, just like our eye/brain works.
I’ve long been an advocate of not taking important visual cues out of my photos. I leave infinity out of focus (unless it’s the one thing I want you to concentrate on). I work the outer areas of images in lots of ways, including vignetting, to trigger our usual eye/brain response.
Do You Have to Read the Manual?
Short answer: Yes.
RTFM (read the f-ing manual) is a common response on Web fora to people’s naive questions about something (Aside: there’s nothing wrong with a naive question; it’s the arrogant answers that are the problem).
I believe the earliest I saw the RTFM response was in the mid-1980s (on dial-up services and newsgroups), but it was a common comment in Silicon Valley customer service departments long before then (“why didn't they just read the manual?”, though often much more pithily put).
The proper response really should be RUM (read and understand the manual). It’s in that U (understand) where sometimes you’d need to ask further questions, as the current state of camera manuals is that the less said, the less cost to the camera company (even if all they do is provide you a PDF manual, as they contract out a lot of that work; Nikon’s latest Z8 manual has many entire pages of nothing more than “The X function does X.” Enlightening ;~).
The problem that triggers people to ask the question in the headline has two key elements: (1) people’s time is limited, and reading a 200-900 page manual takes time; plus (2) many of today’s products are highly complex and have enormous numbers of options and interactions.
Before I continue with that thought, let me get one other thing out of the way: some manuals are beyond terrible. My mom’s Panasonic telephone answering machine and phones come with a manual. That manual is (a) in shorthand; (b) seems to be the result of the worst sort of Japlish translation; (c) covers a UI that was easy for engineers to create but impossible for users to understand; and (d) has errors, omissions, and inclusions of things not in the product (!). I’m sure someone at Panasonic thought they had done their job, but that job turned out to be “confusing the customer way more than they were already confused.”
Another (mostly) unnoticed thing is that if you’re buying at the front edge of a product’s availability, you get a manual that wasn’t particularly well proofread, and perhaps incomplete or even wrong about a few things. The camera companies don’t notify you that they’ve produced an updated version of the manual, though they do sometimes tell you that there’s a supplement to the manual. If a product has been out for a year or more and you downloaded an early PDF, it’s probably a wise idea to download the current version and see if it’s been changed any. I’d say that seems to happen 20% of the time when I try it.
All that said, there’s another type of question that comes into play. This past week I’ve fielded a couple of “why is my shutter speed stuck at 1/125, why can’t I go lower?” type questions, and I saw that repeated in an Internet post, too. The answer is that these users have set Pre-release capture to 120 fps. You can’t take 120 frames a second with a 1/60 second shutter speed; you’d only get 60 ;~).
This is someone not fully understanding what it is they’re trying to do in the first place. It’s not that they haven’t read the manual (it’s in the manual, but buried in a footnote), it’s that they aren’t fully understanding what it is they’re trying to do. They’re trying to use an advanced feature—probably because it was marketed so well—when they don’t fully understand or remember a fundamental construct of photography: shutter speed dictates how long the camera is collecting light, and the slower the shutter speed, the fewer images you can take in any given time period.
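The underlying constraint is simple arithmetic: each frame has to expose for at least the shutter duration. A minimal sketch (the `max_fps` helper is hypothetical, and it ignores the sensor readout overhead real cameras add):

```python
def max_fps(shutter_speed_seconds):
    """Upper bound on frames per second for a given shutter speed.

    Each frame must expose for at least the shutter duration, so the
    camera can't cycle faster than 1/shutter_speed frames per second.
    Real cameras also add readout overhead, which is ignored here.
    """
    return 1.0 / shutter_speed_seconds

# A 1/60s shutter tops out at roughly 60 fps, far short of 120 fps;
# 1/125s is fast enough to support a 120 fps capture rate.
print(max_fps(1 / 60))
print(max_fps(1 / 125))
```

Which is why the camera refuses shutter speeds slower than 1/125 once a 120 fps capture mode is engaged.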
I’m not sure there is a good book for introducing photographic concepts in all their forms these days. Which makes the intersection of “complicated new camera” and “complicated photographic concepts” a minefield for customers. You can kick half the mines out of the field by reading the manual that came with your camera, though.
Finally, I should point out that buying your sophisticated new camera from a good local camera store can help, too. Really good stores have training programs, sometimes free, that you can take and which will get you up to speed faster. You’ll still need to read the manual, though ;~).
The Ethics of Post Processing
Apparently everyone wanted to announce things prior to the Memorial Day vacation here in the US (the traditional kickoff to “summer”). Adobe’s big news this week was Generative Fill, a new AI function being beta tested in Photoshop.
I’m not going to speak to whether it works or not, or all the things you might be able to do with it. Instead, we need to return to a subject that’s been out of the discussions for a while now: what is “normal” post processing, and what is “taking processing too far”?
I first encountered this issue full scale back when I was editor of Backpacker magazine in the 1990’s. A number of photographers, including my mentor Galen Rowell, were digitally scanning their slides. Indeed, Backpacker had to digitally scan any film that was submitted by a photographer. All of us faced a simple question: what constitutes an acceptable color and tonal manipulation on the scan, versus what starts to make the photograph “lie” to the viewer?
As I’ve written before, ethics carry over into the field, as well. And it’s from there that I started to develop my basic position about object removal/manipulation. NANPA (North American Nature Photographers’ Association), of which I was a founding member, has given members a basic ethics guideline card since 1994. That’s expanded into multiple pages, but the key field ethics page is here. Couple that with their top-level statement: “The viewing public generally assumes that nature images are straightforward depictions of events and scenes that occurred without human interference.”
But even that is fraught with nuanced issues. For instance, I’m in the middle of a wilderness and am photographing a beautiful scene. I notice a beer can that shouldn’t be there. I actually encountered this situation, and it brought up dueling ethics: Leave No Trace says I should pick up the can and take it to where it belongs (recycling), NANPA’s statement essentially says I should photograph with the can in place, as that would be the straightforward depiction. But the can won’t be in place the minute I leave! ;~) Worse still, what happens if technically I’m not allowed to walk over to where the can is (e.g. area marked as being restored, do not enter)?
Yeah, ethics gets prickly real fast.
That led to my generalized field position: leave things (and photograph them) the way they should be. Which, of course, is subjective. But then I’ve read every word of the Wilderness Act and quite a few other documents about natural preservation policies. I believe my subjective view is “informed” as opposed to “what I feel like.”
Which brings us to the next dilemma: you took that photograph but didn’t notice the beer can: do you remove it in post processing? Here’s my answer: yes, I do, because that’s what I would have done if I had noticed it in the field. It doesn’t belong there, and if I had seen it, it wouldn’t be there.
That’s the “easy” part. The problem just multiplies when we get out of the woods and into the city. One thing that Content Aware Fill does well, for example, is remove power lines. Those lines are everywhere you don’t want them to be when photographing in towns. A beautiful building you want to capture has a line running through it that distracts from what you want to show the viewer.
And there we get into “photographic intent.” Imagine the building is a historical one, imagine that your client wants a “perfect” rendering of the building itself. There’s a power line in the way of you doing that. In a “perfect” world you’d have the client get the power company to come out and remove the line temporarily while you took your photo, then put it back up afterwards. That takes a lot of time, effort, and money to do, and it’s an absurd thing to even consider when you can effectively do the same thing virtually at no cost. If my intent is to show the building without distraction, I’d have no real issue taking out the power line in post processing.
However, we’re on a slippery slope here, and this is where Photoshop’s Generative Fill starts to come into the discussion.
The examples I gave so far are reality-based. The thing about Generative Fill—and the way it’s been demonstrated by most so far—is that it is equally good at fantasy as it is at reality. Maybe especially good. So in my first example I could have said “replace the beer can with a Corvette.” Now that would never have happened (remember, we’re in a designated Wilderness area): thus the result is entirely fantasy.
I have no real problem with fantasy images other than one: I have to know it’s a fantasy image as I view it. Otherwise, I’m being deceived.
Back when we had tons of magazines and newspapers doing (mostly) the right thing, pretty much everyone had a policy about photographs that went beyond reality into fantasy: they had to be labeled something along the lines of “photo montage” or “photo illustration” or something even more specific as to what was done by the creator.
Which brings me to my real point, and it’s a point about AI in general, too: how do we know when to process something as “reality” or “fantasy”? We don’t. Because very few sources you get information from today reliably give you context for what you’re viewing. It’s not Photoshop that’s the problem, it’s the media you consume. It’s not reliable, simple as that.
Since I just mentioned it, perhaps I should offer you some context for what you’re viewing on my sites: photographs with my color and tonal decisions made in post processing, and from which I’ll remove (content aware fill) anything that I would have removed if I had seen it in the field and could have done so. I always process towards the “real scene” without distractions. On the few occasions I might not (sometimes I illustrate a possibility), I’ll try to let you know that I have.
The Vortices of Internet Trolling
One of the things you have to learn about the Internet is that trolling, click-baiting, speculation, and all the other non-fact bits coming from people desperate to be heard tend to come from the extremes.
We had a perfect example of that this past week, when dueling competitor-vested interests started promoting two different notions:
- From the Sony-vested crowd we got: “the Z8 is using year-and-a-half old technology.” Technically, any new product that’s launched has technology that’s more than a year old, because you don’t invent a new technology and then, voila, ship it. Only a small handful of products to date have stacked image sensors with blackout-free viewfinders, and only one of them comes from the Sony ship this complaint was being yelled from.
- From the Canon-vested crowd we got: “the R1 prototypes are out in the wild in Japan.” As it turned out, this was really just an R3 that someone had put a video cage around, which made it look different. The real thing that was observed was the insecurity of those rooting for Canon, as they currently have the fewest pixels of the top pro body crowd, which apparently is making them nervous. That’s despite the fact that most of those posting this new rumor probably don’t even own an R3 in the first place.
In both cases, a vested interest in deriding a perfectly fine camera was present: it makes the poster feel better about their own choices, apparently. There’s nothing particularly wrong with a Z8. It has its pluses and it has its minuses, just like any other digital camera that’s come out in the last three decades.
Of course, what goes around comes around. If the Nikon crowd wanted to retaliate they could simply say “the ZV-1 Mark II doesn’t even match the lens Nikon threw away five years ago” or “Canon has taken the same camera, changed a letter and added 50 to its name.”
I don’t get all the hoopla, or what purpose it serves. It’s not as if everyone is changing camera brands every year, let alone upgrading at every model iteration. What seems more likely is that everyone these days just feels like they have to complain about something. And to do so in the loudest possible voice, with no relationship to reality.
If a Z8 isn’t for you, it isn’t for you. I’m already on record as saying that Nikon’s marketing for the Z8 is not particularly good, and totally out of kilter with many loyal and long-term Nikon customers. Nikon is going to have a difficult time persuading their own D850 users to buy into a Z8, so I fail to see how Sony A7 or Canon R5 users feel at all threatened.
Moreover, none of these complaints seem to have anything to actually do with photography. Here’s a challenge for you: the Z8 is a camera, it takes photographs and records video. In what specific way does it fail to take photographs and record video? I can actually answer that question with a couple of answers, but interestingly, I have yet to hear anyone else mention those things.
In terms of my own photographs and videos, no, the Z8 doesn’t fail at all; it just gives me everything I was using for the past year in a smaller package. The rest of the camera world has turned into a bunch of kvetchers.
Reader Questions Answered
"I notice that many of the photo forums (e.g. the former dpreview) throw [the term Image Quality] around willy nilly, and that term has become quite a ubiquitous one. Camera A has better Image Quality than camera B. It is such a nebulous term and has so many facets to it. Does Image Quality just refer to sensor size or pixel size? Does Image Quality mean lens sharpness? Are all high megapixel count cameras high Image Quality and all smaller formats low Image Quality? Or is Image Quality just a measurement of someone's ability to learn?”
You came closest with the last question. “Image Quality” is typically a term thrown around by someone who doesn’t yet know what they’re talking about. The user tends to be hiding behind some vague construct in order to disguise the fact that they can’t yet talk about specifics.
You’ll note that when I talk about capturing an image, I talk about capturing optimal data. That has a whole bunch of sub-topics, but let’s just address one: dynamic range. I generally capture optimal data with a digital camera if (a) the camera’s capture range fully encompasses the scene’s dynamic range; and (b) I’ve placed the capture in the highest bits without truncation of highlights (e.g. Expose to the Right, or ETTR). Anything else would be sub-optimal and mean that my eventual “Image Quality” could suffer as a result.
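As a concrete illustration of that ETTR rule, here's a minimal sketch in Python. The `ettr_check` helper, its headroom threshold, and the sample values are all hypothetical, and this is nothing like a real raw-processing pipeline:

```python
import numpy as np

def ettr_check(raw_values, white_level, headroom_stops=0.5):
    """Rough ETTR sanity check on raw sensor values (a sketch, not a
    real raw pipeline). Reports whether highlights clipped and how many
    stops of unused headroom remain below the sensor's white level."""
    raw_values = np.asarray(raw_values, dtype=np.float64)
    clipped = bool(np.any(raw_values >= white_level))
    peak = raw_values.max()
    # Each unused stop below white level means the top bits went unused.
    unused_stops = float(np.log2(white_level / peak)) if peak > 0 else float("inf")
    well_exposed = (not clipped) and unused_stops <= headroom_stops
    return {"clipped": clipped, "unused_stops": unused_stops,
            "well_exposed": well_exposed}

# Example: 14-bit raw data (white level 16383) peaking around two stops
# below saturation fails the check -- sub-optimal capture.
print(ettr_check([100, 2000, 4095], white_level=16383))
```

In practice you'd judge this from the camera's histogram or highlight warnings rather than from raw values, but the logic is the same: no clipping, and the brightest data pushed close to the top.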
"Image Quality" is a bit like juggling. Some seem to think that if you can keep three balls in the air, that’s good enough, you’re juggling. Ultimately, however, the best juggler controls more balls than others and does so with rhythm, structure, and showmanship.
Discussions of “Image Quality” are also a bit like wine tasting. No two tasters tend to agree on anything. Even when it appears they have some agreement, they’ll argue over nuanced details. Eventually, they try to anoint a winner when there isn’t actually a competition in the first place. Subjective starts overriding objective, and then it starts to become all about who can speak the loudest with an authoritative voice. Think Ted Baxter.
Why we need winners and losers in “Image Quality” I have no idea. I’ve taken photos that still resonate with people to this day with cameras we’d all call losers (particularly compared to what we can buy today). For every facet that gets debated, there tends to also be an alternative answer. For instance, pixel count. Which is better, 100mp sensor or eight 24mp images stitched together? Or dynamic range. Which is better, 14 stops of dynamic range in one capture, or a three image 2EV bracket on a camera with 12 stops of dynamic range?
Of course, some of you just got out of your chair and screamed “but stitching and bracketing doesn’t work for moving subjects.” Sit down. You’re embarrassing yourself. That’s because you’re walking right into my contention: there’s no single solution that’s right for all possible photographic scenarios.
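For what it's worth, the trade-offs in those two comparisons are easy to put rough numbers on. A sketch with assumed values; the 25% overlap figure and the simple additive model of bracketed dynamic range are illustrative guesses, not measurements:

```python
def stitched_megapixels(frames, mp_per_frame, overlap=0.25):
    """Very rough effective resolution of a stitched panorama: each
    frame beyond the first contributes only its non-overlapping area.
    The overlap fraction is an assumption; real stitches vary widely."""
    return mp_per_frame * (1 + (frames - 1) * (1 - overlap))

def bracketed_dr(stops_per_frame, bracket_step_ev, frames):
    """Rough dynamic range of an exposure bracket: each frame's capture
    range is offset from the next by the bracket step, and the combined
    range grows by one step per additional frame."""
    return stops_per_frame + bracket_step_ev * (frames - 1)

# Eight stitched 24mp frames vs. a single 100mp capture:
print(stitched_megapixels(8, 24))   # ~150mp of unique coverage
# Three-frame 2EV bracket on a 12-stop sensor vs. 14 stops in one shot:
print(bracketed_dr(12, 2, 3))       # 16 stops
```

Both alternatives beat the single-capture number on paper, which is exactly why "which is better" has no single answer: the multi-shot approaches only apply when the subject holds still.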
Just prior to answering this question I was answering another, and the answer was the same: once you understand the variables, you have to figure out your priorities and maximize them individually.
No camera/lens combination with maximum pixel count, maximum dynamic range, maximum MTF recording, and anything else you want to say contributes to “Image Quality" actually exists. If it did, I wouldn’t be able to afford it. And that’s before we get to any subjective evaluations—"the colors have a hint of peach, with an underlying reminder of late day"—and the debate that will bring.
To what end? If a “best Image Quality” thing existed, all the other companies not making it would collapse, the maker of the best would increase prices, and the discussion would immediately turn to “why didn’t I achieve 'best Image Quality' when I used mine?” [sic ;~]
I find it far better to change the question: what’s the Best Image I can create? And what qualities can I imbue it with?
Notice how I separated the words Image and Quality in the last paragraph? The image has to work, first and foremost. Subject, moment, light, composition, and so on. Without those things working right I’m not going to have a very good image. Now, given that, what underlying pixel properties might enhance/detract from the image?
That’s actually an interesting thought process for another article, but I’ll give you a hint. For example, noise tends to work against light. If you have great light but too much noise, the noise is masking or hurting the capture of that light. Still, I couldn’t really care less about noise until I’ve gotten the light right. And then I’m only trying to control the noise enough so that it isn’t damaging my light.
Notice how my answer started talking less and less about cameras than it did the photo itself? Funny thing, that.
“Why do so many companies use Kickstarter for new photography products?”
Let’s talk about online economics for a moment, because most of the companies doing this sell their products primarily online (either directly or indirectly).
If you watch Shark Tank, you’ve probably heard about CAC (customer acquisition cost). This is sort of the online equivalent of “marketing and sales costs.” Another issue that comes up a lot is cashflow as it pertains to production. You need enough cash to produce hardware products in sufficient quantity to get your landed cost per unit down and your gross profit margin up.
Kickstarter basically is a way of bundling all those problems into one convenient pre-packaged system. You offer a discount for pre-orders, which nets you the cash you need to start production, and you create an affiliate program that points to the Kickstarter campaign, which puts a somewhat reasonable and clear number (typically 10-15%) on your customer acquisition costs (Kickstarter siphons off its profits by handling the online credit card system and tacking on a small fee). The affiliate program extends your visible marketing well beyond your own Web site, creating new customers, the ones that cost you the most to acquire.
Kickstarter thus becomes sort of a Product Release for Dummies template, one in which all your risks and costs are well known in advance. You set your goal to the lowest point of pain you can tolerate, and push the publish button. I haven’t looked, but I’m sure there must be business school classes that have popped up to help even the most challenged college students understand how to do this (the Kickstarter for Dummies class for Product Release for Dummies ;~).
I jest a bit. To fully appreciate what you can do with Kickstarter requires some higher level thinking and an understanding of all the give/take elements in product marketing and sales. You don’t want to lose money on the campaign, but you also don’t want it to stall at a level below what really provides you a useful growth boost. You don’t want to take no profit on the campaign, but you also need to make it attractive enough to generate the cash you need to fully produce inventory that will last you past the campaign.
You’ll also note that the Shark Tank investors often want to know what a candidate’s Kickstarter goals were and how they actually performed against that. That’s a crude assessment of whether the product really attracts enough attention to be considered sellable. As Mr. Wonderful likes to say to those that have no meaningful sales: “Stop the madness!”
I don’t publicize Kickstarter campaigns on my sites. I do sometimes personally purchase something on one. That’s generally not because I think it’s a product I need, but rather because I want to encourage innovation in a specific photography product. Sometimes I get something interesting and useful out of that, more often I donate the result to film/photography students nearby. Every now and then we get the stalled product. There’s one I bought into a couple of years ago that still hasn’t shipped, but I kind of hope it still does, as it was doing ambitious stuff I don’t see anyone else managing to do.
“Budget” Macs
It seems we’ve been having a number of sales recently of state-of-the-art Macs. This site’s exclusive advertiser, for instance, has two particular models on sale at the moment you might want to think about. Note that both these deals are today (Monday) only.
- At the true budget end, we have the Mac Mini. My minimal Mini configuration is currently US$200 off at US$1100 [advertiser link]. That gives you an M2 processor with 10-CPU/16-GPU/16-Neural cores, 16GB of RAM, and a 512GB SSD. 16GB of RAM is technically okay for single-image post processing work with only LR/PS open, but you need to keep plenty of empty space on the SSD for virtual memory spillage. Thus, you add a Satechi Mac Mini hub for US$100 and populate it with a 2TB M.2 SSD (another US$100). So, for about US$1300 you have a very fast machine with 16GB RAM and effectively 2.5TB SSD. Again, the 16/512 M2 is the lowest end Mac desktop I’d recommend these days, but that’s a lot of desktop for the current price. If you need more, pay attention to the on-again, off-again discounts B&H and Amazon seem to be putting on recent Mini models.
- At the high price end, we have the MacBook Pro. B&H currently has a whopping US$1700 discount on the M1 16” MacBook Pro [advertiser link] with 64GB RAM and 4TB SSD. I have essentially the same machine in 14” form, and it is ridiculous how much it can do, and how fast. It’s better than the maxed out iMac Retina I’ve been using. This is a great portable solution that can also be used as a desktop with an external monitor. I don’t know of anything else at this price that is so good. I’m sure an M2 version would be even better, but you’re not going to believe how good the M1 version is, and you’re not going to get an M2 version close to this price.
While Apple has long been the “high price” solution, what we’ve been seeing in the Apple Silicon era is that discounts are an ongoing thing that briefly make certain models the best value you can buy from anyone. You just need the patience to wait for the model you really want to hit someone’s discount list.
May You Live in Confusing Times
The shutdown of dpreview.com lives on. And on.
Given the most recent post by dpreview’s General Manager, one would have to guess that negotiations are going on behind the scenes to in some way keep the site going. Meanwhile, the ads are back (they had been taken off just prior to the “closure”), articles are still being posted, and fora still are seeing posts.
However, the thing I feared has already started happening: bifurcation of the audience. We have two primary groups trying to build a dpreview community from scratch (dprevived and dprforum), other sites (e.g. photographylife) have suddenly sprouted forums and encouraged folk to migrate, and still others (including some former dpreview moderators) are attempting to point to a particular forum that already existed. A few folk migrated to existing forums, such as fredmiranda.
I’ll give you one example of how the dissolution is problematic: the dpreview Photographic Science and Technology forum, which is where the nerds hung out, went five days without a post. Meanwhile, the Photographic Science and Technology forum on dprevived seems to be where most of those folk went, but it isn’t very active, either.
The problem as I see it is that dpreview was the “center of things.” Much like a city center, you didn’t always go there, but you knew it was there if you needed something. Now, first you need to look up on a Google map where something might be, then you have to drive in a random direction to get to it.
With AI bots now starting to show up on your screens, often in disguise, one problem is going to be “what data were they fed?” I’ve already received an offer to take all the data on my site and feed it to a dedicated bot I could install on my site. Sure, but I’m not always right, and how is that bot going to distinguish between rumors I might have mentioned, commentary I made about what I think might/should happen, my occasional errors, and actual data from observation and testing?
Having a central, (mostly) trusted repository of useful information that also allows you to type a question and get an answer is a useful thing. That’s essentially what dpreview was providing the photographic community. Dpreview has already hemorrhaged Chris and Jordan, two of their top content creators. It’s unclear who else is still on board as writers and editors. But all the fora proliferation has also depleted some of the activity overall in the community that dpreview spawned, too.
We’re at Day 38 post closure by my count, and dpreview’s status hasn’t really resolved. The longer it takes to resolve, the less likely that it will hold onto its position as a hub for photographic enthusiasts. The community has been leaking away.
More importantly, the quality of the reviews on dpreview had already deteriorated from where it used to be, and needs to be resurrected. So if the site does get a reprieve, it’s still got work to do.
Imaging-Resource Followup
I wrote earlier that I had press inquiries into the various parties after the closure of Imaging-Resource. Founder David Etchells has written his story of the demise on PetaPixel.
However, today I (surprisingly) received a response from BeBop, the company that closed down Imaging-Resource: "The property will be going on sale later today and listed on our websites.” As I post this, the latter doesn’t seem to have happened.
In looking closer at BeBop, it’s clear that their stated business description (“performing arts,” which they say includes Jazz, Dance, and Theatre) doesn’t align with all of the Madavor properties they took over. They claim to be creating a consumer hub for the arts that is done via: (1) a television channel, (2) presentation of arts events, and (3) a marketplace for arts products and services. Their business model is advertising and ticket sales. Not much of this aligns with what Imaging-Resource was doing, thus the closure (and apparently, now a sale of the property).
As you might guess from the concise emailed response I got, I’ve asked for further clarification.
Update: all the photography assets (Outdoor Photographer, DigitalPhotoPro, DigitalPhoto, and Imaging-Resource) are being offered by BeBop for US$500,000, including all archives. That seems overpriced to me, as it is an “as-is” sale, doesn’t guarantee any contributors will carry on, and there are tangible not-yet-paid-for content costs involved.
While the archival content has some value, the primary value would be based on site traffic. IR was getting about the same monthly traffic as the bythom site, but with less engagement (fewer pages per visit, far shorter visit durations, etc.). Someone who knows site valuation would have a difficult time justifying the price, methinks. Either that, or I should sell my sites ;~).
Updated update: the Imaging-Resource Web site is back to being available, though there are no new articles or information on it after April 18th. I suspect that BeBop discovered that Web site valuation goes way down when it isn’t actually active.
Camera Life Cycles
A recent post on a forum can help you understand how to think about cameras a bit better:
“The competitive life-cycle of a modern hybrid camera is two years, tops. This is different than the camera’s useable life.”
We can argue about the “two years” bit—I believe that was the competitive cycle in DSLRs, but it’s been stretching longer as the market size has collapsed—but that’s not really the important thing to argue about.
- Competitive life-cycle: how long a product will remain competitive against other new products in the market.
- Usefulness cycle: how long a product will remain useful to the person who bought it.
The difference is a marketing game.
For example, your automobile doesn’t wear out in two years, but by significantly changing and adding things to the models that are currently available, the auto makers hope to get you to buy a new one with some regularity.
For a while, leases made changing cars frequently quite popular, as lease payments appeared to lower the average monthly cost of owning a car (probably not; “appeared to” is a marketing construct ;~) while forcing you to make a new decision every couple of years.
Fundamentally, though, any 21st century new car probably has at least a 10-year useful life before it starts to become more costly to maintain (and even then, that cost should be far less than purchasing something new). Auto makers can’t function profitably if they only sell you a car every 10 to 15 years.
According to Kelley Blue Book, the average US ownership of a new vehicle in 2002 was a mere 38 months (~3 years), but grew to 71.4 months (~6 years) by 2012. By 2022 the average ownership had grown to 100.8 months (~8.5 years). However, note that about three-quarters of cars bought using loans in 2022 were financed with loans of 60 months or more, with 72 months being the most common. If the car depreciated too quickly, you were underwater on the loan for most of its length, which is one reason people hold onto cars longer.
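A back-of-the-envelope sketch of that “underwater” dynamic (the price, APR, loan term, and depreciation rate below are all made-up assumptions, used only to show the shape of the curves, not market data):

```python
# Hypothetical illustration: why a fast-depreciating car can leave a
# 72-month loan "underwater" (loan balance > car value) for much of its term.
# All figures are assumptions for illustration, not real market numbers.

def loan_balance(principal, annual_rate, months_total, months_elapsed):
    """Remaining balance of a standard fixed-payment amortized loan."""
    r = annual_rate / 12
    payment = principal * r / (1 - (1 + r) ** -months_total)
    return (principal * (1 + r) ** months_elapsed
            - payment * ((1 + r) ** months_elapsed - 1) / r)

def car_value(price, annual_depreciation, months_elapsed):
    """Simple exponential depreciation model."""
    return price * (1 - annual_depreciation) ** (months_elapsed / 12)

price, rate, term = 40_000, 0.07, 72  # assumed price, APR, loan length
for m in range(0, term + 1, 12):
    balance = loan_balance(price, rate, term, m)
    value = car_value(price, 0.20, m)  # assume 20% depreciation per year
    status = "underwater" if balance > value else "above water"
    print(f"month {m:2d}: owe ${balance:,.0f}, worth ${value:,.0f} ({status})")
```

With these assumed numbers the owner owes more than the car is worth for roughly the first three and a half years of a six-year loan, which is exactly the position that discourages trading in early.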
If I were an automaker, I’d want some sort of “reset” to happen, so that I could shorten the period before all those folk had to buy a new car. I’d want to introduce something new and different. Hmm, EV anyone?
All of this is completely relevant to the camera market. Specifically: (1) the camera makers are cycling new product faster than the useful life of the product; and (2) mirrorless was essentially that “new thing” that tried to reset the market into buying again. Heck, we even have retailers, particularly online, peddling “pay later” plans. Anything to get you to buy today, just like the auto market.
Much of the online discussion about cameras is about buying, not using. A lot of folk benefit from you buying (as opposed to the number that benefit from you using): camera maker, distributor, camera store, sales people (spiffs), online sources with ads from same, and many more.
Back in the Days of Film, camera stores could benefit from your using your camera: you had to buy film and get it processed. Today, not so much, though the smartest dealers all have training, social interaction, and printing available.
As a long-time participant in the media, one of the things I’ve observed over many decades is that media that gets too caught up in the buying of something goes away. That’s because the buying either becomes commodity-like (which doesn’t require media), or the particular product fad dies (which also doesn’t require media, just morticians ;~). Media that helps with the using of something does far better, though it may have to be done on a smaller scale. Moreover, you can’t get too locked into the type of media you use. Print has pretty much gone away, now replaced by online, for example. That means you have to understand new media really well.
Aside: if I weren’t 71 years old and mostly doing this for fun, I’d have long ago transitioned to other media.
You might have noticed that I’ve brought back Teaching Points (first a hint via the front page photo, then an article). I felt I had slipped a bit on using versus buying, so I’ll try to rebalance things during the coming year. It’s not that serious users don’t keep buying products, but rather that they buy because they have use cases that aren’t being completely fulfilled by an old product. I’ll see if I can help fill the gap between those two things.
Busy Season
If it seems like I’ve been a bit busy since returning from Africa, you haven’t seen the half of it.
Thanks to Nikon, I already have a Z8 that I need to do more photography with, plus I’m already at work on getting my Z8 book done (and still need to finish my D6 one), so there hasn’t been a lot of time for other things. That said, I’ve been able to post a few new articles and keep the data pages up to date on all the sites. Bear with me as I continue to try to get new information and commentary onto the sites.
News/Views
- June 2023
- May 2023
- April 2023
- March 2023
- February 2023
- January 2023
- December 2022
- November 2022
- October 2022
- September 2022
- August 2022
- July 2022
- June 2022
- May 2022
- March 2022
- February 2022
- January 2022
- December 2021
- November 2021
- October 2021
- September 2021
- August 2021
- July 2021
- June 2021
- May 2021
- April 2021
- March 2021
- February 2021
- January 2021
- December 2020
Looking for older News and Opinion stories? Click here.