"...our market share is steadily recovering."
"Young people who are not satisfied with the functions of smartphones have purchased high-end models such as the Z9 and Z8." —Nikon statement in their recent Q&A after first quarter financials release
Okay, these were said in person (and in a phone conference), but then posted on the Internet. I have problems with both statements, and neither was challenged by the Japanese press.
First, Nikon's market share in terms of units had an uptick in the quarter (to 13.9%, up from the previous fiscal year's 12.1%; their projection for the coming year is 12.9%). That's likely mostly due to the Z8. Here's the problem with claiming that market share is on the mend: the Z8 is a long-term purchase. Those folks won't be shopping for another body soon. When a single, high-end product gives you a sales bump, that bump tends to be temporary unless you can follow it up with other successful models.
I'd say that the Z9 and Z8 stabilized Nikon's market share in the low teens. Up to the point where the Z9 appeared, their market share was consistently trending down. Nikon has a long way to go before they can say Canon or Sony are once again in their sights.
On the other hand, I'm surprised that Nikon didn't talk about the really good news: during the most recent quarter, Nikon's revenue from sales had a sharp increase (again, due to the Z8 and to some degree the Z9), to the point where about 20% of all dollars spent on mirrorless cameras and lenses went to Nikon. I've been hearing about this trend from dealers for a while now, pretty much since the Z9 appeared: camera dealers are making more money from Nikon sales than in previous years, and clearly so.
The second statement, to me, is anecdotal hyperbole. I'm sure there is a smartphone user or two that bought a Z8 or Z9. But to imply that this is a trend or anything significant is a complete overstatement and a random brag.
“Light Meter is Made for Entry-Level Film Photographers.” —Petapixel headline
The article basically describes a hot-shoe mounted, simplistic light meter. The single sensor takes in about a 30° area (which might include the top of the lens or hood on larger lenses ;~) and averages it.
The “strange” part of the headline is “entry-level.” Is there such a thing anymore as an entry-level film user? Moreover, probably the first thing that entry-level users wanted automated last century was exposure, and this little gadget doesn’t exactly solve that problem.
I’d say that this little meter is more likely to be used as a replacement for older in-camera metering systems for which you can no longer get batteries (or which are busted). Such as my mom’s old Nikomat FTN.
“Fuji[film camera]s are far more than cute or niche, they are effin sexy in every way.” —Disqus comment
If this is how you choose cameras (cute, niche, sexy), I’d argue you’re not a photographer. No doubt the physical attributes of one camera (car, device, whatever) will have more appeal to some than others, but if you’re just using external visual cues in your buying decisions, I hope the camera you buy ends up only as a prop on a shelf in your interior design redo. Cute doesn’t take photos. Sexy doesn’t produce images.
"In 2000, I went to Greece with 22 rolls of 36 exposures, came home and had them developed and put onto CDs at the cost of US$440. Today with digital, I could have bought a lens with what I spent on film and developing on that trip.” —dpreview post
The strange thing here is that someone has correctly identified one key attribute of digital cameras. Among all the rhetoric that gets posted on the Internet in discussion fora, it’s always amazing to me how many of those posts simply don’t see the reality around them. This poster does.
Many hobbyists, once convinced that the image quality was still there, dumped their film cameras for digital ones. They might not have explicitly come to the same conclusion, but implicitly, they did.
Which brings me to an observation. I’ve long written about sampling/leaking/switching with digital cameras. One of the reasons why I recognized this happening was that it wasn’t anything new! In the film world, I was constantly dealing with folks who just swore that the latest film stock from Fujifilm or Kodak (or Agfa and others) was “better.” Indeed, exactly the same things that get argued about today (dynamic range, resolution, color) were argued about by film users. Pros would switch films because of some perceived advantage. For instance, while Kodachrome was regarded as a “standard” by many, it was actually quite color inaccurate, and had other issues. Along came Velvia, with its somewhat more accurate color coupled with extra saturation and contrast, and pros discovered that all the photo editors were picking the “snazzier” looking Velvia slides on the lightbox. (As an aside, Rodale, the publisher of more than a dozen magazines, one of which I ran, did a pretty thorough test of all the slide films available in the 1990s and chose Provia as the best solution with the fewest problems; that was what was stocked in our photographers' cabinets.)
But back to the point: one thing that I don’t think was accurately captured in analyses of the first decade of DSLRs was how much not having to constantly pay for film and processing led to investment in more photo equipment by the hobbyist/enthusiast group. A few camera companies did sometimes mention this in marketing, but I’m surprised that none seems ever to have made it a primary theme in their advertising (perhaps I missed something?).
Taking things to their endpoint: photography enthusiasts have never been better off. The equipment has gotten far better without any real price change after inflation, the per-image costs have dropped to next to nothing (so experimentation doesn’t have a cost other than time), and there’s more variety in choice of gear now than when every maker was making the same thing (we’re especially seeing some interesting new lens specs these days that we didn’t use to see).
"Referral [affiliate] commissions don't get added to the price we pay for an item." —Discus comment
While technically true, since the listed price doesn't change on something with a referral link, pragmatically this is not true. The current practice in Internet-based businesses is to assign an expected CAC (customer acquisition cost) when figuring out what to charge for a product. Commissions are a large part of CAC for many products. In essence, this is mostly just a replacement for the old sales and marketing expense lines that reduce gross profit margin.
One thing that surprises me is how aggressive many of the companies using affiliate programs have gotten with CAC. They double dip in reducing their revenue: first they offer upgrade or new-version discounts, then they add affiliate commissions that take another 5-15% to effectively advertise those discounts. This tells me that these companies don't have reliable installed bases (or aren't profitable). Moreover, many of these companies are just iterating the same thing over and over.
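To make that double dip concrete, here's a minimal sketch with made-up numbers; the 30% discount and 10% commission below are hypothetical, with the commission simply sitting in the 5-15% range mentioned above:

```python
# Hypothetical illustration of the discount-plus-commission double dip.
list_price = 100.00          # nominal price of the product
upgrade_discount = 0.30      # assumed 30% upgrade/new-version discount
affiliate_commission = 0.10  # assumed 10% commission on the discounted sale

sale_price = list_price * (1 - upgrade_discount)  # what the customer pays
commission = sale_price * affiliate_commission    # acquisition cost per sale
net_revenue = sale_price - commission             # what the company keeps

print(f"Customer pays:      ${sale_price:.2f}")
print(f"Affiliate receives: ${commission:.2f}")
print(f"Company keeps:      ${net_revenue:.2f} ({net_revenue / list_price:.0%} of list price)")
```

With those assumed numbers, each sale nets the company only 63% of list price, which is why repeating the pattern on every new version eats up whatever ground the release might have gained.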
Note the word "acquisition" in CAC. Technically, once you acquire a customer you should be able to market successfully directly to them in the future. But that doesn't seem to be the case for a lot of these companies. If you offer a discount and commission to acquire a customer once, but then do the same thing with your next update or offering, you probably haven't actually acquired the customer, have you? I've been privy to the financials of a few of these companies, and my usual first reaction is "why are your customer acquisition costs so high?"
But getting back to the thing written on the Internet: commissions are part of the cost of sales, which is something you, the customer, are paying. Whether that's a hidden cost or not doesn't really matter.
Let's talk software for a moment, since we don't really have to deal with manufacturing and distribution costs. If every time you update the software you have another huge CAC hitting your bottom line, you gain no ground. This is actually why SaaS (Software as a Service, or subscriptions) makes a lot of sense, to both you and the company producing the software. The company does actually acquire a customer (subscription), and that ongoing income isn't hit by additional CAC.
Where I object to Adobe's Creative Cloud subscription practices is in only one specific area: should you stop subscribing at some point, the software no longer really operates. True, some things will still work, but not enough to continue effectively using it. Creative Cloud really ought to have a perpetual-use aspect to it (e.g. "subscribe for a minimum of two years and if you cancel your subscription the software will continue to work but get no updates. To rejoin later, you'll have to pay an update fee prior to your new subscription starting.")
"6K isn't really a thing, there are no 6K displays or TVs." —Discus comment
In the sense that there's no standard 6K playback outlet, this is true. But as for it not being a thing, it most certainly is. Netflix and a number of other organizations these days are essentially demanding >4K recording. One reason is cropping flexibility, but the other is the Disney Thing: you want to record in the highest possible resolution if you think your footage has more than a quick, ephemeral life. For example, when 8K outlets do appear, you don't want to be upscaling from old FullHD video unless you have to; even 4K would be a 4x bump. The bar for the resolution you record video at keeps moving higher. Some RFQs (requests for quotes) I see these days ask for 8K, if possible.
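For a sense of the arithmetic, here's a quick pixel-count comparison; the 6K dimensions shown are just one common framing (exact 6K sizes vary by camera), so treat the numbers as illustrative:

```python
# Pixel counts for common capture/delivery resolutions, relative to FullHD.
resolutions = {
    "FullHD (1080p)": (1920, 1080),
    "UHD 4K": (3840, 2160),
    "6K (one common framing)": (6144, 3456),  # exact 6K framings differ by camera
    "UHD 8K": (7680, 4320),
}

base = 1920 * 1080
for name, (width, height) in resolutions.items():
    pixels = width * height
    print(f"{name:>24}: {pixels / 1e6:5.1f} MP ({pixels / base:4.1f}x FullHD)")
```

That's the 4x bump from FullHD to 4K mentioned above, and a 16x jump from FullHD to 8K, which is why capturing at 6K or 8K leaves so much more headroom for cropping and for future delivery formats.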
Now, AI may change that up some, making upscaling a little less problematic. Topaz Video AI, for example, already provides some ability to do that. However, I'm a firm believer in optimal data, optimal processing: capture at the best possible capability, and you have fewer issues in processing down the road.
Thus, 6K is most certainly a thing. I've encountered plenty of productions using it for the reasons I just stated.
"Same wafer, just cut smaller" —Discus comment about the Sony A6700 image sensor (compared to A7R Mark V).
Uh, no. That's not the way silicon works, and when people post information like this, pretending to know how something is created, it tends to live on through Internet amplification in ways that some eventually start to believe are the truth.
There's no doubt that Sony has a photosite design that, when aggregated correctly, can be deployed as 26mp APS-C, 61mp full frame, or 100mp medium format. But you don't decide which size the final chip will be after the wafer has been created. You have to make that decision before the wafer is built up. That's because you need electronics that communicate what each photosite has done to the outside world (e.g. an analog-to-digital converter), and those must live on the same chip. Sony is using column-based data transfer, for example, so some electronics have to live at the edge of the photosite area in order to be linked to pins on the resulting chip.
What was posted by that person is akin to saying that V6 and V8 engines differ only in where the automaker cuts a giant, long metal engine block. Heck, we could have V4, V10, V12, and other variants just by changing where the cut is made! Let's see that statement live on around the Internet ;~).
FWIW, the light-to-charge aspect of photosites hasn't changed much over the last decade. We've seen movements of position (FSI versus BSI), walls/moats around the photosite, and transmission of data from the photosite all change, but the primary function has been settled for some time, which is why we haven't seen a significant difference in dynamic range from the photosite itself for some time.