Intelligence Versus Artificial Intelligence

Apparently I'm doing things all wrong, as virtually every software product these days is touting artificial intelligence where "every pixel is independently analyzed and properly exposed." (I didn't make that up; it's a real quote. I don't have that much imagination in my old, curmudgeonly form.)

"Forget about time-consuming manual" anything, or so all the AI (artificial intelligence) marketing seems to be telling us. Everyone wanted an automatic transmission, and these days a self-driving car, too, it seems. Or at least the marketing groups at companies would have us believe that. Next thing you know, we're going to have houses created automatically for us. Oh, wait: 3D-printed homes are already a reality.

As you've probably noticed over the years of reading my Web sites, there's a constant battle going on between "good enough" and "best possible." AI is just another variation playing off of that.

It's probably worth examining how we know something is "best possible." 

Because we say so! Of course your best possible will differ from my best possible, as we don't have the same values, nor do we have the same experience in terms of seeing all the possibilities, let alone being able to test for and judge them accurately. Still, the relevant thing about "best possible" is that it is often subjective, shaped by how many years of experience you have with the topic, whatever it is. AI doesn't care about best possible. You'll be served what you're served, and maybe you'll get a slider or two to make minor adjustments, or a button to cause a rethink.

For some reason, the media keeps categorizing AI as having four stages or types. Unfortunately, no one can agree on the stages. For instance: toy, servant, caregiver, and parent. Or reactive machines, limited memory, theory of mind, self-awareness. Or foundational, approaching, aspirational, mature. And even where you find agreement on what the stages are, you find disagreement on what they mean.

It's quite possible that an automated process can create something better than you can, but you would still have to verify that it's better ;~). How many times have you bought something on marketing promises only to eventually discover that there was a better choice?

Marketing of AI software preys on your underlying desire to have things made easier and faster. You don't have to make a choice; the software will do it for you, and instantly! You don't have to tweak anything! At most, you get a single slider. Coupled with that approach, of course, is that you'll often note that the results are "good enough" and just click Accept.

Most of what is being labeled as AI in our software is really machine learning (ML). Curiously, ML is often also described as having four stages: collect data, analyze data, report pattern(s), predict data/pattern(s). ML is trained, and how good it is depends a lot on how good the training is. I note, for example, that the earliest versions of Topaz Labs' Sharpen AI often produced obvious and hideous artifacts. Somewhere along the line, the program got more (and better) training, and that type of artifact is rarer these days. It still isn't as "clean" as my favorite-and-no-longer-available deconvolution sharpener, but the Topaz software is often better—when monitored and tweaked by the user—than you're going to get from the tools in Photoshop.
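Those four stages map onto even the most trivial "learner." Here's a minimal sketch in plain Python, using a toy least-squares line fit with made-up numbers (nothing here comes from any real product; it just shows collect, analyze, report, predict in miniature):

```python
# Toy illustration of the four ML stages: collect data, analyze data,
# report the pattern found, predict from that pattern.
# The data and the fit are invented for illustration only.

def fit_line(xs, ys):
    """Analyze: find the slope/intercept minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Stage 1 -- collect: a tiny "training set" (made up for this sketch).
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

# Stages 2 and 3 -- analyze the data and report the pattern it contains.
slope, intercept = fit_line(xs, ys)
print(f"pattern: y = {slope:.2f}x + {intercept:.2f}")

# Stage 4 -- predict: apply the learned pattern to input it hasn't seen.
print(f"prediction for x=6: {slope * 6 + intercept:.2f}")
```

The point of the sketch is the shape of the process, not the math: real ML systems do the same collect/analyze/predict loop, just with vastly more data and parameters, which is exactly why the quality of the training data dominates the quality of the result.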

I generally don't scoff at automation. Any reasonable automation makes my life easier. But I'm a "trust but verify" type of person, and I don't use AI/ML software without constantly monitoring what it's doing, and often getting directly involved and nudging it this way and that to get "better" results. I suggest you take that approach, too.



bythom.com: all text and original images © 2023 Thom Hogan
portions Copyright 1999-2022 Thom Hogan. All Rights Reserved
