Over a year ago, when generative computer programs, what programmers call AI¹, became public, it felt interesting to me. Now that we’re over a year into this new type of image generation, we’ve arrived at a point where it is baked into the cameras on phones, and the generative output from those phones looks as real as the reality portrayed in photos.
If you’ve followed the reporting on the camera and photo programs in the new Google Pixel 9 phones, you’ll see that we’ve taken another step into this new world of computer-generated imagery. The Verge covered this in detail. It isn’t revolutionary in what it can do, but it will now be available on a phone to people who likely never bothered to sign up for Midjourney or other online programs that were already generating imagery. The generated imagery looks as real as a photo, especially when posted on social media.
If you aren’t familiar with generative imagery, this is how it works. The user types a prompt into the computer program, and the program spits out a generated image that matches the prompt. For example, they might type, “A frog sitting on a lily pad in the Boundary Waters, photo realistic, 100mm lens.” The computer then kicks out a photo-realistic image of a frog sitting on a lily pad. It can look exactly like a photo of the real thing. Sometimes it is impossible to tell the difference.
What I think this means is that we’re about to see more phones gain this function, more people using it, and more of this computer-generated imagery showing up everywhere. Any guardrails that existed because of limited adoption are about to be removed, without any guaranteed tools that can correctly identify all generative output. The systems that have been put in place are so unreliable at detecting generative imagery that they reinforce the believability of photo-realistic generated imagery by failing to identify all of it.
AI is the death of the photographic illusion.
The photographic illusion is what the viewer experiences when they look at a photo. They believe that what they see is real, regardless of all the reasons why photography has never been able to capture reality and can only be based on reality. Reality is the paint, if you want a metaphor. The key is that photography made people believe what they were seeing was real. That, and eventually the inherent repeatability of its final form, the print, is what made photography so unique and different from the visual arts that preceded it.
To me, the photographic illusion is what makes photography so alluring. You believe what you see when you see a photo. With AI, the end of the age of the photographic illusion draws near, and that illusion is about to make a big shift.
Photography has always depended upon a belief in a photographic illusion. In the past, that illusion was able to exist in isolation from the photographer—one didn't need to know who the photographer was to believe a photo was reality. With AI, people will stop believing in an isolated photographic illusion, and the burden of illusion will fall upon the reputation of the photographer or publication.
In many ways, I’ve already stopped believing in a photographic illusion. I’ve seen too many fake images of locations I know like the back of my hand, showing scenes I know don’t exist. Or I’ve seen something that looks familiar, but then I notice a small detail that’s wrong. These small details are going to get fixed in the next generation of tools, like those on the Pixel 9. But I’m already questioning photos that I see, left wondering if they are really photos when often they are.
So, is photography dead?
Not yet, and I have hope.
I have hope because I think that there’s going to be a reaction against this type of AI imagery after it completely pollutes our daily lives.
If you do photography, you are going to have to take a stand and decide whether you are with generative output, aka fake photo-realistic imagery, or with the foundations of photography. That’s true even though photography has been and will be an illusion, and in many ways can already lie in the same way as generative output or any other visual art. That said, photography isn’t going to survive without you.
[Spoilers for 1984 ahead. Seriously, though, have you not read it?]
There’s a cautionary tale in the form of Orwell’s Nineteen Eighty-Four. In the end, the main character starts to believe as truth the unreality that has been created around him. If you accept generative output into your work, you will join the mind warp that befalls our poor protagonist, and you will justify its use. To many of us, your work will lose the ability to be trusted as a photographic illusion, because you will have, to us, succumbed to the unreality that has been woven around you as truth.
Hopefully, I’m strong enough to resist it as well, but as a reader at the beginning of Nineteen Eighty-Four, I thought our dear protagonist would succeed; ultimately, he fails. We will all be susceptible to the coming gaslighting of computer-generated imagery being forced upon us as real. They, whoever that is and whoever among us joins the “they,” will tell us that this is okay. That protections exist to prevent confusion. “Just trust us,” they will say.
But the trust in the illusion will be gone.
At some point, we’re going to have to draw a line in the sand between tools that are okay for us to use and those that aren’t. Adobe and other companies aren’t going to take a stand on this. They are going to give us the tools we can use to destroy the photographic illusion. We can flee their computer programs for those made by companies that do take a stand, or we can ignore those tools and ask for the ability to turn them off.
I can’t help but think back to my high school days when I learned photography in the black and white darkroom.
When I look back at retouching prints made in the darkroom, I think of the art of spotting prints. After printing, a print could end up with imperfections that were fixed with inks, dyes, or paints. This was done outside the photographic process of shooting, developing film, and then printing, but it was acceptable in the same way that fixing dust spots is in digital photography. This makes me think that there are lines in photography.
I think there are many possible lines, but they may divide easily. On one side are tools that help us but don’t generate new content that didn’t already exist in our photos. On the other side are tools that generate imagery that didn’t exist in the original photo.
An example of the first side is masking. In the black and white darkroom, I’d spend hours carefully cutting shapes out of a sheet of Rubylith, then lay it over the paper and project a negative down onto it. The red parts of the Rubylith blocked the image and the clear parts let it through. This is similar to the way Adobe Photoshop or Lightroom can now automatically generate a better mask than I could ever cut from Rubylith, and do it in seconds. This kind of tool gives us more creative control over our photos, and these tools are much faster and easier to use than in the past.
An example of the other side is deciding that you need a frog on a lily pad. Instead of putting in the time in the field to take that photo, you ask your Pixel 9 to generate a frog on a certain lily pad. It consults the data that programmers programmed into it and spits out a frog based on all the photos that were used to create the program’s database. What is output onto that lily pad was never a photo. This generative output is more akin to painting than photography. It will be believable, but it isn’t a photo. It’s a pollution of the idea of what a photo is.
There’s another category that’s similar to the spotting of prints of the past. In digital photography, this could be as simple as removing a dust spot caused by the sensor or as complex as removing a registration sticker from a canoe. There are two ways to do this: using pixels that already exist in the photo, or generating new pixels based on a database contained in a program that programmers like to call AI.
Somewhere in there is a line. I don’t know exactly where it is, but on one side there’s generated output and on the other there’s photographic infill. I think we will eventually sort out how much infill is acceptable, and I think we will sort out that, for photography, generated output amounts to fakery. The public, I think, will agree with the latter.
I’m hopeful that if I take a stand for photography by declaring where I personally stand, and if you do the same, we will be able to save the photographic illusion, at least for those of us who stake our reputations on it. My prediction is that within a decade, the photographic illusion that was isolated from the photographer or publication will be dead. The future is our reputation, and our reputation depends on what we do at this moment in photographic history.
I do wonder where you stand.
Until next time
I hope you enjoyed this issue. I’ll see you again in two weeks.
For some reason, it seems like my 2025 photo workshops are filling slower than usual. I have about 20 total spaces available across my various workshops for next year. I hope you take the time to check out what’s available. Sign up early and often.
Footnotes
¹ Programmers like to call generative computer programs AI, but these programs aren’t intelligent at all. They are regurgitative computer programs doing what the programmers programmed them to do, and they aren’t capable of thought or awareness in the way that even the simplest animals are. Don’t let these companies make you believe that the programs are anything more than computer programs.