Nowadays it’s easier to just take lots of shots, fiddle with the settings, do bracketing, and such. But I maintain something important was lost by the move to automatic cameras.
I'm being a little hyperbolic, but it really seems like, for a not-insignificant portion of the population, that will be true.
I understand I am relying more on luck and not being as deliberate with composition when I do that, and I have high respect for people who are able to get great wildlife photos with film. But for amateurs like me, it's far easier to get better pictures simply by taking more pictures.
“It was night and day. Six minutes instead of six years tells the story,” McFadyen says. “Instead of 12 frames per second, I can now shoot at 30 frames per second, so when a bird dives at 30 miles per hour, it makes it so much more likely you’ll capture it at the right moment.”
McFadyen says that the focusing system is also “incredibly fast” on mirrorless cameras. “It can lock on the kingfisher’s tiny eye at these super-fast speeds,” he adds.
https://petapixel.com/2025/11/27/photographer-recreates-king...
This is a bit of a marketing puff piece, but the core insights are correct: the kind of shots the photographer is talking about here were insanely hard to pull off on film and still very tricky to achieve with digital bodies in the 2010s, but modern tech makes them almost trivial.
Otherwise your meter will pick up on color differences in a given framing and meter slightly differently. One shot will be 1/30th of a second, another 1/25th, and thanks to the freedom of aperture priority you might get weird in-between values like 1/32nd of a second that don't exist discretely on a dial. How about ISO? Same thing: one shot at ISO 200, another at 250, this other one at 275. Oh, this one went up to ISO 800 and the meter cut the shutter speed. Aperture too: this one f2, this one f4, this other one f2.5. This wasn't such a big deal even in the full-auto film era, since 35mm film has such latitude that you can't really tell a couple stops over- or underexposed.
All these shots, ever so slightly different from one another even if the lighting of the scene didn't really change.
Why does this matter? Batch processing. If I shot them all at the same ISO, same shutter speed, same aperture, and I know the lighting didn't really change over that series of shots, I can just edit one image if needed and carry the settings over to batch process the entire set of shots.
If they were all slightly different, that strategy would not work so well. Shots would have to be edited individually, or, "gasp", handed to the full-auto button, which might deviate from what I had in mind. Plus there are qualitative trade-offs too when one balances exposure via shutter speed, vs. aperture, vs. ISO.
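To make that concrete, here's a minimal sketch of the batch step in Python, assuming the rawpy (libraw) and imageio packages; the folder name, file extension, and the brightness tweak are illustrative, not anyone's actual workflow:

    # Batch-develop a series shot at identical, locked manual settings.
    from pathlib import Path

    import imageio.v3 as iio
    import rawpy

    # Parameters dialed in on one frame, then reused for the whole set.
    DEV_PARAMS = dict(
        use_camera_wb=True,   # keep white balance consistent across the series
        no_auto_bright=True,  # don't let libraw re-meter each frame
        bright=1.2,           # the one manual tweak, applied identically everywhere
        output_bps=8,
    )

    for raw_path in sorted(Path("shoot/").glob("*.RAF")):
        with rawpy.imread(str(raw_path)) as raw:
            rgb = raw.postprocess(**DEV_PARAMS)
        iio.imwrite(raw_path.with_suffix(".jpg"), rgb)

Because exposure was locked in camera, one set of development parameters is valid for every frame; with auto-everything, each frame would need its own.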
You can approximate the same limitation on digital cameras by simply using a very small SD card.
The best selling SD card on B&H is 128 GB. Let's consider that "regular size".
Fujifilm's GFX100 II is a popular medium-format mirrorless camera. Its sensor is 102MP, so each uncompressed 14-bit RAW image is about 178 MB.
102M pixels x 14 bits = 1.428B bits ≈ 178M bytes ≈ 178 MB
So a 128 GB SD card can hold ~717 images of that size. That's a lot more images than a standard roll of film.
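The arithmetic, as a sanity check (ignoring metadata, compression, and filesystem overhead):

    # Back-of-the-envelope: uncompressed 14-bit RAWs per 128 GB card.
    pixels = 102e6                        # GFX100 II resolution
    bytes_per_image = pixels * 14 / 8     # ~178 MB per frame
    card_bytes = 128e9                    # cards are sold in decimal GB
    print(card_bytes / bytes_per_image)   # ~717 frames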
If you want full control, you fall into the rabbit hole of dcraw, where you can choose how that raw processing engine actually works: which algorithms are used and with what parameters. Even in Lightroom you are just using the algorithm they decided on for you already, with parameters they decided are fine.
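For a flavor of what that control looks like, here's a hedged sketch via rawpy, a Python wrapper over libraw (dcraw's descendant); the file name and white-balance multipliers are illustrative:

    # Every step Lightroom hides is an explicit parameter here.
    import rawpy

    with rawpy.imread("frame.dng") as raw:
        rgb = raw.postprocess(
            demosaic_algorithm=rawpy.DemosaicAlgorithm.AHD,  # pick the interpolation yourself
            use_camera_wb=False,
            user_wb=[2.1, 1.0, 1.4, 1.0],  # explicit per-channel multipliers
            gamma=(1, 1),                  # linear output, no tone curve chosen for you
            no_auto_bright=True,
            output_bps=16,
        )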
If I ever find a good moving prop like a small fan, maybe I'll also re-shoot new previews to demonstrate how shutter speed affects moving objects.
Now, I'm just not sure how one would simulate a running fan with a picture. While for a static image you can have separate foreground and background and then apply effects for simulation (I know iPhone HEIC images have this property), for moving subjects you have to simulate both the blur and the stillness, which is probably more difficult in terms of coding.
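One plausible approach is to cheat with frame averaging: render many slightly rotated copies of the blades and average them, with the rotation sweep proportional to the simulated shutter time. A sketch with numpy and Pillow; the image file and fan speed are made-up stand-ins:

    # Fake motion blur for a spinning fan by integrating rotated frames.
    import numpy as np
    from PIL import Image

    def simulate_fan(shutter_s, rpm=600, steps=32):
        blades = Image.open("fan_blades.png").convert("RGB")
        sweep_deg = rpm / 60 * 360 * shutter_s      # angle swept while shutter is open
        acc = np.zeros((blades.height, blades.width, 3))
        for i in range(steps):                      # integrate light over the exposure
            acc += np.asarray(blades.rotate(sweep_deg * i / steps), dtype=np.float64)
        return Image.fromarray((acc / steps).astype(np.uint8))

    simulate_fan(1 / 1000).save("fan_frozen.png")   # fast shutter: blades look frozen
    simulate_fan(1 / 15).save("fan_blurred.png")    # slow shutter: 240 degrees of smear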
Not this absolute shit again. This is not how photography works or how the physics actually work. Image noise does NOT come from high ISO; it comes from low exposure (not enough light hitting the sensor). ISO is just a multiplier between the number of photons and the brightness of a pixel in your photo. The implementation of the multiplier is (usually) half-analog and half-digital, but it's still just a multiplier. If you keep the exposure the same, then changing the ISO on a digital camera will NOT introduce any more noise (except at the extremes of the range, where, for example, analog readout noise may play a role).
This "simulator" artificially adds noise based on the ISO value, as you can easily discover: Set your shutter to 1/500 and your aperture to F8, then switch between ISO 50 and ISO 1600 and look at the letters on the bulb. ISO 50, dark but perfectly readable. ISO 1600, garbled mess. Since the amount of light hitting the simulated sensor stays the same, you should be seeing slightly LESS noise at ISO 1600 (better signal to noise ratio than at low ISO), not more.
edit: To add something genuinely useful: Use whatever mode suits you (manual, Av, Tv) and just use Auto ISO. Expose for the artistic intent and get as much light in as possible (i.e. use a slower shutter speed unless you need to go faster, use a wider aperture unless you need a narrower one). That’s the light that you have, period. Let the camera choose a multiplier (ISO) that will result in a sane brightness range in your JPEG or RAW (you’ll tweak that anyway in post). If the photo ends up too noisy, sorry but there was not enough light.
ISO is an almost useless concept carried over from film cameras where you had to choose, buy and load your brightness multiplier into the camera. Digital cameras can do that on the fly and there’s usually no reason not to let them. (If you can come up with a reason, you probably don’t need this explanation)
Sounds like you're saying that setting higher ISO does cause noise, but as long as you don't go too high you won't really notice the difference?
So does this mean that changing the ISO directly on my camera, or in darktable/whatever at post-proc time, is virtually the same?
I'm sure that image nerds would poke holes in it, but it seems to work pretty much exactly the way it does IRL.
The noise at high ISO is where it can get specific. Some manufacturers make cameras that actually do really well at high ISO and high shutter speed. This seems to reproduce a consumer DSLR.
Even on old, entry-level APS-C cameras, ISO1600 is normally very usable. What is rendered here at ISO1600 feels more like the "get the picture at any cost" levels of ISO, which on those limited cameras would be something like ISO6400+.
Heck, the original pictures (there is one for each aperture setting) are taken at ISO640 (Canon EOS 5D Mark II at 67mm)!
(Granted, many are too allergic to noise and end up missing a picture instead of just taking the noisy one which is a shame, but that's another story entirely.)
The amount and appearance of noise also heavily depends on whether you're looking at a RAW image before noise processing or a cooked JPEG. Noise reduction is really good these days but you might be surprised by what files from even a modern camera look like before any processing.
That said, I do think the simulation here exaggerates the effect of noise for clarity. (It also appears to be about six years old.)
Yes, this simulation exaggerates a lot. Either that, or it contains a tiny crop of a larger image.
I do feel (image nerding now) that its shutter/ISO visual for showing the image over/under-exposed is not quite correct. It appears they show incorrect exposure by taking the "correct" image and blending it with white (or with black, at the other end of the exposure spectrum) to produce the resulting image.
I suppose I am expecting something more like "levels" that pushes all the pixels to white (or black) until they are forced to clip. (But maybe I am too trained in photo-editing tools and expect the film to behave in the same way.)
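In code terms, the difference between the two behaviors is roughly this (a numpy sketch; the blend factor and stop count are arbitrary):

    # Two ways to fake overexposure on an 8-bit image array.
    import numpy as np

    img = np.random.default_rng(1).integers(0, 256, (4, 4), dtype=np.uint8)

    # What the simulator appears to do: alpha-blend toward white.
    # Nothing ever clips; the whole image just washes out uniformly.
    alpha = 0.6
    blended = (img * (1 - alpha) + 255 * alpha).astype(np.uint8)

    # "Levels"-style push: multiply and clip, as a sensor or film would.
    # Highlights blow out to pure white while the rest keeps contrast.
    stops = 2
    pushed = np.clip(img.astype(np.float64) * 2**stops, 0, 255).astype(np.uint8)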
Very limited camera choices, though.
I'm interested to see how the roll turns out - I dropped it off for development the other day, and had a good laugh with the employees.
I now have a mnemonic for it: Blor - a (somewhat) portmanteau of Blur and low. So low aperture = blur.
Edit for clarification: I mean low number (2 vs 32) = blur
Unfortunately the lower number actually means bigger aperture.
With my mnemonic, I say low *number* = blur
I should have been more specific
Denominator, not numerator: the f-number is the focal length divided by the aperture diameter (N = f/D), which is why larger number = smaller aperture. A 50mm lens at f/2 has a 25mm pupil; at f/16 it's about 3mm.
But photographers generally just say "f2", meaning an aperture value of two set on the dial of the camera/lens. It's one stop faster (twice as much light) as f/2.8. It'll give you a relatively shallow depth of field, but not as shallow as e.g. f/1.4.
The smaller the aperture, i.e. the closer to an ideal pinhole camera, the wider the depth of field. An ideal pinhole camera has infinite depth of field.
Unfortunately the aperture f numbers are the wrong way round; larger numbers correspond to smaller diameters.
It all matters.