Or maybe just don’t move your arm for literally less than a second while the photo(s) is/are taken… Moving your arm(s) down takes less than a second if you just let them fall by gravity. It’s a funny pic nonetheless.
Comment on A bride to be discovers a reality bending mistake in Apple's computational photography
aeronmelon@lemm.ee 11 months ago
It’s a really cool discovery, but I don’t know how Apple is supposed to program against it.
What surprises me is how much of a time range each photo has to work with. Enough time for Tessa to put down one arm and then the other. It’s basically recording a mini-video and selecting frames from it. I wonder if turning off things like Live Photo (which retroactively starts the video a second or two before you actually press record) would force the Camera app to select from a briefer range of time.
Maybe they could combine facial recognition with the post-processing to tell the software that if it thinks it’s looking at multiple copies of the same person, it needs to time-sync the sections of frames chosen for the final photo. It wouldn’t be foolproof, but it would be better than nothing.
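A rough sketch of what that time-sync constraint could look like (my own illustration, not Apple’s pipeline; region names and sharpness scores are made up): each region of the photo normally picks its own best frame independently, but regions believed to show the same person are forced to share one frame index.

```python
# Hypothetical sketch of per-region frame selection with a "same person"
# constraint. Scores and region names are illustrative only.

def select_frames(region_scores, same_person_groups):
    """region_scores: {region: [sharpness score per frame]}.
    same_person_groups: list of sets of regions believed to show one person.
    Returns {region: chosen frame index}."""
    # Default: each region independently picks its sharpest frame.
    choice = {r: max(range(len(s)), key=s.__getitem__)
              for r, s in region_scores.items()}
    # Constraint: regions showing the same person must share one frame,
    # chosen to maximize their summed score.
    for group in same_person_groups:
        n_frames = len(next(iter(region_scores.values())))
        best = max(range(n_frames),
                   key=lambda f: sum(region_scores[r][f] for r in group))
        for r in group:
            choice[r] = best
    return choice
```

With a constraint like this, the two arm regions would come from the same instant even if one of them was individually sharper in a different frame, while unrelated regions (the background) still get their own best frame.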
Petter1@lemm.ee 11 months ago
xantoxis@lemmy.world 11 months ago
Program against it? It’s a camera. Put what’s on the light sensor into the file, you’re done. They programmed it to make this happen, by pretending that multiple images are the same image.
ninekeysdown@lemmy.world 11 months ago
That’s oversimplified. There’s only so much light you can get on a sensor at the sizes used in mobile devices. To compensate, there’s A LOT of processing that goes on. Even higher-end DSLR cameras do post-processing.
Even shooting RAW like you’re suggesting involves some amount of post processing for things like lens corrections.
It’s all that post processing that allows us to have things like HDR images for example. It also allows us to compensate for various lighting and motion changes.
Mobile phone cameras are more about the software than the hardware these days
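For example, the HDR mentioned above is usually built by blending several bracketed exposures. Here’s a deliberately naive sketch (my own toy version, loosely in the spirit of Mertens-style exposure fusion, not any vendor’s actual algorithm): each pixel is weighted by how well-exposed it is, then the frames are blended.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Toy exposure fusion: weight each pixel by how close it is to
    mid-gray (0.5), then blend the bracketed frames per pixel.
    frames: sequence of same-shaped grayscale images with values in [0, 1]."""
    frames = np.asarray(frames, dtype=float)       # shape (n, H, W)
    # Well-exposedness weight: Gaussian centered on mid-gray.
    weights = np.exp(-((frames - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * frames).sum(axis=0)
```

A pixel that’s blown out in the bright exposure gets most of its value from the darker frames, and vice versa, which is how a tiny sensor ends up with more usable dynamic range than any single shot it took.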
cmnybo@discuss.tchncs.de 11 months ago
With a DSLR, the person editing the pictures has full control over what post processing is done to the RAW files.
ninekeysdown@lemmy.world 11 months ago
Correct, I was referring to RAW shot on mobile, not a proper DSLR. I guess I should have been more clear about that. Sorry!
SpaceNoodle@lemmy.world 11 months ago
Oh, so you have no idea what you’re talking about.
ninekeysdown@lemmy.world 11 months ago
So what was I wrong about? I’m always happy to learn from my mistakes! 😊
Do you have some whitepapers I can reference too?
schmidtster@lemmy.world 11 months ago
Oh, so your excuse is you are illiterate?
randombullet@feddit.de 11 months ago
Raw files from cameras carry metadata that tells raw converters which color profile and lens the shot was taken with, but any camera worth using professionally doesn’t bake corrections into the raw data itself. However, in special cases, such as lenses with high distortion, the raw files come with a distortion-correction profile enabled by default.
ninekeysdown@lemmy.world 11 months ago
Correct, I was referring to RAW shot on mobile devices, not a proper DSLR. That was my observation based on using the iPhone and Android raw formats.
This isn’t my area of expertise so if I’m wrong about that aspect too let me know! 😃
ricecake@sh.itjust.works 11 months ago
What’s on the light sensor when? There’s no mechanical shutter; it can just capture a continuous stream of light indefinitely.
Most people want a rough representation of what’s hitting the sensor when they push the button. But they don’t actually care about the sensor, they care about what they can see, which doesn’t include the blur from the camera wobbling, or the slight blur of the subject moving.
They want the lighting to match how they perceived the scene, even though that isn’t what the sensor picked up, because your brain edits what you see before you comprehend the image.
Doing those corrections is a small step to incorporating discontinuities in the capture window for better results.
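That continuous stream is also how zero-shutter-lag works: the camera keeps a rolling buffer of recent frames, and "taking a photo" really means picking the best frame near the button press. A minimal sketch (my own illustration, with a made-up sharpness score standing in for real quality metrics):

```python
from collections import deque

class RollingCapture:
    """Zero-shutter-lag sketch: keep a short rolling buffer of frames;
    pressing the shutter picks the sharpest buffered frame rather than
    whatever arrives after the press."""

    def __init__(self, size=8):
        self.buffer = deque(maxlen=size)  # holds (frame, sharpness) pairs

    def on_frame(self, frame, sharpness):
        # Called continuously while the viewfinder is open.
        self.buffer.append((frame, sharpness))

    def on_shutter(self):
        # Return the sharpest recent frame, not the most recent one.
        return max(self.buffer, key=lambda fs: fs[1])[0]
```

Once the pipeline is already choosing among frames like this, stitching different regions from different frames (as in the bride photo) is a small conceptual step further.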