In the keynote at which, among other things, the iPhone 11 Pro was presented, Phil Schiller gave a brief glimpse of a feature that Apple calls "Deep Fusion". According to Schiller, it is about computational photography combined with crazy science (his exact words were "computational photography mad science"). When I watched the keynote, it was not immediately clear to me what the feature actually does, because good low-light photos had already been shown earlier in the keynote in connection with the iPhone 11, so I assumed there had to be a little more behind it.
That is exactly the question I asked myself, so I watched the corresponding passage in the keynote again. For readers who would like to see it for themselves: it starts at timestamp 1:21:55 in the keynote.
Here is what happens in detail: the iPhone 11 Pro (only the version with three lenses, not the regular iPhone 11 without "Pro"!) takes eight pictures before the shutter button is even pressed. Of these, four are short-exposure photos and four are slightly longer-exposure photos. A ninth photo, a long exposure, is captured the moment you press the shutter release, which is how you end up with nine photos.
Within a second of pressing the shutter release, the iPhone 11 Pro's Neural Engine analyzes all of the individual images and combines them pixel by pixel into a single image, picking only the best-looking pixels from each frame.
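Apple has not published how this works internally, but the per-pixel selection described above can be sketched in a very simplified way: for each pixel, weight the candidate frames by a local sharpness score and blend them, so the sharpest frame dominates in each region. Everything here (the gradient-based sharpness measure, the weighting scheme) is an illustrative assumption, not Apple's actual algorithm.

```python
import numpy as np

def local_sharpness(frame: np.ndarray) -> np.ndarray:
    """Crude per-pixel sharpness score: magnitude of the image gradient."""
    gy, gx = np.gradient(frame.astype(float))
    return np.hypot(gx, gy)

def fuse_frames(frames: list) -> np.ndarray:
    """Blend a burst of grayscale frames pixel by pixel, weighting each
    frame by its local sharpness (plus a tiny epsilon so perfectly flat
    regions fall back to a plain average)."""
    stack = np.stack([f.astype(float) for f in frames])          # (N, H, W)
    weights = np.stack([local_sharpness(f) for f in frames]) + 1e-6
    weights /= weights.sum(axis=0, keepdims=True)                # normalize per pixel
    return (stack * weights).sum(axis=0)

# Toy burst: one frame with detail, one flat frame
sharp = np.tile(np.linspace(0, 255, 16), (16, 1))
flat = np.full((16, 16), 128.0)
fused = fuse_frames([sharp, flat])
print(fused.shape)  # (16, 16)
```

Where both frames contain detail, the weights split between them; where only one frame is sharp, its pixels win out, which is the intuition behind "using only the best pixels".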
The end result is a 24-megapixel photograph that stands out for its particularly high level of detail and low image noise.
According to Phil Schiller, this is the first time the Neural Engine has been used to produce the photo output itself, and it was in this context that he used the term "computational photography mad science".
Anyone who has looked at the technical specifications of the iPhone 11 Pro will surely have noticed that each of the three built-in cameras has a sensor resolution of only 12 megapixels. This was already the case with the iPhone XS, which is why the highest resolution of iPhone photos so far has been 12 MP.
With the Deep Fusion camera feature, however, the iPhone 11 Pro now turns out photos with 24 megapixels. Technically, this is only possible by combining several photos into one larger one; the extra detail comes from the minimal camera movement between the exposures. There is a photo app called "Hydra" that takes a similar approach and produces 32 MP photos. You can view sample photos on the developer's page under "Super Resolution (Zoom / Hi-Res)".
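The idea behind getting more resolution out of a 12 MP sensor can be sketched as follows: the slight hand shake between frames shifts each exposure by a fraction of a pixel, so the frames sample the scene at slightly different positions; placing each frame's pixels onto a finer grid recovers detail beyond a single frame. This toy version assumes the sub-pixel offsets are already known and uses nearest-neighbor placement; real pipelines estimate the offsets and reconstruct far more carefully.

```python
import numpy as np

def naive_super_resolution(frames, offsets, scale=2):
    """Place each low-res frame onto a finer grid according to its known
    sub-pixel offset, then average overlapping samples. A toy sketch of
    multi-frame super-resolution, not any vendor's actual pipeline."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    count = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Nearest high-res cell for each low-res sample
        ys = (np.arange(h) * scale + round(dy * scale)).clip(0, h * scale - 1)
        xs = (np.arange(w) * scale + round(dx * scale)).clip(0, w * scale - 1)
        acc[np.ix_(ys, xs)] += frame
        count[np.ix_(ys, xs)] += 1
    count[count == 0] = 1  # avoid dividing by zero on unfilled cells
    return acc / count

# Four 4x4 frames shifted by half a pixel fill an 8x8 grid completely
frames = [np.full((4, 4), v, dtype=float) for v in (10, 20, 30, 40)]
offsets = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
hi = naive_super_resolution(frames, offsets)
print(hi.shape)  # (8, 8)
```

With four frames offset by half a pixel in each direction, every cell of the doubled grid gets exactly one sample, which is why combining several slightly shifted 12 MP exposures can plausibly yield a 24 MP result.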
Even though Hydra is certainly not bad, achieving the same with the iPhone 11 Pro should be far more convenient, and the results better.
You have to hand it to Google: with Night Sight, they have put the low-light shots of all previous iPhones in the shade.
As for the technical background, I had read that the Google Pixel smartphones upload the captured images to the Google cloud for processing, and that this is how Night Sight mode achieves its impressive photo quality. That statement is not correct, however: after reading up on it again, I learned that all editing is done on the Pixel itself and no data is uploaded to the cloud.
With the iPhone's Deep Fusion mode, no photos leave the device either, and certainly none end up on third-party servers, unless you have explicitly chosen to store them in the iCloud photo library. Apple is known for trying to solve this kind of thing locally on the device whenever possible. This may also be why Deep Fusion is only arriving now, with a new iPhone that has a fast enough processor.
In any case, this feature should allow Apple to catch up with Google in terms of low-light shots. The only question is whether Google already has something up its sleeve to raise the bar again with the next Pixel Phone.
All iPhone 11 models are equipped with a feature called "Night Mode", while Deep Fusion is apparently reserved for the Pro models. Technically, Night Mode is more like Google's Night Sight mode, since both are based on longer exposure times whose frames are then merged computationally, similar to HDR mode. With Deep Fusion, on the other hand, there seems to be a lot more computational magic going on in the background.
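The HDR-style merging that these night modes appear to perform can be illustrated very roughly: normalize each bracketed exposure to a common exposure time, down-weight near-saturated pixels so blown highlights from the long exposures don't dominate, and average. The weighting scheme and threshold here are illustrative assumptions, not how either Apple's or Google's pipeline actually works.

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge bracketed exposures by scaling each frame to radiance per
    unit time and computing a weighted average. A simplified stand-in
    for the HDR-style merging that night modes appear to perform."""
    merged = np.zeros_like(frames[0], dtype=float)
    total_w = np.zeros_like(merged)
    for frame, t in zip(frames, exposure_times):
        f = frame.astype(float)
        weight = np.where(f >= 250, 0.1, 1.0)  # distrust near-saturated pixels
        merged += weight * (f / t)             # normalize by exposure time
        total_w += weight
    return merged / total_w

# Toy example: a short (1x) and a long (2x) exposure of the same scene
short = np.array([[100.0, 250.0], [50.0, 255.0]])
long_ = np.array([[200.0, 255.0], [100.0, 255.0]])
hdr = merge_exposures([short, long_], [1.0, 2.0])
print(hdr.shape)  # (2, 2)
```

Pixels that agree after normalization (e.g. 100 at 1x and 200 at 2x) merge cleanly, while clipped pixels contribute only a tenth of their weight, which is the basic trick for keeping highlight detail while the long exposures lift the shadows.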
It is interesting, though, how the iPhone's Night Mode differs from the Pixel phones' Night Sight mode. From what I have seen in early reviews, Apple's Night Mode looks much more natural: the photos still clearly show that the picture was taken in the dark. In the Pixel phones' Night Sight mode, by contrast, the photos look more as if they had been taken during the day.
From a photographer's point of view, I much prefer Apple's approach: the nighttime mood is retained in the photos, while the colors and details are still significantly better than in night shots taken with the iPhone XR, XS, and XS Max.
I'm excited to see how Deep Fusion performs in practice. According to Phil Schiller, it is not only suitable for taking pictures in poor lighting conditions, but can also achieve impressive results in better light.
However, Deep Fusion will not be available when the iPhone 11 Pro models ship. Apple plans to deliver it to the Pro models via a software update later this fall.
Jens has been running the blog since 2012. He appears to his readers as Sir Apfelot and helps them with problems of a technical nature. In his free time he rides electric unicycles, takes photos (preferably with his iPhone, of course), climbs around in the Hessian mountains, or hikes with the family. His articles cover Apple products, news from the world of drones, and solutions for current bugs.