Google bucked the trend of dual camera setups for its new Pixel 2 flagship phones, knowing it had an ace up its sleeve. The dual camera setup in today's smartphones is used mainly for one thing (although there are other advantages, surely) – the much-hyped bokeh effect. Google has now revealed how it uses machine learning to get essentially the same effect from the Pixel 2's single cameras.
The bokeh effect in photography, if you're not familiar with the term, refers to the artistic, creamy blur in the background of an image while the main subject in the foreground stays sharp. It is usually associated with single lens reflex (SLR) cameras. In mobile photography, it has become trendy thanks to the dual camera setup, where one sensor captures the detail of the subject and the other gathers depth data to separate foreground from background. Add a little software processing, and the bokeh effect can be done on a smartphone.
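To make the depth idea concrete, here is a rough Python sketch of depth-driven blur. It is not any phone maker's actual pipeline; the function name, the depth scale, and the blending rule are assumptions chosen purely for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_bokeh(image, depth, focus_depth=0.1, sigma=6.0):
    """Blend a sharp image with a blurred copy, weighted by distance
    from the focal plane. image is float (H, W, 3) in [0, 1]; depth is
    (H, W) with 0 = nearest, 1 = farthest (a hypothetical scale)."""
    # Blur across the two spatial axes only, never the color channels.
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
    # Weight is 0 at the focal plane (kept sharp) and grows with distance.
    weight = np.clip(np.abs(depth - focus_depth), 0.0, 1.0)[..., None]
    return (1.0 - weight) * image + weight * blurred

# Toy inputs: a random "photo" and a depth ramp from near (top) to far (bottom).
image = np.random.rand(240, 320, 3)
depth = np.linspace(0.0, 1.0, 240)[:, None] * np.ones((240, 320))
result = synthetic_bokeh(image, depth)
```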
But Google did not follow the dual-camera trend – the Pixel 2 and Pixel 2 XL each have a single camera sensor on the front and the back. Both phones still offer a "portrait mode" that produces the bokeh effect. Google's ace in the hole is machine learning via TensorFlow: a trained neural network judges, pixel by pixel, which parts of the image belong to the subject and which belong to the background that should be blurred.
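Purely as a hedged illustration of that idea: in TensorFlow terms, such a segmentation network is a model that maps an image to a per-pixel foreground probability. The toy, untrained network below is an assumption standing in for Google's real (and far larger) model.

```python
import tensorflow as tf

def tiny_segmenter():
    """A toy fully-convolutional network that outputs a per-pixel
    foreground probability. Untrained and hypothetical; it only stands
    in for the kind of segmentation model the article describes."""
    inputs = tf.keras.Input(shape=(None, None, 3))
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(x)
    mask = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, mask)

model = tiny_segmenter()
photo = tf.random.uniform((1, 256, 256, 3))  # stand-in for a camera frame
foreground = model(photo)                    # (1, 256, 256, 1), values in [0, 1]
# Low foreground probability marks the pixels that would receive the blur.
background_weight = 1.0 - foreground
```

A mask like this could serve as the blur weight in the earlier sketch, which is how a single camera can fake depth separation without a second sensor.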
The TensorFlow-based machine learning is so good that even the Pixel 2's selfie camera is capable of portrait mode, despite lacking the phase detect auto focus (PDAF) hardware that feeds depth information into the process on the main camera sensor. Check out the details of this machine learning feature in the source link below.
SOURCE: Google