
Google made the tech driving Portrait Mode on Pixel 2 open source

Google needs no introduction. Its Pixel lineup of smartphones offers the purest Android experience available. The Google Pixel 2 and Pixel 2 XL don't sport a dual camera setup at the back or the front, yet both devices offer a Portrait mode on both cameras. Google delivers the effect through AI and software alone, which has helped the Pixel 2's camera earn a reputation as one of the best ever on a smartphone. Now the company has made the underlying technology open source.

Google's research and development team has published a blog post announcing the open-source release of its "semantic image segmentation model", known as "DeepLab-v3+" and implemented in TensorFlow. According to the post, semantic image segmentation means "assigning a semantic label, such as 'road', 'sky', 'person', 'dog', to every pixel in an image." The company says the model is helping power various new applications, including the synthetic shallow depth-of-field effect, better known as the Portrait mode seen on the Pixel 2 and Pixel 2 XL smartphones.
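To make the idea of per-pixel labelling concrete, here is a minimal sketch in plain NumPy. It is not DeepLab-v3+ itself, just an illustration of what a segmentation model's output looks like: a label map the same size as the image, where every pixel carries a class id (the class names here are hypothetical placeholders).

```python
import numpy as np

# Hypothetical class ids, in the spirit of Google's description:
# one semantic label per pixel ("person", "sky", and so on).
LABELS = {0: "background", 1: "person", 2: "sky"}

# A toy 4x4 "model output": every pixel is assigned a class id.
label_map = np.array([
    [2, 2, 2, 2],
    [2, 1, 1, 2],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
])

# The "person" region is simply the set of pixels with that label.
person_mask = (label_map == 1)
print(int(person_mask.sum()))  # prints 6: six pixels labelled "person"
```

A real model would produce such a map from the image itself; the point is that the output is dense, one label per pixel, rather than a single label for the whole image.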

So how exactly does Google's tech work?

Google's blog post explains that once each pixel in an image has been assigned one of these labels, the technology can work out the outlines of individual objects. That outline is the crucial ingredient of Portrait mode: everything apart from the subject in focus is blurred, producing a shallow depth-of-field effect. Apple introduced the effect to smartphones with the iPhone 7 Plus, but while most companies rely on a dual camera setup to achieve it, Google did it with software alone.
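The compositing step described above can be sketched in a few lines of NumPy. This is a simplified illustration, not Google's actual pipeline: the naive box blur below stands in for a realistic lens blur, and the hand-made mask stands in for a segmentation model's output.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive k x k box blur via shifted sums; a stand-in for a lens blur."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

# Toy grayscale image and a subject mask (True = pixel belongs to the subject).
image = np.arange(25, dtype=float).reshape(5, 5)
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True

# Portrait-mode composite: subject pixels stay sharp, everything else is blurred.
blurred = box_blur(image)
portrait = np.where(mask, image, blurred)
```

Inside the mask the original pixels survive untouched; outside it the blurred version is used, which is exactly the shallow depth-of-field effect the post describes.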


Google's blog post adds that the DeepLab-v3+ release includes additional models, among them "models built on top of a powerful convolutional neural network (CNN) backbone architecture for the most accurate results…". The post also notes that the models have improved dramatically thanks to better datasets and other advances, and says the release should be of real benefit to both industry professionals and students.

This post was last modified on March 16, 2018 2:29 pm

Published by Lalit Wadhwa
