When Google first mentioned Project Soli back in 2015, it was already clear that applying radar technology to wearable interfaces would be complicated. The tech giant did demo Project Soli in action on an LG smartwatch, and the concept mostly made sense. It was only last year that Google received FCC approval for Soli's radar-based sensor. Motion Sense was then expected on the Pixel 4, but with limits on how many apps could use it and not in all countries. You see, the Motion Sense API still isn't available to third-party developers.

Google is now explaining to the world how it actually works. The details are a bit complicated, but at least the information is out there for the public to digest. The Pixel 4's Soli-based radar perception and interaction can be understood, but it's still not simple.

The tech giant decided to keep the Pixel 4's forehead large instead of going bezel-less, and there is a reason for that: Motion Sense gestures. That part of the device also houses the face-recognition hardware.

Google's Soli radar technology offers improved gesture controls. Google has miniaturized the radar so that short-range sensing can pick up small "details" like a face or a hand. By details we mean signals that differ in intensity and shape, depending on the distance and movement of the object relative to the sensor.
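To make the idea concrete, here is a deliberately tiny sketch of that principle. Google's actual Soli signal-processing pipeline is not public, so everything below is a made-up toy: a radar "frame" is assumed to be a list of (range bin, reflected intensity) pairs, and motion is inferred by watching how the strongest reflection shifts between frames.

```python
# Toy illustration only; the real Soli pipeline is proprietary.
# A "frame" here is a hypothetical list of (range_bin, intensity) samples.

def nearest_bin(frame):
    """Range bin with the strongest reflection (the dominant object)."""
    return max(frame, key=lambda sample: sample[1])[0]

def classify(prev_frame, curr_frame, move_threshold=1):
    """Label motion by comparing the dominant range bin across two frames."""
    delta = nearest_bin(curr_frame) - nearest_bin(prev_frame)
    if delta <= -move_threshold:
        return "approach"   # reflection jumped to a nearer range bin
    if delta >= move_threshold:
        return "retreat"    # reflection jumped to a farther range bin
    return "still"

# Example: a hand whose strongest echo moves from range bin 5 to bin 3.
prev = [(3, 0.1), (5, 0.9), (8, 0.2)]
curr = [(3, 0.8), (5, 0.2), (8, 0.2)]
print(classify(prev, curr))  # → approach
```

The real sensor works on far richer Doppler and range data, but the shape of the problem is the same: small changes in signal intensity and position over time are what the gesture logic has to interpret.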

Google trained its AI on videos recorded by Google volunteers so the radar could "read" movements; hundreds of hours of motion were reviewed as training data. It's a good start for radar technology, but at the moment not all Pixel 4 units can access the feature, because certification is limited to certain countries.
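The training step described above is standard supervised learning: labeled recordings in, a gesture classifier out. Google's actual model and dataset are not public, so the sketch below stands in with a toy nearest-centroid classifier over invented two-dimensional features, just to show the shape of the process.

```python
# Hedged toy example: the real Soli model and training data are not public.
# Feature vectors here are fabricated: [peak intensity, range change/frame].

from statistics import mean

def train(labeled_features):
    """Compute one mean feature vector (centroid) per gesture label."""
    return {
        label: [mean(dim) for dim in zip(*vectors)]
        for label, vectors in labeled_features.items()
    }

def predict(centroids, vector):
    """Assign the label of the closest centroid (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, vector))
    return min(centroids, key=lambda label: dist(centroids[label]))

data = {
    "swipe":    [[0.8, -2.0], [0.7, -1.5]],   # hand sweeping toward sensor
    "presence": [[0.3, 0.0], [0.4, 0.1]],     # person sitting still nearby
}
model = train(data)
print(predict(model, [0.75, -1.8]))  # → swipe
```

A production system would use a deep model over raw radar frames rather than hand-picked features, but the train-on-labeled-examples loop is the same idea.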

For some Pixel 4 and Pixel 4 XL phones, Project Soli has been a success: radar-based machine perception is possible and can feel seamless. Still, the technology has a long way to go. We're crossing our fingers that Google will continue research and development rather than scrap the project altogether.