The past couple of years have seen major upheaval across many online platforms for pandemic-related reasons. Platforms that host user reviews of businesses have been affected as well, since COVID-related health and safety measures aren’t popular with every customer. Google Maps is one of them, and the company is now explaining its moderation policies and procedures, most likely in light of Yelp’s recent report on reviews posted to its site. The post gives us a behind-the-scenes look at what keeps user-generated reviews usable and trustworthy.

Reviews posted on Google Maps are checked by a combination of machine learning and human intervention. Every review a user leaves for a business passes through a machine learning system trained to spot violations of the platform’s content policies. This lets Google remove abusive or misleading reviews, which are more common than you might think. The system checks both the content of the review and the account that left it.
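Google hasn’t disclosed how its model actually works, but the idea of checking both content and account signals before publishing can be sketched in a few lines. This is purely illustrative: the banned phrases and account-age rule below are hypothetical stand-ins for a trained classifier.

```python
# Illustrative sketch only — a keyword blocklist and a simple account-age
# rule stand in for Google's (undisclosed) machine learning system.
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    account_age_days: int  # account signal, checked alongside the content

BANNED_PHRASES = {"buy followers", "visit my site"}  # hypothetical policy terms

def automated_check(review: Review) -> str:
    """Return 'remove', 'hold', or 'publish' from content and account signals."""
    if any(p in review.text.lower() for p in BANNED_PHRASES):
        return "remove"   # abusive or misleading content
    if review.account_age_days < 1:
        return "hold"     # brand-new account: route to further checks
    return "publish"      # no violation detected: posted within seconds
```

In a real pipeline the keyword test would be a model score with a threshold, but the branching — auto-remove, hold for scrutiny, or publish immediately — mirrors the flow the article describes.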

Beyond individual reviews, the system also watches for uncharacteristic activity around a business, such as a sudden spike in reviews or other questionable patterns. If no policy violation is detected, the review is posted within a matter of seconds. But when a review is flagged, whether by users or by the business itself, human intervention kicks in: a team of operators examines the flagged content and removes the review or suspends the account if warranted.
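The “spike in reviews” signal is a classic anomaly-detection problem. As a rough sketch (the actual detection logic is not public), one could compare today’s review count against the business’s recent baseline and flag anything several standard deviations above it:

```python
# Hypothetical spike detector: flag a business whose review volume today
# is far above its recent daily baseline. The threshold is an assumption.
from statistics import mean, stdev

def review_spike(daily_counts: list[int], today: int, threshold: float = 3.0) -> bool:
    """True if today's count exceeds the baseline by `threshold` std devs."""
    baseline = mean(daily_counts)
    spread = stdev(daily_counts)
    if spread == 0:
        # Perfectly flat history: fall back to a simple multiple of the mean
        return today > baseline * threshold
    return today > baseline + threshold * spread
```

A business averaging five reviews a day that suddenly receives forty would trip this check, which is roughly the pattern a coordinated review-bombing campaign produces.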

Yelp recently reported that it removed 15,500 reviews between April and December 2021 for violating its COVID-19 content guidelines. While Google Maps didn’t cite similar incidents, there have been previous reports of “review bombing,” where users coordinated attacks against establishments that implemented mask-wearing and vaccine requirements. That may be why Google is now explaining how it moderates reviews on its platform.

Google Maps also emphasizes that it has systems in place to identify potential abuse risks, especially around current events that may affect the businesses associated with them. Hopefully, this really does mean that the reviews we see on the platform are trustworthy and authentic.