Free Google AI image analysis tool

Google offers an AI image classification tool that analyzes images to classify content and assign tags to them.

The tool is intended as a demonstration of Google Vision, which can scale image classification in an automated way, but it can also be used as a standalone tool to see how an image detection algorithm views your images and what they are relevant for.

Even if you’re not using the Google Vision API to scale image detection and classification, the tool provides an interesting view into what Google’s image-related algorithms are capable of, which makes it worth uploading images to see how Google’s Vision algorithm classifies them.

This tool demonstrates Google’s AI and machine learning algorithms for understanding images.

It’s part of the Google Cloud Vision API suite, which offers machine learning vision models for apps and websites.

Does the Cloud Vision tool reflect Google’s algorithm?

The Cloud Vision tool is simply a machine learning model, not a ranking algorithm.

It is therefore unrealistic to use this tool and expect it to reveal anything about Google’s image ranking algorithm.

However, it is a great tool for understanding how Google’s AI and machine learning algorithms can understand images, and it provides educational insight into the current state of the art of vision-related algorithms.

The information provided by this tool can help you understand how a machine interprets what an image is about, and possibly give an idea of how closely that image relates to the overall topic of a web page.

Why is an image classification tool useful?

Images can play an important role in search visibility and CTR across the various ways in which web page content is presented on Google.

Potential site visitors researching a topic use images to navigate to the right content.

Thus, using images that are attractive and relevant to search queries can, in some contexts, be useful for quickly communicating that a web page is relevant to what a person is looking for.

The Google Vision tool helps to understand how an algorithm can interpret and classify an image according to what it contains.

Google’s image SEO guidelines recommend:

“High-quality photos attract users more than blurry and unclear images. Also, crisp images are more appealing to users in the results thumbnail and increase the likelihood of getting traffic from users.”

If the Vision tool is having trouble identifying the subject of the image, this may indicate that potential site visitors will experience the same issue and decide not to visit the site.

What is the Google Image Tool?

The tool is a way to demonstrate Google’s Cloud Vision API.

The Cloud Vision API is a service that lets applications and websites connect to Google’s machine learning models, providing image analysis services that can be scaled.

The standalone tool itself lets you upload an image and tells you how Google’s machine learning algorithm interprets it.

Google’s Cloud Vision page describes the service like this:

“Cloud Vision enables developers to easily integrate vision detection capabilities into applications, including image tagging, face and landmark detection, optical character recognition (OCR), and tagging of explicit content.”

Here are five ways Google’s image analysis tools classify uploaded images:

  1. Faces.
  2. Objects.
  3. Labels.
  4. Properties.
  5. Safe search.
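Those five tabs correspond to feature types in the Cloud Vision `images:annotate` REST endpoint. As a rough sketch (the endpoint and feature names are documented, but the placeholder image bytes here are not a real image, and actually sending the request would require an API key):

```python
import base64
import json

# The five tabs in the demo map to feature types accepted by a POST to
# https://vision.googleapis.com/v1/images:annotate?key=API_KEY
FEATURES = [
    {"type": "FACE_DETECTION"},         # Faces tab
    {"type": "OBJECT_LOCALIZATION"},    # Objects tab
    {"type": "LABEL_DETECTION"},        # Labels tab
    {"type": "IMAGE_PROPERTIES"},       # Properties tab (dominant colors)
    {"type": "SAFE_SEARCH_DETECTION"},  # Safe Search tab
]

def build_annotate_request(image_bytes: bytes) -> dict:
    """Build the JSON body for an images:annotate request (not sent here)."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": FEATURES,
        }]
    }

# Placeholder bytes stand in for a real image file read from disk.
body = build_annotate_request(b"\x89PNG placeholder")
print(json.dumps(body, indent=2)[:120])
```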


Faces

The “Faces” tab offers an analysis of the emotion expressed by the face in the image.

The results are quite accurate.

The image below is of a person described as confused, but that’s not really an emotion.

The AI describes the emotion expressed on the face as surprise, with a confidence score of 96%.

Composite image created by author, July 2022; images sourced from Google Cloud Vision API and Shutterstock/Cast Of Thousands
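In the raw API response, face emotions come back as likelihood enums rather than percentages. A hypothetical sketch of picking the dominant emotion from a response fragment (the field names follow the documented REST shape; the sample values are made up to match the example above):

```python
# Illustrative fragment of a FACE_DETECTION response.
sample_response = {
    "faceAnnotations": [{
        "joyLikelihood": "VERY_UNLIKELY",
        "sorrowLikelihood": "VERY_UNLIKELY",
        "angerLikelihood": "VERY_UNLIKELY",
        "surpriseLikelihood": "VERY_LIKELY",
        "detectionConfidence": 0.96,
    }]
}

# Likelihood enums, ordered from least to most likely.
LIKELIHOOD_ORDER = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY",
                    "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def dominant_emotion(face: dict) -> str:
    """Return the emotion whose likelihood enum ranks highest."""
    emotions = ["joy", "sorrow", "anger", "surprise"]
    return max(emotions,
               key=lambda e: LIKELIHOOD_ORDER.index(face[e + "Likelihood"]))

face = sample_response["faceAnnotations"][0]
print(dominant_emotion(face))  # surprise
```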


Objects

The “Objects” tab shows which objects are in the image, such as glasses, a person, etc.

The tool accurately identifies horses and people.

Screenshot of the Google Vision tool. Composite image created by author, July 2022; images sourced from Google Cloud Vision API and Shutterstock/Lukas Gojda


Labels

The “Labels” tab displays details about the image that Google recognizes, such as ears and mouth, but also conceptual aspects such as portrait and photography.

This is particularly interesting because it shows how well Google’s image AI can understand what’s in an image.

Screenshot of Google Vision AI identifying objects in an uploaded photo. Composite image created by author, July 2022; images sourced from Google Cloud Vision API and Shutterstock/Lukas Gojda
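Each label in a `LABEL_DETECTION` response carries a confidence score, so one practical sketch is filtering labels down to the confident ones. The `description` and `score` field names follow the documented REST shape; the sample values here are made up:

```python
# Illustrative LABEL_DETECTION response fragment.
labels = [
    {"description": "Horse", "score": 0.98},
    {"description": "Mammal", "score": 0.93},
    {"description": "Photography", "score": 0.71},
    {"description": "Portrait", "score": 0.55},
]

def labels_above(annotations, threshold=0.7):
    """Return label descriptions at or above a confidence threshold."""
    return [a["description"] for a in annotations if a["score"] >= threshold]

print(labels_above(labels))  # ['Horse', 'Mammal', 'Photography']
```

The 0.7 threshold is an arbitrary assumption; a real audit would tune it to taste.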

Does Google use this as part of the ranking algorithm? That is not known.


Properties

The “Properties” tab shows the colors used in the image.

Screenshot of the Google Vision tool identifying the dominant colors in an image. Google Cloud Vision API screenshot, July 2022

On the surface, the purpose of this tool is not obvious and may seem somewhat useless.

But in reality, the colors of an image can be very important, especially for a featured image.

Images that contain a very wide range of colors can indicate a poorly optimized image with a bloated file size, which is something to watch out for.

Another useful idea about images and color is that images with a darker color range tend to produce larger image files.

In terms of SEO, the Properties section can be useful for identifying images throughout a website that can be replaced with smaller, better-optimized versions.
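As a very rough sketch of that idea: an image with a huge number of distinct colors often compresses less efficiently than a flat graphic. In practice the pixels would come from an image library (for example Pillow’s `Image.getdata()`); the synthetic pixel lists below are stand-ins:

```python
def distinct_color_count(pixels):
    """Count unique (r, g, b) tuples in a pixel sequence -- a crude
    proxy for how "busy" (and hard to compress) an image is."""
    return len(set(pixels))

# Synthetic stand-ins for real pixel data.
flat_graphic = [(255, 0, 0)] * 50 + [(255, 255, 255)] * 50   # logo-like
noisy_photo  = [(r, r, 255 - r) for r in range(100)]          # photo-like

print(distinct_color_count(flat_graphic))  # 2
print(distinct_color_count(noisy_photo))   # 100
```

This is only a heuristic for flagging candidates, not a substitute for checking actual file sizes.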

Also, featured images with color ranges that are muted or even grayscale can be something to watch out for, as featured images that lack vibrant colors tend not to stand out on social media, Google Discover, and Google News.

For example, featured images that are vivid can be easily scanned and possibly receive a higher click-through rate (CTR) when displayed in search results or Google Discover, as they are more eye-catching than images that are muted and fade into the background.
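One way to turn the dominant-color data from the Properties tab into a muted-versus-vivid check is average HSV saturation. This is a sketch under assumptions: the 0.25 threshold is arbitrary, and the color lists stand in for real dominant-color output:

```python
import colorsys

def average_saturation(rgb_colors):
    """Average HSV saturation (0..1) over a list of (r, g, b) colors."""
    sats = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[1]
            for r, g, b in rgb_colors]
    return sum(sats) / len(sats)

def looks_muted(rgb_colors, threshold=0.25):
    """Flag an image whose dominant colors are low-saturation (muted)."""
    return average_saturation(rgb_colors) < threshold

vivid = [(230, 30, 30), (30, 200, 60)]            # saturated red and green
grayscale = [(120, 120, 120), (200, 200, 200)]    # zero saturation

print(looks_muted(vivid), looks_muted(grayscale))  # False True
```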

Many variables can affect image CTR performance, but this offers a way to scale the image auditing process across an entire website.

eBay conducted a study of product images and CTR and found that images with lighter background colors tended to have higher CTRs.

eBay researchers noted:

“In this article, we find that product image characteristics can impact user search behavior.

We find that certain image features correlate with CTR in a product search engine and these features can help model click-through rate for shopping search applications.

This research can encourage sellers to submit better images for the products they sell.”

Anecdotally, using bright colors for featured images can be helpful in increasing the CTR of sites that rely on traffic from Google Discover and Google News.

Obviously, many factors impact the CTR of Google Discover and Google News. But an image that stands out from the rest can be useful.

For this reason, using the Vision tool to understand the colors used can be useful for image-wide auditing.

Safe Search

Safe Search shows how the image rates for potentially unsafe content. The categories are:

  • Adult.
  • Spoof.
  • Medical.
  • Violence.
  • Racy.

Google search has filters that rate a web page for dangerous or inappropriate content.

For this reason, the Safe Search section of the tool is very important: if an image unintentionally triggers a safe search filter, the webpage may not rank for potential site visitors who search for its content.

Google Vision Safe Search analysis. Google Cloud Vision API screenshot, July 2022

The screenshot above shows the evaluation of a photo of racehorses on a racetrack. The tool accurately identifies that there is no medical or adult content in the image.
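In the raw response, each Safe Search category comes back as a likelihood enum, so an audit script might flag anything rated likely or worse. The `safeSearchAnnotation` field names follow the documented REST shape; the sample values are made up to mirror the racehorse example:

```python
# Illustrative SAFE_SEARCH_DETECTION response fragment.
safe_search = {
    "adult": "VERY_UNLIKELY",
    "spoof": "UNLIKELY",
    "medical": "VERY_UNLIKELY",
    "violence": "POSSIBLE",
    "racy": "UNLIKELY",
}

RISKY = {"LIKELY", "VERY_LIKELY"}

def flagged_categories(annotation):
    """Categories rated LIKELY or VERY_LIKELY -- worth a manual review
    before publishing the image."""
    return [cat for cat, level in annotation.items() if level in RISKY]

print(flagged_categories(safe_search))  # []
```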

Text: Optical Character Recognition (OCR)

Google Vision has a remarkable ability to read text from a photograph.

The Vision tool is able to accurately read the text in the image below:

Screenshot of the Vision tool accurately reading text from an image. Composite image created by author, July 2022; images sourced from Google Cloud Vision API and Shutterstock/Melissa King

As can be seen above, Google has the ability, via Optical Character Recognition (OCR), to read words in pictures.

However, this does not indicate that Google is using OCR for search ranking purposes.

The point is that Google recommends using words around images to help it understand what an image is about. Even for images that contain text, Google may still depend on the words around the image to understand what the image is about and relevant to.
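For completeness, extracting the detected text from a `TEXT_DETECTION` response is straightforward: the first `textAnnotations` entry holds the full text, and later entries hold individual words. The field names follow the documented REST shape; the sample values are invented:

```python
# Illustrative TEXT_DETECTION (OCR) response fragment.
response = {
    "textAnnotations": [
        {"description": "FRESH\nCOFFEE"},  # full detected text
        {"description": "FRESH"},          # per-word entries follow
        {"description": "COFFEE"},
    ]
}

def full_text(resp: dict) -> str:
    """Return the full OCR text, or an empty string if none was detected."""
    annotations = resp.get("textAnnotations", [])
    return annotations[0]["description"] if annotations else ""

print(full_text(response).splitlines())  # ['FRESH', 'COFFEE']
```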

Google’s image SEO guidelines repeatedly emphasize the use of words to provide context for images:

“By adding more context around images, results can become much more useful, which can lead to higher quality traffic to your site.

… Whenever possible, place images near relevant text.

…Google extracts information about the subject of the image from the content of the page…

… Google uses alt text with computer vision algorithms and page content to understand the subject of the image.”

It’s very clear from Google’s documentation that Google depends on the context of text around images to understand what the image is about.


Google’s Vision AI tool provides a way to test the technology that publishers can connect to through an API in order to scale image classification and extract data for use on a site.

But it also gives insight into the evolution of image tagging, annotation, and optical character recognition algorithms.

Upload an image here to see how it is classified and if a machine sees it the same way you do.


Featured image by Maksim Shmeljov/Shutterstock
