The face of image search could be changing with Google’s new machine-learning system, which can automatically produce captions for images found on websites. Until now it has been incredibly difficult for Google to understand complex images, so there has been no way for its crawlers to ‘read’ an image without the publisher stating what the image is about.
Of course, this has led to people ‘stuffing’ image alt tags and descriptions with keywords, even when the image isn’t relevant to those keywords. Now, though, it looks as if we are in for a change: image descriptions will need to reflect the actual image content and not just be a string of keywords.
Following on from the recent algorithm changes that have targeted on-page text and keyword over-optimisation, this seems to be the next logical step in Google providing contextually relevant results for the user’s search query, whether they are searching for web content or a related image.
We have always told our clients to describe the image properly when using the alt attribute. If your image is of a ‘black dog’ then that’s what you should write, rather than inserting your keywords, which could be something like ‘Best Dog Groomer in London’. That text doesn’t describe the image, so it’s falsely labelled. Accurate alt text has always been important for users of screen readers, who rely on this information being read out to them so they can understand the context of the image.
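As a simple illustration (the filenames and wording here are just hypothetical examples), a properly labelled image versus a keyword-stuffed one might look like this:

```html
<!-- Good: the alt text describes what is actually in the image -->
<img src="black-dog.jpg" alt="A black Labrador sitting in a park">

<!-- Bad: the alt text is stuffed with keywords unrelated to the image content -->
<img src="black-dog.jpg" alt="Best Dog Groomer in London cheap dog grooming">
```

The first version serves screen-reader users and search engines alike; the second tells neither of them anything true about the picture.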
Of course, if you can naturally work any of your keywords into the image description then good on you. As long as the description genuinely matches the image content, you’re onto a winner!
This technology is not ready yet, but we don’t think it will be too long before it’s launched. Our advice: if you have been stuffing your images with keywords, now is the time to do some housekeeping… and of course, moving forwards, please use the alt attribute and description wisely!