The sense of vision is something most of us take for granted in day-to-day life; only a visually impaired person can understand its true value and necessity. But AI-based computer vision systems may soon help blind and visually impaired people navigate.
Tech giants such as Google, Baidu, Facebook, and Microsoft are working on a range of products that apply deep learning for the visually impaired. One of these is image captioning, in which a system describes the content of an image. To accelerate further research and broaden the possible applications of this technology, Google released the latest version of its image captioning system as an open-source model in TensorFlow, called "Show and Tell: A Neural Image Caption Generator". The project can be found at https://github.com/tensorflow/models/tree/master/im2txt and the full paper at https://arxiv.org/abs/1609.06647
The Show and Tell model is an example of an encoder-decoder neural network. It works by first "encoding" an image into a fixed-length vector representation, and then "decoding" the representation into a natural language description.
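To make the encode-then-decode flow concrete, here is a minimal toy sketch in NumPy. It is not the real model: the tiny dimensions, random weight matrices, and the `encode`/`decode` helpers are all illustrative stand-ins (the actual encoder is a deep CNN and the decoder an LSTM, as described below), but the overall shape of the computation is the same: an image becomes a fixed-length vector, which then drives a step-by-step greedy decoding loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, illustrative only; the real model uses a 2048-d
# Inception v3 image embedding and a vocabulary of thousands of words.
EMBED_DIM, VOCAB_SIZE, MAX_LEN = 8, 12, 5
END = 1  # id of the end-of-caption token

# Stand-in "encoder": in the real model this is a deep CNN.
W_enc = rng.standard_normal((16, EMBED_DIM))

def encode(image_features):
    """Map raw image features to a fixed-length vector."""
    return np.tanh(image_features @ W_enc)

# Stand-in "decoder": in the real model this is an LSTM language model.
W_dec = rng.standard_normal((EMBED_DIM, VOCAB_SIZE))

def decode(image_vec, max_len=MAX_LEN):
    """Greedily emit one word id per step, conditioned on the image."""
    caption, state = [], image_vec
    for _ in range(max_len):
        logits = state @ W_dec          # score every vocabulary word
        word = int(np.argmax(logits))   # greedy choice
        if word == END:
            break
        caption.append(word)
        state = np.roll(state, 1)       # toy state update, not an LSTM
    return caption

image = rng.standard_normal(16)
print(decode(encode(image)))  # a short list of word ids
```

In the real system the emitted word ids are mapped back to vocabulary words to produce the natural language description.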
The image encoder is a deep convolutional neural network. This type of network is widely used for image tasks and is currently state-of-the-art for object recognition and detection. The Inception v3 image recognition model pretrained on the ILSVRC-2012-CLS image classification dataset is used as the encoder.
The decoder is a long short-term memory (LSTM) network. This type of network is commonly used for sequence modeling tasks such as language modeling and machine translation. In the Show and Tell model, the LSTM network is trained as a language model conditioned on the image encoding.
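The conditioning works by feeding the image embedding to the LSTM as its very first input, before any word embeddings. A minimal NumPy implementation of one LSTM step makes this explicit; the weights here are random placeholders (they are learned in training), and the hidden size is kept tiny for readability.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W, U, b pack the input, forget, output and
    candidate gates as four stacked blocks of size DIM each."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:n])        # input gate
    f = sigmoid(z[n:2*n])     # forget gate
    o = sigmoid(z[2*n:3*n])   # output gate
    g = np.tanh(z[3*n:])      # candidate cell state
    c = f * c + i * g         # new cell state
    h = o * np.tanh(c)        # new hidden state
    return h, c

rng = np.random.default_rng(1)
DIM = 4  # hidden size == input size, for simplicity
W = rng.standard_normal((4 * DIM, DIM))
U = rng.standard_normal((4 * DIM, DIM))
b = np.zeros(4 * DIM)

# Conditioning on the image: the image embedding is the first input,
# so the LSTM state "remembers" the image while generating words.
h = c = np.zeros(DIM)
image_embedding = rng.standard_normal(DIM)
h, c = lstm_step(image_embedding, h, c, W, U, b)

# Subsequent steps consume word embeddings, one per generated word.
word_embedding = rng.standard_normal(DIM)
h, c = lstm_step(word_embedding, h, c, W, U, b)
print(h.shape)  # (4,)
```

At each step the hidden state `h` would be projected onto the vocabulary to score the next word, exactly as a language model does, but with the image baked into the state from step one.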
Words in the captions are represented with an embedding model. Each word in the vocabulary is associated with a fixed-length vector representation that is learned during training.
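An embedding model is just a learned lookup table: one fixed-length vector per vocabulary word. The sketch below shows the lookup mechanics; the tiny vocabulary and random matrix are placeholders for the learned parameters.

```python
import numpy as np

# Toy vocabulary; the real one is built from the training captions.
VOCAB = {"<S>": 0, "</S>": 1, "a": 2, "group": 3, "of": 4, "people": 5}
EMBED_DIM = 6

rng = np.random.default_rng(2)
# One row per vocabulary word. Random here; learned during training.
embedding_matrix = rng.standard_normal((len(VOCAB), EMBED_DIM))

def embed(words):
    """Look up the fixed-length vector for each word."""
    return embedding_matrix[[VOCAB[w] for w in words]]

vectors = embed(["a", "group", "of", "people"])
print(vectors.shape)  # (4, 6)
```

Because the rows are trained jointly with the rest of the model, words that play similar roles in captions end up with similar vectors.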
[Image] Caption generated: "a group of people walking down a street."

[Image] Caption generated: "a group of cars parked on the side of a street."
We at Cere Labs, an artificial intelligence startup based in Mumbai, have built an application that extends this technique to video, continuously describing a video's content. First, we trained the Show and Tell model on the MSCOCO image captioning dataset to obtain our custom model. We then used OpenCV to extract frames from a video and fed them to Show and Tell's inference algorithm, which captions each individual frame. To speed up inference, the rate at which frames are processed was tuned so that video playback and caption generation stay smooth and in sync.

The results were impressive: the generated captions contain some errors, but these can be reduced with more data and training. We further extended the application to caption the feed from a camera, so that descriptions are generated in real time and could one day help blind and visually impaired people. The possibilities are enormous, with applications even in robotics.
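The post does not include our pipeline code, but the frame-rate tuning idea can be sketched in a few lines. The function name and parameters below are hypothetical; the point is simply that captioning every frame of a 30 fps video is wasteful when inference runs much slower, so only every N-th frame is sent to the captioner.

```python
def frames_to_caption(video_fps, caption_fps, total_frames):
    """Pick the frame indices to send to the captioner so inference
    keeps up with playback. E.g. a 30 fps video captioned at 2 fps
    only needs every 15th frame; the last caption is shown on the
    frames in between."""
    step = max(1, round(video_fps / caption_fps))
    return list(range(0, total_frames, step))

# A 30 fps clip, 90 frames long, captioned twice per second:
print(frames_to_caption(30, 2, 90))  # [0, 15, 30, 45, 60, 75]
```

In practice the frames themselves would be read with OpenCV (`cv2.VideoCapture`) and each selected frame passed to the Show and Tell inference step, while the remaining frames are displayed with the most recent caption overlaid.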
We further plan to experiment and come up with more innovative applications of this promising technology.
By Amol Bhivarkar,
Researcher / Senior Software Developer,