
Toward leaner, greener algorithms

The human perceptual system makes excellent use of limited data. To make models smaller, researchers have applied this idea to recognizing actions in video. In a paper presented in August at the European Conference on Computer Vision (ECCV), researchers at the MIT-IBM Watson AI Lab describe a technique for extracting the most important information from a scene in a few glances, much as humans do.

Consider a sandwich-making video. Under the method described in the paper, a policy network selects the key frames, such as a knife cutting through roast beef or meat being stacked on a slice of bread, to represent at high resolution. Less important frames are skipped or processed at lower resolution.

A second model then uses this shortened, CliffsNotes-style version to label the video as "making a sandwich." According to the researchers, the method yields faster video classification at half the computational cost of the next-best model.
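The select-then-classify idea can be sketched with a toy policy that scores each frame by how much it changes from the previous one and keeps only the highest-scoring fraction at full resolution. This is a minimal illustration under stated assumptions, not the authors' trained policy network: the `select_frames` function, the frame-difference scoring heuristic, and the `keep_ratio` parameter are all invented here for the sake of the sketch.

```python
import numpy as np

def select_frames(frames, keep_ratio=0.25):
    """Toy stand-in for a learned policy network (assumption, not the
    paper's method): score frames by mean absolute change from the
    previous frame, keep the top fraction at full resolution, and
    downsample the rest 2x so they are cheaper to process."""
    # Score each frame by inter-frame change; the first frame scores 0.
    diffs = np.array(
        [np.abs(frames[i] - frames[i - 1]).mean() if i > 0 else 0.0
         for i in range(len(frames))]
    )
    # Keep at least one frame at high resolution.
    k = max(1, int(len(frames) * keep_ratio))
    keep = set(np.argsort(diffs)[-k:].tolist())
    processed = []
    for i, frame in enumerate(frames):
        if i in keep:
            processed.append(("high", frame))          # full resolution
        else:
            processed.append(("low", frame[::2, ::2]))  # naive 2x downsample
    return processed
```

A downstream classifier would then run on this mixed-resolution sequence, spending most of its compute on the handful of high-resolution frames, which is the source of the claimed savings.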
