Today, while surfing the net for some material related to Cloud Computing, I landed on one of the coolest computer vision project websites I have come across. It is called IM2GPS and it was developed at Carnegie Mellon University by James Hays and Alexei Efros.
What did those guys do?
They developed an algorithm that can extract geographic information from pictures taken anywhere in the world, by leveraging the huge database of GPS-tagged images on Flickr. The location is estimated from scene-level matches (color histograms, textures, geometric context and other cues), not from individual details such as people's skin color or clothing. The algorithm is illustrated with three pictures: one showing the Notre Dame cathedral in Paris, one showing a street (probably a Mediterranean scene) and one showing an island. After running the algorithm, the proposed results are displayed, indicating the most probable, or sometimes even the exact, location.
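To make the scene-matching idea concrete, here is a toy sketch (not the authors' actual code): each image is reduced to a coarse color histogram, and the query simply inherits the GPS tag of its nearest neighbor in a tagged reference collection. The pixel lists and coordinates below are hypothetical stand-ins; the real system searches millions of Flickr photos and combines several richer features.

```python
# Toy sketch of scene matching by color histogram (an assumption of mine,
# simplified from the general idea; IM2GPS uses several features, not just this).

def color_histogram(pixels, bins=4):
    """Quantize (r, g, b) pixels (0-255 each) into a normalized histogram."""
    hist = [0.0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def intersection(h1, h2):
    """Histogram intersection similarity: higher means more alike."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def guess_location(query_pixels, tagged_db):
    """Return the GPS tag of the most similar reference image."""
    qh = color_histogram(query_pixels)
    best = max(tagged_db,
               key=lambda item: intersection(qh, color_histogram(item[0])))
    return best[1]

# Hypothetical reference "images" with (lat, lon) tags.
paris = [(200, 200, 210)] * 50   # grey stone tones
beach = [(240, 220, 160)] * 50   # sandy tones
db = [(paris, (48.853, 2.350)), (beach, (36.4, 25.4))]

print(guess_location([(205, 198, 215)] * 50, db))  # matches the Paris-like image
```

Of course, a bare nearest neighbor like this would be hopeless at Flickr scale; the point is only that the per-image work is a simple, independent comparison.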
Why is this algorithm interesting from the Cloud Computing point of view? Because it is a typical problem of iterating over and processing huge quantities of data, a problem of task parallelization, and current Cloud architectures offer the power and the tools for solving it.
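The search is embarrassingly parallel, which is exactly what makes it such a good Cloud workload. A minimal sketch, assuming a database of (feature, tag) records and a simple scalar distance: split the database into chunks, scan each chunk concurrently for its best match, then merge the per-chunk winners. The thread pool here is just a local stand-in for real cloud workers.

```python
# Minimal map/merge sketch of the parallel search (my illustration,
# not the project's code). Features are stand-in scalars; a real
# system would ship image descriptors to distributed workers.
from concurrent.futures import ThreadPoolExecutor

def best_in_chunk(query, chunk):
    """Return (distance, tag) of the closest record in one chunk."""
    return min((abs(query - feat), tag) for feat, tag in chunk)

def parallel_search(query, db, workers=4):
    size = max(1, len(db) // workers)
    chunks = [db[i:i + size] for i in range(0, len(db), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda c: best_in_chunk(query, c), chunks)
    return min(results)[1]  # tag of the overall best match

db = [(0.9, "Paris"), (0.2, "Santorini"), (0.5, "Lisbon"), (0.7, "Rome")]
print(parallel_search(0.55, db, workers=2))  # closest feature is 0.5 -> "Lisbon"
```

Since each chunk is scanned independently and only the tiny per-chunk winners are merged, adding workers (or cloud nodes) scales the scan almost linearly with the database size.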
The tests were made using Flickr's images, but there are also many other image providers, some of them hosting their databases on Cloud architectures, and I am sure that soon we will be able to determine the locations of our own interesting pictures ourselves. This is just the beginning of this branch of geographical computer vision, as the two authors named this research direction.
The entire material can be found here.