Consider this: Some experts say that as much as two-thirds of the human brain is dedicated to processing the information we gather with our eyes. Imagine the enormous complexities involved in attempting to engineer processes that allow a machine to mimic that kind of brain power.
Goggles attempts this feat by analyzing images with computer-vision algorithms. It then compares the extracted information (called features) against its expansive image database to find possible matches. Items with a lot of texture and recognizable feature points are identified almost immediately.
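The matching idea described above can be sketched in miniature. This is not Google's actual algorithm (which is proprietary and far more sophisticated); it's a toy illustration in which each "image" is a small grayscale grid, the extracted "feature" is a binary brightness hash, and matching picks the database entry with the smallest Hamming distance. All labels and values here are hypothetical.

```python
# Toy sketch of feature-based image matching -- purely illustrative,
# NOT the algorithm Goggles actually uses.

def average_hash(pixels):
    """Reduce a grayscale grid to a bit tuple: 1 where a pixel is
    brighter than the image's mean brightness, 0 otherwise."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(a, b):
    """Count positions where two equal-length bit tuples differ."""
    return sum(x != y for x, y in zip(a, b))

def best_match(query, database):
    """Return the label whose stored hash is closest to the query's."""
    qh = average_hash(query)
    return min(database, key=lambda label: hamming(qh, database[label]))

# Tiny "database" of precomputed feature hashes (hypothetical labels).
db = {
    "coke_can":    average_hash([[200, 40], [210, 30]]),
    "golden_gate": average_hash([[90, 180], [85, 190]]),
}

# A slightly noisy shot of the same can still hashes identically,
# so it matches the stored entry.
query = [[190, 55], [205, 45]]
print(best_match(query, db))  # -> coke_can
```

Real systems extract far richer features (keypoints, descriptors, texture statistics) and search billions of entries, but the pipeline shape is the same: extract, compare, rank.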
As of this writing, Goggles is best for commonly photographed subjects. Capture an image of a Coke can or the Golden Gate Bridge, and you'll likely see an accurate description in your search results. Bright, colorful product packaging, for instance, is pretty simple to recognize. Less commonly known places and things aren't as easy for Google to discern.
Because Goggles relies on your smartphone's camera, object and text identification is sometimes harder than it should be. You can give Goggles a big assist by taking your pictures in good light, holding the camera steady, and using a decent resolution setting. Grainy or blurry photos are a recipe for Goggles failure. To further improve results, Goggles has a feature that lets you crop the image, leaving only the most pertinent (and recognizable) elements for analysis.
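Why does cropping help? Everything outside the region of interest is noise the matcher has to ignore. In grid terms, a crop is just row and column slicing; the function below is a minimal sketch of that idea (the grid values are made up for illustration).

```python
# Minimal sketch of cropping a grayscale grid to a region of interest.

def crop(pixels, top, left, height, width):
    """Keep only the height-by-width region starting at (top, left)."""
    return [row[left:left + width] for row in pixels[top:top + height]]

img = [[10, 20, 30, 40],
       [50, 60, 70, 80],
       [90, 100, 110, 120]]

# Keep the 2x2 region starting one row down, one column in.
print(crop(img, 1, 1, 2, 2))  # -> [[60, 70], [100, 110]]
```

The smaller the cropped region, the fewer irrelevant features the search has to discard, which is exactly the assist the article describes.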
Even with careful picture-taking, Goggles still needs work, which is why it's in the Labs (or experimental) section of Google's site. But as the science and technology behind visual search improve, you can expect to do a lot more searching with your phone's camera in the very near future.