SIFT Directory Reference

Image matching and object recognition using SIFT keypoints.


src/SIFT/

Files

file  app-build-SIFT-database.C [code]
file  app-match-SIFT-database.C [code]
file  app-SIFT-panorama.C [code]
file  app-SIFT-VisualObjectDBQt.C [code]
file  app-SIFT.C [code]
file  CameraIntrinsicParam.H [code]
file  FeatureVector.C [code]
file  FeatureVector.H [code]
file  Histogram.C [code]
file  Histogram.H [code]
file  ilabLoweSiftComp.C [code]
file  KDTree.C [code]
file  KDTree.H [code]
file  Keypoint.C [code]
file  Keypoint.H [code]
file  KeypointMatch.H [code]
file  LoweSIFT.C [code]
file  LoweSIFT.H [code]
file  ObjRecUtil.cc [code]
file  README.dxy [code]
file  ScaleSpace.C [code]
file  ScaleSpace.H [code]
file  SIFTaffine.H [code]
file  SIFTegomotion.C [code]
file  SIFTegomotion.H [code]
file  SIFThough.C [code]
file  SIFThough.H [code]
file  test-LoweSIFT.C [code]
file  test-ScaleSpace.C [code]
file  test-SIFT.C [code]
file  test-SIFTimageMatch.C [code]
file  VisualObject.C [code]
file  VisualObject.H [code]
file  VisualObjectDB.C [code]
file  VisualObjectDB.H [code]
file  VisualObjectDBQt.qt.C [code]
file  VisualObjectDBQt.qt.H [code]
file  VisualObjectMatch.C [code]
file  VisualObjectMatch.H [code]
file  VisualObjectMatchAlgo.C [code]
file  VisualObjectMatchAlgo.H [code]

Detailed Description

Image matching and object recognition using SIFT keypoints.

This directory contains a suite of classes for matching images and recognizing the objects in them. The general approach follows David Lowe's work at the University of British Columbia, Canada; most of the implementation was written here from a careful reading of his 2004 IJCV paper.

Some of the code here is also based on the Hugin panorama software.

Given an image, a number of Scale-Invariant Feature Transform (SIFT) keypoints can be extracted. These keypoints mark locations in the image that have a distinctive local appearance; for example, the corner of a textured object, a letter, an eye, or a mouth. Typical images contain many such keypoints, usually hundreds to thousands.
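Each keypoint carries a descriptor summarizing its local appearance; in Lowe's formulation this is a 128-dimensional vector (a 4x4 spatial grid of 8-bin orientation histograms), and two keypoints look alike when their descriptors are close in Euclidean distance. As a minimal illustration (the `Descriptor` alias and `descriptorDistance` function below are hypothetical, not the toolkit's Keypoint API):

```cpp
#include <array>
#include <cmath>
#include <cstddef>

// A SIFT descriptor is conventionally 128-dimensional:
// a 4x4 spatial grid times 8 orientation bins.
using Descriptor = std::array<float, 128>;

// Visual similarity between two keypoints is measured as the
// Euclidean distance between their descriptors.
float descriptorDistance(const Descriptor& a, const Descriptor& b)
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const float d = a[i] - b[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}
```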

Given two images, we can extract two lists of keypoints (class ScaleSpace, class Keypoint) and store them (class VisualObject, class VisualObjectDB). We can then look for keypoints with similar visual appearance across the two images (class KeypointMatch, class KDTree, class VisualObjectMatch). Given a set of matching keypoints, we can try to recover the geometric transform that relates the first image to the second (class VisualObjectMatch).
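The matching step can be sketched as follows, using Lowe's ratio test from the 2004 paper: a query descriptor is accepted only if its nearest database descriptor is clearly closer than the second-nearest (ratio below roughly 0.8). A k-d tree (class KDTree) only accelerates the nearest-neighbor search; the acceptance criterion is the same. This brute-force version is a hypothetical sketch, not the toolkit's VisualObjectMatch interface:

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Euclidean distance between two descriptors of equal dimension.
static float dist(const std::vector<float>& a, const std::vector<float>& b)
{
    float sum = 0.0f;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const float d = a[i] - b[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}

// For each query descriptor, return the index of the matched database
// descriptor, or -1 if the match fails Lowe's ratio test.
std::vector<int> matchKeypoints(const std::vector<std::vector<float>>& query,
                                const std::vector<std::vector<float>>& db,
                                float ratio = 0.8f)
{
    std::vector<int> matches(query.size(), -1);
    for (std::size_t q = 0; q < query.size(); ++q) {
        float best = std::numeric_limits<float>::max();
        float second = std::numeric_limits<float>::max();
        int bestIdx = -1;
        for (std::size_t i = 0; i < db.size(); ++i) {
            const float d = dist(query[q], db[i]);
            if (d < best) {
                second = best;
                best = d;
                bestIdx = static_cast<int>(i);
            } else if (d < second) {
                second = d;
            }
        }
        // Accept only unambiguous matches: best clearly beats second-best.
        if (bestIdx >= 0 && best < ratio * second)
            matches[q] = bestIdx;
    }
    return matches;
}
```

Ambiguous keypoints (two near-equidistant candidates) are rejected, which is what makes the ratio test robust to repetitive texture.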

This can be used to stitch two or more images together to form a mosaic or panorama. It can also be used to recognize attended locations as matching some known objects stored in an object database (see Neuro/Inferotemporal).
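The simplest geometric model relating the two images is a 2D similarity transform (uniform scale, rotation, translation), which is fully determined by two matched keypoint pairs; an affine model (as in SIFTaffine.H) adds shear and needs three pairs. A minimal sketch of similarity recovery (the `recoverSimilarity` function is hypothetical, not the toolkit's API):

```cpp
#include <cmath>

// 2D similarity transform: u = a*x - b*y + tx, v = b*x + a*y + ty,
// where (a, b) = s*(cos t, sin t) encodes scale s and rotation t.
struct Similarity { float a, b, tx, ty; };

// Solve for (a, b, tx, ty) exactly from two point correspondences
// (x1,y1)->(u1,v1) and (x2,y2)->(u2,v2). The two points must be
// distinct; with more matches one would solve in the least-squares
// sense instead.
Similarity recoverSimilarity(float x1, float y1, float u1, float v1,
                             float x2, float y2, float u2, float v2)
{
    const float dx = x1 - x2, dy = y1 - y2;
    const float du = u1 - u2, dv = v1 - v2;
    const float det = dx * dx + dy * dy; // zero iff the points coincide
    Similarity s;
    s.a = (du * dx + dv * dy) / det;
    s.b = (dv * dx - du * dy) / det;
    s.tx = u1 - s.a * x1 + s.b * y1;
    s.ty = v1 - s.b * x1 - s.a * y1;
    return s;
}
```

With the transform in hand, one image can be warped into the other's frame for stitching, or the residual alignment error can be used to accept or reject an object-recognition hypothesis.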


Generated on Sun May 8 08:32:25 2011 for iLab Neuromorphic Vision Toolkit by  doxygen 1.6.3