Instance-level Recognition Practical

2012 edition

Andrea Vedaldi and Andrew Zisserman


Introduction

Getting started

Exercise description

Part I: Sparse features for matching specific objects in images

Stage I.A: SIFT feature detections

Stage I.B: SIFT feature descriptors and matching between images

Stage I.C: Improving SIFT matching (i) using Lowe’s second nearest neighbour test

Stage I.D: Improving SIFT matching (ii) using a geometric transformation

Part II: Affine co-variant detectors

Part III: Towards large scale retrieval

Stage III.A: Accelerating descriptor matching with visual words

Stage III.B: Searching with an inverted index

Stage III.C: Geometric rescoring

Part IV: Large scale retrieval

Acknowledgements

History

Introduction

In instance-level recognition, a specific object or scene is matched: for example, recognizing a specific building, such as Notre Dame, or a specific painting, such as ‘Starry Night’ by Van Gogh. The object is recognized despite changes in scale, camera viewpoint, illumination conditions and partial occlusion. An important application is image retrieval - starting from an image of an object of interest (the query), search through an image dataset to obtain (or retrieve) those images that contain the target object.

The goal of this session is to get basic practical experience with the methods that enable specific object recognition. It includes: (i) using SIFT features to obtain sparse matches between two images; (ii) using affine co-variant detectors to cover changes in viewpoint; (iii) vector quantizing the SIFT descriptors into visual words to enable large scale retrieval; and (iv) constructing and using an image retrieval system to identify objects.

Getting started

  1. Download the code and data (the data includes images and pre-computed features), either:
     - from the JHU server: everything (~513 MB), or
     - from the Oxford server: everything, or code only + data only (~513 MB).
  2. Unpack the archive(s). This will create a directory called instance-search-practical.
  3. Start MATLAB in the directory instance-search-practical.
  4. Try running the setup.m command (type setup without the .m suffix).

Note. If you have a copy of the VLFeat toolbox loaded automatically on starting MATLAB, it may conflict with the copy shipped with this practical (this will generate errors during the execution of the exercises). In this case, switch to the shipped version of the toolbox. First, try issuing clear mex ; vlfeat/toolbox/vl_setup. If this does not solve the problem, exit MATLAB and restart it without loading your default VLFeat installation (this may require editing your MATLAB startup.m file).

As you progress through the exercises you can use the MATLAB help command to display the documentation of the functions that you need to use. For example, try typing help setup.

Exercise description

Open and edit the script exercise1.m in the MATLAB editor. The script contains commented code and a description for all steps of this exercise, corresponding to Part I of this document. You can cut and paste this code into the MATLAB window to run it, and will need to modify it as you go through the session. The files exercise2.m, exercise3.m, and exercise4.m are given for Parts II, III, and IV.

Part I: Sparse features for matching specific objects in images

Stage I.A: SIFT feature detections

The SIFT feature has both a detector and a descriptor. We will start by computing and visualizing the SIFT feature detections for two images of the same object (a building facade). Load an image, compute and visualise the SIFT feature detections (frames):

% Load an image
im1 = imread('data/oxbuild_lite/all_souls_000002.jpg') ;

% Compute the SIFT features
[frames1, descrs1] = getFeatures(im1, 'peakThreshold', 0.001) ;

% Plot the image and overlay the feature frames
figure(1) ; imagesc(im1) ; axis equal off ; hold on ;
vl_plotframe(frames1, 'linewidth', 2) ;

A SIFT frame is a circle with an orientation and is specified by four parameters: the center (tx, ty), the scale s, and the rotation θ (in radians), resulting in a vector of four parameters (s, θ, tx, ty).
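For example, the parameters of the first detected frame can be inspected as follows (a minimal sketch, assuming getFeatures returns frames in the oriented-disc format with one column [tx ; ty ; s ; theta] per detection; check help getFeatures for the exact convention):

% Inspect the parameters of the first detected frame
% (assumed convention: one column [tx ; ty ; s ; theta] per frame)
tx    = frames1(1,1) ;   % centre, x coordinate (pixels)
ty    = frames1(2,1) ;   % centre, y coordinate (pixels)
s     = frames1(3,1) ;   % scale (radius of the circle, pixels)
theta = frames1(4,1) ;   % orientation (radians)
fprintf('frame 1: tx=%.1f ty=%.1f s=%.1f theta=%.2f rad\n', tx, ty, s, theta) ;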

Now create a rotated and scaled copy of the image, and re-compute and visualise the SIFT detections:

% Create a rotated and scaled version of the first image
im3 = imresize(imrotate(im1,35,'bilinear'),0.7) ;

% Compute the SIFT features
[frames3, descrs3] = getFeatures(im3, 'peakThreshold', 0.001) ;
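To visualise the detections on this rotated and scaled copy, the plotting commands used above can simply be reused:

% Plot the rotated and scaled image and overlay its feature frames
figure(2) ; imagesc(im3) ; axis equal off ; hold on ;
vl_plotframe(frames3, 'linewidth', 2) ;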

Examine the original image and its rotated and scaled version and convince yourself that the detections overlap the same scene regions (even though the circles have moved their image position and changed radius). It is helpful to zoom into a smaller image area using MATLAB's magnifying glass tool. This demonstrates that the detection process transforms with (is co-variant with) translations, rotations and isotropic scalings. This class of transformations is known as a similarity or equiform transformation.

Now repeat the exercise with a pair of natural images. Start by loading the second one:

% Load a second image
im2 = imread('data/oxbuild_lite/all_souls_000015.jpg') ;

and plot images and feature frames. Again you should see that many of the detections overlap the same scene region. Note that, while repeatability occurs for the pair of natural views, it is much better for the synthetically rotated pair.

  1. Note the change in density of detections across the image.
  2. Why does it change? Will it be a problem for matching? How could it be avoided?

Stage I.B: SIFT feature descriptors and matching between images

Next we will use the descriptor computed over each detection to match the detections between images. We will start with the simplest matching scheme (first nearest neighbour of descriptors) and then add more sophisticated methods to eliminate any mismatches.

  1. Visualize the SIFT descriptors for the detected feature frames with the function vl_plotsiftdescriptor. Then use vl_plotframe to overlay the corresponding frames. Note that the descriptors are computed over a much larger region (shown in blue) than the detection. Why?
  2. Compute first nearest neighbour matches: for each SIFT descriptor in the first image, compute its nearest neighbour in the second image with the function findNeighbours (an illustrative sketch of this step appears after this list).
  3. Visualize correspondences using lines joining matched SIFT features with the function plotMatches.
  4. Notice that there are many mismatches.
  5. Examine some of the mismatches to understand why the mistakes are being made. For example, is the change in lighting a problem? What additional constraints can be applied to remove the mismatches?
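findNeighbours and plotMatches are helper functions shipped with the practical (use help to see their interfaces). Purely as an illustration of what first nearest neighbour matching computes, here is a minimal plain-MATLAB sketch, assuming descrs1 and descrs2 hold one descriptor per column (e.g. obtained by applying getFeatures to the two images as above):

% Brute-force first-nearest-neighbour matching (illustrative sketch only;
% the practical provides findNeighbours for this step)
d1 = double(descrs1) ; d2 = double(descrs2) ;
% Squared Euclidean distance between every pair of descriptors
dists = bsxfun(@plus, sum(d1.^2,1)', sum(d2.^2,1)) - 2 * (d1' * d2) ;
[nnDists, nn] = min(dists, [], 2) ;     % nn(i): best match in image 2 for descriptor i
matches = [1:size(d1,2) ; nn'] ;        % tentative correspondences, one per column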

Stage I.C: Improving SIFT matching (i) using Lowe’s second nearest neighbour test

Lowe introduced a second nearest neighbour (2nd NN) test to identify, and hence remove, ambiguous matches. The idea is to identify distinctive matches by a threshold on the ratio of first to second NN distances.

In the MATLAB file, the ratio is nnThreshold = 1NN distance / 2NN distance.
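As a minimal sketch of the test, reusing the pairwise squared-distance matrix dists from the matching sketch in Stage I.B:

% Lowe's second nearest neighbour (ratio) test (illustrative sketch)
nnThreshold = 0.8 ;
sorted = sqrt(sort(max(dists, 0), 2)) ;    % distances to the neighbours, ascending
ratio  = sorted(:,1) ./ sorted(:,2) ;      % 1st NN distance / 2nd NN distance
keep   = ratio < nnThreshold ;             % distinctive (unambiguous) matches
fprintf('kept %d of %d tentative matches\n', nnz(keep), numel(keep)) ;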

  1. Vary the ratio nnThreshold in a range from 0.1 to 0.9, and examine how the number of matches and number of mismatches changes.
  2. A value of nnThreshold = 0.8 is often a good compromise between losing too many matches and rejecting mismatches.

Stage I.D: Improving SIFT matching (ii) using a geometric transformation

In addition to the 2nd NN test, we can also require consistency between the matches and a geometric transformation between the images. For the moment we will look for matches that are consistent with a similarity transformation

 x' = s R(θ) x + T

which consists of a rotation by θ, an isotropic scaling (i.e. the same in all directions) by s, and a translation by a vector T = (tx, ty); here R(θ) denotes the 2×2 rotation matrix and x a point in the image. This transformation is specified by four parameters (s, θ, tx, ty) and can be computed from a single correspondence between SIFT detections in each image.

  1. Work out how to compute this transformation from a single correspondence. Hint: recall from Stage I.A that a SIFT feature frame is an oriented circle, and map one frame onto the other (one possible solution is sketched below).
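One possible solution, as a minimal sketch assuming the oriented-disc frame convention [tx ; ty ; s ; theta] used above, where f1 and f2 are a matched pair of frame columns from the two images:

% Similarity transformation from a single correspondence (illustrative sketch)
s     = f2(3) / f1(3) ;                                   % relative scale
theta = f2(4) - f1(4) ;                                   % relative rotation
R     = [cos(theta) -sin(theta) ; sin(theta) cos(theta)] ;
t     = f2(1:2) - s * R * f1(1:2) ;                       % translation (tx, ty)
% Any point x (2x1) in the first image is then mapped to s*R*x + t in the second.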

The matches consistent with a similarity can then be found using a RANSAC inspired algorithm, implemented by the function geometricVerification:

For each tentative correspondence in turn:

  1. compute the similarity transformation;
  2. map all the SIFT detections in one image to the other using this transformation;
  3. accept matches that are within a threshold distance to the mapped detection (inliers);
  4. count the number of accepted matches.

Choose the transformation with the highest count of inliers.

After running this algorithm, the matches consistent with the chosen transformation (the inliers) are retained, and most mismatches should now be removed.
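A rough sketch of this hypothesize-and-verify loop is given below. It is illustrative only: geometricVerification implements a more careful version, and frames2, together with the 2xN matrix of tentative matches, is assumed to follow the conventions used earlier.

% Hypothesize-and-verify over tentative correspondences (illustrative sketch)
inlierThreshold = 25 ;                    % pixel distance threshold (assumed value)
bestInliers = [] ;
for m = 1:size(matches, 2)
  f1 = frames1(:, matches(1,m)) ;
  f2 = frames2(:, matches(2,m)) ;
  % Hypothesize a similarity from this single correspondence
  s = f2(3) / f1(3) ; theta = f2(4) - f1(4) ;
  R = [cos(theta) -sin(theta) ; sin(theta) cos(theta)] ;
  t = f2(1:2) - s * R * f1(1:2) ;
  % Map the matched detections from image 1 into image 2 and measure the error
  proj = s * R * frames1(1:2, matches(1,:)) + repmat(t, 1, size(matches, 2)) ;
  err  = sqrt(sum((proj - frames2(1:2, matches(2,:))).^2, 1)) ;
  inliers = find(err < inlierThreshold) ;
  if numel(inliers) > numel(bestInliers), bestInliers = inliers ; end
end
verifiedMatches = matches(:, bestInliers) ;   % matches consistent with the best similarity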

  1. Test this procedure by varying the threshold distance (edit geometricVerification). Note the number of inliers and number of mismatches.

If more matches are required the geometric transformation can be used alone, without also requiring the 2nd NN test. Indeed, since the 1st NN may not be the correct match, a list of potential (putative) matches can be generated for each SIFT descriptor by including the 1st NN, 2nd NN, 3rd NN etc.

Investigate how the number of correct matches (and time for computation) grows as the potential match list is extended, and the geometric transformation is used to select inliers. To this end:

  1. Change the code to include in the match list the 1st NN, 2nd NN, 3rd NN, … best matches for each feature (see the sketch after this list).
  2. Run geometric verification and check the number of verified matches using this expanded list.
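For step 1, a minimal sketch of building such an expanded list from the pairwise distance matrix dists of Stage I.B (the practical's findNeighbours may already expose an option for this):

% Putative matches using the k nearest neighbours of each feature (illustrative sketch)
k = 3 ;
[~, order] = sort(dists, 2) ;          % neighbours sorted by increasing distance
n = size(dists, 1) ;
matches = [repmat(1:n, 1, k) ; reshape(order(:, 1:k), 1, [])] ;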

Part II: Affine co-variant detectors

So far the change in viewpoint between images has been a similarity transformation. Now we consider more severe viewpoint changes - for example, where an object is fronto-parallel in one view and turns away from the camera in the other.


In this case, there is foreshortening (anisotropic scaling) and perspective distortions between the two images (as well as in-plane rotation, translation and scaling). A circle in one image cannot cover the same scene area as a circle in the other, but an ellipse can.  Affine co-variant detectors are designed to find such regions.

In the following we will compare the number of matches using a similarity and affine co-variant detector as the viewpoint becomes progressively more extreme. The detectors are SIFT (for similarity) and Hessian-affine (for affine). Note, a SIFT descriptor can be used for both types of detector.

  1. Open and examine the script exercise2.m in the MATLAB editor.
  2. Run the script.
  3. Note how the number of verified matches changes as the viewpoint becomes more extreme.
  4. At first there are more similarity detector matches than affine. Why?
  5. Observe that the matches also identify the regions of the images that are in common.
  6. The transformation between the images induced by the plane is a planar homography. The detections are only affine co-variant (not as general as a planar homography). So how can descriptors computed on these detections possibly match?

Part III: Towards large scale retrieval

In large scale retrieval the goal is to match a query image to a large database of images (for example the WWW or Wikipedia). The quality of a match is measured as the number of geometrically verified feature correspondences between the query and a database image. While the techniques discussed in Parts I and II are sufficient to do this, in practice they require too much memory to store the SIFT descriptors for all the detections in all the database images. We next explore two key ideas: one to reduce the memory footprint and pre-compute descriptor matches, and the other to speed up image retrieval.

  1. Open and edit the script exercise3.m in the MATLAB editor, and cut and paste to work through the following stages.

Stage III.A: Accelerating descriptor matching with visual words

Instead of matching feature descriptors directly, as done in Parts I and II, descriptors are usually first mapped to discrete symbols, also called visual words, by means of a clustering technique like K-means. The descriptors that are assigned to the same visual word are considered matched. Each of the rows in the following figure illustrates image patches that are mapped to the same visual word, and are hence indistinguishable by the representation.

[Figure: each row shows image patches assigned to the same visual word]

Then, matching two sets of feature descriptors (from two images) reduces to finding the intersection of two sets of symbols.
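For illustration, a visual word dictionary and the assignment of descriptors to words could be computed with VLFeat roughly as follows. This is only a sketch: the practical ships a pre-trained dictionary and ANN matcher (loaded in the next step), and trainDescrs and numWords here are assumed placeholders.

% Building a visual vocabulary and quantizing descriptors (illustrative sketch)
% trainDescrs: training descriptors, one per column (placeholder variable)
numWords = 1000 ;
vocab  = vl_kmeans(single(trainDescrs), numWords) ;        % one visual word per column
kdtree = vl_kdtreebuild(vocab) ;                           % ANN matcher (KD-tree)
words1 = vl_kdtreequery(kdtree, vocab, single(descrs1)) ;  % word index for each descriptor
words2 = vl_kdtreequery(kdtree, vocab, single(descrs2)) ;
% Two features are then considered matched if they are assigned the same word.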

  1. Load a visual word dictionary and an associated approximate nearest neighbour (ANN) matcher (the ANN matcher is used to determine the closest visual word to each descriptor and is based on a forest of KD trees).
  2. Given SIFT descriptors for two images, quantize them (assign them) into the corresponding visual words.
  3. Find corresponding features by looking for the same visual words in the two images and note the computation time.
  4. Geometrically verify these initial correspondences and count the number of inlier matches found.
  5. Find corresponding features by using the method of Parts I and II, i.e. by comparing the descriptors directly, and note the time. Geometrically verify these initial correspondences and count the number of inlier matches found.
  6. Compare the speed and number of inliers when using visual words vs raw SIFT descriptors by means of the function matchWords. Note, you should repeat the timing (by running the matching again) as the first time you run it there may be a delay as certain MATLAB components are loaded into memory.
  7. Optional: compare the speed and number of matches over another pair of images (from part I and II).

In the above procedure the time required to convert the descriptors into visual words was not accounted for. Why?

  1. What is the speedup in searching a large, fixed database of 10, 100, 1000 images?

Often multiple feature occurrences are mapped to the same visual word. In this case matchWords generates only one of the possible matches.

  1. Modify matchWords to generate more than one match for cases in which multiple features are mapped to the same visual word (one possible approach is sketched after this list).
  2. Most of these additional matches are incorrect. Filter them out by running geometricVerification.
  3. Compare the number of inliers obtained before and after this modification.
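One possible way to generate all such pairs, sketched on per-feature visual word indices (words1, words2, as in the quantization sketch above; the representation used inside matchWords may differ):

% Generate every match between features sharing a visual word (illustrative sketch)
matches = [] ;
sharedWords = reshape(intersect(words1, words2), 1, []) ;
for w = sharedWords
  i = find(words1 == w) ;                  % features in image 1 assigned this word
  j = find(words2 == w) ;                  % features in image 2 assigned this word
  [J, I] = meshgrid(j, i) ;
  matches = [matches, [I(:)' ; J(:)']] ;   % all pairs for this word
end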

Stage III.B: Searching with an inverted index

While matching with visual words is much faster than doing so by comparing feature descriptors directly, scoring images directly based on the number of geometrically verified matches still entails fitting a geometric model, a relatively slow operation. Rather than scoring all the images in the database in this way, we are going to use an approximation and count the number of visual words shared between two images.

To this end, one computes a histogram of the visual words in the query image and in the database images. Then the number of visual words in common can be computed from the intersection of the two histograms.

The histogram intersection can be thought of as a similarity measure between two histograms. In practice, this measure can be refined in several ways:

  1. By reducing the importance of common visual words. This is similar to using a stop-word list and can be implemented by weighting each word by its inverse document frequency.
  2. By normalising the weighted histograms to unit vectors and using the cosine of the angle between them as the similarity. This can be implemented simply as the inner product between normalised histograms, as sketched below.
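A minimal sketch combining both refinements, assuming histograms is a numWords x numImages matrix of visual word counts for the database and queryHist a numWords x 1 vector of counts for the query (placeholder variable names):

% tf-idf weighting and cosine-similarity scoring (illustrative sketch)
numImages = size(histograms, 2) ;
idf = log(numImages ./ max(sum(histograms > 0, 2), 1)) ;    % down-weight common words
H = bsxfun(@times, histograms, idf) ;                       % weight database histograms
H = bsxfun(@times, H, 1 ./ max(sqrt(sum(H.^2, 1)), eps)) ;  % normalise to unit vectors
q = queryHist .* idf ;                                      % weight the query histogram
q = q / max(norm(q), eps) ;
scores = q' * H ;                                           % cosine similarity per image
[~, ranking] = sort(scores, 'descend') ;                    % best-matching images first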

We now apply this retrieval method to search, using a query image, within a 660-image subset of the Oxford 5k building image set.

  1. Why does the top image have a score of 1?
  2. How many erroneous matches do you count in the top results?

Stage III.C: Geometric rescoring

Histogram-based retrieval results are good but far from perfect. Given a shortlist of the top documents from the previous step, we are now going to re-score them based on the number of inlier matches after a geometric verification step.

  1. Why is the top score much larger than 1 now?
  2. Are the retrieval results improved after geometric verification?


Part IV: Large scale retrieval

The images below are all details of paintings. The goal of this last part of the practical is to identify the paintings that they came from. For this we selected a set of 1734 images of paintings from Wikipedia.

[Three images showing details of paintings]

To identify the details you can either:

  1. use your knowledge of art
  2. search through the 1734 Wikipedia images until you find matches
  3. build a recognition system and match the details automatically

We follow route (3) here. Look through and run exercise4.m. This uses the techniques described in Part III in order to construct an index for the 1734 Wikipedia images so that they may be searched quickly. Use the code to find which paintings these details come from.

Note, although the index is stored locally, the matching images are downloaded from Wikipedia and displayed. Click on the image to reach the Wikipedia page for that painting (and hence identify it).

  1. Use the code to visually search Wikipedia for further paintings by Van Gogh, using query images downloaded from the Internet. Note: the code supports URLs in place of filenames.

Acknowledgements

  1. Mircea Cimpoi for scripts for downloading and linking to Wikipedia paintings
  2. Comments from Relja Arandjelovic, Karen Simonyan, Omkar Parkhi, Meelis Lootus
  3. Funding from ERC grant VisRec Grant No. 228180, and a PASCAL Harvest Grant.

History

  1. Used at JHU Summer School on Human Language Technology, 2012.
