Improve Matching of Feature Points with OpenCV

Improve matching of feature points with OpenCV

By comparing all feature detection algorithms I found a good combination that gives me many more matches. I am now using FAST for feature detection, SIFT for descriptor extraction, and BruteForce for matching. Combined with a check that each match lies inside a defined region of the image, I get a lot of matches; see the image:

A lot of good matches with FAST and SIFT (source: codemax.de)

The relevant code:

Ptr<FeatureDetector> detector;
detector = new DynamicAdaptedFeatureDetector(new FastAdjuster(10, true), 5000, 10000, 10);
detector->detect(leftImageGrey, keypoints_1);
detector->detect(rightImageGrey, keypoints_2);

Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SIFT");
extractor->compute(leftImageGrey, keypoints_1, descriptors_1);
extractor->compute(rightImageGrey, keypoints_2, descriptors_2);

vector< vector<DMatch> > matches;
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce");
matcher->knnMatch(descriptors_1, descriptors_2, matches, 500);

// only accept a match if it lies inside a defined area of the image:
// at most 25% of the maximum possible distance (the image diagonal)
double thresholdDist = 0.25 * sqrt(double(leftImageGrey.size().height * leftImageGrey.size().height +
                                          leftImageGrey.size().width  * leftImageGrey.size().width));

vector<DMatch> good_matches2;
good_matches2.reserve(matches.size());
for (size_t i = 0; i < matches.size(); ++i)
{
    for (size_t j = 0; j < matches[i].size(); ++j)
    {
        Point2f from = keypoints_1[matches[i][j].queryIdx].pt;
        Point2f to   = keypoints_2[matches[i][j].trainIdx].pt;

        // calculate the pixel distance for this candidate match
        double dist = sqrt((from.x - to.x) * (from.x - to.x) + (from.y - to.y) * (from.y - to.y));

        // keep the candidate if it is within the threshold distance and at (almost) the same height,
        // then stop looking at the remaining candidates for this query point
        if (dist < thresholdDist && std::abs(from.y - to.y) < 5)
        {
            good_matches2.push_back(matches[i][j]);
            break;
        }
    }
}
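To check the filtered matches visually (as in the image above), a minimal sketch using cv::drawMatches can be appended to the code; the window title and output file name here are placeholders of mine:

// draw the surviving matches side by side for inspection
Mat matchImage;
drawMatches(leftImageGrey, keypoints_1, rightImageGrey, keypoints_2,
            good_matches2, matchImage,
            Scalar::all(-1), Scalar::all(-1), vector<char>(),
            DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
imshow("filtered matches", matchImage);
imwrite("filtered_matches.jpg", matchImage);
waitKey();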

OpenCV FLANN matching of feature points for multiple views

The precise problem this question asks is "How can I label a set of multiple matches uniquely when all I have is pairwise matches?"

This is a standard graph theory problem: getting from sets of edges to connected components.

Just for some intuition:

(Image: example graph with its connected components)

The idea is that you have edges (pairs of feature matches). So, for example, in the graph above, (2, 1) is an edge, and so are (1, 3), (5, 6), and so on. Since 2 is matched with 1, and 1 with 3, then 1, 2, and 3 are probably all the same feature. You can therefore group the same features together by finding all the components that are connected together in this graph. Note that the graph need only be described by these pairs and nothing more.

You already have code to compute your matches, so I'll provide some code to compute the connected components. No guarantees that this code is particularly fast, but it should be robust to whatever types of data you're using. Note, however, that each distinct node you send in has to have distinct data, as this uses sets.

from collections import defaultdict


def conncomp(edges):
    """Finds the connected components in a graph.

    Parameters
    ----------
    edges : sequence
        A sequence of pairs where the pair represents an undirected edge.

    Returns
    -------
    components : list
        A list with each component as a list of nodes. Only includes single
        nodes if the node is paired with itself in edges.
    """
    # group edge pairs together into a dict mapping node -> set of neighbours
    pair_dict = defaultdict(set)
    nodes = set([num for pair in edges for num in pair])
    for node in nodes:
        for pair in edges:
            if node in pair:
                pair_dict[node] = pair_dict[node].union(set(pair))

    # run BFS on the dict
    components = []
    nodes_to_explore = set(pair_dict.keys())
    while nodes_to_explore:  # while nodes_to_explore is not empty
        node = nodes_to_explore.pop()
        component = {node}
        neighbors = pair_dict[node]
        while neighbors:  # while neighbors is non-empty
            next_node = neighbors.pop()
            if next_node in nodes_to_explore:
                nodes_to_explore.remove(next_node)
            next_nodes = set([val for val in pair_dict[next_node] if val not in component])
            neighbors = neighbors.union(next_nodes)
            component.add(next_node)
        components.append(list(component))

    return components

As mentioned above, the input to this function is a list of pairs (tuples). I'd just send in a list of paired IDs, e.g.:

edges = [(img1_feat_i, img2_feat_j), ...]

where img1_feat_i and img2_feat_j are the IDs of features matched by knnMatch or BFMatcher or whatever you like to use.

The function will return a list of components like

[[img1_feat_i, img2_feat_j, img3_feat_k, ...], ...]

Each component (i.e. each sublist) contains what is presumably the same feature seen across images, so you can then map all those distinct IDs to one unique ID for the component.

OpenCV feature matching parallel processing

You might have an easier time using OpenMP. OpenMP can be really easy to integrate; I've used it plenty of times when an algorithm of mine turns out to be slow. If you're trying to parallelize the loop, you can add the line #pragma omp parallel for above your for loop statement. You will then have to compile with -fopenmp. Here is a link to a simple tutorial on OpenMP.
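As a rough illustration (the image list, the keypoint vectors, and the FAST threshold below are my own assumptions, not code from the question), a detection loop parallelized with OpenMP could look like this:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

using namespace cv;
using namespace std;

// detect keypoints in every image; iterations are independent, so OpenMP can split
// them across threads (compile with -fopenmp for the pragma to take effect)
void detectAll(const vector<Mat>& images, vector<vector<KeyPoint> >& keypoints)
{
    keypoints.resize(images.size());

    #pragma omp parallel for
    for (int i = 0; i < (int)images.size(); ++i)
    {
        FastFeatureDetector detector(20); // one detector per iteration to avoid shared state
        detector.detect(images[i], keypoints[i]);
    }
}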

Alternatively, I would imagine OpenCV already uses parallel processing internally when running its feature descriptor functions. I'm not sure of that, but I know there is a build flag for parallel processing support that you need to make sure is enabled. Also, the parallel_for function doesn't work if you don't have that enabled either.
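For comparison, here is a minimal sketch of the same loop using OpenCV's own cv::parallel_for_ with a ParallelLoopBody subclass (the class and its members are my own illustration); it only runs in parallel when OpenCV was built with a parallel backend enabled:

#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

using namespace cv;
using namespace std;

// processes a sub-range of the image list; cv::parallel_for_ decides how to split the range
class DetectBody : public ParallelLoopBody
{
public:
    DetectBody(const vector<Mat>& images, vector<vector<KeyPoint> >& keypoints)
        : images_(images), keypoints_(keypoints) {}

    void operator()(const Range& range) const
    {
        for (int i = range.start; i < range.end; ++i)
        {
            FastFeatureDetector detector(20);
            detector.detect(images_[i], keypoints_[i]);
        }
    }

private:
    const vector<Mat>& images_;
    vector<vector<KeyPoint> >& keypoints_;
};

// usage (keypoints must already be resized to images.size()):
// parallel_for_(Range(0, (int)images.size()), DetectBody(images, keypoints));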

Improve matching accuracy of cvMatchShapes in OpenCV

Here is the FAST inventor's website. FAST stands for Features from Accelerated Segment Test. Here is a short Wikipedia entry on AST-based algorithms. Also, here is a good survey of the different feature detectors currently in use today.

FAST is actually already implemented by OpenCV if you would like to use their implementation.

EDIT: Here is a short example I created to show you how to use the FAST detector:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    Mat far = imread("far.jpg", 0);
    Mat near = imread("near.jpg", 0);

    Ptr<FeatureDetector> detector = FeatureDetector::create("FAST");

    vector<KeyPoint> farPoints;
    detector->detect(far, farPoints);

    Mat farColor;
    cvtColor(far, farColor, CV_GRAY2BGR);
    drawKeypoints(farColor, farPoints, farColor, Scalar(255, 0, 0), DrawMatchesFlags::DRAW_OVER_OUTIMG);
    imshow("farColor", farColor);
    imwrite("farPoints.jpg", farColor);

    vector<KeyPoint> nearPoints;
    detector->detect(near, nearPoints);

    Mat nearColor;
    cvtColor(near, nearColor, CV_GRAY2BGR);
    drawKeypoints(nearColor, nearPoints, nearColor, Scalar(0, 255, 0), DrawMatchesFlags::DRAW_OVER_OUTIMG);
    imshow("nearColor", nearColor);
    imwrite("nearPoints.jpg", nearColor);

    waitKey();
    return 0;
}

This code finds the following feature points for the far and near imagery:

(Images: detected FAST keypoints on the near image and the far image)

As you can see, the near image has many more features, but it looks like the same basic structure is detected in the far image, so you should be able to match these. Have a look at the descriptor_extractor_matcher.cpp sample. That should get you started.
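If you want to go one step further and actually match the far and near keypoints, a rough sketch continuing the example above could look like this; the choice of ORB descriptors and a Hamming brute-force matcher is mine, not part of the original answer:

// extract binary descriptors for the FAST keypoints found above
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("ORB");
Mat farDescriptors, nearDescriptors;
extractor->compute(far, farPoints, farDescriptors);
extractor->compute(near, nearPoints, nearDescriptors);

// brute-force matching with the Hamming norm (suitable for binary descriptors)
BFMatcher matcher(NORM_HAMMING);
vector<DMatch> matches;
matcher.match(farDescriptors, nearDescriptors, matches);

// visualize the matched pairs
Mat matchImage;
drawMatches(far, farPoints, near, nearPoints, matches, matchImage);
imshow("matches", matchImage);
waitKey();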

Hope that helps!

How to improve feature detection in OpenCV

I haven't used SURF, but I have used the ORB algorithm, and to improve feature detection I've experimented with several filters. The best results I've obtained were with a combination of two filters: Equalize Histogram and Fast Fourier Transform.

Equalize Histogram filter: it can enhance meaningless detail and hide important but small high-contrast features, which are then effectively treated as noise. Histogram equalization employs a monotonic, non-linear mapping which re-assigns the intensity values of pixels in the input image such that the output image contains a uniform distribution of intensities (i.e. a flat histogram).
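For reference, the equalization step itself is a single call in OpenCV; a minimal C++ sketch (the file names are placeholders of mine) could be:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

int main()
{
    // equalizeHist expects a single-channel 8-bit image
    Mat gray = imread("input.jpg", 0);
    Mat equalized;
    equalizeHist(gray, equalized);
    imwrite("equalized.jpg", equalized);
    return 0;
}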

Fast Fourier Transform filter: It decomposes the image into its sine and cosine components. The output of the transformation performed by this filter represents the image in the frequency domain, while the input image is the spatial domain equivalent. In the Fourier domain image, each point represents a particular frequency contained in the spatial domain image.
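Just to illustrate the frequency-domain idea, OpenCV's cv::dft computes the transform itself; note that this is only the transform and its magnitude spectrum, not the ImageJ-style FFT bandpass filter referred to below. A minimal sketch with a placeholder file name:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat gray = imread("input.jpg", 0);

    // convert to float and compute the discrete Fourier transform
    Mat floatImage, spectrum;
    gray.convertTo(floatImage, CV_32F);
    dft(floatImage, spectrum, DFT_COMPLEX_OUTPUT);

    // split into real (cosine) and imaginary (sine) parts and take the magnitude
    Mat planes[2];
    split(spectrum, planes);
    Mat magnitudeImage;
    magnitude(planes[0], planes[1], magnitudeImage);

    // switch to a log scale for display, since the dynamic range is huge
    magnitudeImage += Scalar::all(1);
    log(magnitudeImage, magnitudeImage);
    normalize(magnitudeImage, magnitudeImage, 0, 1, NORM_MINMAX);

    imshow("magnitude spectrum", magnitudeImage);
    waitKey();
    return 0;
}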

I'm not sure, but I think that in OpenCV there is no FFT filter, so probably you will need to use another library.

Edit1: I have code, but unfortunately it is in Java rather than C++. But if you apply the same filters, the result will be the same. Here is the documentation of Equalize Histogram. And to apply the FFT filter I've used ImageJ, which is a Java library. You can try to find something similar to this library, like this one.

Edit2: ImageJ code to apply FFT filter

import ij.plugin.filter.FFTFilter;
...
FFTFilter fft = new FFTFilter();
ImageProcessor ip = new ColorProcessor(bufImage);
ImagePlus imgPlus = new ImagePlus();

imgPlus.setImage(bufImage);

try {
    fft.setup(null, imgPlus);
} catch (Exception e) {
    e.printStackTrace();
}
fft.run(ip);
Edit3: Here are examples of detected features before and after applying the mentioned filters.

  1. SURF without any filter:
    Sample Image
  2. SURF with EH + FFT:
    Sample Image
  3. ORB with EH + FFT:
    Sample Image

As you can see, with the SURF algorithm there is too much redundant information to perform matching, so I suggest you use the ORB algorithm. The advantages of ORB are that it is free to use, efficient, and stable under image rotation and scaling. You can also smooth the image before applying EH+FFT to detect features only on corners, as in the sketch below.
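A minimal sketch of that pipeline in OpenCV C++ (smoothing, then histogram equalization, then ORB detection; the file name, blur kernel, and feature count are placeholders, and the ImageJ FFT step from the answer is not included here):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat gray = imread("input.jpg", 0);

    // smooth first to suppress noise, then equalize the histogram
    Mat smoothed, equalized;
    GaussianBlur(gray, smoothed, Size(3, 3), 0);
    equalizeHist(smoothed, equalized);

    // detect ORB keypoints and descriptors on the preprocessed image
    ORB orb(500);
    vector<KeyPoint> keypoints;
    Mat descriptors;
    orb(equalized, Mat(), keypoints, descriptors);

    Mat output;
    drawKeypoints(equalized, keypoints, output, Scalar(0, 255, 0));
    imwrite("orb_keypoints.jpg", output);
    return 0;
}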

Edit4: I've also found useful information about the FFT. According to this topic, the FFT is an efficient implementation of the DFT, which is described here. It could also be the answer to your recent question.

OpenCV feature matching using given coordinates

You cannot specify this to the matcher, but you can limit the points at extraction time. In your code, keypoints1 and keypoints2 can be the inputs to the extractor, containing only the points you wish to match. Hence, you should do the following:

// perform "optical flow tracking" and get some points
// for left and right frame

// convert them to cv::KeyPoint
// std::vector<cv::KeyPoint> keypoints1; // left frame
// std::vector<cv::KeyPoint> keypoints2; // right frame

// extract feature for those points only
OrbDescriptorExtractor extractor;
extractor.compute(img1, keypoints1, descriptors1);
extractor.compute(img2, keypoints2, descriptors2);

// match for the descriptors computed at the pixel coordinates
// given by the "optical flow tracking" only
BFMatcher matcher(NORM_L2);
matcher.match(descriptors1, descriptors2, matches);
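The conversion from the tracked pixel coordinates to cv::KeyPoint is not shown above; a minimal sketch (assuming the tracked points are available as std::vector<cv::Point2f> named trackedPoints1/trackedPoints2, with an arbitrary keypoint size) could be:

// turn the tracked Point2f coordinates into KeyPoints so the extractor accepts them
std::vector<cv::KeyPoint> keypoints1;
keypoints1.reserve(trackedPoints1.size());
for (size_t i = 0; i < trackedPoints1.size(); ++i)
{
    // the second argument is the keypoint "size" (diameter of the meaningful neighbourhood)
    keypoints1.push_back(cv::KeyPoint(trackedPoints1[i], 8.0f));
}
// do the same for trackedPoints2 / keypoints2; alternatively,
// cv::KeyPoint::convert(trackedPoints1, keypoints1, 8.0f) does this in one call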

