Multiple View Geometry in Computer Vision

Category: Computer Science
Author: Richard Hartley, Andrew Zisserman
Rating: 4.5
Stack Overflow mentions: 8 all time, 1 this year, 6 this month

Comments

by anonymous   2019-07-21

Long answer - http://www.amazon.com/Multiple-View-Geometry-Computer-Vision/dp/0521540518/

Short answer: you have the pixel scale, so a given pixel disparity corresponds to an angular difference. With the baseline between the cameras and that angle you can compute a distance.
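
One common way to write the same relationship for a rectified pair is depth = f · B / disparity, with the focal length f expressed in pixels. A rough sketch (all numbers below are made up):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point from its pixel disparity in a rectified stereo pair.

    disparity_px : horizontal pixel difference between the two views
    focal_px     : focal length expressed in pixels (from calibration)
    baseline_m   : distance between the two camera centres, in metres
    """
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 700 px focal length, 12 cm baseline, 35 px disparity.
print(depth_from_disparity(35.0, 700.0, 0.12))  # -> 2.4 metres
```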

P.S. Take a look at the OpenCV book; it has a couple of good chapters on stereo.

by anonymous   2019-07-21

Look at the first formula in the Wikipedia entry on the fundamental matrix, the epipolar constraint:

    x'^T F x = 0

This is the "model" you are trying to fit with RANSAC. You have two 3xn (n >= 7) matrices x and x' that hold all your corresponding x,y - x',y' points in the two images (the third coordinate is always just 1), and an unknown 3x3 matrix F whose values you want to estimate. The pseudocode algorithm for RANSAC in the Wikipedia entry is a pretty good explanation.
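
As a minimal sketch of that RANSAC fit using OpenCV (pts1 and pts2 stand in for the n matched pixel positions; the function adds the homogeneous third coordinate internally, and the fake correspondences below exist only so the call runs):

```python
import cv2
import numpy as np

# Stand-ins for n matched points from the two images (n >= 8 here).
pts1 = (np.random.rand(20, 2) * 640).astype(np.float32)
pts2 = pts1 + np.float32([5, 0])   # fake matches, consistent with a pure x-translation

F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
print(F)                       # the estimated 3x3 fundamental matrix
print(inlier_mask.ravel())     # 1 for correspondences kept as inliers, 0 for outliers
```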

Now, what is the fundamental matrix? One way to think of a point in an image is as a 3D line connecting the camera centre and that point in 3D space. This line extends to infinity in both directions. If you look at a 3D point on that line with a different camera, then in that camera's image you see a line running right across it. The transformation (back-projection, really) of an image point to a 3D line is just a matrix operation, and the projection of a 3D line onto a 2D image is also a matrix operation. F captures both of these operations in one matrix. F can also be used to determine the camera matrices of both cameras, which can then be used for the 3D reconstruction.
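
A tiny numpy illustration of that idea (the F below is a placeholder corresponding to a pure horizontal camera translation): the epipolar line in the second image for a point x in the first image is l' = F x, and a true correspondence x' satisfies x'^T F x ≈ 0.

```python
import numpy as np

F = np.array([[0., 0.,  0.],      # placeholder fundamental matrix for a camera
              [0., 0., -1.],      # that has only translated horizontally
              [0., 1.,  0.]])

x  = np.array([320., 240., 1.])   # a point in image 1, homogeneous coordinates
xp = np.array([355., 240., 1.])   # its (hypothetical) match in image 2

line = F @ x                      # epipolar line a*u + b*v + c = 0 in image 2
print(line)                       # -> [0. -1. 240.], i.e. the row v = 240
print(xp @ F @ x)                 # ~0 because xp lies on that line
```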

Maybe this helps a bit? Otherwise, I've learned most of what I know about this from Hartley and Zisserman.

by anonymous   2019-07-21

warpPerspective does a projective transformation or homography:

http://en.wikipedia.org/wiki/Homography

warpAffine does an affine transformation:

http://en.wikipedia.org/wiki/Affine_transformation
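
A small sketch contrasting the two calls (the corner coordinates are made up and `img` is just a blank stand-in image):

```python
import cv2
import numpy as np

img = np.zeros((400, 400, 3), np.uint8)   # stand-in for a real image

# Affine: defined by 3 point pairs, keeps parallel lines parallel (2x3 matrix).
src3 = np.float32([[0, 0], [399, 0], [0, 399]])
dst3 = np.float32([[20, 40], [380, 10], [40, 390]])
A = cv2.getAffineTransform(src3, dst3)            # 2x3
out_affine = cv2.warpAffine(img, A, (400, 400))

# Homography: defined by 4 point pairs, parallel lines may converge (3x3 matrix).
src4 = np.float32([[0, 0], [399, 0], [399, 399], [0, 399]])
dst4 = np.float32([[30, 50], [370, 20], [390, 380], [10, 360]])
H = cv2.getPerspectiveTransform(src4, dst4)       # 3x3
out_persp = cv2.warpPerspective(img, H, (400, 400))
```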

Abid Rahman already mentioned a good book. If you want a more theoretical one, Multiple View Geometry is considered the bible on these topics.

by anonymous   2019-07-21

Suppose you have the four candidate solutions. As we know, the five points used to determine the essential matrix all lie in front of both cameras.

Therefore you can triangulate just one of those five points with each of the four candidate solutions; only one candidate will place the point in front of both cameras.

That candidate is the "true estimate".
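
If you are using OpenCV, cv2.recoverPose does this cheirality test for you: it decomposes E into the four candidate (R, t) pairs, triangulates the matched points, and keeps the candidate that puts them in front of both cameras. A minimal sketch (the intrinsics K and the matched points are hypothetical):

```python
import cv2
import numpy as np

K = np.array([[700., 0., 320.],     # hypothetical calibrated intrinsics
              [0., 700., 240.],
              [0.,   0.,   1.]])

pts1 = (np.random.rand(10, 2) * 640).astype(np.float32)   # fake matched points,
pts2 = pts1 + np.float32([8, 0])                           # just so the calls run

E, _ = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC)

# Tries the four decompositions of E and keeps the one passing the cheirality test.
n_inliers, R, t, mask = cv2.recoverPose(E, pts1, pts2, K)
print(R)
print(t)
```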

For a more detailed explanation and visualisation, you can refer to the book "Multiple View Geometry in Computer Vision", second edition, page 259, section 9.6.3: https://www.amazon.com/Multiple-View-Geometry-Computer-Vision/dp/0521540518/ref=sr_1_1?s=books&ie=UTF8&qid=1487321676&sr=1-1&keywords=multiple+view+geometry+in+computer+vision

I hope this answer helps.

by anonymous   2018-03-19

As mentioned, the problem is very hard and is often also referred to as multi-view object reconstruction. It is usually approached by solving the stereo-view reconstruction problem for each pair of consecutive images.

Performing stereo reconstruction requires that pairs of images are taken that have a good amount of visible overlap of physical points. You need to find corresponding points such that you can then use triangulation to find the 3D co-ordinates of the points.

Epipolar geometry

Stereo reconstruction is usually done by first calibrating your camera setup so you can rectify your images using the theory of epipolar geometry. This simplifies finding corresponding points as well as the final triangulation calculations.

If you have:

  • the intrinsic camera parameters (requiring camera calibration),
  • the camera's position and rotation (its extrinsic parameters), and
  • 8 or more physical points with matching known positions in two photos (when using the eight-point algorithm)

you can calculate the fundamental and essential matrices using only matrix theory and use these to rectify your images. This requires some theory about co-ordinate projections with homogeneous co-ordinates and also knowledge of the pinhole camera model and camera matrix.
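
A sketch of that calculation under exactly those assumptions (K1 and K2 from calibration, pts1/pts2 the 8+ matched pixel positions; all the numbers below are placeholders):

```python
import cv2
import numpy as np

K1 = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
K2 = K1.copy()                                    # assume two identical cameras

pts1 = (np.random.rand(8, 2) * 640).astype(np.float32)   # the 8+ matched points
pts2 = pts1 + np.float32([10, 0])                         # fake matches

# Eight-point estimate of the fundamental matrix from the correspondences.
F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

# With known intrinsics, the essential matrix follows as E = K2^T F K1.
E = K2.T @ F @ K1
print(F)
print(E)
```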

If you want a method that doesn't need the camera parameters and works for unknown camera set-ups you should probably look into methods for uncalibrated stereo reconstruction.

Correspondence problem

Finding corresponding points is the tricky part that requires you to look for points of the same brightness or colour, or to use texture patterns or some other features to identify the same points in pairs of images. Techniques for this either work locally by looking for a best match in a small region around each point, or globally by considering the image as a whole.

If you already have the fundamental matrix, it will allow you to rectify the images such that corresponding points in two images will be constrained to a line (in theory). This helps you to use faster local techniques.
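
In OpenCV the uncalibrated version of this can be sketched with stereoRectifyUncalibrated, which turns F and the matched points into one rectifying homography per image (the points, F and the blank images below are stand-ins; in practice they come from the matching and fundamental-matrix steps described above):

```python
import cv2
import numpy as np

# Fake data so the sketch runs standalone.
pts1 = (np.random.rand(12, 2) * 640).astype(np.float32)
pts2 = pts1 + np.float32([10, 0])
F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)

img1 = np.zeros((480, 640, 3), np.uint8)   # stand-ins for the real image pair
img2 = np.zeros((480, 640, 3), np.uint8)
h, w = img1.shape[:2]

ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
if ok:
    rect1 = cv2.warpPerspective(img1, H1, (w, h))   # after warping, corresponding
    rect2 = cv2.warpPerspective(img2, H2, (w, h))   # points lie on (roughly) the same row
```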

There is currently still no ideal technique to solve the correspondence problem, but possible approaches could fall in these categories:

  • Manual selection: have a person hand-select matching points.
  • Custom markers: place markers or use specific patterns/colours that you can easily identify.
  • Sum of squared differences: take a region around a point and find the closest matching region in the other image (a small sketch follows this list).
  • Graph cuts: a global optimisation technique based on optimisation using graph theory.
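
As a toy illustration of the sum-of-squared-differences idea on a rectified pair (the images are random stand-ins; a real matcher would add sub-pixel refinement, uniqueness checks and so on):

```python
import numpy as np

def ssd_match(left, right, row, col, patch=5, max_disp=64):
    """Disparity of pixel (row, col) by brute-force SSD search along the
    same row of the rectified right image."""
    r = patch // 2
    ref = left[row - r:row + r + 1, col - r:col + r + 1].astype(np.float64)
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        c = col - d
        if c - r < 0:
            break
        cand = right[row - r:row + r + 1, c - r:c + r + 1].astype(np.float64)
        cost = np.sum((ref - cand) ** 2)     # sum of squared differences
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

left = np.random.randint(0, 255, (100, 100), dtype=np.uint8)
right = np.roll(left, -7, axis=1)            # fake pair: everything shifted by 7 px
print(ssd_match(left, right, 50, 60))        # -> 7
```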

For specific implementations you can use Google Scholar to search through the current literature. Here is one highly cited paper comparing various techniques: A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms.

Multi-view reconstruction

Once you have the corresponding points, you can then use epipolar geometry theory for the triangulation calculations to find the 3D co-ordinates of the points.
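
The triangulation itself can be sketched with cv2.triangulatePoints once you have a projection matrix for each camera (the intrinsics, poses and the single correspondence below are all hypothetical):

```python
import cv2
import numpy as np

K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])

# Hypothetical poses: camera 1 at the origin, camera 2 shifted 0.1 units along x.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.], [0.]])])

# Matched points as 2xN arrays (a single correspondence here).
pts1 = np.array([[320.], [240.]])
pts2 = np.array([[285.], [240.]])

X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4xN homogeneous coordinates
X = (X_h[:3] / X_h[3]).T                          # Nx3 Euclidean points
print(X)                                          # -> roughly [[0., 0., 2.]]
```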

This whole stereo reconstruction would then be repeated for each pair of consecutive images (implying that you need an order to the images or at least knowledge of which images have many overlapping points). For each pair you would calculate a different fundamental matrix.

Of course, due to noise or inaccuracies at each of these steps you might want to consider how to solve the problem in a more global manner. For instance, if you have a series of images that are taken around an object and form a loop, this provides extra constraints that can be used to improve the accuracy of earlier steps using something like bundle adjustment.

As you can see, both stereo and multi-view reconstruction are far from solved problems and are still actively researched. The less you want to do in an automated manner the more well-defined the problem becomes, but even in these cases quite a bit of theory is required to get started.

Alternatives

If it's within the constraints of what you want to do, I would recommend considering dedicated hardware sensors (such as the XBox's Kinect) instead of only using normal cameras. These sensors use structured light, time-of-flight or some other range imaging technique to generate a depth image which they can also combine with colour data from their own cameras. They practically solve the single-view reconstruction problem for you and often include libraries and tools for stitching/combining multiple views.

Epipolar geometry references

My knowledge is actually quite thin on most of the theory, so the best I can do is to further provide you with some references that are hopefully useful (in order of relevance):

  • I found a PDF chapter on Multiple View Geometry that contains most of the critical theory. In fact the textbook Multiple View Geometry in Computer Vision should also be quite useful (sample chapters available here).
  • Here's a page describing a project on uncalibrated stereo reconstruction that seems to include some source code that could be useful. They find matching points in an automated manner using one of many feature detection techniques. If you want this part of the process to be automated as well, then SIFT feature detection is commonly considered to be an excellent non-real-time technique (since it's quite slow).
  • A paper about Scene Reconstruction from Multiple Uncalibrated Views.
  • A slideshow on Methods for 3D Reconstruction from Multiple Images (it has some more references below its slides towards the end).
  • A paper comparing different multi-view stereo reconstruction algorithms can be found here. It limits itself to algorithms that "reconstruct dense object models from calibrated views".
  • Here's a paper that goes into lots of detail for the case that you have stereo cameras that take multiple images: Towards robust metric reconstruction via a dynamic uncalibrated stereo head. They then find methods to self-calibrate the cameras.

I'm not sure how helpful all of this is, but hopefully it includes enough useful terminology and references to find further resources.

by anonymous   2017-08-20

Peter's MATLAB code would be very helpful to you, I think:

http://www.csse.uwa.edu.au/~pk/research/matlabfns/

Peter has posted a number of fundamental matrix solutions. The original algorithms are covered in the Zisserman book:

http://www.amazon.com/exec/obidos/tg/detail/-/0521540518/qid=1126195435/sr=8-1/ref=pd_bbs_1/103-8055115-0657421?v=glance&s=books&n=507846

Also, while you are at it, don't forget to see the fundamental matrix song:

http://danielwedge.com/fmatrix/

one fine composition in my honest opinion!

by anonymous   2017-08-20

That seems like a massive undertaking: model recognition is not an easy task. I recommend looking at OpenCV (which has some standard algorithms you can use as a starting point) and then looking at a good computer vision book (e.g., Richard Szeliski's book or Hartley and Zisserman).

But you are going to run into a host of practical problems. Consider that systems like Vuforia provide camera calibration data for most Android devices, and it's hard to do computer vision without it. Then, of course, there's efficiently managing the whole pipeline which (again) companies like Qualcomm and Metaio invest huge amounts of $$ in.

by anonymous   2017-08-20

I think stereoCalibrate is the way to go if you are interested in the depth map and in aligning the two images (and I think this is an important issue even if I don't know what you're trying to do, and even if you already have a depth map from the Kinect).

But, if I understand correctly, you also want to find the position of the cameras in the world. You can do that by having the same known geometry visible in both views. This is normally achieved with a chessboard pattern lying on the floor, seen by both (fixed-position) cameras.

Once you have the known 3D geometry points and the corresponding 2D points projected onto the image plane, you can independently find the 3D position of each camera relative to the 3D world, taking the world origin at one corner of the chessboard.

In this way, what you achieve is something like this image:

[Image from the OpenCV introductory blog linked in the PS below: both cameras posed relative to a world origin placed on the chessboard.]

To find the 3D position of each camera relative to the chessboard you can use cv::solvePnP to find the extrinsic matrix for each camera independently. There are some issues about the direction of the camera (the ray pointing from the camera to the world origin) that you have to handle (again, independently for each camera) if you want to visualise the cameras (e.g. in OpenGL), plus some matrix algebra and angle handling.
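
A sketch of that step for one camera (the chessboard size, intrinsics and the pose used to synthesise the image points are all made up; with a real photo the 2D corners would come from cv2.findChessboardCorners instead). The same call is then repeated independently for the second camera:

```python
import cv2
import numpy as np

# 3D chessboard inner corners in the world frame, origin at one corner,
# in units of "one chessboard square" (see the EDIT about units below).
pattern = (9, 6)
obj_pts = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
dist = np.zeros(5)                        # assume negligible lens distortion

# Synthesise the detected 2D corners from a made-up pose, just so this runs;
# normally they come from cv2.findChessboardCorners on this camera's image.
rvec_true = np.float32([[0.1], [0.2], [0.0]])
tvec_true = np.float32([[-4.], [-3.], [15.]])
img_pts, _ = cv2.projectPoints(obj_pts, rvec_true, tvec_true, K, dist)

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                # 3x3 rotation matrix
cam_pos_world = -R.T @ tvec               # camera centre in world coordinates
print(cam_pos_world.ravel())
```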

For a detailed description of the math I can refer you to the famous Multiple View Geometry.

See also my previous answer on augmented reality and integration between OpenCV and OpenGL (i.e. how to use the extrinsic matrix and the T and R matrices that can be decomposed from it, which represent the position and orientation of the camera in the world).

Just out of curiosity: why are you using a normal camera PLUS a Kinect? The Kinect already gives you the depth map that we are trying to achieve with two stereo cameras. I don't understand exactly what extra data an additional normal camera can give you beyond what a calibrated Kinect, with good use of the extrinsic matrix, already gives you.

PS: the image is taken from this nice OpenCV introductory blog, but I think that post is not very relevant to your question, because it is about the intrinsic matrix and distortion parameters, which it seems you already have. Just to clarify.

EDIT: regarding the units of the extrinsic data, they are normally measured in the same units as the 3D points of the chessboard. So if you identify the corner points of one chessboard square in 3D as P(0,0), P(1,0), P(1,1), P(0,1) and use them with solvePnP, the translation of the camera will be measured in units of "chessboard square size". If the square is 1 metre long, the unit of measure will be metres. For the rotations, the units are normally angles in radians, but it depends on how you extract the rotation with cv::Rodrigues and how you get the three yaw-pitch-roll angles from the rotation matrix.
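
Continuing the solvePnP sketch above: multiplying the object points by the square size fixes the translation unit, and the yaw-pitch-roll angles can be pulled out of the Rodrigues rotation matrix. One common ZYX convention is shown below; the 3 cm square size is a made-up example:

```python
import numpy as np

square_size_m = 0.03   # hypothetical 3 cm squares: scale obj_pts by this before
                       # solvePnP and tvec comes out in metres instead of "squares"

def yaw_pitch_roll(R):
    """Euler angles (degrees) from a rotation matrix, ZYX convention,
    valid away from the pitch = +/-90 degree singularity."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return np.degrees([yaw, pitch, roll])

print(yaw_pitch_roll(np.eye(3)))   # identity rotation -> [0. 0. 0.]
```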