I combined OpenCV tracking (using the Camshift algorithm) with OpenGL here:
My first recommendation is not to try to tackle the OpenGL libraries directly - there is a very good Java wrapper for them called Rajawali.
You can use OpenGL directly, but I found the interface quite bewildering, and Rajawali hides a lot of the complexity.
Now, mapping the 2D OpenCV coordinates to the 3D OpenGL world is an interesting problem, because a) you need to worry about how your camera 'sees' the 3D world in terms of lens distortion etc., and b) you lose a dimension when moving from what the camera sees in 2D to the 3D model, so you need to make up for that loss of information from other sources, such as the size of the object being tracked. Vuforia does this through a reference image, so it is a shame you can't use that.
These issues are covered in chapters 11 and 12 of the O'Reilly book http://www.amazon.co.uk/Learning-OpenCV-Computer-Vision-Library/dp/0596516134/ (that book covers the C/C++ interfaces rather than Java, but there is usually an obvious mapping).
For my example I didn't do anything too complex - the bigger the Camshift-detected area, the closer I moved the camera position in the 3D model. I used the accelerometer sensors to determine clockwise/anticlockwise rotation etc. Actually, Vuforia would have been better for what I was trying to do, but there you are.
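The area-to-distance heuristic above can be sketched as follows. This is a minimal illustration, not the original project's code: the function name and the calibration parameters are my own, and a real app would calibrate them empirically for its camera and tracked object.

```python
import math

def camera_distance(track_area, reference_area, reference_distance):
    """Map the Camshift track-window area to a 3D camera distance.

    Assumes the object's apparent width scales with 1/distance, so its
    on-screen area scales with 1/distance**2: a 4x larger track window
    means the object is twice as close. reference_area is the window
    area observed at the known reference_distance (a hypothetical
    calibration step, not from the original post).
    """
    if track_area <= 0:
        raise ValueError("track-window area must be positive")
    return reference_distance * math.sqrt(reference_area / track_area)
```

For example, if the window area quadruples relative to the calibration frame, the computed distance halves.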
The model, by the way, was crafted in Blender; you can import it into Rajawali, including some basic texturing.
Hope that helps!
I don't know pitch trax specifically (try Google or their site).
But for the general field of object tracking and image processing, OpenCV is probably a good place to start (as a programmer).
There is a good book at http://www.amazon.com/Learning-OpenCV-Computer-Vision-Library/dp/0596516134/ref=sr_1_1?ie=UTF8&s=books&qid=1262807256&sr=8-1
Hi, the following blog will answer all your questions...
No image was uploaded - just check. Otherwise, you can find the theoretical concepts
in the book http://www.amazon.com/Learning-OpenCV-Computer-Vision-Library/dp/0596516134
or, for the actual code working with cameras, one can refer to
If you don't have a reference image of the background, you could try averaging the first n frames and using the resulting image as your reference. This number n could be 20 or so, and that should be enough if the scene isn't too complex. I recommend reading the chapter about background subtraction in the official OpenCV book (http://www.amazon.com/Learning-OpenCV-Computer-Vision-Library/dp/0596516134), as some other similar techniques are presented there as well.
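The averaging idea above can be sketched in a few lines. This is an illustrative pure-Python version operating on grayscale frames stored as lists of rows of pixel values; in real OpenCV code you would use `cv2.accumulateWeighted` and `cv2.absdiff` instead, and the function names here are my own.

```python
def average_background(frames):
    """Average the first n grayscale frames into one reference image."""
    n = len(frames)
    height, width = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(width)]
            for y in range(height)]

def foreground_mask(frame, background, threshold=30):
    """Flag pixels that differ from the reference by more than the threshold."""
    return [[abs(p - b) > threshold for p, b in zip(row, bg_row)]
            for row, bg_row in zip(frame, background)]
```

Anything flagged True is treated as foreground; the threshold absorbs sensor noise and minor lighting flicker.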
No doubt in that, use OpenCV.
But remember, you have a long way to go.
1. First of all, you should be good at C++ and object-oriented programming.
Well, if you are not, try to learn them first. Check out the following link for some of the best resources: https://stackoverflow.com/questions/909323/what-are-good-online-resources-or-tutorials-to-learn-c
2. Then get OpenCV and install it.
Check out the OpenCV homepage for info about downloading and installing OpenCV.
3. Now get and read some good books on OpenCV.
The best book on OpenCV is "Learning OpenCV", written by Gary Bradski, the founder of OpenCV.
The second one is the "OpenCV Cookbook".
These books contain lots of examples on OpenCV along with descriptions.
4. Check out the OpenCV documentation.
The OpenCV documentation contains details of all the functions. It also includes a lot of tutorials, which are really good for everyone.
5. Also try running the OpenCV samples. They contain a lot of good programs.
And always, Google is your best friend. Ask everything there first; come here only when you are lost.
Work through all of the above and you will be really good at OpenCV - I am sure you will enjoy its power. Once you are done with these, you will have enough of an idea to realize your project. (Otherwise, you will post new questions every day asking for code to realize your project, which will be useless for you.)
For your understanding, your project includes advanced things like Optical Character Recognition. That is a big topic, so build yourself up from the basics. It will take time.
All the best.
You can use image subtraction. Cross-correlation is also acceptable here.
Google search phrase: background subtraction algorithm.
This book also contains the info you need.
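The cross-correlation approach mentioned above can be sketched on a 1-D signal. This is a pure-Python illustration with names of my own choosing; real OpenCV code would use `cv2.matchTemplate` with the `TM_CCORR_NORMED` method on 2-D images.

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation of two equally sized pixel lists.

    A score of 1.0 means the patch is a perfect (linear) match for
    the template; scores near 0 mean no correlation.
    """
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return num / (dp * dt) if dp and dt else 0.0

def best_match(signal, template):
    """Slide the template over the signal; return the best-scoring offset."""
    scores = [ncc(signal[i:i + len(template)], template)
              for i in range(len(signal) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Normalizing by mean and variance is what makes this robust to uniform brightness and contrast changes, which plain image subtraction is not.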
OpenCV is probably the most complete free image processing library.
There is also a book which describes both the library and some image processing techniques.
This is a reasonably complex problem, not exactly graduate research but challenging!
See this question for a list of other books.
1. Get OpenCV
Check out the OpenCV homepage to download the OpenCV source.
2. Check out this Stack Overflow question for more details on OpenCV on iOS:
iPhone and OpenCV
3. Get and read some good books on OpenCV
Have you taken a look at the Camshift paper by Gary Bradski? You can download it from here.
I used the skin detection algorithm a year ago to detect skin regions for hand tracking, and it is robust. It depends on how you use it.
The first problem with using color for tracking is that it is not robust to lighting variations or, as you mentioned, to people with different skin tones. However, this can be solved easily, as mentioned in the paper, by:
Throwing away the V channel in HSV and considering only the H and S channels is really enough (surprisingly) to detect different skin tones under different lighting variations. A plus side is that its computation is fast.
These steps and the corresponding code can be found in the original OpenCV book.
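The H-S histogram idea above can be sketched in pure Python. This is an illustration only: real code would use `cv2.calcHist` and `cv2.calcBackProject`, and the bin counts and function names here are my own choices rather than anything from the book.

```python
import colorsys

H_BINS, S_BINS = 30, 32  # arbitrary illustrative bin counts

def hs_bin(r, g, b):
    """Map an RGB pixel (0-255) to an (H, S) histogram bin; V is dropped."""
    h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return (min(int(h * H_BINS), H_BINS - 1),
            min(int(s * S_BINS), S_BINS - 1))

def build_histogram(skin_pixels):
    """Accumulate an H-S histogram from labelled skin-pixel samples,
    scaled so the peak bin equals 1.0."""
    hist = {}
    for rgb in skin_pixels:
        key = hs_bin(*rgb)
        hist[key] = hist.get(key, 0) + 1
    peak = max(hist.values())
    return {k: v / peak for k, v in hist.items()}

def skin_probability(hist, rgb):
    """Back-project one pixel: just a table lookup in the histogram."""
    return hist.get(hs_bin(*rgb), 0.0)
```

Because V is discarded, a pixel at half the brightness of a training sample falls in the same (H, S) bin and still scores as skin, which is exactly the lighting robustness described above.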
As a side note, I've also used Gaussian Mixture Models (GMM) before. If you are only considering color, then I would say using histograms or a GMM does not make much difference. In fact, the histogram would perform better (if your GMM is not constructed to account for lighting variations, etc.). A GMM is good if your sample vectors are more sophisticated (i.e. you consider other features), but speed-wise the histogram is much faster, because computing the probability map with a histogram is essentially a table lookup, whereas a GMM requires a matrix computation (for vectors with dimension > 1, in the formula for the multi-dimensional Gaussian distribution), which can be time-consuming for real-time applications.
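The cost difference described above can be made concrete. The sketch below uses 1-D Gaussians for brevity (a real GMM over color vectors would use the multi-dimensional formula with a covariance matrix), and the names are my own:

```python
import math

def gmm_probability(x, components):
    """Evaluate a 1-D Gaussian mixture at x.

    components is a list of (weight, mean, variance) tuples; every
    pixel costs one exponential per mixture component.
    """
    return sum(w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
               for w, m, v in components)

def hist_probability(hist, bin_index):
    """Histogram back-projection: a single dictionary lookup per pixel."""
    return hist.get(bin_index, 0.0)
```

Per pixel, the histogram path is one lookup regardless of how well trained it is, while the GMM path grows with the number of components and the vector dimension.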
So in conclusion, if you are only trying to detect skin regions using color, go with the histogram method. You can adapt it to consider local gradients as well (i.e. a histogram of gradients, though possibly not going to the full extent of Dalal and Triggs's human detection algorithm) so that it can differentiate between skin and regions with similar color (e.g. cardboard or wooden furniture) using local texture information. But that would require more effort.
For sample source code on how to use a histogram for skin detection, you can take a look at OpenCV's page here. But do note that the webpage mentions they use only the hue channel, and that using both hue and saturation would give a better result.
For a more sophisticated approach, you can take a look at the work on "Detecting naked people" by Margaret Fleck and David Forsyth. This was one of the earlier works on detecting skin regions that considers both color and texture. The details can be found here.
A great resource for source code related to computer vision and image processing, which happens to include code for visual tracking, can be found here. And no, it's not OpenCV.
Hope this helps.
I would recommend a book by OpenCV author Gary Bradski - Learning OpenCV: Computer Vision with the OpenCV Library.
It is not only a reference on how to use OpenCV, but also a comprehensive book on many computer vision topics, with many images that illustrate the concepts. It guides you through OpenCV basics, then through image processing using filters, convolutions, histograms, contours, segmentation, tracking, camera calibration, and 3D vision.
It is relatively easy to read (not too much math, just enough); I liked it very much.
There is an O'Reilly book by two of the major authors of OpenCV: Learning OpenCV: Computer Vision with the OpenCV Library by Bradski and Kaehler.
You should note that it is based on OpenCV 1.0, not the more recent 2.2.
Nonetheless, it will likely be useful for understanding the algorithms. For most of the higher-level algorithms, like corner detection for example, the book contains a mathematical description of the implementation. Also, the authors do a decent job of providing references to academic journal articles, so even if their own description of the implementation is lacking, you will be able to use the references as a starting point.