Some productive hacking
Over the weekend I made a pretty decent amount of progress on my vision project. I created a simulated stereo system by taking images with the camera on the left-hand side, then moving the camera and taking images of the same scene from the right-hand side. I then load these images from disk together to simulate the effect of having two cameras. Hopefully I will get my other camera soon, as this workaround won't hold up for long.
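In case it's useful to anyone, the capture side boils down to something like this sketch in Python with OpenCV (the directory layout and file names are placeholders I've made up for illustration, not my actual setup):

```python
import cv2

def grab_stereo_pair(index):
    """Simulate a two-camera rig by loading a left/right image pair
    that was captured earlier by physically moving one camera.
    (The file naming scheme here is illustrative only.)"""
    left = cv2.imread(f"frames/left_{index:03d}.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(f"frames/right_{index:03d}.png", cv2.IMREAD_GRAYSCALE)
    return left, right
```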
I am now at the point where I can match features between the stereo images and extract depth information from the disparity between them. I have also completed the matrix mathematics that estimates the image coordinates of a given feature in the next frame, given the camera translation between frames. That last bit was challenging, as it required relearning a bunch of linear algebra and trigonometry I hadn't used since university.
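Roughly, both pieces look like the sketch below, assuming a simple pinhole camera with focal length f and stereo baseline B, and ignoring rotation between frames for brevity. The feature matcher (ORB here), the intrinsics, and the translation values are all illustrative stand-ins rather than my actual numbers:

```python
import cv2
import numpy as np

# Load the simulated stereo pair (placeholder file names).
left = cv2.imread("frames/left_000.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frames/right_000.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features between the two views.
orb = cv2.ORB_create()
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)

# Illustrative intrinsics: focal length (pixels) and baseline (metres).
f, B = 700.0, 0.12
cx, cy = left.shape[1] / 2.0, left.shape[0] / 2.0

# Triangulate each match: depth follows from disparity via Z = f * B / d.
points_3d = []
for m in matches:
    (xl, yl) = kp_l[m.queryIdx].pt
    (xr, _) = kp_r[m.trainIdx].pt
    d = xl - xr                      # horizontal disparity
    if d <= 0:
        continue                     # reject impossible matches
    Z = f * B / d
    X = (xl - cx) * Z / f
    Y = (yl - cy) * Z / f
    points_3d.append((X, Y, Z))

# Predict each feature's image coordinates in the next frame, given
# the camera translation t between frames (rotation ignored here).
t = np.array([0.05, 0.0, 0.10])      # illustrative translation, metres
predicted = []
for (X, Y, Z) in points_3d:
    Xc, Yc, Zc = np.array([X, Y, Z]) - t
    predicted.append((f * Xc / Zc + cx, f * Yc / Zc + cy))
```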
My next challenge is to run a least squares minimisation over the features successfully matched between frames, to find the error between each projected feature location and the actual feature location. That error will then feed back into the camera location calculation as a correction term. I am even thinking of using a Kalman filter to fuse the robot's odometry with the calculated camera ego-motion.
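The minimisation I have in mind looks roughly like this, using scipy's least_squares and the same translation-only motion model as above (the intrinsics and toy data are made up so the sketch runs on its own):

```python
import numpy as np
from scipy.optimize import least_squares

f, cx, cy = 700.0, 320.0, 240.0      # illustrative intrinsics

def reprojection_residuals(t, points_3d, observed_uv):
    """Difference between where the motion model says each feature
    should appear and where it was actually matched."""
    moved = points_3d - t            # points in the new camera frame
    u = f * moved[:, 0] / moved[:, 2] + cx
    v = f * moved[:, 1] / moved[:, 2] + cy
    return (np.column_stack([u, v]) - observed_uv).ravel()

# Toy data standing in for triangulated features and their matched
# positions in the next frame.
points_3d = np.array([[0.5, 0.2, 3.0], [-0.3, 0.1, 4.0], [0.1, -0.4, 2.5]])
true_t = np.array([0.05, 0.0, 0.10])
moved = points_3d - true_t
observed_uv = np.column_stack([f * moved[:, 0] / moved[:, 2] + cx,
                               f * moved[:, 1] / moved[:, 2] + cy])

# Solve for the translation that minimises the reprojection error.
result = least_squares(reprojection_residuals, x0=np.zeros(3),
                       args=(points_3d, observed_uv))
print(result.x)                      # recovers something close to true_t
```

That recovered translation is exactly the ego-motion estimate a Kalman filter could then fuse with the wheel odometry.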
I still have a long way to go before this prototype is finished and I know whether the system will even work, but it is pretty exciting to have got this far already!