Each human eye takes an independent “photo” of the scene from a slightly different angle and position. The remarkable thing about the human brain is that it uses both images to reconstruct the 3D world, and it does so very quickly and remarkably accurately.
Replicating this ability in computers is a crucial part of image processing and robotics. A computer’s ability to understand its own position relative to its surroundings is a crucial step forward: it would allow robots to go where humans cannot and “learn” useful information about their environment.
Within the scope of this project we replicate basic stereo imaging given two photos taken at different horizontal positions from each other. From the two images we reconstruct the 3D scene, recovering the depth of each point from the camera plane.
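The core relation behind this reconstruction can be sketched as follows. For a rectified stereo pair, a point appearing at horizontal pixel coordinates x_left and x_right has disparity d = x_left − x_right, and its depth from the camera plane follows from triangulation as Z = f·B/d, where f is the focal length in pixels and B is the baseline between the two cameras. The function below is a minimal illustrative sketch; the names and the example values (f = 700 px, B = 0.1 m) are assumptions, not taken from this project.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth Z (in metres) of a point from its stereo disparity.

    Assumes a rectified pair: both image planes are coplanar and the
    cameras are separated horizontally by `baseline_m`.
    """
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinity; negative
        # disparity would place the point behind the cameras.
        raise ValueError("disparity must be positive for points in front of the cameras")
    return focal_px * baseline_m / disparity_px

# Hypothetical example: f = 700 px, B = 0.1 m, disparity = 35 px
print(depth_from_disparity(35.0, 700.0, 0.1))  # prints 2.0 (metres)
```

Note the inverse relationship: nearby objects shift more between the two views (large disparity, small depth), while distant objects barely move.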