Detecting White Goal Posts

Drinking tea, checking social media, and resting is quite nice once you get the first, essential results. After many sleepless hours, the basics of my project are finally done, and we are now able to detect the white goal posts. But there is still a long path ahead, a long path to follow.
White Goal Post - The blue rectangles are the detected goal posts. The image was originally taken by the robot's camera.

Here are some details of how I implemented this feature:
First of all, let me note that this is not the primary version of our goal detector. It's just backup code: an alternative to our main vision pipeline (Color Calibration Free Image Processing), which is in progress but not finished yet.
After reviewing the code releases of other RoboCup teams, I decided to use the B-Human code release [1] as my starting point and basic structure. But since it was designed to detect yellow goal posts, it has fundamental issues with white goal post detection. First, the white background makes it hard to find a sensible threshold to separate the goal posts from the background. Besides that, if the goal frame (the structure that holds the posts and the net) is white, as allowed by the SPL 2015 rules [2], that also becomes an issue.
To overcome these problems, I chose to scan along the field boundary instead of scanning the horizon (as the B-Human goal perceptor module does) and flag any violation of the field boundary as a possible goal post, since every goal post ends inside the field. This method also reduces the false positives caused by objects outside the field. However, the field boundary provider module often doesn't work well, so I kept the previous scan method (scanning the horizon) alongside this one, just in case. This means I scan both the field boundary and the horizon for white pixels and then merge the results.
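
To make the boundary scan concrete, here is a minimal sketch, assuming a hypothetical BoundaryPoint structure and jump threshold (the real B-Human modules use their own image and boundary representations):

#include <cstdlib>
#include <vector>

// Hypothetical structure: one sampled point of the field boundary polyline.
struct BoundaryPoint { int x, y; };

// Flag x positions where the field boundary jumps sharply. A white goal post
// standing on the boundary "cuts into" it, so its foot shows up as a
// discontinuity between neighbouring boundary samples.
std::vector<int> findBoundaryViolations(const std::vector<BoundaryPoint>& boundary,
                                        int jumpThreshold /* pixels */)
{
  std::vector<int> candidates;
  for(size_t i = 1; i < boundary.size(); ++i)
    if(std::abs(boundary[i].y - boundary[i - 1].y) > jumpThreshold)
      candidates.push_back(boundary[i].x);
  return candidates;
}

The same white-pixel test also runs along the horizon, and the two candidate lists are simply merged before the next stage.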
After the scan is done, it's important to have a precise estimation of the object's borders. Since it is almost impossible to estimate them from thresholded pixels alone, I employed a color gradient follower. Its purpose is to track the rate of color change and return false as soon as that rate falls out of track. An 'out of track' result means the compared pixels have different colors and belong to different segments of the image. This is used to estimate the object's boundary in all four directions.
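
The gradient follower can be sketched roughly like this; the Pixel type, the image interface (width, height, at()), and the idea of summing per-channel differences are my illustrative assumptions, not the exact implementation:

#include <cstdint>
#include <cstdlib>

// Hypothetical YCbCr pixel; the real image class differs.
struct Pixel { std::uint8_t y, cb, cr; };

// True while the color change between two neighbouring pixels stays below a
// maximum change rate; false means we crossed into a different image segment.
bool onTrack(const Pixel& a, const Pixel& b, int maxChangeRate)
{
  return std::abs(int(a.y) - int(b.y))
       + std::abs(int(a.cb) - int(b.cb))
       + std::abs(int(a.cr) - int(b.cr)) <= maxChangeRate;
}

// Walk from (x, y) in direction (dx, dy) until the follower falls out of
// track; the last on-track position approximates the object's border.
template<typename Image>
int followGradient(const Image& img, int x, int y, int dx, int dy, int maxChangeRate)
{
  int steps = 0;
  while(x + dx >= 0 && x + dx < img.width
     && y + dy >= 0 && y + dy < img.height
     && onTrack(img.at(x, y), img.at(x + dx, y + dy), maxChangeRate))
  {
    x += dx;
    y += dy;
    ++steps;
  }
  return steps; // distance travelled before leaving the segment
}

Running this in the four directions (dx, dy) = (±1, 0) and (0, ±1) from a seed pixel inside a candidate gives the left, right, top, and bottom borders.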
Finally, some sanity checks are applied to ensure the validity of the data. These checks include width, height, distance to the horizon, aspect ratio, etc.
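
As a rough illustration of those checks (the actual thresholds would come from camera calibration and field dimensions; the names and numbers here are hypothetical):

// Hypothetical candidate representation, in image coordinates.
struct PostCandidate { int width, height, bottomY; };

bool passesSanityChecks(const PostCandidate& c, int horizonY)
{
  if(c.width < 3 || c.height < 10)            // too small to be a post
    return false;
  if(float(c.height) / float(c.width) < 2.f)  // posts are tall and thin
    return false;
  if(c.bottomY <= horizonY)                   // a post's foot must lie below the horizon
    return false;
  return true;
}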
For the second issue, the white goal frame, I apply a filter to every pair of detected goal posts: if two goal posts are close enough to each other (e.g., closer than 1 meter), the one farther from the robot is removed.
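
A sketch of that filter, assuming candidates have already been projected into field coordinates (millimetres relative to the robot; the names are mine):

#include <cmath>
#include <vector>

struct FieldPost { float x, y; };  // hypothetical field-coordinate candidate

// For every pair of candidates closer than minSeparation (e.g. 1000 mm),
// drop the one farther from the robot: it is more likely part of the white
// goal frame than a real post.
std::vector<FieldPost> filterFramePosts(std::vector<FieldPost> posts, float minSeparation)
{
  std::vector<bool> removed(posts.size(), false);
  for(size_t i = 0; i < posts.size(); ++i)
    for(size_t j = i + 1; j < posts.size(); ++j)
    {
      if(removed[i] || removed[j])
        continue;
      if(std::hypot(posts[i].x - posts[j].x, posts[i].y - posts[j].y) < minSeparation)
      {
        const float di = std::hypot(posts[i].x, posts[i].y);
        const float dj = std::hypot(posts[j].x, posts[j].y);
        removed[di > dj ? i : j] = true;  // remove the distant one
      }
    }
  std::vector<FieldPost> kept;
  for(size_t i = 0; i < posts.size(); ++i)
    if(!removed[i])
      kept.push_back(posts[i]);
  return kept;
}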
However, this filter doesn't work well enough for me yet and I'm still working on it, so I'd be glad if you have an idea to share!
The code is available on my GitHub and is easy to use: all you need to do is replace the B-Human goal perceptor module with it.

Arash - my partner in developing this project!

REFERENCES:
[1] B-Human Code Release 2013
[2] SPL 2015 Rules
