White Ball Detection
With the SPL rule change in 2016, the official ball changed as well. Robots now play soccer with a black and white foam ball. The ball is mostly white, with black pentagons on it. Since it shares its color with the goal posts, field lines, and other robots, it is harder for robots to distinguish the new ball from these objects.
Formerly, the ball was orange, the only object of that color on the field, so it could be detected simply by searching the image for that specific color and then applying a few filters such as width and shape.
At the beginning, our strategy for detecting the ball after this change was simply to calculate the image edges and apply a Hough transform to them, expecting an acceptable outcome after a few filters on the result. But it didn't work!
There were more problems than we expected. Calculating edges for the whole image was extremely heavy, so we had to reduce the image resolution by a factor of four. But with the resolution that low, the robot could not detect the ball farther than 2 or 3 meters away. Although downsampling recovered some of the lost cycle time, the code was still so heavy that we couldn't react properly to the ball even when we saw it, so the robots used to lose the ball even during their search task.
Then we implemented the Randomized Hough Transform (RHT) and got a great improvement in our cycle time. But the side effect was that the ball was missed about 50% of the time, "randomly"! And we were still behind our clock.
Finally, in my last refactor, I changed both the edge detection and the Hough transform modules. The edge detection now runs on the full-resolution image, and the cycle time is just where it has to be, around 10 ms.
To achieve this with the edge detection, I came up with a few rules:
1- No edge detection runs above the horizon.
2- Edge detection starts just below the field boundary.
3- (and the most important one) The closer you get to the horizon, the denser your search space should be, as the sketch below illustrates.
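Just to make these rules concrete, here is a minimal sketch in C++ of how the scan rows can be scheduled so they get denser toward the horizon. The names and the tuning constant are ones I made up for this post, not taken from the actual code:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical sketch: choose which image rows to run the edge filter
// on, starting just below the field boundary (rule 2), never above the
// horizon (rule 1), and spacing rows more densely near the horizon
// (rule 3).
std::vector<int> scanRows(int imageHeight, int horizonY, int fieldBoundaryY)
{
    std::vector<int> rows;
    int y = std::max(horizonY, fieldBoundaryY); // rules 1 and 2
    while (y < imageHeight)
    {
        rows.push_back(y);
        // The step grows with distance from the horizon: dense near the
        // horizon (far, small balls), sparse near the bottom of the image
        // (close, large balls). The divisor 32 is an illustrative value.
        int step = 1 + (y - horizonY) / 32;
        y += step;
    }
    return rows;
}
```

The idea is that a far-away ball appears near the horizon and only a few pixels tall, so skipping rows there would miss it entirely, while a close ball covers so many rows that a coarse step still hits it.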
There are also a few more tricks in addition to the above. The edge-scanning step is dynamic, as rule 3 says: the farther you get from the horizon, the wider the step becomes. Then the Fast Random Hough Transform (FRHT) is applied. One edge point is selected at random, and the area around it is fully revisited to refine the edge space; that is, the edge filter is applied to all the pixels around that point. Afterward, the FRHT is applied to the refined edges, and the result is marked as either a circle or not. With this combination of edge scanning and circle detection, we achieved speed and precision simultaneously.
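To give a rough idea of how the circle-hypothesis part of such a transform can work, here is a much-simplified sketch. It skips the local edge-refinement step described above and just fits a circle through three random edge points, then counts how many other edge points support it. All names and thresholds here are illustrative, not taken from the actual implementation:

```cpp
#include <cmath>
#include <optional>
#include <random>
#include <vector>

struct Point { float x, y; };
struct Circle { float cx, cy, r; };

// Circumscribed circle through three points (fails if they are collinear).
std::optional<Circle> circleFrom3(Point a, Point b, Point c)
{
    float d = 2.f * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
    if (std::fabs(d) < 1e-6f)
        return std::nullopt; // degenerate: collinear points
    float a2 = a.x * a.x + a.y * a.y;
    float b2 = b.x * b.x + b.y * b.y;
    float c2 = c.x * c.x + c.y * c.y;
    float cx = (a2 * (b.y - c.y) + b2 * (c.y - a.y) + c2 * (a.y - b.y)) / d;
    float cy = (a2 * (c.x - b.x) + b2 * (a.x - c.x) + c2 * (b.x - a.x)) / d;
    return Circle{cx, cy, std::hypot(a.x - cx, a.y - cy)};
}

// One randomized iteration: hypothesize a circle from three random edge
// points and accept it only if enough other edge points lie on its rim.
std::optional<Circle> frhtIteration(const std::vector<Point>& edges,
                                    std::mt19937& rng)
{
    if (edges.size() < 3)
        return std::nullopt;
    std::uniform_int_distribution<size_t> pick(0, edges.size() - 1);
    auto circle = circleFrom3(edges[pick(rng)], edges[pick(rng)], edges[pick(rng)]);
    if (!circle)
        return std::nullopt;
    int support = 0;
    for (const Point& p : edges) // count edge points close to the rim
        if (std::fabs(std::hypot(p.x - circle->cx, p.y - circle->cy) - circle->r) < 1.5f)
            ++support;
    if (support > 20) // 20 is an illustrative acceptance threshold
        return circle;
    return std::nullopt;
}
```

Because each iteration only touches a handful of points plus one verification pass, many hypotheses can be tried per frame, which is where the speed comes from.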
Finally, the result is filtered by its size and by the density of white pixels inside it. If the candidate circle passes this test, its boundary is calculated precisely, and the size and color density (this time of both black and white) are measured again.
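A rough sketch of this kind of color-density test could look like the following; the isWhite/isBlack classifiers and the 0.85 threshold are placeholders of mine, not the real values:

```cpp
#include <functional>

// Illustrative color-density filter: count the pixels inside the
// candidate circle that are classified white or black, and require
// their combined share to be high enough for a ball.
bool passesColorDensity(float cx, float cy, float radius,
                        const std::function<bool(int, int)>& isWhite,
                        const std::function<bool(int, int)>& isBlack)
{
    int total = 0, ballColored = 0;
    int r = static_cast<int>(radius);
    for (int dy = -r; dy <= r; ++dy)
        for (int dx = -r; dx <= r; ++dx)
        {
            if (dx * dx + dy * dy > r * r) // only pixels inside the circle
                continue;
            ++total;
            int x = static_cast<int>(cx) + dx;
            int y = static_cast<int>(cy) + dy;
            if (isWhite(x, y) || isBlack(x, y))
                ++ballColored;
        }
    return total > 0 && ballColored >= static_cast<int>(0.85f * total);
}
```

Checking white alone in the first pass and both black and white in the second mirrors the two-stage filtering described above: the cheap test rejects most false candidates before the precise boundary is computed.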
This method currently works well for detecting the ball, but it still has a few problems; for example, it sometimes confuses the ball with other robots' body parts. I am trying to implement a neural network to learn the differences between these two classes. I hope I can finish this part before the RoboCup competition and release it as well!
By the way, the code is currently available on my GitHub:
https://github.com/ArefMq/SPLBallPreceptor
Please feel free to send a pull request, and/or let me know if you have any comments!