
YOLO

YOLO, a great library my colleague (Ashkan) and I found for detecting objects. YOLO stands for You Only Look Once, and it is a deep-learning computer-vision library [1]. The code base is written in C (not C++) and it has GPU support. These features, along with the ability to design new layers, make it awesome. But its most significant feature is its speed at detecting objects in real time. The default network in this project is trained to detect a wide variety of objects. However, the library has a huge disadvantage: being written in C makes further development hard. There is a C++ wrapper for it in [2], but it isn't satisfying. I am still trying to fit it into my program, but after a week no progress has been made... I plan to rewrite the wrapper (and, in the worst case, the library itself) to see if I can 1) compile it under C++ and integrate it into my program, and 2) make it even faster. The Y
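Part of why YOLO is so fast is that it runs a single forward pass and then only cheap post-processing on the raw boxes. As a minimal illustration (not the library's actual C API), here is a sketch of the confidence-threshold plus non-maximum-suppression step that YOLO-style detectors apply; the box format and threshold values are assumptions:

```python
def iou(a, b):
    # boxes as (x1, y1, x2, y2); intersection-over-union of two boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, conf_thresh=0.5, iou_thresh=0.45):
    # detections: list of (box, score); keep high-score boxes and drop
    # any box that overlaps an already-kept box too much
    dets = [d for d in detections if d[1] >= conf_thresh]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in dets:
        if all(iou(box, k[0]) < iou_thresh for k in kept):
            kept.append((box, score))
    return kept

# two near-duplicate boxes collapse into one; the distant box survives
print(nms([((0, 0, 10, 10), 0.9), ((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.7)]))
```

With the sample input above, the 0.8-score box is suppressed because its overlap with the 0.9-score box exceeds the IoU threshold.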

Stage #1 - Replanting a pineapple

I just read an article about replanting a pineapple, and guess what: here I am! After reading a couple of search results, I decided to start the process. Let's see what happens! I have removed the rest of the fruit, and also the lower leaves. According to the article [and the search results], I should wait a couple of days until the lower part dries completely. Then I can soak it in water for a few weeks and hopefully the roots will start to grow! Afterward, I can plant it in clay. The only problem is that I need somewhere with a really high amount of sunlight! [Of which there is none in my room...] A Pineapple. If everything goes well, this cute plant will be my friend for a while. I really enjoy having such a thing on my desk. Please let me know if you have ever tried to replant a pineapple; I am really looking for some experience in this field!

HSL Ball Detection

I am about to finish the ball detection method, a trick that can [hopefully] be used to detect the soccer ball in every soccer field! I am trying my best to design the algorithm so that no prior configuration is required; however, some parameters still need to be set before each match. Today I am going to test it on a real humanoid robot. Maybe later I will test it on NAO robots as well! Photo of the MRL-HSL (Humanoid) Field and Ball. I'll push the code to my GitHub ( http://github.com/arefmq ) as soon as I reach a stable version. I am also going to provide a detailed description of how it works!
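The post doesn't describe the trick itself, so purely as a generic illustration: one parameter-light shape check often used when filtering ball candidates is circularity, 4πA/P², which is 1.0 for a perfect circle and smaller for anything else. A quick sketch (the threshold is an assumption, not a value from my detector):

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: equals 1.0 for a perfect circle, less for other shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def is_ball_candidate(area, perimeter, thresh=0.8):
    # Reject blobs whose outline is far from circular.
    return circularity(area, perimeter) >= thresh

# a circle of radius 10 passes; a 20x5 rectangle does not
r = 10.0
print(is_ball_candidate(math.pi * r * r, 2 * math.pi * r))  # True
print(is_ball_candidate(20 * 5, 2 * (20 + 5)))              # False
```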

Working Mode = Switched back to normal

After a hard period of preparation for the TOEFL and GRE tests, I am finally back to my routine: programming! Right now I am kind of into freelancing along with my research hobbies. And as I am writing this post, my laptop is busy training a neural network for human facial emotion detection. First I got a little taste of Google's TensorFlow library, and it was amazingly easy to build and use! However, the possibilities go even further: I found a Python wrapper for it named TFLearn, which makes TensorFlow as easy as a few lines! Right now I am training my network on a dataset of faces that I got from the Kaggle website, to see whether TFLearn works or not. (I hope it does!) If everything goes well, I'll be able to recognize the emotion of faces that have been detected by OpenCV.
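The training script itself isn't in the post, but one step is easy to sketch: face-emotion datasets on Kaggle are commonly distributed as a CSV with an integer emotion label and a string of space-separated grayscale pixel values per row. The column names and the 48x48 size below are assumptions about that format, not something from my actual pipeline:

```python
import csv
import io

def load_emotion_rows(csv_text, width=48, height=48):
    """Parse rows of (emotion, '0 12 34 ...') into (label, pixel-list) pairs."""
    rows = []
    for rec in csv.DictReader(io.StringIO(csv_text)):
        pixels = [int(p) for p in rec["pixels"].split()]
        if len(pixels) != width * height:
            continue  # skip malformed rows
        rows.append((int(rec["emotion"]), pixels))
    return rows

# tiny synthetic example: one valid all-black face with label 3
sample = "emotion,pixels\n3," + " ".join(["0"] * (48 * 48)) + "\n"
data = load_emotion_rows(sample)
print(len(data), data[0][0])  # 1 3
```

From here the pixel lists would be reshaped into images and fed to whatever network TFLearn builds.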

RoboCup 2016

The #robocup_2016 is almost over now; today was its last day. B-Human got first and UT Austin got second, but they both played really well. The outdoor field was quite challenging, and so was the new (white) ball. Although most of the teams could see the ball, almost all of them had problems with farther balls. The day is ending, and everyone is packing up, maybe heading home. It's quite good to be heading back home, but somehow it feels a bit sad; we are going to say good-bye to all of our friends and leave. However, I am not going back right after RoboCup; I am going to visit some friends and take a look around.

White Ball Detection

As the SPL rules changed in 2016, the official ball changed as well. Now, robots should play soccer with a black-and-white foam ball. The ball is mostly white, with black pentagons on it. Since it is the same color as the goal posts, field lines, and other robots, it is harder for the robots to distinguish the new ball from these objects. Formerly, since the ball was orange, it was uniquely colored in the field. So it could be detected simply by searching the image for specific colors and then applying a few filters, such as width and shape. At the beginning, our strategy for detecting the ball after this change was simply to calculate the image edges and apply a Hough transform to them. We expected an acceptable outcome just by applying a few filters to the result. But it didn't work! There were more problems than we expected. Calculating edges for the whole image was super heavy, so we had to reduce the image resolution 4 times. Then, since the resolution was too low, the
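To make the cost problem concrete, the two steps above, reducing the resolution and computing edges, can be sketched like this; the simple gradient-magnitude edge measure in NumPy stands in for whatever edge operator we actually used:

```python
import numpy as np

def downsample(img, factor=4):
    # keep every factor-th pixel, so the edge pass touches 1/factor^2 as many
    return img[::factor, ::factor]

def edge_magnitude(img):
    # finite-difference gradient magnitude as a cheap edge measure
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

img = np.zeros((480, 640))
img[:, 320:] = 255.0            # a vertical brightness step (a "line")
small = downsample(img)          # now 120 x 160
edges = edge_magnitude(small)
print(small.shape)               # (120, 160)
```

The trade-off the post describes follows directly: a 4x reduction per axis cuts the edge-computation work 16-fold, but a ball that was already small in the full image shrinks to just a few pixels.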

Detecting White Goal Posts

Drinking tea, checking social media, and resting is quite nice when you get the first, essential results. After many, many sleepless hours, the basics of my project are finally done, and now we are able to detect the white goal posts. But a long path still lies ahead, a long path to follow. White Goal Post - The blue rectangles are the detected goal posts. The image was originally taken from the robot's camera. Here is some detail on how I implemented this feature: First of all, let me note that this is not the primary version of our goal detector. It's just a backup code, an alternative to our main vision (Color-Calibration-Free Image Processing), which is in progress but not finished yet. After reviewing code releases from other RoboCup teams, I decided to use the B-Human code release [1] as my starting point and basic structure. But since it was designed to detect yellow goal posts, it has fundamental issues with white goal post detection. First
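As an illustration of the general idea only (not B-Human's actual implementation, which I used just as a structural starting point): one common way to find goal-post candidates is to scan image columns and keep those with long vertical runs of "white" pixels. Everything below, the brightness threshold and the minimum run length, is an assumed toy version:

```python
def longest_white_run(column, white_thresh=200):
    """Length of the longest consecutive run of bright pixels in one column."""
    best = run = 0
    for v in column:
        run = run + 1 if v >= white_thresh else 0
        best = max(best, run)
    return best

def goal_post_columns(image, min_run=5):
    # image: list of rows of grayscale values; returns candidate x positions
    width = len(image[0])
    cols = [[row[x] for row in image] for x in range(width)]
    return [x for x, col in enumerate(cols) if longest_white_run(col) >= min_run]

# toy 8x6 image with a bright vertical bar at x = 2
img = [[255 if x == 2 else 30 for x in range(6)] for _ in range(8)]
print(goal_post_columns(img))  # [2]
```

A real detector would then merge adjacent candidate columns into one rectangle and check that the run starts near the field boundary, which is roughly what the blue rectangles in the screenshot represent.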