Vegetable robot detects and picks ripe crops using Mstar software

The organization

The program aims to bring new knowledge into practice by carrying out feasibility studies, functional designs, prototype development, testing and validation, and by supporting new product implementations.

The challenges

Over the past decades, food production in greenhouses has been confronted with ever larger production facilities, rising labor demands, and increasing product quality demands from consumers. Many operations, such as harvesting, are still performed manually. However, the availability of a skilled workforce willing to accept repetitive tasks in harsh greenhouse climate conditions is decreasing rapidly. Robotics and sensing technologies offer an alternative that makes crop production more efficient and more sustainable.

The solution

Mstar developed a robot to pick vegetables. The prototype comprises the following modules: a tool to cut and catch the vegetable; a combined color and 3D camera; an industrial six-degrees-of-freedom robot arm; and computers and electronics, all assembled on a battery-powered platform that moves the robot autonomously through the greenhouse. Once the camera system has found a ripe vegetable, the robotic arm positions the tool on top of the crop stem. The arm then moves the tool a few centimeters down with a vibrating knife and cuts the vegetable off near the main plant stem.

Object detection with Mstar Software

A central function of the robot is the detection of ripe crops. For successful operation, the 3D location of each crop must be determined with high accuracy. The chosen solution is based on an RGB-D camera that simultaneously reports color and depth information. Using this camera and a custom-built, LED-based flash illumination system, RGB images of the plant are acquired both from overview distance and at close range.

To enable high frame-rate operation, a straightforward shape- and color-based detection algorithm was implemented in Mstar. The algorithm scans each acquired image for regions matching the target color thresholds. Detected regions are then refined by removing detections outside predefined minimum/maximum sizes, and additional shape parameters are calculated to further remove misdetections.

Finally, depth information from the camera is used to compute the volume of each detected region. This information serves to further prune false detections, avoid non-harvestable crop clusters, and define harvest priorities. The exact 3D location of the center of mass is calculated from the depth values within the detected region using a standard pixel-to-world transformation. Given the subset of regions classified as harvestable vegetables, a harvesting sequence is defined. The robot arm then approaches each target under visual-servo control, which keeps the target in the center of the image until it is reached.
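As an illustration, the scan-and-filter steps described above might look like the following Python sketch. It is a minimal stand-in, not the actual Mstar implementation: the function name, color thresholds, and size limits are illustrative, and a hand-rolled BFS labeler is used in place of a library routine such as scipy.ndimage.label.

```python
import numpy as np

def detect_ripe_regions(rgb, lower, upper, min_area, max_area):
    """Scan an RGB image for regions inside the target color thresholds,
    then discard regions outside the allowed size range (a simplified
    sketch of the shape- and color-based detection described above)."""
    # Binary mask: pixels whose channels all fall inside [lower, upper]
    mask = np.all((rgb >= lower) & (rgb <= upper), axis=-1)

    # Label 4-connected components with a simple BFS (illustrative
    # stand-in for a library call such as scipy.ndimage.label)
    labels = np.zeros(mask.shape, dtype=int)
    regions = []
    next_label = 0
    h, w = mask.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x] and labels[y, x] == 0:
                next_label += 1
                labels[y, x] = next_label
                stack = [(y, x)]
                pixels = []
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                # Size filter: reject detections outside the allowed area
                if min_area <= len(pixels) <= max_area:
                    regions.append(np.array(pixels))
    return regions
```

For example, running this on an image containing one large reddish blob and a single stray reddish pixel, with a minimum area of a few pixels, keeps only the blob. In the real system, additional shape parameters and the depth-derived volume would then prune the surviving regions further.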
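The 3D localization step, computing the center of mass of a detected region from depth data via a pixel-to-world transformation, can be sketched with the standard pinhole camera model. The intrinsics fx, fy, cx, cy below are hypothetical placeholders, not the calibration of the robot's actual camera.

```python
import numpy as np

def region_centroid_3d(region_pixels, depth, fx, fy, cx, cy):
    """Back-project the pixels of a detected region into camera
    coordinates using the pinhole model and return the 3D center of
    mass. region_pixels is an (N, 2) array of (row, col) indices;
    depth holds per-pixel depth in meters; fx, fy, cx, cy are
    illustrative camera intrinsics."""
    vs = region_pixels[:, 0].astype(float)  # rows (image v axis)
    us = region_pixels[:, 1].astype(float)  # cols (image u axis)
    z = depth[region_pixels[:, 0], region_pixels[:, 1]].astype(float)
    # Pinhole back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    X = (us - cx) * z / fx
    Y = (vs - cy) * z / fy
    points = np.stack([X, Y, z], axis=1)
    return points.mean(axis=0)  # 3D center of mass of the region
```

A pixel at the principal point with depth Z back-projects to (0, 0, Z), which is a quick sanity check for the transformation. In practice the resulting camera-frame point would still be transformed into the robot arm's coordinate frame before the visual-servo approach begins.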

How can we support you?

We will be happy to advise you on product selection and find the right solution for your application.