This work is funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 872614 (Smart4All) through the open call project AgriAdapt.

[EC logo]

Project team:


Computer vision is poised to revolutionize agriculture. Cameras mounted on unmanned aerial vehicles (UAVs) can provide a detailed picture of the fields, and deep neural network (DNN)-based object recognition can help us understand the state of the land. However, the full potential of computer vision can be realized only if the processing happens directly on the UAVs. This would enable UAVs to provide information that could drive real-time, location-specific action. For instance, a UAV could detect weeds that an on-the-ground robot could immediately exterminate.

The biggest obstacles to realizing this vision are the limited processing power and battery capacity of UAVs. DNN models require substantial resources, which prevents on-device inference on anything but the most powerful (and expensive) UAV models.

Compression techniques have been developed to reduce the resource appetites of DNN models. However, compression often results in a permanently impaired DNN. In our previous work, we developed a framework for dynamically adapting a DNN by slimming the network, i.e. using only a fraction of its parameters, which leads to a proportional reduction of its computational and energy requirements. We also developed an algorithm for context-driven adaptation of the slimming ratio, along with support for running such a network on mobile/embedded devices. The concept has been successfully demonstrated on smartphones, achieving a 30% reduction in energy usage with only a 1% loss of inference accuracy in a human-activity recognition app [more details].
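To make the slimming idea concrete, here is a minimal, illustrative sketch (not the project's actual framework): a fully-connected layer that can run at a reduced width ratio by using only the leading fraction of its output units, so the multiply-accumulate (MAC) count, and hence compute and energy, shrinks proportionally. All names and shapes are hypothetical.

```python
import random

class SlimmableLayer:
    """A fully-connected layer whose effective width can be reduced at run time."""

    def __init__(self, n_in, n_out):
        self.n_in, self.n_out = n_in, n_out
        # Full weight matrix; a slimmed forward pass uses only the leading rows.
        self.w = [[random.gauss(0, 0.1) for _ in range(n_in)]
                  for _ in range(n_out)]

    def forward(self, x, ratio=1.0):
        """Compute outputs using only the first `ratio` fraction of units."""
        k = max(1, int(self.n_out * ratio))
        return [sum(wi * xi for wi, xi in zip(row, x)) for row in self.w[:k]]

    def macs(self, ratio=1.0):
        """Multiply-accumulate count: proportional to the slimming ratio."""
        return max(1, int(self.n_out * ratio)) * self.n_in

layer = SlimmableLayer(n_in=128, n_out=64)
x = [1.0] * 128
print(layer.macs(1.0))              # full network: 8192 MACs
print(layer.macs(0.5))              # slimmed to 50%: 4096 MACs
print(len(layer.forward(x, 0.5)))   # 32 output units used
```

A real slimmable network applies the same principle per channel across convolutional layers and is trained so that every width setting remains accurate, but the cost model is the same: halving the width roughly halves the work.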

[AgriAdapt drone]

In this project, in collaboration with the Italian SME Geo-K, our framework will be harnessed to run complex DNNs for weed recognition directly on low-end UAVs. An algorithm will be developed that uses the low-power slimmed DNN whenever possible and falls back to the full DNN only for hard-to-classify images.
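One common way to realize such a slim-first scheme is a confidence-gated cascade: run the cheap slimmed model, and escalate to the full model only when the prediction looks uncertain. The sketch below is an assumed design, not the project's final algorithm; the threshold, models, and uncertainty proxy are all placeholders.

```python
import math

def softmax_confidence(scores):
    """Top-class softmax probability, a common (if crude) uncertainty proxy."""
    exps = [math.exp(s - max(scores)) for s in scores]
    return max(exps) / sum(exps)

def cascade_classify(image, slim_model, full_model, threshold=0.9):
    """Run the slimmed DNN first; re-run the full DNN only on low-confidence images."""
    scores = slim_model(image)
    if softmax_confidence(scores) >= threshold:
        return scores, "slim"          # confident: the cheap path suffices
    return full_model(image), "full"   # hard image: pay for full accuracy

# Toy stand-in models returning raw class scores for a weed-vs-crop problem.
slim = lambda img: [2.5, 0.1]   # clearly separated scores -> high confidence
full = lambda img: [0.2, 0.3]
scores, used = cascade_classify(None, slim, full)
print(used)  # -> "slim"
```

The threshold controls the energy/accuracy trade-off: a lower threshold keeps more images on the slimmed path (less energy), while a higher one routes more of them to the full DNN (higher accuracy on hard cases).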