Developing a system that can learn and adapt to the computer vision requirements of a given agricultural scenario with minimal human intervention is a complex task. Such capabilities are nevertheless required for Internet of Things (IoT) deployments, especially those in which cameras continuously monitor plants. The specific challenge is to track events associated with biological processes of plants, such as growth and health. Timely, localized identification of a growth stage, or of a health condition at a particular stage, is critical for improving yield. Given the wide variety of crops, their differing growth patterns, and the changes in physical appearance caused by aging or by external factors such as disease or nutrient deficiency, it is non-trivial to identify and flag only the changes in a crop's appearance over its life cycle. This ability is nonetheless essential, for example, to tag and forward only significant events from image acquisition systems at the farm to the cloud, instead of periodically forwarding redundant images.
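As a minimal sketch of the event-filtering idea described above, the snippet below compares consecutive frames with a mean absolute pixel difference and forwards a frame only when the change exceeds a threshold. The function name, the threshold value, and the synthetic frames are illustrative assumptions, not part of the system described in this paper; a deployed system would use a learned notion of change rather than raw pixel differencing.

```python
import numpy as np


def is_significant_change(prev_frame: np.ndarray, curr_frame: np.ndarray,
                          threshold: float = 0.1) -> bool:
    """Return True when the mean absolute per-pixel difference between two
    frames (normalised to [0, 1]) exceeds `threshold`.

    `threshold` is a hypothetical tuning parameter, not a value from the paper.
    """
    diff = np.abs(curr_frame.astype(np.float32) -
                  prev_frame.astype(np.float32)) / 255.0
    return float(diff.mean()) > threshold


# Synthetic grayscale frames standing in for camera images.
rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
same = base.copy()                 # unchanged scene
changed = base.copy()
changed[16:48, 16:48] = 255        # simulate a large visual change

print(is_significant_change(base, same))     # unchanged scene: frame dropped
print(is_significant_change(base, changed))  # changed scene: frame forwarded
```

Only frames for which the predicate returns True would be transmitted, reducing redundant uploads from the edge device.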
A major challenge in image classification is the human intervention required to label image datasets for supervised learning. Deep Convolutional Neural Networks (CNNs) have proven highly accurate at feature extraction; they nevertheless require large amounts of labeled data to train classification models.
For IoT deployments involving camera sensors, or equivalent participatory sensing scenarios, configuring a computer vision solution to meet the specific sensing requirements of the monitored context is itself a challenge. Moreover, real-time interpretation of an image submitted to an IoT platform must often be performed at the edge, since connectivity is not readily available, especially in rural areas of developing countries.