Pollinator Identification App

[Image: pollinator identification app]
The project is being undertaken at the University of New England (UNE), Armidale, Australia, in collaboration with the Earthwatch Institute. The University of New England team is made up of Greg Falzon and Sharifah Aldossary from the School of Science & Technology, and Romina Rader and Tobias Smith from the School of Environmental and Rural Science. There is also a team of UNE computer science students working on the development of the app ‘packaging’ itself (structure, appearance, pathways, etc.). The primary aim of the project is to provide an app for farmers and others involved in growing horticultural crops that both instantly identifies pollinator insects visiting their crops and compiles that data for researchers, to better understand the relationships between crop species and different pollinator insects. Further down the track we hope to make a version of the app targeted more widely at anyone interested in identifying pollinator insects on any flowers.

There are multiple phases to the development of our pollinator recognition app. We are currently in phase 1, the proof-of-concept phase. During this phase we have demonstrated that we can use machine learning to automatically recognise different groups of pollinators from images. In this phase only coarse recognition is being used: bees, flies, beetles, butterflies, etc. This phase has involved using image libraries of hundreds of images of each group as a basis for adapting algorithms originally developed to recognise other kinds of objects so that they work for pollinator insects. We are only a couple of months into the project, and we have demonstrated that we can automatically categorise pollinators in images into these coarse groups with an average accuracy of about 70–80%. Obviously we are aiming for higher accuracy than this (ultimately in the high 90s), but these are the first steps. Once we fine-tune the algorithms to get the accuracy a bit higher, we will move into phase 2.

Phase 2 broadly involves two main parts: finer taxonomic resolution and deep learning. We will move beyond coarse groupings and start training the algorithms we perfected in phase 1 to recognise key pollinator species and groups of interest. To begin with we will focus only on species and groups that are known or suspected pollinators of crops, for example Apis mellifera, Tetragonula bees, Syrphid flies (as a group), Calliphorid flies (as a group), etc., but in the future we hope to add more groups and species. The deep learning part involves enabling the algorithms to more readily identify the part of the image that holds the pollinator insect. In phase 1 we are using images that have been cropped so that the pollinator insect is quite large relative to the size of the image, but we ultimately want this to work for photos taken from further away from the subject. This part will involve using tens of thousands of reference images of each species or group, from all possible angles. We envisage the resulting dataset being a rich source of broad data to help formulate ideas for more targeted research questions in the future.

Project page: https://www.facebook.com/PollinatorRecognition/