Estudo Geral
Master thesis, 2023
Data sources: Estudo Geral

Detection and classification of obstacles to the navigation of autonomous vehicles in forest environments (original title: Deteção e classificação de obstáculos à navegação de veículos autónomos em ambientes florestais)

Author: Monteiro, José Pedro Peixe


Abstract

Forest fires recur every year in Portugal, with devastating effects on the population and the economy. One of the measures for their prevention is the reduction of combustible vegetal material in certain areas of the forest, a strenuous task when carried out with conventional methods. This work, within the scope of projects such as F4F and E-Forest, currently under development by ADAI and the Field Tech Laboratory, aims to enable the detection and classification of obstacles in the navigation path of an autonomous machine operating in a forest environment, whose primary job is to remove the excess combustible vegetation. Using a computer vision approach based on Convolutional Neural Networks (CNNs), together with other tools, obstacles can be detected and classified in the visual information retrieved from the surrounding environment by the different sensors installed. This information is divided into major classes of obstacles, such as tree trunks, living beings, and combustible vegetation. Correct labelling of these classes is essential to supply reliable data that can then be used by the machine's movement and navigation controllers. The YOLOv5 CNN model was selected and trained on datasets aimed at the detection of tree trunks, combustible vegetation, and water lines. Person detection using colour and thermal imaging was also tested. The most adequate dataset for trunk detection was "Trees_dataset", which obtained a mAP50 of 94% when the model was run on the test images. The detection of combustible vegetation and water lines attained reasonable, although improvable, results, with mAP50 values on the test images of 68% and 79%, respectively. Person detection with colour and thermal imaging had positive results, with the model achieving confidence values of 40 to 50% even when the person in the image was at a considerable distance.
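The mAP50 figures quoted above count a detection as correct when its overlap with a ground-truth box, measured as Intersection over Union (IoU), reaches 0.5. A minimal sketch of that overlap measure, with purely illustrative box coordinates not taken from the thesis:

```python
# Sketch of Intersection over Union (IoU), the overlap measure behind
# the mAP50 metric: a predicted box is a true positive at IoU >= 0.5.
# Box coordinates below are hypothetical, for illustration only.

def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# A predicted trunk box vs. a ground-truth box (illustrative values):
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # prints 0.3333333333333333
```

At IoU 1/3 this hypothetical prediction would be rejected under the 0.5 threshold used by mAP50.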
The ROS-integrated Adaptive Clustering tool was also tested for obstacle detection with the LiDAR and produced the expected results, detecting the defined obstacles.
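The LiDAR pipeline above groups nearby returns into obstacle clusters. A toy sketch of the underlying Euclidean clustering idea follows; note that the real Adaptive Clustering package operates on 3-D point clouds and adapts its distance threshold with range, whereas the 2-D points and fixed threshold here are illustrative assumptions:

```python
# Sketch of Euclidean clustering, the basic idea behind LiDAR obstacle
# detection tools such as the ROS Adaptive Clustering package. The real
# package works on 3-D point clouds and adapts the threshold with range;
# the 2-D points and fixed threshold below are illustrative only.
import math

def euclidean_clusters(points, threshold):
    """Group points reachable from each other by hops below threshold."""
    clusters = []
    unvisited = set(range(len(points)))
    while unvisited:
        # Grow one cluster from an arbitrary unvisited seed point.
        seed = unvisited.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            i = frontier.pop()
            near = [j for j in unvisited
                    if math.dist(points[i], points[j]) < threshold]
            for j in near:
                unvisited.remove(j)
            cluster.extend(near)
            frontier.extend(near)
        clusters.append([points[i] for i in sorted(cluster)])
    return clusters

# Two tight groups of returns, e.g. two tree trunks (illustrative data):
pts = [(0, 0), (0.3, 0), (5, 5), (5.2, 5.1)]
print(len(euclidean_clusters(pts, 1.0)))  # prints 2
```

Each resulting cluster would then be treated as one candidate obstacle by the navigation layer.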


Master's dissertation in Mechanical Engineering presented to the Faculty of Sciences and Technology

Country
Portugal
Related Organizations
Keywords

object detection, computer vision, forest environment, autonomous machine, Convolutional Neural Network (CNN)

  • BIP! impact indicators:
    selected citations (derived from selected sources; an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network, diachronically): 0
    popularity (the "current" impact/attention, or "hype", of an article in the research community at large, based on the underlying citation network): Average
    influence (the overall/total impact of an article in the research community at large, based on the underlying citation network, diachronically): Average
    impulse (the initial momentum of an article directly after its publication, based on the underlying citation network): Average