Human activity identification and tracking with autonomous UAV

Author: BHUYAN, MANASJYOTI
Academic year: 2018/2019

Abstract

While human activity identification and tracking have made significant progress over the past few years, most current methods achieve only partial success in real-time applications that rely on moving sensors. In particular, this thesis highlights the need for a visual tracker that gives a drone (UAV) autonomous navigation capabilities, mainly following a specific target. To accomplish this goal, an activity detection framework has been developed. This thesis presents a real, modern instance of collaborative robotics and details the use of machine vision techniques for the identification and tracking of human activities from Unmanned Aerial Vehicles. The methodology also covers the execution of the control commands needed to switch from detection to autonomously following the identified target. Our proposed approach is evaluated on video recorded by drones. The results obtained are adequate to accurately follow a target in real time, despite issues such as lighting changes, speed, and occlusions. We evaluated several models: VGG-Origin, VGG-16, PoseNet, RCNN 'Faster_rcnn_inception_v2_coco_2018_01_28', SSD 'ssd_mobilenet_v2_quantized_300x300_coco_2019_01_0', Mobilenet_thin, Mobilenetv2_large, and Mobilenetv2_small. For VGG-Origin and VGG-16 we used video; for the remaining models we used image datasets. The final selected model is PoseNet, which runs on the Google Coral accelerator. The accuracy achieved is around 83%. PoseNet can estimate either a single pose or multiple poses: one version of the algorithm detects only a single person in an image or video, while another version detects multiple persons.
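
Since the selected PoseNet model runs as a quantized TFLite network on the Google Coral, the sketch below illustrates single-pose decoding for a PoseNet-style model. This is a minimal sketch, not the thesis code: the model file name, the output stride, and the output tensor ordering (17 keypoint heatmaps plus 34 offset channels) are assumptions based on the publicly available PoseNet MobileNet exports and may differ from the exact model used.

import numpy as np
from tflite_runtime.interpreter import Interpreter

OUTPUT_STRIDE = 32  # assumption: depends on the specific PoseNet export

def decode_single_pose(heatmaps, offsets):
    # For each of the 17 COCO keypoints, take the heatmap cell with the
    # highest score, then refine it with the matching offset vector.
    num_kp = heatmaps.shape[-1]
    keypoints = []
    for k in range(num_kp):
        hm = heatmaps[0, :, :, k]
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        # Assumed layout: offset channels 0..16 hold dy, channels 17..33 hold dx.
        py = y * OUTPUT_STRIDE + offsets[0, y, x, k]
        px = x * OUTPUT_STRIDE + offsets[0, y, x, k + num_kp]
        keypoints.append((px, py, float(hm[y, x])))
    return keypoints

interpreter = Interpreter(model_path="posenet_mobilenet_quant.tflite")  # hypothetical file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
outs = interpreter.get_output_details()
heatmaps = interpreter.get_tensor(outs[0]["index"])  # assumed shape [1, H, W, 17]
offsets = interpreter.get_tensor(outs[1]["index"])   # assumed shape [1, H, W, 34]
print(decode_single_pose(heatmaps, offsets))

On a Coral device the same model would be compiled for the Edge TPU and loaded through the Edge TPU delegate; the decoding logic itself is unchanged.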
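
The abstract also mentions control commands that switch the drone from detection to autonomously following the target. As an illustration of how such a handoff is commonly realized, the sketch below converts the tracked person's offset from the image centre into yaw-rate and forward-speed commands with a simple proportional controller. The frame size, gains, and desired box height are illustrative assumptions, not values from the thesis.

FRAME_W, FRAME_H = 640, 480   # assumed camera resolution
K_YAW, K_FWD = 0.004, 1.5     # illustrative proportional gains
TARGET_BOX_H = 0.35           # desired person height as a fraction of the frame

def follow_commands(box):
    # box = (x_min, y_min, x_max, y_max) in pixels for the tracked person.
    cx = (box[0] + box[2]) / 2.0
    box_h = (box[3] - box[1]) / float(FRAME_H)
    yaw_rate = K_YAW * (cx - FRAME_W / 2.0)   # turn toward the target
    forward = K_FWD * (TARGET_BOX_H - box_h)  # keep a roughly constant distance
    return yaw_rate, forward

# Example: person detected slightly right of centre and far away (small box)
print(follow_commands((400, 120, 480, 300)))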
Advisor: MARZONA, PIERANGELO
School: ING - Scuola di Ingegneria Industriale e dell'Informazione
Date: 6 June 2020
Thesis type: Master's degree thesis (Tesi di laurea Magistrale)
Attached files

File: Thesis_Atuonomous Drone v1.4 [6784].pdf
Description: Thesis Text
Size: 4.22 MB
Format: Adobe PDF
Access: accessible on the internet to everyone

Documents in POLITesi are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10589/154407