FACIAL RECOGNITION USING DRONE TECHNOLOGY (C BYREGOWDA INSTITUTE OF TECHNOLOGY)

SurajGb · 21 slides · Mar 10, 2025

About This Presentation

Drone technology


Slide Content

PHASE 2 PROJECT PRESENTATION
Department of Artificial Intelligence and Machine Learning
C BYREGOWDA INSTITUTE OF TECHNOLOGY
PROJECT: "FACIAL RECOGNITION USING DRONE TECHNOLOGY"
PRESENTATION BY:
ABHISHEK 1CK21AI001
MOHAMMED YOUNUS 1CK21AI021
SURAJ GB 1CK21AI035
YASHAS D 1CK21AI042
UNDER THE GUIDANCE OF: Prof. Narayan Swamy (PhD), Associate Professor, Dept. of AI & ML, CBIT

AGENDA
1. Software Requirements and Specification
2. System Design
3. Implementation

SOFTWARE REQUIREMENTS AND SPECIFICATION: INTRODUCTION
This section defines the software requirements and specifications for a face recognition system integrated with drone technology. The system aims to enable drones to identify and recognize human faces in real time for applications such as surveillance, search and rescue, and security monitoring. Its core capabilities are to:
- Capture real-time video using drone-mounted cameras.
- Detect and recognize human faces using AI-based facial recognition algorithms.

FUNCTIONAL REQUIREMENTS
1. Face Detection and Recognition:
   - Detect human faces in live video feeds.
   - Match detected faces with pre-registered profiles in the database.
2. Real-Time Processing:
   - Perform face recognition within a specified latency threshold.
3. Data Transmission:
   - Send recognition results (e.g., IDs or alerts) to a ground station (a minimal sketch follows this list).
4. Drone Navigation:
   - Track the drone from the ground station.
   - Navigate predefined flight paths.
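As a rough illustration of the data transmission requirement, the sketch below sends one recognition result to the ground station as a JSON message over a plain TCP socket. The host address, port, and message fields are placeholder assumptions for the example, not values taken from the slides; a real deployment would also encrypt this channel per the security requirements.

import json
import socket

GROUND_STATION_HOST = "192.168.1.10"   # assumed ground-station address (placeholder)
GROUND_STATION_PORT = 5005             # assumed port (placeholder)

def send_recognition_result(person_id: str, confidence: float) -> None:
    """Send one recognition event (ID + confidence) as a JSON line to the ground station."""
    payload = json.dumps({"person_id": person_id, "confidence": confidence}) + "\n"
    with socket.create_connection((GROUND_STATION_HOST, GROUND_STATION_PORT), timeout=2) as sock:
        sock.sendall(payload.encode("utf-8"))

# Example: report a match for a pre-registered profile
# send_recognition_result("1CK21AI035", 0.93)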

NON-FUNCTIONAL REQUIREMENTS
1. Performance: The system must process video frames at a minimum of 15 FPS (frames per second); recognition accuracy should be at least 90% under optimal conditions (a measurement sketch follows this list).
2. Scalability: The system should support multiple drones operating simultaneously.
3. Security: Data transmitted between the drone and the server must be encrypted, and the system should restrict access to authorized users.
4. Reliability: The system should function effectively in varying environmental conditions, such as low light or windy weather.
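Since the performance target is stated as a minimum of 15 FPS, one simple check is to time the pipeline over a fixed number of frames. The sketch below is a minimal measurement loop assuming an OpenCV video source; the process_frame hook is a hypothetical stand-in for the detection and recognition step.

import time
import cv2

def measure_fps(video_source=0, warmup=10, samples=100):
    """Read a fixed number of frames and return the achieved frames per second."""
    cap = cv2.VideoCapture(video_source)
    # Warm-up reads so camera initialisation does not skew the measurement
    for _ in range(warmup):
        cap.read()
    start = time.perf_counter()
    frames = 0
    while frames < samples:
        ret, frame = cap.read()
        if not ret:
            break
        # process_frame(frame)  # hypothetical detection/recognition step would run here
        frames += 1
    cap.release()
    elapsed = time.perf_counter() - start
    return frames / elapsed if elapsed > 0 else 0.0

fps = measure_fps()
print(f"Measured throughput: {fps:.1f} FPS (target: >= 15 FPS)")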

HARDWARE REQUIREMENTS
1. Drone Hardware:
   - High-definition camera (minimum 1080p resolution).
   - 3-axis gimbal for camera stabilization.
   - GPS module for navigation.
   - Obstacle detection sensors (e.g., LIDAR or ultrasonic sensors).
   - Battery capacity: minimum 30 minutes of flight time.
2. Computing Hardware:
   - Onboard: NVIDIA Jetson Nano/AGX Xavier or Raspberry Pi 4.
   - Ground Station: high-performance laptop or server with GPU support (e.g., NVIDIA RTX 3080).

SOFTWARE REQUIREMENTS
1. Onboard Software:
   - Operating System: Linux-based OS (e.g., Ubuntu).
   - Libraries/Frameworks: OpenCV for image processing; TensorFlow or PyTorch for AI model deployment; drone SDK (e.g., DJI SDK, PX4).
2. Ground Station Software:
   - Operating System: Windows 10/11 or Linux.
   - Applications: database system (e.g., MySQL, MongoDB); web server (e.g., Node.js, Flask) (a minimal endpoint sketch follows this list).
3. Development Tools:
   - Programming Languages: Python, C++, or Java.
   - IDEs: Visual Studio Code, PyCharm.
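To make the ground-station software concrete, here is a minimal sketch of a Flask endpoint that accepts and lists recognition results. The route name, JSON fields, and in-memory storage are assumptions for illustration only; the slides name Flask as one option alongside Node.js, with MySQL or MongoDB for persistent storage.

from flask import Flask, jsonify, request

app = Flask(__name__)
results = []  # in-memory store for the sketch; a real system would use MySQL/MongoDB

@app.route("/recognitions", methods=["POST"])
def receive_recognition():
    """Accept one recognition event posted as JSON by the drone/onboard unit."""
    data = request.get_json(force=True)
    results.append(data)
    return jsonify({"status": "stored", "count": len(results)}), 201

@app.route("/recognitions", methods=["GET"])
def list_recognitions():
    """Return all stored recognition events for the monitoring dashboard."""
    return jsonify(results)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)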

SOFTWARE ENVIRONMENTS
1. Development Environment:
   - IDEs: PyCharm/VS Code for AI model and algorithm development.
   - Docker: for containerized deployment.
2. Deployment Environment:
   - Edge Computing: AI models deployed on NVIDIA Jetson devices.
   - Cloud Computing: AWS or Azure for data storage and additional processing.
3. Testing Frameworks:
   - Unit testing: Pytest for Python (an example test follows this list).
   - Performance testing: JMeter.
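As an example of the Pytest-based unit testing mentioned above, the sketch below checks that a detection helper finds no faces in a blank frame. The detect_faces wrapper is a hypothetical helper around the dlib detector used later in the deck; run it with the pytest command.

import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()

def detect_faces(gray_image):
    """Assumed helper: return dlib rectangles for faces in a grayscale image."""
    return detector(gray_image)

def test_detect_faces_returns_empty_on_blank_frame():
    # An all-black 640x480 frame should contain no detectable faces
    blank = np.zeros((480, 640), dtype=np.uint8)
    assert len(detect_faces(blank)) == 0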

SYSTEM DESIGN: INTRODUCTION
Scope:
- Surveillance and security operations.
- Real-time face recognition via drone-mounted cameras.
- Communication of recognized face data to a ground station or server for monitoring and action.
Objectives:
- To achieve seamless integration of face recognition algorithms with drone technology.
- To provide a robust and scalable system capable of handling real-time processing.

SYSTEM ARCHITECTURE
Drone Unit:
- Equipped with a high-definition camera and onboard processing hardware (e.g., NVIDIA Jetson Nano).
- Performs face detection and initial recognition processing.
- Sends processed data to the ground station.
Ground Station:
- Receives data from the drone and performs further processing if needed (a receiver sketch follows below).
- Displays recognition results and system status via a user-friendly interface.
- Allows operators to manage drone commands and monitor flight paths.
Cloud/Database Server:
- Stores facial profiles and recognition logs.
- Manages centralized data synchronization for multiple drones.
- Provides additional computational power for advanced recognition tasks if required.
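To sketch how the ground station might receive data from the drone unit, the snippet below runs a small TCP listener that prints each incoming JSON recognition event. It is a minimal counterpart to the transmission sketch shown earlier; the port and message format are the same assumed placeholders, and a real deployment would add encryption and authentication per the security requirements.

import json
import socketserver

class RecognitionHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Each connection streams one JSON object per line from the drone unit
        for line in self.rfile:
            event = json.loads(line.decode("utf-8"))
            print(f"[ground station] {event}")  # a real system would log and display this

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 5005), RecognitionHandler) as server:
        server.serve_forever()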

DATAFLOW DIAGRAM

+------------------+
|      Camera      |
+------------------+
         |
         v
+------------------+
|  Onboard System  |
| (Face Detection  |
|  & Recognition)  |
+------------------+
         |
         v
+------------------+
|  Ground Station  |
|  (Processing &   |
|   Monitoring)    |
+------------------+
         |
         v
+------------------+
|  Cloud Database  |
|   (Storage &     |
| Synchronization) |
+------------------+

1. Capture Input: The drone captures a live video feed using its onboard camera.
2. Process Data: The onboard system detects faces and performs initial recognition using AI models.
3. Transmit Data: Processed recognition data is sent to the ground station for further analysis or validation.
4. Store and Display Results: The ground station stores data locally and displays it on the monitoring dashboard. Optionally, data is synchronized with the cloud server for long-term storage and analytics (a storage sketch follows these steps).
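For the store-and-display step, the sketch below logs recognition events into a local table. The slides name MySQL/MongoDB for the database layer; sqlite3 from the Python standard library is used here only to keep the example self-contained, and the schema and field names are assumptions.

import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect("recognitions.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS recognitions (
           id INTEGER PRIMARY KEY AUTOINCREMENT,
           person_id TEXT NOT NULL,
           confidence REAL,
           seen_at TEXT NOT NULL
       )"""
)

def store_result(person_id: str, confidence: float) -> None:
    """Insert one recognition event with a UTC timestamp."""
    conn.execute(
        "INSERT INTO recognitions (person_id, confidence, seen_at) VALUES (?, ?, ?)",
        (person_id, confidence, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

# Example: log a match for a pre-registered profile
store_result("1CK21AI001", 0.91)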

USE CASE DIAGRAM

+-----------------------+
|       Operator        |
+-----------------------+
           |
           v
+-----------------------+
|     Control Drone     |
+-----------------------+
           |
           v
+-----------------------+
|   Process Face Data   |
+-----------------------+
           |
           v
+-----------------------+
| Display Results/Data  |
+-----------------------+

IMPLEMENTATION: INTRODUCTION
This section outlines the implementation details of the face recognition system using drone technology. It covers the platform, programming languages, modules, algorithms, and sample code used to bring the design to life.

PLATFORM SELECTION
The following platforms are chosen based on system requirements:
- Drone Hardware: DJI Mavic or custom-built drones equipped with NVIDIA Jetson Nano for onboard AI processing.
- Ground Station: a desktop or laptop with an interface built using Python or web-based tools.
- Cloud Infrastructure: AWS or Google Cloud for storage and computational tasks.

LANGUAGE SELECTION
The project uses the following languages:
- Python: for implementing AI and machine learning models.
- C++: for low-level drone control and communication.
- HTML, CSS, JavaScript: for creating the ground station's web-based interface.

MODULE IMPLEMENTATION
The implementation uses the following modules:
- OpenCV: for image processing and face detection.
- Dlib: for face recognition.
- Flask/Django: for building the web interface.
- PyTorch: for training and deploying AI models.
- DroneKit: for controlling the drone (a navigation sketch follows below).
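Since DroneKit is listed for drone control, the sketch below shows how a predefined waypoint flight could be commanded with the DroneKit API. The connection string points at a local SITL simulator endpoint and the waypoint coordinates are placeholders, not project values.

import time
from dronekit import connect, VehicleMode, LocationGlobalRelative

# Connect to the vehicle (here a local SITL simulator endpoint, assumed for the sketch)
vehicle = connect("127.0.0.1:14550", wait_ready=True)

def arm_and_takeoff(target_altitude):
    """Arm the drone in GUIDED mode and climb to the target altitude (metres)."""
    vehicle.mode = VehicleMode("GUIDED")
    vehicle.armed = True
    while not vehicle.armed:
        time.sleep(1)
    vehicle.simple_takeoff(target_altitude)
    # Wait until the drone is close to the requested altitude
    while vehicle.location.global_relative_frame.alt < target_altitude * 0.95:
        time.sleep(1)

arm_and_takeoff(10)
# Fly to one placeholder waypoint of a predefined path (latitude, longitude, altitude)
vehicle.simple_goto(LocationGlobalRelative(13.0358, 77.5970, 10))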

ALGORITHMS
- Face Detection: uses Haar Cascades or a pre-trained YOLO model.
- Face Recognition: employs a deep learning model based on convolutional neural networks (CNNs); uses a pre-trained ResNet model for feature extraction and comparison (see the sketch after this list).
- Data Transmission: implements a socket-based communication protocol for real-time data exchange between the drone and the ground station.
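To illustrate the recognition step described above, the sketch below extracts 128-dimensional face descriptors with dlib's ResNet-based model and compares them by Euclidean distance. The model file names are dlib's published pre-trained models; the 0.6 matching threshold is a commonly used default assumed here rather than a value from the slides.

import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def face_descriptor(image, face_rect):
    """Return the 128-D ResNet embedding for one detected face."""
    shape = shape_predictor(image, face_rect)
    return np.array(face_encoder.compute_face_descriptor(image, shape))

def is_match(known_descriptor, candidate_descriptor, threshold=0.6):
    """Treat two faces as the same person if their embeddings are close in Euclidean distance."""
    return np.linalg.norm(known_descriptor - candidate_descriptor) < threshold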

SAMPLE CODE

import cv2
import dlib

# Load pre-trained face detection and recognition models
face_detector = dlib.get_frontal_face_detector()
face_recognizer = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

# Capture video from the drone camera feed
video_capture = cv2.VideoCapture(0)

while True:
    ret, frame = video_capture.read()
    if not ret:
        break

    # Convert to grayscale for detection
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector(gray)

    for face in faces:
        # Draw a rectangle around the face
        x, y, w, h = (face.left(), face.top(), face.width(), face.height())
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    # Display the resulting frame
    cv2.imshow('Drone Face Recognition', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

video_capture.release()
cv2.destroyAllWindows()

THANK YOU