Project 3: Autonomous Navigator
Overview
Build an autonomous navigation system that combines person following, line following, and obstacle avoidance. The robot can autonomously track and follow people, navigate along lines, and respond to ROS2 service commands, all integrated with the dashboard from Project 2.
Learning Objectives
By completing this project, you will:
- Implement YOLO-based person detection and tracking
- Create PID controllers for smooth following behavior
- Integrate line detection for path following
- Use ROS2 services for mode switching
- Build a complete autonomous robot system
Prerequisites
| Lab | Topic | Skills Used |
|---|---|---|
| Project 2 | Teleop Dashboard | All prior skills |
| ROS2 Lab 5 | Person Tracking | YOLO, tracking algorithms |
| ROS2 Lab 6 | Line Following | OpenCV, PID control |
| ROS2 Lab 7 | Services | ROS2 service architecture |
Hardware Requirements
- Mini Pupper v2 with camera
- Stable WiFi connection
- Printed line course (optional)
Part 1: System Architecture
┌─────────────────────────────────────────────────────────────────────┐
│ Autonomous Navigator System │
├─────────────────────────────────────────────────────────────────────┤
│ │
│ ┌─────────────────────────────────────────────────────────────┐ │
│ │ Mode Controller │ │
│ │ [MANUAL] ←→ [PERSON_FOLLOW] ←→ [LINE_FOLLOW] ←→ [IDLE] │ │
│ └──────────────────────────┬──────────────────────────────────┘ │
│ │ │
│ ┌───────────────────┼───────────────────┐ │
│ ▼ ▼ ▼ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │
│ │ Person │ │ Line │ │ Obstacle │ │
│ │ Detector │ │ Detector │ │ Detector │ │
│ │ (YOLO) │ │ (OpenCV) │ │ (Depth) │ │
│ └──────┬───────┘ └──────┬───────┘ └──────┬───────┘ │
│ │ │ │ │
│ └───────────────────┼───────────────────┘ │
│ ▼ │
│ ┌──────────────┐ │
│ │ PID │ │
│ │ Controller │ │
│ └──────┬───────┘ │
│ │ │
│ ▼ │
│ ┌──────────────┐ │
│ │ Motion │ │
│ │ Controller │───► /cmd_vel │
│ └──────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────┘
Part 2: Project Setup
Step 1: Create Project Structure
mkdir -p ~/autonomous_navigator/{src,models,config,launch}
cd ~/autonomous_navigator
Step 2: Download YOLO Model
cd models
pip3 install ultralytics
python3 -c "from ultralytics import YOLO; YOLO('yolov8n.pt')"
Part 3: Person Detection Module
Create src/person_detector.py:
#!/usr/bin/env python3
"""
YOLO-based person detection
Based on ROS2 Lab 5
"""
import cv2
import numpy as np
from dataclasses import dataclass
from typing import List, Tuple, Optional
from ultralytics import YOLO
@dataclass
class Detection:
"""Detected person data"""
bbox: Tuple[int, int, int, int] # x1, y1, x2, y2
confidence: float
center: Tuple[int, int]
area: int
track_id: Optional[int] = None
class PersonDetector:
"""YOLO-based person detector with tracking"""
def __init__(self, model_path: str = "yolov8n.pt",
confidence: float = 0.5):
self.model = YOLO(model_path)
self.confidence = confidence
self.person_class_id = 0 # COCO person class
def detect(self, image: np.ndarray) -> List[Detection]:
"""Detect persons in image"""
results = self.model(image, verbose=False)
detections = []
for result in results:
for box in result.boxes:
class_id = int(box.cls[0])
conf = float(box.conf[0])
if class_id != self.person_class_id:
continue
if conf < self.confidence:
continue
x1, y1, x2, y2 = map(int, box.xyxy[0])
center = ((x1 + x2) // 2, (y1 + y2) // 2)
area = (x2 - x1) * (y2 - y1)
detections.append(Detection(
bbox=(x1, y1, x2, y2),
confidence=conf,
center=center,
area=area
))
return detections
def detect_with_tracking(self, image: np.ndarray) -> List[Detection]:
"""Detect with persistent tracking IDs"""
results = self.model.track(image, persist=True, verbose=False)
detections = []
for result in results:
for box in result.boxes:
class_id = int(box.cls[0])
if class_id != self.person_class_id:
continue
conf = float(box.conf[0])
if conf < self.confidence:
continue
x1, y1, x2, y2 = map(int, box.xyxy[0])
center = ((x1 + x2) // 2, (y1 + y2) // 2)
area = (x2 - x1) * (y2 - y1)
track_id = None
if hasattr(box, 'id') and box.id is not None:
track_id = int(box.id[0])
detections.append(Detection(
bbox=(x1, y1, x2, y2),
confidence=conf,
center=center,
area=area,
track_id=track_id
))
return detections
def draw_detections(self, image: np.ndarray,
detections: List[Detection],
target: Optional[Detection] = None) -> np.ndarray:
"""Draw bounding boxes on image"""
for det in detections:
color = (0, 255, 0) if det == target else (255, 0, 0)
x1, y1, x2, y2 = det.bbox
cv2.rectangle(image, (x1, y1), (x2, y2), color, 2)
label = f"ID:{det.track_id} {det.confidence:.2f}"
cv2.putText(image, label, (x1, y1 - 10),
cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)
if det == target:
cv2.circle(image, det.center, 10, (0, 255, 255), -1)
return image
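To exercise the detector before wiring it into ROS2, you can run it against a webcam. A minimal sketch (assumes a camera at index 0; press q to quit):

# test_person_detector.py -- standalone webcam test (hypothetical helper)
import cv2
from person_detector import PersonDetector

detector = PersonDetector(confidence=0.5)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    detections = detector.detect_with_tracking(frame)
    # Follow the same "largest box wins" rule the controller uses
    target = max(detections, key=lambda d: d.area) if detections else None
    frame = detector.draw_detections(frame, detections, target)
    cv2.imshow("person_detector test", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()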
Part 4: Line Detection Module
Create src/line_detector.py:
#!/usr/bin/env python3
"""
Line detection for path following
Based on ROS2 Lab 6
"""
import cv2
import numpy as np
from dataclasses import dataclass
from typing import Optional, Tuple
@dataclass
class LineInfo:
"""Detected line information"""
center_x: int # Line center x position
offset: float # Offset from image center (-1 to 1)
angle: float # Line angle in degrees
detected: bool # Whether line was detected
confidence: float # Detection confidence
class LineDetector:
"""Detect and track lines for following"""
def __init__(self,
lower_color: Tuple[int, int, int] = (0, 0, 0),
upper_color: Tuple[int, int, int] = (180, 255, 80),
roi_top: float = 0.6):
"""
Args:
lower_color: Lower HSV bound for line color
upper_color: Upper HSV bound for line color
roi_top: Top of ROI as fraction of image height
"""
self.lower_color = np.array(lower_color)
self.upper_color = np.array(upper_color)
self.roi_top = roi_top
def detect(self, image: np.ndarray) -> LineInfo:
"""Detect line in image"""
height, width = image.shape[:2]
# Region of interest (bottom portion of image)
roi_y = int(height * self.roi_top)
roi = image[roi_y:, :]
# Convert to HSV
hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
# Threshold for line color
mask = cv2.inRange(hsv, self.lower_color, self.upper_color)
# Clean up mask
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# Find contours
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
cv2.CHAIN_APPROX_SIMPLE)
if not contours:
return LineInfo(
center_x=width // 2,
offset=0.0,
angle=0.0,
detected=False,
confidence=0.0
)
# Find largest contour
largest = max(contours, key=cv2.contourArea)
area = cv2.contourArea(largest)
# Minimum area threshold
if area < 500:
return LineInfo(
center_x=width // 2,
offset=0.0,
angle=0.0,
detected=False,
confidence=0.0
)
# Get centroid
M = cv2.moments(largest)
if M['m00'] == 0:
return LineInfo(
center_x=width // 2,
offset=0.0,
angle=0.0,
detected=False,
confidence=0.0
)
cx = int(M['m10'] / M['m00'])
# Calculate offset from center
offset = (cx - width / 2) / (width / 2) # -1 to 1
# Fit line to get angle
        [vx, vy, x, y] = cv2.fitLine(largest, cv2.DIST_L2, 0, 0.01, 0.01)
        # fitLine returns 1-element arrays; index before converting to a scalar
        angle = float(np.degrees(np.arctan2(vy[0], vx[0])))
# Confidence based on contour area
max_area = (width * height * (1 - self.roi_top)) * 0.3
confidence = min(1.0, area / max_area)
return LineInfo(
center_x=cx,
offset=offset,
angle=float(angle),
detected=True,
confidence=confidence
)
def draw_detection(self, image: np.ndarray,
line_info: LineInfo) -> np.ndarray:
"""Draw line detection visualization"""
height, width = image.shape[:2]
roi_y = int(height * self.roi_top)
# Draw ROI rectangle
cv2.rectangle(image, (0, roi_y), (width, height), (0, 255, 255), 2)
if line_info.detected:
# Draw center line
cv2.line(image,
(line_info.center_x, roi_y),
(line_info.center_x, height),
(0, 255, 0), 3)
# Draw target line (center)
cv2.line(image,
(width // 2, roi_y),
(width // 2, height),
(255, 0, 0), 1)
# Draw offset indicator
cv2.putText(image, f"Offset: {line_info.offset:.2f}",
(10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
cv2.putText(image, f"Angle: {line_info.angle:.1f}°",
(10, 60), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
else:
cv2.putText(image, "NO LINE DETECTED",
(10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
return image
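Getting the HSV bounds right is most of the work in line detection; the defaults above target a dark line on a lighter floor. A tuning sketch that shows the binary mask next to the annotated frame (assumes a webcam at index 0; adjust lower_color/upper_color until only the line is white in the mask):

# tune_line_hsv.py -- HSV threshold tuning helper (hypothetical helper)
import cv2
from line_detector import LineDetector

detector = LineDetector()  # default bounds target a dark (black) line
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, detector.lower_color, detector.upper_color)
    info = detector.detect(frame)
    frame = detector.draw_detection(frame, info)
    cv2.imshow("frame", frame)
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()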
Part 5: PID Controller
Create src/pid_controller.py:
#!/usr/bin/env python3
"""
PID Controller for smooth robot control
"""
import time
from dataclasses import dataclass
from typing import Tuple, Optional
@dataclass
class ControlOutput:
"""Control output for robot movement"""
linear_x: float # Forward/backward speed
angular_z: float # Rotation speed
class PIDController:
"""Generic PID controller"""
def __init__(self, kp: float, ki: float, kd: float,
output_limits: Tuple[float, float] = (-1.0, 1.0)):
self.kp = kp
self.ki = ki
self.kd = kd
self.output_limits = output_limits
self.integral = 0.0
self.prev_error = 0.0
self.prev_time = None
def compute(self, error: float) -> float:
"""Compute PID output"""
current_time = time.time()
if self.prev_time is None:
dt = 0.1
else:
dt = current_time - self.prev_time
# Proportional
p = self.kp * error
# Integral with anti-windup
self.integral += error * dt
self.integral = max(-10, min(10, self.integral))
i = self.ki * self.integral
# Derivative
if dt > 0:
derivative = (error - self.prev_error) / dt
else:
derivative = 0
d = self.kd * derivative
# Compute output
output = p + i + d
output = max(self.output_limits[0], min(self.output_limits[1], output))
self.prev_error = error
self.prev_time = current_time
return output
def reset(self):
"""Reset controller state"""
self.integral = 0.0
self.prev_error = 0.0
self.prev_time = None
class FollowingController:
"""Controller for person/line following"""
def __init__(self, image_width: int = 640, image_height: int = 480):
self.image_width = image_width
self.image_height = image_height
# PID for angular (yaw) control
self.yaw_pid = PIDController(kp=0.8, ki=0.01, kd=0.1)
        # PID for linear (distance) control; gains are tiny because the error
        # is a bounding-box area difference measured in pixels^2
        self.distance_pid = PIDController(kp=0.00002, ki=0.0, kd=0.000005)
# Control limits
self.max_linear = 0.15 # m/s
self.max_angular = 0.5 # rad/s
# Target tracking
self.target_area = 40000 # Target bounding box area
self.area_tolerance = 0.3
# Lost target handling
self.target_lost_time = None
self.lost_timeout = 3.0
def compute_person_following(self, detection) -> ControlOutput:
"""Compute control for person following"""
if detection is None:
return self._handle_lost_target()
self.target_lost_time = None
# Yaw error: offset from center
center_x = detection.center[0]
yaw_error = (self.image_width / 2) - center_x
yaw_error_normalized = yaw_error / (self.image_width / 2)
# Distance error: based on bounding box area
area = detection.area
distance_error = self.target_area - area
# Compute control outputs
angular_z = self.yaw_pid.compute(yaw_error_normalized)
linear_x = self.distance_pid.compute(distance_error)
# Apply limits
angular_z = max(-self.max_angular, min(self.max_angular, angular_z))
linear_x = max(-self.max_linear, min(self.max_linear, linear_x))
# Don't move if within tolerance
if abs(distance_error) < self.target_area * self.area_tolerance:
linear_x = 0.0
return ControlOutput(linear_x=linear_x, angular_z=angular_z)
def compute_line_following(self, line_info) -> ControlOutput:
"""Compute control for line following"""
if not line_info.detected:
return self._handle_lost_target()
self.target_lost_time = None
# Use offset directly as error (-1 to 1)
angular_z = self.yaw_pid.compute(-line_info.offset)
# Constant forward speed when following line
linear_x = 0.08
# Slow down on sharp turns
if abs(line_info.offset) > 0.5:
linear_x *= 0.5
angular_z = max(-self.max_angular, min(self.max_angular, angular_z))
return ControlOutput(linear_x=linear_x, angular_z=angular_z)
def _handle_lost_target(self) -> ControlOutput:
"""Handle lost target with search behavior"""
if self.target_lost_time is None:
self.target_lost_time = time.time()
elapsed = time.time() - self.target_lost_time
if elapsed < self.lost_timeout:
# Rotate to search
return ControlOutput(linear_x=0.0, angular_z=0.3)
else:
# Stop after timeout
self.yaw_pid.reset()
self.distance_pid.reset()
return ControlOutput(linear_x=0.0, angular_z=0.0)
def select_target(self, detections: list):
"""Select best target from detections"""
if not detections:
return None
# Follow largest (closest) person
return max(detections, key=lambda d: d.area)
def reset(self):
"""Reset all controllers"""
self.yaw_pid.reset()
self.distance_pid.reset()
self.target_lost_time = None
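Before tuning gains on the robot, it helps to see how kp, ki, and kd interact offline. A step-response sketch against a crude first-order plant (the plant model below is an assumption for illustration, not the robot's dynamics):

# pid_step_test.py -- offline step-response check (hypothetical helper)
import time
from pid_controller import PIDController

pid = PIDController(kp=0.8, ki=0.01, kd=0.1)
setpoint, state = 1.0, 0.0
for step in range(50):
    error = setpoint - state
    output = pid.compute(error)
    state += output * 0.1   # crude first-order response to the control output
    time.sleep(0.02)        # give compute() a real dt between calls
    if step % 10 == 0:
        print(f"step={step:2d} state={state:.3f} error={error:.3f}")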
Part 6: Navigation Mode Controller
Create src/mode_controller.py:
#!/usr/bin/env python3
"""
Navigation Mode Controller
Manages switching between different autonomous modes
"""
from enum import Enum
from dataclasses import dataclass
from typing import Optional, Callable
class NavMode(Enum):
"""Navigation modes"""
IDLE = "idle"
MANUAL = "manual"
PERSON_FOLLOW = "person_follow"
LINE_FOLLOW = "line_follow"
@dataclass
class ModeConfig:
"""Configuration for a navigation mode"""
name: str
description: str
can_interrupt: bool = True
class ModeController:
"""Manage navigation mode transitions"""
MODE_CONFIGS = {
NavMode.IDLE: ModeConfig("Idle", "Robot is stopped"),
NavMode.MANUAL: ModeConfig("Manual", "Manual teleoperation"),
NavMode.PERSON_FOLLOW: ModeConfig("Person Follow", "Following detected person"),
NavMode.LINE_FOLLOW: ModeConfig("Line Follow", "Following detected line"),
}
def __init__(self):
self.current_mode = NavMode.IDLE
self.previous_mode = NavMode.IDLE
self.mode_callbacks: dict = {}
def register_callback(self, event: str, callback: Callable):
"""Register callback for mode events"""
if event not in self.mode_callbacks:
self.mode_callbacks[event] = []
self.mode_callbacks[event].append(callback)
def _trigger_callbacks(self, event: str, **kwargs):
"""Trigger registered callbacks"""
for callback in self.mode_callbacks.get(event, []):
callback(**kwargs)
def set_mode(self, new_mode: NavMode) -> bool:
"""Switch to a new mode"""
if new_mode == self.current_mode:
return True
config = self.MODE_CONFIGS.get(new_mode)
if not config:
return False
# Check if current mode can be interrupted
current_config = self.MODE_CONFIGS.get(self.current_mode)
if current_config and not current_config.can_interrupt:
return False
self.previous_mode = self.current_mode
self.current_mode = new_mode
self._trigger_callbacks('mode_changed',
old_mode=self.previous_mode,
new_mode=self.current_mode)
print(f"Mode: {self.previous_mode.value} -> {self.current_mode.value}")
return True
def get_mode(self) -> NavMode:
"""Get current mode"""
return self.current_mode
def get_mode_info(self) -> ModeConfig:
"""Get current mode configuration"""
return self.MODE_CONFIGS.get(self.current_mode)
def toggle_person_follow(self):
"""Toggle person following mode"""
if self.current_mode == NavMode.PERSON_FOLLOW:
self.set_mode(NavMode.IDLE)
else:
self.set_mode(NavMode.PERSON_FOLLOW)
def toggle_line_follow(self):
"""Toggle line following mode"""
if self.current_mode == NavMode.LINE_FOLLOW:
self.set_mode(NavMode.IDLE)
else:
self.set_mode(NavMode.LINE_FOLLOW)
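A quick interactive check of the mode controller (a minimal sketch; run it directly to watch transitions and callbacks fire). This is also the pattern the navigator node uses to reset PID state on mode changes:

# mode_controller_demo.py -- usage sketch (hypothetical helper)
from mode_controller import ModeController, NavMode

mc = ModeController()
mc.register_callback(
    'mode_changed',
    lambda old_mode, new_mode: print(f"callback: {old_mode.value} -> {new_mode.value}")
)
mc.toggle_person_follow()           # IDLE -> PERSON_FOLLOW
mc.toggle_person_follow()           # PERSON_FOLLOW -> IDLE
mc.set_mode(NavMode.LINE_FOLLOW)
print(mc.get_mode_info().description)  # "Following detected line"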
Part 7: ROS2 Navigator Node
Create src/navigator_node.py:
#!/usr/bin/env python3
"""
ROS2 Autonomous Navigator Node
Integrates all detection and control modules
"""
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from geometry_msgs.msg import Twist
from std_msgs.msg import String
from std_srvs.srv import SetBool, Trigger
from cv_bridge import CvBridge
import cv2
import json
from person_detector import PersonDetector
from line_detector import LineDetector
from pid_controller import FollowingController
from mode_controller import ModeController, NavMode
class NavigatorNode(Node):
"""Main navigation node"""
def __init__(self):
super().__init__('autonomous_navigator')
# Declare parameters
self.declare_parameter('model_path', 'yolov8n.pt')
self.declare_parameter('confidence', 0.5)
model_path = self.get_parameter('model_path').value
confidence = self.get_parameter('confidence').value
# Initialize components
self.bridge = CvBridge()
self.person_detector = PersonDetector(model_path, confidence)
self.line_detector = LineDetector()
self.controller = FollowingController()
        self.mode_controller = ModeController()
        # Reset PID state whenever the mode changes so stale integral terms
        # and lost-target timers don't carry over between behaviors
        self.mode_controller.register_callback(
            'mode_changed', lambda **kwargs: self.controller.reset()
        )
# State
self.current_target = None
self.current_line = None
# Subscribers
self.image_sub = self.create_subscription(
Image, '/image_raw', self.image_callback, 10
)
# Publishers
self.cmd_vel_pub = self.create_publisher(Twist, '/cmd_vel', 10)
self.debug_image_pub = self.create_publisher(Image, '/navigator/debug_image', 10)
self.status_pub = self.create_publisher(String, '/navigator/status', 10)
# Services
self.person_follow_srv = self.create_service(
SetBool, '/navigator/person_follow', self.person_follow_callback
)
self.line_follow_srv = self.create_service(
SetBool, '/navigator/line_follow', self.line_follow_callback
)
self.stop_srv = self.create_service(
Trigger, '/navigator/stop', self.stop_callback
)
# Timer for status updates
self.status_timer = self.create_timer(0.5, self.publish_status)
self.get_logger().info('Autonomous Navigator started!')
def image_callback(self, msg: Image):
"""Process camera images"""
cv_image = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
mode = self.mode_controller.get_mode()
twist = Twist()
debug_image = cv_image.copy()
if mode == NavMode.PERSON_FOLLOW:
# Detect and track persons
detections = self.person_detector.detect_with_tracking(cv_image)
self.current_target = self.controller.select_target(detections)
# Compute control
control = self.controller.compute_person_following(self.current_target)
twist.linear.x = control.linear_x
twist.angular.z = control.angular_z
# Draw debug visualization
debug_image = self.person_detector.draw_detections(
debug_image, detections, self.current_target
)
elif mode == NavMode.LINE_FOLLOW:
# Detect line
self.current_line = self.line_detector.detect(cv_image)
# Compute control
control = self.controller.compute_line_following(self.current_line)
twist.linear.x = control.linear_x
twist.angular.z = control.angular_z
# Draw debug visualization
debug_image = self.line_detector.draw_detection(
debug_image, self.current_line
)
        elif mode == NavMode.IDLE:
            # Stop
            twist.linear.x = 0.0
            twist.angular.z = 0.0
            # Draw mode indicator
            cv2.putText(debug_image, "IDLE", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 0), 2)
        # Publish velocity command; skip MANUAL so this node doesn't fight
        # the teleop dashboard for control of /cmd_vel
        if mode != NavMode.MANUAL:
            self.cmd_vel_pub.publish(twist)
# Publish debug image
debug_msg = self.bridge.cv2_to_imgmsg(debug_image, 'bgr8')
self.debug_image_pub.publish(debug_msg)
def person_follow_callback(self, request, response):
"""Handle person follow service"""
if request.data:
success = self.mode_controller.set_mode(NavMode.PERSON_FOLLOW)
response.message = "Person following enabled" if success else "Failed"
else:
self.mode_controller.set_mode(NavMode.IDLE)
response.message = "Person following disabled"
response.success = True
return response
def line_follow_callback(self, request, response):
"""Handle line follow service"""
if request.data:
success = self.mode_controller.set_mode(NavMode.LINE_FOLLOW)
response.message = "Line following enabled" if success else "Failed"
else:
self.mode_controller.set_mode(NavMode.IDLE)
response.message = "Line following disabled"
response.success = True
return response
def stop_callback(self, request, response):
"""Handle stop service"""
self.mode_controller.set_mode(NavMode.IDLE)
# Send stop command
twist = Twist()
self.cmd_vel_pub.publish(twist)
response.success = True
response.message = "Robot stopped"
return response
def publish_status(self):
"""Publish current status"""
mode = self.mode_controller.get_mode()
status = {
'mode': mode.value,
'tracking_target': self.current_target is not None,
'line_detected': self.current_line.detected if self.current_line else False
}
msg = String()
msg.data = json.dumps(status)
self.status_pub.publish(msg)
def main():
rclpy.init()
node = NavigatorNode()
try:
rclpy.spin(node)
except KeyboardInterrupt:
pass
finally:
# Stop robot
twist = Twist()
node.cmd_vel_pub.publish(twist)
node.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
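You can exercise the full pipeline without the robot's camera by publishing synthetic frames to /image_raw and echoing /navigator/debug_image. A sketch that draws a black stripe on a white background, which the default LineDetector bounds will pick up once line following is enabled via the service:

# test_image_publisher.py -- feeds synthetic frames at 10 Hz (hypothetical helper)
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
import numpy as np

class TestImagePublisher(Node):
    def __init__(self):
        super().__init__('test_image_publisher')
        self.pub = self.create_publisher(Image, '/image_raw', 10)
        self.bridge = CvBridge()
        self.timer = self.create_timer(0.1, self.tick)

    def tick(self):
        frame = np.full((480, 640, 3), 255, dtype=np.uint8)  # white "floor"
        frame[:, 300:340] = 0  # black vertical stripe stands in for the line
        self.pub.publish(self.bridge.cv2_to_imgmsg(frame, 'bgr8'))

def main():
    rclpy.init()
    rclpy.spin(TestImagePublisher())

if __name__ == '__main__':
    main()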
Part 8: Launch File
Create launch/navigator.launch.py:
from launch import LaunchDescription
from launch_ros.actions import Node
from launch.actions import DeclareLaunchArgument
from launch.substitutions import LaunchConfiguration
def generate_launch_description():
return LaunchDescription([
DeclareLaunchArgument(
'model_path',
default_value='yolov8n.pt',
description='Path to YOLO model'
),
DeclareLaunchArgument(
'confidence',
default_value='0.5',
description='Detection confidence threshold'
),
Node(
package='autonomous_navigator',
executable='navigator_node',
name='autonomous_navigator',
parameters=[{
'model_path': LaunchConfiguration('model_path'),
'confidence': LaunchConfiguration('confidence'),
}],
output='screen'
),
])
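The launch file assumes the code is built as an ament_python package named autonomous_navigator that installs a navigator_node entry point. A sketch of the relevant setup.py fragment (the package layout and resource paths are assumptions; adapt them to your workspace):

# setup.py fragment for a hypothetical ament_python package; the console_scripts
# name must match the launch file's executable='navigator_node'
from setuptools import setup

setup(
    name='autonomous_navigator',
    version='0.1.0',
    packages=['autonomous_navigator'],
    data_files=[
        ('share/ament_index/resource_index/packages',
         ['resource/autonomous_navigator']),
        ('share/autonomous_navigator', ['package.xml']),
        ('share/autonomous_navigator/launch', ['launch/navigator.launch.py']),
    ],
    install_requires=['setuptools'],
    entry_points={
        'console_scripts': [
            'navigator_node = autonomous_navigator.navigator_node:main',
        ],
    },
)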
Part 9: Service Client Script
Create src/nav_control.py:
#!/usr/bin/env python3
"""
Navigation control client
Command-line interface for navigator services
"""
import argparse
import rclpy
from rclpy.node import Node
from std_srvs.srv import SetBool, Trigger
class NavControlClient(Node):
def __init__(self):
super().__init__('nav_control_client')
self.person_follow_client = self.create_client(
SetBool, '/navigator/person_follow'
)
self.line_follow_client = self.create_client(
SetBool, '/navigator/line_follow'
)
self.stop_client = self.create_client(
Trigger, '/navigator/stop'
)
def call_person_follow(self, enable: bool):
"""Enable/disable person following"""
request = SetBool.Request()
request.data = enable
        if not self.person_follow_client.wait_for_service(timeout_sec=5.0):
            raise RuntimeError('Service /navigator/person_follow not available')
future = self.person_follow_client.call_async(request)
rclpy.spin_until_future_complete(self, future)
return future.result()
def call_line_follow(self, enable: bool):
"""Enable/disable line following"""
request = SetBool.Request()
request.data = enable
        if not self.line_follow_client.wait_for_service(timeout_sec=5.0):
            raise RuntimeError('Service /navigator/line_follow not available')
future = self.line_follow_client.call_async(request)
rclpy.spin_until_future_complete(self, future)
return future.result()
def call_stop(self):
"""Stop the robot"""
request = Trigger.Request()
        if not self.stop_client.wait_for_service(timeout_sec=5.0):
            raise RuntimeError('Service /navigator/stop not available')
future = self.stop_client.call_async(request)
rclpy.spin_until_future_complete(self, future)
return future.result()
def main():
parser = argparse.ArgumentParser(description='Navigator Control')
parser.add_argument('command', choices=['person', 'line', 'stop'],
help='Navigation command')
parser.add_argument('--off', action='store_true',
help='Disable the mode')
args = parser.parse_args()
rclpy.init()
client = NavControlClient()
if args.command == 'person':
result = client.call_person_follow(not args.off)
print(f"Person follow: {result.message}")
elif args.command == 'line':
result = client.call_line_follow(not args.off)
print(f"Line follow: {result.message}")
elif args.command == 'stop':
result = client.call_stop()
print(f"Stop: {result.message}")
client.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
Usage
# Start the navigator
ros2 run autonomous_navigator navigator_node
# Or with launch file
ros2 launch autonomous_navigator navigator.launch.py
# Control via service calls (run from ~/autonomous_navigator)
python3 src/nav_control.py person        # Start person following
python3 src/nav_control.py person --off  # Stop person following
python3 src/nav_control.py line          # Start line following
python3 src/nav_control.py stop          # Emergency stop
# Or use ros2 service directly
ros2 service call /navigator/person_follow std_srvs/srv/SetBool "{data: true}"
ros2 service call /navigator/stop std_srvs/srv/Trigger
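# Monitor the navigator's status topic
ros2 topic echo /navigator/status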
Deliverables
| Deliverable | Description |
|---|---|
| src/person_detector.py | YOLO person detection |
| src/line_detector.py | OpenCV line detection |
| src/pid_controller.py | PID following controller |
| src/mode_controller.py | Navigation mode manager |
| src/navigator_node.py | Main ROS2 node |
| src/nav_control.py | Service client CLI |
| launch/navigator.launch.py | Launch file |
| Video Demo | Person and line following |
Extension Ideas
- Obstacle Avoidance: Add ultrasonic/depth sensor integration (see the sketch after this list)
- Waypoint Navigation: Navigate to saved positions
- Gesture Control: Respond to hand gestures
- Face Recognition: Only follow recognized people
- Speed Adaptation: Match target’s walking speed
- Zone Boundaries: Stay within defined areas
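For the obstacle-avoidance extension, one starting point is a safety gate node between the controller and the base: it forwards velocity commands unchanged but zeroes forward motion when a range reading is too close. The sketch below assumes a sensor_msgs/Range topic named /range and that the navigator is remapped to publish on /cmd_vel_raw; both names are hypothetical.

# obstacle_gate.py -- safety gate sketch (topic names are assumptions)
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Range
from geometry_msgs.msg import Twist

class ObstacleGate(Node):
    def __init__(self):
        super().__init__('obstacle_gate')
        self.min_range = 0.25  # meters; stop threshold, tune on the robot
        self.blocked = False
        self.create_subscription(Range, '/range', self.range_cb, 10)
        self.create_subscription(Twist, '/cmd_vel_raw', self.cmd_cb, 10)
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def range_cb(self, msg):
        self.blocked = msg.range < self.min_range

    def cmd_cb(self, msg):
        if self.blocked and msg.linear.x > 0.0:
            msg.linear.x = 0.0  # allow turning/reversing, block forward motion
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(ObstacleGate())

if __name__ == '__main__':
    main()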
Grading Rubric
| Component | Points | Criteria |
|---|---|---|
| Person Detection | 20 | Accurate detection, stable tracking |
| Line Following | 20 | Smooth following, handles curves |
| PID Control | 20 | Smooth movements, no oscillation |
| Mode Switching | 15 | Clean transitions via services |
| ROS2 Integration | 15 | Proper node architecture |
| Code Quality | 10 | Clean, documented code |
| Total | 100 | |