- Install ROS Noetic
- Create a virtual environment with Python >= 3.8
  ```shell
  conda create -n yolov5_deepsort python=3.8
  conda activate yolov5_deepsort
  ```
- Install PyTorch and torchvision
  ```shell
  pip3 install torch torchvision torchaudio
  ```
  You can follow the official PyTorch website to install the correct version. CUDA 12.2 with cuDNN 8.9.7.29 has been tested on this branch; other versions may work, but they are untested.
- Install Python3 dependencies
  ```shell
  pip3 install rospkg catkin_pkg
  ```
- Clone the repository into your catkin workspace and build it
  ```shell
  mkdir -p ~/catkin_ws/src && cd ~/catkin_ws/src
  git clone https://github.com/ChiRanTou/Yolov5_Deepsort_pytorch_ROS.git
  cd ..
  catkin_make
  ```
- Ensure that all dependencies are met
  ```shell
  cd ~/catkin_ws/src/Yolov5_Deepsort_pytorch_ROS
  pip3 install -r requirements.txt
  ```
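After the installation steps above, a quick sanity check can confirm that PyTorch was installed correctly and can see your GPU. This is a minimal sketch (the `check_pytorch` helper is not part of the repository) using only PyTorch's standard API:

```python
def check_pytorch():
    """Report the installed PyTorch version and whether CUDA is usable."""
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed; run: pip3 install torch torchvision torchaudio"
    if torch.cuda.is_available():
        return "PyTorch %s with CUDA %s on %s" % (
            torch.__version__, torch.version.cuda, torch.cuda.get_device_name(0))
    return "PyTorch %s, CPU only (detection will be slow)" % torch.__version__

if __name__ == "__main__":
    print(check_pytorch())
```

If this reports CPU only on a machine with an NVIDIA GPU, the installed wheel probably does not match your CUDA version.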
- You can download a YOLOv5 weight file from the official YOLOv5 website and place the downloaded `.pt` file under `yolov5/weights/`. I've already put `yolov5s.pt` in the folder. You can use another weight file if you like.
- You may also need to download the DeepSORT weight file from here and place the `ckpt.t7` file under `deep_sort/deep/checkpoint/`. I've already put `ckpt.t7` in the folder. You can replace the file if you like.
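Before launching, you can verify that both weight files are where the nodes expect them. A small standard-library helper (hypothetical, not part of the repository):

```python
from pathlib import Path

# Expected weight locations, relative to the package root
WEIGHTS = ["yolov5/weights/yolov5s.pt", "deep_sort/deep/checkpoint/ckpt.t7"]

def missing_weights(pkg_root):
    """Return the expected weight files that are not present under pkg_root."""
    root = Path(pkg_root)
    return [w for w in WEIGHTS if not (root / w).is_file()]

if __name__ == "__main__":
    missing = missing_weights(Path.home() / "catkin_ws/src/Yolov5_Deepsort_pytorch_ROS")
    if missing:
        print("missing weight files:", ", ".join(missing))
    else:
        print("all weight files found")
```

If you chose a different `.pt` file, adjust the `WEIGHTS` list accordingly.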
Before running the project, note that a ROS simulation environment is required. A robot with an RGB camera is also needed to publish the `sensor_msgs/Image` topic, so you have to set one up first.
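To confirm that your camera is actually publishing images, a minimal rospy subscriber can be used. This is a sketch, not part of the repository; the topic name `/rgb/image_raw` matches the launch-file default and may differ in your setup:

```python
#!/usr/bin/env python3
def main(topic="/rgb/image_raw"):
    # rospy is imported lazily so the file can be read without ROS installed
    import rospy
    from sensor_msgs.msg import Image

    def on_image(msg):
        # Logs once per received frame; size and encoding help catch config errors
        rospy.loginfo("got %dx%d image, encoding=%s", msg.width, msg.height, msg.encoding)

    rospy.init_node("camera_check")
    rospy.Subscriber(topic, Image, on_image, queue_size=1)
    rospy.spin()

if __name__ == "__main__":
    main()
```

If no log lines appear, check `rostopic list` for the actual image topic name before editing the launch file.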
- Open the launch file and change `image_topic` to the topic on which your camera publishes images.
  ```xml
  <arg name="image_topic" default="/rgb/image_raw"/>
  ```
- Start your ROS simulation environment and make sure the camera is working.
- Launch the task you want to run.
  ```shell
  # for detection only
  roslaunch yolov5_deepsort detector.launch
  # for tracking
  roslaunch yolov5_deepsort tracker.launch
  ```
- Open `rviz` if it is not already running, and add `detected_objects_image/IMAGE` or `tracked_objects_image/IMAGE` (depending on your task) to the display panel. You can now see the result in rviz.
Note: please follow the LICENSE of YOLOv5!