Deep Learning in ROS


Last updated 3 years ago


Note: This tutorial applies only to Kerloud UAV products equipped with a Jetson Nano.

To develop robotics applications, we can follow the official ros_deep_learning repository at https://github.com/dusty-nv/ros_deep_learning to integrate Nvidia deep learning capabilities with ROS. The code includes several ROS nodes that deploy networks on top of the installed jetson-inference libraries.

Code Structure

The main directories of the ros_deep_learning repository are listed below:

(1) launch/: launch files to deploy ROS nodes for deep learning tasks.

(2) src/: source code for the ROS nodes:

node_detectnet: ROS node to deploy the detectnet network for object localization.

node_imagenet: ROS node to deploy the imagenet network for visual recognition.

node_segment: ROS node to deploy the segnet network for semantic segmentation.

node_video_source: ROS node to handle the video input and publish image messages.

node_video_output: ROS node to create a video stream with overlaid images.

image_converter.cpp: class to convert images to various ROS message formats.
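As a rough illustration of what the image converter has to produce, the sketch below packs a raw RGB byte buffer into a dictionary shaped like a ROS sensor_msgs/Image message. The field names mirror the real message definition; the dict and the helper function are stand-ins for illustration only.

```python
def to_image_msg(raw, width, height, frame_id="camera"):
    """Pack a raw rgb8 byte buffer into a sensor_msgs/Image-shaped dict.

    Field names follow sensor_msgs/Image; the dict stands in for the
    real message class from the ROS sensor_msgs package.
    """
    channels = 3                  # rgb8: three bytes per pixel
    step = width * channels      # bytes per image row
    assert len(raw) == step * height, "buffer size must match dimensions"
    return {
        "header": {"frame_id": frame_id},
        "height": height,
        "width": width,
        "encoding": "rgb8",
        "is_bigendian": 0,
        "step": step,
        "data": raw,
    }

# a 2x2 RGB image: 4 pixels, 12 bytes
msg = to_image_msg(bytes(range(12)), width=2, height=2)
print(msg["step"], msg["encoding"])  # 6 rgb8
```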

How to Install

Users have to install the jetson-inference libraries and ROS, then build the ros_deep_learning workspace. The official guide is at https://github.com/dusty-nv/ros_deep_learning#installation. For jetson-inference installation, please refer to the previous page.

For ROS Melodic, we have to install the prerequisites below:

    sudo apt-get install ros-melodic-image-transport ros-melodic-vision-msgs

Then run catkin_make in the workspace directory ~/ros_workspace.
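After catkin_make completes, the workspace follows the standard catkin layout. The sketch below (plain Python, stdlib only; the helper itself is hypothetical) lists the paths you should expect to see, which is handy when checking a build:

```python
from pathlib import Path

def catkin_layout(workspace="~/ros_workspace"):
    """Return the paths a catkin workspace is expected to contain
    after catkin_make completes. Names are standard catkin
    conventions; the workspace path comes from this tutorial."""
    ws = Path(workspace).expanduser()
    return {
        "sources": ws / "src" / "ros_deep_learning",  # cloned repo
        "setup":   ws / "devel" / "setup.bash",       # sourced before roslaunch
        "build":   ws / "build",                      # catkin_make output
    }

for name, path in catkin_layout().items():
    print(name, path)
```

If devel/setup.bash is missing, the build did not finish: re-run catkin_make and check its error output.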

How to Run

(1) Before proceeding, if you're using ROS Melodic, make sure that roscore is running first.

(2) Launch the video viewer to check whether the video stream is OK:

    cd ~/ros_workspace
    source devel/setup.bash
    roslaunch ros_deep_learning video_viewer.ros1.launch input:=csi://0 output:=display://0
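The input:= and output:= arguments take stream URIs in the jetson-inference style (csi:// for the MIPI CSI camera, v4l2:// for USB cameras, rtp:// for network streams, display:// for the local screen). As a small sketch, the hypothetical helper below composes the roslaunch command line from a pair of URIs and rejects unrecognised schemes:

```python
def viewer_cmd(input_uri="csi://0", output_uri="display://0",
               launch_file="video_viewer.ros1.launch"):
    """Build the roslaunch command line used in this tutorial.

    URI schemes follow the jetson-inference streaming docs; the
    helper itself is just a convenience for composing the string."""
    known = ("csi://", "v4l2://", "rtp://", "rtsp://", "file://", "display://")
    for uri in (input_uri, output_uri):
        if not uri.startswith(known):
            raise ValueError(f"unrecognised stream URI: {uri}")
    return (f"roslaunch ros_deep_learning {launch_file} "
            f"input:={input_uri} output:={output_uri}")

print(viewer_cmd())
```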

(3) Launch the imagenet node for visual recognition:

    cd ~/ros_workspace
    source devel/setup.bash
    roslaunch ros_deep_learning imagenet.ros1.launch input:=csi://0 output:=display://0
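The imagenet node publishes classification results that carry (class id, score) hypotheses, in the spirit of vision_msgs/Classification2D. A subscriber usually reduces these to the top-1 label; the sketch below shows that reduction on plain tuples (the message layout here is a simplification, and the label table is made up for illustration):

```python
def top1(results, labels):
    """Pick the best hypothesis from classification results.

    `results` mimics the (id, score) pairs carried by a
    classification message; this reducer is illustrative only."""
    class_id, score = max(results, key=lambda r: r[1])
    return labels.get(class_id, "unknown"), score

labels = {207: "golden retriever", 281: "tabby cat"}   # example ImageNet-style ids
results = [(207, 0.12), (281, 0.83)]
print(top1(results, labels))  # ('tabby cat', 0.83)
```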

(4) Launch the detectnet node for object detection:

    cd ~/ros_workspace
    source devel/setup.bash
    roslaunch ros_deep_learning detectnet.ros1.launch input:=csi://0 output:=display://0
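The detectnet node publishes per-object detections with a confidence score and an axis-aligned bounding box, similar to vision_msgs/Detection2D. A common first step in a consumer node is thresholding by confidence and extracting box centres, sketched below on plain dicts (the field names cx/cy/score are a simplification, not the real message fields):

```python
def keep_confident(detections, threshold=0.5):
    """Filter detectnet-style detections and return bbox centres.

    Each detection is a dict with a score and a bbox centre; the
    layout is an assumption, the filtering logic is generic."""
    return [(d["cx"], d["cy"], d["score"])
            for d in detections if d["score"] >= threshold]

dets = [
    {"cx": 320.0, "cy": 240.0, "score": 0.91},  # confident detection
    {"cx": 100.0, "cy": 80.0,  "score": 0.22},  # below threshold, dropped
]
print(keep_confident(dets))  # [(320.0, 240.0, 0.91)]
```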

(5) Launch the segnet node for semantic segmentation:

    cd ~/ros_workspace
    source devel/setup.bash
    roslaunch ros_deep_learning segnet.ros1.launch input:=csi://0 output:=display://0
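The segnet node assigns a class ID to every pixel. One simple way to summarise its output is to compute the fraction of the image covered by each class; the sketch below does this on a flat list of class IDs (the real node publishes image messages, so the flat-list form here is a simplification for illustration):

```python
from collections import Counter

def class_fractions(mask, class_names):
    """Summarise a segnet-style class-ID mask as per-class coverage.

    `mask` is a flat list of per-pixel class IDs; names for
    unknown IDs fall back to a generic label."""
    counts = Counter(mask)
    total = len(mask)
    return {class_names.get(c, f"class_{c}"): n / total
            for c, n in counts.items()}

mask = [0, 0, 0, 1, 1, 2, 2, 2]          # 8 pixels, 3 classes
names = {0: "sky", 1: "road", 2: "tree"}  # example label table
print(class_fractions(mask, names))  # {'sky': 0.375, 'road': 0.25, 'tree': 0.375}
```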

For input and output settings, refer to https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md for details.

Make sure that you have downloaded the necessary networks for jetson-inference. If not, you can download them manually by following the instructions at https://github.com/dusty-nv/jetson-inference/releases.
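A quick way to spot missing models before launching is to check for their directories on disk. The sketch below assumes models live under the jetson-inference build-tree convention ~/jetson-inference/data/networks (adjust the path to your checkout), and the model names are examples:

```python
from pathlib import Path

def missing_networks(required, networks_dir="~/jetson-inference/data/networks"):
    """Report which model directories are absent.

    The networks directory path follows the jetson-inference
    source-tree convention; pass your own path if it differs."""
    root = Path(networks_dir).expanduser()
    return [name for name in required if not (root / name).is_dir()]

# example model names used by imagenet and detectnet
print(missing_networks(["googlenet", "ssd-mobilenet-v2"]))
```

An empty list means all requested models are in place; otherwise download the listed ones from the releases page above.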
