Autonomy Development Environment
The environment is meant to leverage system development and robot deployment for autonomous ground and aerial navigation. It integrates the Gazebo and Unity simulators and contains autonomous navigation modules such as collision avoidance, waypoint following, and assistive teleoperation. Users can develop autonomous navigation systems in simulation and later deploy them to real robots with a minor sim-to-real gap.
Key Features
Multi-scale Complex Environments
The development environment includes 29 multi-scale scenes of varying complexity: 17 cluttered, photo-realistic indoor scenes for ground autonomous navigation and 12 large-scale complex scenes for aerial autonomous navigation and aerial-ground autonomy. The environment supports both Gazebo and Unity. Because the scenes run either in Gazebo or as standalone Unity executables, each can be deployed and tested within 10 minutes (details in Quick Start and Simulation Environments).
Our development environment supports both ground and aerial vehicles. Guidance for configuring the Aerial Autonomy Environment can be found below. Instructions for setting up the Ground Autonomy Environment are available here.
High-performance Deployable Autonomy Stack
The autonomy stack takes less than 10% CPU load overall, with local path planning and execution running in under 1 ms and under 1% CPU, and real-time global path planning using under 20% CPU. It can also be directly deployed to real robots for various field applications.
Ultra Light-weight Local Planner
The autonomy stack includes specialized local planners for different vehicle configurations, which plan dynamically feasible trajectories for autonomous navigation while avoiding or interacting with obstacles. By leveraging a pre-computed motion primitive library, the stack remains lightweight (<0.5% CPU occupancy on a single core) while ensuring robust performance.
Aerial Local Planner
Ground Local Planner
Interactive Local Planner
Real-time Global Planner
We provide our global planner series, the FAR planners, for long-range navigation in unknown, large-scale, and/or complex environments. By utilizing a sparse polygon map representation, the global planner can plan long paths (>300 m) in real time (<10 ms). When navigating within a known map, the planner uses the previously built visibility graph; when navigating in unknown space, it attempts to discover a route to the goal. Details can be found on the other sub-pages of this website.
Left: Air-FAR Planner; Middle: FAR Planner; Right: Interactive-FAR Planner
Extendable to High-level Robotics and AI Tasks
The high extensibility of the platform has already been validated in a range of field applications.
Autonomous Exploration
Visual-Language Navigation
Aerial Autonomy Development Environment
7-Minute Quick Start
The repository has been tested on Ubuntu 20.04 with ROS Noetic. Install the dependencies with the commands below.
sudo apt install libusb-dev python-yaml python-is-python3 # may need 'sudo apt update' first
git clone https://github.com/Bottle101/aerial_autonomy_development_environment.git
In a terminal, go to the folder and compile.
cd aerial_autonomy_development_environment
catkin_make
Download the factory (default) environment model or any of our other Unity environment models and unzip the files to the 'src/vehicle_simulator/mesh' folder.
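For example, assuming the factory model was downloaded to ~/Downloads as 'factory.zip' (the archive name and location are only illustrative), it can be extracted from the repository root:
unzip ~/Downloads/factory.zip -d src/vehicle_simulator/mesh/ # extract the Unity environment model into the mesh folder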
Launch the system.
source ./devel/setup.bash
roslaunch vehicle_simulator system_unity.launch
Now users can send a waypoint in 3 steps: 1. click the 'Waypoint3D' button in RVIZ; 2. click a point in the map, hold the left mouse button, and scroll the mouse wheel to adjust the height; 3. release the mouse. You can also use the virtual joystick on the left side for assistive teleoperation.
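Waypoints can also be sent from a terminal. As a minimal sketch, assuming the stack subscribes to a /way_point topic of type geometry_msgs/PointStamped in the map frame, as in the ground autonomy environment (verify the actual topic name with 'rostopic list'):
# publish a single waypoint at x=10, y=5, z=2 (topic name and frame are assumptions)
rostopic pub -1 /way_point geometry_msgs/PointStamped '{header: {frame_id: "map"}, point: {x: 10.0, y: 5.0, z: 2.0}}'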
(Optional) If you want to use Gazebo, please follow the instructions below.
# run a script to download environment models for Gazebo
./src/vehicle_simulator/mesh/download_environments.sh
# launch
source ./devel/setup.bash
roslaunch vehicle_simulator system_gazebo.launch
Simulation Environments
Change Environment
The repository contains a set of simulation environments of different types and scales. To launch the system with a particular environment, change the config in src/vehicle_simulator/launch/system_unity.launch, Line 4:
<arg name="map_name" default="SCENE_TO_USE"/>
# factory, village, urban_city, town, old_town, office_building_1, office_building_2
If you are using Gazebo, change src/vehicle_simulator/launch/system_gazebo.launch, Line 3:
<arg name="map_name" default="SCENE_TO_USE"/> # garage, indoor, campus, tunnel, forest
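Because map_name is declared as a launch argument, it can also be overridden on the command line without editing the launch files, for example:
roslaunch vehicle_simulator system_unity.launch map_name:=village
roslaunch vehicle_simulator system_gazebo.launch map_name:=forest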
The simulation environments are kept in 'src/vehicle_simulator/mesh'. A quick overview is shown below. In each environment, 'map.ply' is a 3D point cloud of the overall map, viewable with software such as CloudCompare and MeshLab. Autonomous navigation systems can use the point cloud as a prior map if needed.
Change Config
For different environments, users may need to change the configurations of both the autonomy stack and the FAR planners to achieve the best performance. The suggested configurations are listed below.
For the autonomy stack, change the config in src/vehicle_simulator/launch/system_unity.launch, Line 5, or in src/vehicle_simulator/launch/system_gazebo.launch, Line 3, if you are using Gazebo.
<arg name="config" default="CONFIG_TO_USE"/> # indoor, outdoor
For the FAR planners, change the corresponding config in far_planner/launch/far_planner.launch or airfar_planner/launch/airfar.launch, Line 7:
<arg name="config_file" default="CONFIG_TO_USE"/> # default or outdoor
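As with map_name, these config arguments can be overridden at launch time instead of editing the files, for example (assuming the FAR planner is started from the launch file referenced above):
roslaunch vehicle_simulator system_unity.launch map_name:=town config:=outdoor
roslaunch far_planner far_planner.launch config_file:=outdoor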
Environment Overview
Gray font means the environment is in Unity; orange means Gazebo.
Campus (340m x 340m)
Indoor Corridors (130m x 100m)
Multi-story Garage (140m x 130m, 5 floors)
Tunnel Network (330m x 250m)
Forest (150m x 150m)
Factory (300m x 300m)
Village (240m x 240m)
Urban City (500m x 500m)
Town (800m x 800m)
Old Town (140m x 90m)
Office Building 1 (30m x 30m, Aerial-Ground)
Office Building 2 (50m x 40m, Aerial-Ground)
Sensor Setup
For Unity, we provide both a lidar and a depth camera for navigation, as well as a panoramic RGB camera with semantic segmentation for other tasks. To switch between the lidar and the depth camera, simply click the toggle switch at the upper-right of the simulator.
For Gazebo, we only provide a lidar for navigation, but users can still use the depth camera for other tasks by subscribing to /rgbd_camera/depth/points.
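To quickly verify that the depth camera is publishing in Gazebo, inspect the topic from a terminal:
rostopic hz /rgbd_camera/depth/points # measure the publishing rate
rostopic echo -n 1 --noarr /rgbd_camera/depth/points # print one message with the point data omitted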
Navigation Modes
Waypoint
Users can send a waypoint in 3 steps: 1. click the 'Waypoint3D' button in RVIZ; 2. click a point in the map, hold the left mouse button, and scroll the mouse wheel to adjust the altitude, as shown in the figure below; 3. release the mouse.
Hold the left button & scroll to adjust altitude
Release the mouse and start navigation
Smart Joystick
At any time during navigation, users can take over with the control panel by dragging the virtual joystick, or by using a PS3/PS4 or Xbox controller. The system will switch to smart joystick mode: the vehicle tries to follow the joystick command while avoiding collisions at the same time.
To resume navigation, press the 'Resume Navigation to Goal' button or use the 'Waypoint3D' button to set a new goal. If users wish to resume the navigation while also controlling the flying speed, hold the 'Waypoint-flight button' and adjust the speed using the right joystick at the same time.
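If a physical gamepad is connected, its input can be checked by echoing the joystick topic (assuming the standard joy node from the joystick_drivers packages publishes on /joy):
rostopic echo /joy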
Manual
Hold the 'Manual-flight button' below to switch to manual mode; note that all collision avoidance will be disabled.
Ground Autonomy Development Environment
Details about our ground-based autonomy development platform can be found on this page: www.cmu-exploration.com/
Credits
The code is based on Ground-based Autonomy Base Repository and Aerial Autonomy Base Repository by Ji Zhang and Chao Cao at CMU.
Tunnel network environment is provided by Tung Dang at University of Nevada, Reno.
velodyne_simulator and joystick_drivers packages are from open-source releases.
Main contributors: Botao He, Guofei Chen, Ji Zhang
Links
Aerial Navigation Development Environment: Leveraging system development and robot deployment for aerial autonomy.
AI Meets Autonomy: Vision, Language, and Autonomous Systems Tutorial & CMU VLA Challenge
References
[1] B. He, G. Chen, C. Fermuller, Y. Aloimonos and J. Zhang. Air-FAR: Fast and Adaptable Routing for Aerial Navigation in Large-scale Complex Unknown Environments. arXiv preprint arXiv:2409.11188 (2024). [PDF]
[2] B. He, G. Chen, W. Wang, J. Zhang, C. Fermuller, and Y. Aloimonos. Interactive-FAR: Interactive, Fast and Adaptable Routing for Navigation Among Movable Obstacles in Complex Unknown Environments. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS) 2024. [PDF]
[3] F. Yang, C. Cao, H. Zhu, J. Oh, and J. Zhang. FAR Planner: Fast, Attemptable Route Planner using Dynamic Visibility Update. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS). Kyoto, Japan, Oct. 2022. Best Student Paper Award. [PDF] [Video]
[4] C. Cao, H. Zhu, F. Yang, Y. Xia, H. Choset, J. Oh, and J. Zhang. Autonomous Exploration Development Environment and the Planning Algorithms. IEEE Intl. Conf. on Robotics and Automation (ICRA). 2022. [PDF]