[experimental] cmake/catkin wrapper for dinit service manager
`cmake`/`catkin` wrapper for the `dinit` service manager (https://github.com/davmac314/dinit). The purpose of this package is to provide an intermediate service manager working between `systemd` (or a system service manager in general) and `roslaunch` in ROS environments. Motivation, technical details, and alternatives are discussed below.
Scripts assume that they are executed in the https://github.com/asherikov/colcon_workspace environment.

- `rosrun cdinit launch.sh roscore timeout CDINIT_TIMEOUT=17` -- runs `roscore` and `timeout` services (see the `dinit_services` directory) and sets the `CDINIT_TIMEOUT` environment variable used by the `timeout` service; a hypothetical service description is sketched after this list.
- `rosrun cdinit ctl.sh ...` can be used to list and control services.
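For illustration only, a minimal sketch of what such dinit service descriptions could look like -- the actual files under `dinit_services` may differ, and the file names, commands, and settings below are assumptions:

```
# hypothetical dinit_services/roscore -- long-running ROS master
type = process
command = roscore
restart = true

# hypothetical dinit_services/timeout -- separate file, started only after roscore is up
type = process
command = timeout.sh
depends-on = roscore
```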
System services are normally started and supervised by a system service manager, e.g., `systemd` or `rc`. ROS services (nodes) are usually started and supervised by the `roslaunch` utility, which works differently in ROS1 and ROS2: ROS1 `roslaunch` uses a hierarchy of XML startup scripts, where nodes and their parameters are declared. ROS2 encourages using python scripts instead of XML (http://design.ros2.org/articles/roslaunch_xml.html) and implementing service management logic using the `roslaunch` API. This approach effectively dilutes the boundaries between launch script, node, and `roslaunch` itself.
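A minimal sketch of such a ROS2 python launch script is shown below; the package and executable names are placeholders and are not part of this repository:

```python
# Minimal ROS2 launch script sketch; package/executable names are hypothetical.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='demo_nodes_cpp',   # hypothetical package
            executable='talker',        # hypothetical executable
            name='talker',
            parameters=[{'rate': 10}],  # parameters are declared in python rather than XML
        ),
    ])
```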
`roslaunch` performs startup in three implicit steps:

- start `roscore`, which, in particular, provides parameter server functionality [dropped in ROS2];
- upload parameters declared in the launch scripts to the parameter server;
- start the declared nodes.

Node termination handling is controlled by three parameters: `required`, `respawn`, and `respawn_delay`. Termination of a 'required' node implies termination of the whole stack, while the `respawn*` parameters control restarting of the node. ROS2 launch provides similar functionality, but only in python, see https://github.com/ros2/launch/pull/426 and https://ubuntu.com/blog/ros2-launch-required-nodes.
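For example, the ROS1 `respawn` and `required` behaviour might be expressed in a ROS2 python launch script roughly as follows; this is a sketch assuming a recent `launch`/`launch_ros` API, and the package, executable, and node names are placeholders:

```python
# Sketch of ROS2 termination handling, roughly matching ROS1 'respawn' and 'required'.
from launch import LaunchDescription
from launch.actions import Shutdown
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # analog of respawn="true" respawn_delay="2" in ROS1 XML
        Node(package='demo_nodes_cpp', executable='talker', name='talker',
             respawn=True, respawn_delay=2.0),
        # analog of required="true": shut the whole launch down when this node exits
        Node(package='demo_nodes_cpp', executable='listener', name='listener',
             on_exit=Shutdown()),
    ])
```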
Running ROS nodes can be listed with `rosnode list`. Control over node execution is limited to `rosnode kill`, which can potentially be used to restart a node if it is declared as respawnable. Individual nodes cannot be killed in ROS2 in general (see https://answers.ros.org/question/323329/how-to-kill-nodes-in-ros2/), i.e., only ROS-aware nodes can be controlled.
In my opinion, ROS2 launch has a number of design flaws.
The software stack deployed on a robot should generally be started automatically on boot -- in order to achieve that, a system service is created which starts a roslaunch script.

The ROS assumption that startup ordering is not relevant does not hold in practice: it is often necessary to run services sequentially, for example in order to generate configurations, create fake devices in simulation, etc.

Startup scripts may also get fragmented in order to share parts of the stack between different profiles, e.g., HAL (hardware abstraction layer) and high-level logic.
A system service manager could be perfect for managing such fragmented scripts, but there are some caveats:

- Startup scripts usually must be installed to predefined locations in the system, or in the user home directory (https://wiki.archlinux.org/index.php/Systemd/User). This is inconvenient for development and testing.
- Service managers may not be adapted to running in user space or inside `docker` containers, e.g., https://serverfault.com/questions/607769/running-systemd-inside-a-docker-container-arch-linux.
These issues can be addressed by introducing an additional stack service manager.