We would like to start building tooling for robotic fabrication based on the COMPAS framework, and it would be interesting to get feedback from the community about the needs such tooling should fulfill.
Our idea revolves around the following:
Communications channel: Python ROS client via websocket bridge
Data exchange mechanism: via URDF handling (in particular URDF Mesh <-> Compas Mesh)
Fundamental data structures: Frame, Pose, Transformation, etc. (see the sketch after this list)
Fabrication data structures: Fabrication process representation
Planning tools: Path planning & kinematic solvers, and interfaces to existing tools.
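To make the fundamental data structures concrete, here is a minimal sketch of working with frames and transformations, assuming the compas.geometry API (the exact module layout has shifted between compas and compas_fab releases, so treat it as illustrative):

```python
from compas.geometry import Frame, Point, Transformation

# A frame defined by an origin and its x- and y-axes.
frame = Frame([0.5, 0.2, 0.1], [1, 0, 0], [0, 1, 0])

# Transformation representing the pose of this frame, i.e. it maps coordinates
# expressed in the frame's local system into world coordinates.
T = Transformation.from_frame(frame)

# The frame's local origin, mapped into world coordinates.
local_origin = Point(0, 0, 0)
print(local_origin.transformed(T))  # -> the frame's origin (0.5, 0.2, 0.1)
```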
In terms of the data exchange mechanism, what would be the envisioned pipeline for users to set up their own hardware to interact with the backends, e.g. ROS?
Based on my experience, there seem to be two ways:
(1) Let our users set up everything in GH/Rhino (just like we’ve been doing for a while with HAL or KUKA PRC), and we write Python packages to automate the URDF and ROS package generation process, similar to what the MoveIt! Setup Assistant does (see the sketch after this list). We then send the generated package to the catkin workspace that resides in the Docker backend and force a rebuild and a restart.
(2) Let our users follow the URDF generation tutorial, build their own package, and send it to Docker. Since most of our community uses industrial robots that are already covered by ROS-Industrial, their job is reduced to configuring their end effector (flange frame and TCP frame).
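To illustrate the kind of automation option (1) implies, here is a hypothetical sketch of a helper that generates a minimal tool URDF from a mesh exported out of GH/Rhino. The template, function name and package layout are illustrative assumptions, not an existing compas_fab API:

```python
# Hypothetical helper: generate a minimal URDF description for an end effector mesh.
# The package/mesh naming convention here is an assumption, not an existing standard.
TOOL_URDF_TEMPLATE = """<?xml version="1.0"?>
<robot name="{name}">
  <link name="tool_link">
    <visual>
      <geometry>
        <mesh filename="package://{package}/meshes/{mesh}" />
      </geometry>
    </visual>
    <collision>
      <geometry>
        <mesh filename="package://{package}/meshes/{mesh}" />
      </geometry>
    </collision>
  </link>
</robot>
"""


def generate_tool_urdf(name, package, mesh):
    """Return a minimal URDF string for a single-link end effector."""
    return TOOL_URDF_TEMPLATE.format(name=name, package=package, mesh=mesh)


if __name__ == "__main__":
    print(generate_tool_urdf("gripper", "my_tool_description", "gripper.stl"))
```

A real generator would also have to emit the ROS package boilerplate (package.xml, launch files, semantic description), which is exactly the part worth automating.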
For visualization, after the planning tools finish their computation, do we plan to fetch the computed trajectories back into the compas_fab environment and let users visualize the result using compas’s drawing pipeline?
Alternatively, we could leverage the ros3djs library offered by Robot Web Tools and let users watch and interact with the planning result in the web browser. This approach echoes compas’s CAD-free spirit.
Either way, this requires us to build data structures similar to MoveIt!'s plan representation for computed trajectories, so that trajectories are tagged with process information and, more importantly, with the accompanying planning scene. The planning scene captures the allowed collision matrix, the collision objects, and the attached collision objects. I do think visualizing the planning scene itself is worth our effort, too [1].
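As a rough illustration of the kind of data structure meant here, the sketch below tags a trajectory with its process and the planning scene it was computed against, loosely inspired by MoveIt!'s concepts. All class and field names are hypothetical, not an existing compas_fab or MoveIt! API:

```python
# Hypothetical containers; field names are illustrative only.

class PlanningScene(object):
    """Snapshot of the scene a trajectory was planned against."""

    def __init__(self, collision_objects, attached_collision_objects, allowed_collisions):
        self.collision_objects = collision_objects                    # e.g. {name: mesh}
        self.attached_collision_objects = attached_collision_objects  # e.g. {link_name: mesh}
        self.allowed_collisions = allowed_collisions                  # e.g. set of (name_a, name_b) pairs


class TaggedTrajectory(object):
    """A computed trajectory tagged with process info and its planning scene."""

    def __init__(self, process_type, element_id, joint_trajectory, scene):
        self.process_type = process_type          # e.g. 'transit', 'extrusion', 'pick', 'place'
        self.element_id = element_id              # which design element this trajectory serves
        self.joint_trajectory = joint_trajectory  # list of joint configurations
        self.scene = scene                        # PlanningScene snapshot
```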
In my own work, a ROS-based extrusion planning framework called Choreo [2], I save all the computed results to a formatted JSON file. Users then manually copy it back to Windows and parse it in Grasshopper (see this workflow video [3] for more details). The tagged trajectories let users tell trajectories apart by the process they belong to, and insert path modifications or hardware I/O commands accordingly, in a visual programming environment (GH). This workflow has been very convenient for my own work, and we can definitely build a better, automated version of it in compas_fab.
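For reference, the parsing step is essentially just grouping trajectories by their tag; a minimal sketch follows. The JSON schema (top-level 'trajectories' key, 'process_type' field) is illustrative, not Choreo's actual output format:

```python
import json
from collections import defaultdict


def load_tagged_trajectories(filepath):
    """Group trajectories in a (hypothetical) results file by their process tag."""
    with open(filepath) as f:
        data = json.load(f)

    grouped = defaultdict(list)
    for traj in data['trajectories']:               # assumed top-level key
        grouped[traj['process_type']].append(traj)  # assumed per-trajectory tag
    return grouped


# Example usage, e.g. inside a GH script component:
# grouped = load_tagged_trajectories('results.json')
# for traj in grouped['extrusion']:
#     ...  # insert path modifications or hardware I/O commands per process
```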
Thanks a lot for the feedback! I’ll try to address your open questions but if I’m missing things, please bring them up again!
We’re tackling this from the bottom up, building some foundations and raising the abstraction level as we move on. The first thing we did was simply map URDF to compas data structures, leaving the process of how that URDF is generated entirely up to the user. Right now, the mapping between URDF and compas is unidirectional, but the structure is in place to close the loop and also generate URDF from compas models. Once we add that, it will be possible to add the feature you mention in (1), where the model can be built in Python code (inside or outside CAD) and then published to a ROS service, instead of having to launch ROS with it already loaded. I don’t think we will rebuild the Docker containers from the generator’s side, since Docker is only one of the possible ROS deployment modes; instead we will reload the model in memory (which, in the case of a Docker container, means the container stays untouched and the model goes away on restart, making it sort of stateless, which is good to prevent environment degradation).
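For context, the unidirectional mapping mentioned above looks roughly like this from the user's side, assuming the compas.robots API (module paths and method names may differ between releases):

```python
from compas.robots import RobotModel

# Load a URDF file into a compas robot model; link geometry can then be
# mapped onto compas meshes and drawn by the CAD-specific artists.
model = RobotModel.from_urdf_file('path/to/robot_description.urdf')

print(model.name)
for joint in model.joints:
    print(joint.name, joint.type)
```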
Yes, the goal is to remove the need to run RViz to visualize. Starting with the latest release, it’s already much easier to do that; you can check out some of the examples from a recent workshop.
As you correctly point out, to fully represent the scene we need to add additional data structures. At the moment, the scene in the examples above is based on the assumption that the entire scene was sent by you in the first place (e.g. via GH/Rhino/etc.), so we don’t need to query much of the scene: we can assume that whatever we sent matches one-to-one, and the only thing that needs to be queried is the robot(s) trajectory. This is obviously not true in every situation, but we’ll gradually add more of this in coming releases.
100% with you on this.
We’re doing something similar (see example here), with the difference that instead of generating the paths from a ROS UI panel, the idea is to handle all of that via compas/compas_fab querying ROS services, so the process can be streamlined to avoid manual copying. In the example above, the full assembly (basically, a network data structure) is dumped into JSON, including all the path plans assigned to the edges of the graph. This can be piped almost directly into the trajectory execution actions (MoveIt!'s or ROS-I follow-trajectory stuff).
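A minimal sketch of that idea, assuming the compas.datastructures.Network API (older compas releases use add_vertex instead of add_node, and the 'trajectory' attribute name is purely illustrative):

```python
from compas.datastructures import Network

# Assembly represented as a network: nodes are elements, edges are assembly steps.
assembly = Network()
a = assembly.add_node(x=0.0, y=0.0, z=0.0)
b = assembly.add_node(x=1.0, y=0.0, z=0.0)

# Attach the computed plan to the edge it belongs to (illustrative attribute name).
assembly.add_edge(a, b, trajectory=[[0.0] * 6, [0.1, 0.0, 0.0, 0.0, 0.0, 0.0]])

# Serialize everything, trajectories included, to JSON.
assembly.to_json('assembly_with_plans.json')
```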
It would be very interesting to explore how Choreo’s planning tools could be integrated into this setup.