feat: introduce Image Segmentation based on CustomStencil #359
base: devel
Conversation
- Add a new binary asset `PP_Segmentation.uasset`
- Introduce a new `SEGMENT` camera type with corresponding setup for the capture source and post-process material
- Enable temporal anti-aliasing for the scene capture component
- Adjust the capture source and data formats for the RGB and Depth camera types
- Provide uninitialized data buffers for all camera types
- Add a conditional check so `PostProcessMaterial` is not null before use
- Update the data capture and processing logic to handle the new `SEGMENT` camera type
- Add the option to disable rendering or publishing via new boolean properties, `Render` and `Publish`
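As a rough sketch of what the `SEGMENT` setup described above could look like on a `USceneCaptureComponent2D` (the function name and parameters below are illustrative, not the exact code in this PR):

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Materials/MaterialInterface.h"

// Hypothetical helper, not taken verbatim from the diff: configure a scene
// capture for a segmentation camera driven by a CustomStencil post-process
// material such as PP_Segmentation.
void SetupSegmentCapture(USceneCaptureComponent2D* InCapture,
                         UMaterialInterface* InSegmentationPP,
                         UTextureRenderTarget2D* InRenderTarget)
{
    // Segmentation colors are produced in post-process from the CustomStencil
    // buffer, so capture the final color after the post-process chain.
    InCapture->CaptureSource = SCS_FinalColorLDR;

    // Temporal anti-aliasing on the capture, as mentioned in the description.
    InCapture->ShowFlags.SetTemporalAA(true);

    // Null check on the post-process material, as the PR adds one.
    if (InSegmentationPP != nullptr)
    {
        InCapture->PostProcessSettings.AddBlendable(InSegmentationPP, 1.0f);
        InCapture->PostProcessBlendWeight = 1.0f;
    }

    InCapture->TextureTarget = InRenderTarget;
}
```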
- Remove the condition check on Publishing in GetROS2Data so the render request always proceeds.
- Remove the `Publish` boolean property from the RRROS2CameraComponent header, eliminating the ability to disable publishing images.
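As a hedged illustration of the simplified flow (the function name is a placeholder, not the actual diff), the capture request is now issued unconditionally:

```cpp
#include "Components/SceneCaptureComponent2D.h"

// Illustrative only: with the Publish guard removed, GetROS2Data-style code
// always enqueues the render request instead of early-returning when
// publishing is disabled.
void RequestCapture(USceneCaptureComponent2D* InCapture)
{
    // No `if (Publish)` check any more; the capture always proceeds.
    InCapture->CaptureSceneDeferred();
}
```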
Hi @bresilla, I'm not involved in the development of this plugin, but I really like the idea you propose. Because I want to look into this myself, I'd like to ask you some questions regarding your approach:

Thanks,
Hi @mtmal,

Thanks for liking the idea! 🙂

The short answer is that this approach is simpler than swapping materials, and we needed it ready for a project we're working on.

The more complex answer is that we want to capture RGB, Depth, and Segmentation simultaneously. Swapping materials would affect all cameras, right? (Though I might be wrong; I'm still getting familiar with UE5, but I'm enjoying it more each day! Just waiting/hoping for OpenUSD to be more integrated. 😜)

Using Stencil IDs is a bit more tedious since you have to manually assign them to each component/actor. But with material swapping, you'd also need to assign a new material. The 255 limit exists, but in my experience with segmentation, I've never needed more than 20-30 classes in a single dataset.

That said, I'd love to explore ways to improve this implementation, or maybe even consider your suggestion if I'm mistaken and material swapping is camera-specific rather than affecting all cameras. Looking forward to your thoughts!
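For anyone following along, assigning the stencil IDs mentioned above is a one-liner per component; a minimal sketch (the class-ID mapping is made up, not part of this PR):

```cpp
#include "Components/PrimitiveComponent.h"

// Minimal sketch: tag a component so a CustomStencil-based post-process
// material can map it to a segmentation class. Requires the project setting
// "Custom Depth-Stencil Pass" to be set to "Enabled with Stencil".
void TagForSegmentation(UPrimitiveComponent* InComponent, int32 InClassId)
{
    InComponent->SetRenderCustomDepth(true);
    // Stencil values are 8-bit, hence the 255-class limit discussed above.
    InComponent->SetCustomDepthStencilValue(InClassId);
}
```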
Hi @bresilla,

All buffers are available to each camera, so you should be able to access the depth buffer and the stencil buffer, and apply a post-processing material to the RGB buffer (final color) to create a mask. You can also provide a per-camera material: the post-processing material could be added to a specified camera based on a config file. You probably want to do this in C++; I've never worked with I/O in Blueprints, so I can't say how well that would work.

Applying distortion (which, I appreciate, is more what I'm after) would require a custom render path where the stencil and depth buffers account for the distortion. Without applying distortion, you should be able to retrieve all buffers; you may need to define another pass, but this could be less resource-intensive than having three separate cameras. I'm not too familiar with ROS, so additional modifications to the plugin may be required (I come from a world where we created our own autonomy stack).

And yeah, 20-30 classes for segmentation seems reasonable. I was thinking more from a framework perspective, where you never know what the end user might come up with, which can be quite surprising! Good luck!
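To make the per-camera idea concrete, here is a rough sketch of loading a material path from a config file and applying it only to one capture component. The config section, key, and function below are assumptions for illustration, not existing plugin API:

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Materials/MaterialInterface.h"
#include "Misc/ConfigCacheIni.h"

// Hypothetical config entry, e.g. in DefaultGame.ini:
//   [/Script/RapyutaSimulationPlugins.CameraConfig]
//   SegmentationMaterial=/Game/PostProcess/PP_Segmentation.PP_Segmentation
void ApplyConfiguredSegmentationMaterial(USceneCaptureComponent2D* InCapture)
{
    FString MaterialPath;
    if (GConfig && GConfig->GetString(TEXT("/Script/RapyutaSimulationPlugins.CameraConfig"),
                                      TEXT("SegmentationMaterial"),
                                      MaterialPath,
                                      GGameIni))
    {
        if (UMaterialInterface* Material = LoadObject<UMaterialInterface>(nullptr, *MaterialPath))
        {
            // Only this capture component gets the segmentation post-process.
            InCapture->PostProcessSettings.AddBlendable(Material, 1.0f);
        }
    }
}
```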