# Package nre.grpc.protos.sensorsim

## Services
SensorsimService provides gRPC APIs for neural reconstruction and sensor simulation. The service enables rendering of RGB camera and LiDAR sensor data from reconstructed scenes, querying scene metadata, and editing dynamic objects within scenes. Its RPCs:

- Renders an RGB camera image from a reconstructed scene at a specified pose and time.
- Renders LiDAR point cloud data from a reconstructed scene at a specified pose and time.
- Returns version information about the service implementation and API.
- Returns the list of scene IDs available for rendering.
- Returns the available cameras and their specifications for a given scene.
- Gracefully shuts down the sensor simulation service.
- Returns available trajectories (camera/vehicle paths) for a given scene.
- Returns available ego vehicle masks for masking out the ego vehicle in rendered images.
- Returns the list of external asset objects that can be inserted into a scene.
- Edits assets in a scene by replacing or inserting dynamic objects.
- Restores model parameters for a scene to their original state.
- Returns information about dynamic objects present in a scene.
## Messages
Metadata and specifications for a single camera.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | intrinsics | nre.grpc.protos.sensorsim.CameraSpec | Camera intrinsic parameters. |
| 2 | rig_to_camera | nre.grpc.protos.common.Pose | Transform from rig coordinate frame to camera coordinate frame. TODO: should be camera_to_rig for consistency; needs careful migration. |
| 3 | logical_id | string | Logical identifier for the camera. |
| 4 | trajectory_idx | uint32 | DEPRECATED: trajectory index, no longer used in the AlpaSim runtime. NRE no longer supports multiple trajectories. |
Request to get available cameras for a scene.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | scene_id | string | Identifier of the scene to query cameras for. |
Response containing available cameras and their specifications for a scene.
Request to get dynamic objects present in a scene.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | scene_id | string | Identifier of the scene to query dynamic objects for. |
Response containing dynamic objects present in a scene.
Response containing available ego vehicle masks.
Request to get available trajectories (paths) for a scene.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | scene_id | string | Identifier of the scene to query trajectories for. |
Response containing available trajectories for a scene.
Metadata for a single trajectory.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | trajectory_idx | uint32 | Index identifying this trajectory. |
| 2 | trajectory | nre.grpc.protos.common.Trajectory | Sequence of poses over time defining the trajectory path. |
Bivariate polynomial model for windshield distortion.
Models the optical distortion caused by viewing through a windshield,
which can introduce additional distortion beyond the camera lens itself.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | reference_poly | nre.grpc.protos.sensorsim.BivariateWindshieldModelParameters.ReferencePolynomial | Specifies which polynomial is the reference (primary) mapping. |
| 2 | horizontal_poly | repeated double | Polynomial coefficients for horizontal distortion. |
| 3 | vertical_poly | repeated double | Polynomial coefficients for vertical distortion. |
| 4 | horizontal_poly_inverse | repeated double | Inverse polynomial coefficients for horizontal distortion. |
| 5 | vertical_poly_inverse | repeated double | Inverse polynomial coefficients for vertical distortion. |
Complete specification of a camera including intrinsics, resolution, and distortion models.
Represents a dynamic object (e.g., vehicle, pedestrian) in a scene with motion.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | track_id | string | Unique tracking identifier for the object across frames. |
| 2 | pose_pair | nre.grpc.protos.sensorsim.PosePair | Pose at frame start and end times, defining object motion during the frame. |
Complete description of a dynamic object track including trajectory and 3D properties.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | id | string | Unique identifier for this object track. |
| 2 | semantic_class | string | Semantic class label (e.g., "car", "pedestrian", "bicycle"). |
| 3 | trajectory | nre.grpc.protos.common.Trajectory | Full trajectory of the object over time. |
| 4 | object_size | nre.grpc.protos.common.AABB | 3D bounding box dimensions of the object. |
| 5 | asset_id | string | Identifier of the 3D asset used to represent this object. |
Request to edit assets in a scene by replacing or inserting objects.
Response from an asset editing operation.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | success | bool | True if the edit operation completed successfully, false otherwise. |
| 2 | message | string | Human-readable message describing the result or any errors. |
Identifier for an ego vehicle mask associated with a specific camera and rig configuration.
Ego masks are used to mask out the ego vehicle in rendered camera images.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | camera_logical_id | string | Logical identifier for the camera (e.g., "camera_front_wide_120fov"). |
| 2 | rig_config_id | string | Rig configuration identifier (e.g., "hyperion 8.0", "hyperion 8.1"). |
Metadata for a single ego mask.
Request to get external asset objects that can be inserted into a scene.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | scene_id | string | Identifier of the scene to query external assets for. |
Response containing external asset objects available for insertion.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | track_ids | repeated string | List of track IDs for external asset objects that can be inserted. |
F-theta (fisheye) camera model parameters.
This model is commonly used for wide field-of-view cameras where the
relationship between pixel distance from principal point and angle is modeled
using polynomial equations.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | principal_point_x | double | Principal point x-coordinate in pixels. |
| 2 | principal_point_y | double | Principal point y-coordinate in pixels. |
| 3 | reference_poly | nre.grpc.protos.sensorsim.FthetaCameraParam.PolynomialType | Specifies which polynomial is the reference (primary) polynomial. |
| 4 | pixeldist_to_angle_poly | repeated double | Polynomial coefficients mapping pixel distance to angle, ordered from lowest to highest degree term. |
| 5 | angle_to_pixeldist_poly | repeated double | Polynomial coefficients mapping angle to pixel distance, ordered from lowest to highest degree term. |
| 6 | max_angle | double | Maximum field of view angle in radians. |
| 7 | linear_cde | nre.grpc.protos.sensorsim.LinearCde | Additional linear correction parameters. |
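As a sketch of how the f-theta mapping can be evaluated (the helper names below are illustrative, not part of this API): with coefficients ordered lowest to highest degree, the ray angle for a pixel is the polynomial of that pixel's distance from the principal point.

```python
import numpy as np

def pixeldist_to_angle(pixel_dist, coeffs):
    # coeffs[i] multiplies pixel_dist**i, matching the lowest-to-highest
    # degree ordering that pixeldist_to_angle_poly specifies.
    return sum(c * pixel_dist**i for i, c in enumerate(coeffs))

def pixel_to_ray_angle(u, v, cx, cy, coeffs, max_angle):
    # Distance from the principal point, then the polynomial mapping.
    r = float(np.hypot(u - cx, v - cy))
    theta = pixeldist_to_angle(r, coeffs)
    if theta > max_angle:
        raise ValueError("pixel lies outside the modeled field of view")
    return theta
```

With a toy linear polynomial `[0.0, 0.001]`, a pixel 100 px from the principal point maps to 0.1 rad.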
Request to render LiDAR point cloud data from a reconstructed scene.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | scene_id | string | Identifier of the scene to render from. |
| 2 | lidar_config | nre.grpc.protos.sensorsim.LidarSpec | LiDAR sensor specification defining the device model. |
| 3 | frame_start_us | fixed64 | LiDAR spin start time in microseconds since epoch. |
| 4 | frame_end_us | fixed64 | LiDAR spin end time in microseconds since epoch. |
| 5 | sensor_pose | nre.grpc.protos.sensorsim.PosePair | LiDAR sensor pose at spin start and end times. |
| 6 | dynamic_objects | repeated nre.grpc.protos.sensorsim.DynamicObject | List of dynamic objects to render in the scene with their motion. |
Response containing rendered LiDAR point cloud data.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | point_xyzs | repeated float | Point cloud xyz coordinates in the end-of-spin LiDAR coordinate frame, stored as a flat array [x1, y1, z1, x2, y2, z2, ...] in meters. |
| 2 | point_intensities | repeated float | Intensity value for each point, normalized to [0, 1]; represents the reflectivity of the surface at each point. |
| 3 | num_points | uint32 | Total number of points in the point cloud. |
| 4 | point_xyzs_buffer | bytes | Binary buffer containing point xyz coordinates; more efficient than repeated float for large point clouds. |
| 5 | point_intensities_buffer | bytes | Binary buffer containing point intensities; more efficient than repeated float for large point clouds. |
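A minimal sketch of unpacking either point encoding into an (N, 3) array. Note the buffer layout assumed here (packed little-endian float32 triples, mirroring the repeated-float field) is an assumption; the proto does not state the byte format.

```python
import numpy as np

def points_from_response(point_xyzs=None, point_xyzs_buffer=None):
    # Prefer the binary buffer when present; fall back to the repeated
    # float field. ASSUMPTION: the buffer is packed little-endian float32.
    if point_xyzs_buffer:
        flat = np.frombuffer(point_xyzs_buffer, dtype="<f4")
    else:
        flat = np.asarray(point_xyzs, dtype=np.float32)
    # Rows are [x, y, z] in meters, end-of-spin LiDAR frame.
    return flat.reshape(-1, 3)
```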
LiDAR sensor specification.
Currently supports specific LiDAR device models [PANDAR128, AT128].
TODO: Add full LiDAR parameterization for custom sensor configurations.
Linear correction parameters for certain camera models.
These parameters (c, d, e) are used in linear transformation equations
for additional distortion modeling.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | linear_c | double | Linear coefficient c. |
| 2 | linear_d | double | Linear coefficient d. |
| 3 | linear_e | double | Linear coefficient e. |
OpenCV fisheye camera model parameters.
This model is specifically designed for fisheye lenses with wide field of view.
Uses a different distortion model than the pinhole camera.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | principal_point_x | double | Principal point x-coordinate in pixels. |
| 2 | principal_point_y | double | Principal point y-coordinate in pixels. |
| 3 | focal_length_x | double | Focal length in x direction in pixels. |
| 4 | focal_length_y | double | Focal length in y direction in pixels. |
| 5 | radial_coeffs | repeated double | Radial distortion coefficients for the fisheye model [k1, k2, k3, k4]. |
| 6 | max_angle | double | Maximum field of view angle in radians. |
OpenCV pinhole camera model parameters.
This is the standard pinhole camera model with distortion as used in OpenCV.
Supports radial, tangential, and thin prism distortion modeling.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | principal_point_x | double | Principal point x-coordinate (cx) in pixels. |
| 2 | principal_point_y | double | Principal point y-coordinate (cy) in pixels. |
| 3 | focal_length_x | double | Focal length in x direction (fx) in pixels. |
| 4 | focal_length_y | double | Focal length in y direction (fy) in pixels. |
| 5 | radial_coeffs | repeated double | Radial distortion coefficients [k1, k2, k3, k4, k5, k6]. Typically only k1 and k2 are non-zero for most cameras. |
| 6 | tangential_coeffs | repeated double | Tangential distortion coefficients [p1, p2]; model decentering distortion from lens misalignment. |
| 7 | thin_prism_coeffs | repeated double | Thin prism distortion coefficients [s1, s2, s3, s4]; model higher-order distortion effects. |
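The projection these fields imply can be sketched with a hypothetical helper (not part of this API) that applies the standard OpenCV rational radial, tangential, and thin prism terms to a normalized point (x, y) = (X/Z, Y/Z):

```python
def distort_and_project(x, y, fx, fy, cx, cy,
                        k=(0.0,) * 6, p=(0.0, 0.0), s=(0.0,) * 4):
    # Standard OpenCV pinhole model: rational radial factor, then
    # tangential and thin prism terms, then pixel mapping.
    r2 = x * x + y * y
    r4, r6 = r2 * r2, r2 * r2 * r2
    radial = (1 + k[0]*r2 + k[1]*r4 + k[2]*r6) / (1 + k[3]*r2 + k[4]*r4 + k[5]*r6)
    xd = x * radial + 2*p[0]*x*y + p[1]*(r2 + 2*x*x) + s[0]*r2 + s[1]*r4
    yd = y * radial + p[0]*(r2 + 2*y*y) + 2*p[1]*x*y + s[2]*r2 + s[3]*r4
    return fx * xd + cx, fy * yd + cy
```

With all distortion coefficients zero this reduces to the plain pinhole projection u = fx·x + cx, v = fy·y + cy.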
Pair of poses representing position and orientation at frame start and end times.
Used to model motion during a frame exposure, particularly important for rolling shutter cameras.
TODO: Replace with common.Trajectory for more general multi-point trajectories.
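To make the rolling-shutter use concrete, here is a sketch of interpolating between the two PosePair endpoints. The layout of common.Pose is not shown in this document, so only the translation part is illustrated; a full implementation would also slerp the rotations, and the function name is hypothetical.

```python
import numpy as np

def interpolate_translation(t_start_us, t_end_us, pos_start, pos_end, t_us):
    # Blend factor within the frame interval, clamped to [0, 1].
    alpha = (t_us - t_start_us) / float(t_end_us - t_start_us)
    alpha = min(max(alpha, 0.0), 1.0)
    # Linear blend of the start and end translations (3-vectors).
    return (1.0 - alpha) * np.asarray(pos_start, dtype=float) \
        + alpha * np.asarray(pos_end, dtype=float)
```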
Request to render an RGB camera image from a reconstructed scene.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | scene_id | string | Identifier of the scene to render from. |
| 2 | resolution_h | uint32 | Image resolution height in pixels. TODO: consider merging with camera_intrinsics to avoid redundancy. |
| 3 | resolution_w | uint32 | Image resolution width in pixels. TODO: consider merging with camera_intrinsics to avoid redundancy. |
| 4 | camera_intrinsics | nre.grpc.protos.sensorsim.CameraSpec | Camera intrinsic parameters defining the camera model and distortion. |
| 5 | frame_start_us | fixed64 | Frame exposure start time in microseconds since epoch. |
| 6 | frame_end_us | fixed64 | Frame exposure end time in microseconds since epoch. |
| 7 | sensor_pose | nre.grpc.protos.sensorsim.PosePair | Camera sensor pose at frame start and end times. |
| 8 | dynamic_objects | repeated nre.grpc.protos.sensorsim.DynamicObject | List of dynamic objects to render in the scene with their motion. |
| 9 | image_format | nre.grpc.protos.sensorsim.ImageFormat | Desired output image format (PNG, JPEG, etc.). |
| 10 | image_quality | float | Image quality parameter (0.0 to 1.0) for lossy formats like JPEG. |
| 11 | insert_ego_mask | bool | If true, apply the ego vehicle mask to mask out the ego vehicle in the rendered image. |
| 12 | ego_mask_id | nre.grpc.protos.sensorsim.EgoMaskId | Ego mask identifier to use; required if insert_ego_mask is true. Must correspond to one of the masks returned by get_available_ego_masks. |
Response containing a rendered RGB image.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | image_bytes | bytes | Encoded image data in the requested format (PNG, JPEG, etc.). |
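Since image_bytes carries whatever encoding was requested, a client typically checks the magic bytes before handing the payload to a decoder such as Pillow. A small illustrative helper (not part of the API) covering the two formats AlpaSim currently supports:

```python
def sniff_image_format(image_bytes: bytes) -> str:
    # PNG files begin with an 8-byte signature; JPEG files with FF D8 FF.
    if image_bytes.startswith(b"\x89PNG\r\n\x1a\n"):
        return "PNG"
    if image_bytes.startswith(b"\xff\xd8\xff"):
        return "JPEG"
    return "UNDEFINED"
```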
Action to replace one asset with another in a scene.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | original_id | string | Track ID of the original asset to be replaced. |
| 2 | replacement_id | string | Asset ID of the replacement asset. |
| 3 | object_size | nre.grpc.protos.common.AABB | 3D bounding box dimensions for the replacement object. |
Request to restore model parameters for a scene to their original state.
This undoes any modifications made through edit_assets.
| # | Field | Type | Description |
|---|-------|------|-------------|
| 1 | scene_id | string | Identifier of the scene to restore. |
## Enums
Image encoding formats supported for rendered images.
| Name | Number | Description |
|------|--------|-------------|
| UNDEFINED | 0 | Undefined or unknown image format. |
| PNG | 1 | PNG (Portable Network Graphics) format; lossless compression. |
| JPEG | 2 | JPEG (Joint Photographic Experts Group) format; lossy compression. |
| JPEG2000 | 3 | JPEG 2000 format; improved lossy/lossless compression. Note: AlpaSim currently supports only PNG and JPEG. |
| RGB_UINT8_PLANAR | 4 | Raw RGB data with 8-bit unsigned integers in planar format. |
| AVC | 5 | AVC (H.264) video codec format. |
| AV1 | 6 | AV1 video codec format. |
Supported LiDAR device types with specific scan patterns and characteristics.
| Name | Number | Description |
|------|--------|-------------|
| PANDAR128 | 0 | Hesai Pandar128 LiDAR sensor. |
| AT128 | 1 | Hesai AT128 LiDAR sensor. |
Direction of polynomial mapping between pixel distances and angles.
| Name | Number | Description |
|------|--------|-------------|
| UNKNOWN | 0 | Unknown or unspecified polynomial type. |
| PIXELDIST_TO_ANGLE | 1 | Polynomial mapping pixel distances to angles (backward polynomial). |
| ANGLE_TO_PIXELDIST | 2 | Polynomial mapping angles to pixel distances (forward polynomial). |
Direction of the reference polynomial mapping.
| Name | Number | Description |
|------|--------|-------------|
| FORWARD | 0 | Forward mapping: undistorted to distorted coordinates. |
| BACKWARD | 1 | Backward mapping: distorted to undistorted coordinates. |
Camera shutter type affecting how image sensor rows/columns are exposed.
Rolling shutters expose different parts of the image at different times,
which can cause motion artifacts. Global shutters expose the entire image simultaneously.
| Name | Number | Description |
|------|--------|-------------|
| UNKNOWN | 0 | Unknown or unspecified shutter type. |
| ROLLING_TOP_TO_BOTTOM | 1 | Rolling shutter exposing rows from top to bottom of the image sensor. |
| ROLLING_LEFT_TO_RIGHT | 2 | Rolling shutter exposing columns from left to right of the image sensor. |
| ROLLING_BOTTOM_TO_TOP | 3 | Rolling shutter exposing rows from bottom to top of the image sensor. |
| ROLLING_RIGHT_TO_LEFT | 4 | Rolling shutter exposing columns from right to left of the image sensor. |
| GLOBAL | 5 | Instantaneous global shutter with no rolling shutter effects. |
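For example, under ROLLING_TOP_TO_BOTTOM, and assuming readout progresses uniformly across the frame interval (an assumption; the service does not specify the readout profile), each sensor row's exposure timestamp can be estimated as:

```python
def row_exposure_time_us(row, height, frame_start_us, frame_end_us):
    # Row 0 exposes at frame start, the last row at frame end,
    # with a linear ramp in between.
    frac = row / float(height - 1)
    return frame_start_us + frac * (frame_end_us - frame_start_us)
```

This per-row timestamp is what the PosePair endpoints are interpolated against when rendering rolling-shutter imagery.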