Troubleshooting#
If you run into trouble while using DeepStream, consider the following solutions. If you don't find answers below, post your questions on the DeepStream developer forum.
You are migrating from DeepStream 6.0 to DeepStream 7.1#
Solution:
You must clean up the DeepStream 6.0 libraries and binaries. Use one of these methods to clean up:
For dGPU: For example, to remove DeepStream 6.0 (a command sketch follows below):
Open the uninstall.sh file in /opt/nvidia/deepstream/deepstream/
Set PREV_DS_VER as 6.0
Run the script as sudo: ./uninstall.sh
For Jetson: Flash the target device with the latest release of JetPack.
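For dGPU, the command sequence looks roughly like this (a sketch; it assumes uninstall.sh defines PREV_DS_VER as a shell variable you can edit, either by hand or as below):
$ cd /opt/nvidia/deepstream/deepstream/
$ sudo sed -i 's/^PREV_DS_VER=.*/PREV_DS_VER=6.0/' uninstall.sh   # or edit the file manually
$ sudo ./uninstall.sh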
Inference#
Application fails to run when the neural network is changed#
Solution:
Make sure that the network parameters are updated for the corresponding [GIE] group in the configuration file (e.g., source30_720p_dec_infer-resnet_tiled_display_int8.txt). Also make sure that the Gst-nvinfer plugin's configuration file is updated accordingly.
When the model is changed, make sure that the application is not using old engine files.
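For reference, a minimal sketch of the relevant [primary-gie] entries; the file names here are illustrative, not from the SDK samples:
[primary-gie]
enable=1
# Point both entries at the new model's files; delete the stale .engine file
# so it is regenerated on the next run.
model-engine-file=model_b4_gpu0_int8.engine
config-file=config_infer_primary.txt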
Performance#
The DeepStream application is running slowly (Jetson only)#
Solution:
Ensure that Jetson clocks are set high. Run these commands to set Jetson clocks high:
$ sudo nvpmodel -m 0   # for MAX perf and power
$ sudo jetson_clocks
The DeepStream application is running slowly#
Solution 1:
One of the plugins in the pipeline may be running slowly. You can measure the latency of each plugin in the pipeline to determine if one of them is slow.
To enable frame latency measurement, run this command on the console:
$ export NVDS_ENABLE_LATENCY_MEASUREMENT=1
To enable latency for all plugins, run this command on the console:
$ export NVDS_ENABLE_COMPONENT_LATENCY_MEASUREMENT=1
Solution 2: (dGPU only)
Ensure that your GPU card is in the PCI slot with the highest bus width.
Solution 3:
In the configuration file's [streammux] group, set batched-push-timeout to 1/max_fps (in microseconds).
Solution 4:
In the configuration file's [streammux] group, set width and height to the stream's resolution.
Solution 5:
For RTSP streaming input, in the configuration file's [streammux] group, set live-source=1. Also make sure that all [sink#] groups have the sync property set to 0. A combined sketch for Solutions 3-5 follows below.
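A combined sketch of Solutions 3-5, assuming a 30-fps 1080p RTSP source (values are illustrative):
[streammux]
live-source=1
width=1920
height=1080
# ~1/30 s expressed in microseconds
batched-push-timeout=33333

[sink0]
sync=0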
Solution 6:
If secondary inferencing is enabled, try increasing batch-size in the configuration file's [secondary-gie#] group in case the number of objects to be inferred is greater than the batch-size setting.
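For example (the group index and value are illustrative):
[secondary-gie0]
# at least the typical number of objects to be inferred per batch
batch-size=16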
Solution 7:
On Jetson, use Gst-nvdrmvideosink instead of Gst-nv3dsink, as nv3dsink requires GPU utilization.
Solution 8:
If the GPU is the performance bottleneck, try increasing the interval at which the primary detector infers on input frames. You can do this by modifying the interval property of the [primary-gie] group in the application configuration, or the interval property in the Gst-nvinfer configuration file.
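For example, in the application configuration (the value is illustrative; the tracker covers the skipped frames):
[primary-gie]
# run inference on every 2nd frame
interval=1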
Solution 9:
If the elements in the pipeline are getting starved for buffers (check whether CPU/GPU utilization is low), increase the number of buffers allocated by the decoder by setting the num-extra-surfaces property of the [source#] group in the application or the num-extra-surfaces property of the Gst-nvv4l2decoder element.
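For example, in the application configuration (the value is illustrative):
[source0]
num-extra-surfaces=5
or directly on the element in a hand-built pipeline:
... ! nvv4l2decoder num-extra-surfaces=5 ! ...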
Solution 10:
If you are running the application inside docker or on the console and it delivers low FPS, set qos=0 in the configuration file's [sink0] group. The issue is caused by initial load: with qos left at its default value of 1 in the [sink0] group, decodebin starts dropping frames.
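For example:
[sink0]
qos=0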
Solution 11:
For RTSP streaming input, if the input has high jitter, the GStreamer rtpjitterbuffer element might drop packets that arrive late. Increase the latency property of rtspsrc; for deepstream-app, set latency in the [source*] group. Alternatively, if using an RTSP type source (type=4) with deepstream-app, turn off drop-on-latency in deepstream_source_bin.c. These steps may add cumulative delay in frames reaching the renderer, and memory may accumulate in the rtpjitterbuffer if the pipeline is not fast enough.
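A sketch of the relevant [source#] entries; the latency value (in milliseconds) is illustrative:
[source0]
type=4
latency=2000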
Solution 12:
On Jetson, set scaling-compute-hw=1 in the Gst-nvinfer configuration file if GPU usage is not 100%.
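For example, in the [property] group of the Gst-nvinfer configuration file:
[property]
# 1 selects the GPU for pre-processing scaling
scaling-compute-hw=1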
Solution 13:
On dGPU, set the cudadec-memtype=0 property on the Gst-nvv4l2decoder plugin to select device memory output.
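An illustrative decode pipeline (the input file name is a placeholder):
$ gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder cudadec-memtype=0 ! fakesink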
Triton#
Errors occur when deepstream-app fails to load plugin Gst-nvinferserver#
For example: (deepstream-app:16632): GStreamer-WARNING **: 13:13:31.201: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/deepstream/libnvdsgst_inferserver.so':
libtrtserver.so: cannot open shared object file: No such file or directory.
This is a harmless warning indicating that DeepStream's nvinferserver plugin cannot be used since the Triton Inference Server is not installed.
Solution 1:
Ignore this message if you do not need Triton support. Otherwise, see Solutions 2 and 3.
Solution 2:
Pull the deepstream-triton docker image and start the container. Retry deepstream-app to launch the Triton models.
Solution 3:
For dGPU: Build the Triton server library from source (triton-inference-server/server) and fix the dynamic link problem manually.
For Jetson: Refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/README for steps to install Triton on Jetson.
TensorFlow models are running into OOM (Out-Of-Memory) problem#
This problem may manifest as other errors like CUDA_ERROR_OUT_OF_MEMORY, a core dump, or the application getting killed once GPU memory is set up by the TensorFlow component.
Solution:
Tune the parameter tf_gpu_memory_fraction in the config file (e.g., config_infer_primary_detector_ssd_inception_v2_coco_2018_01_28.txt) to a proper value. For more details, see samples/configs/deepstream-app-triton/README.
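The field lives in the nvinferserver config (protobuf text format); the exact nesting varies between DeepStream versions, so treat this fragment as a sketch:
# fraction of GPU memory TensorFlow may reserve; the value is illustrative
tf_gpu_memory_fraction: 0.35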
Troubleshooting in Tracker Setup and Parameter Tuning#
Flickering Bbox#
If the PGIE detection interval is set to zero (i.e., interval=0 in the ds-app config file), bbox flickering may occur in the video output if the value for minTrackerConfidence is set too low. Try increasing the value of this parameter to mitigate the issue.
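A sketch, assuming the NvDCF tracker config layout (the value is an illustrative starting point):
TargetManagement:
  # raise this if bboxes flicker with interval=0
  minTrackerConfidence: 0.7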
If the PGIE detection interval is set to a non-zero value (i.e., interval > 0 in the ds-app config file), the tracker outputs are expectedly not reported on the uninferenced frames, although all the targets are being tracked in the background. Thus, it is expected behavior that the real-time video display from OSD has bbox flickering.
To mitigate this issue, users can first enable the past-frame data configuration to retrieve the missed outputs and then add a custom module to combine the real-time metadata with the past-frame data. By doing so, users can combine the data in a proper order and can opt to visualize the combined data on the display with no flickering bbox issue.
Frequent tracking ID changes although no nearby objects#
This may occur because the tracker cannot detect the target from the correlation response map. It is recommended to start with a lower minimum qualification for the target: first, set minTrackerConfidence to a relatively low value like 0.5. Also, if the state estimator is enabled, the prediction may not be accurate enough. Tune the state estimator parameters based on the expected motion dynamics, or disable the state estimator during debugging.
Frequent tracking ID switches to the nearby objects#
Make the data association policy stricter by increasing the minimum qualifications, such as the following (see the sketch after this list):
minMatchingScore4SizeSimilarity
minMatchingScore4Iou
minMatchingScore4VisualSimilarity
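A sketch, assuming the NvDCF tracker config layout (the values are illustrative starting points to tighten matching):
DataAssociator:
  minMatchingScore4SizeSimilarity: 0.6
  minMatchingScore4Iou: 0.2
  minMatchingScore4VisualSimilarity: 0.7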
Setup Re-ID model#
Error "!![ERROR] TAO model file does not exist" when using config_tracker_NvDCF_accuracy.yml or config_tracker_NvDeepSORT.yml:
tltEncodedModel: "/opt/nvidia/deepstream/deepstream/samples/models/Tracker/resnet50_market1501.etlt" does not exist.
Solution:
You need a Re-ID model to use these trackers. Follow the steps in sources/tracker_ReID/README to set up the Re-ID model.
Error while running ONNX / Explicit batch dimension networks#
After upgrading TensorRT on Jetson, running with ONNX / Explicit batch dimension networks fails with the error “Network has dynamic or shape inputs, but no optimization profile has been defined.”
Due to an ABI break in TensorRT 7.1.3.0 (part of JetPack 4.4 through JetPack 4.6), libnvds_infer.so needs to be recompiled from the sources provided in the SDK when moving to newer TensorRT versions. The sources, along with compilation instructions, can be found in /opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer.
Warning message gstnvtracker: Unable to acquire a user meta buffer#
The tracker uses a buffer pool for miscellaneous data memory management, whose size can be set with user-meta-pool-size. When the latency for downstream plugins to release the buffers is too long, the buffer pool may be empty, so the tracker will skip reporting the miscellaneous data for the next batch and print this message. You can increase the pool size from the default of 32 to a larger value like 64, as in the fragment below.
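A pipeline fragment for reference (the other nvtracker properties are placeholders):
... ! nvtracker user-meta-pool-size=64 ll-lib-file=<tracker-lib> ll-config-file=<tracker-config> ! ...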
Graph Composer Troubleshooting#
My component is not visible in the composer even after registering the extension with registry#
Run "registry extn info -n <extn-name>" to check that the component is part of the extension.
If not, check that a call to GXF_EXT_FACTORY_ADD() corresponding to the component has been added within the GXF_EXT_FACTORY code block of the extension.
Run "registry comp info -t <comp-type>" to check whether the component is being detected as an abstract type.
If yes, note that components of abstract types are not listed, since they cannot be instantiated. Check the solution to the next question.
My component is getting registered as an abstract type.#
This usually happens because pure virtual methods inherited from the base class hierarchy have not been implemented. A quick way to find which pure virtual methods have not been implemented is to try to declare an object of the component class. Adding “MyComponentType comp;” anywhere after the class definition should throw compilation errors pointing to the missing implementations.
When executing a graph, the execution ends immediately with the warning “No system specified. Nothing to do”#
Check that NvDsScheduler component is part of the graph.
Miscellaneous#
“NvDsBatchMeta not found for input buffer” error while running DeepStream pipeline#
Solution:
The Gst-nvstreammux plugin is not in the pipeline. Starting with DeepStream 4.0, Gst-nvstreammux is a required plugin. This is an example pipeline:
Gst nvv4l2decoder --> Gst nvstreammux --> Gst nvinfer --> Gst nvtracker --> Gst nvmultistreamtiler --> Gst nvvideoconvert --> Gst nvosd --> Gst nveglglessink
The DeepStream reference application fails to launch, or any plugin fails to load#
Solution:
Try clearing the GStreamer cache by running the command:
$ rm -rf ${HOME}/.cache/gstreamer-1.0
Also run this command if there is an issue with loading any of the plugins. Warnings or errors for failing plugins are displayed on the terminal.
$ gst-inspect-1.0
Then run this command to find missing dependencies:
$ ldd <plugin>.so
where <plugin> is the name of the plugin that failed to load.
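For example (the plugin path is illustrative; DeepStream GStreamer plugins are typically installed under /opt/nvidia/deepstream/deepstream/lib/gst-plugins/):
$ ldd /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_inferserver.so | grep "not found"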
Errors occur when deepstream-app is run with a number of streams greater than 100#
For example: (deepstream-app:15751): GStreamer-CRITICAL **: 19:25:29.810: gst_poll_write_control: assertion 'set != NULL' failed.
Solution:
Run this command on the console:
$ ulimit -Sn 4096
Then run the deepstream-app
again.
After removing all the sources from the pipeline crash is seen if muxer and tiler are present in the pipeline#
This happens when the muxer generates an empty batched buffer to clear the tiler output. To address this issue, install a probe on the sink pad of the tiler and drop the buffer if batch_meta->num_frames_in_batch == 0; a code snippet for reference is below:
/* Probe callback: drop batched buffers that contain no frames. */
static GstPadProbeReturn
drop_empty_buffer (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  NvDsBatchMeta *batch_meta =
      gst_buffer_get_nvds_batch_meta (GST_BUFFER (info->data));

  if (batch_meta && batch_meta->num_frames_in_batch == 0)
    return GST_PAD_PROBE_DROP;
  return GST_PAD_PROBE_OK;
}

/* Attach the probe to the tiler's sink pad. */
GstPad *sinkpad = gst_element_get_static_pad (tiler, "sink");
gst_pad_add_probe (sinkpad, GST_PAD_PROBE_TYPE_BUFFER, drop_empty_buffer,
    NULL, NULL);
gst_object_unref (sinkpad);
Some RGB video format pipelines that worked before DeepStream 6.1 on Jetson don't work now#
The below pipeline used to work in releases before DeepStream 6.1 and also works on dGPU, but fails on Jetson:
gst-launch-1.0 -v videotestsrc ! 'video/x-raw,format=RGB' ! videoconvert ! \
nvvideoconvert nvbuf-memory-type=0 name=converter1 ! 'video/x-raw(memory:NVMM),format=(string)NV12' ! \
nvvideoconvert ! 'video/x-raw' ! nveglglessink
Setting pipeline to PAUSED ...
Using winsys: x11
Pipeline is PREROLLING ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0.GstPad:src: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:src: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/Gstnvvideoconvert:converter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/GstEglGlesSink:eglglessink0.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:converter1.GstPad:sink: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:sink: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
0:00:00.282672495 716265 0xaaab033c7400 ERROR nvvideoconvert gstnvvideoconvert.c:3750:gst_nvvideoconvert_transform: buffer transform failed
ERROR: from element /GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0:
streaming stopped, reason error (-5)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
Support for RGB/BGR video formats was added to nvvideoconvert in DeepStream 6.1. Earlier, RGB/BGR input wasn't accepted by nvvideoconvert, so in the above pipeline the (OSS) videoconvert element converted RGB to a format that nvvideoconvert could accept, and the pipeline worked. Since DeepStream 6.1, the (OSS) videoconvert plugin goes into passthrough mode, and the default compute hardware on Jetson is VIC, which doesn't support RGB input.
To allow RGB input and output to work with nvvideoconvert, set the compute-hw=GPU property on the plugin as given below.
gst-launch-1.0 -v videotestsrc ! 'video/x-raw,format=RGB' ! videoconvert ! \
nvvideoconvert compute-hw=GPU nvbuf-memory-type=0 name=converter1 ! 'video/x-raw(memory:NVMM),format=(string)NV12' ! \
nvvideoconvert ! 'video/x-raw' ! nveglglessink
Setting pipeline to PAUSED ...
Using winsys: x11
Pipeline is PREROLLING ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0.GstPad:src: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:src: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/Gstnvvideoconvert:converter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert1.GstPad:src: caps = video/x-raw, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2.GstPad:src: caps = video/x-raw, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/GstEglGlesSink:eglglessink0.GstPad:sink: caps = video/x-raw, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/GstCapsFilter:capsfilter2.GstPad:sink: caps = video/x-raw, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)NV12, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:converter1.GstPad:sink: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstVideoConvert:videoconvert0.GstPad:sink: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, format=(string)RGB, width=(int)320, height=(int)240, framerate=(fraction)30/1, multiview-mode=(string)mono, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
UYVP video format pipeline doesn’t work on Jetson#
The below pipeline for UYVP video format doesn’t work for Jetson:
gst-launch-1.0 videotestsrc ! 'video/x-raw,format=UYVP,width=1920,height=1080,depth=10,framerate=(fraction)50/1' ! nvvideoconvert ! \
'video/x-raw(memory:NVMM),width=960,height=540,format=UYVP' ! nvvideoconvert ! nv3dsink -v
Setting pipeline to PAUSED ...
Using winsys: x11
Pipeline is PREROLLING ...;
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0.GstPad:src: caps = video/x-raw, format=(string)UYVP, width=(int)1920, height=(int)1080, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, format=(string)UYVP, width=(int)1920, height=(int)1080, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)UYVP, block-linear=(boolean)false
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)UYVP, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)I420, block-linear=(boolean)false
/GstPipeline:pipeline0/GstNv3dSink:nv3dsink0.GstPad:src: caps = video/x-raw, width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)RGBA, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)UYVP, block-linear=(boolean)false
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)UYVP, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert0.GstPad:sink: caps = video/x-raw, format=(string)UYVP, width=(int)1920, height=(int)1080, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps = video/x-raw, format=(string)UYVP, width=(int)1920, height=(int)1080, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
nvbufsurface: invalid colorFormat 69
nvbufsurface: Error in allocating buffer
Error(-1) in buffer allocation
** (gst-launch-1.0:723645): CRITICAL **: 17:18:46.766: gst_nvds_buffer_pool_alloc_buffer: assertion 'mem' failed
ERROR: from element /GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert0: failed to activate bufferpool
Additional debug info:
gstbasetransform.c(1678): default_prepare_output_buffer (): /GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert0:
failed to activate bufferpool
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
The default memory allocator (nvbuf-mem-surface-array) on Jetson doesn't support allocation for UYVP, so allocation is currently only supported through nvbuf-mem-cuda-device on Jetson. Since VIC doesn't support transformations on CUDA memory, the GPU must be used for the required transformation.
The above pipeline can therefore be modified to run on Jetson:
gst-launch-1.0 videotestsrc ! 'video/x-raw,format=UYVP,width=1920,height=1080,depth=10,framerate=(fraction)50/1' ! \
nvvideoconvert nvbuf-memory-type=nvbuf-mem-cuda-device compute-hw=GPU ! \
'video/x-raw(memory:NVMM),width=960,height=540,format=UYVP' ! nvvideoconvert compute-hw=GPU ! nv3dsink -v
Setting pipeline to PAUSED ...
Using winsys: x11
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/GstVideoTestSrc:videotestsrc0.GstPad:src: caps = video/x-raw, format=(string)UYVP, width=(int)1920, height=(int)1080, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps = video/x-raw, format=(string)UYVP, width=(int)1920, height=(int)1080, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = video/x-raw, format=(string)UYVP, width=(int)1920, height=(int)1080, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:src: caps = video/x-raw, format=(string)UYVP, width=(int)1920, height=(int)1080, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:src: caps = video/x-raw, format=(string)UYVP, width=(int)1920, height=(int)1080, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert0.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)UYVP, block-linear=(boolean)false
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)UYVP, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)I420, block-linear=(boolean)false
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)I420, block-linear=(boolean)false
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:src: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)I420, block-linear=(boolean)false
/GstPipeline:pipeline0/GstNv3dSink:nv3dsink0.GstPad:sink: caps = video/x-raw, width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)RGBA, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)UYVP, block-linear=(boolean)false
/GstPipeline:pipeline0/GstCapsFilter:capsfilter1.GstPad:sink: caps = video/x-raw(memory:NVMM), width=(int)960, height=(int)540, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive, format=(string)UYVP, block-linear=(boolean)false
/GstPipeline:pipeline0/Gstnvvideoconvert:nvvideoconvert0.GstPad:sink: caps = video/x-raw, format=(string)UYVP, width=(int)1920, height=(int)1080, framerate=(fraction)50/1, multiview-mode=(string)mono, depth=(int)10, pixel-aspect-ratio=(fraction)1/1, interlace-mode=(string)progressive
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Note
The original pipeline works as-is on dGPU, but you need to modify it to remove the nvegltransform element.
Memory usage keeps increasing when the source is a long-duration containerized file (e.g., mp4, mkv)#
A memory accumulation bug is present in GStreamer's Base Parse class, which potentially affects all codec parsers provided by GStreamer. This bug is seen only with long-duration seekable streams (mostly containerized files, e.g., mp4). It does not affect live sources like RTSP. An issue has been filed on GStreamer's GitLab project: https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/468
Solution:
Apply the following temporary fix to the GStreamer sources and build the library.
Check the exact GStreamer version installed on the system.
$ gst-inspect-1.0 --version
gst-inspect-1.0 version 1.16.2
GStreamer 1.16.2
https://launchpad.net/distros/ubuntu/+source/gstreamer1.0
Clone the GStreamer repo and check out the tag corresponding to the installed version.
$ git clone git@gitlab.freedesktop.org:gstreamer/gstreamer.git
$ cd gstreamer
$ git checkout 1.16.2
Make sure the build dependencies are installed.
$ sudo apt install libbison-dev build-essential flex debhelper
Run autogen.sh and the configure script.
$ ./autogen.sh --noconfigure
$ ./configure --prefix=$(pwd)/out   # don't overwrite the system libs
Save the following patch to a file.
diff --git a/libs/gst/base/gstbaseparse.c b/libs/gst/base/gstbaseparse.c
index 41adf130e..ffc662a45 100644
--- a/libs/gst/base/gstbaseparse.c
+++ b/libs/gst/base/gstbaseparse.c
@@ -1906,6 +1906,9 @@ gst_base_parse_add_index_entry (GstBaseParse * parse, guint64 offset,
   GST_LOG_OBJECT (parse, "Adding key=%d index entry %" GST_TIME_FORMAT
       " @ offset 0x%08" G_GINT64_MODIFIER "x", key, GST_TIME_ARGS (ts), offset);

+  if (!key)
+    goto exit;
+
   if (G_LIKELY (!force)) {

     if (!parse->priv->upstream_seekable) {
Apply the patch.
$ cat patch.txt | patch -p1
Build the sources.
$ make -j$(nproc) && make install
Back up the distribution-provided library and copy the newly built library. Adjust the library name for the version. For Jetson, replace x86_64-linux-gnu with aarch64-linux-gnu.
$ sudo cp /usr/lib/x86_64-linux-gnu/libgstbase-1.0.so.0.1602.0 ${HOME}/libgstbase-1.0.so.0.1602.0.backup
$ sudo cp out/lib/libgstbase-1.0.so.0.1602.0 /usr/lib/x86_64-linux-gnu/
Stale frames observed on RTSP output#
If stale frames are observed on the RTSP output, update the rtsp-port and udp-port parameters (at the RTSP sink) inside the config file used to run the DeepStream application, as shown below.
Update rtsp-port to another port number, e.g., rtsp-port=8660
Update udp-port to another port number, e.g., udp-port=5500
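A sketch of the relevant RTSP sink entries (the group index is illustrative):
[sink1]
# type=4 selects the RTSP streaming sink in deepstream-app
type=4
rtsp-port=8660
udp-port=5500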
Seeing glitches / choppy video due to potential packet drops over remote viewing of RTSP output (using VLC)#
VLC uses UDP by default to stream the RTSP URL. UDP is lossy and can cause quality issues in the rendered video.
When viewing DeepStream RTSP sink output remotely, force usage of TCP to avoid unnecessary packet drops.
Example command to force TCP when using VLC:
$ vlc --rtsp-tcp rtsp://path/to/stream
DeepStream plugins failing to load without DISPLAY variable set when launching DS dockers#
Solution:
The error "No EGL Display; nvbufsurftransform: Could not get EGL display connection" is resolved by meeting either of the requirements below. They must be met before starting the docker using the docker run command.
When you want to use a display window:
Set an appropriate value for the DISPLAY variable, and
Execute the command xhost + from the host terminal, to allow the docker to launch a display window.
Example:
$ export DISPLAY=:0
$ xhost +
When you do not want a display window:
Unset the DISPLAY variable, or
Launch the docker without exporting the DISPLAY variable to its environment.
Nvidia driver installation issues#
Error: "An NVIDIA kernel module 'nvidia-drm' appears to already be loaded in your kernel."
Solution:
This error suggests that another version of the NVIDIA driver is already loaded. Uninstall the previous driver version; the uninstallation method differs based on the installation method (runfile or debian).
On Jetson, observing error: gstnvarguscamerasrc.cpp, execute:751 No cameras available#
To access a CSI camera, use the Jetson-IO tool to enable the CSI camera sensor.
For example, on AGX, navigate to /opt/nvidia/jetson-io/ and:
Run sudo python jetson-io.py and follow the wizard
Configure Jetson AGX Xavier CSI Connector
Configure for compatible hardware
Choose the CSI camera module, e.g. "Jetson Camera E3333 module"
Save pin changes
Save and reboot to reconfigure pins
For more details, refer to https://docs.nvidia.com/jetson/l4t/#page/Tegra%20Linux%20Driver%20Package%20Development%20Guide/hw_setup_jetson_io.html and sections 2.3 and 3.0 of https://developer.nvidia.com/sites/default/files/akamai/embedded/jetson-linux-release-notes-r341-dp.pdf
On Jetson, observing error: "nvbufsurftransform_copy.cpp: Failed in mem copy" or "cuGraphicsEGLRegisterImage failed : 700"#
To avoid this error, set the copy-hw property of nvvideoconvert to VIC while using nvbuf-mem-surface-array as the memory type.
Example pipeline setting the copy-hw property of nvvideoconvert to VIC:
$ gst-launch-1.0 filesrc location=test_I420.yuv ! videoparse width=1280 height=720 format=2 framerate=30/1 ! nvvideoconvert copy-hw=2 ! 'video/x-raw(memory:NVMM), format=(string)I420' ! nv3dsink sync=0 -e
Refer to the deepstream-appsrc-test app, available in the DeepStream SDK, for usage of the copy-hw property from an application.
Note
For more FAQs and troubleshooting, see https://forums.developer.nvidia.com/t/deepstream-sdk-faq/