AV Sync in DeepStream#
AV synchronization support is enabled from the DeepStream 6.0 release onwards. It is currently in the alpha development stage. A sample application, deepstream-avsync-app, is provided at app/sample_apps/deepstream-avsync for reference. This document provides the sample GStreamer pipelines.
Setup for RTMP/RTSP Input streams for testing#
RTMP Server Setup#
$ apt-get update
$ apt-get install nginx -y
$ apt-get install libnginx-mod-rtmp -y
$ vim /etc/nginx/nginx.conf   # add the content below
rtmp {
    server {
        listen 1935;
        chunk_size 4096;
        application live {
            live on;
            record off;
        }
    }
}
$ service nginx restart
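As a quick sanity check (assuming the ss utility from iproute2 is available), verify that the edited configuration parses cleanly and that the RTMP module is listening on port 1935:
$ nginx -t
$ ss -ltn | grep 1935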
Command to simulate 2 RTMP streams using ffmpeg#
$ sudo ffmpeg -re -i <file_name1.mp4> -vcodec copy -c:a aac -b:a 160k -ar 48000 -strict -2 -f flv rtmp://<host ip address:port>/live/test1
$ sudo ffmpeg -re -i <file_name2.mp4> -vcodec copy -c:a aac -b:a 160k -ar 48000 -strict -2 -f flv rtmp://<host ip address:port>/live/test2
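To confirm that the server is relaying the simulated streams, you can play one of them back (assuming ffplay is installed alongside ffmpeg):
$ ffplay rtmp://<host ip address:port>/live/test1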
RTSP Server Setup#
$ cvlc <file_name.mp4> :sout=#gather:rtp{sdp=rtsp://:<port>/file-stream} :network-caching=1500 :sout-all :sout-keep
The RTSP stream generated by the above command can be accessed using the URL rtsp://<host_ip_address:port>/file-stream.
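Before using the stream in the pipelines below, you can verify it with a quick playback test (a generic sketch, not part of the reference pipelines; either command works):
$ ffplay rtsp://<host_ip_address:port>/file-stream
$ gst-launch-1.0 playbin uri=rtsp://<host_ip_address:port>/file-stream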
AVSync Reference Pipelines#
Note
The rtmpsrc element is not a live source. You need to add a fakesrc with is-live=true and connect it to the audiomixer. This ensures the pipeline does not break even if data from one of the sources is unavailable.
A max-latency of 250 ms is set on nvstreammux for RTMP input sources, because the maximum latency required for buffers to travel from rtmpsrc to nvstreammux is 250 ms. You can tune this value as per your requirement. The same value is set on the audiomixer in the audio path for the same reason.
RTSP output will be available at rtsp://<host_ip_address>:8554/ds-test.
You may observe that AV is out of sync when the file output is played with the VLC player. In that case, try ffmpeg or a gst-launch pipeline for playback (see the playback example after this note); this could be a VLC player specific issue.
Make sure that all audio streams have the same encoding configuration (e.g. sampling rate). If they differ, the audiomixer crashes. You can add audioconvert and audioresample before the audiomixer for each source so that all inputs to the audiomixer have the same format and sampling rate.
RTMP output on YouTube™ live takes time to appear (~30-40 seconds). This could be due to the YouTube™ player's buffering mechanism.
In the case of RTMP output, make sure that the YouTube™ live page is refreshed and ready to accept incoming data before running the pipeline.
In the case of packet drops, the RTSP output may look corrupted. If that happens, reduce the bitrate and try again.
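To check AV sync of a file output without VLC, a minimal playback sketch (out.flv is the file produced by the FILE_OUT pipelines below; adjust the path as needed):
$ ffplay out.flv
$ gst-launch-1.0 playbin uri=file://$PWD/out.flv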
Pipelines with existing nvstreammux component#
RTMP_IN->RTMP_OUT#
input1=rtmp://<host ip address:port>/live/test1
input2=rtmp://<host ip address:port>/live/test2
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 max-latency=250000000 batched-push-timeout=33333 width=1920 height=1080 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer latency=250000000 ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
FILE_IN->RTSP_OUT#
input1=file:///AV_I_frames_1.mp4
input2=file:///AV_I_frames_2.mp4
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 batched-push-timeout=33333 width=1920 height=1080 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvrtspoutsinkbin name=r uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! r. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
FILE_IN->RTMP_OUT#
input1=file:///AV_I_frames_1.mp4
input2=file:///AV_I_frames_2.mp4
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 batched-push-timeout=33333 width=1920 height=1080 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
RTMP_IN->FILE_OUT#
input1=rtmp://<host ip address:port>/live/test1
input2=rtmp://<host ip address:port>/live/test2
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 batched-push-timeout=33333 width=1920 height=1080 sync-inputs=1 max-latency=250000000 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! filesink location=out.flv async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer latency=250000000 name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
RTSP_IN->FILE_OUT#
input1=<rtsp url>
input2=<rtsp url>
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 batched-push-timeout=33333 width=1920 height=1080 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! filesink location=out.flv async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. -e
FILE_IN->FILE_OUT#
input1=file:///AV_I_frames_1.mp4
input2=file:///AV_I_frames_2.mp4
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 batched-push-timeout=33333 width=1920 height=1080 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! filesink location=out.flv async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
RTSP_IN->RTSP_OUT#
input1=<rtsp url>
input2=<rtsp url>
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 batched-push-timeout=33333 width=1920 height=1080 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvrtspoutsinkbin name=r uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! r. demux2. ! queue ! audioconvert ! mixer. -e
RTMP_IN->RTSP_OUT#
input1=rtmp://<host ip address:port>/live/test1
input2=rtmp://<host ip address:port>/live/test2
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 batched-push-timeout=33333 width=1920 height=1080 sync-inputs=1 max-latency=250000000 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvrtspoutsinkbin name=r uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer latency=250000000 name=mixer ! queue ! r. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
Reference AVSync + ASR (Automatic Speech Recognition) Pipelines with existing nvstreammux#
Note
These pipelines demonstrate how to add the ASR plugin to an AV sync pipeline.
To visualize the text output from ASR overlaid on video frames, refer to the deepstream-avsync-app application.
RTMP_IN->RTMP_OUT#
input1=rtmp://<host ip address:port>/live/test1
input2=rtmp://<host ip address:port>/live/test2
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux max-latency=250000000 batched-push-timeout=33333 width=1920 height=1080 batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvdsosd ! queue ! nvvideoconvert ! queue ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! tee name=t1 t1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer latency=250000000 ! queue ! avenc_aac ! aacparse ! queue ! mux. t1. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 demux2. ! queue ! tee name=t2 t2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. t2. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 -e
Note
Make sure the ASR service is running (refer to the README of deepstream-avsync-app for detailed information). To ensure the gRPC libraries are accessible, set LD_LIBRARY_PATH using $ source ~/.profile. Update the correct path of riva_asr_grpc_jasper_conf.yml in the above command before running it.
RTSP_IN->RTMP_OUT#
input1=<rtsp url>
input2=<rtsp_url>
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batched-push-timeout=33333 width=1920 height=1080 batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=flvmux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! tee name=t1 t1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! flvmux. t1. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 qos=0 demux2. ! queue ! tee name=t2 t2. ! queue ! audioconvert ! mixer. t2. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 qos=0 -e
Note
Make sure the ASR service is running (refer to the README of deepstream-avsync-app for detailed information). To ensure the gRPC libraries are accessible, set LD_LIBRARY_PATH using $ source ~/.profile. Update the correct path of riva_asr_grpc_jasper_conf.yml in the above command before running it.
FILE_IN->RTMP_OUT#
input1=file:///AV_I_frames_1.mp4
input2=file:///AV_I_frames_2.mp4
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 batched-push-timeout=33333 width=1920 height=1080 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvdsosd ! queue ! nvvideoconvert ! queue ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! tee name=t1 t1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. t1. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 demux2. ! queue ! tee name=t2 t2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. t2. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 -e
Note
Make sure the ASR service is running (refer to the README of deepstream-avsync-app for detailed information). To ensure the gRPC libraries are accessible, set LD_LIBRARY_PATH using $ source ~/.profile. Update the correct path of riva_asr_grpc_jasper_conf.yml in the above command before running it.
Pipelines with New nvstreammux component#
You can enable the new nvstreammux (Beta quality) by exporting USE_NEW_NVSTREAMMUX=yes. For more information, see the "Gst-nvstreammux New (Beta)" section in the NVIDIA DeepStream SDK Developer Guide 6.1 Release.
Note
The existing nvstreammux functionality will be deprecated in the future.
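The pipelines below set the variable inline for each command; equivalently, you can export it once for the shell session so that every subsequent gst-launch command uses the new nvstreammux:
$ export USE_NEW_NVSTREAMMUX=yes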
RTMP_IN->RTMP_OUT#
input1=rtmp://<host ip address:port>/live/test1
input2=rtmp://<host ip address:port>/live/test2
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 max-latency=250000000 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer latency=250000000 ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
RTSP_IN->RTMP_OUT#
input1=<rtsp url>
input2=<rtsp url>
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. -e
FILE_IN->RTSP_OUT#
input1=file:///AV_I_frames_1.mp4
input2=file:///AV_I_frames_2.mp4
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvrtspoutsinkbin name=r uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! r. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
FILE_IN->RTMP_OUT#
input1=file:///AV_I_frames_1.mp4
input2=file:///AV_I_frames_2.mp4
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
RTMP_IN->FILE_OUT#
input1=rtmp://<host ip address:port>/live/test1
input2=rtmp://<host ip address:port>/live/test2
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 sync-inputs=1 max-latency=250000000 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! filesink location=out.flv async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer latency=250000000 name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
RTSP_IN->FILE_OUT#
input1=<rtsp url>
input2=<rtsp url>
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! filesink location=out.flv async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. -e
FILE_IN->FILE_OUT#
input1=file:///AV_I_frames_1.mp4
input2=file:///AV_I_frames_2.mp4
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! filesink location=out.flv async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
RTSP_IN->RTSP_OUT#
input1=<rtsp url>
input2=<rtsp url>
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvrtspoutsinkbin name=r uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! r. demux2. ! queue ! audioconvert ! mixer. -e
RTMP_IN->RTSP_OUT#
input1=rtmp://<host ip address:port>/live/test1
input2=rtmp://<host ip address:port>/live/test2
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 sync-inputs=1 max-latency=250000000 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvrtspoutsinkbin name=r uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! audioconvert ! mixer.sink_0 audiomixer latency=250000000 name=mixer ! queue ! r. demux2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. -e
Reference AVSync + ASR Pipelines (with new nvstreammux)#
RTMP_IN->RTMP_OUT#
input1=rtmp://<host ip address:port>/live/test1
input2=rtmp://<host ip address:port>/live/test2
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux max-latency=250000000 batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvdsosd ! queue ! nvvideoconvert ! queue ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! tee name=t1 t1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer latency=250000000 ! queue ! avenc_aac ! aacparse ! queue ! mux. t1. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 demux2. ! queue ! tee name=t2 t2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. t2. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 -e
Note
Make sure the ASR service is running (refer to the README of deepstream-avsync-app for detailed information). To ensure the gRPC libraries are accessible, set LD_LIBRARY_PATH using $ source ~/.profile. Update the correct path of riva_asr_grpc_jasper_conf.yml in the above command before running it.
RTSP_IN->RTMP_OUT#
input1=<rtsp url>
input2=<rtsp_url>
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=flvmux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! tee name=t1 t1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! flvmux. t1. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 qos=0 demux2. ! queue ! tee name=t2 t2. ! queue ! audioconvert ! mixer. t2. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 qos=0 -e
Note
Make sure the ASR service is running (refer to the README of deepstream-avsync-app for detailed information). To ensure the gRPC libraries are accessible, set LD_LIBRARY_PATH using $ source ~/.profile. Update the correct path of riva_asr_grpc_jasper_conf.yml in the above command before running it.
FILE_IN->RTMP_OUT#
input1=file:///AV_I_frames_1.mp4
input2=file:///AV_I_frames_2.mp4
output=<RTMP url>
e.g.:
rtmp://a.rtmp.youtube.com/live2/<key> #For YouTube™ live
rtmp://<host ip address:port>/live/test #For Host
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=2 sync-inputs=1 name=mux1 ! queue ! nvmultistreamtiler width=480 height=360 ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvdsosd ! queue ! nvvideoconvert ! queue ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! rtmpsink location=$output async=0 qos=0 sync=1 uridecodebin3 uri=$input2 name=demux2 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_1 demux1. ! queue ! tee name=t1 t1. ! queue ! audioconvert ! mixer.sink_0 audiomixer name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. t1. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 demux2. ! queue ! tee name=t2 t2. ! queue ! audioconvert ! mixer. fakesrc num-buffers=0 is-live=1 ! mixer. t2. ! queue ! audioconvert ! "audio/x-raw, format=(string)S16LE, channels=(int)1" ! audioresample ! "audio/x-raw, rate=(int)16000" ! nvdsasr config-file=riva_asr_grpc_jasper_conf.yml customlib-name="libnvds_riva_asr_grpc.so" create-speech-ctx-func="create_riva_asr_grpc_ctx" ! fakesink sync=0 async=0 -e
Note
Make sure the ASR service is running (refer to the README of deepstream-avsync-app for detailed information). To ensure the gRPC libraries are accessible, set LD_LIBRARY_PATH using $ source ~/.profile. Update the correct path of riva_asr_grpc_jasper_conf.yml in the above command before running it.
Gst-pipeline with audiomuxer (single source, without ASR + new nvstreammux)#
RTMP_IN->FILE_OUT#
input1=rtmp://<host ip address:port>/live/test1
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 uridecodebin3 uri=$input1 name=demux1 ! queue ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! mux1.sink_0 nvstreammux batch-size=1 sync-inputs=1 max-latency=250000000 name=mux1 ! queue ! nvmultistreamtiler ! nvvideoconvert ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse ! queue ! flvmux name=mux streamable=true ! filesink location=out.flv async=0 qos=0 sync=1 demux1. ! queue ! audioconvert ! audiomux.sink_0 nvstreammux name=audiomux batch-size=1 max-latency=250000000 sync-inputs=1 ! nvstreamdemux name=audiodemux audiodemux.src_0 ! audioconvert ! mixer.sink_0 audiomixer latency=250000000 name=mixer ! queue ! avenc_aac ! aacparse ! queue ! mux. fakesrc num-buffers=0 is-live=1 ! mixer. -e