NVIDIA DeepStream SDK API Reference (7.1 Release)
infer_cuda_context.h
#ifndef __INFER_CUDA_CONTEXT_H__
#define __INFER_CUDA_CONTEXT_H__

#include <shared_mutex>

class CropSurfaceConverter;
class NetworkPreprocessor;
class CudaEventInPool;

// Excerpt of member declarations; the two below are reconstructed from the
// member documentation that follows.
NvDsInferStatus createPreprocessor(
    const ic::PreProcessParams& params,
    std::vector<UniqPreprocessor>& processors) override;
NvDsInferStatus allocateResource(
    const ic::InferenceConfig& config) override;
// Remaining fragments from the source listing (enclosing declarations
// not captured):
//   ... int poolSize, int gpuId);
//   ... const ic::InferenceConfig& config, BaseBackend& backend,
//       const std::string& primaryTensor);
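Assembled from the member documentation below, the following is a condensed, non-authoritative sketch of the class shape. The base-class name InferBaseContext, the access specifiers, and the member order are assumptions, and the real header declares additional members.

// Condensed sketch only; assembled from the documented members below.
class InferCudaContext : public InferBaseContext {
public:
    InferCudaContext();
    ~InferCudaContext() override;
    SharedSysMem acquireTensorHostBuf(const std::string& name, size_t bytes);
    SharedCuEvent acquireTensorHostEvent();

protected:
    NvDsInferStatus fixateInferenceInfo(
        const ic::InferenceConfig& config, BaseBackend& backend) override;
    NvDsInferStatus createPreprocessor(
        const ic::PreProcessParams& params,
        std::vector<UniqPreprocessor>& processors) override;
    NvDsInferStatus createPostprocessor(
        const ic::PostProcessParams& params,
        UniqPostprocessor& processor) override;
    NvDsInferStatus allocateResource(const ic::InferenceConfig& config) override;
    NvDsInferStatus preInference(
        SharedBatchArray& inputs, const ic::InferenceConfig& config) override;
    NvDsInferStatus extraOutputTensorCheck(
        SharedBatchArray& outputs, SharedOptions inOptions) override;
    void notifyError(NvDsInferStatus status) override;
    NvDsInferStatus deinit() override;
    int tensorPoolSize() const;
    void getNetworkInputInfo(NvDsInferNetworkInfo& networkInfo) override;
};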
This is a header file for pre-processing CUDA kernels with normalization and mean subtraction required...
void getNetworkInputInfo(NvDsInferNetworkInfo &networkInfo) override
Get the network input layer information.
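A minimal usage sketch, assuming the call is made from a context where the method is accessible; the width, height, and channels fields of NvDsInferNetworkInfo are part of the standard nvinfer data types:

NvDsInferNetworkInfo netInfo{};
getNetworkInputInfo(netInfo);
// netInfo.width, netInfo.height, and netInfo.channels now describe
// the network input layer used for preprocessing.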
InferDataType
Datatype of the tensor buffer.
std::unique_ptr< InferExtraProcessor > UniqInferExtraProcessor
UniqStreamManager m_MultiStreamManager
Stream-ID based management.
InferMediaFormat
Image formats.
NvDsInferStatus extraOutputTensorCheck(SharedBatchArray &outputs, SharedOptions inOptions) override
Post inference steps for the custom processor and LSTM controller.
NvDsInferStatus createPreprocessor(const ic::PreProcessParams &params, std::vector< UniqPreprocessor > &processors) override
Create the surface converter and network preprocessor.
std::shared_ptr< SysMem > SharedSysMem
NvDsInferStatus allocateResource(const ic::InferenceConfig &config) override
Allocate resources for the preprocessors and post-processor.
Header file for the data types used in the inference processing.
Preprocessor for scaling and normalizing the input and converting it to the network media format.
InferTensorOrder
The type of tensor order.
Stores the information of a layer in the inference model.
MapBufferPool< std::string, UniqSysMem > m_HostTensorPool
Map of pools for the output tensors.
UniqLstmController m_LstmController
LSTM controller.
Header file containing utility functions and classes used by the nvinferserver low-level library.
InferCudaContext()
Constructor.
kRGB
24-bit interleaved R-G-B
A generic post processor class.
Header file of the common declarations for the nvinferserver library.
std::string m_NetworkImageName
The input layer name.
SharedCuEvent acquireTensorHostEvent()
Acquire a CUDA event from the events pool.
int tensorPoolSize() const
Get the size of the tensor pool.
Preprocessor for cropping, scaling, and padding the inference input to the required height and width.
void notifyError(NvDsInferStatus status) override
In case of error, notify the waiting threads.
Holds information about the model network.
InferDataType m_InputDataType
The input layer datatype.
std::vector< SharedCudaTensorBuf > m_ExtraInputs
Array of buffers for the additional inputs.
std::shared_ptr< IOptions > SharedOptions
NvDsInferStatus deinit() override
Release the host tensor pool buffers, extra input buffers, LSTM controller, and extra input processor.
CropSurfaceConverter * m_SurfaceConverter
Preprocessor and post-processor handles.
NetworkPreprocessor * m_NetworkPreprocessor
NvDsInferStatus preInference(SharedBatchArray &inputs, const ic::InferenceConfig &config) override
Initialize non-image input layers if the custom library has implemented the interface.
The base class for handling the inference context.
UniqInferExtraProcessor m_ExtraProcessor
Extra and custom processing pre/post inference.
InferTensorOrder m_InputTensorOrder
The input layer tensor order.
const ic::InferenceConfig & config() const
Postprocessor * m_FinalProcessor
SharedSysMem acquireTensorHostBuf(const std::string &name, size_t bytes)
Allocator: acquire a host buffer for an inference output tensor.
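A hedged sketch combining acquireTensorHostBuf with acquireTensorHostEvent from above: stage an asynchronous device-to-host copy of an output tensor into a pooled host buffer and record a pooled event for the consumer to wait on. The ptr() accessors on SysMem and CudaEvent, the tensor name, and the ctx, devPtr, bytes, and stream variables are assumptions for illustration, not confirmed API.

// Sketch only; accessor names are assumed, not confirmed API.
SharedSysMem hostBuf = ctx.acquireTensorHostBuf("output_tensor", bytes);
SharedCuEvent copyDone = ctx.acquireTensorHostEvent();
cudaMemcpyAsync(
    hostBuf->ptr(), devPtr, bytes, cudaMemcpyDeviceToHost, stream);
cudaEventRecord(copyDone->ptr(), stream);
// A consumer later synchronizes on the event before reading hostBuf;
// releasing the shared pointers returns both resources to their pools.
cudaEventSynchronize(copyDone->ptr());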
NvDsInferStatus createPostprocessor(const ic::PostProcessParams &params, UniqPostprocessor &processor) override
Create the post-processor as per the network output type.
std::unique_ptr< StreamManager > UniqStreamManager
SharedBufPool< std::unique_ptr< CudaEventInPool > > m_HostTensorEvents
Pool of CUDA events for host tensor copy.
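To illustrate the pooling pattern behind m_HostTensorEvents (and, analogously, m_HostTensorPool), here is a generic, self-contained analogue; it is not the SDK's SharedBufPool or CudaEventInPool implementation:

#include <cuda_runtime.h>
#include <mutex>
#include <vector>

// Generic analogue of an event pool: events are created once and recycled,
// avoiding repeated cudaEventCreate/cudaEventDestroy on the hot path.
class EventPool {
public:
    explicit EventPool(int size) {
        for (int i = 0; i < size; ++i) {
            cudaEvent_t ev = nullptr;
            cudaEventCreateWithFlags(&ev, cudaEventDisableTiming);
            m_Free.push_back(ev);
        }
    }
    // Callers must release all acquired events before destruction;
    // only events currently in the free list are destroyed here.
    ~EventPool() {
        for (cudaEvent_t ev : m_Free) cudaEventDestroy(ev);
    }
    cudaEvent_t acquire() {
        std::lock_guard<std::mutex> lock(m_Mutex);
        if (m_Free.empty()) return nullptr;  // caller handles exhaustion
        cudaEvent_t ev = m_Free.back();
        m_Free.pop_back();
        return ev;
    }
    void release(cudaEvent_t ev) {
        std::lock_guard<std::mutex> lock(m_Mutex);
        m_Free.push_back(ev);
    }

private:
    std::mutex m_Mutex;
    std::vector<cudaEvent_t> m_Free;
};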
NvDsInferStatus fixateInferenceInfo(const ic::InferenceConfig &config, BaseBackend &backend) override
Check the tensor order, media format, and datatype for the input tensor.
std::unique_ptr< LstmController > UniqLstmController
std::unique_ptr< BasePostprocessor > UniqPostprocessor
Processor interfaces.
std::shared_ptr< CudaEvent > SharedCuEvent
Base class of inference backend processing.
InferMediaFormat m_NetworkImageFormat
The input layer media format.
std::shared_ptr< BaseBatchArray > SharedBatchArray
~InferCudaContext() override
Destructor.
Header file of the base class for inference context.
NvDsInferStatus
Enum for the status codes returned by NvDsInferContext.
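Virtually every operation above reports through this enum. A typical caller-side check, assuming a context instance named ctx and using the NVDSINFER_SUCCESS code from the standard nvinfer status enum:

NvDsInferStatus status = ctx.deinit();
if (status != NVDSINFER_SUCCESS) {
    // Propagate or log the failure; the enum value identifies the cause.
}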
NvDsInferNetworkInfo m_NetworkImageInfo
Network input height, width, channels for preprocessing.
Class for inference context CUDA processing.