NVIDIA DeepStream SDK API Reference, 7.1 Release
#ifndef __NVDSINFERSERVER_EXTRA_PROCESSOR_H__
#define __NVDSINFERSERVER_EXTRA_PROCESSOR_H__

...(SharedDllHandle dlHandle, const std::string& funcName,
    const std::string& config);

...(BaseBackend& backend, const std::set<std::string>& excludes,
    int32_t poolSize, int gpuId);

bool requireLoop() const { return m_RequireInferLoop; }

uint32_t m_maxBatch = 0;
bool m_firstDimDynamicBatch = false;
bool m_RequireInferLoop = false;
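The first parameter list above (a SharedDllHandle, a function name, and a config string) suggests the usual plugin pattern: resolve a named extern "C" factory function from a dynamically loaded library and wrap the result in a smart pointer such as InferCustomProcessorPtr. A minimal self-contained sketch of that pattern, using an in-process symbol table as a stand-in for the real DlLibHandle/dlsym machinery; all names here (MockDllHandle, loadProcessor, createDummyProcessor) are illustrative, not the DeepStream API:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>

// Illustrative stand-in for the real IInferCustomProcessor interface.
struct IInferCustomProcessor {
    virtual ~IInferCustomProcessor() = default;
    virtual bool requireInferLoop() const = 0;
};
using InferCustomProcessorPtr = std::shared_ptr<IInferCustomProcessor>;

// Factory signature a plugin would export with extern "C".
using CreateFn = IInferCustomProcessor* (*)(const char* config);

// Mock "DLL handle": a name -> symbol table standing in for dlopen/dlsym.
struct MockDllHandle {
    std::map<std::string, CreateFn> symbols;
    CreateFn resolve(const std::string& name) const {
        auto it = symbols.find(name);
        return it == symbols.end() ? nullptr : it->second;
    }
};

struct DummyProcessor : IInferCustomProcessor {
    bool requireInferLoop() const override { return false; }
};

IInferCustomProcessor* createDummyProcessor(const char* /*config*/) {
    return new DummyProcessor;
}

// Resolve the factory by name and wrap the result in a shared_ptr,
// mirroring a (dlHandle, funcName, config)-style loading API.
InferCustomProcessorPtr loadProcessor(const MockDllHandle& dll,
                                      const std::string& funcName,
                                      const std::string& config) {
    CreateFn create = dll.resolve(funcName);
    if (!create) return nullptr;  // symbol not found in the library
    return InferCustomProcessorPtr(create(config.c_str()));
}
```

Returning a null pointer on a missing symbol lets the caller map the failure to an NvDsInferStatus error code rather than throwing across a plugin boundary.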
Referenced headers:
- This is a header file for pre-processing cuda kernels with normalization and mean subtraction require...
- Header file for the data types used in the inference processing.
- Header file containing utility functions and classes used by the nvinferserver low level library.
- Header file of the common declarations for the nvinferserver library.
- Header file for inference processing backend base class.

Referenced types:
- SharedDllHandle: std::shared_ptr<DlLibHandle>
- InferCustomProcessorPtr: std::shared_ptr<IInferCustomProcessor>
- LayerDescriptionList: std::vector<LayerDescription>
- SharedOptions: std::shared_ptr<IOptions>
- SharedCuStream: std::shared_ptr<CudaStream> (Cuda based pointers.)
- TensorMapPool: MapBufferPool<std::string, UniqCudaTensorBuf>
- TensorMapPoolPtr: std::unique_ptr<TensorMapPool>
- UniqStreamManager: std::unique_ptr<StreamManager>
- BaseBackend: Base class of inference backend processing.
- SharedBatchArray: std::shared_ptr<BaseBatchArray>
- NvDsInferStatus: Enum for the status codes returned by NvDsInferContext.
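The TensorMapPool alias (MapBufferPool keyed by std::string, holding UniqCudaTensorBuf) suggests a per-tensor-name pool of reusable buffers. A minimal sketch of such a keyed buffer pool, under the assumption that each key owns a free-list of uniquely owned buffers; the names MapBufferPoolSketch and TensorBuf are illustrative, and the real pool manages CUDA tensor buffers rather than host memory:

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <vector>

// Keyed buffer pool: acquire pops a buffer from the key's free-list,
// release returns it. Not the nvinferserver implementation.
template <typename Key, typename UniqBuf>
class MapBufferPoolSketch {
public:
    void setBuffer(const Key& key, UniqBuf buf) {
        m_pool[key].push_back(std::move(buf));
    }
    // Returns nullptr when the per-key pool is empty or the key is unknown.
    UniqBuf acquireBuffer(const Key& key) {
        auto it = m_pool.find(key);
        if (it == m_pool.end() || it->second.empty()) return nullptr;
        UniqBuf buf = std::move(it->second.back());
        it->second.pop_back();
        return buf;
    }
    void releaseBuffer(const Key& key, UniqBuf buf) {
        m_pool[key].push_back(std::move(buf));
    }

private:
    std::map<Key, std::vector<UniqBuf>> m_pool;
};

// Trivial host-memory stand-in for UniqCudaTensorBuf.
struct TensorBuf { std::vector<float> data; };
using UniqTensorBuf = std::unique_ptr<TensorBuf>;
using TensorMapPoolSketch = MapBufferPoolSketch<std::string, UniqTensorBuf>;
```

Pooling by tensor name lets pre-allocated device buffers be recycled across inference batches instead of being reallocated per frame, which is the typical motivation for a pool sized by a poolSize parameter.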