NVIDIA DeepStream SDK API Reference
7.1 Release
#ifndef __NVDSINFERSERVER_CUSTOM_PROCESSOR_H__
#define __NVDSINFERSERVER_CUSTOM_PROCESSOR_H__

#include <condition_variable>

...
    const std::vector<IBatchBuffer*>& primaryInputs, std::vector<IBatchBuffer*>& extraInputs,
...
    const char* config, uint32_t configLen);
Interface of a custom processor, which is created and loaded at runtime through CreateCustomProcessorFun...
This is a header file for pre-processing CUDA kernels with normalization and mean subtraction require...
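Since the interface is created and loaded at runtime, a custom library exposes a C-linkage factory taking a config blob, as suggested by the `(const char* config, uint32_t configLen)` fragment above. The sketch below is illustrative only: the exported symbol name is truncated in this reference, so `CreateCustomProcessor` is an assumed placeholder, and the interface here is a minimal stand-in for the real SDK declaration.

```cpp
#include <cstdint>

// Minimal stand-in for the SDK interface (illustration only; the real
// declaration lives in the DeepStream nvdsinferserver headers).
class IInferCustomProcessor {
public:
    virtual ~IInferCustomProcessor() = default;
};

class NoopProcessor : public IInferCustomProcessor {};

// Hypothetical factory with the documented (config, configLen) signature.
// The exact exported name is truncated in the reference above, so the name
// used here is an assumption, not the SDK's actual symbol.
extern "C" IInferCustomProcessor* CreateCustomProcessor(
    const char* config, uint32_t configLen) {
    (void)config;     // a real factory would parse the user config blob here
    (void)configLen;
    return new NoopProcessor;  // ownership passes to the nvdsinferserver lib
}
```

The returned object is later deleted by the nvdsinferserver lib, which is why the interface's destructor is virtual.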
virtual NvDsInferStatus inferenceDone(const IBatchArray *outputs, const IOptions *inOptions)=0
Inference done callback for custom postprocessing.
Header file for the data types used in the inference processing.
virtual bool requireInferLoop() const
Indicate whether this custom processor requires an inference loop, in which the nvdsinferserver lib guarante...
virtual void notifyError(NvDsInferStatus status)=0
Notification of an error to the interface implementation.
virtual void supportInputMemType(InferMemType &type)
Query the memory type that the extraInputProcess() implementation supports.
InferMemType
The memory types of inference buffers.
virtual ~IInferCustomProcessor()=default
IInferCustomProcessor will be deleted by the nvdsinferserver lib.
Interface class for an array of batch buffers.
virtual NvDsInferStatus extraInputProcess(const std::vector< IBatchBuffer * > &primaryInputs, std::vector< IBatchBuffer * > &extraInputs, const IOptions *options)=0
Custom processor for extra input data.
NvDsInferStatus
Enum for the status codes returned by NvDsInferContext.
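Taken together, the entries above describe one abstract interface: declare the supported input memory type, fill extra input tensors before inference, and parse raw outputs when inference completes. The sketch below shows what an implementation might look like. Because the real SDK headers are not reproduced here, the enums and the interface declaration are simplified stand-ins (actual enumerator names and values may differ), and `MyCustomProcessor` is a hypothetical name.

```cpp
#include <iostream>
#include <vector>

// Simplified stand-ins for the real nvdsinferserver types (illustration only).
enum NvDsInferStatus { NVDSINFER_SUCCESS = 0, NVDSINFER_CUSTOM_LIB_FAILED };
enum class InferMemType { kNone, kGpuCuda, kCpuCuda };
class IBatchBuffer {};
class IBatchArray {};
class IOptions {};

// Abridged mirror of the documented interface.
class IInferCustomProcessor {
public:
    virtual ~IInferCustomProcessor() = default;  // deleted by the lib
    virtual void supportInputMemType(InferMemType& type) { type = InferMemType::kNone; }
    virtual bool requireInferLoop() const { return false; }
    virtual NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& primaryInputs,
        std::vector<IBatchBuffer*>& extraInputs, const IOptions* options) = 0;
    virtual NvDsInferStatus inferenceDone(
        const IBatchArray* outputs, const IOptions* inOptions) = 0;
    virtual void notifyError(NvDsInferStatus status) = 0;
};

// Hypothetical implementation.
class MyCustomProcessor : public IInferCustomProcessor {
public:
    void supportInputMemType(InferMemType& type) override {
        type = InferMemType::kCpuCuda;  // extra inputs are filled in host memory here
    }
    NvDsInferStatus extraInputProcess(
        const std::vector<IBatchBuffer*>& primaryInputs,
        std::vector<IBatchBuffer*>& extraInputs,
        const IOptions* /*options*/) override {
        // Derive the extra input tensor contents from the primary inputs here.
        (void)extraInputs;
        return primaryInputs.empty() ? NVDSINFER_CUSTOM_LIB_FAILED : NVDSINFER_SUCCESS;
    }
    NvDsInferStatus inferenceDone(
        const IBatchArray* outputs, const IOptions* /*inOptions*/) override {
        // Parse the raw output tensors and attach results downstream here.
        return outputs ? NVDSINFER_SUCCESS : NVDSINFER_CUSTOM_LIB_FAILED;
    }
    void notifyError(NvDsInferStatus status) override {
        std::cerr << "inference error, status " << status << "\n";
    }
};
```

Note the asymmetry in the interface: `extraInputProcess`, `inferenceDone`, and `notifyError` are pure virtual and must be overridden, while `requireInferLoop` and `supportInputMemType` have defaults a processor only overrides when it needs loop-ordering guarantees or a non-default buffer memory type.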