NVIDIA DeepStream SDK API Reference
7.1 Release
#ifndef __NVDSINFER_BASE_BACKEND_H__
#define __NVDSINFER_BASE_BACKEND_H__

#include <condition_variable>

uint32_t getLayerSize() const final
{
    assert(!m_AllLayers.empty());
    return (int)m_AllLayers.size();
}

const LayerDescription* getLayerInfo(const std::string& bindingName) const final;

void setKeepInputs(bool enable)
{
    m_KeepInputs = enable;
}

uint32_t m_InputSize = 0;       // number of input layers
int32_t m_MaxBatchSize = 0;     // maximum batch size configured for the backend
bool m_IsFirstDimBatch = false; // whether batched input is expected
uint32_t m_UniqueId = 0;        // unique ID of this backend instance
bool m_KeepInputs = false;      // whether input buffers are kept
LayersTuple getOutputLayers() const final
Get the LayersTuple for output layers.
This is a header file for pre-processing cuda kernels with normalization and mean subtraction require...
bool isNonBatch(T b)
Checks if the input batch size is zero.
void setMaxBatchSize(uint32_t size)
Set the maximum batch size to be used for the backend.
~BaseBackend() override=default
Destructor, default.
int uniqueId() const
Get the unique ID of the object instance.
void resetLayers(LayerDescriptionList layers, int inputSize)
Set the layer description list of the backend.
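A minimal sketch of how a concrete backend implementation might publish its layer list through resetLayers(). It assumes the nvdsinferserver namespace, that this header can be included directly, and that the first inputSize entries of the list describe the input layers; MyBackend and buildLayerDescriptions() are illustrative names, not part of the SDK.

#include "nvdsinfer_base_backend.h"  // assumed include path for this header

#include <utility>

using namespace nvdsinferserver;  // assumed namespace of the nvinferserver library

class MyBackend : public BaseBackend {  // hypothetical concrete backend
public:
    // Remaining virtual methods of the backend interface are omitted here.
    void publishLayers()
    {
        // Hypothetical, model-specific helper that fills one LayerDescription
        // per binding, inputs first.
        LayerDescriptionList layers = buildLayerDescriptions();
        int numInputs = 1;  // e.g. a single input binding followed by the outputs
        resetLayers(std::move(layers), numInputs);
    }

private:
    LayerDescriptionList buildLayerDescriptions();  // hypothetical, model-specific
};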
std::vector< InputShapeTuple > InputShapes
Header file for the data types used in the inference processing.
InferTensorOrder
The type of tensor order.
LayerDescription * mutableLayerInfo(const std::string &bindingName)
Get the mutable layer description structure for the layer name.
LayerDescription
Stores the information of a layer in the inference model.
std::tuple< const LayerDescription *, int > LayersTuple
Tuple containing pointer to layer descriptions and the number of layers.
std::unordered_map< std::string, int > LayerIdxMap
Map of layer name to layer index.
Inference processing backend interface header file.
std::vector< LayerDescription > LayerDescriptionList
Header file containing utility functions and classes used by the nvinferserver low level library.
bool isFirstDimBatch() const final
Returns a boolean indicating whether batched input is expected.
const LayerDescription * getLayerInfo(const std::string &bindingName) const final
Retrieve the layer information from the layer name.
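A small usage sketch for the name-based lookup, assuming the nvdsinferserver namespace, this header's include path, and that getLayerInfo() returns a null pointer when the binding name is unknown (not stated in this index).

#include "nvdsinfer_base_backend.h"  // assumed include path

#include <iostream>
#include <string>

using namespace nvdsinferserver;  // assumed namespace

void describeBinding(const BaseBackend& backend, const std::string& bindingName)
{
    // Assumption: an unknown binding name yields a null pointer.
    const LayerDescription* info = backend.getLayerInfo(bindingName);
    if (!info) {
        std::cerr << "no layer bound to '" << bindingName << "'\n";
        return;
    }
    std::cout << bindingName << " is one of " << backend.getLayerSize()
              << " layers (" << backend.getInputLayerSize() << " inputs)\n";
}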
std::unique_ptr< BaseBackend > UniqBackend
uint32_t getLayerSize() const final
Returns the total number of layers (input + output) for the model.
void setUniqueId(uint32_t id)
Set the unique ID for the object instance.
bool isNonBatching() const
Checks whether the batch size indicates batched processing or not.
void setFirstDimBatch(bool flag)
Set the flag indicating that it is a batch input.
LayersTuple getInputLayers() const final
Get the LayersTuple for input layers.
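The LayersTuple pairs a pointer to the first LayerDescription with a count, which suggests the layers sit in a contiguous array; the sketch below reads both tuples on that assumption (namespace and include path assumed as before).

#include "nvdsinfer_base_backend.h"  // assumed include path

#include <cstdio>
#include <tuple>

using namespace nvdsinferserver;  // assumed namespace

void dumpLayerCounts(const BaseBackend& backend)
{
    // getInputLayers()/getOutputLayers() each return a LayersTuple:
    // std::tuple<const LayerDescription*, int>.
    auto inputs = backend.getInputLayers();
    auto outputs = backend.getOutputLayers();

    std::printf("input layers:  %d\n", std::get<1>(inputs));
    std::printf("output layers: %d\n", std::get<1>(outputs));

    const LayerDescription* first = std::get<0>(inputs);
    for (int i = 0; i < std::get<1>(inputs); ++i) {
        const LayerDescription& layer = first[i];  // contiguous array assumed
        (void)layer;  // per-layer fields are not listed in this index
    }
}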
int32_t maxBatchSize() const final
Returns the maximum batch size set for the backend.
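A short sketch of the batch-related queries: isNonBatching() flags a zero max batch size (see isNonBatch above), and maxBatchSize() gives the upper bound otherwise. Namespace and include path are assumed; clampRequestedBatch() is an illustrative helper, not an SDK function.

#include "nvdsinfer_base_backend.h"  // assumed include path

#include <algorithm>

using namespace nvdsinferserver;  // assumed namespace

int clampRequestedBatch(const BaseBackend& backend, int requested)
{
    if (backend.isNonBatching()) {
        // Max batch size is zero: the model takes no explicit batch dimension.
        return 1;
    }
    // Otherwise cap the request at the configured maximum batch size.
    return std::min<int>(requested, backend.maxBatchSize());
}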
bool needKeepInputs() const
Check if the keep input flag is set.
Base class of inference backend processing.
const LayerDescriptionList & allLayers() const
Returns the list of descriptions of all layers, input and output.
bool checkInputDims(const InputShapes &shapes) const
Check that the list of input shapes have fixed dimensions and corresponding layers are marked as inpu...
InferTensorOrder getInputTensorOrder() const final
Returns the input tensor order.
void setInputTensorOrder(InferTensorOrder order)
Set the tensor order for the input layers.
void setKeepInputs(bool enable)
Set the flag indicating whether to keep input buffers.
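A sketch of the one-time configuration a derived backend might perform while setting itself up, using the setters listed here. It is written as a member of a hypothetical derived class because this index does not state which setters are public; MyConfiguredBackend and configure() are illustrative names only.

#include "nvdsinfer_base_backend.h"  // assumed include path

using namespace nvdsinferserver;  // assumed namespace

class MyConfiguredBackend : public BaseBackend {  // hypothetical derived backend
public:
    // Remaining virtual methods of the backend interface are omitted here.
    void configure(uint32_t id, uint32_t maxBatch)
    {
        setUniqueId(id);            // identifies this backend instance
        setMaxBatchSize(maxBatch);  // upper bound used for batched inference
        setFirstDimBatch(true);     // batched input is expected
        setKeepInputs(false);       // input buffers do not need to be retained
    }
};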
uint32_t getInputLayerSize() const final
Returns the number of input layers for the model.