NVIDIA Documentation Hub

Get started by exploring the latest technical information and product documentation

  • Documentation Center
    04/10/23
    The integration of NVIDIA RAPIDS into the Cloudera Data Platform (CDP) provides transparent GPU acceleration of data analytics workloads using Apache Spark. This documentation describes the integration and suggested reference architectures for deployment.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Documentation Center
    01/23/23
    This documentation should be of interest to cluster admins and support personnel of enterprise GPU deployments. It includes monitoring and management tools and application programming interfaces (APIs), in-field diagnostics and health monitoring, and cluster setup and deployment.
  • Documentation Center
    03/16/24
    Developer documentation for Megatron Core covers API documentation, a quickstart guide, and deep dives into the advanced GPU techniques needed to optimize LLM performance at scale.
  • Documentation Center
    01/23/23
    nvCOMP is a high-performance, GPU-enabled data compression library that includes both open-source and closed-source components. It provides fast lossless data compression and decompression using a GPU and features generic compression interfaces that enable developers to use high-performance GPU compressors in their applications.
  • Product
    04/12/23
    NVIDIA AI Aerial™ is a suite of accelerated computing platforms, software, and services for designing, simulating, and operating wireless networks. Aerial contains hardened RAN software libraries for telcos, cloud service providers (CSPs), and enterprises building commercial 5G networks. Academic and industry researchers can access Aerial on cloud or on-premises setups for advanced wireless and AI/machine learning (ML) research for 6G.
    • Edge Computing
    • Telecommunications
  • Product
    04/27/23
    NVIDIA AI Enterprise is an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines development and deployment of production-grade co-pilots and other generative AI applications. Easy-to-use microservices provide optimized model performance with enterprise-grade security, support, and stability to ensure a smooth transition from prototype to production for enterprises that run their businesses on AI.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Documentation Center
    06/12/23
    A simulation platform that allows users to model data center deployments with full software functionality, creating a digital twin. Transform and streamline network operations by simulating, validating, and automating changes and updates.
  • Documentation Center
    02/03/23
    NVIDIA Ansel is a revolutionary way to capture in-game shots and share the moment. Compose your screenshots from any position, adjust them with post-process filters, capture HDR images in high-fidelity formats, and share them in 360 degrees using your mobile phone, PC, or VR headset.
  • Documentation Center
    05/09/24
    Your guide to NVIDIA APIs including NIM and CUDA-X microservices.
  • Product
    10/28/24
    The NVIDIA Attestation Suite enhances Confidential Computing by providing robust mechanisms to ensure the integrity and security of devices and platforms. The suite includes NVIDIA Remote Attestation Service (NRAS), the Reference Integrity Manifest (RIM) Service, and the NDIS OCSP Responder.
    • Aerospace
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction
  • Product
    10/30/23
    NVIDIA Base Command Manager streamlines cluster provisioning, workload management, and infrastructure monitoring. It provides all the tools you need to deploy and manage an AI data center. NVIDIA Base Command Manager Essentials comprises the features of NVIDIA Base Command Manager that are certified for use with NVIDIA AI Enterprise.
    • Data Center / Cloud
  • Technical Overview
    01/25/23
    NVIDIA Base Command Platform is a world-class infrastructure solution for businesses and their data scientists who need a premium AI development experience.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Product
    04/18/23
    NVIDIA Base OS implements the stable and fully qualified operating systems for running AI, machine learning, and analytics applications on the DGX platform. It includes system-specific configurations, drivers, and diagnostic and monitoring tools and is available for Ubuntu, Red Hat Enterprise Linux, and Rocky Linux.
    • Data Center / Cloud
  • Documentation Center
    03/08/23
    NVIDIA Bright Cluster Manager offers fast deployment and end-to-end management for heterogeneous HPC and AI server clusters at the edge, in the data center, and in multi/hybrid-cloud environments. It automates provisioning and administration for clusters ranging in size from a single node to hundreds of thousands, supports CPU-based and NVIDIA GPU-accelerated systems, and provides orchestration with Kubernetes.
    • HPC / Scientific Computing
    • Edge Computing
    • Data Center / Cloud
  • Documentation Center
    01/23/23
    NVIDIA Capture SDK (formerly GRID SDK) enables developers to easily and efficiently capture, and optionally encode, the display content.
  • Documentation Center
    02/06/23
    NVIDIA’s program that enables enterprises to confidently deploy hardware solutions that optimally run accelerated workloads—from desktop to data center to edge.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Product
    04/27/23
    NVIDIA® Clara™ is an open, scalable computing platform that enables developers to build and deploy medical imaging applications into hybrid (embedded, on-premises, or cloud) computing environments to create intelligent instruments and automate healthcare workflows.
    • Healthcare & Life Sciences
    • Computer Vision / Video Analytics
  • Product
    Serverless API to deploy and manage AI workloads on GPUs at planetary scale.
  • Product
    01/23/23
    NVIDIA cloud-native technologies enable developers to build and run GPU-accelerated containers using Docker and Kubernetes.
    • Cloud Services
    • Data Center / Cloud
  • Documentation Center
    02/27/23
    CloudXR is NVIDIA's solution for streaming virtual reality (VR), augmented reality (AR), and mixed reality (MR) content from any OpenVR XR application on a remote server (desktop, cloud, data center, or edge).
  • Documentation Center
    04/25/23
    Compute Sanitizer is a functional correctness checking suite included in the CUDA Toolkit. This suite contains multiple tools that can perform different types of checks. The memcheck tool is capable of precisely detecting and attributing out-of-bounds and misaligned memory access errors in CUDA applications. The tool can also report hardware exceptions encountered by the GPU. The racecheck tool can report shared memory data access hazards that can cause data races. The initcheck tool can report cases where the GPU performs uninitialized accesses to global memory. The synccheck tool can report cases where the application is attempting invalid usages of synchronization primitives. This document describes the usage of these tools.
  • Product
    04/03/23
    The NVIDIA® CUDA® Toolkit provides a comprehensive development environment for C and C++ developers building GPU-accelerated applications. With the CUDA Toolkit, you can develop, optimize, and deploy your applications on GPU-accelerated embedded systems, desktop workstations, enterprise data centers, cloud-based platforms and HPC supercomputers.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Documentation Center
    04/12/23
    The NVIDIA CUDA® Deep Neural Network (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, attention, matmul, pooling, and normalization.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Product
    07/14/23
    NVIDIA cuOpt™ is a GPU-accelerated solver that uses heuristics and metaheuristics to solve complex vehicle routing problem variants with a wide range of constraints.
    • Data Science
    • Robotics
  • Product
    03/22/23
    The NVIDIA Data Loading Library (DALI) is a collection of highly optimized building blocks, and an execution engine, for accelerating the pre-processing of input data for deep learning applications. DALI provides both the performance and the flexibility for accelerating different data pipelines as a single library. This single library can then be easily integrated into different deep learning training and inference applications. A minimal pipeline sketch follows this entry.
    • Aerospace
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction
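    Below is a rough sketch of such a pipeline. It assumes DALI is installed; the `./images` path and all parameter values are illustrative placeholders rather than anything from the documentation above.

    ```python
    # Minimal DALI pipeline sketch: read, GPU-decode, and resize images.
    from nvidia.dali import pipeline_def
    import nvidia.dali.fn as fn

    @pipeline_def
    def decode_pipeline(image_dir):
        # Read encoded images and labels from disk.
        jpegs, labels = fn.readers.file(file_root=image_dir, random_shuffle=True)
        # "mixed" parses on the CPU and decodes on the GPU.
        images = fn.decoders.image(jpegs, device="mixed")
        # Resize on the GPU before handing batches to a training framework.
        images = fn.resize(images, resize_x=224, resize_y=224)
        return images, labels

    # Pipeline parameters (batch size, threads, device) are passed at instantiation.
    pipe = decode_pipeline("./images", batch_size=8, num_threads=2, device_id=0)
    pipe.build()
    images, labels = pipe.run()
    ```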
  • Documentation Center
    01/23/23
    NVIDIA Data Center GPU drivers are used in Data Center GPU enterprise deployments for AI, HPC, and accelerated computing workloads. Documentation includes release notes, supported platforms, and cluster setup and deployment.
  • Documentation Center
    02/03/23
    NVIDIA Data Center GPU Manager (DCGM) is a suite of tools for managing and monitoring NVIDIA Data Center GPUs in cluster environments.
    • Aerospace
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction
  • Documentation Center
    01/23/23
    Deep Graph Library (DGL) is a framework-neutral, easy-to-use, and scalable Python library for implementing and training Graph Neural Networks (GNNs). Being framework-neutral, DGL is easily integrated into an existing PyTorch, TensorFlow, or Apache MXNet workflow; a minimal sketch follows below.
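    The sketch below builds a toy three-node graph and applies a single graph-convolution layer using DGL's PyTorch backend; the edges and feature sizes are invented for illustration.

    ```python
    # Tiny DGL + PyTorch sketch: one GraphConv layer over a 3-node graph.
    import torch
    import dgl
    from dgl.nn import GraphConv

    # Directed edges 0->1, 1->2, 2->0 (arbitrary toy graph).
    g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 0])))
    g = dgl.add_self_loop(g)              # GraphConv works best with self-loops

    feat = torch.randn(g.num_nodes(), 4)  # 4 input features per node (illustrative)
    conv = GraphConv(in_feats=4, out_feats=2)
    out = conv(g, feat)                   # node embeddings of shape (3, 2)
    print(out.shape)
    ```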
  • Documentation Center
    07/27/23
    GPUs accelerate machine learning operations by performing calculations in parallel. Many operations, especially those representable as matrix multiplications, will see good acceleration right out of the box. Even better performance can be achieved by tweaking operation parameters to use GPU resources efficiently. The performance documents present the tips that we think are most widely useful.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Product
    08/16/24
    NVIDIA DGX Cloud is an AI platform for enterprise developers, optimized for the demands of generative AI.
  • Product
    11/03/23
    Built from the ground up for enterprise AI, the NVIDIA DGX platform incorporates the best of NVIDIA software, infrastructure, and expertise in a modern, unified AI development and training solution. Every aspect of the DGX platform is infused with NVIDIA AI expertise, featuring world-class software, record-breaking NVIDIA-accelerated infrastructure in the cloud or on-premises, and direct access to NVIDIA DGXPerts to speed the ROI of AI for every enterprise.
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction
    • HPC / Scientific Computing
  • Product
    03/17/23
    Deployment and management guides for NVIDIA DGX SuperPOD, an AI data center infrastructure platform that enables IT to deliver performance—without compromise—for every user and workload. DGX SuperPOD offers leadership-class accelerated infrastructure and agile, scalable performance for the most challenging AI and high-performance computing (HPC) workloads, with industry-proven results.
    • Data Center / Cloud
  • Product
    04/24/23
    System documentation for the DGX AI supercomputers that deliver world-class performance for large generative AI and mainstream AI workloads.
    • Data Center / Cloud
  • Documentation Center
    02/03/23
    The NVIDIA Deep Learning GPU Training System (DIGITS) can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation, and object-detection tasks. DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best-performing model from the results browser for deployment.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Documentation Center
    01/23/23
    The NVIDIA EGX platform delivers the power of accelerated AI computing to the edge with a cloud-native software stack (EGX stack), a range of validated servers and devices, Helm charts, and partners who offer EGX through their products and services.
  • Product
    06/26/23
    NVIDIA’s accelerated computing, visualization, and networking solutions are expediting the speed of business outcomes. NVIDIA’s experts are here for you at every step in this fast-paced journey. With our expansive support tiers, fast implementations, robust professional services, market-leading education, and high caliber technical certifications, we are here to help you achieve success with all parts of NVIDIA’s accelerated computing, visualization, and networking platform.
  • Documentation Center
    02/03/23
    FLARE (Federated Learning Application Runtime Environment) is NVIDIA's open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflows to a privacy-preserving federated paradigm. FLARE makes it possible to build robust, generalizable AI models without sharing data.
  • Product
    01/25/23
    Documentation for GameWorks-related products and technologies, including libraries (NVAPI, OpenAutomate), code samples (DirectX, OpenGL), and developer tools (Nsight, NVIDIA System Profiler).
    • Gaming
    • Content Creation / Rendering
  • Documentation Center
    02/03/23
    The GeForce NOW Developer Platform is an SDK and toolset empowering integration of, interaction with, and testing on the NVIDIA cloud gaming service.
  • Documentation Center
    08/29/24
    NVIDIA GPUDirect Storage (GDS) enables the fastest data path between GPU memory and storage by avoiding copies to and from system memory, thereby increasing storage input/output (IO) bandwidth and decreasing latency and CPU utilization.
    • Aerospace
    • Hardware / Semiconductor
    • Architecture / Engineering / Construction
  • Product
    05/30/24
    Grace is NVIDIA’s first datacenter CPU. Comprising 72 high-performance Arm v9 cores and featuring the NVIDIA-proprietary Scalable Coherency Fabric (SCF) network-on-chip for incredible core-to-core communication, memory bandwidth and GPU I/O capabilities, Grace provides a high-performance compute foundation in a low-power system-on-chip.
    • Data Center / Cloud
  • Documentation Center
    02/03/23
    NVIDIA GVDB Voxels is a new framework for simulation, compute and rendering of sparse voxels on the GPU.
  • Documentation Center
    02/03/23
    NVIDIA Highlights enables automatic video capture of key moments, clutch kills, and match-winning plays, ensuring gamers’ best gaming moments are always saved. Once a Highlight is captured, gamers can share it directly to Facebook, YouTube, or Weibo right from GeForce Experience’s in-game overlay. They can also clip their favorite 15 seconds and share it as an animated GIF - all without leaving the game!
  • Product
    07/25/23
    NVIDIA Holoscan is a hybrid computing platform for medical devices that combines hardware systems for low-latency sensor and network connectivity, optimized libraries for data processing and AI, and core microservices to run surgical video, ultrasound, medical imaging, and other applications anywhere, from embedded to edge to cloud.
    • Healthcare & Life Sciences
  • Documentation Center
    01/23/23
    The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries, and development tools used for developing HPC applications for the NVIDIA platform.
  • Product
    03/23/23
    NVIDIA IGX Orin™ is an industrial-grade platform that combines enterprise-level hardware, software, and support. As a single, holistic platform, IGX allows companies to focus on application development and realize the benefits of AI faster.
    • Architecture / Engineering / Construction
    • Media & Entertainment
    • Restaurant / Quick-Service
  • Documentation Center
    02/03/23
    NVIDIA IndeX is a 3D volumetric interactive visualization SDK that allows scientists and researchers to visualize and interact with massive data sets, make real-time modifications, and navigate to the most pertinent parts of the data to gather better insights faster. IndeX leverages GPU clusters for scalable, real-time visualization and computing of multi-valued volumetric data together with embedded geometry data.
  • External Page
    04/26/23
    Access to the content you are requesting is restricted to members of the NVIDIA DRIVE AGX SDK Program.
  • External Page
    04/26/23
    Access to the content you are requesting is restricted to members of the NVIDIA DRIVE Developer Program for NVIDIA DRIVE PX 2.
  • External Page
    04/26/23
    Archive Version
  • External Page
    11/16/22
    FLARE (Federated Learning Application Runtime Environment) is NVIDIA's open-source, extensible SDK that allows researchers and data scientists to adapt existing ML/DL workflows to a privacy-preserving federated paradigm. FLARE makes it possible to build robust, generalizable AI models without sharing data.
  • External Page
    FME
    11/16/22
    Feature Map Explorer (FME) enables visualization of 4-dimensional image-based feature map data using a range of views, from low-level channel visualizations to detailed numerical information about each channel slice.
  • External Page
    11/16/22
    NVIDIA WaveWorks enables developers to deliver a cinematic-quality ocean simulation for interactive applications. The simulation runs in the frequency domain, using a spectral wave model for wind waves and displacements plus velocity potentials for interactive waves. A set of inverse FFT steps then transforms the results to the spatial domain, ready for rendering. The NVIDIA WaveWorks simulation is initialized and controlled by a simple C API, and the results are accessed for rendering as native graphics API objects. Parameterization is via intuitive real-world variables, such as wind speed and direction. These parameters can be used to tune the look of the sea surface for a wide variety of conditions - from gentle ripples to a heavy storm-tossed ocean based on the Beaufort scale.
  • External Page
    11/16/22
    Welcome to Isaac, a collection of software packages for making autonomous robots.
  • External Page
    11/16/22
    Welcome to Isaac ROS, a collection of ROS2 packages for making autonomous robots.
  • External Page
    11/16/22
    See below for downloadable documentation, software, and other resources.
  • External Page
    11/16/22
    Earlier version of Jetson Linux documentation.
  • Documentation Center
    06/27/23
    NVIDIA MAGNUM IO™ software development kit (SDK) enables developers to remove input/output (IO) bottlenecks in AI, high performance computing (HPC), data science, and visualization applications, reducing the end-to-end time of their workflows. Magnum IO covers all aspects of data movement between CPUs, GPUs, DPUs, and storage subsystems in virtualized, containerized, and bare-metal environments.
  • Documentation Center
    02/03/23
    libcu++, the NVIDIA C++ Standard Library, provides a C++ Standard Library for your entire system, which can be used in and between CPU and GPU code.
  • Documentation Center
    02/03/23
    The NVIDIA Material Definition Language (MDL) is a programming language for defining physically based materials for rendering. The MDL SDK is a set of tools for integrating MDL support into rendering applications. It contains components for loading, inspecting, and editing material definitions, as well as for compiling MDL functions to GLSL, HLSL, native x86, PTX, and LLVM-IR. With the NVIDIA MDL SDK, any physically based renderer can easily add support for MDL and join the MDL ecosystem.
  • External Page
    11/16/22
    DeepStream SDK 3.0 is about seeing beyond pixels. DeepStream exists to make it easier for you to go from raw video data to metadata that can be analyzed for actionable insights.
  • External Page
    11/16/22
    NVIDIA PhysX is a scalable multi-platform physics simulation solution supporting a wide range of devices, from smartphones to high-end multicore CPUs and GPUs. The powerful SDK brings high-performance and precision accuracy to industrial simulation use cases from traditional VFX and game development workflows, to high-fidelity robotics, medical simulation, and scientific visualization applications.
  • External Page
    11/16/22
    NVIDIA works with Facebook and the community to accelerate PyTorch on NVIDIA GPUs in the main PyTorch branch, as well as with ready-to-run containers in NGC.
  • External Page
    11/16/22
    NVIDIA DLSS is a new and improved deep learning neural network that boosts frame rates and generates beautiful, sharp images for your games.
  • Documentation Center
    01/23/23
    An application framework for achieving optimal ray tracing performance on the GPU.
  • Documentation Center
    01/23/23
    Create block-compressed textures and write custom asset pipelines using NVTT 3, an SDK for CUDA-accelerated texture compression and image processing.
  • Documentation Center
    01/23/23
    A new version of the Photoshop texture plugin that allows creators to import and export GPU-compressed texture formats such as DDS and KTX, and to apply image-processing effects on the GPU. It uses the proprietary version of NVTT (NVIDIA Texture Tools) as the base library and is distributed separately as the Exporter. It also includes a command-line interface for scripting and use in developer toolchains.
  • Documentation Center
    01/23/23
    The Turing architecture introduced a new programmable geometric shading pipeline through the use of mesh shaders. The new shaders bring the compute programming model to the graphics pipeline as threads are used cooperatively to generate compact meshes (meshlets) directly on the chip for consumption by the rasterizer.
  • Documentation Center
    01/25/23
    NVIDIA Unified Compute Framework (UCF) is a low-code framework for developing cloud-native, real-time, and multimodal AI applications. It features low-code design tools for microservices and applications, as well as a collection of optimized microservices and sample applications. Adopting a microservices architecture, Unified Compute Framework enables developers to combine microservices into cloud-native applications or services that meet the real-time requirements of interactive AI use cases.
  • Documentation Center
    02/03/23
    VRWorks™ is a comprehensive suite of APIs, libraries, and engines that enable application and headset developers to create amazing virtual reality experiences. VRWorks enables a new level of presence by bringing physically realistic visuals, sound, touch interactions, and simulated environments to virtual reality.
  • Documentation Center
    01/23/23
    OpenACC is a directive-based programming model designed to provide a simple yet powerful approach to accelerators without significant programming effort. With OpenACC, a single version of the source code delivers performance portability across platforms, offering scientists and researchers a quick path to accelerated computing. By inserting compiler “hints” or directives into your C11, C++17, or Fortran 2003 code, you can use the NVIDIA OpenACC compiler to offload and run your code on both the GPU and the CPU.
  • Documentation Center
    01/23/23
    GPU-accelerated enhancements to the gradient-boosting library XGBoost that provide fast and accurate ways to solve large-scale AI and data science problems; a minimal usage sketch follows below.
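    The sketch below trains a small model on synthetic data using the GPU histogram tree method; the data and parameter values are invented for illustration, and newer XGBoost releases express the same thing as `tree_method="hist"` with `device="cuda"`.

    ```python
    # GPU-accelerated XGBoost sketch on synthetic data.
    import numpy as np
    import xgboost as xgb

    X = np.random.rand(1000, 20).astype(np.float32)
    y = (X[:, 0] + X[:, 1] > 1.0).astype(np.int32)

    dtrain = xgb.DMatrix(X, label=y)
    params = {
        "objective": "binary:logistic",
        "tree_method": "gpu_hist",   # XGBoost 2.x+: tree_method="hist", device="cuda"
        "max_depth": 4,
    }
    booster = xgb.train(params, dtrain, num_boost_round=50)
    preds = booster.predict(dtrain)
    ```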
  • Documentation Center
    02/03/23
    Thrust is a powerful library of parallel algorithms and data structures. Thrust provides a flexible, high-level interface for GPU programming that greatly enhances developer productivity. Using Thrust, C++ developers can write just a few lines of code to perform GPU-accelerated sort, scan, transform, and reduction operations orders of magnitude faster than the latest multi-core CPUs.
  • Documentation Center
    01/23/23
    These archives provide access to previously released TensorRT documentation versions.
  • Documentation Center
    01/23/23
    NVIDIA TensorFlow Quantization Toolkit provides a simple API to quantize a given Keras model. Initially, the network is trained on the target dataset until fully converged. The quantization step consists of inserting Q/DQ nodes in the pretrained network to simulate quantization during training. The network is then retrained for a few epochs to recover accuracy in a step called fine-tuning.
  • Documentation Center
    01/23/23
    PyTorch-Quantization is a toolkit for training and evaluating PyTorch models with simulated quantization. Quantization can be added to the model automatically, or manually, allowing the model to be tuned for accuracy and performance. The quantized model can be exported to ONNX and imported to an upcoming version of TensorRT. A minimal sketch of the automatic path follows below.
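    Below is a minimal, hedged sketch of that automatic path, assuming the `pytorch-quantization` package and torchvision are installed; calibration, fine-tuning, and ONNX export are only indicated in comments.

    ```python
    # Sketch: automatic quantized-module replacement with pytorch-quantization.
    import torchvision
    from pytorch_quantization import quant_modules

    # Patch torch.nn so that models built afterwards use quantized variants
    # (e.g. QuantConv2d, QuantLinear) carrying simulated-quantization (Q/DQ) nodes.
    quant_modules.initialize()

    model = torchvision.models.resnet18(weights=None)
    print(type(model.conv1))   # expected: a pytorch_quantization quantized Conv2d

    # Normal next steps (not shown): calibrate on representative data, fine-tune
    # for a few epochs to recover accuracy, then export to ONNX for TensorRT.
    ```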
  • Documentation Center
    01/23/23
    This is the Python API documentation for Polygraphy. Polygraphy is a toolkit designed to assist in running and debugging deep learning models in various frameworks; a short usage sketch follows below.
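    Below is a hedged sketch that runs an ONNX model through ONNX Runtime with Polygraphy's runner API; the model path and input name are placeholders, not values from the documentation above.

    ```python
    # Sketch: run an ONNX model with Polygraphy's ONNX Runtime backend.
    import numpy as np
    from polygraphy.backend.onnxrt import OnnxrtRunner, SessionFromOnnx

    build_session = SessionFromOnnx("model.onnx")        # placeholder path

    with OnnxrtRunner(build_session) as runner:
        # Feed-dict keys must match the model's actual input names.
        outputs = runner.infer(
            feed_dict={"input": np.zeros((1, 3, 224, 224), dtype=np.float32)})
        print(list(outputs.keys()))
    ```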
  • Documentation Center
    01/23/23
    This is the Python API documentation for ONNX GraphSurgeon. ONNX GraphSurgeon provides a convenient way to create and modify ONNX models; a short usage sketch follows below.
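    Below is a hedged sketch of the import/modify/export cycle; the model path is a placeholder, and the "modification" here only removes dangling nodes and re-sorts the graph.

    ```python
    # Sketch: load an ONNX model, inspect/modify it, and save it back.
    import onnx
    import onnx_graphsurgeon as gs

    graph = gs.import_onnx(onnx.load("model.onnx"))      # placeholder path

    # Inspect the graph; real edits would add, remove, or rewire nodes here.
    for node in graph.nodes:
        print(node.op, node.name)

    graph.cleanup().toposort()                           # drop unused nodes, re-sort
    onnx.save(gs.export_onnx(graph), "model_modified.onnx")
    ```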
  • Documentation Center
    01/23/23
    The TensorRT container is an easy-to-use container for TensorRT development. The container allows you to build, modify, and execute TensorRT samples. These release notes provide a list of key features, packaged software in the container, software enhancements and improvements, and known issues for the latest and earlier releases. The TensorRT container is released monthly to provide you with the latest NVIDIA deep learning software libraries and GitHub code contributions that have been sent upstream. The libraries and contributions have all been tested, tuned, and optimized.
  • Documentation Center
    01/23/23
    This Samples Support Guide provides an overview of all the supported NVIDIA TensorRT samples included on GitHub and in the product package. The TensorRT samples specifically help in areas such as recommenders, machine comprehension, character recognition, image classification, and object detection.
  • Documentation Center
    01/23/23
    In TensorRT, operators represent distinct flavors of mathematical and programmatic operations. The following sections describe every operator that TensorRT supports. The minimum workspace required by TensorRT depends on the operators used by the network. A suggested minimum build-time setting is 16 MB. Regardless of the maximum workspace value provided to the builder, TensorRT will allocate at runtime no more than the workspace it requires.
  • Documentation Center
    01/23/23
    This NVIDIA TensorRT Developer Guide demonstrates how to use the C++ and Python APIs for implementing the most common deep learning layers. It shows how you can take an existing model built with a deep learning framework and build a TensorRT engine using the provided parsers. The Developer Guide also provides step-by-step instructions for common user tasks such as creating a TensorRT network definition, invoking the TensorRT builder, serializing and deserializing, and feeding the engine with data to perform inference, all while using either the C++ or Python API; a minimal Python sketch follows below.
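    As a rough illustration of the parse-and-build flow the guide describes, the sketch below parses an ONNX model and builds a serialized engine with the Python API. The file paths are placeholders, and exact flags can differ between TensorRT releases.

    ```python
    # Sketch: build a TensorRT engine from an ONNX model via the Python API.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # Explicit-batch flag is required on TensorRT 8.x; newer releases default to it.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:                  # placeholder path
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    serialized_engine = builder.build_serialized_network(network, config)

    with open("model.plan", "wb") as f:                  # placeholder output path
        f.write(serialized_engine)
    ```

    The resulting plan file would then be deserialized with a TensorRT runtime and fed with data for inference, as the later sections of the Developer Guide describe.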
  • Documentation Center
    01/23/23
    This is the API Reference documentation for the NVIDIA TensorRT library. The following set of APIs allows developers to import pre-trained models, calibrate networks for INT8, and build and deploy optimized networks with TensorRT. Networks can be imported from ONNX. They may also be created programmatically using the C++ or Python API by instantiating individual layers and setting parameters and weights directly.
  • Documentation Center
    01/23/23
    This NVIDIA TensorRT Installation Guide provides the installation requirements, a list of what is included in the TensorRT package, and step-by-step instructions for installing TensorRT.
  • Documentation Center
    01/23/23
    The nvJPEG2000 library provides high-performance, GPU-accelerated JPEG2000 decoding functionality. This library is intended for JPEG2000 formatted images commonly used in deep learning, medical imaging, remote sensing, and digital cinema applications.
  • Documentation Center
    01/23/23
    NVIDIA Performance Primitives (NPP) is a library of functions for performing CUDA-accelerated 2D image and signal processing. This library is widely applicable for developers in these areas and is written to maximize flexibility while maintaining high performance.
  • Documentation Center
    11/30/23
    The NVIDIA Data Loading Library (DALI) is a collection of highly optimized building blocks, and an execution engine, for accelerating the pre-processing of input data for deep learning applications. DALI provides both the performance and the flexibility for accelerating different data pipelines as a single library. This single library can then be easily integrated into different deep learning training and inference applications.
  • Documentation Center
    11/30/23
    This document describes the key features, software enhancements and improvements, and known issues for DALI.
  • Documentation Center
    11/30/23
    This document is the Software License Agreement (SLA) for NVIDIA Data Loading Library (DALI). This document contains specific license terms and conditions for NVIDIA DALI. By accepting this agreement, you agree to comply with all the terms and conditions applicable to the specific product(s) included herein.
  • Documentation Center
    01/23/23
    Learn how to develop for NVIDIA DRIVE®, a scalable computing platform that enables automakers and Tier-1 suppliers to accelerate production of autonomous vehicles.