WOLF Webinars

 

NVIDIA Turing™ is coming soon to our 3U/6U VPX products.

Join us for a 25-minute webinar and you'll get a guided tour of three new WOLF product designs featuring the NVIDIA Turing™ architecture, giving you the information you need to make better system design decisions.

Production begins this fall.

Email any specific questions you want answered during the webinar to patrick@wolf.ca, or ask them during the webinar itself; the choice is yours. Anonymous chat will be enabled.

Register for the webinar to attend.

 

NVIDIA Turing Webinar Invitation for Military and Aerospace Companies
 

NVIDIA Turing™ 3U/6U VPX
Webinar
July 25 at 1:00 PM EST

In this 25-minute webinar, Greg Maynard will walk through three new product designs from WOLF that feature NVIDIA Turing™ GPUs. We'll cover digital and analog I/O options, production schedule, roadmap, and what we believe the overall impact of the NVIDIA Turing™ architecture will be for future products. Anonymous chat will be enabled for this webinar.

 

Why NVIDIA Turing™?

Let's start with the specifications for the NVIDIA® Quadro RTX™ 5000 (Turing™ TU104), one of the GPUs we'll be incorporating into our new designs.

These specifications include:

  • 11 TFLOPS of peak single-precision (FP32) performance
  • 3,072 CUDA cores
  • 384 Tensor cores and 48 RT (ray-tracing) cores
  • New INT8 and INT4 precision modes for inferencing operations
  • 16 GB of GDDR6 memory with 448 GB/s of bandwidth
  • NVENC and NVDEC (version 7.2) encode/decode acceleration for H.265 (HEVC) and H.264 (AVC), with B-frame support and significant bitrate savings
  • DisplayPort 1.4a supporting 8K resolution at 60 Hz or 4K at 120 Hz

The NVIDIA Turing™ GPUs include CUDA cores for parallel processing, Tensor cores for dedicated AI inference, and ray-tracing (RT) cores for superior rendering speeds. The WOLF FGX provides video conversion to formats that are not native to the GPU, such as SDI and various analog formats. Two of our new designs will include the WOLF FGX.
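To make "CUDA cores for parallel processing" concrete, here is a minimal, generic CUDA sketch (our own illustration, not WOLF or NVIDIA sample code) in which each GPU thread handles one element of a vector operation, so the work spreads across the GPU's 3,072 CUDA cores:

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Each GPU thread computes one element of y = a*x + y (classic SAXPY),
// so the work is distributed across the GPU's CUDA cores in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                       // 1M elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));    // unified memory keeps the sketch short
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // 256 threads per block
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);                 // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```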

Support for GDDR6 memory provides twice the bandwidth of the previous generation’s GDDR5 memory.
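For reference, the 448 GB/s figure quoted above follows directly from the memory configuration commonly listed for the Quadro RTX 5000, a 256-bit GDDR6 interface running at 14 Gbps per pin (our assumption; the bus width and pin rate are not stated above). A quick sketch of the arithmetic:

```cuda
#include <cstdio>

int main()
{
    // Assumed memory configuration (not stated above): 256-bit GDDR6 at 14 Gbps per pin.
    const double bus_width_bits = 256.0;
    const double data_rate_gbps = 14.0;

    // Peak bandwidth = bus width (bits) x data rate (Gbps) / 8 bits per byte.
    const double peak_gb_per_s = bus_width_bits * data_rate_gbps / 8.0;

    printf("Peak GDDR6 bandwidth: %.0f GB/s\n", peak_gb_per_s);  // prints 448 GB/s
    return 0;
}
```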

Two of our new modules also include a PCIe switch with non-transparent bridge (NTB) capability, which allows each module to be configured for compatibility with various OpenVPX slot profiles while also supporting device lending, multi-casting, and asymmetric processing.

The Turing GPU, with its Tensor cores, provides these modules with the underlying architecture required for AI inference. Intended to work in conjunction with TensorRT, CUDA, and cuDNN, the Turing Tensor Core design adds INT8 and INT4 matrix operations while continuing to support FP16 for higher-precision workloads.
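As a rough sketch of what a Tensor Core "matrix operation" looks like at the CUDA level, the fragment below issues a single 16x16x16 FP16 multiply-accumulate through CUDA's WMMA API (a generic illustration of ours; in practice TensorRT and cuDNN generate these operations for you, and the INT8/INT4 paths use the same API with different fragment types):

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes a single 16x16x16 tile D = A * B, with A and B in FP16
// and the accumulator in FP32 -- the basic operation Turing Tensor cores execute.
// Launch with one warp, e.g. tensor_core_gemm_tile<<<1, 32>>>(dA, dB, dD);
// compile with nvcc -arch=sm_75 (or newer).
__global__ void tensor_core_gemm_tile(const half *A, const half *B, float *D)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);                  // start from C = 0 in this sketch
    wmma::load_matrix_sync(a_frag, A, 16);                // leading dimension 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);   // executed on the Tensor cores
    wmma::store_matrix_sync(D, acc_frag, 16, wmma::mem_row_major);
}
```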

Register for the webinar to attend.