NVIDIA Tesla P100
Infinite Compute Power for the Modern Data Centre

The Most Advanced Data Centre GPU Ever Built

Artificial intelligence for self-driving cars. Predicting our climate's future. A new drug to treat cancer. Some of the world's most important challenges need to be solved today, but require tremendous amounts of computing to become reality. Today's data centres rely on many interconnected commodity compute nodes, limiting the performance needed to drive important High Performance Computing (HPC) and hyperscale workloads.

NVIDIA® Tesla® P100 GPU accelerators are the most advanced ever built for the data centre. They tap into the new NVIDIA Pascal™ GPU architecture to deliver the world's fastest compute node with higher performance than hundreds of slower commodity nodes. Higher performance with fewer, lightning-fast nodes enables data centres to dramatically increase throughput while also saving money.

With over 400 HPC applications accelerated, including 9 out of the top 10, as well as all deep learning frameworks, every HPC customer can now deploy accelerators in their data centres.

 

TESLA P100 WITH NVLINK DELIVERS UP TO 50X PERFORMANCE BOOST FOR DATA CENTRE APPLICATIONS

Chart: NVIDIA Tesla P100 performance versus Tesla K80

NVIDIA TESLA P100 ACCELERATOR FEATURES AND BENEFITS

The Tesla P100 is reimagined from silicon to software, crafted with innovation at every level. Each groundbreaking technology delivers a dramatic jump in performance to inspire the creation of the world's fastest compute node.

 
Exponential Performance Leap with Pascal Architecture

The new NVIDIA Pascal™ architecture enables the Tesla P100 to deliver the highest absolute performance for HPC and hyperscale workloads. With more than 21 TeraFLOPS of FP16 performance, Pascal is optimised to drive exciting new possibilities in deep learning applications. Pascal also delivers over 5 and 10 TeraFLOPS of double and single precision performance for HPC workloads.
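
As a rough illustration of how software taps that FP16 rate (a minimal sketch of our own, not code from NVIDIA's materials): CUDA exposes half precision through the __half and __half2 types in cuda_fp16.h, and packing two FP16 values into one __half2 lets a single instruction operate on both, which is how Pascal reaches its peak FP16 throughput. The array size and launch parameters below are arbitrary assumptions.

// Hedged sketch: FP16 vector addition with paired half2 arithmetic on Pascal (sm_60).
#include <cuda_fp16.h>
#include <cuda_runtime.h>

__global__ void add_fp16(const __half2 *a, const __half2 *b, __half2 *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = __hadd2(a[i], b[i]);   // adds two FP16 pairs per instruction
    }
}

int main()
{
    const int n = 1 << 20;            // 1M half2 elements = 2M FP16 values (illustrative)
    __half2 *a, *b, *c;
    cudaMalloc(&a, n * sizeof(__half2));
    cudaMalloc(&b, n * sizeof(__half2));
    cudaMalloc(&c, n * sizeof(__half2));
    // Data initialisation is omitted to keep the sketch short.
    add_fp16<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Compiling with nvcc -arch=sm_60 targets the Pascal generation that the Tesla P100 belongs to.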

 
Applications at Massive Scale with NVIDIA NVLink

Performance is often throttled by the interconnect. The revolutionary NVIDIA NVLink™ high-speed bidirectional interconnect is designed to scale applications across multiple GPUs by delivering 5X higher performance compared to today's best-in-class technology.
Note: this technology is not available in Tesla P100 for PCIe.
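
For illustration only (a hedged sketch, not NVIDIA sample code): in CUDA, multi-GPU traffic takes advantage of NVLink through the standard peer-to-peer API; the same calls fall back to PCIe on systems without NVLink, such as Tesla P100 for PCIe. The buffer size and device numbering below are assumptions.

// Hedged sketch: enable peer access between GPU 0 and GPU 1 and copy a buffer GPU-to-GPU.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);      // can GPU 0 access GPU 1's memory?
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    if (!can01 || !can10) {
        printf("Peer access between GPU 0 and GPU 1 is not available.\n");
        return 0;
    }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);           // second argument is reserved and must be 0
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);

    const size_t bytes = 64u << 20;             // 64 MB, illustrative size
    void *src = nullptr, *dst = nullptr;
    cudaSetDevice(1); cudaMalloc(&src, bytes);
    cudaSetDevice(0); cudaMalloc(&dst, bytes);

    // Direct GPU-to-GPU copy; on an NVLink-enabled node this travels over NVLink.
    cudaMemcpyPeer(dst, 0, src, 1, bytes);
    cudaDeviceSynchronize();

    cudaFree(dst);
    cudaSetDevice(1); cudaFree(src);
    return 0;
}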

Unprecedented Efficiency with CoWoS with HBM2

The Tesla P100 tightly integrates compute and data on the same package by adding CoWoS® (Chip-on-Wafer-on-Substrate) with HBM2 technology to deliver 3X memory performance over the NVIDIA Maxwell™ architecture. This provides a generational leap in time-to-solution for data-intensive applications.
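
One way to observe that memory performance from software (a rough sketch under our own assumptions, not an NVIDIA benchmark) is to time a large device-to-device copy with CUDA events; because every byte is read once and written once, the measured rate approaches the card's usable HBM2 bandwidth. The 1 GiB buffer size is an arbitrary choice.

// Hedged sketch: estimate device memory bandwidth from a timed device-to-device copy.
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    const size_t bytes = 1ull << 30;            // 1 GiB per buffer (illustrative)
    void *src = nullptr, *dst = nullptr;
    cudaMalloc(&src, bytes);
    cudaMalloc(&dst, bytes);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    double gbps = 2.0 * bytes / (ms * 1e-3) / 1e9;   // read + write traffic
    printf("Device-to-device bandwidth: ~%.0f GB/s\n", gbps);

    cudaEventDestroy(start); cudaEventDestroy(stop);
    cudaFree(src); cudaFree(dst);
    return 0;
}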



 
Simpler Programming with Page Migration Engine

Page Migration Engine frees developers to focus more on tuning for computing performance and less on managing data movement. Applications can now scale beyond the GPU's physical memory size to a virtually limitless amount of memory.
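
A minimal sketch of the programming model this enables (our illustration, with an assumed allocation size, not NVIDIA sample code): on Pascal, a cudaMallocManaged allocation may exceed the GPU's physical memory, and pages migrate to the GPU on demand as a kernel touches them.

// Hedged sketch: managed memory larger than a 16 GB P100, migrated on demand.
#include <cuda_runtime.h>

__global__ void scale(float *data, size_t n, float factor)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const size_t n = 6ull * 1024 * 1024 * 1024;      // 6G floats = 24 GiB, beyond device memory
    float *data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;   // touched first on the CPU

    unsigned int blocks = (unsigned int)((n + 255) / 256);
    scale<<<blocks, 256>>>(data, n, 2.0f);           // pages fault and migrate to the GPU
    cudaDeviceSynchronize();

    cudaFree(data);
    return 0;
}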

 
CoWoS is a registered trademark of Taiwan Semiconductor Manufacturing Company Limited

NVIDIA Tesla P100 for Strong-Scale HPC

Tesla P100 with NVIDIA NVLink technology enables lightning-fast nodes to substantially accelerate time-to-solution for strong-scale applications. A server node with NVLink can interconnect up to eight Tesla P100s at 5X the bandwidth of PCIe. It’s designed to help solve the world’s most important challenges that have infinite compute needs in HPC and deep learning.
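
As a hedged sketch of the strong-scaling pattern described here (not NVIDIA's code; the kernel and problem size are placeholders), one problem can be split across every Tesla P100 in the node, with each GPU launching work on its own slice concurrently:

// Hedged sketch: split one workload evenly across all GPUs in the node.
#include <cuda_runtime.h>
#include <vector>

__global__ void saxpy(float a, const float *x, float *y, size_t n)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    int numGpus = 0;
    cudaGetDeviceCount(&numGpus);                    // e.g. up to eight P100s in an NVLink node
    if (numGpus == 0) return 1;

    const size_t total = 1ull << 28;                 // total elements, split evenly (illustrative)
    const size_t perGpu = total / numGpus;
    std::vector<float*> x(numGpus), y(numGpus);

    for (int d = 0; d < numGpus; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&x[d], perGpu * sizeof(float));
        cudaMalloc(&y[d], perGpu * sizeof(float));
        unsigned int blocks = (unsigned int)((perGpu + 255) / 256);
        saxpy<<<blocks, 256>>>(2.0f, x[d], y[d], perGpu);   // launches overlap across GPUs
    }

    for (int d = 0; d < numGpus; ++d) {              // wait for every GPU, then clean up
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(x[d]); cudaFree(y[d]);
    }
    return 0;
}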

 

NVIDIA Tesla P100 for Mixed-Workload HPC

Tesla P100 for PCIe enables mixed-workload HPC data centres to realise a dramatic jump in throughput while saving money. For example, a single GPU-accelerated node powered by four Tesla P100s interconnected with PCIe replaces up to 32 commodity CPU nodes for a variety of applications. Completing all the jobs with far fewer powerful nodes means that customers can save up to 70% in overall data centre costs.

 

PERFORMANCE SPECIFICATION FOR NVIDIA TESLA P100 ACCELERATORS

 
                                                      P100 for PCIe-Based Servers    P100 for NVLink-Optimised Servers
Double-Precision Performance                          4.7 TeraFLOPS                  5.3 TeraFLOPS
Single-Precision Performance                          9.3 TeraFLOPS                  10.6 TeraFLOPS
Half-Precision Performance                            18.7 TeraFLOPS                 21.2 TeraFLOPS
NVIDIA NVLink™ Interconnect Bandwidth                 -                              160 GB/s
PCIe x16 Interconnect Bandwidth                       32 GB/s                        32 GB/s
CoWoS HBM2 Stacked Memory Capacity                    16 GB or 12 GB                 16 GB
CoWoS HBM2 Stacked Memory Bandwidth                   732 GB/s or 549 GB/s           732 GB/s
Enhanced Programmability with Page Migration Engine   Yes                            Yes
ECC Protection for Reliability                        Yes                            Yes
Server-Optimised for Data Centre Deployment           Yes                            Yes

NVIDIA TESLA P100 PRODUCT LITERATURE

NVIDIA Pascal Infographic (PDF – 1.03MB)
NVIDIA Pascal Architecture Whitepaper (PDF – Registration Required)
P100 Datasheet (PDF – 342KB)
P100 for PCIe Datasheet (PDF – 365KB)

Get the NVIDIA Tesla P100 Today

The Tesla P100 is available in the NVIDIA® DGX-1™ system—purpose-built for deep learning.

Where To Buy Tesla

Find Systems Powered By NVIDIA Tesla GPUs.

 
 