NVIDIA Tesla: Features and Benefits

C for the GPU
A widely accepted, high-level, open-standard programming language that unlocks the power of programmable GPUs to enable entirely new categories of applications and new levels of computing performance.
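For illustration, a minimal sketch of what this looks like in practice, assuming the CUDA toolkit and its nvcc compiler (the kernel name saxpy and the sizes below are illustrative, not from this datasheet): a data-parallel routine is written as ordinary C and launched across many GPU threads.

    // saxpy.cu -- minimal sketch of C for the GPU; build with: nvcc saxpy.cu
    #include <cuda_runtime.h>

    // Kernel: every GPU thread computes one element of y = a*x + y.
    __global__ void saxpy(int n, float a, const float *x, float *y)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        const int n = 1 << 20;
        float *x, *y;
        cudaMalloc((void **)&x, n * sizeof(float));      // memory on the GPU
        cudaMalloc((void **)&y, n * sizeof(float));
        // ... fill x and y from host data (omitted) ...
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // one thread per element
        cudaDeviceSynchronize();                         // wait for the GPU to finish
        cudaFree(x);
        cudaFree(y);
        return 0;
    }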

Supercomputing Performance
Peak performance of over 500 gigaflops per GPU on floating-point operations in data-intensive applications.
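As a rough reading of where a figure in this range comes from (our arithmetic, not the datasheet's, and assuming the 128 processors run at 1.35 GHz and can issue a multiply-add plus a multiply each cycle): 128 processors × 3 floating-point operations per clock × 1.35 GHz ≈ 518 gigaflops.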

Multi-GPU Computing
Multiple GPUs can be controlled by a single CPU via the CUDA GPU computing driver, delivering incredible throughput on computing applications. The power of the GPU to solve large-scale problems can be multiplied by splitting the problem across multiple GPUs.
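A hedged sketch of one way to split a problem across the installed GPUs, assuming a CUDA runtime in which a single host thread can switch devices with cudaSetDevice (the kernel and variable names are illustrative):

    // multigpu.cu -- sketch: split one array across every visible GPU (nvcc multigpu.cu)
    #include <cuda_runtime.h>
    #include <stdlib.h>

    __global__ void scale(float *data, int n)             // placeholder per-GPU kernel
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;
    }

    int main(void)
    {
        const int total = 1 << 22;
        float *host = (float *)malloc(total * sizeof(float));
        for (int i = 0; i < total; ++i) host[i] = 1.0f;

        int gpus = 0;
        cudaGetDeviceCount(&gpus);                         // GPUs visible to the computing driver
        if (gpus == 0) return 1;
        int chunk = total / gpus;                          // even split; remainder ignored for brevity

        for (int d = 0; d < gpus; ++d) {
            cudaSetDevice(d);                              // subsequent calls target GPU d
            float *dev;
            cudaMalloc((void **)&dev, chunk * sizeof(float));
            cudaMemcpy(dev, host + d * chunk, chunk * sizeof(float), cudaMemcpyHostToDevice);
            scale<<<(chunk + 255) / 256, 256>>>(dev, chunk);
            cudaMemcpy(host + d * chunk, dev, chunk * sizeof(float), cudaMemcpyDeviceToHost);
            cudaFree(dev);
        }
        free(host);
        return 0;
    }

The loop is sequential for clarity; in practice each GPU would typically be driven from its own host thread so the devices work concurrently.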

Flexible Form Factor
Workstation, deskside tower, or standard 19" 1U rack form factors enable deployment in a wide range of environments.

Breakthrough Computing Density
Unmatched performance in high-density form factors enables breakthrough levels of capability and productivity.
Available only on the Tesla deskside supercomputer and server.

Standard 1U Server Form Factor
Industry-standard form factor optimized for large-scale server deployments. Four Tesla GPU computing processors in a high-density 1U chassis offer the highest performance for parallel applications. Performance-optimized and power-optimized products cover the range of IT server-room requirements.
Available only on the Tesla Server.

Data Center Support in 1U Server
System monitoring, thermal control, and fault notification in the 1U server product provide the features needed to integrate GPU computing servers efficiently into the data center.
Available only on the Tesla Server.

Ultra Quiet Design
Sub-40 dB acoustics, lower than most desktop PCs, for a quiet desktop environment.
Available only on the Deskside System.

NVIDIA GPU Computing Drivers
Management of GPU resources and an extensive runtime library for enhanced data management and program execution. Offers a high-speed data transfer path and a streamlined computing driver that runs independently of the graphics driver.
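As a small sketch of the kind of resource management the driver and runtime expose (assuming the CUDA runtime API; the output formatting is ours), a program can enumerate the installed GPUs and query their properties before choosing one:

    // devicequery.cu -- sketch: enumerate GPUs through the CUDA runtime (nvcc devicequery.cu)
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        int count = 0;
        cudaGetDeviceCount(&count);                   // devices managed by the computing driver
        for (int d = 0; d < count; ++d) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, d);        // capabilities reported by the driver
            printf("GPU %d: %s, %d multiprocessors, %zu MB of memory\n",
                   d, prop.name, prop.multiProcessorCount,
                   prop.totalGlobalMem / (1024 * 1024));
        }
        cudaSetDevice(0);                             // bind this host thread to the first GPU
        return 0;
    }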

Massively Multi-threaded Computing Architecture
Executes thousands of concurrent threads for high throughput parallel processing of mathematically intensive problems. Modular design scales across multiple NVIDIA GPUs.
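A hedged sketch of how that thread count is typically used (the kernel below is illustrative): the launch spreads tens of thousands of threads over the GPU, and each thread strides through the data so a single kernel covers any problem size.

    // threads.cu -- sketch: thousands of concurrent threads with a grid-stride loop
    #include <cuda_runtime.h>

    __global__ void increment_all(float *data, int n)
    {
        int stride = gridDim.x * blockDim.x;          // total threads in this launch
        for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
            data[i] += 1.0f;                          // each thread handles many elements
    }

    int main(void)
    {
        const int n = 1 << 24;
        float *dev;
        cudaMalloc((void **)&dev, n * sizeof(float));
        cudaMemset(dev, 0, n * sizeof(float));
        increment_all<<<128, 256>>>(dev, n);          // 128 blocks x 256 threads = 32,768 threads
        cudaDeviceSynchronize();
        cudaFree(dev);
        return 0;
    }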

Parallel Data Cache
Multiplies effective bandwidth and reduces latency by enabling groups of processors to collaborate on shared data in the local cache. Data is copied fewer times and is immediately available to all processors sharing the same Parallel Data Cache.
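In C for the GPU this cache is exposed as per-block shared memory; a hedged sketch (our kernel, not NVIDIA's) of a thread block cooperating on data staged there to compute a partial sum:

    // shared.cu -- sketch: a thread block cooperates through the Parallel Data Cache
    #include <cuda_runtime.h>

    __global__ void block_sum(const float *in, float *out, int n)
    {
        __shared__ float cache[256];                  // on-chip memory shared by the block
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + tid;
        cache[tid] = (i < n) ? in[i] : 0.0f;          // each element is read from DRAM only once
        __syncthreads();                              // wait until the whole block has loaded

        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s)
                cache[tid] += cache[tid + s];         // tree reduction entirely in the cache
            __syncthreads();
        }
        if (tid == 0)
            out[blockIdx.x] = cache[0];               // one partial sum per block
    }

    int main(void)
    {
        const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
        float *in, *out;
        cudaMalloc((void **)&in, n * sizeof(float));
        cudaMalloc((void **)&out, blocks * sizeof(float));
        cudaMemset(in, 0, n * sizeof(float));         // placeholder data (all zeros)
        block_sum<<<blocks, threads>>>(in, out, n);
        cudaDeviceSynchronize();
        cudaFree(in);
        cudaFree(out);
        return 0;
    }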

384-Bit Memory Interface
Delivers the industry’s highest memory bandwidth for blistering data transfer. Supports the world’s fastest GDDR3 memory with lower power consumption than previous systems.

IEEE 754 Single Precision Floating Point
128 independent IEEE 754 single-precision floating-point units with support for advanced floating-point features found in modern CPUs.

High-Speed PCI Express Data Transfer
With low latency and high bandwidth, computing applications benefit from the highest data transfer rate possible through the standard PCI Express architecture.
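A hedged sketch of using page-locked ("pinned") host buffers, which generally transfer faster over PCI Express than pageable memory (the buffer size below is arbitrary):

    // transfer.cu -- sketch: page-locked host memory for fast PCI Express transfers
    #include <cuda_runtime.h>

    int main(void)
    {
        const size_t bytes = 256UL * 1024 * 1024;     // 256 MB test buffer
        float *host, *dev;
        cudaMallocHost((void **)&host, bytes);        // page-locked (pinned) host buffer
        cudaMalloc((void **)&dev, bytes);

        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);   // host -> GPU across the bus
        cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);   // GPU -> host across the bus

        cudaFree(dev);
        cudaFreeHost(host);
        return 0;
    }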

1.5 GB GDDR3 Dedicated Memory with Ultra-Fast Memory Bandwidth
On-board 1.5 GB local memory and up to 76.8 GB/sec memory bandwidth per GPU enable computation on large problem sets.
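For reference, the bandwidth figure follows from the memory interface itself, assuming the 800 MHz (1.6 GT/s effective) GDDR3 memory clock used on these boards: 384 bits ÷ 8 = 48 bytes per transfer, and 48 bytes × 1.6 G transfers/sec = 76.8 GB/sec.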

Compatible with Industry Standard Architectures
Based on industry-standard architecture, Tesla is compatible with 32-bit and 64-bit x86 microprocessor architectures from Intel and AMD, as well as Microsoft Windows and Linux operating systems.
