Product Info
The NVIDIA® Tesla™ C1060 Computing Processor enables the transition to energy-efficient parallel computing by bringing the performance of a small cluster to a workstation.

The NVIDIA® Tesla™ C1060 transforms a workstation into a high-performance computing machine that outperforms a small cluster. This gives technical professionals a dedicated computing resource at their deskside that is much faster and more energy-efficient than a shared cluster in the data center. The Tesla C1060 is based on the massively parallel, many-core Tesla processor, which is coupled with the standard CUDA C programming environment to simplify many-core programming.
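To give a feel for the CUDA C environment mentioned above, the following is a minimal sketch of a vector-addition program; the array size, block size, and kernel name are illustrative choices, not part of the product specification.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Each thread adds one element; blocks of threads are spread across the GPU's cores.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                       // 1M elements (illustrative)
    size_t bytes = n * sizeof(float);

    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);                // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

Compiled with nvcc, the same source runs unchanged whether the launch covers one block or thousands, which is how the card's 240 cores are kept busy.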

Feeding the HPC Industry’s Relentless Demand for Performance.
Keeps pace with the increasing demands of the toughest computing challenges, including drug research, oil and gas exploration, and computational finance.

Many-core Architecture Delivers Optimum Scaling across HPC Applications.
The Tesla many-core architecture meets the computational demands of applications whose complexity has outstripped the CPU's ability to solve them.
 
High Efficiency Computing Platform for Energy-conscious Organizations.
Higher-performance, higher-density computing that solves complex problems with fewer resources and less power.
 
NVIDIA CUDA™ Technology Unlocks the Power of Tesla Many-core Computing Products.
The only C language environment that unlocks the many-core processing power of GPUs to solve the world's most computationally intensive challenges.

Features and Benefits
Massively parallel many-core architecture with 240 processor cores: Solve compute problems on your workstation that traditionally required a large cluster installation.
4 GB high-speed memory: Larger datasets can be stored locally for each processor, maximizing benefit from the 102 GB/s memory bandwidth and minimizing data movement around the system.
Widely accepted, easy-to-learn CUDA C programming environment: Easily express application parallelism to take advantage of the GPU's many-core architecture.
Scale to multiple GPUs: Solve large-scale problems by harnessing the performance of thousands of processor cores across multiple GPUs (see the multi-GPU sketch after this table).
IEEE 754 single- and double-precision floating-point units: Achieve the highest floating-point performance from a single chip while meeting the precision requirements of your application.
Asynchronous transfer capability: Turbocharges system performance by executing data transfers concurrently with computation (see the stream sketch after this table).
512-bit memory interface from GPU to on-board memory: Fast 512-bit GDDR3 interface delivers 102 GB/s of memory bandwidth for blistering data transfer.
Shared data memory: Groups of processor cores can collaborate through low-latency on-chip memory (see the shared-memory sketch after this table).
High-speed PCI Express Gen 2.0 data transfer: Fast, high-bandwidth communication between CPU and GPU.
Tesla GPUs available in flexible form factors: Tesla workstation products and 1U systems enable deployment in a wide range of environments.
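The multi-GPU sketch referenced in the table above: work is divided across every CUDA device in the system with cudaSetDevice. The scaleChunk kernel and the per-GPU problem size are illustrative; note that recent CUDA toolkits let one host thread drive several devices, whereas toolkits contemporary with the C1060 typically used one host thread per GPU.

#include <stdlib.h>
#include <cuda_runtime.h>

__global__ void scaleChunk(float *d, int n, float f)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);                  // e.g. several Tesla boards

    const int perGpu = 1 << 20;                  // illustrative per-GPU slice
    float **d = (float **)malloc(count * sizeof(float *));

    // Give each GPU its own slice of the problem.
    // (Recent CUDA lets one host thread address all devices; toolkits of the
    //  C1060 era typically ran one host thread per GPU instead.)
    for (int g = 0; g < count; ++g) {
        cudaSetDevice(g);
        cudaMalloc(&d[g], perGpu * sizeof(float));
        scaleChunk<<<(perGpu + 255) / 256, 256>>>(d[g], perGpu, 2.0f);
    }
    for (int g = 0; g < count; ++g) {
        cudaSetDevice(g);
        cudaDeviceSynchronize();                 // wait for that GPU's work
        cudaFree(d[g]);
    }
    free(d);
    return 0;
}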
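The stream sketch referenced in the table above shows the asynchronous transfer capability: the input is split into two chunks on separate CUDA streams so that one chunk's copy over PCI Express can overlap the other chunk's kernel execution. The chunk count and the scale kernel are illustrative, and overlap requires page-locked host memory, hence cudaMallocHost.

#include <cuda_runtime.h>

__global__ void scale(float *d, int n, float f)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

int main(void)
{
    const int n = 1 << 22, chunk = n / 2;        // two chunks (illustrative)
    float *h, *d;
    cudaMallocHost(&h, n * sizeof(float));       // pinned memory enables async copies
    cudaMalloc(&d, n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    cudaStream_t s[2];
    cudaStreamCreate(&s[0]);
    cudaStreamCreate(&s[1]);

    // Each stream copies its chunk in, scales it, and copies it back;
    // transfers in one stream can overlap kernel work in the other.
    for (int c = 0; c < 2; ++c) {
        int off = c * chunk;
        cudaMemcpyAsync(d + off, h + off, chunk * sizeof(float),
                        cudaMemcpyHostToDevice, s[c]);
        scale<<<(chunk + 255) / 256, 256, 0, s[c]>>>(d + off, chunk, 2.0f);
        cudaMemcpyAsync(h + off, d + off, chunk * sizeof(float),
                        cudaMemcpyDeviceToHost, s[c]);
    }
    cudaDeviceSynchronize();

    cudaStreamDestroy(s[0]); cudaStreamDestroy(s[1]);
    cudaFree(d); cudaFreeHost(h);
    return 0;
}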
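The shared-memory sketch referenced in the table above: threads in a block cooperate on a partial sum through low-latency __shared__ memory before one thread writes the block's result. The 256-thread block size and the partialSum name are illustrative.

// Each 256-thread block reduces its slice of the input in on-chip shared memory.
__global__ void partialSum(const float *in, float *blockSums, int n)
{
    __shared__ float cache[256];                 // low-latency, per-block memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    cache[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();

    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        blockSums[blockIdx.x] = cache[0];
}

Launched as partialSum<<<numBlocks, 256>>>(in, blockSums, n), each block contributes one partial sum that the host or a second pass can then add up.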
 
Form Factor: 10.5" x 4.376", dual slot
# of Tesla GPUs: 1
# of Streaming Processor Cores: 240
Frequency of Processor Cores: 1.296 GHz
Single-Precision Floating-Point Performance: 933 GFLOPS
Double-Precision Floating-Point Performance: 78 GFLOPS
Floating-Point Precision: IEEE 754 single and double precision
Total Dedicated Memory: 4 GB
Memory Speed: 800 MHz
Memory Interface: 512-bit GDDR3
Memory Bandwidth: 102 GB/s
Max Power Consumption: 187.8 W
System Interface: PCI Express x16 Gen 2
Auxiliary Power Connectors: Two 6-pin or one 8-pin
Thermal Solution: Active fan sink
Programming Environment: C (CUDA)
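The figures in this table can be read back at run time with the CUDA runtime's cudaGetDeviceProperties; the sketch below prints a few of them, with the values expected for a C1060 noted in comments (30 multiprocessors of 8 cores each on this architecture generation).

#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);              // query device 0

    printf("Name:               %s\n", p.name);                        // "Tesla C1060"
    printf("Multiprocessors:    %d (%d cores)\n",
           p.multiProcessorCount, p.multiProcessorCount * 8);          // 30 (240 cores)
    printf("Core clock:         %.3f GHz\n", p.clockRate / 1e6);       // 1.296 GHz (clockRate is in kHz)
    printf("Global memory:      %.1f GB\n",
           p.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));             // ~4 GB
    printf("Compute capability: %d.%d\n", p.major, p.minor);           // 1.3
    return 0;
}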