NVIDIA today announced NVIDIA CUDA 6, the latest version of the world's most pervasive parallel computing platform and programming model.
The CUDA 6 platform makes parallel programming easier than ever, enabling software developers to dramatically decrease the time and effort required to accelerate their scientific, engineering, enterprise and other applications with GPUs.
It offers new performance enhancements that enable developers to accelerate applications by up to 8X simply by replacing existing CPU-based libraries. Key features of CUDA 6 include:
- Unified Memory -- Simplifies programming by enabling applications to access CPU and GPU memory without the need to manually copy data from one to the other, and makes it easier to add support for GPU acceleration in a wide range of programming languages.
- Drop-in Libraries -- Automatically accelerates applications' BLAS and FFTW calculations by up to 8X by simply replacing the existing CPU libraries with the GPU-accelerated equivalents.
- Multi-GPU Scaling -- Re-designed BLAS and FFT GPU libraries automatically scale performance across up to eight GPUs in a single node, delivering over nine teraflops of double precision performance per node, and supporting larger workloads than ever before (up to 512 GB). Multi-GPU scaling can also be used with the new BLAS drop-in library.
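As a concrete illustration of the drop-in library mechanism: a sketch, assuming the Linux shared-library conventions and the NVBLAS library that ships with the CUDA 6 toolkit (the config file name and paths below are illustrative), an existing BLAS-linked binary can be redirected to the GPU without recompilation:

```shell
# Illustrative nvblas.conf — tells NVBLAS which CPU BLAS to fall back to
# (path is an example; adjust for your system):
#   NVBLAS_CPU_BLAS_LIB  /usr/lib/libopenblas.so
#   NVBLAS_GPU_LIST      ALL

# Preload the GPU drop-in library ahead of the CPU BLAS, so Level-3
# BLAS calls (e.g. DGEMM) are intercepted and run on the GPU.
# "./my_dgemm_app" is a placeholder for any dynamically linked
# BLAS-using executable.
NVBLAS_CONFIG_FILE=./nvblas.conf \
LD_PRELOAD=/usr/local/cuda/lib64/libnvblas.so ./my_dgemm_app
```

No source changes are needed; calls the drop-in library does not cover fall through to the original CPU implementation.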
"By automatically handling data management, Unified Memory enables us to quickly prototype kernels running on the GPU and reduces code complexity, cutting development time by up to 50 percent," said Rob Hoekstra, manager of Scalable Algorithms Department at Sandia National Laboratories. "Having this capability will be very useful as we determine future programming model choices and port more sophisticated, larger codes to GPUs."
"Our technologies have helped major studios, game developers and animators create visually stunning 3D animations and effects," said Paul Doyle, CEO at Fabric Engine, Inc. "They have been urging us to add support for acceleration on NVIDIA GPUs, but memory management proved too difficult a challenge when dealing with the complex use cases in production. With Unified Memory, this is handled automatically, allowing the Fabric compiler to target NVIDIA GPUs and enabling our customers to run their applications up to 10X faster."
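The data management the quotes above describe can be sketched in a few lines. This is a minimal illustration, assuming a CUDA 6 toolkit and a supported GPU; `cudaMallocManaged` is the Unified Memory allocator the release introduces, and the kernel and sizes are invented for the example:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Doubles each element; runs on the GPU.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int n = 1024;
    float *data;

    // One managed allocation, visible to both CPU and GPU —
    // no explicit cudaMemcpy in either direction.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i)   // host writes the pointer directly
        data[i] = 1.0f;

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);
    cudaDeviceSynchronize();      // wait for the GPU before host reads

    printf("data[0] = %f\n", data[0]);
    cudaFree(data);
    return 0;
}
```

Before Unified Memory, the same program would need separate host and device buffers and a `cudaMemcpy` before and after the kernel launch; that bookkeeping is what the quoted developers describe eliminating.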
In addition to the new features, the CUDA 6 platform offers a full suite of programming tools, GPU-accelerated math libraries, documentation and programming guides.
Version 6 of the CUDA Toolkit is expected to be available in early 2014. Members of the CUDA-GPU Computing Registered Developer Program will be notified when it is available for download. To join the program, register here.
For more information about the CUDA 6 platform, visit NVIDIA booth 613 at SC13, Nov. 18-21 in Denver, and the NVIDIA CUDA website.