Wednesday, November 17, 2010

NVIDIA Tesla GPUs Power Three of the World's Top Five Supercomputers



(Source: 超頻者天堂)

NVIDIA GPU Supercomputers Take Key Positions in the Global Supercomputer Rankings

The November 2010 edition of the TOP500 list of the world's 500 fastest supercomputers was released today at www.top500.org. Among the top five systems, NVIDIA® Tesla™ graphics processing units (GPUs) supply the computing power behind three.

The systems ranked first, third, and fourth on the list all use Tesla GPUs. Among them, the recently announced Tianhe-1A leapt to the number one spot with a record performance of 2.507 petaflops.
The top three supercomputers together deliver more performance than the remaining seven systems in the top ten combined. Most notable is Tsubame 2.0, a new entry on the TOP500 built by the Tokyo Institute of Technology; it delivers petaflop-class performance while sustaining very high computational efficiency, drawing only 1.34 megawatts, less power than any of the other four systems in the top five.

Bill Dally, chief scientist at NVIDIA, said: "Tsubame 2.0 is a remarkable achievement. It is the most energy-efficient petaflop-class supercomputer ever built, striking the best balance between performance and power. Breakthrough systems like Tsubame 2.0 point the way toward exascale computing."

GPUs have rapidly become the driving force behind the world's top supercomputing technology. With hundreds of parallel processor cores, a GPU can split a large computational workload and process it concurrently, dramatically boosting system performance. Heterogeneous systems built from GPUs and CPUs together also sharply reduce space and power requirements, making supercomputing more accessible and more affordable than ever.
Dally will be a keynote speaker at the 2010 Supercomputing Conference in New Orleans this week, delivering a talk on November 17 titled "GPU Computing: To Exascale and Beyond." To learn more about NVIDIA Tesla high-performance GPU computing products, visit: http://www.nvidia.com.tw/object/tesla_computing_solutions_tw.html

About NVIDIA
NVIDIA awakened the world to the power of computer graphics when it invented the graphics processing unit (GPU) in 1999. Since then, it has continually set new standards in visual computing with breakthrough, interactive graphics available on devices ranging from portable media players and netbooks to workstations. NVIDIA's expertise in programmable GPUs has led to breakthroughs in parallel processing that have made supercomputing affordable and widely adopted. NVIDIA holds more than 1,600 U.S. patents, including ones covering designs and insights that are fundamental to modern computing. For more information about NVIDIA, visit www.nvidia.com.tw


GPGPUs, China Take the Lead in TOP500

Today's unveiling of the 36th TOP500 list revealed what many have suspected for weeks: China has beaten out the US for the number one spot, and GPU-powered machines have established themselves in the upper echelons of supercomputing. For the first time ever, the United States failed to dominate the top seven machines, and claims but a single system in the top four.
There are now seven petaflop supercomputers in the world. China's new Tianhe-1A system, housed at the National Supercomputer Center in Tianjin, took top honors with a Linpack mark of 2.56 petaflops, pushing the 1.76 petaflop Jaguar supercomputer at ORNL into the number two spot. At number three was China again, with the Nebulae machine, at 1.27 petaflops. Japan's TSUBAME 2.0 supercomputer is the 4th most powerful at 1.19 petaflops. And at number five is the 1.05 petaflop Hopper supercomputer installed at NERSC/Berkeley Lab. The last two petaflop entrants are the recently announced Tera 100 system deployed at France's Commissariat a l'Energie Atomique (CEA) and the older Roadrunner machine at Los Alamos.
Not only is the US getting drubbed at the top of the list, but so are CPUs. Of the top four machines, three are GPU-powered -- all using NVIDIA Tesla processors, by the way. (Yes, I realize there are CPUs in those systems as well, but the vast majority of the FLOPS are provided by the graphics chips.) Of the top four, only the US-deployed Jaguar system relies entirely on CPUs.
In aggregate, there are 11 systems on the TOP500 that are being accelerated with GPUs, ten of them using NVIDIA chips and one using AMD Radeon processors. Only three of these GPU-ified machines are US-based, with the most powerful being the 100-teraflop "Edge" system installed at Lawrence Livermore.

The scarcity of top US systems and the scarcity of top CPU-only systems are not unrelated. Because GPUs offer much better performance per watt, it's much easier today to build a multi-petaflop system accelerated by graphics hardware than having to rely solely on CPUs. For example, the number four TSUBAME 2.0 supercomputer, equipped with NVIDIA's latest Tesla GPUs, consumes just 1.4 MW to attain 1.19 petaflops on Linpack, while the number five Hopper machine, employing AMD's latest Opterons, requires 2.6 MW to deliver 1.05 petaflops. Since the performance-per-watt trajectory of graphics processor technology is much steeper than that of CPUs, it seems almost certain that GPUs will expand their presence on the top systems over the next few years.
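Working from those figures, TSUBAME 2.0 delivers roughly 1.19 petaflops / 1.4 MW ≈ 850 megaflops per watt, against about 1.05 petaflops / 2.6 MW ≈ 400 megaflops per watt for Hopper, better than a two-to-one efficiency advantage for the GPU-accelerated machine.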
We're sure to see plenty of hand-wringing about the US being late to the GPU supercomputing party. The first GPU-powered multi-petaflop machine planned in the States looks to be the second phase of Keeneland. Keeneland is a joint project between Georgia Tech, the University of Tennessee and ORNL, which is being funded through the NSF. The first phase is already deployed at Georgia Tech and made the TOP500 at number 117 with a 64-teraflop Linpack mark. The second-phase machine will be equipped with more than 500 next-generation GPUs (so presumably based on NVIDIA "Kepler" processors). That system should extend well into multi-petaflop territory, but will likely not be up and running until later in 2011.
One longer term trend that is now becoming rather apparent is the declining number of IBM systems and the increasing number of Cray systems in the top 100 portion of the list. IBM, which for a long time dominated this segment, had 49 machines in the top 100 in November 2005. In five years, that number has been cut to just 22 systems. Cray, on the other hand, claimed just eight systems in the top 100 in November 2005. It now has 25, which is more than any other vendor.
The trend parallels a general industry-wide move toward x86-based machines and away from every other CPU architecture. IBM's 2005 dominance was the result of the popularity of its Blue Gene (PowerPC ASIC) and Power-based server machines. Cray, meanwhile, standardized its flagship XT and XE product lines on AMD Opterons. Although the top systems, in general, tend to be more heterogeneous on the CPU side than HPC systems of lesser stature, the ubiquitous x86 is slowly squeezing out all other CPUs even for the most powerful supercomputers. But the allure of commodity chip architectures cuts both ways. As is now being made abundantly clear, the x86 will have to share supercomputing honors with the new kid on the block -- GPUs.

Amazon Launches New NVIDIA GPU Computing Service

2010/11/17 12:15:01


Reflecting where the technology industry is heading, Amazon Web Services (AWS) has added a new computing option, one that uses the computer's graphics chip.
AWS's Elastic Compute Cloud (EC2) lets customers pay to use various kinds of online computing resources. EC2 started out offering only conventional enterprise server configurations, but Amazon has since added different options to meet specific computing needs.
The new Cluster GPU offering is a server equipped with two quad-core Intel Nehalem-generation Xeon X5570 processors, two Nvidia Fermi-generation Tesla M2050 graphics chips, 22GB of memory, 1.7TB of storage, and a 10-gigabit Ethernet connection.
Graphics processors were originally designed solely to accelerate a computer's graphics work, mainly 3D games and design software. In recent years, however, their performance has grown far beyond that original purpose, which is why they now show up in supercomputers such as Tianhe-1A, the fastest in the world.
More specifically, GPUs can be used to process media data, for example resizing images or compressing audio, and to handle computations that run in parallel. Graphics chips excel at this kind of work: each Nvidia M2050 has 448 processing cores available for such parallel tasks.
Programming these hybrid systems is not easy, however, because the graphics chip and the conventional processor each have their own memory. To use the GPU, developers can program directly against CUDA, Nvidia's GPU computing technology, using the code libraries built for it, or work through a higher-level interface such as the OpenCL standard.
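To make that concrete, here is a minimal sketch of the CUDA programming model, written with PyCUDA (a Python wrapper the article does not mention; the kernel, array size, and scaling factor below are invented for illustration). It shows the two points above: the workload is split so each GPU thread handles one element, and the data has to be copied into the GPU's separate memory and back.

# Minimal PyCUDA sketch (hypothetical example): scale a large array on the GPU,
# one element per thread.
import numpy as np
import pycuda.autoinit                      # sets up a CUDA context on the default GPU
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

mod = SourceModule("""
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // each thread takes one element
    if (i < n)
        data[i] *= factor;
}
""")
scale = mod.get_function("scale")

n = 1 << 20
data = np.random.randn(n).astype(np.float32)

# The GPU has its own memory: InOut copies the array to the device and back afterwards.
scale(cuda.InOut(data), np.float32(2.0), np.int32(n),
      block=(256, 1, 1), grid=((n + 255) // 256, 1))

An OpenCL version would follow the same shape: compile a small kernel, copy the data across, and launch it over the whole array.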
The new GPU offering currently runs only with Linux and is available only in Amazon's Northern Virginia region. At US$2.10 per hour it is currently Amazon's most expensive instance, compared with 34 cents per hour for a conventional Linux server and 48 cents per hour for a Windows server.
(ZDNet Taiwan editorial staff / Rex Chang)

Amazon EC2 Functionality with NVIDIA Graphic Cores

Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to launch instances with a variety of operating systems, load them with your custom application environment, manage your network’s access permissions, and run your image using as many or few systems as you desire.

Cluster GPU Instances

Instances of this family provide general-purpose graphics processing units (GPUs) with proportionally high CPU and increased network performance for applications benefitting from highly parallelized processing, including HPC, rendering and media processing applications. While Cluster Compute Instances provide the ability to create clusters of instances connected by a low latency, high throughput network, Cluster GPU Instances provide an additional option for applications that can benefit from the efficiency gains of the parallel computing power of GPUs over what can be achieved with traditional processors. Learn more about use of this instance type for HPC applications.
Cluster GPU Quadruple Extra Large: 22 GB memory, 33.5 EC2 Compute Units, 2 x NVIDIA Tesla "Fermi" M2050 GPUs, 1690 GB of local instance storage, 64-bit platform, 10 Gigabit Ethernet.
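As a rough idea of how such an instance might be requested programmatically, here is a hypothetical sketch using the boto Python library against the EC2 API; boto is not part of Amazon's announcement, and the AMI ID, key pair, and credentials are placeholders. The instance type string cg1.4xlarge is the API name corresponding to the Cluster GPU Quadruple Extra Large size listed above.

# Hypothetical boto sketch: launch one Cluster GPU instance.
# The AMI ID, key pair, and credentials below are placeholders, not real values.
from boto.ec2.connection import EC2Connection

conn = EC2Connection('YOUR_ACCESS_KEY', 'YOUR_SECRET_KEY')   # defaults to the US East (N. Virginia) region
reservation = conn.run_instances(
    'ami-00000000',                 # placeholder: a Linux AMI that supports this instance type
    instance_type='cg1.4xlarge',    # the Cluster GPU Quadruple Extra Large size described above
    key_name='my-keypair',
    min_count=1,
    max_count=1)
print(reservation.instances[0].id)  # the new instance's ID once the request is accepted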

Amazon Web Services takes NVIDIA GPU servers to the cloud

Amazon Web Services on Monday launched graphical processing unit instances for its high performance computing workloads.
In a statement and blog post, Amazon said graphical processing unit (GPU) servers have become popular enough to bring to AWS. GPU servers, powered by Nvidia’s Tesla chips, are about to hit mainstream as Dell, HP and IBM bring formerly custom servers to market.
AWS said that these GPU servers have generally been out of reach for many companies due to costs and architecture. Now AWS will put these servers on its Elastic Compute Cloud (EC2) service.
Amazon’s Cluster GPU Instances allow for 22 GB of memory and 33.5 EC2 Compute Units. The GPU instances tap Amazon’s cluster network, which is designed for data intensive applications. Each GPU instance features two NVIDIA Tesla M2050 GPUs.

The key specs:
· A pair of NVIDIA Tesla M2050 “Fermi” GPUs.
· A pair of quad-core Intel “Nehalem” X5570 processors offering 33.5 ECUs (EC2 Compute Units).
· 22 GB of RAM.
· 1690 GB of local instance storage.
· 10 Gbps Ethernet, with the ability to create low latency, full bisection bandwidth HPC clusters.

For Nvidia, the AWS launch is a nice win. GPUs may get a larger footprint in the enterprise.
For Amazon, the GPU clusters are a nice way to tap verticals such as the oil and gas industry, graphics and engineering design.
Amazon noted that customers may mix and match the usual instances with the GPU flavor to coax the most performance out of the cloud.
The company has been testing its GPU instances with a few customers such as Calgary Scientific, a medical imaging software company; BrightScope, a financial data analytics outfit; and Elemental Technologies, which provides video processing applications.