Dell EMC sets up $4M supercomputer for CSIRO to boost research
Dell EMC has announced it will work with the Commonwealth Scientific and Industrial Research Organisation (CSIRO) to build a new large-scale scientific computing system, expanding CSIRO’s capability in deep learning, a key approach to advancing artificial intelligence.
The new system is named ‘Bracewell’ after Ronald N. Bracewell, an Australian astronomer and engineer who worked in the CSIRO Radiophysics Laboratory during World War II, and whose work led to fundamental advances in medical imaging.
In addition to artificial intelligence, the system provides capability for research in areas as diverse as virtual screening for therapeutic treatments, traffic and logistics optimization, modelling of new material structures and compositions, and machine learning for image recognition and pattern analysis.
CSIRO requested tenders in November 2016 to build the new system with a $4 million budget, and following Dell EMC’s successful proposal, the new system was installed in just five days across May and June 2017. The system is now live and began production in early July 2017.
Greater scale and processing power enable a richer, more realistic vision solution
One of the first research teams to benefit from the new processing power will be Data61’s Computer Vision group, led by Associate Professor Nick Barnes. His team develops the software for a bionic vision solution that aims to restore sight to people with profound vision loss, using new computer vision processing that draws on large-scale image datasets to learn and optimize more effective processing.
Bracewell will help the research team scale their software to tackle new and more advanced challenges, and deliver a richer and more robust visual experience for the profoundly vision impaired.
Assoc. Professor Barnes said:
“When we conducted our first human trial, participants had to be fully supervised and were mostly limited to the laboratory, but for our next trial we’re aiming to get participants out of the lab and into the real world, controlling the whole system themselves.”
The system comprises 114 PowerEdge C4130 servers, each with NVIDIA® Tesla P100 GPUs connected via NVLink™, dual Intel® Xeon® processors and 100Gbps Mellanox® EDR InfiniBand. In total, it provides 1,634,304 CUDA compute cores, 3,192 Xeon compute cores and 29TB of RAM. The cluster is linked by a fabric of 13 x 36-port 100Gbps EDR InfiniBand switches and managed with Bright Cluster Manager 8.0.