Core Faculty

  1. Scott Field, UMassD Math, MA

  2. Dana Fine, UMassD Math, MA

  3. Robert Fisher, UMassD Physics, MA

  4. J. P. Hsu, UMassD Physics, MA

  5. Gaurav Khanna, UMassD Physics, MA

  6. David Kagan, UMassD Physics, MA

Collaborative Faculty

  1. Martin Bojowald, Penn State, PA

  2. Lior Burko, Georgia G College, GA

  3. Richard Price, MIT / UMassD, MA

  4. Scott Hughes, MIT, MA

  5. Jorge Pullin, Louisiana State, LA

  6. Alessandra Buonanno, Max Planck Inst.

Current Students

  1. Rahul Kashyap, UMassD Physics, MA

  2. Patrick Parks, UMassD Physics, MA

  3. Alec Yonika, UMassD Physics, MA

  4. Caroline Mallary, UMassD Physics, MA

  5. Izak Thuestad, UMassD Physics, MA

  6. Dominic Gastaldo, UMassD Physics, MA

Past Students (Current Location)

  1. Will Duff, Industry

  2. Sarah Seva, Teaching

  3. Tyler Spilhaus, UAlaska

  4. David Torndorf-Dick, UNH

  5. Ed McClain, Louisiana State

  6. Charles Harnden, Teaching

  7. Dan Walsh, Teaching

  8. Gary Forrester, Teaching

  9. Mike DeSousa, Industry

  10. Justin McKennon, General Dynamics

  11. Dave Falta, Michigan State

  12. Matthew Hogan, Florida Atlantic Univ.

  13. Philip Mendonca, Florida Atlantic Univ.

  14. Rakesh Ginjupalli, IBM

  15. Sarah McLeod, Univ. of Melbourne

  16. Ian Nagle, Florida Atlantic Univ.

  17. Joshua Liberty, Univ. of Rhode Island

  18. Emanuel Simon, Univ. of Ulm, Germany

  19. Francis Boateng, UMass Lowell

  20. Subir Sabharwal, Columbia University

  21. Vishnu Paruchuri, Columbia U. Finance

  22. Jessica Rosen, Industry

  23. Peter Goetz, Univ. of Ulm, Germany

  24. Seth Connors, High-School Teacher

  25. Zhenhua Ning, Univ. of Illinois UC

  26. Nobuhiro Suzuki, Univ. of Rhode Island

  27. Mike O'Brien, Rutgers Univ.

  28. Matt Strafuss, MIT

ABSTRACT: UMass Dartmouth has built an extremely low-cost supercomputer using 176 Sony PS3 gaming consoles installed in a refrigerated shipping container, or “reefer,” of large cooling capacity located conveniently on campus. This novel approach to supercomputing is nearly an order of magnitude more cost-effective than traditional data-center supercomputers. The system’s performance is comparable to nearly 3000 processor-cores of a typical laptop or desktop.

High-performance computing, or “supercomputing,” in which a large number of computer processors are linked together to form a parallel cluster, is currently the most common approach to solving complex computational research problems in nearly all areas of science and engineering. The largest supercomputers on the planet today, which achieve petascale performance (several thousand trillion calculations per second), have all been built using this parallel-cluster approach.

The idea of using consumer gaming hardware, such as the Sony PlayStation 3, to build low-cost supercomputers has been appreciated and implemented for several years now at various locations worldwide. The approach was pioneered by Physics Professor Khanna at UMass Dartmouth back in 2007, when he built a small eight-PS3 cluster and used it to perform research-grade simulations of black hole systems. The Air Force Research Lab (AFRL) in Rome, NY implemented the same approach at a much larger scale in 2011, using 1,716 PS3s, and was able to demonstrate a ten-fold cost advantage of such a system over traditional supercomputers.

Under the auspices of a Department of Defense CRADA agreement, the AFRL has now granted a significant portion of its cluster to UMass Dartmouth’s new Center for Scientific Computing and Visualization Research (CSCVR): four racks of PS3s, i.e., 176 units, with associated networking gear, cables, and software. The challenge for the CSCVR was to develop a suitable environment for the machines (with proper power and cooling capacity) at very low cost and as expeditiously as possible. Redesigning an existing space or lab and outfitting it with the required power and cooling was deemed too costly and time-consuming, so Khanna worked with the highly capable staff of the campus IT and Facilities departments to design and implement a quick, extremely low-cost solution to support the machine racks.

The novel approach that was developed involved purchasing a refrigerated shipping container, a so-called “reefer,” of adequate size and cooling capacity and simply locating it conveniently on campus, with power and network drawn from a nearby building. Such an approach is extremely low-cost: shipping containers of this kind are available in abundance (meats, dairy, and other perishables are shipped in them to grocery stores all across the country); they have very high cooling capacity (foods must stay frozen during shipment); and, because they are so common, it’s often easy to obtain a used container in adequate condition at extremely low cost. Campus electricians and IT staff installed a power-supply panel along with electrical and network outlets in the container for the equipment racks, which kept the total final cost very modest. The entire process was completed from beginning to end in a matter of months, and the cluster has been in full production operation since Winter 2014. Here are some pictures of this PS3 “reefer”.

The PS3 cluster is currently being used by the CSCVR to perform large and complex calculations in the context of black hole astrophysics, and also to explore vulnerabilities in cybersecurity. This cluster’s performance is comparable to nearly 3000 processor-cores of a typical laptop or desktop. Most of these research projects are funded by the National Science Foundation. Applications to other areas of science and engineering will be explored and implemented over the next few years.

Questions? Feel free to contact Gaurav Khanna about this research and the PS3 “reefer”. Forbes magazine ran a story on this project in May 2014, another media story followed in August 2014, and in December 2014 the New York Times published a full-length story covering this system as well.

Here is a list of research articles published using results generated on this cluster: Phys. Rev. D 88, 104004 (2013); preprint arXiv:1312.5210 (2013); Gen. Rel. Grav. 46, 1672 (2014); CSC’14 (2014).


  1. Binary Black Hole Coalescence using Perturbation Theory (GK)

  2. This project broadly deals with estimating properties of the gravitational waves produced by the merger of two black holes. Gravitational waves are “ripples” in space-time that travel at the speed of light. They were theoretically predicted by Einstein’s general relativity but have never been directly observed. Currently, an extensive search for these waves is being performed by the newly constructed NSF LIGO laboratory and various other observatories in Europe and Asia. ESA and NASA also have a mission planned for the near future, the LISA mission, that will attempt to detect these waves. To learn more about these waves and the recent attempts to observe them, please visit the eLISA mission website.

  3. The computer code for solving the extreme-mass-ratio limit of this problem (commonly referred to as EMRI) is essentially an inhomogeneous wave-equation solver that includes a mathematically complicated source-term. The source-term describes how the smaller black hole (or star) generates gravitational waves as it moves in the space-time of the larger hole. Because of its mathematical complexity, the source-term is the most computationally intensive part of the whole calculation. On the PS3's Cell processor, it is precisely this part of the computation that is “farmed out” to the six (6) SPEs. This approach essentially eliminates the time spent on the source computation and yields a speed-up of over a factor of six (6) relative to a PPE-only computation. It should be noted that this is in the context of double-precision floating-point operations; in single precision, the speed-up is significantly higher. Furthermore, we distribute the entire computational domain across the multiple PS3s using MPI (message-passing) parallelization. This enables each PS3 to work on its part of the domain and communicate the appropriate boundary data to the others as needed, on-the-fly. Overall, the performance of our PS3 “reefer” compares to nearly 3000 cores of typical desktop or laptop processors.
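The domain-decomposition idea described above — each node evolving its own slab of the grid and exchanging boundary (“ghost”) data with its neighbors every step — can be sketched serially. The following is a minimal illustration, not the actual EMRI code: the slab arrays stand in for PS3 nodes, the ghost-cell copies stand in for MPI messages, and all function names are hypothetical.

```python
import numpy as np

def evolve(u_prev, u_curr, c2):
    """One leapfrog step of the homogeneous 1D wave equation on one slab.
    Each slab carries one ghost cell at each end; only interior points
    are updated here, and c2 is the squared Courant factor (c*dt/dx)^2."""
    u_next = u_curr.copy()  # endpoints (boundary/ghost) carried over
    u_next[1:-1] = (2.0 * u_curr[1:-1] - u_prev[1:-1]
                    + c2 * (u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]))
    return u_next

def step_all(slabs_prev, slabs_curr, c2):
    """Advance every subdomain, then exchange ghost cells between
    neighbors -- the role MPI messages play between PS3 nodes."""
    slabs_next = [evolve(p, c, c2) for p, c in zip(slabs_prev, slabs_curr)]
    for left, right in zip(slabs_next[:-1], slabs_next[1:]):
        left[-1] = right[1]    # neighbor's first interior point
        right[0] = left[-2]    # neighbor's last interior point
    return slabs_curr, slabs_next
```

Evolving two slabs this way and stitching them together reproduces the single-domain evolution exactly, which is the property that lets each PS3 work independently between boundary exchanges.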

  1. Kerr Black Hole Radiative “Tails” (GK,LB)

  2. This research is about developing an understanding of the late-time behavior of physical fields (scalar, vector, tensor) evolving in a rotating (Kerr) black hole space-time. It is well known that at very late times such fields exhibit power-law decay, but the value of the actual index of this power-law “tail” is somewhat controversial: different researchers quote different results in the published literature. The goal of this project is to perform highly accurate computations that generate quality numerical data to help resolve this conflict. The nature of the computations is such that one requires not only high accuracy but also high numerical floating-point precision, i.e., quadruple (128-bit) and octuple (256-bit) precision, to obtain data of the quality needed for these studies.

  3. We implemented high-precision floating-point arithmetic on the Cell’s SPEs by developing a scaled-down port of the LBNL QD library. This approach yields a factor of four (4) gain in performance over a PPE-only computation and a factor of thirteen (13) gain over the performance of the native long double datatype on the PPE.
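Libraries like QD build extended precision out of pairs (or quads) of ordinary doubles using error-free transformations. As a flavor of the technique — a minimal sketch in Python (whose floats are 64-bit doubles), with illustrative function names rather than the QD API — double-double addition looks like this:

```python
def two_sum(a, b):
    """Error-free transformation (Knuth): return (s, e) where
    s = fl(a + b) and a + b = s + e holds exactly."""
    s = a + b
    v = s - a
    e = (a - (s - v)) + (b - v)
    return s, e

def dd_add(x, y):
    """Add two double-double numbers, each stored as a (hi, lo) pair;
    the pair carries roughly 32 significant decimal digits."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]          # fold in the low-order words
    return two_sum(s, e)      # renormalize to a (hi, lo) pair
```

The low word retains contributions that plain double precision rounds away entirely, which is why tail-index studies needing 128-bit (and, with quad-double pairs, 256-bit) precision can be carried out on hardware that only supports 64-bit arithmetic natively.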


PlayStation 3 “Reefer”
(Spring 2014)