Core Faculty

  1. Scott Field, UMassD Math, MA

  2. Dana Fine, UMassD Math, MA

  3. Robert Fisher, UMassD Physics, MA

  4. J. P. Hsu, UMassD Physics, MA

  5. Gaurav Khanna, UMassD Physics, MA

  6. David Kagan, UMassD Physics, MA

Collaborative Faculty

  1. Martin Bojowald, Penn State, PA

  2. Lior Burko, Georgia Gwinnett College, GA

  3. Richard Price, MIT / UMassD, MA

  4. Scott Hughes, MIT, MA

  5. Jorge Pullin, Louisiana State, LA

  6. Alessandra Buonanno, Max Planck Inst.

Current Students

  1. Ed McClain, UMassD Physics, MA

  2. Feroz Shaik, UMassD Physics, MA

  3. Alec Yonika, UMassD Physics, MA

  4. Caroline Mallary, UMassD Physics, MA

  5. Connor Kenyon, UMassD Physics, MA

  6. Nur Rifat, UMassD Physics, MA

Past Students (Current Location)

  1. Izak Thuestad, NUWC

  2. Eliza Miley, NUWC

  3. Rahul Kashyap, ICTS, India

  4. Will Duff, Industry

  5. Sarah Seva, Teaching

  6. Tyler Spilhaus, UAlaska

  7. David Torndorf-Dick, UNH

  8. Ed McClain, Louisiana State

  9. Charles Harnden, Teaching

  10. Dan Walsh, Teaching

  11. Gary Forrester, Teaching

  12. Mike DeSousa, Industry

  13. Justin McKennon, General Dynamics

  14. Dave Falta, Michigan State

  15. Matthew Hogan, Florida Atlantic Univ.

  16. Philip Mendonca, Florida Atlantic Univ.

  17. Rakesh Ginjupalli, IBM

  18. Sarah McLeod, Univ. of Melbourne

  19. Ian Nagle, Florida Atlantic Univ.

  20. Joshua Liberty, Univ. of Rhode Island

  21. Emanuel Simon, Univ. of Ulm, Germany

  22. Francis Boateng, UMass Lowell

  23. Subir Sabharwal, Columbia University

  24. Vishnu Paruchuri, Columbia U. Finance

  25. Jessica Rosen, Industry

  26. Peter Goetz, Univ. of Ulm, Germany

  27. Seth Connors, High-School Teacher

  28. Zhenhua Ning, Univ. of Illinois UC

  29. Nobuhiro Suzuki, Univ. of Rhode Island

  30. Mike O'Brien, Rutgers Univ.

  31. Matt Strafuss, MIT

This section is dedicated to the ongoing research projects of our group related to the use of alternative computing technologies for scientific computation. This work was supported under the auspices of NSF grants (PHY-0831631, PHY-0902026, CNS-0959382, PHY-1016906, PHY-1135664, PHY-1414440), US Air Force grants (FA9550-10-1-0354, 10-RI-CRADA-09), and also by Apple, Nvidia, IBM and Sony. Initials of the faculty involved are in parentheses. Here is a list of research articles published using results generated from this effort: full list

Also check out the website of our new campus-wide Center for Scientific Computing & Visualization Research.


  1. The Sony PlayStation 3 Gravity Grid (GK)

        This NSF-supported project has its own dedicated website. And here is its “big brother”.

  2. An Exploration of the Use of OpenCL for Numerical Modeling & Data Analysis (GK)

         This NSF-supported project has its own dedicated website. Please visit that site.

  3. Video-Gaming Technologies for Scientific Computing in Gravitational Physics (GK)

         This NSF-supported project has its own dedicated website. Please visit that site.

  4. Alternative Technologies for Numerical Relativity and LIGO Data Analysis (GK)

         This NSF-funded project is an exploration of the use of alternative computing technologies, such as gaming consoles, multimedia workstations and similar hardware, for scientific computing. This approach promises significantly higher cost effectiveness (measured as performance-per-dollar and performance-per-watt) than traditional workstations. The project focuses on investigating the performance of such hardware on computational problems in the areas of Numerical Relativity and Gravitational Wave Data Analysis.

         Currently, the hardware under careful investigation in this project comprises the Sony PlayStation 3, IBM QS22 BladeCenter, Nvidia Tesla CUDA GPU, and Intel/AMD x86 multi-core CPUs. Below we briefly report the results of our comparative performance studies on this hardware.

         Comparative performance on the EMRI Teukolsky Code:

         This code is a (2+1)D linear partial-differential-equation solver with a mathematically and computationally challenging source term. The code is used to model the astrophysical event of the capture of a small compact object by a supermassive black hole. The baseline performance used in Table #1 is a quad-core 2.66 GHz Intel Xeon CPU. The PS3 Cell refers to the Cell processor as available in the Sony PlayStation 3, while the Cell eDP refers to the one in the IBM QS22 blades; the GPU here is Nvidia’s Tesla GPU. All performance numbers refer to double-precision floating-point computation. Both the Cell and the GPU deliver order-of-magnitude gains in metrics such as performance, performance-per-dollar and performance-per-watt. This work was published in the Parallel and Distributed Computing and Systems (PDCS) conference in November 2009. An eprint of the publication is available here.
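As a rough illustration (not the group’s actual Teukolsky solver), the kind of (2+1)D finite-difference evolution described above can be sketched in Python with a toy scalar wave equation standing in for the Teukolsky equation; the grid size, step sizes and initial data below are all invented for the example:

```python
import numpy as np

def step_wave_2d(u_prev, u_curr, c, dt, dx):
    """One leapfrog step of the 2D wave equation u_tt = c^2 (u_xx + u_yy).

    A toy stand-in for a (2+1)D hyperbolic solver: second-order centered
    differences in space and time, double precision, periodic boundaries.
    """
    lap = (np.roll(u_curr, 1, 0) + np.roll(u_curr, -1, 0) +
           np.roll(u_curr, 1, 1) + np.roll(u_curr, -1, 1) -
           4.0 * u_curr) / dx**2
    return 2.0 * u_curr - u_prev + (c * dt)**2 * lap

# Usage: evolve a Gaussian pulse on a periodic unit square.
n = 128
dx, c = 1.0 / n, 1.0
dt = 0.25 * dx                      # well below the CFL stability limit
x = np.linspace(0.0, 1.0, n, endpoint=False)
xx, yy = np.meshgrid(x, x, indexing="ij")
u0 = np.exp(-200.0 * ((xx - 0.5)**2 + (yy - 0.5)**2))
u_prev, u_curr = u0.copy(), u0.copy()   # zero initial velocity
for _ in range(100):
    u_prev, u_curr = u_curr, step_wave_2d(u_prev, u_curr, c, dt, dx)
print(u_curr.shape, u_curr.dtype)       # (128, 128) float64
```

The inner stencil update is the part that maps naturally onto the Cell’s SPEs or a GPU’s thread blocks, which is what makes this class of code a good target for the accelerators compared above.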

         Comparative performance on the FFTW Code: [Data Figure]

         Depicted above is the comparative performance data for different processor architectures on the basis of Fast Fourier Transform computation. The focus here is on large problem sizes (occupying tens of GB of memory) and double-precision performance. Note that once the problem size becomes large enough that the Intel Xeons can no longer make effective use of their processor caches, their performance falls rapidly. It is interesting to note that in this situation having 4 or 8 Xeon cores makes no difference; one obtains essentially the same performance from both. This happens because the FFT is a memory-bound code, and since memory bandwidth is a serious bottleneck for the Xeon system, additional cores on the processor don’t help much. The Cell, on the other hand, is designed to overcome this problem and exhibits strong, stable performance throughout.
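A minimal sketch of the kind of measurement behind this figure, using NumPy’s FFT rather than FFTW and the conventional 5·n·log₂(n) flop estimate; the sizes here are kept small for illustration, unlike the tens-of-GB runs described above:

```python
import time
import numpy as np

def fft_gflops(n, repeats=3):
    """Estimated double-precision GFLOP/s for a length-n complex FFT,
    using the standard 5 n log2(n) flop count for a complex transform."""
    x = np.random.rand(n) + 1j * np.random.rand(n)   # complex128 input
    best = float("inf")
    for _ in range(repeats):                         # best-of-repeats timing
        t0 = time.perf_counter()
        np.fft.fft(x)
        best = min(best, time.perf_counter() - t0)
    return 5.0 * n * np.log2(n) / best / 1e9

# Sweep sizes: throughput typically drops once the working set
# (16 bytes/sample, plus output) outgrows the last-level cache.
for exp in range(16, 23):                            # 2^16 ... 2^22 samples
    n = 1 << exp
    print(f"n = 2^{exp:2d}   {fft_gflops(n):6.2f} GFLOP/s")
```

Plotting GFLOP/s against transform size makes the cache cliff discussed above directly visible on a commodity CPU.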

         Comparative performance on the Kerr Black Hole “Tails” Code:

         We evaluate the use of Nvidia’s Tesla GPU and the Cell processor for high-numerical-precision (double, quadruple and octal) computations in the area of black hole physics, i.e. for solving a hyperbolic partial differential equation using finite differencing. Our final comparative results are depicted in Table #2: we obtain mixed results, with order-of-magnitude gains in overall performance in some cases and negligible gains in others. This work was published in the High Performance Computing Systems (HPCS) conference in 2010. An eprint of the publication is available here. See a more recent update on this front here: J. Sci. Comp. (2013).
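Why extended precision matters for this kind of finite differencing can be illustrated with a toy centered-difference derivative in plain double precision (this is an illustration, not the Tails code itself): the truncation error shrinks as h² until roundoff, which grows like machine-epsilon/h, takes over. That roundoff floor is exactly what moving to quadruple or octal precision lowers.

```python
import math

def centered_diff(f, x, h):
    """Second-order centered difference: f'(x) ~ (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Error of d/dx sin(x) at x = 1 versus step size, in IEEE double precision.
# Truncation error ~ h^2 dominates for large h; roundoff ~ eps/h dominates
# for small h, so the error bottoms out near h ~ 1e-5 and then rises again.
for h in (1e-2, 1e-4, 1e-6, 1e-8, 1e-10):
    err = abs(centered_diff(math.sin, 1.0, h) - math.cos(1.0))
    print(f"h = {h:.0e}   error = {err:.2e}")
```

In a long time evolution (such as late-time “tail” decay, where the signal falls by many orders of magnitude) this roundoff floor contaminates the answer, which is what motivates the quadruple- and octal-precision runs benchmarked above.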
