People



Core Faculty


  1. Dana Fine, UMassD Math, MA

  2. Robert Fisher, UMassD Physics, MA

  3. J. P. Hsu, UMassD Physics, MA

  4. Gaurav Khanna, UMassD Physics, MA

  5. David Kagan, UMassD Physics, MA


Collaborative Faculty


  1. Martin Bojowald, Penn State, PA

  2. Lior Burko, Georgia Gwinnett College, GA

  3. Richard Price, UTexas Brownsville, TX

  4. Scott Hughes, MIT, MA

  5. Jorge Pullin, Louisiana State, LA

  6. Alessandra Buonanno, Max Planck Inst.


Current Students


  1. Rahul Kashyap, UMassD Physics, MA

  2. Patrick Parks, UMassD Physics, MA

  3. Tyler Spilhaus, UMassD Physics, MA

  4. Paritosh Verma, UMassD Physics, MA

  5. Will Duff Jr., UMassD Physics, MA

  6. David Tondorf-Dick, UMassD Physics, MA

  7. Jianyang Li, UMassD Physics, MA


Past Students (Current Location)


  1. Ed McClain, Louisiana State

  2. Charles Harnden, Teaching

  3. Dan Walsh, Teaching

  4. Gary Forrester, Teaching

  5. Mike DeSousa, Industry

  6. Justin McKennon, General Dynamics

  7. Dave Falta, Michigan State

  8. Matthew Hogan, Florida Atlantic Univ.

  9. Philip Mendonca, Florida Atlantic Univ.

  10. Rakesh Ginjupalli, IBM

  11. Sarah McLeod, Univ. of Melbourne

  12. Ian Nagle, Florida Atlantic Univ.

  13. Joshua Liberty, Univ. of Rhode Island

  14. Emanuel Simon, Univ. of Ulm, Germany

  15. Francis Boateng, UMass Lowell

  16. Subir Sabharwal, Columbia University

  17. Vishnu Paruchuri, Columbia U. Finance

  18. Jessica Rosen, Industry

  19. Peter Goetz, Univ. of Ulm, Germany

  20. Seth Connors, High-School Teacher

  21. Zhenhua Ning, Univ. of Illinois Urbana-Champaign

  22. Nobuhiro Suzuki, Univ. of Rhode Island

  23. Mike O'Brien, Rutgers Univ.

  24. Matt Strafuss, MIT



Scientific Computation


This section is dedicated to the ongoing research projects of our group related to the use of alternative computing technologies for scientific computation. This work is currently supported by NSF grants PHY-0831631, PHY-0902026, CNS-0959382, PHY-1016906 and PHY-1135664, by AFOSR DURIP grant FA9550-10-1-0354, and also by Apple, Nvidia, IBM and Sony. The initials of the faculty involved are given in parentheses. Here is a list of research articles published using results generated from this effort: Phys. Rev. D78 064042 (2008); Class. Quant. Grav. 26 015014 (2009); PPAM (2009); PDCS (2009); IJMSSC (2009); Phys. Rev. D81 104009 (2010); CPC (2010); HPCS (2010); Class. Quant. Grav. 28 025012 (2011); Phys. Rev. Lett. 105 261102 (2010); Phys. Rev. Lett. 106 201103 (2011); Phys. Rev. D83 124002 (2011); Phys. Rev. D84 104006 (2011); Phys. Rev. X1 021017 (2011); Phys. Rev. D85 024046 (2012); XSEDE12 (2012); J. Sci. Comp. (2013); Phys. Rev. D88 024002 (2013); Phys. Rev. D88 044001 (2013); Phys. Rev. D88 104004 (2013); Phys. Rev. D89 044037 (2014); Preprint arXiv:1312.5210 (2013); Gen. Rel. Grav. 46, 1672 (2014).


Also check out the website of our new campus-wide Center for Scientific Computing & Visualization Research.


Projects



  1. The Sony PlayStation 3 Gravity Grid (GK)

        This NSF-supported project has its own dedicated website, as does its “big brother”.


  2. An Exploration of the use of OpenCL for Numerical Modeling & Data Analysis (GK)

        This NSF-supported project has its own dedicated website; please visit that site.


  3. Alternative Technologies for Numerical Relativity and LIGO Data Analysis (GK)

        This NSF-funded project is an exploration of the use of alternative computing technologies, such as gaming consoles, multimedia workstations and similar hardware, for scientific computing. This approach promises significantly higher cost effectiveness (measured as performance-per-dollar and performance-per-watt) than traditional workstations. The project focuses on investigating the performance of such hardware on computational problems in the areas of numerical relativity and gravitational-wave data analysis.

        The hardware currently under careful investigation in this project includes the Sony PlayStation 3, the IBM QS22 BladeCenter, Nvidia's Tesla CUDA GPU and Intel/AMD x86 multi-core CPUs. Below we briefly report on the results of our comparative performance studies of this hardware.

        Comparative performance on the EMRI Teukolsky Code:

        This code is a (2+1)D linear partial-differential-equation solver with a mathematically and computationally challenging source term. The code is used to model the astrophysical capture of a small compact object by a supermassive black hole. The baseline for the performance numbers in Table #1 is a quad-core 2.66 GHz Intel Xeon CPU. The PS3 Cell refers to the Cell processor as available in the Sony PlayStation 3, while the Cell eDP refers to the one in the IBM QS22 blades; the GPU here is Nvidia’s Tesla GPU. All performance numbers refer to double-precision floating-point computation. Both the Cell and the GPU deliver order-of-magnitude gains in metrics such as performance, performance-per-dollar and performance-per-watt. This work was published in the Parallel and Distributed Computing and Systems (PDCS) conference in November 2009. An eprint of the publication is available here. A minimal illustrative sketch of this kind of solver follows.
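        To make the structure concrete, below is a minimal sketch in C of one finite-difference time step for a generic (2+1)D linear wave equation with a source term. This is an illustration only, not the group's actual Teukolsky code; the grid dimensions and source array are placeholders. The regular stencil loop is the part that maps naturally onto Cell SPEs or GPU threads.

        /* One leapfrog step for psi_tt = psi_xx + psi_yy + S(x,y,t)
         * on an NX x NY grid with spacing dx (illustrative sketch). */
        #include <stddef.h>

        #define NX 512   /* placeholder grid dimensions */
        #define NY 512

        void step(const double *prev, const double *cur, double *next,
                  const double *src, double dt, double dx)
        {
            double c = (dt * dt) / (dx * dx);
            for (size_t i = 1; i < NX - 1; i++) {
                for (size_t j = 1; j < NY - 1; j++) {
                    size_t k = i * NY + j;
                    /* five-point Laplacian stencil */
                    double lap = cur[k - NY] + cur[k + NY]
                               + cur[k - 1]  + cur[k + 1]
                               - 4.0 * cur[k];
                    /* leapfrog update in time */
                    next[k] = 2.0 * cur[k] - prev[k]
                            + c * lap + dt * dt * src[k];
                }
            }
        }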

        Comparative performance on the FFTW Code:

        (Figure: comparative FFT performance data across processor architectures.)

        Depicted above is the comparative performance data for different processor architectures on the basis of Fast Fourier Transform computation. The focus here is on large problem sizes (occupying tens of GBs of memory) and double-precision performance. Note that once the problem size becomes large enough that the Intel Xeons can no longer make effective use of their processor caches, their performance falls rapidly. Interestingly, in this regime having 4 or 8 Xeon cores makes no difference; one obtains essentially the same performance from both. This happens because the FFT is a memory-bound computation, and since memory bandwidth is a serious bottleneck for the Xeon system, additional cores on the processor do not help much. The Cell, on the other hand, is designed to overcome this problem and exhibits strong and stable performance throughout. A sketch of the kind of benchmark kernel involved appears below.
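        For reference, a double-precision FFT benchmark of this kind is little more than repeated execution of a pre-built plan. The following minimal C sketch uses the standard FFTW3 API; the problem size and input signal are placeholders, not those of the actual study (build with: cc fft.c -lfftw3 -lm).

        /* Minimal double-precision FFTW benchmark kernel (illustrative). */
        #include <fftw3.h>
        #include <math.h>

        int main(void)
        {
            const int N = 1 << 20;  /* placeholder problem size */
            fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * N);
            fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * N);

            /* simple test signal: one cosine cycle across the array */
            for (int i = 0; i < N; i++) {
                in[i][0] = cos(2.0 * M_PI * i / N);
                in[i][1] = 0.0;
            }

            /* Plan once, then execute (possibly many times); the
             * planning cost amortizes away in a benchmark loop. */
            fftw_plan p = fftw_plan_dft_1d(N, in, out,
                                           FFTW_FORWARD, FFTW_ESTIMATE);
            fftw_execute(p);

            fftw_destroy_plan(p);
            fftw_free(in);
            fftw_free(out);
            return 0;
        }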

        Comparative performance on the Kerr Black Hole “Tails” Code:

        We evaluate the use of Nvidia’s Tesla GPU and the Cell processor for high-precision (double, quadruple and octal) computations in the area of black hole physics, i.e., for solving a hyperbolic partial differential equation using finite differencing. Our final comparative results are depicted in Table #2: we obtain mixed results, with order-of-magnitude gains in overall performance in some cases and negligible gains in others. This work was published in the High Performance Computing Systems (HPCS) conference in 2010. An eprint of the publication is available here. See a more recent update on this front in J. Sci. Comp. (2013). A short example of why extended precision matters here follows.
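        To illustrate why precision beyond double matters in such late-time “tails” computations, here is a small self-contained C example using GCC's __float128 type from libquadmath. This is an illustration only; the GPU and Cell codes in the study use their own high-precision arithmetic. A second-order finite difference with a very small step loses essentially all accuracy in double precision, but retains roughly ten digits in quadruple precision (build with: gcc prec.c -lquadmath -lm).

        /* Quadruple-precision finite difference with libquadmath. */
        #include <quadmath.h>
        #include <stdio.h>

        int main(void)
        {
            /* Approximate f''(1) for f = sin with step h = 1e-12.
             * The numerator is O(h^2) ~ 1e-24, far below double's
             * ~1e-16 rounding noise, so double precision returns
             * garbage while __float128 keeps ~10 correct digits. */
            __float128 x = 1.0Q, h = 1e-12Q;
            __float128 d2 = (sinq(x + h) - 2.0Q * sinq(x) + sinq(x - h))
                            / (h * h);

            char buf[64];
            quadmath_snprintf(buf, sizeof buf, "%.15Qe", d2);
            printf("f''(1) ~ %s (exact value: -sin(1))\n", buf);
            return 0;
        }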

 
