People



Core Faculty


  1. Scott Field, UMassD Math, MA

  2. Dana Fine, UMassD Math, MA

  3. Robert Fisher, UMassD Physics, MA

  4. J. P. Hsu, UMassD Physics, MA

  5. Gaurav Khanna, UMassD Physics, MA

  6. David Kagan, UMassD Physics, MA


Collaborative Faculty


  1. Martin Bojowald, Penn State, PA

  2. Lior Burko, Georgia Gwinnett College, GA

  3. Richard Price, MIT / UMassD, MA

  4. Scott Hughes, MIT, MA

  5. Jorge Pullin, Louisiana State, LA

  6. Alessandra Buonanno, Max Planck Inst.


Current Students


  1. Ed McClain, UMassD Physics, MA

  2. Patrick Parks, UMassD Physics, MA

  3. Alec Yonika, UMassD Physics, MA

  4. Caroline Mallary, UMassD Physics, MA

  5. Izak Thuestad, UMassD Physics, MA

  6. Eliza Miley, UMassD Physics, MA


Past Students (Current Location)


  1. Rahul Kashyap, ICTS, India

  2. Will Duff, Industry

  3. Sarah Seva, Teaching

  4. Tyler Spilhaus, UAlaska

  5. David Torndorf-Dick, UNH

  6. Ed McClain, Louisiana State

  7. Charles Harnden, Teaching

  8. Dan Walsh, Teaching

  9. Gary Forrester, Teaching

  10. Mike DeSousa, Industry

  11. Justin McKennon, General Dynamics

  12. Dave Falta, Michigan State

  13. Matthew Hogan, Florida Atlantic Univ.

  14. Philip Mendonca, Florida Atlantic Univ.

  15. Rakesh Ginjupalli, IBM

  16. Sarah McLeod, Univ. of Melbourne

  17. Ian Nagle, Florida Atlantic Univ.

  18. Joshua Liberty, Univ. of Rhode Island

  19. Emanuel Simon, Univ. of Ulm, Germany

  20. Francis Boateng, UMass Lowell

  21. Subir Sabharwal, Columbia University

  22. Vishnu Paruchuri, Columbia U. Finance

  23. Jessica Rosen, Industry

  24. Peter Goetz, Univ. of Ulm, Germany

  25. Seth Connors, High-School Teacher

  26. Zhenhua Ning, Univ. of Illinois Urbana-Champaign

  27. Nobuhiro Suzuki, Univ. of Rhode Island

  28. Mike O'Brien, Rutgers Univ.

  29. Matt Strafuss, MIT



Scientific Computation (OpenCL)


This section describes our group's ongoing research projects related to the use of OpenCL for scientific computation. This work was supported by NSF grant PHY-1016906 and also by Apple, Nvidia and IBM. Initials of the faculty involved are in parentheses. The following research articles were published using results generated from this effort: CPC (2010); Phys. Rev. Lett. 105 261102 (2010); Phys. Rev. Lett. 106 201103 (2011); Phys. Rev. D83 124002 (2011); Phys. Rev. D84 104006 (2011); Phys. Rev. X1 021017 (2011); Phys. Rev. D85 024046 (2012); XSEDE12 (2012); J. Sci. Comp. (2013); Phys. Rev. D88 024002 (2013); Phys. Rev. D88 044001 (2013); Phys. Rev. D88 104004 (2013); Phys. Rev. D89 044037 (2014); Preprint arXiv:1312.5210 (2013); Gen. Rel. Grav. 46, 1672 (2014); Phys. Rev. D90 084025 (2014); CSC’14 (2014); Phys. Rev. D91 104017 (2015); Phys. Rev. D93 041501R (2016); Phys. Rev. D94 084049 (2016); IEEE HPEC Conf. (2017); Phys. Rev. D95 081501 (2017); Phys. Rev. D96 024020 (2017); Class. Quant. Grav. 34 205012 (2017).


Also check out the website of our new campus-wide Center for Scientific Computing & Visualization Research.


Projects



  1. An Exploration of the use of OpenCL for Numerical Modeling & Data Analysis (GK)

  This NSF GOALI-funded project was initiated in Spring 2011. Results closely related to this work are available here: CPC (2010); PDCS (2011); XSEDE12 (2012); J. Sci. Comp. (2013).

  Computational scientists and engineers have begun making use of many-core GPU architectures because they can provide significant gains in the overall performance of many numerical simulations at a relatively low cost. However, to the average computational scientist, these GPUs employ a rather unfamiliar and specialized programming model that often requires advanced knowledge of the underlying architecture. In addition, each vendor typically provides its own platform-specific software development kit (SDK) that differs from the others in significant ways. For example, Nvidia's GPUs use the CUDA SDK, AMD's GPUs use the Stream SDK, while traditional multi-core processors (from Intel, AMD, IBM) typically employ an OpenMP-based parallel programming model.
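
  For context, the OpenMP-based model mentioned above amounts to annotating ordinary serial loops with compiler directives. The fragment below is a generic illustration of that style (a simple parallel reduction); it is not code from any of our applications.

  /* Generic illustration of the OpenMP-style multi-core CPU programming
     model referenced above; not code from the EMRI Teukolsky Code. */
  #include <stdio.h>
  #include <omp.h>

  #define N 1000000

  int main(void)
  {
      static double x[N], y[N];

      /* Initialize the input arrays. */
      for (int i = 0; i < N; i++) {
          x[i] = (double) i;
          y[i] = 2.0 * i;
      }

      /* The parallelization is a single compiler directive; the loop
         body itself is ordinary serial C. */
      double sum = 0.0;
      #pragma omp parallel for reduction(+:sum)
      for (int i = 0; i < N; i++)
          sum += x[i] * y[i];

      printf("dot product = %g (max threads: %d)\n", sum, omp_get_max_threads());
      return 0;
  }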

  In 2009, Apple proposed the Open Computing Language (OpenCL), an open standard intended to bring software development for all of these different processor architectures under a single framework, and all major multi-core processor and GPU vendors (Nvidia, AMD, IBM, Intel) have adopted this standard for their current and future hardware. OpenCL is of tremendous value to the scientific community because it is open, royalty-free, and vendor- and platform-neutral. It delivers a high degree of portability across all major forms of current and future compute hardware without significantly sacrificing performance.
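
  To make the portability claim concrete, the sketch below shows a minimal OpenCL host program in C that discovers whichever platform and device happen to be present and creates a context using only standard OpenCL API calls; the same source builds and runs on any conformant implementation. It is an illustrative sketch, not an excerpt from our production codes.

  /* Minimal, vendor-neutral OpenCL device discovery in C; an
     illustrative sketch, not an excerpt from the EMRI Teukolsky Code. */
  #include <stdio.h>
  #include <CL/cl.h>

  int main(void)
  {
      cl_platform_id platform;
      cl_device_id   device;
      char           name[256];

      /* Pick the first available platform and device; these calls are
         identical on Nvidia, AMD, Intel and IBM implementations. */
      if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
          fprintf(stderr, "no OpenCL platform found\n");
          return 1;
      }
      clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);
      clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
      printf("using device: %s\n", name);

      /* A context (and then a command queue) is created the same way
         regardless of vendor; kernels are compiled at run time from
         portable OpenCL C source. */
      cl_context context = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
      clReleaseContext(context);
      return 0;
  }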

  In this project, we use OpenCL to harness the massive parallelism offered by many-core architectures such as GPUs in order to perform high-resolution, long-duration black hole binary inspiral computations very efficiently. This plays a critical role in our EMRI Teukolsky Code's ability to achieve the high level of accuracy and efficiency required for such simulations.
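
  As a rough illustration of how such a computation maps onto OpenCL, the schematic kernel below advances one point of a 2D grid per work-item using a simple centered finite-difference stencil. The field names and update rule are illustrative assumptions only; this is not the actual Teukolsky evolution kernel.

  /* Schematic OpenCL C kernel: each work-item updates one grid point of
     a 2D field with a centered finite-difference stencil. The fields
     (f, fnew) and the update rule are illustrative assumptions, not the
     actual Teukolsky evolution equations. */
  __kernel void fd_step(__global const float *f,
                        __global float       *fnew,
                        const int nx,
                        const int ny,
                        const float c)
  {
      const int i = get_global_id(0);   /* grid index in x */
      const int j = get_global_id(1);   /* grid index in y */

      /* Skip boundary points; they would be handled separately. */
      if (i <= 0 || j <= 0 || i >= nx - 1 || j >= ny - 1)
          return;

      const int idx = j * nx + i;

      /* All interior grid points are updated concurrently across the
         GPU's compute units. */
      fnew[idx] = f[idx] + c * (f[idx - 1] + f[idx + 1]
                              + f[idx - nx] + f[idx + nx]
                              - 4.0f * f[idx]);
  }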

  Comparative performance using our EMRI Teukolsky Code:

  Table 1 below shows the relative overall performance of our EMRI Teukolsky Code on several variants of current-generation CPUs and GPUs. These results suggest that it is relatively straightforward to obtain order-of-magnitude gains in overall code performance by using many-core GPUs instead of multi-core CPUs, and that this holds largely independently of the specific hardware architecture and vendor. All the systems used in these performance tests ran a variant of the Linux operating system with the OpenCL implementation provided by the appropriate vendor. Detailed specifications of the compute hardware are included in the table. The baseline system here has dual AMD Opteron 6200, 8-core, 2.1 GHz CPUs.
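
  For readers reproducing comparisons of this kind, per-kernel timings are commonly collected with OpenCL's built-in event profiling, as in the generic pattern below. It assumes a command queue created with CL_QUEUE_PROFILING_ENABLE and a kernel whose arguments are already set; it is not the benchmarking harness actually used for Table 1.

  /* Generic OpenCL event-profiling pattern for timing a kernel launch;
     not the benchmarking harness used for Table 1. Assumes the queue
     was created with CL_QUEUE_PROFILING_ENABLE. */
  #include <CL/cl.h>

  double kernel_time_ms(cl_command_queue queue, cl_kernel kernel,
                        size_t global_size)
  {
      cl_event ev;
      cl_ulong start, end;

      /* Launch the kernel and wait for it to complete. */
      clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                             &global_size, NULL, 0, NULL, &ev);
      clWaitForEvents(1, &ev);

      /* Device timestamps are reported in nanoseconds. */
      clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                              sizeof(start), &start, NULL);
      clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                              sizeof(end), &end, NULL);
      clReleaseEvent(ev);

      return (double)(end - start) * 1.0e-6;   /* nanoseconds -> ms */
  }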

  It is also noteworthy that the consumer-grade AMD Radeon HD 7970 GPU outperforms Nvidia's HPC-oriented, high-end Fermi M2050 GPU at a significantly lower cost. The cost effectiveness of consumer-grade compute hardware is nearly an order of magnitude higher than that of the alternatives. This observation is consistent with our earlier findings from evaluating the Sony PlayStation 3 consumer gaming console for scientific computing: PS3 Gravity Grid.


 