CUDA solver support #17

Closed · 4 tasks done
ritukeshbharali opened this issue Mar 25, 2024 · 1 comment

ritukeshbharali (Owner) commented Mar 25, 2024

It would be nice to add some GPU solver interfaces. NVIDIA recently released cuDSS, its first-generation GPU-accelerated direct sparse solver. AmgX has been around longer, and a solver interface for it has already been written, though it is not thoroughly tested. Here is a list of things to be done:

- [x] Wrapper to cuDSS (a minimal call-sequence sketch follows this list)
- [x] Test cuDSS on the FE problems (symmetric and unsymmetric)
- [x] Write small tests for cuDSS and AmgX that check whether they work
- [x] New makefile
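
For reference, the sketch below shows roughly the call sequence such a wrapper has to drive: the handle/config/data setup followed by the analysis, factorization, and solve phases of the cuDSS 0.x C API. The toy 3×3 SPD system, the variable names, and the omission of all `cudssStatus_t`/`cudaError_t` checks are illustrative simplifications, not the actual wrapper code.

```cpp
#include <cuda_runtime.h>
#include <cudss.h>
#include <cstdio>

int main()
{
  // Toy 3x3 SPD system, upper triangle stored in CSR with 0-based indices:
  // A = [4 1 0; 1 3 0; 0 0 2],  b = [1 2 3]^T
  const int n = 3, nnz = 4;
  int    hRowPtr[] = { 0, 2, 3, 4 };
  int    hColIdx[] = { 0, 1, 1, 2 };
  double hVals[]   = { 4.0, 1.0, 3.0, 2.0 };
  double hB[]      = { 1.0, 2.0, 3.0 };

  // cuDSS expects device pointers for the matrix and vector data.
  int *dRowPtr, *dColIdx; double *dVals, *dB, *dX;
  cudaMalloc(&dRowPtr, sizeof(hRowPtr));
  cudaMalloc(&dColIdx, sizeof(hColIdx));
  cudaMalloc(&dVals,   sizeof(hVals));
  cudaMalloc(&dB,      sizeof(hB));
  cudaMalloc(&dX,      sizeof(hB));
  cudaMemcpy(dRowPtr, hRowPtr, sizeof(hRowPtr), cudaMemcpyHostToDevice);
  cudaMemcpy(dColIdx, hColIdx, sizeof(hColIdx), cudaMemcpyHostToDevice);
  cudaMemcpy(dVals,   hVals,   sizeof(hVals),   cudaMemcpyHostToDevice);
  cudaMemcpy(dB,      hB,      sizeof(hB),      cudaMemcpyHostToDevice);

  // Library handle, solver configuration, and solver data objects.
  cudssHandle_t handle; cudssCreate(&handle);
  cudssConfig_t config; cudssConfigCreate(&config);
  cudssData_t   data;   cudssDataCreate(handle, &data);

  // Wrap the raw device arrays as cuDSS matrix objects.
  cudssMatrix_t A, x, b;
  cudssMatrixCreateCsr(&A, n, n, nnz, dRowPtr, NULL, dColIdx, dVals,
                       CUDA_R_32I, CUDA_R_64F, CUDSS_MTYPE_SPD,
                       CUDSS_MVIEW_UPPER, CUDSS_BASE_ZERO);
  cudssMatrixCreateDn(&b, n, 1, n, dB, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR);
  cudssMatrixCreateDn(&x, n, 1, n, dX, CUDA_R_64F, CUDSS_LAYOUT_COL_MAJOR);

  // The three phases of a direct solve: reordering/symbolic analysis,
  // numerical factorization, and forward/backward substitution.
  cudssExecute(handle, CUDSS_PHASE_ANALYSIS,      config, data, A, x, b);
  cudssExecute(handle, CUDSS_PHASE_FACTORIZATION, config, data, A, x, b);
  cudssExecute(handle, CUDSS_PHASE_SOLVE,         config, data, A, x, b);
  cudaDeviceSynchronize();

  double hX[3];
  cudaMemcpy(hX, dX, sizeof(hX), cudaMemcpyDeviceToHost);
  std::printf("x = [%g, %g, %g]\n", hX[0], hX[1], hX[2]);

  cudssMatrixDestroy(A); cudssMatrixDestroy(b); cudssMatrixDestroy(x);
  cudssDataDestroy(handle, data); cudssConfigDestroy(config);
  cudssDestroy(handle);
  cudaFree(dRowPtr); cudaFree(dColIdx); cudaFree(dVals);
  cudaFree(dB); cudaFree(dX);
  return 0;
}
```

For repeated solves with an unchanged sparsity pattern (typical in nonlinear FE iterations), only the factorization and solve phases need to be re-run after updating the matrix values; the analysis phase can be reused.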
ritukeshbharali (Owner) commented:

NVIDIA has not yet released a stable version of cuDSS (the current release is 0.2.1), and most solver configuration options are still absent; hence those options are missing from the wrapper as well. cuDSS works well for solid mechanics problems (e.g., linear elasticity, phase-field) but not for the saddle-point problems arising in poromechanics.

AmgX is, by default, a distributed-memory solver; the wrapper, however, operates only in shared memory, because Jive has no notion of global dof numbering (see #18). Like cuDSS, AmgX works well for problems that are not of a saddle-point nature.
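
For context, a single-GPU AmgX session through the C API looks roughly like the sketch below. `AMGX_resources_create_simple` builds resources without an MPI communicator, which corresponds to the shared-memory mode the wrapper is restricted to. The config string, the toy system, and the omission of `AMGX_RC` error checks are illustrative assumptions, not the actual wrapper code.

```cpp
#include <amgx_c.h>
#include <cstdio>

int main()
{
  AMGX_initialize();

  // Illustrative config string: a preconditioned CG solver with a fixed
  // iteration cap and tolerance (the key/value choices here are an
  // assumption, not the wrapper's actual configuration).
  AMGX_config_handle cfg;
  AMGX_config_create(&cfg, "config_version=2, solver=PCG, max_iters=100, "
                           "tolerance=1e-8, monitor_residual=1");

  // "Simple" resources: one GPU, no MPI communicator (shared memory only).
  AMGX_resources_handle rsrc;
  AMGX_resources_create_simple(&rsrc, cfg);

  // dDDI mode: device data, double-precision matrix/vectors, int indices.
  AMGX_matrix_handle A;
  AMGX_vector_handle x, b;
  AMGX_solver_handle solver;
  AMGX_matrix_create(&A, rsrc, AMGX_mode_dDDI);
  AMGX_vector_create(&x, rsrc, AMGX_mode_dDDI);
  AMGX_vector_create(&b, rsrc, AMGX_mode_dDDI);
  AMGX_solver_create(&solver, rsrc, AMGX_mode_dDDI, cfg);

  // Toy 3x3 SPD system, full CSR storage with 0-based indices:
  // A = [4 1 0; 1 3 0; 0 0 2],  b = [1 2 3]^T
  const int n = 3, nnz = 5;
  int    rowPtr[] = { 0, 2, 4, 5 };
  int    colIdx[] = { 0, 1, 0, 1, 2 };
  double vals[]   = { 4.0, 1.0, 1.0, 3.0, 2.0 };
  double rhs[]    = { 1.0, 2.0, 3.0 };
  double sol[n];

  // upload_all takes host pointers and handles the device transfer itself.
  AMGX_matrix_upload_all(A, n, nnz, 1, 1, rowPtr, colIdx, vals, NULL);
  AMGX_vector_upload(b, n, 1, rhs);
  AMGX_vector_set_zero(x, n, 1);  // zero initial guess

  AMGX_solver_setup(solver, A);   // builds the solver/preconditioner setup
  AMGX_solver_solve(solver, b, x);
  AMGX_vector_download(x, sol);
  std::printf("x = [%g, %g, %g]\n", sol[0], sol[1], sol[2]);

  AMGX_solver_destroy(solver);
  AMGX_vector_destroy(x); AMGX_vector_destroy(b);
  AMGX_matrix_destroy(A);
  AMGX_resources_destroy(rsrc);
  AMGX_config_destroy(cfg);
  AMGX_finalize();
  return 0;
}
```

Once #18 gives Jive a global dof numbering, the same code path could switch to `AMGX_resources_create` with an MPI communicator and partitioned matrix uploads, using AmgX in its native distributed-memory mode.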

github-project-automation bot moved this from In review to Done in CUDA support on Apr 29, 2024