Scalable training of graph convolutional neural networks for fast and accurate predictions of HOMO-LUMO gap in molecules

SUMMARY

    To effectively process the large volumes of data needed to train large, complex GCNN models, both data loading and model training must scale on multi-node hybrid CPU-GPU high-performance computing (HPC) resources. The training is scaled with distributed data parallelism (DDP), which distributes batches of data across different processes. For this study, the authors use HydraGNN, a library they developed for scalable data reading and GCNN training with portability across a broad variety of computational resources. They also use the ADIOS high-performance data management library for efficient storage and reading …
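As a rough illustration of the DDP pattern the summary describes, here is a minimal PyTorch sketch. This is not HydraGNN's actual training loop: the linear model and random tensors are placeholders standing in for a GCNN and the molecular graph dataset, and all hyperparameters are arbitrary.

```python
# Minimal sketch of distributed data parallel (DDP) training in PyTorch.
# Placeholders throughout; not HydraGNN's actual code.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # One process per GPU; rank and world size are supplied by the
    # launcher (e.g. torchrun, or an MPI wrapper on an HPC system).
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    device = rank % torch.cuda.device_count()

    # Placeholder dataset: in the paper's setting this would be
    # molecular graphs with HOMO-LUMO gap labels read through ADIOS.
    features = torch.randn(1024, 16)
    labels = torch.randn(1024, 1)
    dataset = TensorDataset(features, labels)

    # DistributedSampler gives each process a disjoint shard of the
    # data: the "distribute data in batches across processes" step.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    # Placeholder model standing in for a GCNN; DDP wraps it so that
    # gradients are all-reduced across processes on backward().
    model = torch.nn.Linear(16, 1).to(device)
    model = DDP(model, device_ids=[device])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.MSELoss()

    for epoch in range(5):
        sampler.set_epoch(epoch)  # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()  # gradient all-reduce happens here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=<gpus> train_ddp.py`, each process trains on its own data shard while DDP keeps model replicas synchronized.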
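Along the same lines, a sketch of storing and reading an array with the adios2 high-level Python bindings. This is only illustrative and is not HydraGNN's I/O layer: the file name `gaps.bp` and variable name `gap` are hypothetical, and the `adios2.open` high-level API shown here is from the adios2 2.x series and may differ in other releases.

```python
# Illustrative use of the adios2 high-level Python API (2.x series);
# file name "gaps.bp" and variable name "gap" are hypothetical.
import numpy as np
import adios2

gaps = np.linspace(0.5, 5.0, num=8)  # placeholder HOMO-LUMO gap values (eV)

# Write the array as a global array into an ADIOS BP file.
with adios2.open("gaps.bp", "w") as fw:
    shape = [gaps.size]   # global dimensions
    start = [0]           # this process's offset
    count = [gaps.size]   # this process's block size
    fw.write("gap", gaps, shape, start, count)

# Read the variable back in file (random-access) mode.
with adios2.open("gaps.bp", "r") as fr:
    data = fr.read("gap")
    print(data)
```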

     
