The goals of the high performance biomedical computing program are to identify and solve computational problems in biomedicine that can benefit from high performance computing and communication hardware, modern software engineering principles, and efficient algorithms. This effort includes providing high performance parallel computer systems for NIH staff and developing parallel algorithms for biomedical applications. Using high performance parallel computers, biomedical scientists can greatly reduce the time needed to complete computationally intensive tasks and adopt new approaches to processing experimental data. Parallel computing may allow more data to be included in a calculation, a more accurate result to be obtained, a long computation to be completed sooner, or a new algorithm or more realistic model to be implemented. With proper network connections and an interactive user interface, parallel computing is readily available to biomedical researchers in the laboratory or clinic from their own workstations.

In addressing these computational challenges, CBEL is developing algorithms for a number of biomedical applications where computational speedup is important. These include image processing of electron micrographs, radiation treatment planning, medical imaging, protein and nucleic acid sequence analysis, human genetic linkage analysis, protein folding prediction, nuclear magnetic resonance spectroscopy, x-ray crystallography, quantum chemical methods, and molecular dynamics simulations. The ultimate goal is for high performance parallel computing to facilitate the research conducted at NIH.

While developing these computationally demanding applications, CBEL is investigating the following high performance computing issues: partitioning a problem into many parts that can be executed independently on different processors; designing the parts so that the computing load can be distributed evenly over the available processors or balanced dynamically; designing algorithms so that the number of processors is a parameter and the algorithm can be configured dynamically for the available machine; developing tools and environments for producing portable parallel programs; monitoring system performance; and proving that a parallel algorithm on a given machine meets its specifications.
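The partitioning and load-balancing issues listed above can be made concrete with a small example. The following sketch is not drawn from any CBEL application; it is a minimal illustration in Python, with a hypothetical score_sequence kernel standing in for a real biomedical computation, of how a problem can be split into independent tasks, how the number of processors can be treated as a runtime parameter, and how a shared work queue balances the load dynamically across workers.

    """Minimal sketch (hypothetical, not CBEL code): partitioning work into
    independent tasks and balancing them dynamically over a configurable
    number of processes via a shared task queue."""
    import multiprocessing as mp

    def score_sequence(seq):
        # Hypothetical per-task kernel standing in for a real biomedical
        # computation (e.g., scoring one sequence).
        return sum(ord(c) for c in seq) % 97

    def worker(task_queue, result_queue):
        # Each process pulls tasks until it sees the sentinel (None), so
        # faster or less-loaded processors naturally take on more work.
        while True:
            task = task_queue.get()
            if task is None:
                break
            idx, seq = task
            result_queue.put((idx, score_sequence(seq)))

    def run_parallel(sequences, n_procs=None):
        # The number of processors is a parameter; default to whatever
        # the available machine provides.
        n_procs = n_procs or mp.cpu_count()
        task_queue, result_queue = mp.Queue(), mp.Queue()
        procs = [mp.Process(target=worker, args=(task_queue, result_queue))
                 for _ in range(n_procs)]
        for p in procs:
            p.start()
        for task in enumerate(sequences):   # partition: one task per sequence
            task_queue.put(task)
        for _ in procs:                     # one sentinel per worker
            task_queue.put(None)
        results = [result_queue.get() for _ in sequences]
        for p in procs:
            p.join()
        return [score for _, score in sorted(results)]

    if __name__ == "__main__":
        print(run_parallel(["ACGT", "GATTACA", "TTAGGG"], n_procs=2))

Pulling tasks from a shared queue, rather than assigning fixed blocks of work up front, is the simplest form of dynamic load balancing: processors that finish early simply take more tasks, and the same code runs unchanged whether two or two hundred workers are available.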
Dharmaraj, Christopher D; Thadikonda, Kishan; Fletcher, Anthony R et al. (2009) Reconstruction for Time-Domain In Vivo EPR 3D Multigradient Oximetric Imaging-A Parallel Processing Perspective. Int J Biomed Imaging 2009:528639
Lau, William W; Johnson, Calvin A; Becker, Kevin G (2007) Rule-based human gene normalization in biomedical text with confidence estimation. Comput Syst Bioinformatics Conf 6:371-9
Becker, Kevin G; Barnes, Kathleen C; Bright, Tiffani J et al. (2004) The genetic association database. Nat Genet 36:431-2
Joy, Deirdre A; Feng, Xiaorong; Mu, Jianbing et al. (2003) Early origin and recent expansion of Plasmodium falciparum. Science 300:318-21