The goals of the High Performance Biomedical Computing Program are to identify and solve complex biomedical problems that can benefit from high performance computing and communications hardware, modern software engineering principles, and efficient algorithms. This effort includes providing high performance parallel computer systems for the NIH and developing parallel algorithms for biomedical applications. Using a high performance parallel computer, biomedical scientists can greatly reduce the time needed to complete computationally intensive tasks and can adopt new approaches for processing experimental data. This may allow the inclusion of more data in a calculation, the determination of a more accurate result, a reduction in the time needed to complete a long calculation, or the implementation of a new algorithm or a more realistic model. With high bandwidth network connections and interactive user interfaces, parallel computing is readily accessible to a biomedical researcher in the laboratory or clinic at the investigator's computer workstation. In addressing these computational challenges, the Computational Bioscience and Engineering Laboratory (CBEL) is developing algorithms for a wide range of biomedical applications where computational speed and advanced visualization techniques are important. These include image processing of electron micrographs, medical imaging, radiation treatment planning, electron paramagnetic resonance imaging and spectroscopy, human genetic linkage analysis, cDNA microarray data analysis, protein folding prediction, nuclear magnetic resonance spectroscopy, x-ray crystallography, quantum chemical methods, and molecular dynamics simulations. The ultimate goal is for high performance, parallel computing to facilitate the science that is done at the NIH.
While developing these computationally demanding applications, CBEL is investigating the following high performance computing issues: partitioning a problem into many parts that can be executed independently on different processors; designing the parts so that the computing load can be distributed evenly over the available processors or balanced dynamically; designing algorithms so that the number of processors is a parameter and the algorithms can be configured dynamically for the available machine; developing tools and environments for producing portable parallel programs; incorporating real-time data visualization into the user environment; monitoring system performance; and proving that a parallel algorithm on a given machine meets its specifications.
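The first three issues above can be illustrated with a brief sketch. This is not drawn from CBEL's codebase; it is a minimal, generic example in which a workload is partitioned into nearly equal chunks (even load per processor) and the processor count is an explicit parameter, so the same code can be configured for whatever machine is available:

```python
from multiprocessing import Pool


def partition(data, n_parts):
    """Split data into n_parts nearly equal chunks, so each
    processor receives an even share of the computing load."""
    size, remainder = divmod(len(data), n_parts)
    chunks, start = [], 0
    for i in range(n_parts):
        # The first `remainder` chunks take one extra element.
        end = start + size + (1 if i < remainder else 0)
        chunks.append(data[start:end])
        start = end
    return chunks


def work(chunk):
    # Stand-in for a compute-intensive kernel (e.g., processing
    # one tile of an electron micrograph); here, a sum of squares.
    return sum(x * x for x in chunk)


def run_parallel(data, n_procs):
    """The number of processors is a parameter: the same algorithm
    runs unchanged on machines of different sizes."""
    with Pool(n_procs) as pool:
        return sum(pool.map(work, partition(data, n_procs)))
```

Static partitioning like this assumes each element costs roughly the same to process; when per-element cost varies, the load would instead be balanced dynamically, for example by having workers pull small chunks from a shared queue.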
Dharmaraj, Christopher D; Thadikonda, Kishan; Fletcher, Anthony R et al. (2009) Reconstruction for Time-Domain In Vivo EPR 3D Multigradient Oximetric Imaging-A Parallel Processing Perspective. Int J Biomed Imaging 2009:528639
Lau, William W; Johnson, Calvin A; Becker, Kevin G (2007) Rule-based human gene normalization in biomedical text with confidence estimation. Comput Syst Bioinformatics Conf 6:371-9
Becker, Kevin G; Barnes, Kathleen C; Bright, Tiffani J et al. (2004) The genetic association database. Nat Genet 36:431-2
Joy, Deirdre A; Feng, Xiaorong; Mu, Jianbing et al. (2003) Early origin and recent expansion of Plasmodium falciparum. Science 300:318-21