This subproject is one of many research subprojects utilizing the resources provided by a Center grant funded by NIH/NCRR. The subproject and investigator (PI) may have received primary funding from another NIH source, and thus could be represented in other CRISP entries. The institution listed is for the Center, which is not necessarily the institution for the investigator.

Introduction: Parallel imaging has become a widely used clinical tool to accelerate MR data acquisition and improve diagnostic utility. Among the various reconstruction techniques available, autocalibrating methods have proven advantageous because they provide intrinsic coil sensitivity estimation and exhibit relatively benign artifacts, especially at reduced FOVs [1,2]. Recently, many advances have improved on the initial autocalibrated methods in both image space and k-space. Specifically for k-space, several authors have shown that the accuracy of the method can be improved by using a 2D k-space kernel [3-5]; however, this improved accuracy comes at the expense of increased computation time. It has also been shown that the advantages of a 2D k-space kernel can be realized with 1D kernels in hybrid (x, ky) space, where a unique 1D kernel is used at each x location [7]. Acquired data are transformed into hybrid space by applying a Fourier transform along the readout direction. However, finding the weights in hybrid space is computationally intensive. In addition, it has been shown that computation time can be reduced by transforming 1D k-space kernel weights into image space and reconstructing the image in the image domain [6]. The computation involved in autocalibrating methods can be separated into two steps: 1) finding the kernel weights, and 2) using the kernel weights to synthesize missing k-space lines and thereby remove the aliasing artifacts caused by insufficient gradient encoding. We show that the most computationally efficient means of obtaining the accuracy of a 2D k-space kernel is to find the kernel weights in k-space and to synthesize missing k-space lines/remove aliasing artifacts in either hybrid or image space, depending on the application (an illustrative sketch of these two steps follows the references).

References: [1] Griswold MA et al. MRM 47:1202-1210, 2002. [2] Griswold MA et al. MRM 52:1118-1126, 2004. [3] Kholmovski EG et al. ISMRM 2005, 2672. [4] Qu P et al. ISMRM 2005, 2667. [5] Wang Z et al. MRM 54:738-742, 2005. [6] Wang J et al. ISMRM 2005, 2428. [7] Skare S et al. ISMRM 2005, 2422.
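To make the two computational steps described in the Introduction concrete, the following is a minimal sketch, not the authors' implementation: it assumes Cartesian undersampling at R = 2 along ky, k-space data shaped (coils, ky, kx), and a small 2D (ky x kx) GRAPPA-style kernel whose size and offsets are illustrative choices. Step 1 fits the kernel weights by least squares over a fully sampled ACS block; step 2 applies the weights to synthesize the skipped ky lines. The same fitted weights could instead be Fourier-transformed along kx to obtain per-x 1D hybrid-space kernels or image-space unaliasing maps, which is the point the abstract makes about choosing the application domain.

import numpy as np

def fit_kernel(acs, dky=(-1, 1), dkx=(-1, 0, 1)):
    """Step 1: fit kernel weights in k-space from a fully sampled ACS block.

    acs : complex array (nc, ny_acs, nx) of calibration k-space data.
    Returns weights of shape (nc, nc * len(dky) * len(dkx)).
    """
    nc, ny, nx = acs.shape
    src, tgt = [], []
    for ky in range(1, ny - 1):          # targets whose neighbours lie inside the ACS
        for kx in range(1, nx - 1):
            # source points: acquired neighbours of the "missing" target point
            s = [acs[c, ky + oy, kx + ox] for c in range(nc)
                 for oy in dky for ox in dkx]
            src.append(s)
            tgt.append(acs[:, ky, kx])
    src = np.array(src)                  # (npts, nc * len(dky) * len(dkx))
    tgt = np.array(tgt)                  # (npts, nc)
    # least-squares fit of tgt = src @ w.T, one set of weights per target coil
    w, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return w.T

def apply_kernel(kspace, weights, dky=(-1, 1), dkx=(-1, 0, 1)):
    """Step 2: synthesize the missing (here, odd) ky lines in k-space.

    The fitted weights could equivalently be transformed along kx into
    per-x 1D hybrid-space kernels or into image-space unaliasing maps.
    """
    nc, ny, nx = kspace.shape
    out = kspace.copy()
    for ky in range(1, ny - 1, 2):       # assumes odd ky lines were skipped (R = 2)
        for kx in range(1, nx - 1):
            s = np.array([kspace[c, ky + oy, kx + ox] for c in range(nc)
                          for oy in dky for ox in dkx])
            out[:, ky, kx] = weights @ s
    return out

The sketch keeps the calibration (step 1) in k-space, where a single small least-squares problem covers the whole dataset, while the application (step 2) is a pointwise linear combination that can be moved to hybrid or image space by Fourier-transforming the weights, which is the efficiency trade-off the abstract argues for.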