In a collaboration with Gatan, Inc., we have reported the development of a novel multi-specimen imaging system for high-throughput transmission electron microscopy that circumvents the time-consuming steps involved in manual specimen loading. This cartridge-based loading system, called the Gatling, permits the sequential examination of as many as 100 specimens in the microscope for room temperature electron microscopy, using mechanisms for rapid and automated specimen exchange. The software for operating the Gatling and for automated data acquisition has been implemented in an updated version of our in-house program AutoEM. In the current implementation of the system, the time required to deliver 95 specimens into the microscope and collect overview images from each is about 13 hours, or roughly 8 minutes per specimen. Regions of interest are identified from a low magnification atlas generated for each specimen, and an unlimited number of higher magnification images can subsequently be acquired from these regions using fully automated data acquisition procedures that can be controlled from a remote interface. We anticipate that the availability of the Gatling will greatly accelerate data acquisition for a variety of applications in biology, materials science, and nanotechnology that require rapid screening and image analysis of multiple specimens. (see Lefman et al (2007) for more details)

Strategies for the determination of 3D structures of biological macromolecules using electron crystallography and single particle electron microscopy utilize powerful tools for averaging information obtained from 2D projection images of structurally homogeneous specimens. In contrast, electron tomographic approaches have often been used to study the 3D structures of heterogeneous, one-of-a-kind objects such as whole cells, where image averaging strategies are not applicable. Complex entities such as cells and viruses nevertheless contain multiple copies of numerous macromolecules that can individually be subjected to 3D averaging. We have developed a complete framework for alignment, classification, and averaging of volumes derived by electron tomography that is computationally efficient and effectively accounts for the missing wedge that is inherent to limited-angle electron tomography. By modeling the missing data as a multiplicative mask in reciprocal space, we have shown that the effect of the missing wedge can be accounted for seamlessly in all alignment and classification operations. We solve the alignment problem using the convolution theorem in harmonic analysis, thus eliminating the need for approaches that require exhaustive angular search, and adopt an iterative approach to alignment and classification that does not require the use of external references. We also demonstrated that our method can be successfully applied to 3D classification and averaging of phantom volumes as well as experimentally obtained tomograms of GroEL, where the outcomes of the analysis can be quantitatively compared against the expected results. (see Bartesaghi et al (2008) for more details)
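To make the masking idea concrete, the sketch below shows how a multiplicative reciprocal-space mask enters a constrained cross-correlation between two subvolumes. It is a minimal, hypothetical Python/NumPy illustration rather than the published implementation: it handles only translational alignment (the published method treats rotations via the convolution theorem in harmonic analysis), assumes simple binary single-axis wedge masks, and omits normalization; all function names are ours.

    import numpy as np

    def wedge_mask(shape, tilt_range_deg=60.0):
        # Binary reciprocal-space mask for a single-axis missing wedge.
        # The axis convention (tilt about y, beam along z) is an assumption.
        kz, _, kx = np.meshgrid(np.fft.fftfreq(shape[0]),
                                np.fft.fftfreq(shape[1]),
                                np.fft.fftfreq(shape[2]),
                                indexing="ij")
        angle = np.degrees(np.arctan2(np.abs(kz), np.abs(kx)))
        return (angle <= tilt_range_deg).astype(float)

    def constrained_shift(vol_a, vol_b, mask_a, mask_b):
        # Cross-correlate using only the Fourier components sampled in
        # BOTH volumes, so the missing wedges cannot bias the alignment.
        fa, fb = np.fft.fftn(vol_a), np.fft.fftn(vol_b)
        common = mask_a * mask_b          # overlap of the two wedges
        cc = np.fft.ifftn(common * fa * np.conj(fb)).real
        peak = np.unravel_index(np.argmax(cc), cc.shape)
        # wrap shifts larger than half the box to negative values
        return tuple(p - n if p > n // 2 else p
                     for p, n in zip(peak, vol_a.shape))

Because the mask is applied multiplicatively in reciprocal space, the same masking step carries through unchanged to classification and averaging, which is the point made above.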
Another area of focus has been image processing and segmentation. Previous studies using nonlinear anisotropic diffusion, wavelet-based methods, and related filtering approaches have already demonstrated the value of image denoising in various 2D and 3D datasets. The existing methods usually consider clean data (or assume that clean data is available), artificially add different types of noise to the clean data, and then denoise the noisy data with various algorithms under the assumption that the noise statistics are known. We have investigated the use of transform-domain denoising techniques and feature extraction to improve quantitative interpretation of cryo-electron tomograms of viruses and cells. In our approach, we use several metrics for analysis, including a Kullback-Leibler (KL) distance based goodness-of-fit test, Fourier ring correlation, and single-image SNR, to iteratively select the optimal denoising algorithm for a given 3D volume. Using these methods, we show that denoising, when used with care, is an enormously powerful tool for the automated interpretation of complex 3D data sets at high throughput. (see Narasimha, Aganj et al (2008) for more details)
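Of these metrics, Fourier ring correlation (FRC) is the most compact to illustrate. The sketch below is a generic, minimal 2D implementation (the published analysis operates on 3D volumes; the function name and binning choices here are our own assumptions):

    import numpy as np

    def fourier_ring_correlation(img_a, img_b):
        # Correlation of two images' Fourier transforms, averaged over
        # rings of constant spatial frequency; one value per ring.
        fa = np.fft.fftshift(np.fft.fft2(img_a))
        fb = np.fft.fftshift(np.fft.fft2(img_b))
        ny, nx = img_a.shape
        yy, xx = np.indices((ny, nx))
        radius = np.hypot(yy - ny // 2, xx - nx // 2).astype(int)
        n_rings = min(ny, nx) // 2
        frc = np.zeros(n_rings)
        for k in range(n_rings):
            ring = radius == k
            num = np.abs(np.sum(fa[ring] * np.conj(fb[ring])))
            den = np.sqrt(np.sum(np.abs(fa[ring]) ** 2) *
                          np.sum(np.abs(fb[ring]) ** 2))
            frc[k] = num / den if den > 0 else 0.0
        return frc

Comparing a denoised slice against the corresponding raw slice ring by ring in this way indicates which frequency bands a candidate filter has preserved or suppressed, which is one ingredient in choosing among denoising algorithms.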
In continued developments for automated image analysis, we have developed a machine-learning tool for automatic texton-based joint classification and segmentation of mitochondria in MNT-1 cells imaged using ion-abrasion scanning electron microscopy (IA-SEM). To diagnose signatures that may be unique to particular cellular states, automatic tools requiring minimal user intervention need to be developed for high-throughput data mining and analysis of these large volume data sets (typically 2 GB per cell). Challenges in developing such a tool for 3D electron microscopy arise in particular from low contrast and low signal-to-noise ratios (SNR). Our approach is based on block-wise classification of images into a trained list of regions. Classification is performed using a k-nearest neighbor (k-NN) classifier, support vector machines (SVMs), adaptive boosting (AdaBoost), and histogram matching with an NN classifier; in addition, we study the computational complexity vs. segmentation accuracy tradeoff of these classifiers (a simplified sketch of the block-wise classification idea appears at the end of this section). Segmentation results demonstrate that our approach, using minimal training data, performs close to semi-automatic segmentation with the variational level-set method and to manual segmentation by an experienced user. We apply the method to investigate quantitative parameters such as the volume of the cytoplasm occupied by mitochondria, differences between the surface areas of inner and outer membranes, and mean mitochondrial width, representative quantities that may be relevant to distinguishing cancer cells from normal cells. (see Narasimha, Ouyang et al (2008) for more details)

Chemical definition of complex protein assemblies is integral to interpreting 3D structure. We have explored a general approach for determining the absolute amounts and relative stoichiometry of proteins in a mixture using fluorescence and mass spectrometry. We engineered a gene to express green fluorescent protein (GFP) as part of a synthetic fusion protein (GABGFP) in Escherichia coli, to serve as a spectroscopic standard for the quantification of an analogous stable isotope-labeled, non-fluorescent fusion protein (GAB*) and for the quantification and stoichiometric analysis of purified transducin, a heterotrimeric G-protein complex. Using our approach, the stoichiometry of the three transducin subunits was measured to be 1:1.1:1.15 over a 5-fold range of labeled internal standard, with a relative standard deviation of 9%. Fusing a unique genetically encoded spectroscopic signal element with concatenated proteotypic peptides provides a powerful method to accurately quantify and determine the relative stoichiometry of multiple proteins present in complexes or mixtures that cannot readily be assessed using classical gravimetric, enzymatic, or antibody-based technologies. (see Nanavati et al (2008) for more details)
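The quantification arithmetic behind this scheme is simple enough to sketch directly: the GFP fluorescence of the standard fixes the absolute amount of the isotope-labeled internal standard, and labeled-to-unlabeled peak-area ratios from the mass spectra then yield each subunit. All numbers below are invented for illustration; only the final ratios mirror the measured 1:1.1:1.15 result.

    # Hypothetical illustration only -- values are made up.
    standard_pmol = 5.0                   # from GFP fluorescence calibration

    peak_ratio = {"alpha": 1.00,          # unlabeled/labeled MS peak-area
                  "beta":  1.10,          # ratio for one proteotypic
                  "gamma": 1.15}          # peptide per transducin subunit

    amount_pmol = {s: r * standard_pmol for s, r in peak_ratio.items()}
    ref = amount_pmol["alpha"]
    stoichiometry = {s: round(a / ref, 2) for s, a in amount_pmol.items()}
    print(stoichiometry)   # {'alpha': 1.0, 'beta': 1.1, 'gamma': 1.15}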
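Finally, returning to the block-wise IA-SEM segmentation approach described above, the following toy sketch illustrates classifying image blocks from trained examples. It substitutes a plain intensity histogram for the texton features and uses only the k-NN classifier; the synthetic data, block size, and feature choice are all stand-ins, not the published pipeline.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def block_histograms(image, block=16, bins=32):
        # Split the image into non-overlapping blocks and describe each
        # block by its intensity histogram (a stand-in for textons).
        h, w = image.shape
        feats = []
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                patch = image[i:i + block, j:j + block]
                hist, _ = np.histogram(patch, bins=bins,
                                       range=(0.0, 1.0), density=True)
                feats.append(hist)
        return np.array(feats)

    # Synthetic stand-ins for annotated training regions: toy
    # "mitochondria" blocks are darker than "background" blocks.
    rng = np.random.default_rng(0)
    mito = rng.normal(0.3, 0.05, (64, 64)).clip(0.0, 1.0)
    back = rng.normal(0.7, 0.05, (64, 64)).clip(0.0, 1.0)
    X = np.vstack([block_histograms(mito), block_histograms(back)])
    y = np.array([1] * 16 + [0] * 16)     # 1 = mitochondrion

    clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    test = rng.normal(0.3, 0.05, (64, 64)).clip(0.0, 1.0)
    print(clf.predict(block_histograms(test)))   # one label per block

Swapping in an SVM or AdaBoost classifier at the clf line is the kind of substitution studied in the complexity vs. accuracy comparison mentioned above.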

Agency: National Institutes of Health (NIH)
Institute: National Cancer Institute (NCI)
Type: Intramural Research (Z01)
Project #: 1Z01BC010826-02
Application #: 7733260
Support Year: 2
Fiscal Year: 2008
Total Cost: $587,287
Name: National Cancer Institute Division of Basic Sciences
Country: United States
Milne, Jacqueline L S; Subramaniam, Sriram (2009) Cryo-electron tomography of bacteria: progress, challenges and future prospects. Nat Rev Microbiol 7:666-75
Lengyel, Jeffrey S; Milne, Jacqueline L S; Subramaniam, Sriram (2008) Electron tomography in nanoparticle imaging and analysis. Nanomedicine (Lond) 3:125-31
Lefman, Jonathan; Morrison, Robert; Subramaniam, Sriram (2007) Automated 100-position specimen loader and image acquisition system for transmission electron microscopy. J Struct Biol 158:318-26