Current graphics and visualization systems must be built to handle gigantic data sets. Such data sets include large scientific simulations, such as nuclear and power-plant simulations, data relevant to national priorities and homeland security, and digital models of defense and commercial equipment such as tanks, aircraft, ships, and power plants. These data sets cannot fit into the main memory of a machine; hence, the performance of a visualization system depends on how efficiently it can process the data in segments while still providing a holistic visualization for efficient and correct decision making. This project involves fundamental research into the analysis of methods that process these large geometry data sets for computer graphics and visualization applications. Using this analysis, we model the data access patterns of common geometry processing algorithms. Such models can be used to organize data coherently in secondary storage so that data access time is reduced, improving the performance of graphics and visualization systems.
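To make the idea of a coherent secondary-storage layout concrete, here is a minimal sketch of one well-known approach: sorting primitives by the Morton (Z-order) code of their centroids so that spatially nearby primitives land in nearby disk blocks. The function names and the choice of Morton ordering are illustrative assumptions, not a method prescribed by this project.

```python
# Illustrative sketch (assumed technique): reorder primitives by the
# Morton (Z-order) code of their centroids so that spatially nearby
# primitives are stored near each other on disk.

def morton2d(x: int, y: int, bits: int = 16) -> int:
    """Interleave the bits of quantized x and y coordinates."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bits at odd positions
    return code

def coherent_order(centroids, bits: int = 16):
    """Return primitive indices sorted by the Morton code of their centroids.

    `centroids` is a sequence of (x, y) points in the unit square [0, 1).
    """
    scale = (1 << bits) - 1

    def key(i):
        x, y = centroids[i]
        return morton2d(int(x * scale), int(y * scale), bits)

    return sorted(range(len(centroids)), key=key)

# Spatially nearby centroids receive nearby positions in the layout:
order = coherent_order([(0.1, 0.1), (0.9, 0.9), (0.12, 0.11), (0.88, 0.92)])
```

In this example the two centroids near (0.1, 0.1) end up adjacent in the layout, as do the two near (0.9, 0.9), so an algorithm that traverses a spatial neighborhood reads from a small number of disk blocks.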
Coarse data analysis systems derive aggregate information from the data and are hence useful in streaming applications and out-of-core implementations; fine data analysis systems operate on individual data points, and their performance is dictated by data access patterns. In this research, we investigate whether there is a natural grouping or partitioning of primitives that describes the data access patterns of the most common geometry processing algorithms. We explore the existence of a function that would optimize the grouping of primitives and thus benefit a large class of geometry processing algorithms. This study will enable us to suggest an optimal layout for geometric data that works best for common geometric algorithms.
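One simple way to evaluate such a grouping function is to count how many disk-block fetches a given layout incurs under an algorithm's access trace. The sketch below is a hypothetical cost model of my own devising, assuming an LRU block cache; the names (`block_misses`, the parameters) and the specific cost function are illustrative assumptions, not definitions from this project.

```python
# Hypothetical cost model: count disk-block fetches ("misses") that an
# access trace incurs under a given primitive layout, assuming a small
# LRU cache of blocks. A better layout yields fewer misses on the traces
# of common geometry processing algorithms.
from collections import OrderedDict

def block_misses(layout, trace, block_size=4, cache_blocks=2):
    """Simulate an LRU block cache over an access trace.

    `layout` maps primitive id -> position on disk; `trace` is the
    sequence of primitive ids an algorithm touches.
    """
    cache = OrderedDict()  # block id -> None, kept in LRU order
    misses = 0
    for prim in trace:
        block = layout[prim] // block_size
        if block in cache:
            cache.move_to_end(block)       # refresh recency
        else:
            misses += 1                     # block must be fetched
            cache[block] = None
            if len(cache) > cache_blocks:
                cache.popitem(last=False)   # evict least recently used
    return misses
```

For example, for a sequential trace over eight primitives, a contiguous layout touches each block once, while a scattered layout forces repeated fetches of the same blocks; comparing the two miss counts quantifies the benefit of a coherent grouping.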