This research has two aspects: proving lower bounds on the resources needed to solve problems and finding algorithms that meet those resource bounds. A major focus is on how various features of parallel models affect the resources needed to perform computations. Large shared-memory machines are likely to be implemented on processor interconnection networks, and this implementation suggests shared-memory models with primitive operations even more powerful than those of previously studied machines, such as unbounded fan-in threshold functions and other unbounded fan-in associative operations. Under investigation are the limitations and capabilities of these more powerful machines, which have recently been proposed as useful abstractions of large parallel computers. There remain a number of very basic computational problems, such as division, connectivity, and transitive closure, for which parallel algorithms exist but whose parallel complexity is not well understood. Improving the resource usage for these basic problems is particularly important because the algorithms that solve them underlie the parallel algorithms for many more complex problems.
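As a brief illustration of one such basic problem, the sketch below (not part of the original text) computes transitive closure by repeated Boolean squaring of the adjacency matrix. On a model with unbounded fan-in associative operations, each squaring step is an unbounded fan-in OR over AND terms, so roughly log n rounds suffice; the function names here are purely illustrative.

```python
# Hedged sketch: transitive closure by repeated Boolean squaring.
# Each squaring corresponds to one round of unbounded fan-in ORs
# over bounded fan-in ANDs; ceil(log2 n) rounds cover all paths.

def boolean_square(a):
    """One squaring: a[i][j] := OR over k of (a[i][k] AND a[k][j])."""
    n = len(a)
    return [[any(a[i][k] and a[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(adj):
    """Reachability matrix of a directed graph given as a 0/1 adjacency matrix."""
    n = len(adj)
    # Add self-loops so shorter paths are preserved under squaring.
    a = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for _ in range(max(1, (n - 1).bit_length())):  # about ceil(log2 n) rounds
        a = boolean_square(a)
    return a
```

Run sequentially this sketch does O(n^3 log n) work; its point is only to show why the problem parallelizes to logarithmic depth on the powerful shared-memory models discussed above, while its exact parallel complexity remains the open question the text refers to.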