The aim of this research project is to design fast and accurate numerical algorithms for the solution of large classes of mathematical equations that arise in engineering and science. The main concerns are the solution of integro-differential equations on complex domains and of signal and image processing problems. The approach formulates the estimate of the solution at a point as the value, at that point, of the smoothest function (on average) that is consistent with the given data. The resulting discrete equations can be shown to have specially structured matrices, and this structure can be exploited to build fast solvers. The resulting methods have two main computational advantages. First, they can be designed to avoid gridding or triangulation of the complex domain. Second, they exhibit local convergence; that is, the rate at which the approximant converges to the solution at a point depends only on the local smoothness of the solution. These advantages enable the methods to handle equations with complicated singularity structures with relative ease.

Let Hs denote a Sobolev Hilbert space whose elements have s > 1 fractional derivatives. Suppose an unknown function f in Hs satisfies the equation L(f) = g, where L is a linear operator and g is a known function. Let Ln denote n linear functionals on Hr, and let q denote a linear functional on Hs. Then the best minimax estimate of q(f) can be computed from the minimum Sobolev norm function p in Hs that satisfies the constraints Ln(L(p)) = Ln(g). This p can be computed very rapidly because, in a suitable representation, the optimal p satisfies a set of equations with Fast Multipole Method (FMM) structure. It is also possible to work in L1-type Sobolev spaces; in that case the optimization problem is more complicated and reduces to linear programming problems, for which fast solvers that exploit the underlying FMM structure of the constraint matrix are being developed. The theoretical work consists of studying the convergence of the solution as n increases and in proving the FMM structure of the resulting discrete equations. The algorithmic work consists of designing fast algorithms for constructing the FMM representation and for the direct (non-iterative) solution of these equations. The application work consists of applying these ideas to image segmentation and multi-rate signal processing. In addition, mesh-free, locally convergent schemes are being developed for the solution of integral equations and elliptic partial differential equations on complex domains in two dimensions.
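To make the minimum Sobolev norm idea concrete, the following sketch computes a minimum H^s-norm trigonometric interpolant through point samples, which is the special case where L is the identity and the functionals Ln are point evaluations. It is only an illustration under those assumptions: the dense pseudoinverse solve stands in for the fast FMM-based solvers described above, and the function names, the Fourier basis, and the parameter choices are all hypothetical.

```python
# Minimal sketch (not the project's solver): minimum Sobolev norm interpolation
# in a truncated Fourier basis.  All names and parameters are illustrative.
import numpy as np

def min_sobolev_norm_interpolant(x_samples, f_samples, s=2.0, n_modes=64):
    """Coefficients of the minimum H^s-norm trigonometric polynomial p
    with p(x_j) = f(x_j) on [0, 2*pi)."""
    k = np.arange(-n_modes, n_modes + 1)            # Fourier wavenumbers
    w = (1.0 + k.astype(float) ** 2) ** (s / 2.0)   # Sobolev weights (1 + k^2)^(s/2)
    # Constraint matrix: A[j, m] = exp(i * k_m * x_j), so (A @ c)[j] = p(x_j).
    A = np.exp(1j * np.outer(x_samples, k))
    # Minimize ||W c||_2 subject to A c = f.  Substituting d = W c turns this
    # into a plain minimum-norm problem, solved here with a dense pseudoinverse.
    d = np.linalg.pinv(A / w) @ f_samples
    return d / w                                    # c = W^{-1} d

def evaluate(c, x, n_modes=64):
    """Evaluate the trigonometric polynomial with coefficients c at points x."""
    k = np.arange(-n_modes, n_modes + 1)
    return (np.exp(1j * np.outer(x, k)) @ c).real

# Usage: recover a smooth periodic function from 20 point samples.
x_s = np.linspace(0.0, 2 * np.pi, 20, endpoint=False)
c = min_sobolev_norm_interpolant(x_s, np.sin(3 * x_s) + 0.5 * np.cos(x_s))
p = evaluate(c, np.linspace(0.0, 2 * np.pi, 400, endpoint=False))
```

In the project's setting the dense pseudoinverse is replaced by fast structured solvers that exploit the FMM form of these equations; the sketch only shows the shape of the optimization problem.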

Project Report

Fast and Stable Structured Matrix Computations. Partly through our NSF-supported work, structured matrix computations have become a very active area of research. Many matrices arising in scientific computing applications are naturally "structured" in that they can be characterized by far fewer parameters than the number of entries in the matrix. Structures take diverse forms, such as sparsity (matrices with a very large number of zero entries) and Toeplitz structure (matrices that are constant along each diagonal). Often these structures are closely related and can be exploited for very significant savings in computational and storage costs. We have developed new structured algorithms based on a particular matrix structure: semi-separability. Loosely speaking, a semi-separable matrix is one whose off-diagonal submatrices all have relatively low rank to a given precision (see the first sketch below). Surprisingly large classes of matrices can be represented as semi-separable matrices and therefore admit fast matrix computations. Partly through our work, fast structured algorithms are now an accepted approach to solving many very large sparse linear systems of equations.

Communication-Avoiding Matrix Factorizations. On modern architectures, data communication has overtaken numerical computation as the dominant cost in large-scale matrix computations. Largely due to the work of Demmel and his colleagues, communication avoidance has become a central theme in current numerical linear algebra research. In this work, we consider the LU factorization and the rank-revealing QR factorization. These algorithms are workhorses of matrix computations, yet their practical performance can be far from optimal because of their excessive communication costs. We develop communication-avoiding versions of these algorithms that require orders of magnitude less communication. In both cases, the algorithms must be designed carefully so that the new patterns of computation do not introduce numerical instability.

Randomized Algorithms. A classical problem in matrix computations is the efficient and reliable approximation of a given matrix by a matrix of much lower rank, with applications throughout computational science and engineering. This problem is especially important today, as huge data sets are routinely processed with low-rank approximation techniques. Among the approaches in the literature for computing low-rank approximations, randomized algorithms have attracted much recent attention because of their reliability and computational efficiency in different application areas. Typically, such algorithms are shown to compute, with very high probability, low-rank approximations that are within a constant factor of optimal, and they are known to perform even better in many practical situations. In this work [S1], we point out a close connection between randomized algorithms and the classical subspace iteration method in numerical linear algebra (see the second sketch below). This connection provides strong new insight into both randomized algorithms and subspace iteration, and it further allows us to develop a new class of condition number estimators that are far more reliable than existing ones.
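As a toy illustration of the semi-separable structure described in the first paragraph above, the snippet below checks numerically that a large off-diagonal block of a matrix generated by a smooth kernel has low numerical rank. The log-distance kernel, grid size, and tolerance are assumptions chosen purely for illustration and are not tied to any particular application in the project.

```python
# Toy check of the semi-separable property: an off-diagonal block of a
# matrix generated by a kernel that is smooth away from the diagonal has
# low numerical rank.  Kernel, grid, and tolerance are illustrative choices.
import numpy as np

n = 1024
x = np.linspace(0.0, 1.0, n)
K = np.log(np.abs(x[:, None] - x[None, :]) + 1e-12)   # smooth off the diagonal

def numerical_rank(block, tol=1e-10):
    """Number of singular values above tol times the largest one."""
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

off_diag = K[: n // 2, n // 2 :]    # a large block away from the diagonal
print(numerical_rank(off_diag), "out of a possible", n // 2)
```

The printed numerical rank is far smaller than the block dimension, which is exactly the compressibility that semi-separable (and related hierarchical) solvers exploit.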
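The connection between randomized low-rank approximation and subspace iteration mentioned in the last paragraph can be seen in a few lines of code. The second sketch below is a generic randomized range finder with a few power (subspace) iterations; it follows the standard textbook scheme rather than the specific algorithm of [S1], and the function names and parameter defaults are assumptions.

```python
# Minimal sketch: randomized low-rank approximation via a few steps of
# subspace iteration on a Gaussian test matrix (dense NumPy only).
import numpy as np

def randomized_low_rank(A, rank, n_oversample=10, n_power_iter=2, seed=None):
    """Return Q, B with A ~= Q @ B, where Q has rank + n_oversample
    orthonormal columns obtained by randomized subspace iteration."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = rank + n_oversample
    # Sample the range of A with a Gaussian test matrix.
    Y = A @ rng.standard_normal((n, k))
    Q, _ = np.linalg.qr(Y)
    # A few power (subspace) iterations sharpen the captured subspace;
    # this is precisely the classical subspace iteration viewpoint.
    for _ in range(n_power_iter):
        Z, _ = np.linalg.qr(A.T @ Q)
        Q, _ = np.linalg.qr(A @ Z)
    B = Q.T @ A            # small (k x n) factor
    return Q, B

# Usage: approximate a numerically low-rank matrix and check the error.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 40)) @ rng.standard_normal((40, 300))
Q, B = randomized_low_rank(A, rank=40, seed=0)
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

The power iterations are what tie the randomized method back to classical subspace iteration: with zero iterations this is a plain randomized range finder, and each additional iteration is one step of subspace iteration started from a random subspace.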

Agency: National Science Foundation (NSF)
Institute: Division of Computer and Communication Foundations (CCF)
Application #: 0830764
Program Officer: Dmitry Maslov
Budget Start: 2008-09-15
Budget End: 2013-08-31
Fiscal Year: 2008
Total Cost: $307,600
Name: University of California Berkeley
City: Berkeley
State: CA
Country: United States
Zip Code: 94704