People process natural language in real time and with a very limited short-term memory. This researcher is developing a computational architecture that imitates these highly desirable attributes of human performance. The syntactic component processes sentences in real (linear) time, with quite limited, fixed memory requirements. The technique also promises to hold down grammar size significantly and to simplify semantics by systematically pruning away uninteresting ambiguities. The challenge now is to show that it scales well to large fragments of natural language syntax. The proposed work is to implement a grammar with the coverage of two published test suites, reporting results on grammar size and system performance. This will involve extending the current model to handle subcategorization, agreement, conjunction, and adjunction. A further objective is to add the ability to build semantic forms compositionally while adhering to the finite-resources hypothesis.
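The two central commitments above (linear-time processing and a fixed memory bound) can be illustrated with a minimal sketch: a deterministic shift-reduce parser that does constant work per word and caps the number of partial constituents it may hold at once. The lexicon, rules, category names, and bound below are hypothetical stand-ins for illustration, not the proposal's actual architecture.

```python
MAX_STACK = 3  # hypothetical finite-resources bound on partial constituents

# Toy lexicon: word -> syntactic category (illustrative only)
LEXICON = {"the": "Det", "dog": "N", "saw": "V", "a": "Det", "cat": "N"}

# Toy reduction rules: (left category, right category) -> parent category
RULES = {("Det", "N"): "NP", ("V", "NP"): "VP", ("NP", "VP"): "S"}

def parse(words):
    """Incrementally parse `words`, never retaining more than MAX_STACK
    partial constituents after each word is processed."""
    stack = []  # holds (category, subtree) pairs
    for w in words:
        stack.append((LEXICON[w], w))  # shift: O(1) per word
        # Reduce greedily (deterministically) while the top two match a rule;
        # this eagerness is what keeps the stack from growing with the input.
        while len(stack) >= 2 and (stack[-2][0], stack[-1][0]) in RULES:
            (lc, lt), (rc, rt) = stack[-2], stack[-1]
            stack[-2:] = [(RULES[(lc, rc)], (lt, rt))]
        if len(stack) > MAX_STACK:
            raise MemoryError("finite-resources bound exceeded")
    return stack

tree = parse("the dog saw a cat".split())
```

Because every reduction is applied as soon as it becomes available, ambiguities that a chart parser would carry forward are pruned on the spot, which is one way such an architecture can keep both memory use and grammar interactions small.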