In global parallel scheduling, all processors cooperate to schedule work. In contrast to dynamic scheduling, it generates a well-balanced load without incurring large overhead. This project explores a new method, called Runtime Incremental Parallel Scheduling (RIPS). In RIPS, the system's scheduling activity alternates with the underlying computation; tasks are incrementally scheduled in parallel by the cooperating processors. The project aims to provide a framework for the study of global parallel scheduling and its related techniques, including the development of parallel scheduling algorithms, the control of low parallelism, the design and implementation of a preemptive scheduling system, and solutions for irregular applications. The objective is to show that runtime global parallel scheduling, as a new scheduling approach, can adapt to irregular and dynamic problems and produce high-quality schedules.
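The alternation of scheduling and computation phases can be illustrated with a minimal sketch. This is not the project's actual system: it is a single-process simulation under assumed names (`rips_run`, `batch`, tasks modeled as callables returning a result plus any newly spawned tasks), showing only the core idea that a cooperative rebalancing phase alternates with a bounded computation phase.

```python
# Minimal sketch of the RIPS idea: a scheduling phase that rebalances all
# remaining tasks across processors alternates with a computation phase in
# which each processor runs a bounded batch of tasks. All names here are
# illustrative assumptions, not the project's actual interface.

def rips_run(tasks, num_procs, batch=2):
    """Alternate cooperative scheduling with computation until no tasks remain."""
    queues = [[] for _ in range(num_procs)]
    pending = list(tasks)
    completed = []
    while pending or any(queues):
        # Scheduling phase: the "processors" cooperate to rebalance the load;
        # here we simply deal all outstanding tasks round-robin for an even split.
        pending.extend(t for q in queues for t in q)
        queues = [[] for _ in range(num_procs)]
        for i, t in enumerate(pending):
            queues[i % num_procs].append(t)
        pending = []
        # Computation phase: each processor runs at most `batch` tasks; a task
        # may dynamically spawn new tasks, modeling an irregular workload.
        for q in queues:
            for t in q[:batch]:
                result, children = t()
                completed.append(result)
                pending.extend(children)
            del q[:batch]
    return completed
```

Because rebalancing happens incrementally between computation phases, tasks spawned at runtime are folded back into a well-balanced load rather than staying on the processor that created them.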