Recent advances in speech-recognition technology have made spoken language a viable interface medium for human-computer communication, promising increased acceptance of medical computer systems by physicians and other health workers. However, speech-recognition techniques have not yet advanced to the point where understanding of unconstrained speech is possible. We accordingly propose a staged research plan to study how computer graphics can be used to constrain the possible meanings of an utterance, thereby promoting physician-computer communication that is both effective and practical. We will create a prototype speech-and-graphics interface to ONCOCIN, a medical advice system previously developed in our laboratory. We will use this prototype in experiments to discover how we might expand the interaction language to support more fully the kinds of phrases physicians would want to speak to ONCOCIN's graphical datasheet. The results of these experiments will be used to extend the number and kinds of phrases the ONCOCIN interface can accept. We will then explore the use of graphics to constrain the task of recognizing limited dictated text. Using the same techniques employed for the graphical datasheet, we will design a structured progress note for reporting the status of cancer patients and implement it as a graphical interface. In a parallel endeavor, we will perform experiments to study the language used in creating progress notes by simulating a graphical progress-note entry system. The research we propose benefits from our use of ONCOCIN, a medical expert system that already has a sophisticated graphical interface and is in limited clinical use. The research gains additional impetus from our collaboration with Speech Systems Incorporated, a company that is providing us with advanced speech-recognition technology.