Drexel University, Carnegie Mellon University, and the University of Hawaii are advancing the education of software engineers through research on teaching software modularity. This project addresses designing for modularity with an approach based on design rule theory, design structure matrix modeling, and architecture review. Activities include the development of labs and homework assignments featuring a series of evolution scenarios for realistic software applications. A teaching package that includes the activities, instructional materials, and a tool for detecting modularity problems is being constructed. The tool is used to identify design problems within student implementations. Several approaches to performing architecture reviews are being evaluated to determine which approach best helps students design better modularized software.

Designing for modularity is a fundamental topic in educating software engineers, yet there has been little rigorous research on how to teach it. This project leverages research results to facilitate teaching practice and has the potential to advance our basic understanding of the causes of design problems that may eventually result in maintenance difficulties. Project results may fundamentally change the way software design is taught by introducing rigorous modularity analysis techniques and semi-automatic architecture review into the classroom, resulting in better trained software designers who are equipped with the knowledge, skills, and tools to produce software that incurs much lower maintenance costs.

Project Report

Our objective in this research project was to advance our understanding of why students make modularity mistakes in software design and development, and how we can train students to correct these mistakes and avoid making them in the first place. We proposed a combination of review techniques and tools that would, we hypothesized, address the shortcomings of existing approaches to teaching software design, so that modularity problems (primarily unwanted dependencies) in students' implementations could be automatically detected, and so that students could better understand the negative implications of those dependencies. To achieve this we carried out the following project activities:

1. We created a suite of pedagogical materials for software design, which we used to understand why students make modularity mistakes and how to detect these mistakes automatically. The materials included a pre-survey, several lab assignments, an associated architecture review form that guides the student in assessing the impact of possible future changes, a model solution with an associated design structure matrix (DSM) showing its modular structure, and a post-survey.

2. We conducted a set of experiments at Drexel and Cal Poly with approximately 80 undergraduate students. In these experiments we separated students into three groups: a control group (which completed the design assignment without any formal design review), a self-review group, and an instructor-guided review group.

3. We analyzed the results of those experiments and discussed the implications of the findings for software design and for the teaching of software design.
To make the instructor-guided review more standardized, so that other instructors can be properly trained, in our latest experiment at Drexel University we recorded the review process for each student who participated in the instructor-guided review, so that both the instruction and the student responses could be studied further.

The experiments revealed that even the best students, who excel in programming and in the theory of design patterns, find it extremely easy to unintentionally add extra dependencies that seriously undermine the intent (and value) of those patterns. Furthermore, we learned that a DSM by itself is not sufficient to help students recognize and correct their mistakes; a combination of the DSM and an architecture review appears to be more effective and leads to better outcomes. This suggests how software design should be taught to undergraduate students. First, they need rapid feedback on their designs and their design mistakes. Second, they need a combination of automated analysis, which provides precision and rapid feedback, and human-directed reviews, which help them understand the consequences of the poor decisions they made.

We are now running a much larger study at Carnegie Mellon University, which should give us enough data for a more rigorous analysis of the results, including statistical analysis. We published the results of our initial study in: Yuanfang Cai, Rick Kazman, Ciera Jaspan, and Jonathan Aldrich (2013). "Introducing Tool-Supported Architecture Review into Software Design Education." Conference on Software Engineering Education and Training (CSEE&T), San Francisco, CA, USA.
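The kind of unwanted dependency that DSM analysis surfaces can be illustrated with a minimal sketch. The module names, the layering rule, and the matrix below are illustrative assumptions for exposition only; they are not taken from the project's actual tool or assignments.

```python
# Minimal sketch of design-structure-matrix (DSM) dependency checking.
# Module names and the "allowed" layering are hypothetical examples.

modules = ["ui", "logic", "storage"]

# dsm[i][j] == 1 means modules[i] depends on modules[j].
dsm = [
    [0, 1, 0],  # ui -> logic          (allowed: downward)
    [0, 0, 1],  # logic -> storage     (allowed: downward)
    [1, 0, 0],  # storage -> ui        (unwanted: points upward)
]

# Intended layering: a module may only depend on modules in lower layers
# (here, a *larger* layer number means a lower layer).
layer = {"ui": 0, "logic": 1, "storage": 2}

def unwanted_dependencies(modules, dsm, layer):
    """Return (source, target) pairs that violate the intended layering."""
    violations = []
    for i, src in enumerate(modules):
        for j, has_dep in enumerate(dsm[i]):
            if has_dep and i != j and layer[modules[j]] <= layer[src]:
                violations.append((src, modules[j]))
    return violations

print(unwanted_dependencies(modules, dsm, layer))  # [('storage', 'ui')]
```

A tool built on this idea can compare a student's extracted DSM against the model solution's DSM and flag the extra cells, which is the sort of rapid, precise feedback the study argues students need alongside human-directed review.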

Agency
National Science Foundation (NSF)
Institute
Division of Undergraduate Education (DUE)
Type
Standard Grant (Standard)
Application #
1140300
Program Officer
Paul Tymann
Project Start
Project End
Budget Start
2012-07-01
Budget End
2014-06-30
Support Year
Fiscal Year
2011
Total Cost
$22,255
Indirect Cost
Name
University of Hawaii
Department
Type
DUNS #
City
Honolulu
State
HI
Country
United States
Zip Code
96822