The goal of this project is to conduct a large study, involving more than 400 students at two institutions, examining the relationship between conceptual misunderstandings, code testing, and development time.
Conceptual understanding is measured by having students read and annotate documents that contain both accurate and erroneous concepts, recording how many students detect the errors and how many iterations are needed to detect all of them. These measures are then compared with code debugging times, with the expectation that the text engagement process will reduce the time spent on project tasks. The project establishes a scalable Read-Annotate-Visualize (RAV) model that can be deployed in any early programming course. The RAV model proceeds by pre-identifying the important conceptual challenges students will encounter in the programming phase and then supporting comprehension of those concepts before programming begins, through annotation, visualization, and classroom discussion.
This project assesses the potential of a participatory reading, discussion, and learning environment to improve the reading comprehension, writing, and programming skills of computer science undergraduates. It contributes to our understanding of the relationship between reading, discussion, and programming skills in computer science. Project results are disseminated via conference proceedings, and the RAV model and Classroom Salon are shared with any institution willing to implement a classroom instruction model in which understanding student misconceptions early in the learning process is critical to better learning outcomes.
The goal of this project was to test whether introducing a collaborative model of learning in the classroom can reduce the amount of individual time spent on task. To achieve this goal, a web-based software platform, Classroom Salon (CLS), was developed at Carnegie Mellon University. CLS is a web site that is part electronic textbook and part social network. In CLS, instructors organize students into social groups, called salons, and introduce documents, text, and videos into the salons. Students then collaborate by highlighting, annotating, and discussing (but not editing) the text or video and cooperatively answer questions about the content. CLS also contains analytical tools that help instructors determine how much students are participating and, specifically, where they are having trouble or finding interesting information. CLS is a data-driven tool built on the principles of social cognitive theory, in which people learn by observing what others do and do not do. The methodology used in this research study was based on the Read-Annotate-Visualize (RAV) model. Under the RAV model, the instructor prepares detailed or summarized documents or videos that outline the important facts and observations about the concepts covered in the course. Documents and videos can be prepared in multiple ways based on the pedagogical goals of the course and the instructor. During the first stage of the experiment, we presented conceptual documents to see whether students were able to detect incomplete or inaccurate facts. During the second stage, we presented a number of code examples and asked students to detect errors through annotations made in CLS. The annotations were then aggregated to identify the critical parts of the text or code examples that most students interpret incorrectly (or correctly). The research employed a control group and an experimental group and spanned two institutions, Carnegie Mellon University and Ithaca College.
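The aggregation step described above can be illustrated with a short sketch. This is not CLS code; the data layout and the `annotation_hotspots` function are hypothetical, shown only to make the idea of annotation "hotspots" concrete: each student flags the lines of a code example they believe are wrong, and lines flagged by many students mark the conceptually critical regions.

```python
from collections import Counter

def annotation_hotspots(annotations, threshold):
    """Count annotations per line and return the lines flagged by at
    least `threshold` students, most-annotated first.

    `annotations` maps a student id to the set of line numbers that
    student flagged as incorrect. (Hypothetical format, not CLS's.)
    """
    counts = Counter()
    for lines in annotations.values():
        counts.update(set(lines))  # one vote per student per line
    return [(line, n) for line, n in counts.most_common() if n >= threshold]

# Three students annotate a short code example; lines 4 and 7 draw
# the most attention and would be singled out for class discussion.
example = {
    "s1": {4, 7},
    "s2": {4},
    "s3": {4, 7, 9},
}
print(annotation_hotspots(example, threshold=2))  # [(4, 3), (7, 2)]
```

A real deployment would aggregate character-offset highlights rather than whole lines, but the principle is the same: consensus across many readers, not any single annotation, identifies the critical passages.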
At Ithaca College, two courses were used in the research: a computer organization course that also covered assembly language, and a computer networking course. Class sizes averaged about 20 students, and the experimental approach was used three times in the organization course and once in the networking course. The RAV assessment was based on an analysis of all the data collected in the courses to see whether community annotation visualization actually reduces debugging errors and increases comprehension of material. The assessment was primarily based on comparing the debugging times of students who were part of the RAV model with those of students in the control group. In addition, other data (including assignment and course grades, student surveys, and time spent on assignments) was collected from the control and/or experimental groups. The results of our analysis from the initial use of RAV after the first year of the project showed that RAV had a marginally positive effect on student learning. Initial results were presented in the paper "Classroom Salon: A Tool for Social Collaboration," Ananda Gunawardena and John Barr, Proceedings of the 43rd ACM Technical Symposium on Computer Science Education (SIGCSE), Feb 29-March 3, 2012, Raleigh, North Carolina. Seeking deeper results, we extended the RAV model to the flipped classroom model, which has become popular in both universities and K-12 schools. In the flipped model, instruction is delivered through carefully prepared text and videos viewed outside the classroom, and class time is used for more immersive activities such as code analysis, problem solving, and projects. Although CLS is well suited to this model, we modified the software significantly to make it more amenable to flipped instruction.
There are now two versions of CLS: one employing a user interface similar to the original web site described in the initial proposal, and one using a much more streamlined interface designed for flipped classrooms (called flip). The flipped model has been employed in one computer science course at Ithaca College and will also be used in a mathematics course in the spring semester of 2014. The flip tool is also being used in computer science courses at Princeton and in a number of courses at Carnegie Mellon University and other institutions. We expect the flip experiments to continue as we seek more refined approaches to using group analytics to improve individual outcomes. In addition to the conference proceeding cited above, four workshops have been given through Ithaca College, several workshops at Carnegie Mellon University, and three workshops at other institutions (Tompkins Cortland Community College, Bryn Mawr, and Chatham University). In addition, a workshop was given at a national conference (see "Using Social Networking to Improve Student Learning Through Classroom Salon," John Barr and Ananda Gunawardena, Workshop, 43rd ACM Technical Symposium on Computer Science Education (SIGCSE), Feb 29-March 3, 2012, Raleigh, North Carolina).