Emerging information technologies are enabling new forms of content delivery. Students can be given the opportunity to write a peer-reviewed textbook for their own course. Instead of being merely consumers of knowledge, students become co-producers along with the instructor. This forces them to learn the material in greater depth and to reflect on it more frequently. The natural medium for creating such a textbook is a wiki, because it standardizes the format and makes it easy for students to edit parts of a larger work. This project combines two sets of expertise: wiki textbook creation and software for peer review of student-generated content.

A software system to manage the creation and peer review of a wiki textbook automates several parts of the process, including rubric creation by students, double-blind feedback between author and reviewer, and quality-control strategies for student peer reviews. The system also supports workflow management, so that different chapters of the text can be written and reviewed at different times during the course.
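One core task such a system must handle is double-blind review assignment: each student reviews several peers' chapters, never their own, with identities hidden on both sides. The sketch below is a minimal illustration of that idea, not the actual Expertiza implementation; the function name, pseudonym scheme, and ring-based assignment are all assumptions made for the example.

```python
import random

def assign_reviews(authors, k=2, seed=0):
    """Assign each author k chapters to review, never their own.

    Shuffles the authors and walks a ring, so the assignment is
    always valid when k < len(authors). Real names are replaced by
    pseudonymous IDs to keep the exchange double-blind.
    """
    rng = random.Random(seed)
    order = authors[:]
    rng.shuffle(order)
    n = len(order)
    if k >= n:
        raise ValueError("need more authors than reviews per author")
    # pseudonyms so neither author nor reviewer sees real names
    alias = {name: f"reviewer-{i:03d}" for i, name in enumerate(order)}
    assignments = {}
    for i, reviewer in enumerate(order):
        # each reviewer covers the next k authors in the shuffled ring
        targets = [order[(i + j) % n] for j in range(1, k + 1)]
        assignments[alias[reviewer]] = [alias[t] for t in targets]
    return assignments, alias
```

The ring construction guarantees every chapter receives exactly k reviews and every student writes exactly k, which keeps the review workload balanced across the class.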

This project advances understanding in a variety of disciplines by engaging students in knowledge generation as well as knowledge acquisition. Students not only learn existing material better; they also uncover and codify new knowledge for their peers. This gives them first-hand experience with the process of inquiry and scientific discovery.

By making it easier to manage the writing of a wiki textbook in a class, the software encourages instructors to have their students write one. Finally, the system being developed is adoptable in disciplines beyond STEM.

Project Report

The idea behind this project is that, collectively, students are capable of writing a textbook for a course they are taking. The advantages are many: students must read the primary literature and reconcile conflicting claims, and they must organize the material in a form suitable for their peers to read. Because other parts of the book are written by people they know, they tend to pay more attention to it than to a commercially written book; and because they reflect more on what they read, their learning improves.

Managing a wiki textbook, however, is a big job. This project developed software support for automating the production and assessment process: a program that lets students choose a portion of the project to work on and review the contributions of their peers. To encourage better reviews, we developed a mechanism to pre-assess reviews using text- and natural-language processing. Before submitting a review, students receive automated feedback on how effective the review appears to be and how it can be improved.

Results from the last two grant years (n = 391) indicate that, by a margin of 74% to 19%, students were proud of their contributions to the wiki textbook. By 79% to 15%, they reported that they had put a lot of effort into writing their articles. By a margin of 68% to 13% (n = 205), they thought that the chapters authored by other students gave them new insight into the material covered. They were even more positive about their own chapters: by 82% to 10%, they thought that the material they read in researching their chapter gave them new insight into the topic. Students also appreciated the review process. By a margin of 62% to 21% (n = 199), they thought that the reviews they received helped them improve their work. By 56% to 19%, they felt that the numeric scores assigned by their peer reviewers were fair.
By 56% to 23% (n = 262), the pre-service teachers who used the system at Old Dominion University thought it was easy to complete their peer reviews using Expertiza. The majority of respondents at both NCSU and ODU (n = 286) reported that they often thought about their own ideas and opinions on the topic as they read (77%), often thought about information that seemed to be missing or what else could be included (65%), and looked to see whether the claims made in the wiki text were well supported (64%).

In our work on automated metareviewing, we developed strategies for identifying relevance in reviews, showing that a graph-based text representation performed better than the dependency-tree representation developed earlier by other researchers. We also worked on determining review coverage using AI and natural-language processing techniques. A word-order graph, combined with an agglomerative clustering approach to identify the most representative sentences in the submission and a heuristic to identify topic-representative sentences, gave a correlation of 0.51 with human-provided coverage values. This is an improvement over the previously developed MEAD approach and over a baseline based on extracting the top 100 words in the document. These two techniques are important advances in automating metareviewing.
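To make the coverage idea concrete, the sketch below shows the general shape of the technique: cluster a submission's sentences by similarity, then take the most central sentence of each cluster as its representative. This is a deliberately simplified stand-in, not the project's method: it uses bag-of-words Jaccard similarity in place of the word-order graph, and a greedy average-link merge in place of whatever clustering configuration the project actually used.

```python
from itertools import combinations

def jaccard(a, b):
    """Similarity between two token lists, compared as word sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def representative_sentences(sentences, n_clusters=2):
    """Greedy average-link agglomerative clustering over sentences;
    returns the most central sentence of each resulting cluster.

    A simplified stand-in for the word-order-graph approach:
    sentences are reduced to word sets and compared with Jaccard.
    """
    tokens = [s.lower().split() for s in sentences]
    clusters = [[i] for i in range(len(sentences))]
    while len(clusters) > n_clusters:
        # merge the pair of clusters with the highest average similarity
        best, best_pair = -1.0, None
        for x, y in combinations(range(len(clusters)), 2):
            sim = sum(jaccard(tokens[i], tokens[j])
                      for i in clusters[x] for j in clusters[y])
            sim /= len(clusters[x]) * len(clusters[y])
            if sim > best:
                best, best_pair = sim, (x, y)
        x, y = best_pair
        clusters[x] += clusters[y]
        del clusters[y]          # y > x, so index x is unaffected
    reps = []
    for cluster in clusters:
        # the sentence most similar, on average, to its cluster-mates
        central = max(cluster, key=lambda i: sum(
            jaccard(tokens[i], tokens[j]) for j in cluster))
        reps.append(sentences[central])
    return reps
```

A review could then be scored for coverage by measuring how many of these representative sentences it touches on, which is the intuition behind comparing against human-provided coverage values.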

Agency: National Science Foundation (NSF)
Institute: Division of Undergraduate Education (DUE)
Type: Standard Grant (Standard)
Application #: 0942279
Program Officer: Jane Prey
Budget Start: 2010-02-15
Budget End: 2012-10-31
Fiscal Year: 2009
Total Cost: $110,518
Name: North Carolina State University Raleigh
City: Raleigh
State: NC
Country: United States
Zip Code: 27695