Hampton University, in collaboration with Carnegie Mellon University, Florida Agricultural and Mechanical University, the University of the District of Columbia, Norfolk State University, Winston-Salem State University, Morgan State University, Jackson State University, Elizabeth City State University, Duke University, the University of Alabama Tuscaloosa, and the University of Michigan, proposes the ARTSI Alliance (Advancing Robotics Technology for Societal Impact). ARTSI is a consortium of Historically Black Colleges and Universities (HBCUs) and major research universities (R1s) working together to increase African American participation in computer science, with a focus on robotics. This extension proposal will expand ARTSI to seventeen HBCUs and roughly ten R1s. Hampton University is the new lead institution; Carnegie Mellon University remains the lead R1 school. The extension introduces three new initiatives that (1) improve the quality and uniformity of robotics instruction by developing robotics curriculum modules tailored to the needs of HBCUs, (2) pilot a program to attract STEM (Science, Technology, Engineering, and Mathematics) students to HBCUs by offering robot programming activities in local high schools, and (3) pilot a skill-building program for rising sophomores to better prepare them to become involved in robotics research. The extension also includes new collaborations with the Caribbean Center for Computing Excellence (a BPC Alliance in Puerto Rico and the US Virgin Islands) and the Defense Advanced Research Projects Agency.
Intellectual Merit and Broader Impact

This project focused on exposing undergraduate computer science students to graduate-level research, with the goal of inspiring them to apply to and attend graduate school. The participating undergraduate students were all African American and had little or no prior exposure to what is involved in choosing graduate school as an option for their future education. Expanding the diversity of people who do research in computer science contributes greatly to the field and improves the quality of the knowledge gained. The student research projects were designed around the technical integration of interaction devices (the Microsoft Kinect, smartphones) with large displays (tiled display walls, virtual reality chambers). These kinds of applications require knowledge of messaging protocols, image processing methods, and rendering methods, while remaining engaging as final products.

Outcomes

The PI hosted five undergraduate students and exposed them to the research group experience. The students attended a seminar series over the summers that covered topics such as applying to graduate school, taking the Graduate Record Examination (GRE), and what to expect in graduate school. All students learned how to conduct a literature review in their areas of research. The students were also introduced to libraries such as OpenFrameworks, WebSockets, and Syzygy, which they used to extend prior developments and create new software applications, and they worked with various operating systems (Windows, Linux, OS X).

Student 1: Experimented with using the Microsoft Xbox Kinect as a full-body gesture recognition system to control a giga-pixel image viewer application. The student defined, implemented, and tested the elemental body gestures that correspond to panning and zooming an image, as well as activating an informational 'heads-up' control interface.

Students 2 & 3: Experimented with using the Bluetooth signal of a mobile device to locate a user holding the device in front of a large, tiled display. After a large number of experiments, the students determined that the Bluetooth received signal strength indicator (RSSI) has too much variability to be used in a trilateration calculation (an illustrative sketch of this sensitivity appears at the end of this report).

Student 4: Experimented with using Quick Response (QR) codes to determine the position of a user scanning the codes in front of a large, tiled display and to serve as invisible logins for controlling the display's output. The student also dynamically generated QR codes to transfer a user's drawing from a mobile canvas to the remote display.

Student 5: Worked on modifying a preexisting 3D navigation technique that was designed for a head-mounted display so that it works in a projection-based virtual reality display. The student successfully completed this project.
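
The sensitivity observed by Students 2 & 3 can be illustrated with a small, hypothetical Python sketch (not the students' actual code). It pairs the standard log-distance path-loss model with a closed-form two-dimensional trilateration and shows how a few decibels of RSSI jitter inflate the position error. The receiver layout, transmit power, path-loss exponent, and noise level are all assumed values chosen for illustration.

    import math
    import random

    def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
        # Invert the log-distance path-loss model: rssi = tx_power - 10*n*log10(d)
        return 10 ** ((tx_power - rssi) / (10 * n))

    def trilaterate(anchors, distances):
        # Subtract the first circle equation from the other two to obtain a
        # 2x2 linear system in (x, y) and solve it directly.
        (x1, y1), (x2, y2), (x3, y3) = anchors
        d1, d2, d3 = distances
        a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
        a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
        b1 = d1 ** 2 - d2 ** 2 + x2 ** 2 - x1 ** 2 + y2 ** 2 - y1 ** 2
        b2 = d1 ** 2 - d3 ** 2 + x3 ** 2 - x1 ** 2 + y3 ** 2 - y1 ** 2
        det = a11 * a22 - a12 * a21
        return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

    # Assumed layout: three Bluetooth receivers around a 4 m wide display wall,
    # with the user standing 2 m in front of its center.
    anchors = [(0.0, 0.0), (4.0, 0.0), (2.0, 2.5)]
    true_pos = (2.0, 2.0)

    def measured_rssi(anchor, noise_db, tx_power=-59.0, n=2.0):
        # Ideal RSSI at the true distance, plus Gaussian jitter in dB.
        d = math.dist(anchor, true_pos)
        return tx_power - 10 * n * math.log10(d) + random.gauss(0.0, noise_db)

    random.seed(1)
    for noise_db in (0.0, 4.0):  # 0 dB = ideal signal; 4 dB = modest indoor jitter
        rssis = [measured_rssi(a, noise_db) for a in anchors]
        est = trilaterate(anchors, [rssi_to_distance(r) for r in rssis])
        err = math.dist(est, true_pos)
        print("noise %.0f dB -> estimate (%.2f, %.2f), error %.2f m"
              % (noise_db, est[0], est[1], err))

With zero jitter the sketch recovers the true position exactly, while under the assumed path-loss exponent of 2 even a 4 dB RSSI deviation corresponds to roughly a 58% error in each estimated distance, which is consistent with the students' conclusion that RSSI is too variable for reliable trilateration in this setting.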