Fake websites have emerged as a major source of online fraud, accounting for billions of dollars in fraudulent revenue at the expense of unsuspecting Internet users. Existing tools for combating fake websites are not very accurate, are limited in the categories and genres of fake websites they detect, and lack adequate usability, often causing users to disregard their recommendations. Hence, there remains a need for intelligent detection systems capable of accurately detecting various types and genres of fake websites and of displaying recommendations in a manner conducive to system use. To fill this gap, this research takes a novel user-centric approach that assesses user perceptions of detection-system design alternatives. The research method includes an extensive theory-based controlled lab experiment, which assesses the impact of various design alternatives (such as website categories, genres, and accuracy/time tradeoffs) on users' perceptions, behaviors, and skills (including security threat awareness, security threat appraisal, coping assessment, security behaviors, Internet trust, and the ability to identify fake websites). The research also develops a novel fake website detection system built around an intelligent hierarchical classification algorithm capable of promoting users' trust in the Internet, evaluated on a test bed of two thousand fake websites comprising more than two million web pages. This work uncovers new knowledge about the factors influencing individuals' online security behaviors and skills, promotes Internet trust by developing enhanced systems for identifying fake websites, and develops advanced data and web mining techniques suitable for incorporation into information systems curricula.
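To make the idea of hierarchical classification concrete, the following is a minimal, illustrative sketch of a two-stage classifier: stage one routes a website to a genre, and stage two applies a genre-specific rule to decide fake versus legitimate. The feature names, genres, and thresholds here are assumptions for illustration only; they are not the project's actual algorithm or features.

```python
# Hypothetical two-stage (hierarchical) fake-website classifier.
# Stage 1 routes a site to a genre; stage 2 applies a genre-specific
# rule. All features, genres, and thresholds are illustrative
# assumptions, not the system described in the research.

from dataclasses import dataclass

@dataclass
class SiteFeatures:
    genre_hint: str       # assumed routing feature, e.g. "bank", "escrow"
    domain_age_days: int  # younger domains are treated as more suspicious
    https: bool           # missing TLS is treated as a red flag

# Stage 2: one simple placeholder rule per genre (True means "likely fake").
GENRE_RULES = {
    "bank":   lambda f: f.domain_age_days < 90 or not f.https,
    "escrow": lambda f: f.domain_age_days < 30,
}

def classify(features: SiteFeatures) -> str:
    # Stage 1: select the genre-specific model (fall back to a default rule).
    rule = GENRE_RULES.get(features.genre_hint, lambda f: not f.https)
    # Stage 2: apply the selected rule.
    return "fake" if rule(features) else "legitimate"

print(classify(SiteFeatures("bank", 10, False)))    # young domain, no TLS
print(classify(SiteFeatures("escrow", 400, True)))  # established, TLS present
```

The hierarchical structure lets each genre use its own decision boundary, which is the general motivation for such designs: a cue that is suspicious for a bank site may be normal for another genre.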
The Internet is now a major infrastructure for social, commercial, and governmental activities, and a trustworthy Internet is essential to free societies. Individuals form the weakest link in the security chain, and their lack of attention to self-protection has major consequences for the trustworthiness of the Internet. In recent studies, 60%-70% of test subjects provided personal information to fake websites. Fake websites are often professional-looking and sophisticated in design, making it difficult for users to identify them as malicious. Individuals’ security behaviors put them at risk of financial, identity, and privacy losses. The Department of Homeland Security recognized this issue as a major threat in its designation of October 2014 as Cybersecurity Awareness Month and its declaration that "cybersecurity is a shared responsibility": individuals must be vigilant in self-protection to ensure their own cyber safety and the integrity of the Internet for all. Using fake website detection tools is an important way to combat such cyber threats. Our work makes a major contribution to trustworthy cyberspace by focusing on the ways detection technology can influence users’ trust, behaviors, and self-protection success. To our knowledge, our work is the first to link and assess, through solid theory, scientifically controlled experiments, and rigorous data analysis, the impact of detection-tool design on individuals’ security behaviors and performance. Our approach rests on strong theoretical and scientific foundations. We have developed new theories linking the design of detection tools to people’s sense of vulnerability to attacks, their ability to cope with those attacks, and their trust in the tools. Such changes in perceptions and trust attitudes subsequently influence users’ behaviors and their success in self-protection in cyberspace.
Under well-defined and controlled conditions, we carried out a series of scientific experiments to study how people’s actual success in self-protection changes, whether their use of detection tools changes, and whether they will use detection tools in the future. We also observed in a controlled environment how people behave as they go down the spiral of deception and succumb to fake website attacks. The protocols for data collection were well-researched and complex: we developed new software tools to manage and carry out our experiments, created new survey instruments to collect accurate data before and after each experiment, and used advanced analytical methods to analyze each dataset. So far, our research project has led to three published or accepted papers in top academic journals, five additional manuscripts under review or revision at top academic journals, five major conference presentations with published proceedings, one PhD dissertation, and one award nomination. Here, we provide only brief highlights of our findings and contributions. Our work opens a new avenue for investigating how to change people’s security-related perceptions and behaviors, and how to increase their success in self-protection, through the design of suitable detection tools. It provides sound theoretical bases, well-defined behavioral models, and clear protocols for scientific investigation of tools that can change the way people protect themselves against online predators. This is a major contribution to the field of security research and a significant resource for developers and policy makers. We identified detection-tool performance features that had significant impacts on people’s perceptions, trust attitudes, and behaviors. We also found that people are more easily deceived by spoof attacks (websites pretending to be well-known sites) than by concocted attacks (made-up websites pretending to be legitimate businesses).
The results of multiple series of analyses provide clear guidelines for tool developers about where to focus their design efforts and how to market their products to increase use. We investigated the impact of personalizing interface elements through an intelligent system that dynamically creates a unique interface for each user, which we call intelligent interface personalization (IIP). We found that IIP causes profound changes in the paths of the behavioral model; this impact was pervasive and significant. We also found gender differences in the impacts of IIP. These findings are novel and important because tool developers and policy makers have paid little attention to how and why personalization could change the way people protect themselves online. We developed the theory, model, and methodology to assess the vulnerability of various groups based on their profiles and how they go down the deception funnel. Identifying the profiles of people most vulnerable to fake website attacks empowers policy makers and tool developers to devise tools, training, and other mechanisms that help vulnerable people increase their success in self-protection. In sum, our work contributes to the enhancement of cybersecurity for all, and to the trustworthiness of the Internet, by proposing how detection tools should be developed and assessed to motivate people to protect themselves against fake website attacks.
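As an illustration of what dynamically personalizing a detection tool's interface could look like, the sketch below selects warning-interface settings from a user profile. The profile attributes, rules, and interface options are hypothetical stand-ins chosen for this example; the actual IIP system's design is not described here.

```python
# Illustrative sketch of intelligent interface personalization (IIP):
# a rule-based selector that tailors a detection tool's warning
# interface to a user profile. Profile fields, rules, and interface
# options are assumptions for illustration, not the actual system.

from dataclasses import dataclass

@dataclass
class UserProfile:
    expertise: str        # "novice" or "expert" (assumed attribute)
    prefers_detail: bool  # whether the user wants explanations

def personalize_warning(profile: UserProfile) -> dict:
    """Return warning-interface settings chosen for this user."""
    ui = {"style": "banner", "explanation": False, "blocking": False}
    if profile.expertise == "novice":
        # Novices get a blocking, full-page warning.
        ui["style"] = "full_page"
        ui["blocking"] = True
    if profile.prefers_detail:
        # Include an explanation of why the site was flagged.
        ui["explanation"] = True
    return ui

print(personalize_warning(UserProfile("novice", True)))
print(personalize_warning(UserProfile("expert", False)))
```

The design choice this illustrates is that personalization operates on the presentation of the same detection result, so different users see warnings matched to their profile rather than a one-size-fits-all alert.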