Understanding and analyzing how our world is connected is a critical and relatively new challenge, made possible by advances in personal computers, mobile devices, and local and global Internet connectivity. Most current methods for social media analysis, inference, and understanding are based on textual data. However, image data makes up an increasingly large proportion of social media content. Hence, there is an urgent need for tools that can effectively use image data to extract important information and infer the patterns and activities of people, communities, and society at large.
This project combines advances in computer vision, machine learning, and social network analysis in novel ways to develop new methods for understanding and analyzing large-scale social media data. It pursues four inter-related aims: (i) establishing a large-scale visual concept ontology and structure for the web-image world via crowdsourcing, taxonomy induction, and nonparametric learning methods; (ii) understanding activity in social networks by analyzing image content at large scale and in the context of social connectivity; (iii) inferring the structure of social networks and communities from image content and the activity of individuals in social networks; (iv) discovering and analyzing dynamic social media trends.
Anticipated products of this research include new tools for analyzing and modeling socially generated content, with special emphasis on image data. The resulting methods can provide useful insights that characterize users, communities, and societies across a broad range of applications. The project also offers advanced research-based training opportunities for graduate and undergraduate students and involves the development of new courses on related topics at both Stanford University and Carnegie Mellon University.
This project focuses on using large-scale image data to analyze and understand online social media. We address a significant gap in current social network research, which largely ignores images. Our efforts target establishing large-scale visual concept ontologies and structures for the web-image world, understanding nodal activity via image content analysis, linking role predictions to infer social networks and communities from image content, and discovering and analyzing dynamic trends in social media using images.

Toward these goals, we introduced the ImageNet ontology and dataset, which allows us to evaluate and benchmark the performance of methods that understand nodal activity through image content analysis. We have also developed several methods for leveraging and inferring social aspects of image content. One of these automatically groups people participating in events based on their social roles, using a novel graphical model that combines unary features based on each individual's appearance with pairwise features based on the interactions between individuals at an event. Another leverages social affinities as features for tracking millions of people in visual data of large-scale crowded settings; it captures each individual's social awareness by radially clustering group densities around that individual within the crowd. Finally, we have improved standard computer vision tasks such as image classification by identifying social groups from online social networks and incorporating them as features in a standard neural network framework.
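The unary-plus-pairwise structure of the role-grouping graphical model can be illustrated with a toy energy-minimization sketch. This is not the project's actual implementation: the roles, feature costs, and weights below are hypothetical placeholders standing in for learned appearance and interaction features.

```python
import itertools

# Toy example: assign each of three people at an event one of two social
# roles ("bride", "guest") by minimizing an energy that combines unary
# (appearance-based) and pairwise (interaction-based) terms.
ROLES = ("bride", "guest")

# Unary costs: how poorly each person's appearance matches each role
# (lower = better match). Values are made up for illustration.
unary = {
    0: {"bride": 0.2, "guest": 1.5},
    1: {"bride": 1.8, "guest": 0.3},
    2: {"bride": 1.6, "guest": 0.4},
}

def pairwise(role_a, role_b):
    """Hypothetical interaction cost: two interacting people are
    unlikely to both hold the exclusive 'bride' role."""
    return 2.0 if role_a == role_b == "bride" else 0.0

def energy(assignment):
    """Total energy of a role assignment (one role per person)."""
    e = sum(unary[i][r] for i, r in enumerate(assignment))
    e += sum(pairwise(assignment[i], assignment[j])
             for i in range(len(assignment))
             for j in range(i + 1, len(assignment)))
    return e

# Exhaustive MAP inference is fine at this toy size; a real model over
# many individuals would use approximate inference instead.
best = min(itertools.product(ROLES, repeat=3), key=energy)
print(best)  # ('bride', 'guest', 'guest')
```

The design point this sketch mirrors is that neither term alone suffices: appearance cues disambiguate individuals, while pairwise interaction cues enforce consistency across the group, and minimizing their sum yields a joint role assignment.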
Together, these efforts have resulted in several publications in major research venues, and we have made our results and code publicly available to promote further research on problems at the intersection of image data and social media.