While the capacity to collect, transmit, and store big datasets has been increasing rapidly, the capacity of human experts to provide feedback and to leverage all available information has hardly changed. Humans are the information bottleneck in data analysis and will remain so in the future. The intellectual merit of this project lies in the development of theory and methods for big data analysis that account explicitly for human-machine interaction. We will develop scalable online data processing algorithms that winnow large datasets down to smaller subsets of the most important or informative data, for presentation to expert human analysts. We will also develop new algorithms that enable machines to learn efficiently from human experts, using a minimal amount of human interaction. The models so learned then inform the design of better data processing algorithms. Through this project we will introduce new and challenging mathematical problems in optimization and machine learning.
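The loop described above — winnow the data to its most informative points, query a human expert only on those, and use the answers to improve the model — is the pattern of active learning. The following is a minimal illustrative sketch, not the project's actual method: a hypothetical one-dimensional threshold classifier that repeatedly queries an (simulated) expert oracle on the unlabeled point closest to its current decision boundary. All names (`train_threshold`, `most_uncertain`, `active_learn`) and the synthetic data are assumptions for illustration.

```python
import random

def train_threshold(labeled):
    # Fit a 1-D threshold classifier: midpoint between the largest
    # point labeled 0 and the smallest point labeled 1.
    zeros = [x for x, y in labeled if y == 0]
    ones = [x for x, y in labeled if y == 1]
    return (max(zeros) + min(ones)) / 2

def most_uncertain(pool, threshold):
    # Uncertainty sampling: the unlabeled point nearest the decision
    # boundary is the one the current model is least sure about.
    return min(pool, key=lambda x: abs(x - threshold))

def active_learn(pool, oracle, seed_labels, budget):
    # Spend a small budget of expensive expert queries, one at a time,
    # always on the currently most informative point.
    labeled = list(seed_labels)
    pool = list(pool)
    for _ in range(budget):
        t = train_threshold(labeled)
        x = most_uncertain(pool, t)
        pool.remove(x)
        labeled.append((x, oracle(x)))  # one human query
    return train_threshold(labeled)

# Hypothetical ground truth: the expert labels x as 1 iff x >= 0.6.
oracle = lambda x: int(x >= 0.6)
random.seed(0)
pool = [random.random() for _ in range(200)]
seed = [(0.0, 0), (1.0, 1)]
t = active_learn(pool, oracle, seed, budget=10)
```

With only 10 queries out of 200 candidate points, the selection behaves like a binary search and the learned threshold lands close to the true boundary at 0.6 — the expert never sees the vast majority of the data.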
2014 - present