Automated understanding of data visualizations

Bibliographic Details
Main Author: Alsheikh, Sami Thabet
Other Authors: Frédo Durand; Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science.
Format: Thesis
Language: English
Published: Massachusetts Institute of Technology, 2017
Subjects:
Online Access: http://hdl.handle.net/1721.1/112830
Description
Summary: Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 63-65).

When a person views a data visualization (graph, chart, infographic, etc.), they read the text and process the images to quickly understand the communicated message. This research works toward emulating this ability in computers. In pursuing this goal, we have explored three primary research objectives: 1) extracting and ranking the most relevant keywords in a data visualization, 2) predicting a sensible topic and multiple subtopics for a data visualization, and 3) extracting relevant pictographs from a data visualization. For the first task, we create an automatic text extraction and ranking system, which we evaluate on 202 MASSVIS data visualizations. For the last two objectives, we curate a more diverse and complex dataset, Visually. We devise a computational approach that automatically outputs the textual and visual elements predicted to be representative of the data visualization's content. Concretely, from the curated Visually dataset of 29K large infographic images sampled across 26 categories and 391 tags, we present an automated two-step approach: first, we use extracted text to predict the text tags indicative of the infographic content, and second, we use these predicted text tags to localize the most diagnostic visual elements (what we have called "visual tags"). We report performance on a categorization and multi-label tag prediction problem and compare the results to human annotations. Our results show promise for automated, human-like understanding of data visualizations.

by Sami Thabet Alsheikh. M. Eng.
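The abstract only outlines the two-step approach; the following is a minimal sketch of how such a pipeline could be wired together, assuming the infographic text has already been extracted (e.g. by an OCR engine) and assuming a simple bag-of-words multi-label classifier for the tag predictor. The thesis does not specify these components; the toy data, model choice, and helper names below are illustrative placeholders, not the author's implementation.

```python
# Sketch of the two-step approach: (1) predict text tags from extracted text,
# (2) use the predicted tags to localize diagnostic visual elements.
# All data and component choices here are assumptions for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

# Step 1: train a multi-label tag predictor on extracted infographic text.
# Hypothetical toy examples standing in for the 29K-image Visually dataset.
train_texts = [
    "calories sugar diet nutrition food intake",
    "smartphone app downloads mobile users growth",
    "carbon emissions climate temperature energy",
]
train_tags = [["health", "food"], ["technology", "mobile"], ["environment", "energy"]]

binarizer = MultiLabelBinarizer()
y = binarizer.fit_transform(train_tags)

tag_model = make_pipeline(
    TfidfVectorizer(),
    OneVsRestClassifier(LogisticRegression(max_iter=1000)),
)
tag_model.fit(train_texts, y)

def predict_text_tags(extracted_text, threshold=0.3):
    """Return the tags whose predicted probability exceeds the threshold."""
    probs = tag_model.predict_proba([extracted_text])[0]
    return [tag for tag, p in zip(binarizer.classes_, probs) if p >= threshold]

# Step 2: use the predicted text tags to pick the most diagnostic image
# regions ("visual tags"). The scoring function is a stub; in the thesis it
# would compare candidate regions against a visual model of each tag.
def localize_visual_tags(candidate_regions, tags, score_fn):
    """candidate_regions: list of (region_id, features); returns the best-scoring region per tag."""
    return {
        tag: max(candidate_regions, key=lambda r: score_fn(r, tag))[0]
        for tag in tags
    }

print(predict_text_tags("renewable energy solar emissions reduction"))
```

In this sketch the second step is deliberately left abstract: any region-proposal method and tag-conditioned scoring model could be plugged in as `score_fn`, which is the part the thesis addresses when localizing pictographs.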