Big data and uncertainty in the humanities
September 22, 2012
The Institute for Digital Research in the Humanities
University of Kansas
This conference seeks to address the opportunities and challenges humanistic scholars face with the ubiquity and exponential growth of new web-based data sources (e.g. electronic texts, social media, and audiovisual materials) and digital methods (e.g. information visualization, text markup, crowdsourcing metadata).
“Big data” is any dataset too large to be analyzed by traditional means, whether manual close reading or database queries. Developments in cloud computing, data management, and analytics mean that humanists and allied scholars can now analyze and visualize larger patterns in big data sets. With these opportunities come challenges of scale and interpretation: we have moved from the uncertainty that results from having too little data to the uncertainty implicit in large amounts of data.
What does this mean for how humanists structure, query, analyze, and visualize data? How does this change the questions we ask and the interpretations we assign? How do we combine the best of macro (larger-pattern) and micro (close reading) approaches? And how should interpretive and other forms of uncertainty be modeled?
Presentations addressing these practical and epistemological questions are welcome.