THE PUBLIC PRIVATE exhibition curated by Christiane Paul, Feb. 6 opening


Sheila C. Johnson Design Center
 Anna-Maria and Stephen Kellen Gallery

2 W 13th St., NY, NY 10011

February 7 - April 17, 2013

Opening reception: Wednesday, February 6, 6:00 - 8:00 pm

The Public Private explores the impact of social media and new technologies on the relationship between the public and the private realms. The artworks brought together in The Public Private address these issues from psychological, legal, and economic perspectives and use strategies ranging from hacking to self-surveillance to reflect upon the profound changes in our understanding of identity, personal boundaries, and self-representation.

Works on view include Paolo Cirio and Alessandro Ludovico’s Face to Facebook, a multimedia installation of one million Facebook profiles, which were “appropriated” by the artists, filtered using facial-recognition software, and then posted on a custom-made dating website sorted by facial expressions. Eva and Franco Mattes’ The Others is a video installation composed of 10,000 photos the Mattes have acquired through a software glitch that gives remote access to personal computers. The core of the work is not just the presentation of these images, but the act of “stealing” and moving them from the private into the public realm.

Other artists and works represented in the gallery include Jill Magid’s Evidence Locker, Luke Dubois’ Missed Connections, Wafaa Bilal’s 3rdi, Carlo Zanni’s Self Portrait with Friends, James Coupe's Panoptic Panorama #2: Five People in a Room, Paolo Cirio’s Street Ghosts, and Ben Grosser’s Facebook Demetricator.

The Public Private is curated by Christiane Paul, an Associate Professor in the School of Media Studies at The New School and Adjunct Curator of New Media Arts at the Whitney Museum of American Art.

class schedule - "Big Data, Visualization, and Digital Humanities", Manovich's course, spring 2013

The schedule for my class "Big Data, Visualization, and Digital Humanities," which runs this semester at The Graduate Center, CUNY, is available at this URL (Google doc):

Facebook Graph Search, database, data stream, and data visualization

Facebook announced its new Graph Search on 01/15/2013:

With Graph Search, Facebook becomes a little more like a DATABASE for its users (many dimensions, with time less relevant) - as opposed to its present form: a massive one-dimensional data stream (present -> past):

This is how the Facebook engineers who developed Graph Search explained the project to Wired: "It's like Facebook is this big database and you're doing a lookup on the results that match."

For me, it is a new example of how the database/narrative opposition in digital media (which has accompanied it from the start) now works in web apps and services. A database is "spatial": you can search and use other operations to retrieve records using any field, with the time stamp being just one dimension among others.

A narrative (and a timeline) strongly marks a single dimension: time. Things become related if they occur close to each other in time. (In a timeline, other dimensions of similarity and possible links between far-away points are not visible.) Posts and replies on Facebook are one example (do you often comment on posts from a month before?); film editing is another - a sequence of shots made to work together visually. (Of course, filmmakers and other artists also play with the functions and limits of our memory, making connections between events that can be far apart in a narrative stream.)
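The opposition can be sketched with a toy example (the records and field names below are my own illustration, not Facebook's actual schema):

```python
from datetime import date

# Hypothetical social media posts, each a record with several fields.
posts = [
    {"author": "anna", "topic": "film",  "likes": 12, "date": date(2013, 1, 3)},
    {"author": "ben",  "topic": "music", "likes": 40, "date": date(2013, 1, 10)},
    {"author": "anna", "topic": "music", "likes": 7,  "date": date(2013, 1, 15)},
]

# Narrative / timeline logic: a single dimension (time) orders everything.
stream = sorted(posts, key=lambda p: p["date"], reverse=True)  # present -> past

# Database logic: any field (or combination of fields) can drive retrieval;
# the time stamp is just one dimension among others.
by_topic = [p for p in posts if p["topic"] == "music"]
popular  = [p for p in posts if p["likes"] > 10]
```

The same records support both logics; what differs is whether time alone, or any dimension, organizes access.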

P.S. Connections to data visualization:

From this perspective, a scatter plot is closer to database logic - especially with interactive options (and extended to a graph matrix, etc.); a line graph is more like a narrative, with moods, anticipation, or other analog qualities going up and down.
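A minimal sketch of the two plot logics, using matplotlib (assumed available) and made-up numbers:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

days  = [1, 2, 3, 4, 5]       # time stamps
posts = [3, 7, 2, 9, 4]       # e.g., posts per day
likes = [10, 30, 5, 45, 12]   # a second dimension of the same records

fig, (ax1, ax2) = plt.subplots(1, 2)

# Line graph = narrative logic: time on the x-axis, values read as a
# sequence with rises and falls.
ax1.plot(days, posts)
ax1.set_xlabel("day")

# Scatter plot = database logic: any two fields can form the axes;
# nearby points are similar records, regardless of when they occurred.
ax2.scatter(posts, likes)
ax2.set_xlabel("posts")
ax2.set_ylabel("likes")

fig.savefig("database_vs_narrative.png")
```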

For an example that visualizes the linear/spatial opposition, see this classic project by Martin Wattenberg (2001):

For one possible way to visualize a film, combining narrative and spatial representations to help us study the connections between shots, sequences, and themes, see these visualizations from my new article "Visualizing Vertov":

4.1. Vertov_Eleven_Montage

4.3. The_Eleventh_Year_shots.faces_only.Montage

"Visualizing Vertov" - new article by Lev Manovich with 33 visualizations available for download

Lev Manovich. Visualizing Vertov. 2013. [PDF 10 MB].

10,000 words. 33 visualizations.

View high resolution versions of all visualizations discussed in the article on Flickr.

You can also download a single archive file containing the article PDF and full-resolution versions of all visualizations: 2013. [ZIP, 58 MB].

4.3. The_Eleventh_Year_shots.faces_only.Montage
All shots with close-ups of faces from The Eleventh Year (Dziga Vertov, 1928). The shots are arranged in the order of their appearance in the film, left to right, top to bottom.

The article presents a visualization-based analysis of the films The Eleventh Year (1928) and Man with a Movie Camera (1929) by the famous Russian filmmaker Dziga Vertov. One of the goals of the project is to show how various dimensions of films can be explored using special visualization techniques inspired by media and new media art, as well as by the basic principle of cinema itself - editing (i.e., selecting and arranging media elements together).

In some cases, we use digital image processing software to measure visual properties of every film frame, and then plot these measurements along with the selected frames. (For example, this approach allows us to visualize the amounts of movement in every shot in a film.)
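This measurement step can be sketched as follows (assuming the Pillow library; synthetic solid-color images stand in for frames extracted from a film):

```python
from PIL import Image, ImageStat

def mean_brightness(img):
    """Average grayscale value (0-255) of one film frame."""
    return ImageStat.Stat(img.convert("L")).mean[0]  # "L" = 8-bit grayscale

# Demo with two synthetic "frames": a dark one and a bright one.
# In practice, each frame would be loaded from the digitized film.
dark   = Image.new("RGB", (64, 48), (20, 20, 20))
bright = Image.new("RGB", (64, 48), (200, 200, 200))
measurements = [mean_brightness(f) for f in (dark, bright)]
```

Measurements like these, computed for every frame (or the difference between consecutive frames, for movement), can then be plotted alongside the frames themselves.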

In other cases, we don't measure or count anything. Instead, we arrange the sampled frames from a film into a single high-resolution visualization using particular layouts. (For example, we can represent a feature film as a grid of frames - one frame for every shot.)
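A sketch of this layout logic (again assuming Pillow; solid-color images stand in for one sampled frame per shot):

```python
from PIL import Image

def montage(frames, columns, frame_size=(160, 120)):
    """Paste frames into a grid, left to right, top to bottom."""
    rows = -(-len(frames) // columns)  # ceiling division
    sheet = Image.new("RGB", (columns * frame_size[0], rows * frame_size[1]))
    for i, frame in enumerate(frames):
        x = (i % columns) * frame_size[0]
        y = (i // columns) * frame_size[1]
        sheet.paste(frame.resize(frame_size), (x, y))
    return sheet

# Demo: 10 stand-ins for shot frames, laid out in 4 columns.
frames = [Image.new("RGB", (320, 240), (i * 25, 0, 0)) for i in range(10)]
sheet = montage(frames, columns=4)
```

No measurement or annotation is involved: the selection and arrangement of the frames is itself the analysis.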

This use of visualization without measurements, counting, or added annotations is the crucial aspect of my lab's approach to working with media collections. We hope that it can complement other approaches already used in quantitative film studies and the digital humanities.

The article presents a sequence of 33 visualizations that starts with a "bird's eye" view of the cultural artifacts (e.g., hundreds of 20th-century films) and gradually zooms in closer and closer, eventually focusing on the details of a single shot - similar to how Google Earth allows you to start with the Earth view, zoom in, and eventually enter a street view.

The article is an experiment. It does not develop a single argument or a concept. Instead, I progressively “zoom” into cinema, exploring alternative ways to visualize film media at different levels, and noting interesting observations along the way.

The digital copies of Vertov's films were provided by the Austrian Film Museum (Vienna), which has one of the best collections of film prints and other Vertov materials. I am grateful to film researcher and Austrian Film Museum staff member Adelheid Heftberger for initiating and making possible this project in 2009, and for providing detailed feedback on the work as it developed.

Publication information:
The first part of this article will appear in the Russian Journal of Communication (Taylor & Francis); the second part will appear in Cinematicity, eds. Jeff Geiger and Karin Littau (Edinburgh University Press).

The work of the Software Studies Initiative on visualizing cinema, TV, animation, motion graphics, and video games has received grants from the NEH, NSF, Mellon Foundation, Calit2, and UCSD. Special thanks go to Larry Smarr and Ramesh Rao at Calit2, who made our lab possible and have continuously supported our work since 2007.

Cover of "Software Takes Command" is here

Bloomsbury Academic will publish my next book Software Takes Command in July 2013. Below is their final design for the book cover.

The background image is a close-up of a visualization created in my lab by my PhD student and USC Video Game program faculty member William Huber. The visualization condenses 62.5 hours of video gameplay into a single image consisting of 22,500 sampled frames.

You can download the full resolution visualization (10,800 x 8000 pixels) from our Flickr gallery.

Book details are here:

Software Takes Command cover

Computational Folkloristics: Call for Papers

In 2010 I participated in the fantastic "Networks and Network Analysis for the Humanities" institute (an NEH Institute for Advanced Topics in the Digital Humanities) organized by Tim Tangherlini at UCLA, so I am sure that the special issue he is editing will also be ground-breaking. His call for papers below provides a great summary of the directions in "Computational Folkloristics" and of their difference from "Digital Folklore" work.

Computational Folkloristics

Call for Papers

Special Issue of the Journal of American Folklore, edited by Timothy R. Tangherlini.

Over the course of the past decade, a revolution has occurred in the materials available for the study of folklore. The scope of digital archives of traditional expressive forms has exploded, and the amount of machine-readable material available for consideration has increased by many orders of magnitude. Many national archives have made significant efforts to make their archival resources machine-readable, while other, smaller initiatives have focused on the digitization of archival resources related to smaller regions, a single collector, or a single genre. Simultaneously, the explosive growth of social media, web logs (blogs), and other Internet resources has made previously hard-to-access forms of traditional expressive culture accessible at a scale so large that it is hard to fathom. These developments, coupled with the development of algorithmic approaches to the analysis of large, unstructured data and new methods for the visualization of the relationships discovered by these algorithmic approaches - from mapping to 3-D embedding, from timelines to navigable visualizations - offer folklorists new opportunities for the analysis of traditional expressive forms. We label approaches to the study of folklore that leverage the power of these algorithmic approaches "Computational Folkloristics" (Abello, Broadwell, Tangherlini 2012).

The Journal of American Folklore invites papers for consideration for inclusion in a special issue of the journal edited by Timothy Tangherlini that focuses on “Computational Folkloristics.” The goal of the special issue is to reveal how computational methods can augment the study of folklore, and propose methods that can extend the traditional reach of the discipline. To avoid confusion, we term those approaches “computational” that make use of algorithmic methods to assist in the interpretation of relationships or structures in the underlying data. Consequently, “Computational Folkloristics” is distinct from Digital Folklore in the application of computation to a digital representation of a corpus. We are particularly interested in papers that focus on: the automatic discovery of narrative structure; challenges in Natural Language Processing (NLP) related to unlabeled, multilingual data including named entity detection and resolution; topic modeling and other methods that explore latent semantic aspects of a folklore corpus; the alignment of folklore data with external historical datasets such as census records; GIS applications and methods; network analysis methods for the study of, among other things, propagation, community detection and influence; rapid classification of unlabeled folklore data; search and discovery on and across folklore corpora; modeling of folklore processes; automatic labeling of performance phenomena in visual data; automatic classification of audio performances. Other novel approaches to the study of folklore that make use of algorithmic approaches will also be considered.

A significant challenge of this special issue is to address these issues in a manner that is directly relevant to the community of folklorists (as opposed to computer scientists). Articles should be written in such a way that the argument and methods are accessible and understandable for an audience expert in folklore but not in computer science or applied mathematics. To that end, we encourage team submissions that bridge the gap between these disciplines. If you are in doubt about whether your approach or your target domain is appropriate for consideration in this special issue, please email the issue editor, Timothy Tangherlini, using the subject line "Computational Folkloristics—query". Deadline for all queries is April 1, 2013.

All papers must conform to the Journal of American Folklore's style sheet for authors. The guidelines for article submission are as follows: Essay manuscripts should be no more than 10,000 words in length, including abstract, notes, and bibliography. The article must begin with a 50- to 75-word abstract that summarizes the essential points and findings of the article. Whenever possible, authors should submit two copies of their manuscripts by email attachment to the editor of the special issue. The first copy should be sent in Microsoft Word or Rich Text Format (rtf) and should include the author's name. Figures should not be included in this document, but "call outs" should be used to designate where figures should be placed (e.g., ""). A list at the end of the article (placed after the bibliography) should detail the figures to be included, along with their captions. The second copy of the manuscript should be sent in Portable Document Format (pdf). This version should not include the author's name or any references within the text that would identify the author to the manuscript reviewers. Passages that would identify the author can be marked in the following manner to indicate excised words: (****). Figures should be embedded in this version just as they would ideally be placed in the published text. Possible supplementary materials (e.g., additional photographs, sound files, video footage, etc.) that might accompany the article in its online version should be described in a cover letter addressed to the editor. An advisory board for the special issue, consisting of folklorists and computer scientists, will initially consider all papers. Once accepted for the special issue, all articles will be subject to the standard refereeing procedure for the journal. Deadline for submissions for consideration is June 15, 2013. Initial decisions will be made by August 1, 2013. Final decisions will be made by October 1, 2013. We expect the issue to appear in 2014.