"Analyzing Cultural Data" - Lev Manovich's Spring 2015 course at The Graduate Center, CUNY



Analyzing Cultural Data

Spring 2015 semester / The Graduate Center, City University of New York
Wednesday, 4:15-6:15 pm / 3 credits
MALS 78500 / IDS 81650

Instructor: Dr. Lev Manovich, Professor, The Graduate Center, CUNY.
One of the 50 most important people of 2014 (Verge Top 50 list, 2014);
one of the 25 People Shaping the Future of Design (Complex, 2013).


Course description:

The explosive growth of social media and the digitization of cultural artifacts by libraries and museums have opened up exciting new possibilities for the study of cultural life. The "big data turn" has already affected many fields in the humanities (digital humanities, history, literary studies, art history, film studies, archeology, etc.), the social sciences (e.g., computational sociology), and professional fields such as journalism and arts administration.

This course explores the possibilities, the methods, and the tools for working with cultural data sets. We will cover both small and big humanistic data and different data sources (images, video, texts, library collections, sensor data, etc.). Students will learn practical techniques for organizing, analyzing, and visualizing cultural datasets using leading open-source tools. We will also discuss relevant readings and projects from a number of fields, including digital art, artistic visualization, media theory, social computing, and science and technology studies.

The course is open to all graduate students and does not require any previous technical knowledge. The practical tutorials and homework will be adjusted to fit students' backgrounds and interests.

The course will use some of the datasets from Dr. Manovich's Software Studies Initiative, such as 10.5 million Instagram images shared in NYC in 2014.
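As a taste of the tutorials, here is a minimal sketch of a first exercise: loading a metadata table for a subsample of such a collection with pandas and summarizing it. The file name and column names below are hypothetical placeholders, not the actual course dataset.

```python
import pandas as pd

# One row per shared image; "created_time" is a hypothetical timestamp column
df = pd.read_csv("instagram_nyc_sample.csv", parse_dates=["created_time"])

print(df.shape)  # number of images and metadata fields
# Images shared per month of the year
print(df["created_time"].dt.month.value_counts().sort_index())
```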


Examples of projects from Software Studies Initiative:

Selfiecity
Phototrails
The Exceptional and the Everyday: 144 hours in Kiev
One million manga pages


When do people share? Comparing Instagram activity in six global cities




120,000 images from six global cities organized by average hue (distance to the center). The angle of each image indicates the day/time it was shared. All images use their local times (i.e., we keep the offsets between time zones). Because the temporal patterns of the cities overlap, we see a uniform global 24/7 cycle, without any separation between times of day. (This visualization and the post: Lev Manovich.)
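For readers who want to experiment with this kind of radial layout, here is a minimal sketch in Python: angle encodes local share time over a 24-hour cycle, radius encodes average hue. The column names are assumptions for illustration; the original visualization was produced with our own tools rather than matplotlib.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical metadata: one row per image, with a local timestamp
# and a precomputed average hue value
df = pd.read_csv("images_6cities.csv", parse_dates=["created_time"])

hours = df["created_time"].dt.hour + df["created_time"].dt.minute / 60.0
theta = 2 * np.pi * hours / 24.0   # map 24 hours onto the full circle
r = df["avg_hue"]                  # average hue as distance to the center

ax = plt.subplot(projection="polar")
ax.scatter(theta, r, s=1, alpha=0.3)
ax.set_theta_zero_location("N")    # put midnight at the top
plt.show()
```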


In this post we compare patterns of Instagram activity across six cities: Bangkok, Berlin, Moscow, New York, São Paulo, and Tokyo.

The analysis uses 120,000 images (20,000 from each city). To create this dataset, we first downloaded the details of all geo-tagged Instagram images shared in a central area of the same size in each city during one full week (December 4-11, 2013; over 660,000 images in total). We then downloaded a random sample of 20,000 images from each city.

(This dataset was created as part of our Selfiecity project - see details below.)



1. Numbers of Instagram images shared per hour in a 24 hour cycle

Berlin, Moscow, New York, and São Paulo have similar patterns: most images are shared between 1pm and 11pm, with a peak around 7-8pm.

In Tokyo and Bangkok, there are two peaks: lunchtime (1-2pm) and evening (7-11pm).
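The underlying counts are straightforward to reproduce on any similar dataset; a minimal pandas sketch, assuming the same hypothetical metadata table as above:

```python
import pandas as pd

df = pd.read_csv("images_6cities.csv", parse_dates=["created_time"])

# Images shared per hour of the (local) day, one column per city
per_hour = (df.groupby(["city", df["created_time"].dt.hour])
              .size()
              .unstack(level=0))
per_hour.plot()  # six 24-point curves, one per city
```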





2. Numbers of Instagram images shared for every day of the week:

In most cities, people share the most images on Saturday and Sunday. However, while people in Berlin, Moscow, Tokyo, and Bangkok appear to start their weekend on Friday, in New York and São Paulo Friday is no different from the other weekdays.

(Because we are only using data for a single week, these patterns may not be typical. In particular, the different Bangkok patterns may be related to the political events in the city during that particular week.)





3. Number of Instagram users:

Our dataset contains twice as many users in New York as in Berlin -




4. Average number of images per user in each city:

- which means that while more people post on Instagram in NYC, on average each user posts fewer images (the same holds for Moscow and São Paulo).
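Both statistics in sections 3 and 4 reduce to simple per-user aggregation; a sketch with the same hypothetical table, now assuming a user_id column:

```python
import pandas as pd

df = pd.read_csv("images_6cities.csv")

users_per_city = df.groupby("city")["user_id"].nunique()      # section 3
images_per_user = df.groupby("city").size() / users_per_city  # section 4

print(users_per_city)
print(images_per_user)
```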





Notes:

1. Capture time versus share time.

Instagram allows users to post any image from their phones - i.e., users are not limited to capturing and immediately posting images with the Instagram app. Therefore, the volume of sharing does not directly tell us when people take pictures, but rather when they use the app to share them.


2. Dataset details.

To create our dataset, we used the Gnip service to download Instagram data and images, so we were not constrained by Instagram API download limits. Both Instagram and Gnip provide only publicly shared images. We downloaded only images with location data, which represent only a portion of all shared images.


Selfiecity receives Golden Award in a data visualization competition




Our project Selfiecity has received the Golden Award in the 2014 Information is Beautiful competition.

Selfiecity is a collaboration between an outstanding team of data visualization designers and programmers - Moritz Stefaner, Dominikus Baur, and Daniel Goddemeyer - and five members of the Software Studies Initiative. The collaboration was a great experience for us. Everybody worked hard. Moritz was the heart of the project, designing the data visualizations and the website and making sure all the pieces came together.

Amazingly, another of Moritz's recent projects, OECD Regional Well-Being, received the Silver Award in the same competition. Bravo, Moritz!


Our new animated Phototrails visualizations for Google Zeitgeist 2014 conference


Phototrails video 1 for Google Zeitgeist 2014 from Lev Manovich on Vimeo.


Phototrails video 2 for Google Zeitgeist 2014 from Lev Manovich on Vimeo.


This summer we received a commission to create new artworks to be shown during the Google Zeitgeist 2014 conference. The conference is an invitation-only, two-day event; this year it took place September 14-16 in Paradise Valley, Arizona.

Google produced high-quality videos of many of the presentations. (You can also find videos of the talks from earlier conferences at www.zeitgeistminds.com.) For me personally, the highlights were the talks by Presidents Carter and Clinton, Google's own Eric Schmidt and Larry Page, and Lawrence Lessig - and also chatting with the people from Google X, who were showing their amazing research.

We were asked to create animated versions of our Phototrails project. In the original project, we analyzed and visualized 2.3 million Instagram photos from 13 global cities. For the new Google Zeitgeist project, we created a number of new still visualizations using our ImagePlot tool. We also used the animation option in ImagePlot to render a long sequence of visualization frames. The frames were rendered in 4K and then scaled to HD resolution. We used Premiere and After Effects to assemble the videos.
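The downscaling step itself is simple; as a rough illustration (not our actual pipeline, which went through ImagePlot, Premiere, and After Effects), here is how a folder of 4K frames could be scaled to HD with Pillow, using hypothetical file paths:

```python
from pathlib import Path
from PIL import Image

out_dir = Path("frames_hd")
out_dir.mkdir(exist_ok=True)

for frame in sorted(Path("frames_4k").glob("*.png")):
    img = Image.open(frame)                        # 3840x2160 source frame
    img = img.resize((1920, 1080), Image.LANCZOS)  # high-quality downscale
    img.save(out_dir / frame.name)
```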

The two final videos, which were exhibited at the conference, are above. The first video dissolves between the original and new Phototrails visualizations. The second is a slow zoom into an animated visualization of 120,000 Instagram photos from six cities. (Note: because of Vimeo compression, the videos do not look as sharp as the originals.)

The project was created by the original Phototrails team: Nadav Hochman, Jay Chow and Lev Manovich.

During the weeks leading up to the event, we collaborated using Dropbox because each of us was in a different place: Nadav in NYC, Jay in California, and I was first in Brazil and then in Ireland. After we saw our videos playing on site on the morning of September 14th, we went back to the hotel, made some adjustments, and rendered new versions. Good thing that ImagePlot (originally written by Manovich in 2010 and later expanded by Chow) kept rendering and never quit - even in Arizona's heat!

Lev Manovich's slides - Tate Live: On Mediated Experience: Oct 27, 2014

The Imaginary App: a new book from Software Studies series @ The MIT Press



The latest book from Software Studies series at The MIT Press:


The Imaginary App

Edited by Paul D. Miller (aka DJ Spooky that Subliminal Kid) and Svitlana Matviyenko. The MIT Press, 2014.


From the publisher:

Mobile apps promise to deliver (h)appiness to our devices at the touch of a finger or two. Apps offer gratifyingly immediate access to connection and entertainment. The array of apps downloadable from the app store may come from the cloud, but they attach themselves firmly to our individual movement from location to location on earth. In The Imaginary App, writers, theorists, and artists--including Stephen Wolfram (in conversation with Paul Miller) and Lev Manovich--explore the cultural and technological shifts that have accompanied the emergence of the mobile app. These contributors and interviewees see apps variously as “a machine of transcendence,” “a hulking wound in our nervous system,” or “a promise of new possibilities.” They ask whether the app is an object or a relation, and if it could be a “metamedium” that supersedes all other artistic media. They consider the control and power exercised by software architecture; the app’s prosthetic ability to enhance certain human capacities, in reality or in imagination; the app economy, and the divergent possibilities it offers of making a living or making a fortune; and the app as medium and remediator of reality.

Also included (and documented in color) are selected projects by artists asked to design truly imaginary apps, “icons of the impossible.” These include a female sexual arousal graph using Doppler images; “The Ultimate App,” which accepts a payment and then closes, without providing information or functionality; and “iLuck,” which uses GPS technology and four-leaf-clover icons to mark places where luck might be found.


Contributors:

Christian Ulrik Andersen, Thierry Bardini, Nandita Biswas Mellamphy, Benjamin H. Bratton, Drew S. Burk, Patricia Ticineto Clough, Robbie Cormier, Dock Currie, Dal Yong Jin, Nick Dyer-Witheford, Ryan and Hays Holladay, Atle Mikkola Kjøsen, Eric Kluitenberg, Lev Manovich, Vincent Manzerolle, Svitlana Matviyenko, Dan Mellamphy, Paul D. Miller aka DJ Spooky That Subliminal Kid, Steven Millward, Anna Munster, Søren Bro Pold, Chris Richards, Scott Snibbe, Nick Srnicek, Stephen Wolfram.


About Software Studies series at MIT Press:

The Software Studies series publishes the best new work in a critical and experimental field that is at once culturally and technically literate, reflecting the reality of today’s software culture. The field of software studies engages and contributes to the research of computer scientists, the work of software designers and engineers, and the creations of software artists. Software studies tracks how software is substantially integrated into the processes of contemporary culture and society. It does this both in the scholarly modes of the humanities and social sciences and in the software creation/research modes of computer science, the arts, and design.


Software Studies series co-editors:

Dr. Noah Wardrip-Fruin, The University of California, Santa Cruz (UCSC).

Dr. Lev Manovich, The Graduate Center, City University of New York (CUNY).




The cover of The Imaginary App


"The Exceptional and the Everyday: 144 hours in Kiev" - our new project exploring 13K Instagram photos from 2014 Ukrainian revolution


http://www.the-everyday.net/

The Exceptional and the Everyday: 144 hours in Kiev is the first project to analyze the use of Instagram during a social upheaval.

Using computational and data visualization techniques, we explore 13,208 Instagram images shared by 6,165 people in the central area of Kiev during the 2014 Ukrainian revolution (February 17 - February 22, 2014).


From The Everyday Project

Over a few days in February 2014, a revolution took place in Kiev, Ukraine. How was this exceptional event reflected on Instagram? What can visual social media tell us about the experiences of people during social upheavals?

If we look at the images of Kiev published by many global media outlets during the 2014 Ukrainian revolution, the whole city is reduced to what was taking place on its main square. On Instagram, things look different. Images of clashes between protesters and the police, and of political slogans, appear next to images of typical Instagram subjects. Most people continue their lives. The exceptional co-exists with the everyday. We saw this in the collected images, and we wanted to communicate it in the project.

The Exceptional and the Everyday: 144 hours in Kiev continues our lab's previous work (Software Studies Initiative, softwarestudies.com) with visual social media:

phototrails.net (analysis and visualization of 2.3 million Instagram photos in 13 global cities, 2013)

selfiecity.net (a comparison of 3,200 selfie photos shared in five cities, 2014; a collaboration with Moritz Stefaner).

In the new project we focus specifically on the content of the images, as opposed to only their visual characteristics. We also explore the non-visual data that accompanies the images: the most frequent tags; the use of the English, Ukrainian, and Russian languages; the dates and times when images were shared; and their geo-coordinates.
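As an illustration of the tag analysis, here is a minimal sketch that counts the most frequent tags, assuming a hypothetical metadata table with a comma-separated tags field (the project used its own data preparation):

```python
from collections import Counter
import pandas as pd

df = pd.read_csv("kiev_images.csv")

tag_counts = Counter()
for tags in df["tags"].dropna():
    tag_counts.update(t.strip().lower() for t in tags.split(","))

print(tag_counts.most_common(20))  # the most frequent tags in the set
```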


Project web site:

http://www.the-everyday.net/


"The Social Media Image" - a new paper from Nadav Hochman (Phototrails project)



ImagePlot visualization of 50,000 Instagram images from Paris (spring 2013), organized by average brightness and contrast.
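The two measurements behind this plot can be approximated in a few lines of Python: average brightness as the mean pixel value, and contrast as the standard deviation of pixel values (one common definition). The folder and files below are hypothetical placeholders.

```python
from pathlib import Path
import numpy as np
from PIL import Image

measurements = []
for path in Path("paris_images").glob("*.jpg"):
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    measurements.append((path.name, gray.mean(), gray.std()))

for name, brightness, contrast in measurements[:5]:
    print(name, round(brightness, 1), round(contrast, 1))
```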


A new paper by Nadav Hochman (see the Phototrails project) has been published in Big Data & Society:

The Social Media Image

Download full paper text

Abstract:

How do the organization and presentation of large-scale social media image sets in social networking apps affect the creation of visual knowledge, value, and meaning?

The article analyzes fundamental elements in the changing syntax of existing visual software ontology - the ways current social media platforms and aggregators present and categorize social media images. It shows how visual media created within social media platforms follow distinct modes of knowledge production and acquisition.

First, I analyze the structure of social media images within data streams as opposed to previous information organization in structured databases. While the database has no pre-defined notions of time and thus challenges traditional linear forms, the data stream re-emphasizes the linearity of a particular data sequence and activates a set of new relations to contemporary temporalities.

Next, I show how these visual arrangements and temporal principles are manifested and discussed in three artworks: Untitled (Perfect Lovers) by Felix Gonzalez-Torres (1991), The Clock by Christian Marclay (2011), and Last Clock by Jussi Ängeslevä and Ross Cooper (2002).

By emphasizing the technical and poetic ways in which social media situate the present as a "thick" historical unit that embodies multiple and synchronous temporalities, the article illuminates some of the conditions, challenges, and tensions between former visual structures and current ones, and unfolds the cultural significations of contemporary big visual data.



"Visualizing the Museum" - Manovich's lecture at São Paulo Museum of Art, August 26, 2014





Lev Manovich will lecture at the São Paulo Museum of Art on August 26 at 7pm.

Title and summary:


Visualizing the Museum

Over the last few years, many major art museums around the world have digitized their collections and made them available online. As a result, we can now apply a "big data" approach to the history of art and visual culture, making visible patterns across millions of historical images.

Our lab was set up in 2007 in anticipation of these developments. We began to develop methods and techniques for the visualization of massive collections of historical cultural images, even though such collections were not yet available at that time. Today (2014) the situation is different - there are plenty of museum datasets to choose from, including the digitized collections of the Rijksmuseum, the Cooper-Hewitt Museum, and the Library of Congress.

In my lecture I will discuss some of our completed projects related to museums and large art datasets. Two of them apply computational and visualization tools to digitized collections; another uses the social media photos people share in museums; and yet another looks at user-generated art.

The following are the datasets and their sources for these projects:

- The MoMA (Museum of Modern Art, NYC) collection of over 20,000 photographs covering the 19th and 20th centuries.

- Hundreds of thousands of Instagram photos shared by visitors at MoMA, the Centre Pompidou, and Tate Modern.

- 1 million artworks from deviantArt, the most popular social network for user-generated art.

- All of Dziga Vertov's films, from the Austrian Film Museum.



Lecture poster:







MediaLab at the Met : Open Call for Internship Applications




Deadline for submission: August 27th.
Send resumes and letter of interest to
don.undeen@metmuseum.org


The MediaLab at the Metropolitan Museum of Art is a small team dedicated to exploring the intersections of art, technology, and the museum experience. We do this by partnering with talented students and professionals to develop prototypes and artistic provocations that fuel conversation and new ideas. We are currently accepting internship applications for the Fall 2014 Semester.

Because our internships are unpaid, great emphasis is put on supporting our student partners in realizing their vision and providing them with access to training, content, and museum expertise. Interns will have ample opportunities to present to, receive feedback from, and pursue partnerships with Met staff and industry professionals. All internships conclude with a public expo of projects and a blog post on the Met's Digital Underground blog:
http://www.metmuseum.org/blogs/digital-underground

Past projects have dealt with such diverse topics as: Projection mapping; 3D scanning, printing, modeling and animation; accessible wayfinding and path generation; iBeacon; Kinect; Arduino and robotics; digital art copyism and Nintendo hacking; Oculus Rift and virtual reality; augmented reality; and so much more!

Code developed in this program will be open-sourced, with ownership retained by the interns.

Selected interns will be encouraged to pursue their own interests as long as they in some way address art and/or the museum experience (interpret that as broadly as you like). However, here are some topics and technologies that the museum has been thinking about lately, and may prove especially fruitful:

Creative Reuse of Met Content : How can the study and use of objects in the Met's collection expand or refine your own creative process?

Accessible Wayfinding : Previous MediaLab projects have led to an algorithm that provides paths through the museum to see your favorite objects while respecting the user's access preferences: avoiding stairs, dimly lit rooms, etc. This work, functioning as a web service, is ready for a UI layer to make it truly valuable to our visitors.
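As a sketch of the general idea (not the MediaLab's actual service), the museum can be modeled as a graph whose edges carry accessibility attributes; routing for a visitor who avoids stairs then means dropping those edges before computing a shortest path. The room names and weights below are invented:

```python
import networkx as nx

G = nx.Graph()
G.add_edge("Great Hall", "Egyptian Art", weight=1, stairs=False)
G.add_edge("Egyptian Art", "Temple of Dendur", weight=1, stairs=False)
G.add_edge("Great Hall", "European Paintings", weight=1, stairs=True)
G.add_edge("Temple of Dendur", "European Paintings", weight=2, stairs=False)

def route(G, start, goal, avoid_stairs=False):
    H = G.copy()
    if avoid_stairs:
        # Drop every connection that involves stairs
        H.remove_edges_from([(u, v) for u, v, d in G.edges(data=True)
                             if d["stairs"]])
    return nx.shortest_path(H, start, goal, weight="weight")

# Longer path, but step-free
print(route(G, "Great Hall", "European Paintings", avoid_stairs=True))
```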

Crowdsourced Audio Descriptions of Art : Our work with blind and low-vision visitors has revealed a need to test various approaches for capturing verbal descriptions of art objects. UI, UX, and mobile development work in this area would all reap dividends.

Computer Vision and Image Analysis : What secrets are lurking in our collection that can be revealed through the gaze of the artificial eye?

Virtual Reality and the Museum Model : Previous MediaLab projects have leveraged the Oculus Rift and our architectural models to develop fantastical virtual environments. With the new Oculus on the way, we're looking to expand our virtual universe, for both practical and artistic purposes.

Projection Mapping : Sculptural and architectural forms in our collection lend themselves to very challenging and interesting possibilities for projection mapping. Previous work has involved re-creating the original colors on the Temple of Dendur, but more personal, artistic opportunities abound as well.

Micro-Location Tracking : With iBeacon technology seeing more widespread use in retail and cultural environments, what types of experiences can we enable in our own space that make use of location-awareness and personal identification?

Natural Language Processing, Sentiment Analysis, Collective Intelligence, and the Semantic Web : There are vast amounts of didactic content about our objects and their context. What connections can we uncover using modern text analysis and concept modeling technologies?
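One way to prototype such connections is TF-IDF vectors plus cosine similarity over object descriptions; a minimal scikit-learn sketch with invented placeholder texts:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

texts = {
    "temple_of_dendur": "Egyptian temple rebuilt stone by stone in the museum",
    "greek_amphora": "terracotta vessel for storing wine and oil",
    "roman_portrait": "marble portrait bust carved in the Roman period",
}

matrix = TfidfVectorizer(stop_words="english").fit_transform(texts.values())
sim = cosine_similarity(matrix)

# Similarity of every object to the first one
print(dict(zip(texts.keys(), sim[0].round(2))))
```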

3D Scanning, Animation, Modeling, Printing and Casting : The MediaLab has four 3D printers and access to more advanced scanning and modeling technologies. We've also been enthusiastic supporters of artists using Met objects in their creative work. Let's push the envelope even further.

Brain Scanning : This is a new area for us; we see brain-scanning technology as having interesting connections to accessibility, visitor research, and the museum's role as a place of meditation and reflection. We're exploring partnerships in this area and are looking for someone who shares our enthusiasm to learn more.

Wild Card! : What connections do you want to explore between your favorite technology and the world's greatest museum? Let's talk!


Hardware to play with:
- Google Glass
- 3D printers
- 3D Scanners
- Kinect
- Leap Motion
- Oculus Rift
- Arduino and Assorted Electronics
- Raspberry Pi
- Touchscreens
- Projectors
- iBeacons
- Brain Scanners
- and more fun stuff on the way!


Send resumes and a letter of interest to don.undeen@metmuseum.org by August 27th.