Lev Manovich's slides - Tate Live: On Mediated Experience: Oct 27, 2014

The Imaginary App: a new book from Software Studies series @ The MIT Press



The latest book from the Software Studies series at The MIT Press:


The Imaginary App

Edited by Paul D. Miller (aka DJ Spooky that Subliminal Kid) and Svitlana Matviyenko. The MIT Press, 2014.


From the publisher:

Mobile apps promise to deliver (h)appiness to our devices at the touch of a finger or two. Apps offer gratifyingly immediate access to connection and entertainment. The array of apps downloadable from the app store may come from the cloud, but they attach themselves firmly to our individual movement from location to location on earth. In The Imaginary App, writers, theorists, and artists--including Stephen Wolfram (in conversation with Paul Miller) and Lev Manovich--explore the cultural and technological shifts that have accompanied the emergence of the mobile app. These contributors and interviewees see apps variously as “a machine of transcendence,” “a hulking wound in our nervous system,” or “a promise of new possibilities.” They ask whether the app is an object or a relation, and if it could be a “metamedium” that supersedes all other artistic media. They consider the control and power exercised by software architecture; the app’s prosthetic ability to enhance certain human capacities, in reality or in imagination; the app economy, and the divergent possibilities it offers of making a living or making a fortune; and the app as medium and remediator of reality.

Also included (and documented in color) are selected projects by artists asked to design truly imaginary apps, “icons of the impossible.” These include a female sexual arousal graph using Doppler images; “The Ultimate App,” which accepts a payment and then closes, without providing information or functionality; and “iLuck,” which uses GPS technology and four-leaf-clover icons to mark places where luck might be found.


Contributors:

Christian Ulrik Andersen, Thierry Bardini, Nandita Biswas Mellamphy, Benjamin H. Bratton, Drew S. Burk, Patricia Ticineto Clough, Robbie Cormier, Dock Currie, Dal Yong Jin, Nick Dyer-Witheford, Ryan and Hays Holladay, Atle Mikkola Kjøsen, Eric Kluitenberg, Lev Manovich, Vincent Manzerolle, Svitlana Matviyenko, Dan Mellamphy, Paul D. Miller aka DJ Spooky That Subliminal Kid, Steven Millward, Anna Munster, Søren Bro Pold, Chris Richards, Scott Snibbe, Nick Srnicek, Stephen Wolfram.


About Software Studies series at MIT Press:

The Software Studies series publishes the best new work in a critical and experimental field that is at once culturally and technically literate, reflecting the reality of today’s software culture. The field of software studies engages and contributes to the research of computer scientists, the work of software designers and engineers, and the creations of software artists. Software studies tracks how software is substantially integrated into the processes of contemporary culture and society. It does this both in the scholarly modes of the humanities and social sciences and in the software creation/research modes of computer science, the arts, and design.


Software Studies series co-editors:

Dr. Noah Wardrip-Fruin, The University of California, Santa Cruz (UCSC).

Dr. Lev Manovich, The Graduate Center, City University of New York (CUNY).




The cover of The Imaginary App


"The Exceptional and the Everyday: 144 hours in Kiev" - our new project exploring 13K Instagram photos from the 2014 Ukrainian revolution


http://www.the-everyday.net/

The Exceptional and the Everyday: 144 hours in Kiev is the first project to analyze the use of Instagram during a social upheaval.

Using computational and data visualization techniques, we explore 13,208 Instagram images shared by 6,165 people in the central area of Kiev during the 2014 Ukrainian revolution (February 17 - February 22, 2014).


From The Everyday Project

Over a few days in February 2014, a revolution took place in Kiev, Ukraine. How was this exceptional event reflected on Instagram? What can visual social media tell us about the experiences of people during social upheavals?

If we look at the images of Kiev published by global media outlets during the 2014 Ukrainian Revolution, the whole city is reduced to what was taking place on its main square. On Instagram, it looks different. Images of clashes between protesters and police, and of political slogans, appear next to images of typical Instagram subjects. Most people continue their lives. The exceptional co-exists with the everyday. We saw this in the collected images, and we wanted to communicate it in the project.

The Exceptional and the Everyday: 144 hours in Kiev continues our lab's previous work (Software Studies Initiative, softwarestudies.com) with visual social media:

phototrails.net (analysis and visualization of 2.3 million Instagram photos in 14 global cities, 2013)

selfiecity.net (comparison between 3200 selfie photos shared in five cities, 2014; a collaboration with Moritz Stefaner).

In the new project we focus specifically on the content of the images, as opposed to only their visual characteristics. We also explore the non-visual data that accompanies the images: the most frequent tags, the use of the English, Ukrainian and Russian languages, the dates and times when images were shared, and their geo-coordinates.
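As a sketch of what this kind of metadata analysis can look like, the following counts tags, caption languages, and sharing hours over a few hypothetical records. The field names and values here are illustrative only, not the project's actual data schema:

```python
from collections import Counter
from datetime import datetime

# Hypothetical records standing in for per-image Instagram metadata:
# tags, caption language, timestamp, and geo-coordinates.
photos = [
    {"tags": ["euromaidan", "kiev"], "lang": "uk",
     "taken": "2014-02-18T14:05:00", "geo": (50.450, 30.524)},
    {"tags": ["coffee"], "lang": "ru",
     "taken": "2014-02-18T09:30:00", "geo": (50.447, 30.520)},
    {"tags": ["euromaidan"], "lang": "en",
     "taken": "2014-02-20T14:45:00", "geo": (50.451, 30.523)},
]

# Most frequent tags across all images.
tag_counts = Counter(t for p in photos for t in p["tags"])

# Distribution of caption languages.
lang_counts = Counter(p["lang"] for p in photos)

# When during the day images were shared (hour of day).
hour_counts = Counter(datetime.fromisoformat(p["taken"]).hour for p in photos)

print(tag_counts.most_common(1))  # the single most frequent tag
print(dict(lang_counts))
print(dict(hour_counts))
```

The same three counters, run over the full 13,208-image set, would yield the tag, language, and time-of-day distributions the project describes.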


Project web site:

http://www.the-everyday.net/


"The Social Media Image" - a new paper by Nadav Hochman (Phototrails project)



Imageplot of 50,000 Instagram images from Paris (spring 2013) organized by average brightness and contrast.
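An imageplot like the one in the figure places each image by two measured values. A minimal sketch of the two measurements (mean brightness, RMS contrast) and the grid placement, using hypothetical grayscale pixel lists rather than real image data; the function names are illustrative, not the lab's actual tooling:

```python
import math

def brightness_contrast(pixels):
    """Mean brightness and RMS contrast of grayscale pixel values (0-255)."""
    mean = sum(pixels) / len(pixels)
    rms = math.sqrt(sum((p - mean) ** 2 for p in pixels) / len(pixels))
    return mean, rms

def cell(mean, rms, cols=100, rows=100, max_val=255.0):
    """Map (brightness, contrast) to a (col, row) cell of the montage grid."""
    col = min(cols - 1, int(mean / max_val * cols))
    row = min(rows - 1, int(rms / max_val * rows))
    return col, row

dark_flat = [10] * 4      # a dark, low-contrast "image"
half = [0, 255, 0, 255]   # mid-brightness, maximum-contrast "image"

print(brightness_contrast(dark_flat))   # (10.0, 0.0)
print(cell(*brightness_contrast(half)))
```

Plotting every thumbnail at its computed cell produces the kind of montage shown: dark, flat images cluster in one corner, bright high-contrast ones in the opposite.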


The new paper by Nadav Hochman (see the Phototrails project) has been published in Big Data & Society:

The Social Media Image

Download full paper text

Abstract:

How do the organization and presentation of large-scale social media image sets in social networking apps affect the creation of visual knowledge, value, and meaning?

The article analyzes fundamental elements in the changing syntax of existing visual software ontology — the ways current social media platforms and aggregators present and categorize social media images. It shows how visual media created within social media platforms follow distinct modes of knowledge production and acquisition.

First, I analyze the structure of social media images within data streams as opposed to previous information organization in structured databases. While the database has no pre-defined notions of time and thus challenges traditional linear forms, the data stream re-emphasizes the linearity of a particular data sequence and activates a set of new relations to contemporary temporalities.

Next, I show how these visual arrangements and temporal principles are manifested and discussed in three artworks: Untitled (Perfect Lovers) by Felix Gonzalez-Torres (1991), The Clock by Christian Marclay (2011), and Last Clock by Jussi Ängeslevä and Ross Cooper (2002).

By emphasizing the technical and poetic ways in which social media situate the present as a "thick" historical unit that embodies multiple and synchronous temporalities, the article illuminates some of the conditions, challenges, and tensions between former visual structures and current ones, and unfolds the cultural significations of contemporary big visual data.



"Visualizing the Museum" - Manovich's lecture at São Paulo Museum of Art, August 26, 2014





Lev Manovich will lecture at the São Paulo Museum of Art on August 26 at 7pm.

Title and summary:


Visualizing the Museum

Over the last few years, many major art museums around the world have digitized their collections and made them available online. As a result, we can now apply a "big data" approach to the history of art and visual culture, making visible patterns across millions of historical images.

Our lab was set up in 2007 in anticipation of these developments. We began to develop methods and techniques for visualizing massive collections of historical cultural images, even though such collections were not yet available at that time. Today (2014) the situation is different: there are plenty of museum datasets to choose from, including digitized collections from the Rijksmuseum, the Cooper-Hewitt Museum, and the Library of Congress.

In my lecture I will discuss some of our completed projects related to museums and large art datasets. Two of them apply computational and visualization tools to digitized collections; another uses the social media photos people share in museums; and yet another looks at user-generated art.

These projects draw on the following datasets and sources:

- The MoMA (Museum of Modern Art, NYC) collection of over 20,000 photographs covering the 19th and 20th centuries.

- Hundreds of thousands of Instagram photos shared by visitors at MoMA, the Centre Pompidou and Tate Modern.

- 1 million artworks from deviantArt, the most popular social network for user-generated art.

- All of Dziga Vertov's films, from the Austrian Film Museum.



Lecture poster:







MediaLab at the Met : Open Call for Internship Applications




Deadline for submission: August 27th.
Send resumes and letter of interest to
don.undeen@metmuseum.org


The MediaLab at the Metropolitan Museum of Art is a small team dedicated to exploring the intersections of art, technology, and the museum experience. We do this by partnering with talented students and professionals to develop prototypes and artistic provocations that fuel conversation and new ideas. We are currently accepting internship applications for the Fall 2014 Semester.

Because our internships are unpaid, great emphasis is put on supporting our student partners in realizing their vision, providing them with access to training, content, and museum expertise. Interns will have ample opportunities to present to, receive feedback from and pursue partnerships with Met staff and industry professionals. All internships conclude with a public expo of projects, and a blog post on the Met's Digital Underground blog:
http://www.metmuseum.org/blogs/digital-underground

Past projects have dealt with such diverse topics as: Projection mapping; 3D scanning, printing, modeling and animation; accessible wayfinding and path generation; iBeacon; Kinect; Arduino and robotics; digital art copyism and Nintendo hacking; Oculus Rift and virtual reality; augmented reality; and so much more!

Code developed in this program will be open-sourced, with ownership retained by the interns.

Selected interns will be encouraged to pursue their own interests as long as they in some way address art and/or the museum experience (interpret that as broadly as you like). However, here are some topics and technologies that the museum has been thinking about lately, and may prove especially fruitful:

Creative Reuse of Met Content : How can study and use of objects in the Met's collection expand or refine your own creative process?

Accessible Wayfinding : Previous MediaLab projects have led to an algorithm that provides paths through the museum to see your favorite objects while respecting the user's access preferences: avoiding stairs, dimly lit rooms, etc. This work, which functions as a web service, is ready for a UI layer to make it truly valuable to our visitors.

Crowdsourced Audio Descriptions of Art : Our work with blind and low-vision visitors has revealed a need to test various approaches for capturing verbal descriptions of art objects. UI, UX, and mobile development work in this area would all pay dividends.

Computer Vision and Image Analysis : What secrets are lurking in our collection that can be revealed through the gaze of the artificial eye?

Virtual Reality and the Museum Model : Previous MediaLab projects have leveraged the Oculus Rift and our architectural models to develop fantastical virtual environments. With the new Oculus on the way, we're looking to expand our virtual universe, for both practical and artistic purposes.

Projection Mapping : Sculptural and architectural forms in our collection lend themselves to very challenging and interesting possibilities for projection mapping. Previous work has involved re-creating the original colors on the Temple of Dendur, but more personal, artistic opportunities abound as well.

Micro-Location Tracking : With iBeacon technology seeing more widespread use in retail and cultural environments, what types of experiences can we enable in our own space that make use of location-awareness and personal identification?

Natural Language Processing, Sentiment Analysis, Collective Intelligence, and the Semantic Web : There is a vast amount of didactic content about our objects and their context. What connections can we uncover using modern text analysis and concept modeling technologies?

3D Scanning, Animation, Modeling, Printing and Casting : The MediaLab has four 3D printers and access to more advanced scanning and modeling technologies. We've also been enthusiastic supporters of artists using Met objects in their creative work. Let's push the envelope even further.

Brain Scanning : A new area for us, we see brain-scanning technology as having interesting connections to accessibility, visitor research, and the museum's role as a place of meditation and reflection. We're exploring partnerships in this area, and are looking for someone who shares our enthusiasm to learn more.

Wild Card! : What connections do you want to explore between your favorite technology and the world's greatest museum? Let's talk!


Hardware to play with:
- Google Glass
- 3D printers
- 3D Scanners
- Kinect
- Leap Motion
- Oculus Rift
- Arduino and Assorted Electronics
- Raspberry Pi
- Touchscreens
- Projectors
- iBeacons
- Brain Scanners
- and more fun stuff on the way!







New CUNY Center for Digital Scholarship and Data Visualization to be established with a multi-million dollar grant



Graduate Center building in New York (5th Avenue and 34th street). Photo by Alex Irklievski.



The Graduate Center, City University of New York, co-leader of CUNY's newly created Big Data Consortium, announced that it will establish the CUNY Center for Digital Scholarship and Data Visualization. The consortium has been awarded $15 million from the State of New York in the CUNY 2020 grant competition.

See full Press Release for more details.


The Center will build on two existing research directions at CUNY. The first is digital humanities research, which includes a number of innovative projects by faculty and students. Prominent examples are Professor Matthew Gold's Commons In A Box and the hybrid print/digital publication Debates in the Digital Humanities. For the full list of people, projects and labs, see the GC Digital Initiatives web site.

The second direction is the work on visualizing large cultural datasets headed by Professor Lev Manovich. Manovich joined CUNY as a Professor in the Ph.D. Program in Computer Science in 2013. His lab (Software Studies Initiative, softwarestudies.com) has been based at the California Institute for Telecommunications and Information Technology (Calit2) since 2007 and now operates between San Diego and NYC. The lab is a pioneer of the theory and practice of cultural analytics: applying computational and visualization methods to massive sets of visual cultural data. (Manovich first proposed the idea of cultural analytics in 2005.)

The Center plans to analyze and visualize datasets from leading New York City and national cultural institutions. Institutions that have already expressed interest in working with the Center include the Museum of Modern Art, the New York Public Library, the Cooper-Hewitt Museum, the New-York Historical Society, the Brooklyn Historical Society, the Digital Public Library of America, and Rhizome at the New Museum.

Software Studies Initiative will continue to operate as an independent lab, drawing on the unique resources of Calit2 and CUNY. We look forward to participating in the work of the new Center and sharing our experience and tools.


Media coverage:

CUNY Colleges Harness Technology for Economic Development.




Program of the workshop "Cultural Analytics, Informational Aesthetics and Distant Readings," July 4-5, MECS, Germany


Note: click on the image to go to Google+, where you can zoom in.




Video from "Software Studies Retrospective" at New York University


Software Studies Retrospective
Media and Literature Program, New York University, April 25, 2014.

Program:

http://www.programseries.com/2013-2014/software-studies-a-retrospective/



PROGRAM SERIES: Software Studies Retrospective (Part 1) from Media, Culture, Communication on Vimeo.

Presentation by Noah Wardrip-Fruin (University of California, Santa Cruz).

Noah Wardrip-Fruin is an Associate Professor of Computer Science at the University of California, Santa Cruz, where he co-directs the Expressive Intelligence Studio, one of the world’s largest technical research groups focused on games. He also directs the Playable Media group in UCSC’s Digital Arts and New Media program. Noah’s research areas include new models of storytelling in games, how games express ideas through play, and how games can help broaden understanding of the power of computation. He is the author of Expressive Processing: Digital Fictions, Computer Games, and Software Studies (2009) and, along with Nick Montfort, editor of the foundational volume The New Media Reader (2003).



 
PROGRAM SERIES: Software Studies Retrospective (Part 2) from Media, Culture, Communication on Vimeo.

Presentation by Matthew Fuller (Goldsmiths, University of London).

Matthew Fuller is David Gee Reader in Digital Media at the Centre for Cultural Studies, Goldsmiths College, University of London. He is the author of Behind the Blip: Essays on the Culture of Software (2003), Media Ecologies: Materialist Energies in Art and Technoculture (2005) and Evil Media (2012), as well as editor of Software Studies: A Lexicon (2008).



 
PROGRAM SERIES: Software Studies Retrospective (Part 3) from Media, Culture, Communication on Vimeo.

Response by Lev Manovich (The Graduate Center, CUNY), followed by participant Q&A.

Lev Manovich is a Professor of Computer Science at The Graduate Center, City University of New York. Manovich has published a number of books, including Software Takes Command (2013), Soft Cinema: Navigating the Database (2005) and The Language of New Media (2001). He also directs the Software Studies Initiative, co-founded with Noah Wardrip-Fruin, which combines two research directions: 1) the study of software using approaches from the humanities and media studies; 2) the development of methods and new software tools for the exploration of large cultural data. The lab’s latest project is Selfiecity.

SelfieSaoPaulo, a new project by Moritz Stefaner, Jay Chow and Lev Manovich for a media facade in São Paulo





SelfieSaoPaulo facade video from Lev Manovich on Vimeo.





SelfieSaoPaulo is a new project by Moritz Stefaner, Jay Chow and Lev Manovich, created for the 2014 SP_Urban Festival.

The work will run every evening for the duration of the festival: June 9 to July 7, 2014. The media facade is located at the FIESP/SESI building at Alameda das Flores (Avenida Paulista 1313).


The project further develops the ideas from Selfiecity, published in early 2014. Created by a larger team headed by Stefaner and Manovich, Selfiecity investigates the style of self-portraits (selfies) in five cities across the world using a mix of theoretical, artistic and quantitative methods. The presentation of the project includes high-resolution visualizations showing patterns in 3200 Instagram selfie photos. The interactive Selfiexploratory allows website visitors to explore the selfie database in real time, selecting photos using geographic and demographic information, as well as face characteristics analyzed by software.

SelfieSaoPaulo is designed for the large media facade in the center of São Paulo. Individual São Paulo Instagram selfie photos are sorted by three characteristics and animated over time. One animation presents the photos organized by estimated age, another by gender, and the third by the amount of smile.

The animations show us the diversity of São Paulo's citizens, and the variety of ways in which they present their self-portraits: facial expressions, poses, colors, filters used, body styles, etc. To let us see the differences between the photos more clearly, they are all automatically aligned by eye positions.
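Eye-based alignment of this kind typically computes a similarity transform that maps the two detected eye centers onto fixed target positions, so every face ends up with its eyes at the same two points. A minimal sketch of computing the scale and rotation for that transform; the target coordinates and function name are illustrative, not the project's actual code:

```python
import math

def eye_align_params(left, right,
                     target_left=(60.0, 80.0), target_right=(140.0, 80.0)):
    """Scale factor and rotation (radians) that map detected eye centers
    (left, right) onto the canonical target eye positions."""
    dx, dy = right[0] - left[0], right[1] - left[1]          # detected eye vector
    tdx, tdy = (target_right[0] - target_left[0],
                target_right[1] - target_left[1])            # target eye vector
    scale = math.hypot(tdx, tdy) / math.hypot(dx, dy)
    angle = math.atan2(tdy, tdx) - math.atan2(dy, dx)
    return scale, angle

# Eyes detected 40 px apart and level: scale up 2x, no rotation needed.
print(eye_align_params((100.0, 120.0), (140.0, 120.0)))  # (2.0, 0.0)
```

Applying this scale and rotation (plus a translation moving the left eye to its target) to each photo normalizes position, size, and tilt, which is what makes the remaining differences between faces visible.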

The project also reminds us that our spontaneous online actions become a source of behavioral and cognitive data used for commercial and surveillance purposes: improving search engine results, customizing recommendations, determining which images work best in online ads, etc. In SelfieSaoPaulo, the basic data extracted by face analysis software is superimposed on each photo. Images become data, which in turn is used to structure the presentation of the images.

Science used to focus on nature, with the smartest people going to work in physics, chemistry, astronomy and biology. Today, the social has become the new object of science, with hundreds of thousands of computer scientists, researchers and companies mining and mapping data about our behaviors. In this way, humans have become the new "nature" for the sciences. The implications of this monumental shift are only beginning to unfold. Will we become the atoms of the "social physics" first dreamed of by the founder of sociology, Auguste Comte, in the middle of the 19th century? SelfieSaoPaulo of course can't answer this - but by contrasting the expressive, unique faces of the selfie subjects with the numbers software reduces them to, it makes the question itself visible.



EXHIBITION DATES AND LOCATION:

SelfieSaoPaulo will run every evening for the duration of the festival: June 9 to July 7, 2014. The media facade is located at the FIESP/SESI building at Alameda das Flores (Avenida Paulista 1313).

The work was commissioned by Tanya Toft (Ph.D. Fellow at the Institute of Arts and Cultural Studies, Copenhagen University, and Visiting Scholar, GSAPP, Columbia University). The 2014 media facade program is curated by Toft and Marília Pasculli.


LINKS:

SP_Urban Festival

SP_Urban festival - exhibited works

Selfiecity


