"Visualizing the Museum" - Manovich's lecture at São Paulo Museum of Art, August 26, 2014





Lev Manovich will lecture at the São Paulo Museum of Art on August 26 at 7pm.

Title and summary:


Visualizing the Museum

Over the last few years, many major art museums around the world have digitized their collections and made them available online. As a result, we can now apply a "big data" approach to the history of art and visual culture, making visible the patterns across millions of historical images.

Our lab was set up in 2007 in anticipation of these developments. We began developing methods and techniques for the visualization of massive collections of historical cultural images, even though such collections were not yet available at that time. Today (2014) the situation is different - there are plenty of museum datasets to choose from, including digitized collections from the Rijksmuseum, the Cooper-Hewitt Museum, and the Library of Congress.
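
As an illustration of the kind of technique involved (a minimal sketch, not the lab's actual code), the snippet below maps a folder of digitized images into a simple two-dimensional feature space, plotting each image as a point by its mean brightness and mean saturation. Pillow and matplotlib are assumed, and the folder path is hypothetical.

# Map a directory of digitized images into a 2D feature space:
# one point per image, positioned by mean brightness and mean saturation.
import glob
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

brightness, saturation = [], []
for path in glob.glob("collection/*.jpg"):   # hypothetical folder
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float) / 255.0
    saturation.append(hsv[..., 1].mean())    # S channel
    brightness.append(hsv[..., 2].mean())    # V channel

plt.scatter(brightness, saturation, s=4, alpha=0.5)
plt.xlabel("mean brightness")
plt.ylabel("mean saturation")
plt.title("Collection mapped by two visual features")
plt.show()

Replacing the points with thumbnails of the images themselves yields the kind of montage visualization used in the projects discussed below.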

In my lecture I will discuss some of our completed projects related to museums and large art datasets. Two of them apply computational and visualization tools to digitized collections; another uses social media photos that people share in museums; and yet another looks at user-generated art.

The datasets and their sources used in these projects:

- MoMA (Museum of Modern Art, NYC): a collection of over 20,000 photographs covering the 19th and 20th centuries.

- Hundreds of thousands of Instagram photos shared by visitors at MoMA, the Centre Pompidou, and Tate Modern.

- 1 million artworks from deviantArt, the most popular social network for user-generated art.

- All of Dziga Vertov's films, from the Austrian Film Museum.



Lecture poster:







MediaLab at the Met : Open Call for Internship Applications




Deadline for submission: August 27th.
Send resumes and letter of interest to
don.undeen@metmuseum.org


The MediaLab at the Metropolitan Museum of Art is a small team dedicated to exploring the intersections of art, technology, and the museum experience. We do this by partnering with talented students and professionals to develop prototypes and artistic provocations that fuel conversation and new ideas. We are currently accepting internship applications for the Fall 2014 Semester.

Because our internships are unpaid, great emphasis is put on supporting our student partners in realizing their vision, providing them with access to training, content, and museum expertise. Interns will have ample opportunities to present to, receive feedback from, and pursue partnerships with Met staff and industry professionals. All internships conclude with a public expo of projects and a blog post on the Met's Digital Underground blog:
http://www.metmuseum.org/blogs/digital-underground

Past projects have dealt with such diverse topics as: Projection mapping; 3D scanning, printing, modeling and animation; accessible wayfinding and path generation; iBeacon; Kinect; Arduino and robotics; digital art copyism and Nintendo hacking; Oculus Rift and virtual reality; augmented reality; and so much more!

Code developed in this program will be open-sourced, with ownership retained by the interns.

Selected interns will be encouraged to pursue their own interests, as long as those interests in some way address art and/or the museum experience (interpret that as broadly as you like). However, here are some topics and technologies that the museum has been thinking about lately, which may prove especially fruitful:

Creative Reuse of Met Content : How can study and use of objects in the Met's collection expand or refine your own creative process?

Accessible Wayfinding : Previous MediaLab projects have led to an algorithm that provides paths through the museum to see your favorite objects while respecting the user's access preferences: avoiding stairs, dimly lit rooms, etc. This work, functioning as a web service, is ready for a UI layer to make it truly valuable to our visitors. (A sketch of the underlying routing idea appears after this topic list.)

Crowdsourced Audio Descriptions of Art : Our work with blind and low-vision visitors has revealed a need to test various approaches for capturing verbal descriptions of art objects. UI, UX, and mobile development work in this area would all reap dividends.

Computer Vision and Image Analysis : What secrets are lurking in our collection that can be revealed through the gaze of the artificial eye?

Virtual Reality and the Museum Model : Previous MediaLab projects have leveraged the Oculus Rift and our architectural models to develop fantastical virtual environments. With the new Oculus on the way, we're looking to expand our virtual universe, for both practical and artistic purposes.

Projection Mapping : Sculptural and architectural forms in our collection lend themselves to very challenging and interesting possibilities for projection mapping. Previous work has involved re-creating the original colors on the Temple of Dendur, but more personal, artistic opportunities abound as well.

Micro-Location Tracking : With iBeacon technology seeing more widespread use in retail and cultural environments, what types of experiences can we enable in our own space that make use of location-awareness and personal identification?

Natural Language Processing, Sentiment Analysis, Collective Intelligence, and the Semantic Web : There are vast amounts of didactic content about our objects and their context. What connections can we uncover using modern text analysis and concept modeling technologies?

3D Scanning, Animation, Modeling, Printing and Casting : The MediaLab has four 3D printers, and access to more advanced scanning and modeling technologies. We've also been enthusiastic supporters of artists using Met objects in their creative work. Let's push the envelope even further.

Brain Scanning : This is a new area for us; we see brain-scanning technology as having interesting connections to accessibility, visitor research, and the museum's role as a place of meditation and reflection. We're exploring partnerships in this area, and are looking for someone who shares our enthusiasm to learn more.

Wild Card! : What connections do you want to explore between your favorite technology and the world's greatest museum? Let's talk!
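
As promised under the Accessible Wayfinding topic above, here is a minimal, purely hypothetical sketch of preference-aware routing: shortest paths over a gallery graph whose edges carry accessibility tags, with conflicting edges filtered out before the search. The tiny graph, gallery names, and tags are all invented; the actual MediaLab web service is not public.

# Preference-aware wayfinding sketch: Dijkstra over a gallery graph,
# skipping edges whose tags conflict with the visitor's preferences.
import heapq

def shortest_path(graph, start, goal, avoid=()):
    # graph: {node: [(neighbor, cost, {tags}), ...]}
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, step, tags in graph[node]:
            if tags & set(avoid):          # skip stairs, dim rooms, etc.
                continue
            heapq.heappush(queue, (cost + step, nbr, path + [nbr]))
    return None

galleries = {                              # invented example graph
    "Great Hall": [("Egyptian Art", 1, set()), ("Grand Staircase", 1, {"stairs"})],
    "Grand Staircase": [("European Paintings", 1, {"stairs"})],
    "Egyptian Art": [("Temple of Dendur", 2, set())],
    "Temple of Dendur": [("European Paintings", 3, set())],
    "European Paintings": [],
}
print(shortest_path(galleries, "Great Hall", "European Paintings", avoid=["stairs"]))

With avoid=["stairs"], the route detours through the level galleries even though the staircase route is shorter.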


Hardware to play with:
- Google Glass
- 3D printers
- 3D Scanners
- Kinect
- Leap Motion
- Oculus Rift
- Arduino and Assorted Electronics
- Raspberry Pi
- Touchscreens
- Projectors
- iBeacons
- Brain Scanners
- and more fun stuff on the way!


Send resumes and letter of interest to
don.undeen@metmuseum.org

Deadline for submission: August 27th.





New CUNY Center for Digital Scholarship and Data Visualization to be established with a multi-million dollar grant



Graduate Center building in New York (5th Avenue and 34th Street). Photo by Alex Irklievski.



The Graduate Center, City University of New York, co-leader of CUNY's newly created Big Data Consortium, announced that it will establish the CUNY Center for Digital Scholarship and Data Visualization. The consortium has been awarded $15 million from the State of New York in the CUNY 2020 grant competition.

See the full press release for more details.


The Center will build on two research directions already established at CUNY. One is digital humanities research, including a number of innovative projects by faculty and students. Prominent examples include Professor Matthew Gold's Commons In A Box and the hybrid print/digital publication Debates in the Digital Humanities. For the full list of people, projects, and labs, see the GC Digital Initiatives website.

The second direction is the work on visualizing large cultural datasets headed by Professor Lev Manovich, who joined CUNY as a Professor in the Ph.D. Program in Computer Science in 2013. Manovich's lab (Software Studies Initiative, softwarestudies.com) has been based at the California Institute for Telecommunications and Information Technology (Calit2) since 2007 and now operates between San Diego and NYC. The lab is a pioneer of the theory and practice of cultural analytics: applying computational and visualization methods to massive sets of visual cultural data. (Manovich first proposed the idea of cultural analytics in 2005.)

The Center plans to analyze and visualize datasets from leading New York City and national cultural institutions. The institutions that have already expressed interest in working with the Center include the Museum of Modern Art, the New York Public Library, the Cooper-Hewitt Museum, the New-York Historical Society, the Brooklyn Historical Society, the Digital Public Library of America, and Rhizome at the New Museum.

The Software Studies Initiative will continue to operate as an independent lab, drawing on the unique resources of Calit2 and CUNY. We look forward to participating in the work of the new Center and sharing our experience and tools.


Media coverage:

CUNY Colleges Harness Technology for Economic Development.




program of the workshop "Cultural Analytics, Information Aesthetics and Distant Readings," July 4-5, mecs, Germany


Note: click on the image to go to Google+, where you can zoom in.



video from "Software Studies Retrospective" at New York University


Software Studies Retrospective
Media and Literature Program, New York University, April 25, 2014.

Program:

http://www.programseries.com/2013-2014/software-studies-a-retrospective/



PROGRAM SERIES: Software Studies Retrospective (Part 1) from Media, Culture, Communication on Vimeo.

Presentation by Noah Wardrip-Fruin (University of California, Santa Cruz). Noah Wardrip-Fruin is an Associate Professor of Computer Science at the University of California, Santa Cruz, where he co-directs the Expressive Intelligence Studio, one of the world's largest technical research groups focused on games. He also directs the Playable Media group in UCSC's Digital Arts and New Media program. Noah's research areas include new models of storytelling in games, how games express ideas through play, and how games can help broaden understanding of the power of computation. He is the author of Expressive Processing: Digital Fictions, Computer Games, and Software Studies (2009) and, along with Nick Montfort, editor of the foundational volume The New Media Reader (2003).



 
PROGRAM SERIES: Software Studies Retrospective (Part 2) from Media, Culture, Communication on Vimeo.

Presentation by Matthew Fuller (Goldsmiths, University of London). Matthew Fuller is David Gee Reader in Digital Media at the Centre for Cultural Studies, Goldsmiths, University of London. He is the author of Behind the Blip: Essays on the Culture of Software (2003), Media Ecologies: Materialist Energies in Art and Technoculture (2005), and Evil Media (2012), as well as editor of Software Studies: A Lexicon (2008).



 
PROGRAM SERIES: Software Studies Retrospective (Part 3) from Media, Culture, Communication on Vimeo.

Response by Lev Manovich (The Graduate Center, CUNY), followed by participant Q&A. Lev Manovich is a Professor of Computer Science at The Graduate Center, City University of New York. Manovich has published a number of books, including Software Takes Command (2013), Soft Cinema: Navigating the Database (2005), and The Language of New Media (2001). He also directs the Software Studies Initiative, co-founded with Noah Wardrip-Fruin, which combines two research directions: 1) the study of software using approaches from the humanities and media studies; 2) the development of methods and new software tools for the exploration of large cultural datasets. The lab's latest project is Selfiecity.

SelfieSaoPaulo, a new project by Moritz Stefaner, Jay Chow and Lev Manovich for a media facade in São Paulo





SelfieSaoPaulo facade video from Lev Manovich on Vimeo.





SelfieSaoPaulo is a new project by Moritz Stefaner, Jay Chow and Lev Manovich created for the 2014 SP_Urban Festival.

The work will run every evening for the duration of the festival: June 9 to July 7, 2014. The media facade is located at the FIESP / SESI building, Alameda das Flores (Avenida Paulista 1313).


The project further develops the ideas from Selfiecity, published earlier in 2014. Created by a larger team headed by Stefaner and Manovich, Selfiecity investigates the style of self-portraits (selfies) in five cities across the world using a mix of theoretical, artistic, and quantitative methods. The project's presentation includes high-resolution visualizations showing patterns in 3,200 Instagram selfie photos. The interactive Selfiexploratory allows website visitors to explore the selfie database in real time, filtering the photos by geographic and demographic information as well as face characteristics analyzed by software.
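
The Selfiexploratory itself is a web application; purely as an illustration of the kind of faceted filtering it performs, here is a minimal sketch over selfie metadata. The column names and rows below are invented.

# Faceted filtering over selfie metadata, Selfiexploratory-style.
# Column names (city, gender, est_age, smile) are hypothetical.
import pandas as pd

selfies = pd.DataFrame([
    {"city": "Sao Paulo", "gender": "female", "est_age": 23, "smile": 0.9},
    {"city": "Sao Paulo", "gender": "male",   "est_age": 31, "smile": 0.2},
    {"city": "New York",  "gender": "female", "est_age": 27, "smile": 0.7},
])

subset = selfies[
    (selfies["city"] == "Sao Paulo")
    & (selfies["est_age"].between(20, 30))
    & (selfies["smile"] > 0.5)
]
print(subset)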

SelfieSaoPaulo is designed for the large media facade in the center of São Paulo. Individual São Paulo Instagram selfie photos are sorted by three characteristics and animated over time. One animation presents the photos organized by estimated age, another by gender, and the third by the degree of smile.

The animations show us the diversity of São Paulo's citizens and the variety of ways in which they present their self-portraits, including facial expressions, poses, colors, filters, body styles, etc. To let us better see the differences between the photos, all images are automatically aligned by their eye positions.
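
The project's own alignment code is not public; below is a minimal sketch of the standard approach, assuming eye coordinates already produced by face-analysis software: fit a similarity transform (rotation, uniform scale, translation) that maps the detected eye pair onto fixed target positions, then warp the photo with it. OpenCV, the coordinates, and the file name are all assumptions.

# Align a face photo by its eye positions: a similarity transform maps
# the detected eye pair onto fixed target coordinates in the output frame.
import cv2
import numpy as np

def eye_align_matrix(left, right, target_left=(60, 80), target_right=(140, 80)):
    src_v = np.subtract(right, left)
    dst_v = np.subtract(target_right, target_left)
    scale = np.linalg.norm(dst_v) / np.linalg.norm(src_v)
    angle = np.arctan2(src_v[1], src_v[0]) - np.arctan2(dst_v[1], dst_v[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    # translate so that the left eye lands exactly on its target
    tx = target_left[0] - (c * left[0] + s * left[1])
    ty = target_left[1] - (-s * left[0] + c * left[1])
    return np.float32([[c, s, tx], [-s, c, ty]])

photo = cv2.imread("selfie.jpg")                      # hypothetical input
matrix = eye_align_matrix(left=(72, 95), right=(131, 88))
aligned = cv2.warpAffine(photo, matrix, (200, 200))   # fixed output frame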

The project also reminds us that our spontaneous online actions become a source of behavioral and cognitive data used for commercial and surveillance purposes: improving search engine results, customizing recommendations, determining which images work best in online ads, and so on. In SelfieSaoPaulo, basic data extracted by face-analysis software is superimposed on each photo. Images become data, which in turn is used to structure the presentation of the images.

Science used to focus on nature, with the smartest people going to work in physics, chemistry, astronomy, and biology. Today, the social has become the new object of science, with hundreds of thousands of computer scientists, researchers, and companies mining and mapping data about our behaviors. In this way, humans have become the new "nature" for the sciences. The implications of this monumental shift are only beginning to unfold. Will we become the atoms in the "social physics" first dreamed of by the founder of sociology, Auguste Comte, in the middle of the 19th century? SelfieSaoPaulo of course can't answer this - but by contrasting the expressive, unique faces of the selfie subjects with the numbers software reduces them to, it makes the question itself visible.



EXHIBITION DATES AND LOCATION:

SelfieSaoPaulo will run every evening for the duration of the festival: June 9 to July 7, 2014. The media facade is located at the FIESP / SESI building, Alameda das Flores (Avenida Paulista 1313).

The work was commissioned by Tanya Toft (Ph.D. Fellow at the Institute of Arts and Cultural Studies, Copenhagen University, and Visiting Scholar, GSAPP, Columbia University). The 2014 media facade program is curated by Toft and Marília Pasculli.


LINKS:

SP_Urban Festival

SP_Urban festival - exhibited works

Selfiecity



our new Instagram visualization project opens at the National Taiwan Museum of Fine Arts



Taipei Phototime, a new work by Jay Chow and Lev Manovich, premiered as part of the exhibition Wonder of Fantasy at the National Taiwan Museum of Fine Arts (NTMOFA) on May 17, 2014.




Taipei Phototime continues our investigations into the expressive possibilities of big visual data. In Phototrails (2013), Nadav Hochman, Lev Manovich and Jay Chow compared 2.3 million Instagram photos from 13 global cities. Selfiecity (2014), a project created by members of our lab and collaborators from New York and Germany (including visualization designer Moritz Stefaner), investigates the styles of Instagram self-portraits (selfies) in five cities.

Manovich uses the term "aggregate documentary" to describe large collections of social media images: "The photo-universe created by hundreds of millions of people might be considered a mega-documentary, without a script or director - but this documentary's scale requires computational tools—databases, search engines, visualization—in order to be 'watched.'" (Lev Manovich, Watching the World, Aperture magazine #214, Spring 2014.)

While Phototrails created an "aggregate documentary" from Instagram images downloaded by the team over a few months, Taipei Phototime captures and displays new Instagram images in real time. Two streams of images, one from Taipei and one from New York, are continuously updated on the screen as users in these cities share new images.

The display alternates between showing the actual photos and their abstraction: color squares representing the images by their average hue. This abstracted representation allows viewers to compare the temporal patterns and visual rhythms of Taipei and New York. As images fill the screen, differences in color palettes, contrast, and other dimensions between the two cities become apparent.
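
As a minimal sketch of the abstraction step (not the installation's actual code), the snippet below reduces a photo to its average color, for the stand-in square, or to its average hue. The file path is hypothetical; note that hue is a circular quantity, so a plain mean is only a rough approximation.

# Reduce a photo to its average color or average hue.
import numpy as np
from PIL import Image

def average_color(path):
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    return tuple(int(c) for c in pixels.reshape(-1, 3).mean(axis=0))

def average_hue(path):
    hsv = np.asarray(Image.open(path).convert("HSV"))
    return int(hsv[..., 0].mean())        # 0-255 in Pillow's HSV

square = Image.new("RGB", (40, 40), average_color("photo.jpg"))  # hypothetical file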


Taipei Phototime screenshots (from live stream).

Images from Taipei (top) and New York (bottom) start streaming, represented only as squares showing their average colors.


More images continue to fill the screen

The display alternates every 15 seconds between the color squares and the actual Instagram images.


The representation of city life and its patterns was an important subject of modernist art. From Pissarro's Le Boulevard de Montmartre (1897) and Berlin: Symphony of a Metropolis (Walter Ruttmann, 1927) to Mondrian's Broadway Boogie-Woogie (1943) and Play Time (Jacques Tati, 1967), artists experimented with ways to show the modern city's rhythms. In these and other works, the city is presented as a stage for social life, an energizing elixir, or a gigantic machine that subsumes people in its mechanical work.

If streams of Instagram images give us a new representation of the early 21st-century metropolis, what kind of city do they portray? Can a representation composed from the contributions of millions of people show us the city more objectively than individual paintings, photos, and documentaries by professional artists and filmmakers? Taipei Phototime displays the images organized by their upload date and time, without imposing any ideas of its own. However, according to Manovich, just as other visual media before it, visualization is not a neutral vehicle:

"Our visualizations of human habits rendered through Instagram photographs do not reflect a single directorial point of view, but this does not make them entirely objective. Just as a photographer decides on framing and perspective, we make formal decisions about how to map images, organizing them by upload dates, average color, brightness, and so on. But by rendering the same set of images in multiple ways, we remind viewers that no single visualization offers a transparent interpretation, just as no single traditional documentary image could be considered neutral." (Manovich, "Watching the World.")


Wonder of Fantasy runs from 05/17/2014 to 08/03/2014. Our work will be running continuously during this time, displaying real-time streams of Instagram images from Taipei and New York.


Lev Manovich and Jay Chow in a visualization lab at Calit2.

Jay Chow's lecture at the National Taiwan Museum of Fine Arts.

Visualization of 5,000 Instagram images shared in Taipei, part of the initial data exploration by Jay Chow for Taipei Phototime.



Our lab is awarded a Twitter Data Grant



(Twitter Data Grant team members in front of a visualization from their previous project selfiecity.net.)



We are among the six international teams awarded Twitter Data Grants:

Twitter #DataGrants selections



Project proposal title:

Do happy people take happy images? Measuring the happiness of cities from tweeted images


Abstract:

Can visual characteristics of images shared on social media tell us about the “moods” of cities? We propose to study the relationship between features of tweeted images in a number of U.S. cities and existing measures of "happiness" estimated using traditional surveys and other data sources (such as health and well-being statistics).
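
As a toy illustration of the proposed analysis (not actual results), the sketch below correlates a per-city image feature with an external happiness measure. scipy is assumed, and all numbers are invented placeholders.

# Correlate a visual feature of tweeted images (mean brightness per city)
# with an external happiness score. All values are invented placeholders.
from scipy.stats import pearsonr

mean_brightness = [0.52, 0.47, 0.61, 0.55, 0.43]   # per-city image feature
happiness_score = [6.9, 6.1, 7.4, 7.0, 5.8]        # e.g. a survey-based index

r, p = pearsonr(mean_brightness, happiness_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")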


For further information about this project, see the following:

Calit2 news release, The Happiness of Cities: Do Happy People Take Happy Images?

San Diego Union-Tribune, Do happy people take happy photos?



A Window, a Message, or a Medium? Learning from Instagram





My upcoming talk at International Conference on Mobile and Social Media Practices (organized by Dr. Tristan Thielmann).

June 19-21, 2014
The University of Siegen
Germany


Title:

A Window, a Message, or a Medium? Learning from Instagram


Abstract:

Over the last few years, tens of thousands of researchers in social computing and the computational social sciences have started to use data from social networks and media-sharing services (such as Twitter, Foursquare, and Instagram) created by users of mobile platforms. This research uses techniques from statistics, machine learning, and visualization, among others, to analyze all kinds of patterns contained in this data and also (less frequently) to propose new models for understanding the social. Examples include the analysis of information propagation on Twitter, predicting the popularity of photos on Flickr, proposing new sets of city neighborhoods using Foursquare users' check-ins, and understanding connections between musical genres using listening data from Echonest.

In my talk I will address a fundamental question we face in doing this research: what exactly are we learning when analyzing social media data? Is it a window into real-world social and cultural behaviors, a reflection of the lifestyles of the particular demographics who use mobile platforms and particular network services, or only an artifact of mobile apps? In other words, is social media a "message" or a "medium"?

I will discuss this question using three recent projects from my lab (softwarestudies.com). The projects use large sets of Instagram images and accompanying data together with data science and visualization tools. Phototrails.net (2013) analyzes 2.3 million photos from 13 global cities to investigate how different kinds of events are represented in these photos. The project also investigates whether the universal affordances of the Instagram app (the same interface and the same set of filters available to all users) result in a universal digital visual language. Selfiecity.net (2014) analyzes a distinctive artifact of mobile platforms: selfies. We compare thousands of selfies to see if the cultural specificity of different places is preserved in this genre. Finally, our third project compares Instagram photos taken by visitors in a few major modern art museums, asking if photographs of famous works of art differ depending on what these artworks are and where they are situated.

Call for papers: Cultural Analytics, Information Aesthetics, and Distant Readings - workshop with Lev Manovich and Frieder Nake (July 4-5, 2014, Germany)



Max Bense during his lecture at the University of Stuttgart.



Call for Papers:

Cultural Analytics, Information Aesthetics, and Distant Readings: Workshop with Lev Manovich and Frieder Nake



4-5th July 2014

Institute for Advanced Study on Media Cultures of Computer Simulation (mecs), Leuphana University, Lüneburg, Germany

Organized by Martin Warnke, Anneke Janssen & Isabell Schrickel

Workshop call: http://mecs.leuphana.com/aktuelles/workshop-cultural-analytics/


What can we learn from Information Aesthetics to understand today's conditions and potentials of media analytics? What could Max Bense's mathematical philosophy of critical rationalism tell us about today's objective reign of information and algorithms? Are there affiliations between the filigree vector graphics of the sixties and seventies and the exuberant image aggregates after the iconic turn? And how might methods for distant readings of abundant piles of pictures connect with very close investigations of their details? What would ultimately be a simulation of the art historian's gaze by means of digital computers? Could art history even become a branch of computer science? And how will aesthetic questions be answered in the age of Big Data?

Cultural Analytics as proposed by Lev Manovich is a contemporary attempt to address such questions. Departing from the problem that digital image media have brought about in the last decades – the impossibility of viewing all, or even a significant fraction, of the images that circulate on the net – Cultural Analytics aims to offer methodologies for dealing with this torrent of images by creating visualizations, and thus even more images, extracting chrominance, size, creation date, information, redundancy, etc.
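
As a small, hypothetical illustration of this kind of feature extraction, and of its link back to Information Aesthetics, the sketch below computes a few per-image descriptors, including Shannon entropy, the "information" measure central to Bense's aesthetics. Pillow is assumed, and the image folder is invented.

# Per-image descriptors of the kind used in Cultural Analytics
# visualizations, plus Shannon entropy of the gray-level histogram.
import glob
import numpy as np
from PIL import Image

def describe(path):
    img = Image.open(path)
    gray = np.asarray(img.convert("L"))
    hist, _ = np.histogram(gray, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))    # "information" of the image
    hsv = np.asarray(img.convert("HSV"), dtype=float)
    return {
        "size": img.size,                       # (width, height)
        "mean_hue": hsv[..., 0].mean(),         # a rough chrominance proxy
        "entropy_bits": entropy,
    }

features = [describe(p) for p in glob.glob("images/*.jpg")]   # hypothetical folder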

A similarity to the historical efforts of Information Aesthetics is obvious: analyzing and generating images with algorithms using various image properties, including complexity, redundancy, and entropy. There are also important differences: in the sixties, when Information Aesthetics dealt with images, the number of images to be analyzed and/or generated was relatively small, due to technological limitations. Now we are confronted with huge numbers of images, and are also able to produce floods of them algorithmically. Cultural Analytics is also concerned with the content of images (see selfiecity.net), and it uses visualization to explore patterns in image collections. During the process of datafication, images become data, and data becomes image.

We want to confront and compare Cultural Analytics with historical predecessors of negotiating the relation between data and images, between facts and imagination, between immersion into singular images and abstraction into visualizations.


We invite scholars from all relevant fields to submit abstracts of no more than 300 words (for a talk of max. 30 minutes), together with a short CV (up to two pages), before 22nd April 2014 to mecs@leuphana.de. A publication is intended. Acceptance notifications will be sent out on May 12th, 2014.


mecs is the Institute for Advanced Study on Media Cultures of Computer Simulation, funded by the German Research Foundation (DFG), at Leuphana University Lüneburg.

http://mecs.leuphana.de


More about Max Bense and Information Aesthetics:

http://dada.compart-bremen.de/item/agent/209

http://monoskop.org/Max_Bense#Information_aesthetics

Christoph Klütsch, Information Aesthetics and the Stuttgart School, in Mainframe Experimentalism: Early Computing and the Foundation of the Digital Arts, eds. Hannah Higgins and Douglas Kahn, University of California Press, 2012, pp 65-89. (in English)

Elisabeth Walther, Max Bense's Informational and Semiotical Aesthetics, September 2000. (in English)




Cover for publication:
Max Bense, Abraham A. Moles. bit international 1 – the theory of information and the new aesthetics. Zagreb, 1968.
Source and context: http://www.neuegalerie-archiv.at/07/bit/konzept.html



More about Frieder Nake:

http://en.wikipedia.org/wiki/Frieder_Nake

Thomas Dreher. History of Computer Art, chap. II.2.2, "Digital Computer Graphics." Published online, 2011-. Complete book in German and translated chapters in English: http://iasl.uni-muenchen.de/links/GCA_Indexe.html



Frieder Nake. Walk-Through-Raster, series 7.1, 1966, plotter drawing in four colours (Nake: Ästhetik 1974, p.237, ill. 5.5-6).
Source: http://iasl.uni-muenchen.de/links/GCA-III.2e.html#Computergrafik



More about Cultural Analytics:

http://lab.softwarestudies.com/p/cultural-analytics.html

http://lab.softwarestudies.com/p/publications.html

Lev Manovich, Watching the World, Aperture, 2014.



cultural analytics - recent projects:

selfiecity.net (2014)

phototrails.net (2013)

Visualizing Vertov (2013)



