I’ve recently been reading articles by Mitchell Whitelaw and Dominic Fee, both of which reference the term ‘flaneur’ in relation to different aspects of modern archives and information-seeking habits. I was intrigued by the term and decided to follow it up with some further research.
As I will describe, it proved to be a serendipitous find in more ways than one, and it articulated quite eloquently many things that I have often thought, but have never been able to express very well, about how I would like the internet to function in terms of design and visual layout.
‘The flaneur appears to have no goal; rather, experiencing city life is his primary aim. Without becoming fully part of it, he passes through squares and crowds making sense of the city. The growing disparity between the large city population and the individual makes it unlikely to meet personal acquaintances by chance. While city life becomes more accelerated, the flaneur keeps a leisurely pace, resisting the growing speed of emerging capitalism. The flaneur moves “through space and among the people with a viscosity that both enables and privileges vision. He explores the city following whatever cue, or indeed clue, that the streets offer as enticement to fascination.”’ (Dörk et al., 2011)
Another important aspect of the urban flaneur to note is a more critical side, one aware of “the accompanying social realities”. The flaneur is a somewhat contradictory character. They value and notice unseen or under-appreciated aspects of the city’s beauty, design, and populace, creating an ‘aestheticisation of everyday life’. At the same time, they are a cultural critic, resisting commercialisation and acceleration by taking time and ‘walking out of step’.
The connections to the contemporary online ‘information seeker’ are easy to see. Digital information spaces are in many ways today what major urban cities were in the flaneur’s time, functioning as the main cultural platform of exchange and interaction for a large percentage of the world’s population.
Dörk suggests that the urban flaneur is thus an excellent ‘lens through which to envision new perspectives on information seeking‘.
There are many interesting connections and suggestions that come from looking through the lens of the urban flaneur, feeding into the definition/portrait of a modern information flaneur or information seeker. Some of the ones that I found most interesting and relevant to my research into visual arts archives were the terms ‘visual foraging‘ and the importance of serendipity in the search/research process.
For many visual artists, designers, and illustrators today (especially younger and emerging ones), the research process begins with an image search, or an image gathering across online and offline platforms, amassing elements, styles, and compositions from a variety of sources into a collection of items, images, objects, and artefacts. Aspects of this visual collection are then viewed through the lens of personal thematic or conceptual concerns, and help develop starting points for the creation of authentic new work by the artist.
This process is very much a ‘visual foraging’ of the areas of interest to the artist, guided in no small part by a serendipitous approach and outlook. It’s similar to how one might browse a shop without knowing exactly what one is looking for, but usually having a broader area of interest, like ‘winter-wear’, or something slightly more specific, like ‘something for that sunny patch at the back of the garden’. The discovery of a new visual of interest often pivots the artist in a slightly new direction, creating a meandering pathway made up of smaller ‘steps’ with varying degrees of ‘pivot’ in between them.
A nice visual for this in the article is the idea of the information flaneur ‘bumping into information‘.
The problem is that the visual frameworks of many websites, archives, and databases do not encourage this type of ‘visual foraging’ and discovery.
Explicit search techniques, or “interfaces centred around keyword search and filtering”, are identified in the article as impeding these kinds of serendipitous information encounters. Examples of designing for serendipitous information encounters include visual information surrogates (an image representing a work of art, or ‘parent image’) instead of text abstracts, and faceted navigation. The article references a comparative usability study of 32 art students, which found much higher levels of confidence, satisfaction, and recall with a faceted interface than with a search-based one.
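To make the contrast concrete, here is a minimal sketch of how faceted navigation works under the hood: instead of demanding a keyword up front, the interface shows counts for each facet value and lets the browser narrow step by step. The records and facet names here are invented for illustration; a real archive would draw them from its catalogue metadata.

```python
# A toy image collection with a few metadata facets (all values invented).
artworks = [
    {"title": "Harbour Study", "medium": "oil", "period": "1920s"},
    {"title": "Street Corner", "medium": "photograph", "period": "1950s"},
    {"title": "Garden Wall", "medium": "oil", "period": "1950s"},
    {"title": "Night Market", "medium": "photograph", "period": "1920s"},
]

def facet_counts(records, facet):
    """Count how many records fall under each value of a facet."""
    counts = {}
    for record in records:
        value = record[facet]
        counts[value] = counts.get(value, 0) + 1
    return counts

def narrow(records, **selected):
    """Keep only records matching every selected facet value."""
    return [r for r in records if all(r[f] == v for f, v in selected.items())]

# The browser sees an overview first, then narrows -- no keyword needed.
print(facet_counts(artworks, "medium"))   # {'oil': 2, 'photograph': 2}
oils = narrow(artworks, medium="oil")
print([r["title"] for r in oils])         # ['Harbour Study', 'Garden Wall']
```

Each narrowing step still shows what else is nearby (the remaining facet counts), which is exactly the ‘bumping into information’ quality that a blank search box lacks.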
The article finally introduces ‘explorability‘ as “an umbrella goal that integrates principles that have evidence for supporting the information flaneur in cultivating her curiosity, reflection, and imagination.” I would change this goal to ‘visual explorability‘ in the context of my research into visual arts archives, as I think that this principle of visual explorability has the most potential in defining the type of service user I am most looking to create design and functionality for in an arts archive.
Overall I have been inspired by the human-centred approach to knowledge seeking that is described in the Dörk article, and I like to think that my own experience in carrying out research for this master’s programme follows the model of the ‘information flaneur’. One of the most exciting aspects of this approach is that you never know what new knowledge is around the corner that you will ‘bump’ into.
This post represents my last one of the semester and the year…looking forward to ‘bumping’ into new things next year!
The immortal words of the American television artist/painter Bob Ross are called to mind:
Remember friends, there’s no such thing as mistakes; just happy little accidents!
The following is an excerpt from a comment of mine in an online group discussion through the MA Digital Cultures in UCC around the topic of crowdsourcing and crowdfunding. I’m interested in relating this to another topic that was recently raised in the course regarding Generative Artwork.
Is crowdsourcing to become an alternative in the future to public or state funding?
If so, does this force projects, artists, groups, etc. to have a popular spin or edge to their work/research?
In recent years Arts funding has been cut drastically across the country, and artists, groups, etc. have in some cases been told by local Arts Offices and arts funding bodies to look to FUND:IT and other crowdfunding sites as a viable alternative for funding.
This might work fine if you’re a budding photographer who creates beautiful landscape images, but will it support a less popular or ‘saleable’ product, like an artist looking to create a body of work about an abstract art movement at the turn of the century and how it has impacted current research into… and so on and so forth?
The very nature of successful crowdsourcing is that it be popular, and as businesses, governments and other funding bodies take advantage of ‘free contributions’, will they continue to sponsor and support projects, groups, research etc. that are not ‘popular’?
From online post in Canvas by Conall Cary 23 Sep 2019
While crowdfunding and Generative artwork are not necessarily directly related topics, the common thread between the two is my worry over potential misuses and abuses of their platforms in the future.
Generative Art is defined by Wikipedia as follows:
Generative art refers to art that in whole or in part has been created with the use of an autonomous system. An autonomous system in this context is generally one that is non-human and can independently determine features of an artwork that would otherwise require decisions made directly by the artist.
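As a toy illustration of what ‘an autonomous system determining features’ can mean, a few lines of code can stand in for the system: a seeded random process decides compositional features (palette, number of marks, their positions) that an artist would otherwise choose by hand. Everything here is invented for illustration and has no connection to any real artwork or tool.

```python
import random

def generate_composition(seed):
    """A toy 'autonomous system': the program, not the artist,
    decides the palette, the number of marks, and where they go."""
    rng = random.Random(seed)  # deterministic per seed, so results are repeatable
    palette = rng.choice([["red", "ochre"], ["blue", "grey"], ["green", "violet"]])
    strokes = [
        {"x": rng.random(), "y": rng.random(), "colour": rng.choice(palette)}
        for _ in range(rng.randint(5, 12))  # the system decides how many marks
    ]
    return {"palette": palette, "strokes": strokes}

work = generate_composition(seed=7)
# The same seed reproduces the same "artwork"; a new seed yields
# a new composition the artist never planned in detail.
```

The artist’s role shifts from choosing each mark to designing the system and selecting among its outputs, which is the distinction the definition above is drawing.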
I remember my first real personal experience of a Generative Artwork: encountering my friend Richard Forrest’s self-made drawing machines creating quite beautifully distorted portraits in coloured pen on perspex.
This is a fantastic example of an artist using machine automation as a tool to create a type of artwork that wouldn’t be possible without the automated process. The majority of Generative Artwork created by professional artists strikes me as having a similar sense of uniqueness: an artwork that crosses disciplines and combines methodologies, and can even act as a fantastic learning aid, as seen in the ArtMovement site designed by my friend, the artist Dominic Fee.
What becomes different is when someone else takes Richard’s machine, and makes work in his unique style for the purpose of resale or their own artistic identity. In this instance I’m sure artistic and intellectual copyright would protect Richard, but when it comes to algorithms that mimic artistic styles, there is currently not much in place to easily protect artists from being taken advantage of.
For instance, in the future I can imagine there being very little difficulty in creating a programme that can scan a variety of, say, illustrative styles and combine the most interesting formal and conceptual elements within them to then generate its own images responding to specific briefs or commands. It would be difficult to say that it had ripped off any one particular style, and as such it would be more difficult to legislate against.
If this programme was then used in a professional job, where a tender was put out for an illustration to a group of illustrators, it would be impossible for the human illustrators to compete in terms of time and cost with the speed of the automated system.
This is only one example, and in fact you probably could bring a lawsuit against someone who ran that programme for that purpose, but I can easily imagine that before more explicit rules are put in place surrounding the usage of certain categories of Generative Art systems, there might very well be a period of abuse and exploitation of artistic and intellectual copyright.
The other purpose of this post was to possibly make use of the ‘comments’ feature in WordPress, so I would love to hear back from people on any thoughts or ideas surrounding the issues of either crowdsourcing, Generative Arts, or anything else that may have been sparked!
(For viewing from PC or tablet: the ‘Leave a Comment’ link is on the left-hand side near the post header.)
For this post I’ll be looking at the article ‘How We Read: Close, Hyper, Machine’ from 2010 by N. Katherine Hayles. In the article she looks at how we read text on a digital platform, and the ways in which we can exploit the tools of our digital environment to improve engagement with text and re-imagine a new definition for reading today.
Hayles examines and explains the concept of ‘close reading’, which would have been my association with reading of a literary text in school. Close reading refers to the process of critically analysing a text, where we try to notice patterns or aspects of the work that can uncover a deeper meaning or understanding of the text.
This method of ‘close reading’ is then juxtaposed with forms of digital reading such as ‘hyperreading’, defined as “reader-directed, screen-based, computer-assisted reading”. Hyperreading refers to the ways we explore text online, marked by a certain quickness and immediacy, where relevant information can be found and assimilated from a variety of sources and media, and where the more time-consuming, critical process of ‘close reading’ is often not applied. ‘Machine reading’ is also explored, referring to the myriad ways in which computer software can analyse and read text. Today we obviously have a multitude of apps at our fingertips that offer advanced versions of many of the software programs referenced in the article.
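At its simplest, ‘machine reading’ means software scanning a text for surface patterns that no human close-reader would count by hand. The sketch below shows perhaps the most basic version, a word-frequency count; tools like Voyant or TAPoR perform far richer analyses, but they build on operations of this kind. The sample passage is my own, written for illustration.

```python
import re
from collections import Counter

def word_frequencies(text, top=3):
    """Return the most frequent words in a text -- a minimal 'machine reading'."""
    words = re.findall(r"[a-z']+", text.lower())  # lowercase, strip punctuation
    return Counter(words).most_common(top)

passage = (
    "The flaneur explores the city; the city, in turn, "
    "offers the flaneur its clues and enticements."
)
print(word_frequencies(passage))
# [('the', 4), ('flaneur', 2), ('city', 2)]
```

Even this trivial count surfaces a pattern (the pairing of ‘flaneur’ and ‘city’) without anyone reading a sentence, which is precisely what makes machine reading a complement to, rather than a replacement for, close reading.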
The pros and cons of each of these reading practices are looked at, and the need to form a holistic fusion of the collective benefits of close, hyper-, and machine reading methods is put forward.
I think that in certain environments, such as colleges and universities, we have definitely started to create this new way of reading and analysing text, where students have access to a host of digital tools and are able to take advantage of a combination of the strengths that each reading practice offers, not only helping with understanding, but also opening up new ways of engagement, and making life a bit easier! (I’m thinking of my own exposure to programs like Voyant, Zotero, hypothes.is, TAPoR, Project Gutenberg, the Merriam-Webster app, and many more.)
I’m not so sure how much has changed in the school environment, though; possibly more traditional methods of ‘close reading’ literature aren’t combined with digital methods of reading to the extent, or with the same ease, as they are in the college or university setting. This could obviously be down to many economic and social factors, with many students lacking access to a computer, tablet, or even the internet.
I think an important thing in relation to the article though is that I don’t feel that we need to argue the importance of what Hayles is advocating for, and her call for action in this area has for the most part been heard. A multi-media approach to reading and learning is the goal for most educational institutions and bodies in most parts of the world that have the resources to pursue it.
There is a quote in the article from Maryanne Wolfe that eloquently sums up this goal:
“We must teach our children to be bitextual or multitextual, able to read and analyze texts flexibly in different ways…Teaching children to uncover the invisible world that resides in written words needs to be both explicit and part of a dialogue between learner and teacher, if we are to promote the processes that lead to fully formed expert reading in our citizenry”
Maryanne Wolfe, Proust and the Squid: The Story and Science of the Reading Brain
There were also some fun and interesting projects discussed as examples of new approaches to reading and examining traditional texts, such as the dissection of Patchwork Girl, a piece of electronic hypertext fiction written in Storyspace by author Shelley Jackson.
Another really interesting project was a student project called “Romeo and Juliet: A Facebook Tragedy”, which you can read more about at the link here. The students adapted the Shakespeare play to a Facebook model, creating maps such as the social-network Friend Wheel below.
It’s also great that the content for the Romeo and Juliet Facebook project is still accessible online; it highlights the importance of accessible data, which we talk so much about in the course.
Also worth a look is the TAPoR website, or Text Analysis Portal for Research, which has a very interesting and engaging way of displaying its content, and is really fun to explore:
In conclusion, the article by Hayles was an interesting examination of the ways in which we read and analyse text, both in print and online. Much of what she is referencing in the digital side of things has moved on quite a bit in the years since the article was written, but the fundamental aspects of close, hyper-, and machine reading practices as described are still relevant and important to understand. It allows us to be aware of how we can better use these practices collectively, and foster exciting possibilities for new ways of reading and learning.
The book was written in 2001, and yet nearly two decades later many of the issues raised and points made are still relevant.
The book begins with a sobering ultimatum: either we do something now to protect the freedom of ‘creativity and innovation’ that the Internet promised at its inception, or we succumb to the forces of control and regulation that are seeking to impose themselves upon it.
Lessig states that the Internet’s original design facilitated, encouraged, and fostered the growth of ‘new’ ways of learning, making, and creating, and that this structure of newness and possibility is being taken control of by the ‘old’ ways of making and creating, whereby content will be packaged, controlled, and sold to us, the consumers of the future, through the same methods of distributing goods that have always been used.
He argues that we live in a time when the idea that private entities can control and direct just about everything around us is accepted and taken for granted as a fact. So when controls and restrictions are placed on the ways in which the internet operates and is allowed to be used, we don’t even register what has been lost.
“A TIME is marked not so much by ideas that are argued about as by ideas that are taken for granted…In these times, the hardest task for social or political activists is to find a way to get people to wonder again about what we all believe is true. The challenge is to sow doubt.” (p. 19)
This statement is incredibly relevant today, as we have been pushed so far into our respective like-minded turtle shells that often the hardest thing is indeed to convince someone to come out of it.
I am a relative newcomer to this debate, having spent most of my life in my own turtle shell of truth, which more often than not meant deciding to ignore these issues as something that didn’t matter if I was involved; the forces at play were so beyond me that there didn’t seem to be much point in even having an opinion.
But that is just the point that Lessig is making; these issues affect us all, and governments and corporations will continue on as they have before unless we support an alternative vision.