
From Hyperchoreography to Kinaesthediting (5)

Kinaesthediting

Conceptually, I propose to reframe the context of this work in terms of a 'meshwork' (Ingold 2007: 100) of video dance, as opposed to a network of video clips. The word 'network' in its current usage implies a series of linked static nodes, whereas a 'meshwork' is an entanglement of interwoven lines (Ingold 2007: 103). This idea of a meshwork allows us to rethink how we move through the work dynamically, as 'a flowing line proceeding through a succession of places', rather than leap-frogging from one connection to another with each destination as a sole focus (Ingold 2007: 101).

The line of research I am currently pursuing is the creation of a 'Kinaesthediting' interface that allows editing to take place in a fluid and dynamic environment, where rapid selection of video material is achieved via a system of user tags and a dynamic video store database.

As with Jumpcut.com, once the video dance clips are uploaded to a clip store video server and database, they are given user-defined text tags according to their content. These could be literal descriptions of the dance or the video frame, e.g. hand, fast, short, up, the name of a dancer, close up; or metaphoric scores, e.g. clutter, reason, core, winter. A user coming to the site will then be able to select from a list of available keywords to define the starting point of the video dance they are about to navigate. The application selects clips according to how well they match the chosen keywords and cues them ready for playing.
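
As a concrete illustration (the names and structures below are hypothetical, not the working system), this keyword-driven selection could be as simple as ranking clips by tag overlap:

```python
# A minimal sketch of keyword-based clip selection: each stored clip
# carries user-defined tags, and candidates are ranked by how many of the
# chosen keywords they match. CLIP_STORE and select_clips are illustrative
# names, not the actual Kinaestheditor code.

CLIP_STORE = {
    "clip_01": {"hand", "fast", "close up"},
    "clip_02": {"winter", "core", "wide"},
    "clip_03": {"hand", "clutter", "close up"},
    "clip_04": {"fast", "up", "wide"},
}

def select_clips(keywords, store=CLIP_STORE, limit=8):
    """Rank clips by keyword overlap and return the best matches to cue."""
    chosen = set(keywords)
    ranked = sorted(store, key=lambda c: len(store[c] & chosen), reverse=True)
    # Cue only clips that match at least one of the chosen keywords.
    return [c for c in ranked if store[c] & chosen][:limit]

print(select_clips(["hand", "close up"]))  # ['clip_01', 'clip_03']
```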

Fig. 5: mock-up of the 'Kinaestheditor' screen interface (images from Moment, McPherson 1999)

An initial selection of eight thumbnails appears arranged around a central screen. The user then mouse-clicks on a 'joystick-like' graphic and scrolls in the direction of the desired thumbnail to start the video dance. The selected clip starts playing in the central screen area, and a new thumbnail takes its place, drawn from the database. The user will always have eight clip choices to navigate to, as the application continually draws from the same pool of clips. If the user chooses not to select a thumbnail, the current clip loops. As with Matthew Gough's proposed system, the selection will adjust during the dynamic editing process to reflect shifting thematic biases towards particular material or interpreted intention, but also towards material that might edit together better according to cinematic conventions, e.g. avoiding cutting too many wide shots together.
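
A sketch of this slot-and-pool behaviour, under assumptions drawn from the description above (class names, the shot-type metadata, and the simple bias against cutting wide shot to wide shot are all illustrative):

```python
# Eight-slot thumbnail ring: selecting a slot plays that clip and refills
# the slot from the shared pool, with a simple bias against following one
# wide shot with another. Names and metadata are assumptions.

# Assumed shot-type metadata; in practice this would come from the tags.
SHOT_TYPE = {f"clip_{i:02d}": ["wide", "mid", "close"][i % 3] for i in range(20)}

class ThumbnailRing:
    def __init__(self, pool, slots=8):
        self.pool = list(pool)
        self.slots = [self.pool.pop(0) for _ in range(slots)]
        self.current = None  # no selection yet; once playing, a clip loops

    def _refill(self):
        """Prefer a clip that cuts well against the current one, e.g. avoid
        following a wide shot with another wide shot."""
        for i, clip in enumerate(self.pool):
            wide_on_wide = (self.current is not None
                            and SHOT_TYPE[clip] == "wide"
                            and SHOT_TYPE[self.current] == "wide")
            if not wide_on_wide:
                return self.pool.pop(i)
        return self.pool.pop(0)  # fall back if every candidate is wide

    def select(self, slot):
        """User scrolls towards a thumbnail: play it and refill that slot."""
        self.current = self.slots[slot]
        self.pool.append(self.current)  # clips cycle back into the same pool
        self.slots[slot] = self._refill()
        return self.current

ring = ThumbnailRing([f"clip_{i:02d}" for i in range(20)])
print(ring.select(3))  # 'clip_03' plays; slot 3 is refilled from the pool
```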

Additionally, there will be the ability to record a sequence for playback and to share edits for others to see by publishing them on the site. This will be achieved by creating 'edit decision lists' as recording takes place; the original video material is never changed. This already happens in a very basic way in our existing work 'The Truth:The Truth'. These edits can then be used to generate new meta-tags for the material they contain, e.g. 'Pete's close up hand edit', creating a nesting or clustering effect. Intertexts and folksonomies will develop as the content and the tag clouds grow.
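
A minimal sketch of how such a recording might be structured (assumed class and field names, not the actual implementation): an edit decision list is just an ordered set of references into the clip store, so the source media is never touched, and a published list can itself become a tag.

```python
from dataclasses import dataclass, field

@dataclass
class EDLEntry:
    clip_id: str      # reference into the clip store, not a copy
    in_point: float   # seconds into the source clip
    out_point: float

@dataclass
class EditDecisionList:
    title: str
    entries: list = field(default_factory=list)

    def record(self, clip_id, in_point, out_point):
        self.entries.append(EDLEntry(clip_id, in_point, out_point))

    def as_tags(self):
        """A published edit becomes a new meta-tag on the clips it used,
        giving the nesting/clustering effect described above."""
        return {e.clip_id: self.title for e in self.entries}

edl = EditDecisionList("Pete's close up hand edit")
edl.record("clip_01", 0.0, 4.2)
edl.record("clip_03", 1.5, 3.0)
print(edl.as_tags())
```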

In this situation one of Nelson's tenets becomes necessary: that you should be able to trace the original context of transdelivered/transcluded material, so that authorship is preserved. This is needed to sustain trust amongst content providers, and might in the future even provide income through transcopyright licensing and micro-payments (Nelson 2007).
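
One way this tenet might be honoured in an EDL-based system (a sketch under assumed field names, not Nelson's own scheme) is to keep a provenance record resolvable from every clip reference:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    source_url: str   # where the original clip lives (hypothetical URL below)
    author: str       # authorship is preserved, per Nelson's tenet
    licence: str      # e.g. a transcopyright licence identifier

# Every EDL entry would resolve to a record like this, so any published
# edit can always be traced back to its original context and author.
clip_provenance = Provenance(
    source_url="http://example.org/clips/clip_01",
    author="Katrina McPherson",
    licence="transcopyright",
)
print(clip_provenance.author)
```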

This is a significant step away from a truly transcluded/transdelivered, distributed-content system that could tap into the relevant meta-tags on sites like YouTube and Google Video, but it will, in my opinion, yield significantly better results aesthetically. Allowing a degree of curatorial control over the content has benefits for the end-user experience. Though there are good arguments for the self-policing approach, you only have to put the words 'dance' and 'video' into a search on most video sites to see that you will get a considerable amount of pop promos and adult content.

An additional future feature, which I have discussed with a colleague at the University of Dundee, is automated gesture/movement image analysis. This is very difficult to achieve but could nonetheless yield some interesting results. The benefit is that you can have two levels of tagging: a user-based subjective tag and an 'objective' computer-generated tag. This could act as an interesting filter that could be turned on or off as desired. It would also help in sifting content if a wider net were cast into other user-generated video content sites.
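
A sketch of how the two tag layers might combine (hypothetical names; the automated movement analysis itself, which is the hard part, is not shown):

```python
# Each clip carries both subjective user tags and 'objective' machine tags;
# the machine layer acts as a filter that can be toggled on or off.
USER_TAGS = {"clip_01": {"clutter", "hand"}, "clip_02": {"winter"}}
MACHINE_TAGS = {"clip_01": {"fast motion", "upper body"}, "clip_02": {"slow motion"}}

def tags_for(clip_id, use_machine_layer=True):
    tags = set(USER_TAGS.get(clip_id, set()))
    if use_machine_layer:
        tags |= MACHINE_TAGS.get(clip_id, set())
    return tags

print(tags_for("clip_01"))                           # both layers merged
print(tags_for("clip_01", use_machine_layer=False))  # subjective tags only
```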

YouTube.com has recently launched a new viewing tool called Warp that shows what is possible when tag clusters are used to generate linkages between video clips. It is an exciting development in navigational interfaces, but it also shows that you are at the mercy of unfiltered content. You still move slowly from one video clip to the next, and any sense of dynamism implied by the interface is lost once a selection is made.

Fig. 6: YouTube.com Warp interface


The kinaesthetic effect is critical in all of this; we are talking about engaging with dance, after all. I am certain that a slow editing process will not be engaging enough for many dance-focussed users. Will this new type of 'Kinaesthediting' interface produce more interesting results by forcing a more intuitive and responsive style of editing for this type of material? I think so. Though it may seem we are overly preoccupied with technical structures, what McPherson and I are ultimately interested in, as with the other Hyperchoreographic works, is this: what type of content can we make for this type of interface?

Simon Fildes

May 2008
