Date: Tuesday, January 19, 2010
Location: Civic Suite 3
|10:45-11:15||Foundations of Open Media Software workshop summary|
Silvia Pfeiffer and FOMS participants
|11:30-11:45||Cool news on Video and Audio Accessibility for Ogg in HTML5|
Silvia Pfeiffer (lca2010_video_a11y.pdf?)
|11:45-12:15||Lightning Talks: Web video|
- Ralph Giles: Live Streaming Ogg
- Michael Dale: mwEmbed
- Jan Gerber: P2P vs HTML5 *mashup*
- Jeremy Apthorp: Games
- Shane Stephens: Multimedia in Wave
|14:30-14:50||Status of Blu-Ray playback on Linux|
|14:55-15:15||Articulate: Adding expression to LilyPond MIDI output|
- Timothy Terriberry: Thusnelda Update
- Jonathan Woithe: FFADO project update (lca2010_mmm_ffado_talk.pdf?)
|16:15-17:30||That AV syncing feeling|
Jonathan Woithe (lca2010_mmm_syncing_talk.pdf?)
An overview of Blu-Ray technology, the challenges of playing it back under Linux, and the state of current efforts to bring Blu-Ray playback to Linux.
Making multichannel digital audio-video recordings invariably requires the use of multiple recorders. Ideally these are all clocked from a single master clock, but unfortunately many "affordable" devices do not provide inputs to allow for this. Trying to synchronise independent recordings can be an exercise in frustration but there are several Linux-based approaches to ease the pain. After describing the fundamental synchronisation problem, this presentation will provide a box of tools that people can deploy in their own recording and production workflows to deal with the problem.
Software covered in this talk will include Cinelerra, Ardour, Audacity, dvgrab and a number of small utilities written by the author to glue it all together (audiosync, batchrec, channelsplit, trackedit; mostly found at http://www.physics.adelaide.edu.au/~jwoithe/).
This talk will be relevant to anyone with an interest in digital recording and editing under Linux. It will provide attendees with ideas and solutions based on many years of location A/V recording using unsynchronised digital recorders. While the talk will not be overly technical in nature, some prior knowledge of the digital sampling process will be useful.
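The core of the synchronisation problem described above — finding the offset between two unsynchronised recordings of the same event — can be sketched with a brute-force cross-correlation. This is only an illustrative Python sketch, not the author's audiosync utility; the function name and the toy signals are invented for the example.

```python
def best_lag(ref, rec, max_lag):
    """Return the lag (in samples) that best aligns rec with ref,
    found by brute-force cross-correlation over -max_lag..max_lag."""
    def corr(lag):
        # Correlate ref with rec advanced by `lag` samples,
        # summing only over the region where both signals overlap.
        return sum(ref[i] * rec[i + lag]
                   for i in range(len(ref))
                   if 0 <= i + lag < len(rec))
    return max(range(-max_lag, max_lag + 1), key=corr)

# Toy example: rec is the same signal as ref, delayed by 3 samples.
ref = [0.0, 0.0, 1.0, 3.0, 2.0, 1.0, 0.0, 0.0, 0.0, 0.0]
rec = [0.0] * 3 + ref[:-3]
print(best_lag(ref, rec, 5))   # -> 3
```

Real tools work on decoded audio at 48 kHz and use FFT-based correlation for speed, but the principle is the same: slide one track against the other and keep the offset with the strongest match.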
Video is accessible when every person, regardless of limitations in language comprehension, hearing, sight, or other senses, can follow what is happening in a video and navigate it. Video accessibility is fundamentally about providing textual and other supplementary information about the video, so that the information reaches the viewer through channels other than the eyes and ears.
Captions and subtitles are only one type of accessibility feature; there are also audio descriptions for the blind and many other related text representations. For years, people have been requesting a subtitle/caption solution for Ogg content. So far, the main approach has been to create a separate text file (e.g. an SRT file) and load it together with the video file into a media player that could then render the subtitles ("soft subs"). Now that Firefox supports Ogg Theora/Vorbis out of the box, an encapsulated solution is required ("hard subs").
Silvia is working with Xiph, Mozilla and the W3C on this and has made several proposals for extending HTML5 in a declarative way to include such features. She will report on the current status and show some cool new demos.
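The "soft subs" workflow above revolves around plain-text cue files such as SRT, which pair a time range with caption text. As a minimal sketch of what a player has to do with such a file, here is a small Python parser for the well-known SRT cue layout (the helper name and sample captions are invented for the example):

```python
import re

def parse_srt(text):
    """Parse SRT cues into (start_seconds, end_seconds, text) tuples."""
    def secs(ts):
        # SRT timestamps look like HH:MM:SS,mmm
        h, m, s_ms = ts.split(":")
        s, ms = s_ms.split(",")
        return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0
    cues = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        # lines[0] is the cue number, lines[1] the timing line.
        m = re.match(r"(\S+) --> (\S+)", lines[1])
        cues.append((secs(m.group(1)), secs(m.group(2)),
                     "\n".join(lines[2:])))
    return cues

srt = """1
00:00:01,000 --> 00:00:04,000
Hello, world!

2
00:00:05,500 --> 00:00:07,000
Second caption"""

print(parse_srt(srt))
# -> [(1.0, 4.0, 'Hello, world!'), (5.5, 7.0, 'Second caption')]
```

An encapsulated ("hard subs") solution moves cues like these into the Ogg container itself, so a single file carries both the media and its text tracks.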
LilyPond is primarily a means to produce beautifully typeset music scores; but it can also produce MIDI (musical instrument digital interface) output for 'proofhearing' the scores. Unfortunately, LilyPond's MIDI output is not very good: it obeys the notes and any explicit metronome and dynamic markings, but that's about it.
So I (Peter Chubb) wrote some Scheme code to interpret some of the more commonly used marks in a musical score. The idea was to rewrite the LilyPond input before LilyPond interpreted it so that, for example, slurs and phrasing were obeyed and trills were fully expanded.
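The actual Articulate code is Scheme embedded in LilyPond, but the rewriting idea — replacing an ornament mark with the notes it implies — can be illustrated in a few lines of Python. This is a hedged sketch only: the function name, tick values and pitch strings are invented for the example, and real trill expansion also has to respect key, accidentals and tempo.

```python
def expand_trill(note, upper, total_ticks, step_ticks):
    """Expand a trilled note into alternating (pitch, duration) events,
    as a MIDI-oriented rewrite of a 'tr' mark (illustrative only)."""
    events, pitch, remaining = [], note, total_ticks
    while remaining > 0:
        d = min(step_ticks, remaining)
        events.append((pitch, d))
        # Alternate between the main note and its upper neighbour.
        pitch = upper if pitch == note else note
        remaining -= d
    return events

# A crotchet trill (480 ticks) on c' alternating with d',
# played as demisemiquavers (60 ticks each):
print(expand_trill("c'", "d'", 480, 60))
# -> eight alternating 60-tick events starting on c'
```

LilyPond's plain MIDI output would play the trilled note as a single sustained pitch; a source-level rewrite like this is what makes the ornament audible in the 'proofhearing' output.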