Information Theory, Pattern Recognition, and Neural Networks: Recordings
What follows is a description of the workflow used to record David MacKay's lecture series
"Information Theory, Pattern Recognition, and Neural Networks".
To keep costs down, the video recording and editing was done entirely by volunteers.
We made a few mistakes, and learnt a few things along the way. It is hoped that other research groups who wish to record their own lecture series on a shoestring budget can benefit from these tips and tricks. Be warned, however, that the process we arrived at is probably a bit of a Heath Robinson contraption. So if you have any professional recording experience, it will probably make you shudder.
The lectures made use of the blackboard as well as slides and dynamic content displayed using a projector.
We wanted to make the lectures available for on-line streaming, preferably with synchronised slides, on videolectures.net or a similar site.
So, we needed to
- record the speaker while he was writing on the blackboard,
- record the projected slides and dynamic content displayed on the projector,
- ensure the recordings were synchronised,
- ensure that we could make slides of the writing on the blackboard, and
- compress and upload the video, and synchronise the slides.
How long does it take?
It took more time to produce the videos than we initially expected (it turns out there's a reason professional videographers charge so much money).
It took about 6-7 man hours to produce one hour of edited video footage and slides. You also need to make provision for 2-3 hours of waiting time for video format conversions and uploading videos.
|Activity|Hours|People|Total Hours|
|---|---|---|---|
|Equipment Check|1.0|1|1.0|
|Live Recording|1.0|3|3.0|
|Video Editing|0.5|1|0.5|
|Making Slides|1.0|1|1.0|
|Synchronising Slides|1.0|1|1.0|
|TOTAL|||6.5|
- Video editing: Editing in OpenShot to add a title page, remove long periods of silence, and splice video and audio clips.
- Making slides: We used about 60 slides for a one hour lecture. They consisted of about 15 projector slides and 45 photographs (or still frames from video) of the blackboard. This work included digitally enhancing blackboard slides and making a suitable selection of projector and blackboard slides.
- Synchronising slides: This step is specific to using the synchronisation tool of videolectures.net. It took about one hour to synchronise slides for one hour of footage. The tool provides rapid playback facilities and options to skip to specific key frames, which helps a great deal, but double checking and renaming slides adds to the amount of time required to do this task.
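Much of the slide bookkeeping can be scripted. As a rough sketch (assuming the slide images are ".png" files and that a zero-padded, chronological naming scheme suits the synchronisation tool - the "slide-" prefix is our own invention), the renaming step might look like:

```shell
#!/bin/sh
# Rename the selected slide images to a zero-padded sequence
# (slide-001.png, slide-002.png, ...) so they sort in
# chronological order. The "slide-" prefix is an assumption;
# adapt it to whatever your upload tool expects.
i=1
for f in *.png; do
    [ -e "$f" ] || continue            # skip if no .png files exist
    mv -- "$f" "$(printf 'slide-%03d.png' "$i")"
    i=$((i + 1))
done
```

This relies on the shell glob sorting alphabetically, so the source filenames (e.g. camera timestamps) must already sort in chronological order.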
While making a video recording of a lecture seems a straightforward task, there are an endless number of things
that can go wrong. As you'll likely be learning things as you go along, it's essential to do a realistic test run of your
whole workflow so that you can iron out any kinks before recording a lecture series. For example, things we needed
to work on were:
- Lighting was tricky to set up. We needed to tweak lighting in the lecture hall to avoid specular reflections
from the blackboard. You may need to use multiple recording devices to capture brighter and darker areas of the scene.
- Audio and video editing can take far longer than the initial recording. Not all formats are compatible with all
applications, and not all applications run on the same operating system. So it's worth spending some time on your
test run video to make sure you have a streamlined process.
- Video camera work:
- Try to minimise the camera movement when the speaker writes on the blackboard.
- Compressed video will make writing harder to read. So zoom in on writing when you can.
- Try to anticipate the speaker's next actions. For example, zoom out as soon as the speaker stops writing,
so that it's easier to track the speaker when they move to a different part of the blackboard.
- Live mixing of projector & camera footage:
- Use earphones to monitor the sound quality at the final destination (i.e. at the laptop
that records the data) at all times.
- Always let the main focus be the speaker, tracking whatever they do. Don't stay on a projector slide if
the speaker is writing on the blackboard (a user can pause a video to look at a projector image in more detail later,
but the same doesn't hold if you didn't record something on the blackboard).
- Equipment check:
- We shared equipment with other users. Every other week there would be some wonderful new problem to debug against the clock.
- We made sure we had about two hours ahead of each lecture to verify that everything was in good working order.
- Watch out for interference noise (e.g., ask the audience to switch off their cell phones).
- Control lighting beforehand as much as possible - a window letting in sudden bright light can saturate the blackboard so that the chalk becomes invisible, even with automatic gain control enabled.
- If you take snapshots of the blackboard, also take snapshots of the projector slides; these serve as time stamps when you later substitute the computer-generated slides.
- Use large fonts for slides and other projected media. The level of compression used by on-line video sites will make small fonts illegible.
- Technical problems:
- Problems can happen unpredictably during a recording. If you're lucky, you'll pick up on them during the recording session rather than afterwards,
when there may be nothing you can do about them.
For example, a microphone can easily become disconnected as the speaker moves, or recording software may crash. It's best to discuss beforehand and agree with the speaker how these
situations should be handled. In our case, we agreed to interrupt the lectures until the issue was resolved.
Equipment and Software
The equipment you need will change depending on how many things you wish to record. We recorded a projector and the speaker. We used a digital mixer to switch between these videos on the fly. We could have recorded the videos separately, but that would likely have required additional work to synchronise the video sources during off-line editing. The same two to three volunteers attended to the following tasks during the live recordings:
- One to use the video camera to follow the speaker and record his writing on the blackboard.
- One to monitor the laptop and to operate the video mixer (i.e. to choose whether to use video from the camera or from the projector).
- In some lectures, we had a third volunteer photograph the blackboard, so that we had high-resolution photographs to use as lecture slides.
This is the list of hardware we used:
- A video camera to record the speaker.
- A digital still camera to take photos of the speaker and blackboard.
- A small clip-on microphone for the speaker.
- A laptop to record all the data (installed with recording software that supports live feedback from all cameras and audio devices).
Any decent recording software program will allow you to set the recorded format, and to set the compression levels.
- An Edirol V-8 8-channel video mixer.
- The Cavendish Laboratory had previously acquired this equipment. It is fairly expensive, but investing
in such a mixer can significantly reduce the time required to edit the videos.
- It allows real-time switching between multiple cameras and input coming from the projector.
- You can apply several real-time image processing operations to the raw data before it is stored on the laptop.
- It synchronises all inputs (audio and video from possibly multiple cameras). It also has a nifty
"picture-in-picture" function that overlays a cropped video of the speaker on the
data from the projector.
- Two tripods to stabilise the two cameras.
- A set of headphones to monitor the audio levels at all times.
Tips when using a video mixer
- Make sure there is no border around the image (see the manual of the device for how to change this).
Since streaming websites like videolectures.net compress the videos and reduce their display resolution, it is important to make sure
all recorded images use as much of the display area as possible. It is therefore also important to ask the speaker to display any demonstrations in full-screen mode.
- Set the "flicker filter" to its maximum value.
- Make sure the refresh rate of the speaker's laptop and the projector monitor are set to the maximum possible value (must be > 50 Hz).
A low refresh rate can cause synchronisation problems, resulting in flickering.
- Invest in good quality cables, e.g., for the VGA cable connecting the
speaker's laptop and the projector. In some of the initial lectures the projector data "wobbled" due to a bad cable - this problem was difficult to find, and it took a lot of effort to even partially fix it during editing. The rule of thumb: use high-bandwidth, thick (well insulated) cables.
- Do not hesitate to call the customer support team if there is an issue with the video mixer that is difficult to resolve (we received excellent advice from them).
- The raw footage was captured in the following format:
- Video Codec: mpgv
- Video Resolution: 720x576
- Video Display Resolution: 720x576 (pixel aspect ratio 16:15, data aspect ratio 4:3)
- Video Sample Rate: 8300 kb/s
- Frame Rate: 25 fps
- Audio Codec: a52
- Audio Channels: Stereo
- Audio Sample Rate: 48000 Hz
- Audio Bitrate: 256 kb/s
- Note that the video bitrate (the "sample rate" above) can be much lower if the videos are intended to be viewed on a computer: the bitrate of the final "high-resolution" downloadable videos is 1514 kb/s on average.
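To see why the lower bitrate matters, here is a back-of-the-envelope size estimate for one hour of footage at the raw capture bitrate above (decimal units, ignoring container overhead):

```shell
#!/bin/sh
# Approximate file size for one hour of footage at the raw capture
# bitrate above (8300 kb/s video + 256 kb/s audio).
# kb/s * seconds = kb; divide by 8 for kilobytes, by 1000 for MB.
bitrate_kbps=$((8300 + 256))
seconds=3600
size_mb=$(( bitrate_kbps * seconds / 8 / 1000 ))
echo "about ${size_mb} MB per hour of raw footage"
```

That works out to nearly 4 GB per hour of raw footage, versus roughly 800 MB per hour at the 1514 kb/s download bitrate.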
We experimented with a number of commercial editing offerings, e.g. Final Cut Pro and Adobe After Effects (After Effects is more for compositing and effects, and it's not designed for video editing, but we had it to hand, so we evaluated it as well).
In the end, we found that for our simple needs, we could do all of the editing on the Linux platform using open source software only. In short, we used:
- Video & Audio: ffmpeg, OpenShot, Audacity
- Slides & Images: ImageMagick, GIMP
Below, I've listed the APT command lines for installing the software on Ubuntu.
ffmpeg
A library and command line tool for video editing.
ffmpeg was used to:
- Compress videos and convert to appropriate formats via bash scripting.
- Extract sound from videos to edit. In general, you don't need to do this unless you need to edit the audio separately:
ffmpeg -i video_in.mpg -vn -ac 2 -ar 48000 -ab 256k -f wav sound.wav
- Read video format information:
ffmpeg -i video.mp4
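The batch compression was driven by a small shell loop. A sketch of the idea (the codec names and bitrates here are assumptions, not our exact settings; the echo makes this a dry run - remove it to actually convert):

```shell
#!/bin/sh
# Convert every .mpg recording in the current directory to MP4.
# Dry run: each ffmpeg command is printed rather than executed.
for f in *.mpg; do
    [ -e "$f" ] || continue            # skip if no .mpg files exist
    out="${f%.mpg}.mp4"
    echo ffmpeg -i "$f" -c:v libx264 -b:v 1500k -c:a aac -b:a 128k "$out"
done
```

Printing the commands first lets you review them before committing hours of CPU time to the conversions.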
OpenShot
An application for linear video editing.
apt-get install openshot
- If you have trouble exporting videos to newer formats, try to get a newer version of OpenShot from their website.
apt-get install libx264
- This library is necessary to get H.264 compression for the MP4 format.
OpenShot was used to do most of the video editing. It is a simple 'linear' editor, but we
found that it matched our requirements better than leading commercial 'non-linear' editors.
If you need special effects or re-timing of footage, then it's best to buy commercial software and accept the time-consuming
video compression conversions (think hours, not minutes).
However, if you just want to splice snippets of video, audio and still images together, then OpenShot is great.
- It ran on our operating system of choice: Linux
- It supports real-time playback and real-time editing of video in a large number of compressed formats.
Most of the commercial applications only allow real-time playback for a very limited number of compressed formats.
- OpenShot supports a large variety of input formats, so it's likely that you won't need to convert your video
before you can start editing it.
Some simple editing actions that were performed:
- Trimming unwanted borders.
- Replacing frames with photos, whilst overlaying the speaker in a cropped video at the corner.
- Smooth fading between video channels (make use of the "effects" toolbox) after inserting
frames in separate channels (e.g., the title page and "study problem" slides).
- Exporting videos to the format required by videolectures.net.
For convenience, these videos can be downloaded here. All downloadable lecture videos are in the ".mp4" format required by videolectures.net (all the required fields were set in OpenShot before exporting the videos). We uploaded the ".mp4" videos to videolectures.net,
and they did the conversions to the low-bitrate ".flv" and ".wmv" formats.
There are a few things you need to watch out for when converting video formats:
- Watch out for inadvertently changing the frame rate (e.g. from 25fps to 29fps),
particularly if you're working with interlaced footage.
- Be careful when dealing with interlaced footage. You may need to deinterlace footage
depending on your desired output video format, but this will cause some loss of resolution in your video.
- Watch out for differences in pixel aspect ratio (PAR) and frame aspect ratio between video formats. Getting this wrong
can lead to unwanted stretching and/or cropping of the footage. It is useful to decide where the videos will be uploaded before recording the lecture series, so that one can experiment
with short test clips to iron out any problems.
Audacity
An application for editing audio files.
The version of OpenShot we used did not let one visualise the audio timeline. So we used Audacity to look at the audio waveform instead. Audacity was used to quickly identify long periods of silence (e.g. when the audience works out a problem), to remove noise from the recorded audio (after extracting the sound using ffmpeg), and to amplify parts of the audio. After processing the sound, you can import it as a sound clip in OpenShot or replace the audio of the original recording via ffmpeg.
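Putting the cleaned audio back does not require re-encoding the video. A sketch of the ffmpeg command (filenames are illustrative, and the audio settings simply mirror the capture format above; the echo makes this a dry run - remove it to execute):

```shell
#!/bin/sh
# Replace the audio track of the original recording with the cleaned
# track exported from Audacity, copying the video stream untouched.
# Dry run: the command is printed rather than executed.
echo ffmpeg -i video_in.mpg -i sound_clean.wav \
    -map 0:v -map 1:a -c:v copy -c:a ac3 -b:a 256k video_out.mpg
```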
- Audacity has a "noise removal" function, where a noise profile can be defined, after which that noise profile will be matched to the sound file
to remove similar noisy patterns.
- Several plug-ins can also be downloaded.
- Filtering operations (e.g., a notch filter) can be defined in the form of a script to apply to the sound.
- During lectures 11-13, the power supply of the recording laptop caused "ground noise" interference. The cause of the problem was very difficult to detect, so we had to remove the interference afterwards. This was done using Audacity's noise removal function together with some additional filters defined in Audacity.
ImageMagick
A library and set of command-line tools for image editing.
ImageMagick was used to process images related to the slides via bash scripting.
apt-get install imagemagick
- An example using mogrify to remove unwanted border pixels and resize all ".png" images in the current directory to 627x470 (note that mogrify modifies the files in place):
mogrify -shave 14x28 -gravity South -chop 0x1 -geometry 627x470 *.png
- An example to apply a sharpening filter to blackboard slides to make the writing more visible:
convert -sharpen 0x3 file_in.png file_out.png
- Combine all slides into a single PDF:
convert *.png file_out.pdf
- Fred's ImageMagick scripts contain a comprehensive set of bash scripts that combine
ImageMagick functions into more complicated image processing operations.
A morphology (dilation) example:
./morphology.sh -t dilate -m grayscale file_in.png file_out.png
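The shave, resize and sharpen steps above can be combined into a single pass per slide. A sketch (the parameters are copied from the examples above, but treat them as starting points for your own footage; the echo makes this a dry run - remove it to process the files):

```shell
#!/bin/sh
# Trim border pixels, resize, and sharpen each blackboard slide in
# one convert invocation, writing the results to enhanced/.
# Dry run: each command is printed rather than executed.
mkdir -p enhanced
for f in *.png; do
    [ -e "$f" ] || continue            # skip if no .png files exist
    echo convert "$f" -shave 14x28 -geometry 627x470 -sharpen 0x3 "enhanced/$f"
done
```

Writing to a separate directory keeps the original photographs untouched, which is worth doing - the blackboard images are the only record of what was written.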
GIMP
An application for image editing, similar to Photoshop.
We used GIMP for editing images (e.g. slides and video inserts).
Back to the videos.