For the students, staff, faculty, and administrators of the University of California (UC), the six Regents meetings that take place throughout the year are crucially important. With 10 campuses across California, technology plays a key role in making these meetings accessible to as many of the UC’s stakeholders as possible. Live-streaming and posting the Regents deliberations to the web means the events can be accessed in dorm rooms and offices across our Golden State and anyone with a phone screen or a laptop can stay up-to-date with the high-level decisions affecting them. However, providing greater accessibility such as live captioning has been a logistical challenge due to the nature of the meetings and live-streams.
Oftentimes the meetings take place in two rooms simultaneously, and capturing the comments and feedback of 26 voting members and the various committees and subcommittees can be a formidable task.
Earlier this year, UC San Francisco’s Educational Technology Services production team partnered with UC Berkeley’s video production teams, Berkeley AV and Berkeley Video, to tackle the work. By collaborating on the livestream and providing audiovisual support for each other, the UCSF and UCB teams could ensure the Regents meetings reached their audience as effectively and efficiently as possible. The joint set-up for the meetings now involves two dozen microphones arranged around a circular table and three cameras panning between speakers. A director for both the video recording and the live-stream choreographs a team of professionals, including a projectionist and a sound engineer.
Along with setting the stage to get the best visual recording possible, the team has also worked to improve the recorded audio. In the past, the push-to-talk microphones posed a feedback problem that disrupted the audio and interfered with the quality of the transmission. Together, the UCSF and Berkeley AV/Video professionals worked to ensure clear audio quality and minimal background noise, providing a better remote experience.
The January 25th and 26th meetings offered a new technological challenge for the teams: live captioning. First proposed last year as a way to increase accessibility, this initiative aligns with the Office of the President’s Electronic Accessibility Commission recommendations to “support an information technology environment that was accessible to all, particularly individuals with disabilities.”
Bringing live captioned content to the YouTube live stream required the teams to evaluate the best use of resources to deliver the best experience for users. The team decided to partner with a third-party vendor, Aberdeen Broadcast Services, which specializes in live captioning. The in-room audio is transcribed in a process similar to that used by court stenographers. The text is saved as an SRT file, which in turn is embedded into the YouTube link, marrying the live stream with the live captions. A delay in the live stream video provides a small time buffer that keeps the spoken audio and captions synchronized. According to Berkeley Video’s Jon Schainker, the Regents were very receptive to the new technology when it was first pitched, and with a successful January meeting in their rearview, the multi-campus team is now preparing to offer live captioning at all future meetings.
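To illustrate the synchronization step described above, here is a minimal sketch of how SRT caption timestamps could be shifted to match a deliberately delayed stream. This is an illustrative assumption, not the teams' actual tooling: the delay value and the sample caption are hypothetical.

```python
import re
from datetime import timedelta

# SRT timestamps look like "00:00:01,500" (hours:minutes:seconds,milliseconds).
TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def _shift_timestamp(match, delay):
    """Shift one matched SRT timestamp forward by `delay` (a timedelta)."""
    h, m, s, ms = (int(g) for g in match.groups())
    t = timedelta(hours=h, minutes=m, seconds=s, milliseconds=ms) + delay
    total_ms = int(t.total_seconds() * 1000)
    h, rem = divmod(total_ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def shift_srt(srt_text, delay_seconds):
    """Return SRT text with every timestamp pushed forward by delay_seconds."""
    delay = timedelta(seconds=delay_seconds)
    return TS.sub(lambda m: _shift_timestamp(m, delay), srt_text)

# Hypothetical caption cue, shifted by a 2-second stream delay buffer:
example = "1\n00:00:01,500 --> 00:00:04,000\nThe meeting will come to order.\n"
print(shift_srt(example, 2.0))
# → 1
#   00:00:03,500 --> 00:00:06,000
#   The meeting will come to order.
```

In practice the buffer length would be tuned to the stenographers' transcription lag, so captions appear alongside the corresponding speech rather than trailing it.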
The Bay Area is a cradle of innovation and a relentless generator of new ideas and technology. As live captioning, 360˚ video, and virtual reality become increasingly commonplace, the Berkeley AV, Berkeley Video, and UCSF production teams are ready to confront the challenges ahead. The success of the recent Regents meeting proved the viability of live-captioning the webcasts, and we look forward to seeing the technology spread to course captures, graduation videos, and more.