November 13, 2020
Now that it’s been a bit over a week since VIS 2020 ended, I thought I’d write up some information on the streaming infrastructure we used during the conference. For context, IEEE VIS 2020, like all conferences this year and likely well into the next, was held as a virtual event. VIS 2020 was “hosted” by the University of Utah, as it was originally planned (pre-COVID) to be held in Salt Lake City. My advisor was one of the co-chairs and asked if I’d volunteer to be on the Technology Committee for VIS 2020. The role of this committee is to manage the technical aspects of streaming the event. The change to a virtual format brought a lot of challenges, especially when pivoting late in the planning cycle (past in-person events were typically over a year in the making). However, the virtual format also brought improvements in terms of accessibility, cost to attendees, environmental impact, and archiving.
This post will be one part technical documentation and one part reflection. The feedback we received for VIS 2020 was overwhelmingly positive, and thus I hope that both the technical documentation on how we ran the event and the reflection on what worked and didn’t are helpful to organizers planning virtual events through the next year.
Before we begin, I must of course mention that this was not a solo effort. Alex Bock and Martin Falk were also on the Tech committee and provided valuable advice about their experience running EGEV 2020 as a virtual event earlier this year, which was also well received. We followed the same model for VIS, which aims to keep the feeling of a live conference while reducing surface area for technical issues. I must also mention the amazing work done by Alper Sarikaya, Hendrik Strobelt, Jagoda Walny, and Steve Petruzza on the web committee setting up the virtual conference webpage. The webpage was adapted from mini-conf, originally written by Alexander Rush and Hendrik Strobelt. Alper has written up a blog post about this, so I won’t cover it here. Finally, during the event we had a rotation of about 24 student volunteers who were responsible for managing the streams and assisting presenters with technical issues, without whom the event would not have been possible.
When held in person, VIS is a medium to large-sized conference, averaging about 1200 attendees and running 7-8 parallel sessions each day through the week. Symposia, workshops, and tutorials are held Sunday and Monday, and during past in-person events have had pretty much free rein to structure their events as they see fit. VIS paper presentations make up the sessions on Tuesday through Friday, with some additional events taking place after hours, e.g., the VIS arts program and meetups, along with a keynote on Tuesday and a capstone on Friday. When considering how to bring this event to a virtual format, we looked to EGEV 2020, which is the most similar event to VIS and took place earlier in the year, and to other well-received virtual events (e.g., i3D and HPG, to name some that I attended this year).
A common goal in each event was to preserve the feeling of an in-person conference while reducing the surface area for technical issues interrupting presentations. At a high level, EGEV, i3D, and HPG followed the same structure: presenters provided pre-recorded talks, except in rare cases. Attendees of the conference watched the talks through a live streaming platform (YouTube, Twitch) and asked questions over a chat platform (Slack, Discord). After each talk was played, the presenter joined a Zoom call that was streamed out to YouTube/Twitch for the Q&A. The chair would pick questions from the chat and repeat them to the presenter on Zoom, who would then answer them. This structure provides benefits over having all the presentations done live, or over making the recorded talks available as on-demand videos with a separate live Q&A portion.
Another goal I had was to reduce a lot of the manual effort that was required in previous events to create the YouTube videos, Zoom meetings, Discord channels, etc. Since VIS isn’t a small event and we had time to prepare, I spent time working on various scripts to automate this process. These have developed into SuperMiniStream, and are available on GitHub, where they can hopefully be of use to others. If you’re interested in using SuperMiniStream to drive your conference, get in touch!
These observations led us to decide on the following technical platform for streaming VIS 2020:
These choices also come with some challenges of their own. Separating each session into its own YouTube video, Zoom call, and Discord channel means that one of each has to be created for every session; in total, VIS 2020 had approximately 110 sessions. Splitting the sessions up also means we need to start and stop the appropriate YouTube broadcasts during the day and migrate each computer’s stream to its next broadcast. Finally, our choice to synchronize the chat between YouTube and Discord required a YouTube API client with a high API quota, due to the high volume of traffic generated.
The core scenes in our OBS configuration are:
The key thing to note here is that we make heavy use of OBS’s support for text sources reading their content from files. When the content of a file changes, OBS updates the corresponding text shown in the video output. This functionality is used for all dynamic text shown on the stream: session title, session chair, session schedule, and the current presentation. To allow swapping out the content being shown as the conference progresses, these sources are configured to read from a C:\LIVE directory on each machine. You can download an example of our OBS configuration; extract it to C:\LIVE and import the JSON file into OBS to try it out. If you’re on Mac or Linux some of the sources here will break, as the text and window capture sources differ on the other OSes.
As the session progresses, the technician needs to update the current-presentation text file to match the current talk, so that it is shown correctly at the bottom of the Zoom Q&A, Zoom Direct/Panel, and Zoom Screenshare scenes. To make this easy for the technician, I used OBS’s “Chatlog Mode” for the current-presentation text input. In Chatlog Mode, OBS reads some configurable number of lines from the bottom of the file and displays them, e.g., as you would use when livestreaming a game and displaying chat from your subscribers on the video. However, we can also use this for the current presentation text. When compiling the assets for each session (Section 3.6), I output the list of presentations in reverse order to a text file. This file is fed in as the text input for the current presentation and read in Chatlog Mode with a line limit of 1. OBS then reads and displays the last line of the file, allowing the technician to simply delete the last line as the session progresses.
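Generating that reversed file is simple; here is a minimal sketch (the function and file names are illustrative, not the exact ones used in SuperMiniStream):

```python
import os
import tempfile

def write_presentation_file(presentations, path):
    """Write talk titles in reverse order so OBS's Chatlog Mode (line limit 1)
    shows the first talk; the technician deletes the last line to advance."""
    with open(path, "w", encoding="utf-8") as f:
        for title in reversed(presentations):
            f.write(title + "\n")

# The last line of the resulting file is the first talk of the session.
path = os.path.join(tempfile.gettempdir(), "current_presentation.txt")
write_presentation_file(["Talk A", "Talk B", "Talk C"], path)
```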
Finally, to make the various Zoom captures look nice I applied a color key filter on the Zoom window capture to remove the Zoom window background. This filter is set up in OBS by selecting the Zoom background color as the color to filter. To avoid filtering similar colors, e.g., if someone has dark hair or a dark shirt on, I configured the filter to have a similarity and smoothness of 1 (i.e., none). The result is that just the Zoom background color is made transparent, producing the nice-looking output shown in Figure 1c-e. To avoid showing the Zoom background in the Q&A scene we initially had a more complicated scene that used two crop filters to capture the two pinned videos in Zoom and position them nicely. However, this was trickier to set up for the technicians, who had to resize the Zoom window to try and line up with the crop filters. Using a color key gave better results and was easier for the technicians.
As mentioned before, VIS 2020 took place over about 110 sessions. Each session needs a unique YouTube broadcast, Zoom meeting, and Discord channel. Furthermore, the presenters in each session need to receive an email containing this information. Doing this all manually would be a significant effort and would likely introduce some errors or mix-ups in the process. However, YouTube, Zoom, and Discord all provide APIs that we can use to create everything we need. To send the emails out we can use Amazon SES.
To feed structured data into the scheduling script, and the scripts used later during the event, the entire conference schedule is stored in an Excel workbook in a database-like format (Section 3.1). This workbook is used by our scheduling script to create the YouTube broadcasts (Section 3.2), Zoom Meetings (Section 3.3), and Discord channels (Section 3.4) for each session, to email presenters the information about their sessions (Section 3.5), and to compile the assets needed by each streaming computer for a given session (Section 3.6).
The script that actually schedules everything is schedule_day.py, which handles the API interaction with YouTube, Zoom, and Discord. schedule_day.py also assigns a computer to run each stream based on availability, ensuring that no computer is double-booked and no session is without a computer to run it. Computers are identified by a single-letter ID and are also associated with a unique YouTube stream key.
The Excel workbook is structured in a “database-like” format to make it easy to parse from a Python script using openpyxl, essentially acting as a poor man’s database. You can see an example of how the sheet is formatted here on Google Sheets, also embedded below.
Each row in the sheet corresponds to a “time slot”, which is a time where something takes place during a session: a recorded talk + Q&A, a live panel, etc. Each row repeats its attributes, even if some are the same as the row preceding it. This repetition is key to the “database-like” structure of the sheet and vastly simplifies the logic of parsing it in Python. For example, instead of having an empty “Event” entry mean that the time slot shares the value of the first non-empty one above it, all the data for the time slot is immediately available in the row. Thus, instead of having complicated parsing and searching logic in the script that is more susceptible to errors introduced by reshuffling data in the sheet (e.g., a row is moved and now something else is the first non-empty Event above it), each row can be read independently, and events, sessions, and so on can be merged by the script using a dictionary. The extra visual clutter is a small price to pay for the added simplicity of parsing this model. The sheet is an internal database used only by the Technical Committee, making human readability less of a concern.
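The row-independent parsing can be sketched as follows. Here rows are plain dicts (in the real scripts openpyxl supplies them from the workbook), and the column and field names are illustrative:

```python
def merge_sessions(rows):
    """Merge independently-read rows into sessions keyed by (event, session)."""
    sessions = {}
    for row in rows:
        key = (row["Event"], row["Session"])
        # setdefault creates the session record on first sight and returns
        # the existing one on every later row -- no "look upward" logic needed.
        session = sessions.setdefault(key, {
            "event": row["Event"],
            "session": row["Session"],
            "time_slots": [],
        })
        session["time_slots"].append({"time": row["Time"], "title": row["Title"]})
    return sessions

rows = [
    {"Event": "Full Papers", "Session": "VIS for ML", "Time": "9:00", "Title": "Talk A"},
    {"Event": "Full Papers", "Session": "VIS for ML", "Time": "9:15", "Title": "Talk B"},
]
sessions = merge_sessions(rows)
```

Because every row carries its full attributes, moving rows around in the sheet cannot change which session a time slot belongs to.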
To treat the workbook like a database I implemented ExcelDB (not to be confused with the package on PyPI with the same name). ExcelDB adds some convenience database operations on top of openpyxl. It treats the first row as the database index and adds functionality for: finding rows with some attribute value, finding rows using a lambda, fetching a row as a dictionary, appending or writing rows by passing dictionaries, and iterating through rows.
A reasonable question at this point is “why not use an SQLite database?”. There are some motivations for using Excel instead of a real database engine:
Each session needs a unique YouTube video that it will be streamed to, which we create using the YouTube LiveStreaming API. First, let me clarify some terminology to match the YouTube API in the following discussion:
A video being livestreamed on YouTube consists of two parts: the Broadcast where viewers can watch the video and the Stream containing the data. Each Broadcast is associated with one stream that contains the audio/video data being shown. In this section, we will create the Broadcasts for each session using Google’s Python wrapper for the YouTube API.
To get access to the YouTube API, I created a project in Google Cloud Platform and added the YouTube API to its API library. Whether you go through the OAuth verification process to remove the “unverified app” screen is up to you; however, it is extremely important that you request an audit of your app by YouTube. If you don’t, all videos uploaded by your app (including live broadcasts after they finish) will be flagged as private.
To create a Broadcast we use the liveBroadcasts.insert API. For each session we create a title and description for the video and determine the start time based on the schedule sheet. We are also required to inform YouTube whether or not the video is made for kids. The liveBroadcasts.insert API takes this as a parameter; however, it doesn’t seem to have any effect, and the flag has to be set through a separate request as well. The broadcast for each conference session is created as shown below.
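A minimal sketch of the creation call with the YouTube Data API v3 Python client follows. The build_broadcast_body helper and its field values are my illustration, not the exact SuperMiniStream code; the commented insert call matches the googleapiclient syntax:

```python
def build_broadcast_body(title, description, start_time_iso):
    """Assemble the request body for liveBroadcasts.insert (illustrative)."""
    return {
        "snippet": {
            "title": title,
            "description": description,
            "scheduledStartTime": start_time_iso,  # RFC 3339 timestamp
        },
        "status": {
            "privacyStatus": "public",
            # Passed here as well, though as noted in the text this parameter
            # didn't seem to have any effect when creating broadcasts.
            "selfDeclaredMadeForKids": False,
        },
    }

body = build_broadcast_body(
    "VIS 2020: Example Session", "Session schedule...", "2020-10-25T14:00:00Z")
# broadcast = youtube.liveBroadcasts().insert(
#     part="snippet,status", body=body).execute()
```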
YouTube also allows you to set the thumbnail image for the video through the API. If you don’t set a thumbnail, the video will use your account profile picture, which in our case was just the VIS 2020 logo. While that’s ok, it’d be really cool to show the session schedule as the thumbnail. However, manually creating the images for each of our 110 sessions (either in Photoshop or by screenshotting the scene in OBS) is quite a lot of work. Fortunately, we can generate the thumbnails in Python using Pillow and upload them through the API.
The code to render the thumbnail image takes the background image used during the stream, the session schedule info, and the fonts to use. It then positions and draws the text over the background to closely match the scene in OBS. A tricky thing here was getting the text sized to fit on the image properly. The width and height of the schedule text vary significantly between sessions depending on the number of presentations and the lengths of their titles or author lists. To pick a font size that maximizes the size of the text within some specified bounds, I run a binary search between a set minimum and maximum size to find the largest font size that fits the text in the bounds. render_thumbnail returns an io.BytesIO object containing the thumbnail image in PNG format, which we can then set as the video thumbnail.
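The binary search itself can be sketched independently of Pillow. Here measure stands in for measuring the rendered text at a candidate size (in the real code this would wrap an ImageFont-based measurement); the linear fake measurement at the end is purely illustrative:

```python
def fit_font_size(measure, bounds, lo=8, hi=120):
    """Return the largest size in [lo, hi] whose measured (w, h) fits bounds."""
    max_w, max_h = bounds
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        w, h = measure(mid)
        if w <= max_w and h <= max_h:
            best = mid
            lo = mid + 1   # fits: try a larger size
        else:
            hi = mid - 1   # too big: try a smaller size
    return best

# Fake measurement for illustration: 10px wide and 1px tall per point of size.
size = fit_font_size(lambda s: (10 * s, s), bounds=(400, 60))
```

Since the measurement is monotone in the font size, the binary search needs only O(log n) measurements instead of trying every size.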
Zoom provides an API that can be used to create meetings, among other operations. For VIS we purchased a Zoom Business license to also enable 1080p Group HD. Each streaming computer was given an account, with its computer ID letter used as the account’s last name to identify it in the script.
To schedule the Zoom meetings I created a JWT app on the VIS 2020 Zoom admin account. Creating the app on the admin account allows us to use the same API key to schedule meetings for all hosts (the other PCs). This is the easiest path for a single-user app that you’ll run locally (i.e., a Python script) since no additional verification or OAuth screen is required. After making the JWT app you’re given an authentication token to use to authenticate your API requests.
To create a meeting we use the Create Meeting API. This API takes a JSON object containing the configuration of the meeting and returns the scheduled meeting information, including the meeting URL. This is done in the Session.schedule_zoom member.
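A rough sketch of the request follows. The payload fields match Zoom’s Create Meeting API (POST /v2/users/{userId}/meetings); the helper name, settings chosen, token, and host email are placeholders rather than the actual Session.schedule_zoom code:

```python
def build_meeting_payload(topic, start_time_iso, duration_min):
    """Assemble the JSON body for Zoom's Create Meeting API (illustrative)."""
    return {
        "topic": topic,
        "type": 2,                     # 2 = scheduled meeting
        "start_time": start_time_iso,  # e.g. "2020-10-25T14:00:00Z"
        "duration": duration_min,      # minutes
        "settings": {
            "join_before_host": False,
            "mute_upon_entry": True,
        },
    }

payload = build_meeting_payload("VIS 2020: Example Session",
                                "2020-10-25T14:00:00Z", 105)
# import requests
# resp = requests.post(
#     f"https://api.zoom.us/v2/users/{host_email}/meetings",
#     headers={"Authorization": "Bearer " + jwt_token},
#     json=payload)
# join_url = resp.json()["join_url"]
```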
To create and manage the Discord channels for each session I created a Discord bot and used discord.py to access the API. The library is very nice to work with, though it is designed for people writing bots. This makes it a bit annoying to use from “non-bot” scripts, since you can’t make API calls without running an event loop. After scheduling the YouTube broadcasts and Zoom meetings, the script starts a bot and creates the Discord channel and category (if necessary) for each session of the day. In Discord we grouped session channels by event, with a category per event (e.g., Full Papers, Short Papers, etc.) and the channels for that event’s sessions placed within it. The script also posts the schedule of each session as an “embed” in the channel and pins this message. This gives attendees quick access to the schedule and YouTube link for the session. An example of how these embeds look is shown below. The embed for each session is built and then posted in the corresponding channel by the bot.
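The embed construction can be sketched as below. build_schedule_fields is a hypothetical helper producing the embed’s field content; the commented lines show the discord.py calls, which need a running client and event loop:

```python
def build_schedule_fields(session):
    """Turn a session record into (name, value) pairs for an embed (illustrative)."""
    fields = [("YouTube", session["youtube_url"])]
    for slot in session["time_slots"]:
        fields.append((slot["time"], slot["title"]))
    return fields

session = {
    "name": "VIS for ML",
    "youtube_url": "https://youtu.be/xxxx",
    "time_slots": [{"time": "9:00", "title": "Talk A"}],
}
fields = build_schedule_fields(session)
# embed = discord.Embed(title=session["name"])
# for name, value in fields:
#     embed.add_field(name=name, value=value, inline=False)
# message = await channel.send(embed=embed)
# await message.pin()
```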
After scheduling everything we need to send the presenters, chairs and organizers
of each session the Zoom meeting information so that they can join for the
live portions of the stream. This means that we need to send 110 emails (one per session) to many more recipients (everyone presenting in each session). The best solution I found for sending these bulk emails from a Python script was Amazon’s Simple Email Service. I used the Boto3 Python SDK for AWS, which provides a client for the SES API.
To send a nicely formatted email with the schedule and links, I sent an HTML email using the send_html_email function, which takes the HTML content, recipients, attachments, and alternative plain text, then builds the final email and sends it out. Some of the more challenging things to get right here were properly attaching the plain text alternative to reduce the “suspiciousness” level of the email, and attaching other items like the ICS file for the session and the conference logo image. In Session.email_contributors we generate an ICS file that presenters can add to their calendar, and attach the conference logo to give the email a nice official feel. These both took a bit of trial and error to attach properly, so I include these snippets below. They can also be found in the Session.email_contributors method.
The ICS files were created using the ics library in Python.
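As a rough reconstruction of the attachment handling with the stdlib email package (the structure and names here are my sketch, not the exact send_html_email code; boto3’s send_raw_email would take message.as_string()):

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

def build_email(subject, html, plain, ics_bytes):
    """Build a mixed message: plain/HTML alternative plus an ICS attachment."""
    msg = MIMEMultipart("mixed")
    msg["Subject"] = subject
    # The alternative part holds plain text and HTML; the client picks one.
    # The plain-text alternative also helps reduce spam "suspiciousness".
    alt = MIMEMultipart("alternative")
    alt.attach(MIMEText(plain, "plain"))
    alt.attach(MIMEText(html, "html"))
    msg.attach(alt)
    # Attach the calendar invite so presenters can add the session.
    ics = MIMEApplication(ics_bytes, "ics")
    ics.add_header("Content-Disposition", "attachment", filename="session.ics")
    msg.attach(ics)
    return msg

msg = build_email("Your VIS 2020 Session", "<p>Hi!</p>", "Hi!",
                  b"BEGIN:VCALENDAR\r\nEND:VCALENDAR\r\n")
# ses = boto3.client("ses", region_name="us-east-1")
# ses.send_raw_email(RawMessage={"Data": msg.as_string()})
```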
The final step in scheduling each day’s sessions is to build the asset bundles for each session. Each asset bundle contains all the videos for the session, along with a playlist to play them back in order, the text input files for OBS containing the session information, and a dashboard for the technician. The dashboard contains the schedule, the list of people who will join the Zoom meeting to present, and links to the Zoom meeting, Discord channel, and YouTube Studio for the session. This task is done by the script that builds the session asset bundles for each day. The output is a directory structure of /<day>/<session start-end time>/<computer ID>/, where each leaf directory stores the assets that a specific computer needs at that time for its session. When setting up to stream a session, a technician simply finds the right day, time, and computer ID directory on a shared Google Drive sync’d to all the streaming machines and copies its contents into C:\LIVE. OBS will then update the session info shown, and the technician can view the session schedule in the dashboard and follow the provided link to start the Zoom meeting for presenters to join.
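The bundle layout can be sketched as below; the function, file names, and file contents are illustrative stand-ins for the real asset-compilation script:

```python
import os
import tempfile

def build_bundle(root, day, time_range, computer, session):
    """Create /<day>/<start-end>/<computer ID>/ with the OBS text inputs."""
    bundle = os.path.join(root, day, time_range, computer)
    os.makedirs(bundle, exist_ok=True)
    with open(os.path.join(bundle, "session_title.txt"), "w", encoding="utf-8") as f:
        f.write(session["title"])
    # Reverse order so OBS's Chatlog Mode (limit 1) shows the first talk.
    with open(os.path.join(bundle, "current_presentation.txt"), "w",
              encoding="utf-8") as f:
        for title in reversed(session["talks"]):
            f.write(title + "\n")
    return bundle

root = tempfile.mkdtemp()
bundle = build_bundle(root, "wednesday", "1400-1540", "C",
                      {"title": "VIS for ML", "talks": ["Talk A", "Talk B"]})
```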
The result of scheduling each day with
schedule_day.py is a YouTube broadcast,
Zoom Meeting, and Discord channel for each session. Each session is also assigned
a computer that is responsible for streaming it, where each computer has its own
stream key. In this section, we’ll look at the scripts used when the sessions
are live, to manage the YouTube broadcasts (Section 4.1), synchronize
the chat (Section 4.2), and track and plot viewer statistics (Section 4.3).
One goal I had in designing the virtual conference infrastructure was to reduce the workload placed on the technicians. The technicians already have a lot to manage between helping presenters set up on Zoom, playing back the videos, transitioning the OBS scenes, and helping the chair watch the chat for questions. Taking extra work that can be scripted off their plate, like making the Broadcasts live or offline, will help them out.
To this end, the YouTube broadcasts were all managed by a script, advance_streams.py, which is responsible for binding the appropriate stream keys to the broadcasts based on the schedule, making broadcasts live when they start, and taking them offline when they finish. The script takes a [<time start>, <time end>] window and the schedule sheet. Broadcasts for sessions that end within this window will be ended, while those starting in this window will be made live. We simply start streaming in OBS at the beginning of the day and leave it running the entire time, with this script used to bind the streams to the appropriate sessions during the day.
During the breaks between sessions I use advance_streams.py to migrate the streams of each computer to the broadcasts for the upcoming sessions. A session is made live by binding the stream key for its computer to its broadcast and transitioning the broadcast to be live. The method that does this also performs some sanity checks on the state of the stream and broadcast, to ensure both are in the correct state to be made live. A session is finished by transitioning it to the complete state and detaching the stream key from the broadcast. Finally, we explicitly make the video embeddable, since the videos had to be made non-embeddable when created as live streams. Note that detaching the key may not be necessary; I didn’t come back to check on this.
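A hedged sketch of the bind/transition flow with the YouTube API Python client follows. The can_go_live helper and the status strings are simplified stand-ins for the real sanity checks, and the commented calls assume a constructed youtube client and known IDs:

```python
def can_go_live(broadcast_status, stream_status):
    """Simplified sanity check: the broadcast must be ready and the bound
    stream must be receiving data before transitioning to live."""
    return broadcast_status == "ready" and stream_status == "active"

# Bind the computer's stream to the session's broadcast, then go live:
# youtube.liveBroadcasts().bind(
#     id=broadcast_id, part="id,status", streamId=stream_id).execute()
# if can_go_live(b_status, s_status):
#     youtube.liveBroadcasts().transition(
#         broadcastStatus="live", id=broadcast_id, part="status").execute()
#
# ...and at the end of the session, complete it and detach the key:
# youtube.liveBroadcasts().transition(
#     broadcastStatus="complete", id=broadcast_id, part="status").execute()
# youtube.liveBroadcasts().bind(
#     id=broadcast_id, part="id,status").execute()  # bind with no streamId
```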
A common issue experienced in virtual conferences that stream to YouTube or Twitch and use Slack or Discord for longer discussion is that, since YouTube and Twitch also provide a small livestream chat, the conference attendees end up split between these two chat platforms. Some attendees watch on YouTube/Twitch and discuss or ask questions in the live stream chat, while others use Slack/Discord, and few participate in both. This also puts additional work on presenters and session chairs, who have to monitor two chat platforms for questions about the presentations. We wanted to avoid this issue during VIS by somehow synchronizing the YouTube and Discord chat platforms.
To synchronize the two platforms I wrote a bot that uses the YouTube liveChatMessages API and discord.py to bridge the chat platforms.
When messages are posted on YouTube the bot fetches them and posts them
back to the Discord channel for that session. Similarly, when messages
are posted on Discord during the session they are posted to the YouTube live chat.
This worked really nicely during the conference, and helped us achieve our goal
of a unified chat platform. Users who didn’t want to make a Discord account
could still participate fully in the discussion taking place during the session,
and presenters or chairs only had to monitor one platform for questions.
The script is a bit too long to paste here, but is available on GitHub.
The bot is started after the sessions have been made live and is given
the schedule sheet and current time. The bot then finds the YouTube broadcasts
and Discord channels that are currently active for that time in the sheet
and begins polling the YouTube live chat and monitoring the Discord channels.
The YouTube API is polled via liveChatMessages.list, which returns any new messages and a time to wait before requesting new ones. An async function is added to the Discord client’s event loop for each active YouTube broadcast to poll its chat. Messages from Discord are posted to the live chat via liveChatMessages.insert.
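One polling step can be sketched as a pure function over the liveChatMessages.list response shape (snippet.displayMessage, authorDetails.displayName, and a top-level pollingIntervalMillis); the helper name is my own:

```python
def extract_new_messages(response):
    """Pull (author, text) pairs and the poll delay from a list() response."""
    messages = []
    for item in response.get("items", []):
        messages.append((item["authorDetails"]["displayName"],
                         item["snippet"]["displayMessage"]))
    # YouTube tells us how long to wait before the next poll.
    wait_s = response.get("pollingIntervalMillis", 1000) / 1000.0
    return messages, wait_s

response = {
    "pollingIntervalMillis": 2000,
    "items": [{"authorDetails": {"displayName": "Will"},
               "snippet": {"displayMessage": "Great talk!"}}],
}
messages, wait_s = extract_new_messages(response)
# Each (author, text) pair is then posted to the session's Discord channel,
# and Discord messages travel the other way via liveChatMessages.insert.
```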
During VIS, some attendees were surprised about the bidirectional nature of the chat synchronization. Synchronizing the chat platforms bidirectionally is required to provide a unified chat experience on either platform; however, some users were not comfortable with their messages on the more private/internal Discord channel being visible publicly on YouTube.
To address this, I initially developed a quick hack that would let attendees prefix their message with a - character, thereby instructing the bot not to synchronize the message to YouTube. There were some difficulties in also filtering out these messages if they were quoted by another user, though this was largely addressed by also filtering quoted messages that begin with the prefix, which correspond to quoted messages that the original author did not want synchronized.
Though this solution worked well for attendees who only wanted some of their messages withheld, it was a bit fragile. During the week it also became clear that some attendees did not want any of their messages synchronized, based on their continuous use of the prefix. Midway through the conference, I added another filtering feature to the chat synchronization bot that allowed users to add themselves to a “no-sync” list. Once on this list, all messages posted by them or mentioning them (i.e., quotes) would not be synchronized to YouTube. Attendees could register themselves on this list by typing $nosync in our support channel, though I think this feature was added a bit too late in the week to get wide enough visibility to those interested in using it.
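The combined filtering rules can be sketched as a pure function. The prefix handling, especially the quote form, is a simplified assumption about how quoted opt-out messages were matched, not the exact bot logic:

```python
def should_sync(author, content, mentioned, nosync):
    """Decide whether a Discord message may be mirrored to YouTube."""
    if content.startswith("-"):        # author opted this message out
        return False
    if content.startswith("> -"):      # quote of an opted-out message (assumed form)
        return False
    if author in nosync:               # author opted out entirely via $nosync
        return False
    if any(user in nosync for user in mentioned):
        return False                   # mentions/quotes of no-sync users stay private
    return True

nosync = {"alice"}
```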
A major concern I had about actually using the chat synchronization bot during the conference was the large number of API requests it would make. Each call to fetch new messages would cost us 1 unit, while each call to post a message cost 50. By my estimate, just running the polling every second to pull messages down from YouTube and post them to Discord would use about 173k units every day of the conference. However, the default YouTube API limit given to new applications is just 10k units per day. Running out of quota in the middle of the day would mean the chat sync would stop working, as it would no longer be able to make API requests.
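To see how an estimate in that ballpark can arise, here is one plausible breakdown. The per-call costs come from the text; the polling duration and message volume are assumed numbers chosen for illustration, not the actual VIS traffic:

```python
POLL_COST, INSERT_COST = 1, 50   # units per list / insert call (from the text)

seconds_of_polling = 24 * 60 * 60   # assume polling once per second, all day
synced_messages = 1730              # assumed Discord -> YouTube message volume

estimate = seconds_of_polling * POLL_COST + synced_messages * INSERT_COST
DEFAULT_QUOTA = 10_000
# The estimate dwarfs the default 10k/day quota, hence the increase request.
```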
To request a higher API quota, the application has to go through a review process with YouTube where you justify your request. This process can take a few weeks depending on the application and how quickly you respond to their requests for additional information. I requested a quota of 10M units based on a conservative overestimate of how much activity there might be to synchronize back to YouTube. We requested this increase on Sep 3, and after some small corrections were granted 5M units per day on Sep 17. This request was made on an unverified “testing” application. When I went through the same process with SuperMiniStream, a public application, there were more requests from the YouTube API team that I had to address, such as putting additional ToS and info on the app’s website, and it took a bit longer (23 days).
In the end, we used significantly fewer API units during the conference than we had anticipated. The 10M quota request was quite an overestimate, so when we received 5M we still felt we were in good shape. Our highest use during the week was 652k units on Sunday, when 8 parallel tracks ran 4 sessions each. The rest of the week had fewer parallel tracks and/or sessions, and resulted in less chat to sync. Our API use during the week is shown below.
The last bot I wrote was based on a suggestion by Jason Dykes during the conference. He asked if it would be possible to have a bot post something about the current number of people watching a session while the session was live.
To add this feature I wrote one more bot, run similarly to the chat sync bot: it is given the schedule sheet and current time, finds the YouTube broadcasts for the currently live sessions, and posts the viewer information to their Discord channels. The bot fetches the live stream viewer stats through the videos.list API by requesting the live streaming details of each video.
The bot would request this information for the current live sessions every minute. After 10 minutes had passed since it last posted an update it would plot the viewer statistics using matplotlib, save the plot out to a PNG, and post the image to the Discord channel for the session. To avoid filling the channel with these plots it would also track its last message and delete it before posting the new plot. This loop was run in an async method added to the Discord client’s event loop, and is shown below. The full code for the bot is available on GitHub. An example of the plot made by the bot is shown in Figure 3.
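The loop’s bookkeeping can be sketched as follows. The class and its names are illustrative, and the actual matplotlib plotting and Discord send/delete calls are elided to comments:

```python
class ViewerTracker:
    """Sample viewer counts each minute; repost the plot every 10 minutes."""

    def __init__(self, post_every=10):
        self.samples = []            # (minute, concurrent viewers) pairs
        self.post_every = post_every
        self.last_message_id = None  # previous plot message, deleted on repost

    def add_sample(self, minute, viewers):
        """Record a sample; return True when a new plot is due."""
        self.samples.append((minute, viewers))
        return minute > 0 and minute % self.post_every == 0

    def posted(self, message_id):
        """Record the new plot message; return the old one to delete first."""
        to_delete, self.last_message_id = self.last_message_id, message_id
        return to_delete

tracker = ViewerTracker()
# Simulate 20 minutes of sampling: a plot is due at minutes 10 and 20.
due = [m for m in range(1, 21) if tracker.add_sample(m, 100 + m)]
# When due: plot tracker.samples with matplotlib, save to PNG, delete the
# message id returned by tracker.posted(new_id), and send the new image.
```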
VIS 2020 was very well received and I was really happy with how the event turned out (I even got an award for meritorious service!). While it’s hard to truly capture a live conference in a virtual format, since things like after-hours networking, hallways, and parties don’t translate well, the event was still a ton of fun. The less formal nature of the Discord channel and the wide participation in it by the community helped a lot to this end. Near the end of VIS, Hendrik sent around an informal survey to solicit feedback about what we should keep from the virtual event for future virtual and in-person events, and the Discord channel was frequently mentioned as an item to keep. Attendees also found that asking questions on Discord was less intimidating than stepping up to the microphone during an in-person event.
It would also be nice to keep some aspects of the virtual format’s wider accessibility even when we return to in person events. Being able to virtually attend any conference for no fee (or a small one) would be great for students with funding or travel challenges. The automatic archiving of talks is also a nice feature provided by the virtual format that allows people to view talks after the event if the timing is inconvenient.
The rest of this section will cover some of the issues that we ran into, things that could have been done better, and just some general notes or things to be aware of for other people organizing the streaming aspects of a virtual conference.
Zoom is a nice video conference application, but it is a real pain to capture with OBS Studio. The application manages its windows and rendering in a strange way that makes only the Window Capture mode work. We ran into additional issues due to our decision to screenshare the presentation videos back to those in the Zoom call when playing back the talks. This made it much easier for chairs and presenters, as they didn’t need to watch YouTube to see the talk, and made it less likely that we would have audio feedback issues due to someone leaving YouTube unmuted during the Q&A. However, when exiting the screen share and preparing the Zoom Q&A scene in OBS, the technicians would always have to toggle the Zoom window capture over to a different application and back to Zoom to pick up the window again. After testing all of Zoom’s different rendering modes, I didn’t find a way to work around this.
I also found that Zoom does not work with OBS’s Game Capture mode: it only captures a single person’s video, and switches to the small video of yourself when someone else joins the meeting.
Finally, we decided to provide live captions for certain live sessions during the conference to improve accessibility. All talks were required to provide subtitles with their video, which could be generated easily by editing the output of YouTube’s automatic caption generation. For live sessions we used White Coat Captioning, a live captioning service that specializes in technical events. Although YouTube can caption live streams when allowing more latency, it can have issues with technical terms and quality. I initially planned to use Zoom’s caption display and simply capture the captions through the same window capture to display them on the stream. However, although the captions appear to be part of the same window when they’re displayed, they aren’t captured by OBS when using the window capture. As a result we had the captioner send their captions directly to YouTube instead.
Hopefully these issues are resolved in future releases of OBS and Zoom.
The most important issue that you must address is to get your application audited and approved by YouTube. This is not the same as having Google approve your OAuth consent screen, which is not required for the YouTube API audit. As mentioned above, your app must have been audited to be able to upload videos or make the archived live stream videos publicly available. This is due to a recent policy change in the YouTube API. If you don’t have the application reviewed your videos and archived live stream videos will be forced private for a ToS violation. Requesting and being approved for a higher API quota also counts as an audit, so you only need to do one (request audit or higher quota).
YouTube enforces some limits on the number of live streams that you can schedule each day, which is lower than the number of videos you can upload each day. I found that we could schedule 30 live streams each day, though I did not hit a limit on the total number of live streams we could have scheduled (I tested up to ~180). The 30 live stream limit also counts against your video upload limit; for example, after scheduling 30 live streams you could upload 70 videos. I didn’t have time to test, but uploading videos may count against the live stream limit as well, e.g., after uploading 100 videos (the max per day for accounts with fewer than ~2k videos) you would probably not be able to schedule any live streams that day.
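Scheduling this many broadcasts by hand isn’t practical, so it’s worth scripting against the YouTube Data API v3. A minimal sketch, assuming an authorized `google-api-python-client` client object named `youtube` (the session title and start time here are made up):

```python
# Sketch: build the request body for liveBroadcasts.insert.
# Assumes you already have an authorized `youtube` client object;
# enableAutoStart/enableAutoStop make the broadcast go live when
# the encoder starts sending data.

def make_broadcast_body(title, start_iso, privacy="unlisted"):
    """Request body for scheduling one live broadcast."""
    return {
        "snippet": {"title": title, "scheduledStartTime": start_iso},
        "status": {"privacyStatus": privacy},
        "contentDetails": {"enableAutoStart": True, "enableAutoStop": True},
    }

# With an authorized client, the actual call would be:
# youtube.liveBroadcasts().insert(
#     part="snippet,status,contentDetails",
#     body=make_broadcast_body("VIS 2020: Session 1", "2020-10-26T16:00:00Z"),
# ).execute()
```

Each of these insert calls consumes API quota as well, which is another reason to request the quota increase (and thus the audit) early.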
The audit process can take a few weeks; if you plan to go this route, I recommend making the audit or quota increase request well ahead of your event to ensure it completes in time.
It’s also important to be aware that if you make a new account for your conference, you likely won’t be able to embed the live stream videos on your website. Unfortunately, embedding live streams requires 1k subscribers and 4k public watch hours on the account, after which you must also enable monetization before live stream embedding is allowed. The videos are embeddable after they’ve finished streaming, though.
We also had some issues with the display of live captions on YouTube, where they would flash up for a half second and disappear. From talking to the captioner it sounds like this is a common issue with live captions on YouTube. Fortunately they also provided a separate text output stream for attendees to view the captions.
I initially used the GMail API to try sending emails out from my personal GMail. However, since the app’s OAuth screen wasn’t approved by Google, the emails were flagged with a “suspicious email” warning by GMail. Making a public app that can access the GMail API requires a more extensive verification process because it accesses restricted scopes. The guide also mentions a possible independent security audit, though I don’t think this applies to a Python script run locally. An internal application to a G-Suite account can get around this and not need the consent screen to be verified; however, new G-Suite accounts are still restricted to 500 emails per day until they’ve paid $30 in their subscription. New G-Suite accounts also have a low reputation score on their emails. After struggling against these G-Suite restrictions and having emails seem to get stuck sending for hours or route to spam, I decided to try Amazon’s Simple Email Service.
This issue came up very late, when we were trying to send out registration information to attendees and session information to participants. Having these emails get delayed or lost was really concerning. Fortunately, migrating to Amazon SES and getting approved to move out of the testing sandbox only took a day, and we were able to get these emails out to people. My recommendation here is to not bother with the GMail API and just use Amazon SES. The sender IPs have a high reputation to start, and you’re given a much higher sending limit (we were granted 50k). You build the email with the same Python email module in both cases, so the difficulty of using either is the same and it’s easy to swap between them. SES is also quite cheap; at $0.10 per 1,000 emails, our total cost for VIS was $0.40.
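To illustrate how little changes between the two backends, here is a sketch of building a message with the standard email module; the addresses are placeholders, and the boto3 call assumes a verified sender and an account already moved out of the SES sandbox:

```python
# Build a plain-text email with the standard library; the same MIME
# message works whether you hand it to the GMail API or to Amazon SES.
from email.mime.text import MIMEText

def build_message(sender, recipient, subject, body):
    """Assemble a simple plain-text message."""
    msg = MIMEText(body, "plain")
    msg["Subject"] = subject
    msg["From"] = sender
    msg["To"] = recipient
    return msg

# With boto3 (assumed: verified sender, out of the SES sandbox):
# import boto3
# ses = boto3.client("ses", region_name="us-east-1")
# ses.send_raw_email(
#     RawMessage={"Data": build_message(
#         "vis@example.com", "attendee@example.com",
#         "Your VIS 2020 registration", "...").as_string()})
```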
Another aspect that I think could have worked better was collecting the video submissions from presenters. We set a naming convention to help us uniquely identify each talk, though this got a bit mixed up on the final submission site. IEEE set up an FTP server for folks to upload to and provided slightly different guidance on the naming convention (though this naming convention was also fine). However, the FTP server isn’t able to enforce any specific naming convention. The result was that a variety of file naming patterns were used (ours, IEEE’s, and others), making it a bit harder to sort through the video submissions (a more flexible regex did the trick). Some presenters also had issues uploading to the FTP, either with file corruption or just general difficulty accessing it.
My recommendation would be to use a Google Form to collect the presentation videos. Google Forms can accept file uploads and will save the uploaded files out to the account’s Google Drive. The Google Form can also output an Excel sheet with a row per response containing the form info and Drive links for each file. You can then write a script using openpyxl to parse the data and use the Google Drive files.get API to retrieve the file name and look up the local path in your synced Google Drive folder to organize the videos. Note that the Google Drive API is also a restricted API (i.e., requiring a more extensive verification process). However, you can register a G-Suite account and make this script an internal app to avoid needing this review. A business standard account is $12/user/month and comes with 2TB of Drive storage per user, which should be plenty of space to store the videos. You’ll also want to increase the maximum total file upload size allowed for the form.
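The core of such a script is small. A sketch under some assumptions (the response sheet’s column layout is hypothetical, and Form upload links are assumed to carry the Drive file ID as an `id=` query parameter):

```python
# Sketch of the collection script described above. Form upload links
# typically look like .../open?id=<FILE_ID>; the ID is what the Drive
# API's files.get call resolves to the original uploaded file name.

def file_id_from_link(upload_link: str) -> str:
    """Extract the Drive file ID from a Form upload link."""
    return upload_link.rsplit("id=", 1)[-1]

# Reading the response sheet with openpyxl might look like
# (assumes columns: submission title, upload link):
# from openpyxl import load_workbook
# ws = load_workbook("responses.xlsx", read_only=True).active
# for title, link in ws.iter_rows(min_row=2, values_only=True):
#     file_id = file_id_from_link(link)
#     # With an authorized Drive client:
#     # name = drive.files().get(fileId=file_id,
#     #                          fields="name").execute()["name"]
```

From the resolved names you can then map each upload back to its local path in the synced Drive folder and rename files into your own convention.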
Another bot I implemented but didn’t discuss or add to the public code release was a polling system bot. Users could start new polls and add or vote on questions using the bot. However, this adds a lot of overhead to asking a question, since users need to learn how to use this bot. In the end this bot was not used at all during the conference, and I think this was for the best. The chairs and technicians were able to pick up the questions from the chat without too much difficulty. However, I would recommend that attendees practice “clap etiquette” in the future. Instead of everyone posting 👏 as a message in the channel, and thus flooding the channel with applause, it works much better to have everyone use the emoji as a reaction on a single clap post (either made by the chair or the first few posted).
As part of the video review process we used the VIS student volunteers to check the videos for errors. I wrote a script to check for basic encoding errors (correct resolution, format) and to create bundles of the videos passing this check to distribute to SVs for review. We did catch a few videos with errors in this way, e.g., where the audio would cut out at slide transitions. If possible, I would recommend performing a manual review of the videos. Playing them back at ~1.5x speed was enough to still pick up on any issues.
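The automated part of such a check can be done with ffprobe’s JSON output. A minimal sketch, not the actual script used at VIS (the 1920x1080 requirement here is an assumed example):

```python
# Check a video's resolution from ffprobe's JSON stream metadata.
# ffprobe would be invoked as, e.g.:
#   ffprobe -v error -show_streams -of json talk.mp4
import json

def check_resolution(ffprobe_json: str, want=(1920, 1080)) -> bool:
    """Return True if the first video stream matches the required
    resolution; raises StopIteration if there is no video stream."""
    streams = json.loads(ffprobe_json)["streams"]
    video = next(s for s in streams if s["codec_type"] == "video")
    return (video["width"], video["height"]) == want
```

Similar checks on the container format, codec, and duration fields catch most submission mistakes, but note that this kind of metadata check cannot catch content errors like the audio dropouts mentioned above, which is why the manual SV review still matters.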
Also on the topic of videos: audio normalization and compression. When faced with a large number of videos (over 200 for VIS across all events) recorded in a huge variety of environments and with widely varying audio quality, this is not an easy task. For VIS 2020, I was hesitant to start messing with the audio of all the videos out of concern that I would introduce errors in some without time for a re-review by the SVs to catch them. Alex and Martin mentioned that a colleague of theirs who has experience with audio processing tried this for a few days for EGEV 2020 and didn’t have very good results, due to the wide variation in recording quality. Unfortunately this results in the volume level being inconsistent during the conference, requiring the technicians to adjust it on the fly. If there is a reliable route to compressing/normalizing audio for hundreds of videos recorded in widely varying environments, I’d be interested to know about it, as this issue will be faced by many virtual conferences through the next year.
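For reference, the usual starting point is ffmpeg’s loudnorm (EBU R128) filter; this is not what we did at VIS 2020, and given the EGEV experience above it is at best an untested candidate. A sketch that only builds the command line (the target levels are common defaults, not recommendations):

```python
# Build an ffmpeg loudnorm command for one video. A two-pass run
# (measure first, then apply the measured values) is generally more
# consistent than this single-pass form; the video stream is copied
# untouched so only the audio is re-encoded.

def loudnorm_cmd(src: str, dst: str, target_lufs: float = -16.0):
    """Single-pass loudness normalization command."""
    return [
        "ffmpeg", "-i", src,
        "-af", f"loudnorm=I={target_lufs}:TP=-1.5:LRA=11",
        "-c:v", "copy",  # don't re-encode the video stream
        dst,
    ]
```

Even if this works on most files, the re-review problem remains: every processed video would need to be checked again for introduced artifacts.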
My recommendation to presenters is to purchase a good microphone if possible, and to be close to the mic when recording your audio. If you’re not able to pick up a high-quality mic, you can still get pretty good results doing some noise filtering and compression in Audacity (free software).
To encourage informal discussion we created a few “hallway” voice chat channels, an idea suggested by an attendee of UIST 2020, which had a similar setup. We created 10 voice channels with a 10 person capacity and a few with a 20 person capacity to encourage small to medium size groups of people to chat. These were open to all attendees for the entire conference for any informal discussion or after-hours socializing, and were pretty popular. I’d recommend providing this to attendees at other virtual events in the future.
If you’ll be having students volunteer as technicians to run the streams during the conference I strongly recommend doing a training session with them to familiarize them with the software before the event. While it isn’t too complex a task, it is not something that can be figured out in 30 minutes before the session starts. I recorded a demo video for the technicians and organized small training groups which took place the week before VIS to familiarize the technicians with the setup.
This post covers the key parts of the virtual conference streaming infrastructure and my experience with VIS 2020. Of course a lot more went on behind the scenes than is discussed here, but if I tried to fit everything in, this post would be enormous and I’d probably never finish writing it. So feel free to get in touch if you have other questions that aren’t answered here, in the documentation on GitHub, or in the demo video about SuperMiniStream. Also be sure to check out Alper’s write-up about the virtual conference webpage!