In this tutorial you'll learn to use speech recognition software to automatically add subtitles or captions to a video. We'll show you how to use three leading artificial intelligence technologies, two free and one paid. Adding subtitles to your videos is easier than it may seem, and brings all kinds of great benefits. Read on to find out how.
Accessibility Is Good for Everyone
With the rise of multiple online video platforms and the spread of video-capable smartphones, we're now producing and consuming more and more video every day. This swell of video creates three new access problems:
- How do you find the videos you want in a crowded world?
- How do you watch all these new videos if you have a hearing impairment?
- How do you follow along with video that isn't in your first language, something that's now an everyday reality for millions of people?
Subtitles and captions solve all of these problems.
Search engines have no idea what's in your videos. This is slowly changing with the rise of algorithms that can interpret images but, fundamentally, search engines are built to read text. The best way to tell the world what's in your video is still to describe it. Subtitles and closed captions provide exactly the kind of juicy text information that Google and the rest love to have.
As many as 15 percent of Americans have a hearing impairment. Closed captions, or written text overlaid on a video that replicates what the speaker is saying, are a key accessibility tool. Accessibility is essential in all kinds of videos, but it's especially important when teaching with video, which is why Envato is moving towards providing captions on as many Tuts+ videos as we possibly can.
It's not just us, either: in the USA, significantly more closed captioning will be required by law by next year. It's also worth noting that Facebook videos play automatically but silently by default, so if you want everyone on Facebook to know what people in your videos are saying, you really need captions or subtitles.
Subtitles are frequently used to carry speech from one medium to another (such as spoken English to written English) or to translate from one language to another. They're a great way to make video more accessible to diverse linguistic audiences.
1. YouTube's Automatic Captioning (Free)
Creating subtitles with YouTube is a breeze. Automatic transcription is built directly into the service, and can be edited in the Video Manager. Although the service does seem to be continually improving, YouTube's transcription is notoriously (and sometimes hilariously) imperfect. Thankfully, you have the option to manually adjust them. Select the Subtitles & CC option to view the subtitle settings, and then click on the subtitle file you want to alter. In my case, that is English (Automatic).
Now press the Edit button to begin adjusting them. On the left side, you'll see the transcription that YouTube automatically generated. You can type over any of the incorrect subtitles to correct them. Use the embedded video player to work your way through the video and intervene as needed. Click Publish Edits when you're finished to save the corrected subtitles.
To export and download a subtitle file, click the Actions dropdown and select the file type you need. SRT is a widely supported format (Facebook video included). Google makes it tricky to save the subtitles from other people's videos, but there is a workaround.
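If you've never looked inside one, an SRT file is just plain text: numbered cues, each with a start and end timestamp and the caption text. Here's a minimal sketch in Python (the timings, captions, and filename are made up for illustration) that writes a two-cue SRT file:

```python
# Minimal illustration of the SRT format: numbered cues, a start --> end
# timestamp in hours:minutes:seconds,milliseconds, then the caption text,
# with a blank line between cues. Timings and text here are placeholders.
srt_cues = """1
00:00:00,000 --> 00:00:03,200
Welcome to the tutorial.

2
00:00:03,200 --> 00:00:07,500
Today we'll add captions to a short video.
"""

# Write the cues out as a .srt file you could upload alongside a video.
with open("captions.srt", "w", encoding="utf-8") as f:
    f.write(srt_cues)
```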
There are other problems with YouTube's service. For one, there is no way to collaborate on the subtitles. Or maybe your videos are for a course you're producing and you don't want to put your files on YouTube. Or maybe you are Irish and YouTube just completely mangles your accent.
There are plenty of reasons why YouTube isn't always the right tool for the job. Let's look at how to generate subtitles and captions for any video.
2. Watson and Amara (Free)
Hello, Watson!
You might have heard about IBM's Watson supercomputer when it defeated former Jeopardy! champions in 2011. IBM has put Watson technology to use processing all kinds of data, including speech. Simply upload an audio file to the speech-to-text demo and it will transcribe the audio for you.
Go ahead and jump to the app. Don't be fooled: the demo has plenty of functionality. I uploaded a six-minute audio clip and it was transcribed in just a few minutes.
I recommend transcription as a finishing step in editing a video. Lock down your edit, then export a WAV audio file from your video editor. If your app doesn't support exporting directly to WAV, you can use Audacity to convert from another audio format.
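If you'd rather script that conversion than open Audacity, here's a minimal sketch using the pydub Python library (my own suggestion, not something the Watson demo requires; pydub needs ffmpeg installed, and the filenames are placeholders):

```python
# A scriptable alternative to converting audio in Audacity: convert an
# exported clip to WAV with pydub (which relies on ffmpeg under the hood).
from pydub import AudioSegment

# "narration.mp3" is a placeholder; point this at your own audio export.
audio = AudioSegment.from_file("narration.mp3")

# The Watson demo accepts WAV, so write the clip back out in that format.
audio.export("narration.wav", format="wav")
```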
To upload audio, click on Select Audio File on the app page. Browse to your WAV file and choose it. Watson also supports Spanish, French, Portuguese and Japanese, so make sure to choose the right language from the dropdown if it isn't English.
Note: if you use an ad blocking plugin in your web browser, disable it for the IBM website. The demo might not function correctly with an ad blocker enabled.
A few minutes later, you'll have a transcription of the audio. At this stage, don't worry about correcting every misrecognized word. Copy the text from the transcription box and save it into a text file, using an app like Notepad on Windows or TextEdit on Mac.
If you are a capable programmer, IBM offers a much more in-depth way to use the Watson technology. Check out IBM's tutorial on how to use the full Watson capabilities programmatically.
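To give you a feel for the programmatic route, here's a minimal sketch using IBM's ibm-watson Python SDK. Treat it as a starting point rather than a drop-in script: the API key, service URL, and filename are placeholders you'd replace with values from your own IBM Cloud account.

```python
# A minimal sketch of calling Watson Speech to Text from Python, assuming
# the ibm-watson SDK (pip install ibm-watson). The credentials below are
# placeholders; use the API key and service URL from your IBM Cloud account.
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
speech_to_text = SpeechToTextV1(authenticator=authenticator)
speech_to_text.set_service_url("https://api.us-south.speech-to-text.watson.cloud.ibm.com")

# Send the WAV file you exported from your video editor.
with open("narration.wav", "rb") as audio_file:
    response = speech_to_text.recognize(
        audio=audio_file,
        content_type="audio/wav",
    ).get_result()

# Stitch the recognized chunks into one rough transcript, ready to save
# as the text file you'll upload to Amara in the next step.
transcript = " ".join(
    result["alternatives"][0]["transcript"].strip()
    for result in response["results"]
)
print(transcript)
```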
Use Amara to Create a Finished Subtitle File
Now you have a rough transcription of your audio. Next you'll synchronize it with the video.
For this step we'll use Amara, a service that's designed to do just that. Amara can help us generate a finished subtitle file that can be used on many different services. Jump over to Amara's site and sign up for a free account to get started.
After you've signed up for Amara, you'll need to link to your video file. Amara doesn't support uploading video directly to its site, so you'll need to get the file online first. I have an FTP server, so I'll often upload the video to my server and link to it. If you don't have a web server to use, you can upload to Vimeo, for example, and use it as a temporary host.
Paste in a link to your video, hit Begin, and Amara will load the video into its subtitling system. Now it's time to upload the transcription Watson generated for you. Click on Upload them directly on the left side of the screen to upload the text file to Amara.
Once you click on the Upload them directly option, you'll need to point Amara to the text file that you saved in the last step. Also, specify the language of the text and the video language. Once you press Upload, Amara will load the generated text.
Click on the language of the subtitle on the left side. Then press Edit Subtitles on the next screen to adjust the timings of your subtitles.
Check out the video below to walk through the time sync process:
You could create subtitles using Amara alone, but that requires typing the captions from scratch. IBM Watson gives us a great starting point, and pairs well with Amara to get the timing right.
One of the perks of using Amara is that it's great for collaboration and translation. If you have a skilled translator, they can just as easily link to your video and generate subtitles in a different language.
3. Descript (Subscription Service)
Descript is a relatively new tool that takes a refreshing approach to editing audio and creating transcripts, all in one package. It features a slick interface and desktop apps.
A free account unlocks three hours of audio processing. To get started, I created an account, then opened an audio file in the app.
After a few minutes, Descript generated captions as accurate as those from any tool I've tried. Here's the magic: your transcribed audio appears just like a document. You can scroll through and review your take easily using the efficient interface.
But while you're reviewing (and adjusting) your transcription, you can also edit your audio on the timeline. And notice those beautiful text annotations on top of the waveform!
Past tools always felt like a choice between text-centric and audio-centric approaches. With Descript, it feels like a "best of both worlds" where you can think in audio and see the text output.
Because Descript has built-in recording tools, you could use it as a fully featured recording center. Imagine using it to create a podcast and export a finished file that includes a ready-to-share transcript.
When you're finished, you can export a subtitle file or jump directly to the video editing tool of your choice.
Accessibility Matters to Your Audience
You can't ignore the fact that your viewers may have disabilities that must be accommodated. It's not just a matter of what's right; it'll soon be the law. More than that, though, subtitling or captioning your video is a great way to make your work stand out and connect with audiences. The good news is that recent developments in speech-to-text AI have helped us do this without much additional work, as you've seen in this tutorial.
What other accessibility challenges should we think about when creating content? Also, how are you adding captions to your video productions? Let me know in the comments section.
Learn Video Editing
We've built a complete guide to help you learn how to edit videos, whether you're just getting started with the basics or you want to master video editing and post-production.
- iMovie vs Final Cut Pro: What's the Best Video macOS App for Your Needs?
- Video Editing Software Compared: Apple's Final Cut Pro X vs Adobe's Premiere Pro
- Adobe Rush or Adobe Premiere: Which Video Editing Tool Is Right for You?
- After Effects vs Premiere Pro: for Motion Graphics and Video Editing