FFmpeg

From MTULUG
Revision as of 20:02, 4 June 2022 by Sjwhitak (talk | contribs) (More verbose)

FFmpeg is a really complicated, all-in-one piece of software for working with video and encoders. Package managers typically ship a build without the non-free encoders, since some encoders are non-free. So, you'll need to download it yourself from https://ffmpeg.org/download.html unless you really care about the notion that there's non-free software on your computer.

FFmpeg is mainly the backend to software like Sony Vegas and OpenShot, but you can use FFmpeg directly for small video edits like re-encoding, or for scripting edits across multiple files at once. Since FFmpeg is such a behemoth of a program, here are some examples.

Simple commands

I consider a simple edit to be any edit that does not re-encode your streams.

user $ffmpeg -i video.mp4 -ss 1:00 -to 2:00 -c copy out.mp4

This cuts the clip from the 1 minute mark to the 2 minute mark. Note that -to takes an end point while -t takes a duration: -t 2:00 here would keep 2 minutes of video and end at 3:00 instead. -c copy keeps this edit from re-encoding.
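FFmpeg's -t takes a duration while -to takes an end timestamp, which is easy to mix up. Here's a quick Python sketch to sanity-check the clip math (to_seconds is a made-up helper, not part of FFmpeg):

```python
def to_seconds(ts):
    """Convert a mm:ss (or h:mm:ss) timestamp to seconds."""
    seconds = 0
    for part in ts.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

start = to_seconds("1:00")   # -ss 1:00 -> start 60 s in
end = to_seconds("2:00")     # -to 2:00 -> stop at 120 s

print(end - start)           # -to 2:00 keeps 60 s of video
print(to_seconds("2:00"))    # -t 2:00 would keep 120 s, ending at 3:00
```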

user $ffmpeg -i video.mp4 -an -c copy out.mp4

This removes the audio from the video, if there was audio previously, while copying the remaining streams untouched.

user $ffmpeg -re -i out.mp4 -acodec copy -vcodec copy -f flv -y rtmp://localhost/live/livestream

This will stream your video to SRS (Simple Realtime Server) over RTMP.

Complicated edits

I consider a complicated edit to be any edit that re-encodes your streams. Encoding takes a long time: expect roughly real time (about 1 hour of processing for a 1 hour video) for any command that re-encodes your streams.

At this point, we focus on properly handling streams. In FFmpeg, streams are video, audio, or subtitles. If an input has multiple video streams, they are addressed as:

0:v:0
0:v:1
0:v:2
...

This shows 3 different video streams on your first input. You can have multiple inputs, for example:

user $ffmpeg -i movie.mp4 -i overlay.mp4 -i music.mp4 ...

In this case, you'll have 3 inputs. If the first mp4 has 3 video streams and the other two have one each, you address them like so:

0:v:0
0:v:1
0:v:2
1:v:0
2:v:0

In practice there's almost never more than one video stream, but multiple audio streams are common for dubs in different languages. Audio streams are:

0:a:0

and subtitle streams are:

0:s:0
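Stream specifiers compose mechanically: input index, stream type, stream index within that type. As a sketch, here's a small Python helper (hypothetical, just for illustration) that builds -map arguments from triples:

```python
def map_args(specs):
    """Build ffmpeg -map arguments from (input, type, index) triples."""
    args = []
    for inp, kind, idx in specs:
        args += ["-map", f"{inp}:{kind}:{idx}"]
    return args

# First input's video and subtitles, second input's audio:
print(map_args([(0, "v", 0), (0, "s", 0), (1, "a", 0)]))
# -> ['-map', '0:v:0', '-map', '0:s:0', '-map', '1:a:0']
```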

Rate modification

user $ffmpeg -i phone_video.mp4 -b:v 2M out.mp4

Your phone records at an extremely high bitrate, and the file doesn't need to be that massive. Reducing the video bitrate (-b:v, here to 2 Mbit/s) is a good way to shrink your files before uploading them to a server.
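File size is roughly bitrate times duration, so you can estimate the output size before encoding. A back-of-the-envelope sketch, assuming the video stream dominates the file:

```python
def estimated_size_mb(bitrate_bits_per_s, duration_s):
    """Rough output size in megabytes: bitrate * duration, bits -> bytes."""
    return bitrate_bits_per_s * duration_s / 8 / 1_000_000

# A 10-minute phone clip re-encoded at -b:v 2M:
print(estimated_size_mb(2_000_000, 600))  # -> 150.0 (about 150 MB)
```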

user $ffmpeg -i music.mp3 -ar 44100 out.mp3

This sets your audio sample rate to 44100 Hz, or 44.1 kHz, a standard sample rate for audio playback.

Subtitle example

Suppose you have a .srt or .ass file with your subtitles and you want to hardcode (burn in) them before streaming.

user $ffmpeg -i movie.mp4 -vf "subtitles=sub.srt" out.mp4

More typically, subtitles are embedded inside an mkv or avi container, so you'll need to hardcode from a subtitle stream inside the file itself. How depends on the subtitle stream: if it is a text-based stream, the subtitles filter can read it straight out of the container:

user $ffmpeg -i movie.mkv -vf "subtitles=movie.mkv" out.mp4

If you have a bitmap-based subtitle stream (dvb is bitmap-based, srt/ass is text-based), you need to overlay the subtitles onto the movie itself.

user $ffmpeg -i movie.mkv -filter_complex "[0:v:0][0:s:0]overlay" out.mp4
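Which command you need follows from the subtitle codec. Here's a sketch that picks the approach from a codec name (the codec names are real FFmpeg decoder names; the helper itself is hypothetical):

```python
TEXT_SUBS = {"subrip", "ass", "ssa", "mov_text"}
BITMAP_SUBS = {"dvd_subtitle", "dvb_subtitle", "hdmv_pgs_subtitle"}

def burn_in_args(infile, codec_name):
    """Return ffmpeg arguments that hardcode the first subtitle stream."""
    if codec_name in TEXT_SUBS:
        # Text subs: the subtitles filter reads them from the container.
        return ["-i", infile, "-vf", f"subtitles={infile}"]
    if codec_name in BITMAP_SUBS:
        # Bitmap subs: overlay the subtitle pictures onto the video.
        return ["-i", infile, "-filter_complex", "[0:v:0][0:s:0]overlay"]
    raise ValueError(f"unknown subtitle codec: {codec_name}")

print(burn_in_args("movie.mkv", "subrip")[-1])  # subtitles=movie.mkv
```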

Encoders

The following is a verbose manner of defining encoders:

user $ffmpeg -i movie.mkv -vcodec copy -acodec copy -scodec copy -map 0:v:0 -map 0:a:0 -map 0:s:0 out.mp4

The video codec (-vcodec), audio codec (-acodec), and subtitle codec (-scodec) apply to the mapped video (0:v:0), audio (0:a:0), and subtitle (0:s:0) streams in the output file (out.mp4). The term copy for each codec simply passes the stream through with whatever codec the original file, movie.mkv, already uses.

If you want to use a specific codec,

user $ffmpeg -i movie.mkv -vcodec libx264 -scodec copy -map 0:v:0 -map 0:s:0 -map 0:a:0 out.mp4

Typically, audio codecs are standardized and each output format will accept them, so you mostly choose the video codec. libx264 is used for mp4 files and libvpx-vp9 for webm files; x264 produces patent-encumbered H.264, while VP9 is royalty-free. To go from .mp4 to .webm to reduce the video size:

user $ffmpeg -i video.mp4 -c:v libvpx-vp9 -b:v 1M -an out.webm

-b:v sets the video bitrate to 1 Mbit/s, which should keep a short clip below 2 MB, and -an removes the audio, which should allow you to post your video on a certain website.
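To check how long a clip can be and still fit under a size cap, invert the size-from-bitrate formula. A sketch, assuming the audio track is stripped so only video counts:

```python
def max_duration_s(size_cap_bytes, video_bitrate_bps):
    """Longest clip (seconds) that fits under a size cap at a given bitrate."""
    return size_cap_bytes * 8 / video_bitrate_bps

# A 2 MB cap at -b:v 1M leaves room for about 16 seconds of video:
print(max_duration_s(2_000_000, 1_000_000))  # -> 16.0
```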