Installation

Building from source:
https://blog.csdn.net/qq_41004932/article/details/117049095
Start with that article's section on fixing the missing ffplay binary.
Alternatively, install in one step with sudo apt-get install ffmpeg.

Official documentation
https://ffmpeg.org/ffmpeg.html
No need to read all of it up front; consult it when questions come up.
1. Synopsis

ffmpeg [global_options] {[input_file_options] -i INPUT_FILE} ... {[output_file_options] OUTPUT_FILE} ...
2. Description

ffmpeg reads from an arbitrary number of input "files" (which can be regular files, pipes, network streams, grabbing devices, etc.), specified by the -i option, and writes to an arbitrary number of output "files", which are specified by a plain output filename. Anything found on the command line which cannot be interpreted as an option is considered to be an output filename. Each input or output file can, in principle, contain any number of streams of different types (video/audio/subtitle/attachment/data). The allowed number and types of streams in an output file are limited by the output container format. Mapping input streams to output streams can be done automatically or with the -map option (see the Stream selection chapter).
To refer to input files in options, you must use their indices (0-based): the first input file is 0, the second is 1, and so on. Similarly, streams within a file are referred to by their indices: 2:3 refers to the fourth stream in the third input file. See also the Stream specifiers chapter.
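The zero-based indexing can be sketched in Python. This is only a toy parser for illustration, not FFmpeg's actual stream-specifier grammar (which also allows stream types, labels, and metadata matches):

```python
def parse_stream_ref(ref: str) -> tuple[int, int]:
    """Split a zero-based "file:stream" reference into its two indices."""
    file_part, stream_part = ref.split(":")
    return int(file_part), int(stream_part)

# "2:3" -> input file index 2 (the third file), stream index 3 (its fourth stream)
print(parse_stream_ref("2:3"))  # (2, 3)
```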
As a general rule, options are applied to the next specified file. Order therefore matters, and the same option can appear on the command line multiple times; each occurrence is then applied to the next input or output file. Exceptions to this rule (e.g. verbosity level) are stated explicitly. Do not mix input and output files: first specify all input files, then all output files. Also do not mix options which belong to different files: every option applies only to the next input or output file and is reset between files.
Examples:

- Set the video bitrate of the output file to 64 kbit/s:
ffmpeg -i input.avi -b:v 64k -bufsize 64k output.avi

- Force the frame rate of the output file to 24 fps:
ffmpeg -i input.avi -r 24 output.avi

- Force the frame rate of the input file (valid for raw formats only) to 1 fps, and the frame rate of the output file to 24 fps:
ffmpeg -r 1 -i input.m2v -r 24 output.avi
3. Detailed description
ffmpeg
builds a transcoding pipeline out of the components listed below. The program’s operation then consists of input data chunks flowing from the sources down the pipes towards the sinks, while being transformed by the components they encounter along the way.
The following kinds of components are available:
- Demuxers (short for "demultiplexers") read an input source in order to extract
  - global properties such as metadata or chapters;
  - a list of input elementary streams and their properties.
One demuxer instance is created for each -i option, and sends encoded packets to decoders or muxers.
In other literature, demuxers are sometimes called splitters, because their main function is splitting a file into elementary streams (though some files only contain one elementary stream).
┌──────────┬───────────────────────┐
│ demuxer  │                       │ packets for stream 0
╞══════════╡  elementary stream 0  ├──────────────────────⮞
│          │                       │
│  global  ├───────────────────────┤
│properties│                       │ packets for stream 1
│   and    │  elementary stream 1  ├──────────────────────⮞
│ metadata │                       │
│          ├───────────────────────┤
│          │                       │
│          │      ...........      │
│          │                       │
│          ├───────────────────────┤
│          │                       │ packets for stream N
│          │  elementary stream N  ├──────────────────────⮞
│          │                       │
└──────────┴───────────────────────┘
     ⯅
     │
     │ read from file, network stream,
     │     grabbing device, etc.
     │
- Decoders receive encoded (compressed) packets for an audio, video, or subtitle elementary stream, and decode them into raw frames (arrays of pixels for video, PCM for audio). A decoder is typically associated with (and receives its input from) an elementary stream in a demuxer, but sometimes may also exist on its own (see Loopback decoders).
             ┌─────────┐
 packets     │         │ raw frames
────────────⮞│ decoder ├────────────⮞
             │         │
             └─────────┘
- Filtergraphs process and transform raw audio or video frames. A filtergraph consists of one or more individual filters linked into a graph. Filtergraphs come in two flavors - simple and complex, configured with the -filter and -filter_complex options, respectively.
A simple filtergraph is associated with an output elementary stream; it receives the input to be filtered from a decoder and sends filtered output to that output stream’s encoder.
A simple video filtergraph that performs deinterlacing (using the yadif deinterlacer) followed by resizing (using the scale filter) can look like this:

             ┌────────────────────────┐
             │   simple filtergraph   │
 frames from ╞════════════════════════╡ frames for
 a decoder   │  ┌───────┐  ┌───────┐  │ an encoder
────────────⮞├─⮞│ yadif ├─⮞│ scale ├─⮞├────────────⮞
             │  └───────┘  └───────┘  │
             └────────────────────────┘
A complex filtergraph is standalone and not associated with any specific stream. It may have multiple (or zero) inputs, potentially of different types (audio or video), each of which receives data either from a decoder or from another complex filtergraph’s output. It also has one or more outputs that feed either an encoder or another complex filtergraph’s input.
The following example diagram represents a complex filtergraph with 3 inputs and 2 outputs (all video):
          ┌─────────────────────────────────────────────────┐
          │               complex filtergraph               │
          ╞═════════════════════════════════════════════════╡
 frames   ├───────┐  ┌─────────┐      ┌─────────┐  ┌────────┤ frames
─────────⮞│input 0├─⮞│ overlay ├─────⮞│ overlay ├─⮞│output 0├────────⮞
          ├───────┘  │         │      │         │  └────────┤
 frames   ├───────┐╭⮞│         │    ╭⮞│         │           │
─────────⮞│input 1├╯ └─────────┘    │ └─────────┘           │
          ├───────┘                 │                       │
 frames   ├───────┐ ┌─────┐ ┌─────┬─╯              ┌────────┤ frames
─────────⮞│input 2├⮞│scale├⮞│split├───────────────⮞│output 1├────────⮞
          ├───────┘ └─────┘ └─────┘                └────────┤
          └─────────────────────────────────────────────────┘
Frames from the second input are overlaid over those from the first. Frames from the third input are rescaled, then duplicated into two identical streams. One of them is overlaid over the combined first two inputs, with the result exposed as the filtergraph’s first output. The other duplicate ends up being the filtergraph’s second output.
- Encoders receive raw audio, video, or subtitle frames and encode them into encoded packets. The encoding (compression) process is typically lossy - it degrades stream quality to make the output smaller; some encoders are lossless, but at the cost of much higher output size. A video or audio encoder receives its input from some filtergraph’s output, subtitle encoders receive input from a decoder (since subtitle filtering is not supported yet). Every encoder is associated with some muxer’s output elementary stream and sends its output to that muxer.
A schematic representation of an encoder looks like this:
             ┌─────────┐
 raw frames  │         │ packets
────────────⮞│ encoder ├─────────⮞
             │         │
             └─────────┘
- Muxers (short for "multiplexers") receive encoded packets for their elementary streams from encoders (the transcoding path) or directly from demuxers (the streamcopy path), interleave them (when there is more than one elementary stream), and write the resulting bytes into the output file (or pipe, network stream, etc.).
                       ┌──────────────────────┬───────────┐
 packets for stream 0  │                      │   muxer   │
──────────────────────⮞│ elementary stream 0  ╞═══════════╡
                       │                      │           │
                       ├──────────────────────┤  global   │
 packets for stream 1  │                      │properties │
──────────────────────⮞│ elementary stream 1  │    and    │
                       │                      │ metadata  │
                       ├──────────────────────┤           │
                       │                      │           │
                       │     ...........      │           │
                       │                      │           │
                       ├──────────────────────┤           │
 packets for stream N  │                      │           │
──────────────────────⮞│ elementary stream N  │           │
                       │                      │           │
                       └──────────────────────┴─────┬─────┘
                                                    │
                write to file, network stream,      │
                    grabbing device, etc.           │
                                                    │
                                                    ▼
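The interleaving step can be illustrated with a toy model in Python. This is only a sketch of the idea, not how a real muxer works (real muxers interleave by timestamps with far more bookkeeping, buffering limits, etc.):

```python
import heapq

def interleave(streams):
    """Merge per-stream packet lists, each already sorted by DTS,
    into one DTS-ordered packet sequence (what a muxer writes out)."""
    return list(heapq.merge(*streams, key=lambda pkt: pkt[0]))

video = [(0, "v0"), (40, "v1"), (80, "v2")]   # (dts, payload)
audio = [(0, "a0"), (21, "a1"), (42, "a2")]
for dts, payload in interleave([video, audio]):
    print(dts, payload)
```

heapq.merge is stable, so for equal timestamps packets from the earlier stream list come first.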
3.1 Streamcopy
The simplest pipeline in ffmpeg is single-stream streamcopy, that is copying one input elementary stream’s packets without decoding, filtering, or encoding them. As an example, consider an input file called INPUT.mkv with 3 elementary streams, from which we take the second and write it to file OUTPUT.mp4.
┌──────────┬─────────────────────┐
│ demuxer  │                     │ unused
╞══════════╡ elementary stream 0 ├────────╳
│          │                     │
│INPUT.mkv ├─────────────────────┤          ┌──────────────────────┬───────────┐
│          │                     │ packets  │                      │   muxer   │
│          │ elementary stream 1 ├─────────⮞│ elementary stream 0  ╞═══════════╡
│          │                     │          │                      │OUTPUT.mp4 │
│          ├─────────────────────┤          └──────────────────────┴───────────┘
│          │                     │ unused
│          │ elementary stream 2 ├────────╳
│          │                     │
└──────────┴─────────────────────┘
Copy the second stream of INPUT.mkv into OUTPUT.mp4:

ffmpeg -i INPUT.mkv -map 0:1 -c copy OUTPUT.mp4
- there is a single input INPUT.mkv;
- there are no input options for this input;
- there is a single output OUTPUT.mp4;
- there are two output options for this output:
  - -map 0:1 selects the input stream to be used - from input with index 0 (i.e. the first one), the stream with index 1 (i.e. the second one);
  - -c copy selects the copy encoder, i.e. streamcopy with no decoding or encoding.
Streamcopy is useful for changing the elementary stream count, container format, or modifying container-level metadata. Since there is no decoding or encoding, it is very fast and there is no quality loss. However, it might not work in some cases because of a variety of factors (e.g. certain information required by the target container is not available in the source). Applying filters is obviously also impossible, since filters work on decoded frames.
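For scripting, a streamcopy invocation like the one above can be assembled programmatically. The helper below is hypothetical (not an FFmpeg API); it merely builds the argv list that could then be handed to subprocess.run:

```python
def streamcopy_cmd(src: str, stream_spec: str, dst: str) -> list[str]:
    """Build the argv for a single-stream streamcopy (no decode/encode)."""
    return ["ffmpeg", "-i", src, "-map", stream_spec, "-c", "copy", dst]

print(" ".join(streamcopy_cmd("INPUT.mkv", "0:1", "OUTPUT.mp4")))
# ffmpeg -i INPUT.mkv -map 0:1 -c copy OUTPUT.mp4
```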
More complex streamcopy scenarios can be constructed - e.g. combining streams from two input files into a single output:
┌──────────┬─────────────────────┐          ┌───────────────────┬───────────┐
│ demuxer  │                     │ packets  │                   │  muxer 0  │
╞══════════╡ elementary stream 0 ├─────────⮞│elementary stream 0╞═══════════╡
│          │                     │          │                   │OUTPUT0.mp4│
│INPUT.mkv ├─────────────────────┤          └───────────────────┴───────────┘
│          │                     │ packets  ┌───────────────────┬───────────┐
│          │ elementary stream 1 ├─────────⮞│                   │  muxer 1  │
│          │                     │          │elementary stream 0╞═══════════╡
└──────────┴─────────────────────┘          │                   │OUTPUT1.mp4│
                                            └───────────────────┴───────────┘
Copy two streams from one input into separate outputs:

ffmpeg -i INPUT.mkv -map 0:0 -c copy OUTPUT0.mp4 -map 0:1 -c copy OUTPUT1.mp4
Note how a separate instance of the -c option is needed for every output file even though their values are the same. This is because non-global options (which is most of them) only apply in the context of the file before which they are placed.
These examples can of course be further generalized into arbitrary remappings of any number of inputs into any number of outputs.
3.2 Transcoding
Transcoding is the process of decoding a stream and then encoding it again. Since encoding tends to be computationally expensive and in most cases degrades the stream quality (i.e. it is lossy), you should only transcode when you need to and perform streamcopy otherwise. Typical reasons to transcode are:
- applying filters - e.g. resizing, deinterlacing, or overlaying video; resampling or mixing audio;
- you want to feed the stream to something that cannot decode the original codec.
Note that ffmpeg will transcode all audio, video, and subtitle streams unless you specify -c copy for them.
Consider an example pipeline that reads an input file with one audio and one video stream, transcodes the video, and copies the audio into a single output file. This can be schematically represented as follows:
┌──────────┬─────────────────────┐
│ demuxer  │                     │    audio packets
╞══════════╡  stream 0 (audio)   ├─────────────────────────────────────╮
│          │                     │                                     │
│INPUT.mkv ├─────────────────────┤  video   ┌─────────┐ raw            │
│          │                     │ packets  │  video  │ video frames   │
│          │  stream 1 (video)   ├─────────⮞│ decoder ├──────────────╮ │
│          │                     │          │         │              │ │
└──────────┴─────────────────────┘          └─────────┘              │ │
                                                                     ▼ ▼
                                                                     │ │
┌──────────┬─────────────────────┐  video   ┌─────────┐              │ │
│ muxer    │                     │ packets  │  video  │              │ │
╞══════════╡  stream 0 (video)   │⮜─────────┤ encoder ├──────────────╯ │
│          │                     │          │(libx264)│                │
│OUTPUT.mp4├─────────────────────┤          └─────────┘                │
│          │                     │                                     │
│          │  stream 1 (audio)   │⮜────────────────────────────────────╯
│          │                     │
└──────────┴─────────────────────┘
Read INPUT.mkv, copy the audio, and encode the video with libx264:

ffmpeg -i INPUT.mkv -map 0:v -map 0:a -c:v libx264 -c:a copy OUTPUT.mp4
Note how it uses stream specifiers :v and :a to select input streams and apply different values of the -c option to them.
3.3 Filtering
When transcoding, audio and video streams can be filtered before encoding, with either a simple or complex filtergraph.
3.3.1 Simple filtergraphs
Simple filtergraphs are those that have exactly one input and output, both of the same type (audio or video). They are configured with the per-stream -filter option (with -vf and -af aliases for -filter:v (video) and -filter:a (audio) respectively). Note that simple filtergraphs are tied to their output stream, so e.g. if you have multiple audio streams, -af will create a separate filtergraph for each one.
Taking the transcoding example from above, adding filtering (and omitting audio, for clarity) makes it look like this:
┌──────────┬───────────────┐
│ demuxer  │               │          ┌─────────┐
╞══════════╡ video stream  │ packets  │  video  │ frames
│INPUT.mkv │               ├─────────⮞│ decoder ├─────⮞───╮
│          │               │          └─────────┘         │
└──────────┴───────────────┘                              │
                                  ╭───────────⮜───────────╯
                                  │   ┌────────────────────────┐
                                  │   │   simple filtergraph   │
                                  │   ╞════════════════════════╡
                                  │   │  ┌───────┐  ┌───────┐  │
                                  ╰──⮞├─⮞│ yadif ├─⮞│ scale ├─⮞├╮
                                      │  └───────┘  └───────┘  ││
                                      └────────────────────────┘│
                                                                │
                                                                │
┌──────────┬───────────────┐  video   ┌─────────┐               │
│ muxer    │               │ packets  │  video  │               │
╞══════════╡ video stream  │⮜─────────┤ encoder ├───────⮜───────╯
│OUTPUT.mp4│               │          │         │
│          │               │          └─────────┘
└──────────┴───────────────┘
3.3.2 Complex filtergraphs
These are global.

Complex filtergraphs are those which cannot be described as simply a linear processing chain applied to one stream. This is the case, for example, when the graph has more than one input and/or output, or when output stream type is different from input. Complex filtergraphs are configured with the -filter_complex option. Note that this option is global, since a complex filtergraph, by its nature, cannot be unambiguously associated with a single stream or file. Each instance of -filter_complex creates a new complex filtergraph, and there can be any number of them.
A trivial example of a complex filtergraph is the overlay filter, which has two video inputs and one video output, containing one video overlaid on top of the other. Its audio counterpart is the amix filter.
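The link-label syntax that -filter_complex uses can be generated mechanically. The helper below is hypothetical and only illustrates how labels wrap stream specifiers around a two-input filter such as overlay:

```python
def overlay_expr(base: str = "0:v", top: str = "1:v", out: str = "out") -> str:
    """Build a -filter_complex expression overlaying `top` on `base`."""
    return f"[{base}][{top}]overlay[{out}]"

# e.g. ffmpeg -i video.mkv -i image.png -filter_complex "[0:v][1:v]overlay[out]" -map "[out]" out.mkv
print(overlay_expr())  # [0:v][1:v]overlay[out]
```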
3.4 Loopback decoders
While decoders are normally associated with demuxer streams, it is also possible to create "loopback" decoders that decode the output from some encoder and allow it to be fed back to complex filtergraphs. This is done with the -dec directive, which takes as a parameter the index of the output stream that should be decoded. Every such directive creates a new loopback decoder, indexed with successive integers starting at zero. These indices should then be used to refer to loopback decoders in complex filtergraph link labels, as described in the documentation for -filter_complex.
Decoding AVOptions can be passed to loopback decoders by placing them before -dec, analogously to input/output options.
TODO: I haven't fully understood the following passage yet.
E.g. the following example:
1 ffmpeg -i INPUT \
2   -map 0:v:0 -c:v libx264 -crf 45 -f null - \
3   -threads 3 -dec 0:0 \
4   -filter_complex '[0:v][dec:0]hstack[stack]' \
5   -map '[stack]' -c:v ffv1 OUTPUT
reads an input video and
- (line 2) encodes it with libx264 at low quality;
- (line 3) decodes this encoded stream using 3 threads;
- (line 4) places decoded video side by side with the original input video;
- (line 5) combined video is then losslessly encoded and written into OUTPUT.
Such a transcoding pipeline can be represented with the following diagram:
┌──────────┬───────────────┐
│ demuxer  │               │   ┌─────────┐            ┌─────────┐    ┌────────────────────┐
╞══════════╡ video stream  │   │  video  │            │  video  │    │     null muxer     │
│ INPUT    │               ├──⮞│ decoder ├──┬────────⮞│ encoder ├─┬─⮞│(discards its input)│
│          │               │   └─────────┘  │         │(libx264)│ │  └────────────────────┘
└──────────┴───────────────┘                │         └─────────┘ │
                                 ╭───────⮜──╯   ┌─────────┐       │
                                 │              │loopback │       │
                                 │ ╭─────⮜──────┤ decoder ├────⮜──╯
                                 │ │            └─────────┘
                                 │ │
                                 │ │
                                 │ │  ┌───────────────────┐
                                 │ │  │complex filtergraph│
                                 │ │  ╞═══════════════════╡
                                 │ │  │  ┌─────────────┐  │
                                 ╰─╫─⮞├─⮞│   hstack    ├─⮞├╮
                                   ╰─⮞├─⮞│             │  ││
                                      │  └─────────────┘  ││
                                      └───────────────────┘│
                                                           │
┌──────────┬───────────────┐  ┌─────────┐                  │
│ muxer    │               │  │  video  │                  │
╞══════════╡ video stream  │⮜─┤ encoder ├───────⮜──────────╯
│ OUTPUT   │               │  │ (ffv1)  │
│          │               │  └─────────┘
└──────────┴───────────────┘
4 Stream selection
ffmpeg provides the -map option for manual control of stream selection in each output file. Users can skip -map and let ffmpeg perform automatic stream selection as described below. The -vn / -an / -sn / -dn options can be used to skip inclusion of video, audio, subtitle and data streams respectively, whether manually mapped or automatically selected, except for those streams which are outputs of complex filtergraphs.
4.1 Description
The sub-sections that follow describe the various rules that are involved in stream selection. The examples that follow next show how these rules are applied in practice.
While every effort is made to accurately reflect the behavior of the program, FFmpeg is under continuous development and the code may have changed since the time of this writing.
4.1.1 Automatic stream selection
In the absence of any map options for a particular output file, ffmpeg inspects the output format to check which type of streams can be included in it, viz. video, audio and/or subtitles. For each acceptable stream type, ffmpeg will pick one stream, when available, from among all the inputs.
It will select that stream based upon the following criteria:
- for video, it is the stream with the highest resolution,
- for audio, it is the stream with the most channels,
- for subtitles, it is the first subtitle stream found, but there's a caveat: the output format's default subtitle encoder can be either text-based or image-based, and only a subtitle stream of the same type will be chosen.
In the case where several streams of the same type rate equally, the stream with the lowest index is chosen.
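The criteria above can be sketched in Python. This is a toy model of the described rules, not FFmpeg's actual code (subtitles and the container's accepted types are omitted); streams are plain dicts here:

```python
def pick_video(streams):
    """Highest resolution wins; ties go to the lowest stream index."""
    vids = [s for s in streams if s["type"] == "video"]
    return min(vids, key=lambda s: (-s["width"] * s["height"], s["index"])) if vids else None

def pick_audio(streams):
    """Most channels wins; ties go to the lowest stream index."""
    auds = [s for s in streams if s["type"] == "audio"]
    return min(auds, key=lambda s: (-s["channels"], s["index"])) if auds else None

streams = [
    {"index": 0, "type": "video", "width": 640, "height": 360},
    {"index": 1, "type": "audio", "channels": 2},
    {"index": 2, "type": "video", "width": 1920, "height": 1080},
    {"index": 3, "type": "audio", "channels": 6},
]
print(pick_video(streams)["index"], pick_audio(streams)["index"])  # 2 3
```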
Data or attachment streams are not automatically selected and can only be included using -map.
4.1.2 Manual stream selection
When -map is used, only user-mapped streams are included in that output file, with one possible exception for filtergraph outputs described below.
4.1.3 Complex filtergraphs
If there are any complex filtergraph output streams with unlabeled pads, they will be added to the first output file. This will lead to a fatal error if the stream type is not supported by the output format. In the absence of the map option, the inclusion of these streams leads to the automatic stream selection of their types being skipped. If map options are present, these filtergraph streams are included in addition to the mapped streams.
Complex filtergraph output streams with labeled pads must be mapped once and exactly once.
4.1.4 Stream handling
Stream handling is independent of stream selection, with an exception for subtitles described below. Stream handling is set via the -codec option addressed to streams within a specific output file. In particular, codec options are applied by ffmpeg after the stream selection process and thus do not influence the latter. If no -codec option is specified for a stream type, ffmpeg will select the default encoder registered by the output file muxer.
An exception exists for subtitles. If a subtitle encoder is specified for an output file, the first subtitle stream found of any type, text or image, will be included. ffmpeg does not validate if the specified encoder can convert the selected stream or if the converted stream is acceptable within the output format. This applies generally as well: when the user sets an encoder manually, the stream selection process cannot check if the encoded stream can be muxed into the output file. If it cannot, ffmpeg will abort and all output files will fail to be processed.
4.2 Examples
The following examples illustrate the behavior, quirks and limitations of ffmpeg's stream selection methods.
They assume the following three input files.
input file 'A.avi'
  stream 0: video 640x360
  stream 1: audio 2 channels

input file 'B.mp4'
  stream 0: video 1920x1080
  stream 1: audio 2 channels
  stream 2: subtitles (text)
  stream 3: audio 5.1 channels
  stream 4: subtitles (text)

input file 'C.mkv'
  stream 0: video 1280x720
  stream 1: audio 2 channels
  stream 2: subtitles (image)
Example: automatic stream selection
ffmpeg -i A.avi -i B.mp4 out1.mkv out2.wav -map 1:a -c:a copy out3.mov
There are three output files specified, and for the first two, no -map options are set, so ffmpeg will select streams for these two files automatically.
out1.mkv is a Matroska container file and accepts video, audio and subtitle streams, so ffmpeg will try to select one of each type.
For video, it will select stream 0 from B.mp4, which has the highest resolution among all the input video streams.
For audio, it will select stream 3 from B.mp4, since it has the greatest number of channels.
For subtitles, it will select stream 2 from B.mp4, which is the first subtitle stream from among A.avi and B.mp4.
out2.wav accepts only audio streams, so only stream 3 from B.mp4 is selected.
For out3.mov, since a -map option is set, no automatic stream selection will occur. The -map 1:a option will select all audio streams from the second input B.mp4. No other streams will be included in this output file.
For the first two outputs, all included streams will be transcoded. The encoders chosen will be the default ones registered by each output format, which may not match the codec of the selected input streams.
For the third output, the codec option for audio streams has been set to copy, so no decoding-filtering-encoding operations will occur, or can occur. Packets of selected streams shall be conveyed from the input file and muxed within the output file.
Example: automatic subtitles selection
ffmpeg -i C.mkv out1.mkv -c:s dvdsub -an out2.mkv
Although out1.mkv is a Matroska container file which accepts subtitle streams, only a video and audio stream shall be selected. The subtitle stream of C.mkv is image-based and the default subtitle encoder of the Matroska muxer is text-based, so a transcode operation for the subtitles is expected to fail and hence the stream isn't selected. However, in out2.mkv, a subtitle encoder is specified in the command and so, the subtitle stream is selected, in addition to the video stream. The presence of -an disables audio stream selection for out2.mkv.
Example: unlabeled filtergraph outputs
ffmpeg -i A.avi -i C.mkv -i B.mp4 -filter_complex "overlay" out1.mp4 out2.srt
A filtergraph is set up here using the -filter_complex option and consists of a single video filter. The overlay filter requires exactly two video inputs, but none are specified, so the first two available video streams are used, those of A.avi and C.mkv. The output pad of the filter has no label and so is sent to the first output file out1.mp4. Due to this, automatic selection of the video stream is skipped, which would have selected the stream in B.mp4. The audio stream with most channels, viz. stream 3 in B.mp4, is chosen automatically. No subtitle stream is chosen however, since the MP4 format has no default subtitle encoder registered, and the user hasn't specified a subtitle encoder.
The 2nd output file, out2.srt, only accepts text-based subtitle streams. So, even though the first subtitle stream available belongs to C.mkv, it is image-based and hence skipped. The selected stream, stream 2 in B.mp4, is the first text-based subtitle stream.
Example: labeled filtergraph outputs
ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0[outv];overlay;aresample" \
       -map '[outv]' -an        out1.mp4 \
                                out2.mkv \
       -map '[outv]' -map 1:a:0 out3.mkv
The above command will fail, as the output pad labelled [outv] has been mapped twice. None of the output files shall be processed.
ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0[outv];overlay;aresample" \
       -an        out1.mp4 \
                  out2.mkv \
       -map 1:a:0 out3.mkv
This command above will also fail as the hue filter output has a label, [outv], and hasn't been mapped anywhere.
The command should be modified as follows,
ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0,split=2[outv1][outv2];overlay;aresample" \
       -map '[outv1]' -an        out1.mp4 \
                                 out2.mkv \
       -map '[outv2]' -map 1:a:0 out3.mkv
The video stream from B.mp4 is sent to the hue filter, whose output is cloned once using the split filter, and both outputs labelled. Then a copy each is mapped to the first and third output files.
The overlay filter, requiring two video inputs, uses the first two unused video streams. Those are the streams from A.avi and C.mkv. The overlay output isn't labelled, so it is sent to the first output file out1.mp4, regardless of the presence of the -map option.
The aresample filter is sent the first unused audio stream, that of A.avi. Since this filter output is also unlabelled, it too is mapped to the first output file. The presence of -an only suppresses automatic or manual stream selection of audio streams, not outputs sent from filtergraphs. Both these mapped streams shall be ordered before the mapped stream in out1.mp4.
The video, audio and subtitle streams mapped to out2.mkv are entirely determined by automatic stream selection.
out3.mkv consists of the cloned video output from the hue filter and the first audio stream from B.mp4.
5 Options

Not copied here; look them up in the official documentation as needed.
Usage

Tutorial
- Convert input.mp4 to output.avi:

ffmpeg -i input.mp4 output.avi
ffmpeg -i input.avi output.mp4

# -i: specifies an input file
- Set metadata:

# Set a metadata key/value pair.
# An optional metadata specifier may be given to set metadata on streams, chapters or programs. See the -map_metadata documentation for details.
# This option overrides metadata set with -map_metadata. It is also possible to delete metadata by using an empty value.

ffmpeg -i in.avi -metadata title="my title" out.flv
# Set the title in the output file.

ffmpeg -i INPUT -metadata:s:a:0 language=eng OUTPUT
# Set the language of the first audio stream.
- Cut out the part of input.mp4 starting at 00:01:30 with a duration of 30 seconds:

ffmpeg -i "input.mp4" -ss 00:01:30 -t 00:00:30 -c copy "output.mp4"

# -ss: when used as an input option (before -i), seeks in this input file to the given position.
# Note that in most formats it is not possible to seek exactly, so ffmpeg will seek to the closest seek point before the position.
# When transcoding and -accurate_seek is enabled (the default), the extra segment between the seek point and the position will be decoded and discarded.
# When doing streamcopy or when -noaccurate_seek is used, it will be preserved.
# When used as an output option (before an output url), decodes but discards input until the timestamps reach the position.

# -t: when used as an input option (before -i), limits the duration of data read from the input file.
# When used as an output option (before an output url), stops writing the output once its duration reaches the given value.
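The "HH:MM:SS" time values accepted by -ss and -t can be converted to seconds as shown below; this helper is only illustrative (ffmpeg's real time-duration syntax also allows forms like "90", "1:30", and a trailing unit):

```python
def to_seconds(ts: str) -> float:
    """Convert an "HH:MM:SS[.fraction]" time value to seconds."""
    h, m, s = ts.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

print(to_seconds("00:01:30"))  # 90.0
```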
- Merge video.mp4 and audio.mp3 into output.mp4:

ffmpeg -i "video.mp4" -i audio.mp3 -c:v copy -c:a aac -strict experimental "output.mp4"

# -c: selects an encoder (when used before an output file) or a decoder (when used before an input file) for one or more streams.
# codec is the name of a decoder/encoder or the special value copy (output only), which indicates that the stream is not to be re-encoded.
- Encode all video streams with libx264 and copy all audio streams:

ffmpeg -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT
- Copy all streams except the second video stream (which will be encoded with libx264) and the 138th audio stream (which will be encoded with libvorbis):

ffmpeg -i INPUT -map 0 -c copy -c:v:1 libx264 -c:a:137 libvorbis OUTPUT
- Make the second audio stream the default stream:

ffmpeg -i in.mkv -c copy -disposition:a:1 default out.mkv
- Make the second subtitle stream the default stream and remove the default disposition from the first subtitle stream:

ffmpeg -i in.mkv -c copy -disposition:s:0 0 -disposition:s:1 default out.mkv

# Clear a disposition by setting it to 0.
- Add an embedded cover/thumbnail:

ffmpeg -i in.mp4 -i IMAGE -map 0 -map 1 -c copy -c:v:1 png -disposition:v:1 attached_pic out.mp4
- Add the "original" and remove the "comment" disposition flag from the first audio stream without removing its other disposition flags:

ffmpeg -i in.mkv -c copy -disposition:a:0 +original-comment out.mkv

# To remove "original" and add the "comment" disposition flag to the first audio stream without removing its other disposition flags:
ffmpeg -i in.mkv -c copy -disposition:a:0 -original+comment out.mkv

# To set only the "original" and "comment" disposition flags on the first audio stream (and remove its other disposition flags):
ffmpeg -i in.mkv -c copy -disposition:a:0 original+comment out.mkv
- Remove all disposition flags from the first audio stream:

ffmpeg -i in.mkv -c copy -disposition:a:0 0 out.mkv
- Compress video.mp4 into output.mp4, lowering the bitrate to reduce file size:

ffmpeg -i video.mp4 -c:v libx264 -crf 23 -c:a aac -b:a 192k output.mp4

# -r: as an input option, ignores any timestamps stored in the file and generates timestamps assuming a constant frame rate instead;
#     as an output option, duplicates or drops frames before encoding to achieve a constant output frame rate (for video streamcopy, see below).
# -f: forces the input or output file format.
# -b:a: b stands for bitrate; a matches all audio streams.
- Set the bitrate of the output file to 64 kbit/s:

ffmpeg -i input.avi -b:v 64k -bufsize 64k output.mp4
- Set the frame rate of the output file to 24 fps:

ffmpeg -i input.avi -r 24 output.mp4

# -r: sets the frame rate (Hz value, fraction or abbreviation).
# As an input option, ignores any timestamps stored in the file and generates timestamps assuming a constant frame rate of fps instead.
# This is not the same as the -framerate option used by some input formats (such as image2 or v4l2); they used to be the same in older versions of FFmpeg. If in doubt, use -framerate instead of the input option -r.
# As an output option:
#   video encoding: duplicates or drops frames before encoding to achieve a constant output frame rate of fps;
#   video streamcopy: indicates to the muxer that fps is the stream frame rate. No data is dropped or duplicated in this case. This may produce invalid files if fps does not match the actual stream frame rate as determined by packet timestamps. See also the setts bitstream filter.
- Force the frame rate of the input file (valid for raw formats only) to 1 fps and the frame rate of the output file to 24 fps:

ffmpeg -r 1 -i input.m2v -r 24 output.mp4
- Consider an input file called INPUT.mkv with 3 elementary streams, from which we take the second and write it to file OUTPUT.mp4:

ffmpeg -i INPUT.mkv -map 0:1 -c copy OUTPUT.mp4

# 0:1 refers to the second stream of the first input file.
# -c copy: selects the copy encoder, i.e. streamcopy with no decoding or encoding.
- Combine streams from two input files into a single output:

ffmpeg -i INPUT0.mkv -i INPUT1.aac -map 0:0 -map 1:0 -c copy OUTPUT.mp4
- Split multiple streams from a single input into multiple outputs:

ffmpeg -i INPUT.mkv -map 0:0 -c copy OUTPUT0.mp4 -map 0:1 -c copy OUTPUT1.mp4
- Read an input file with one audio and one video stream, transcode the video and copy the audio into a single output file:

ffmpeg -i INPUT.mkv -map 0:v -map 0:a -c:v libx264 -c:a copy OUTPUT.mp4

# ffmpeg will transcode all audio, video, and subtitle streams unless you specify -c copy for them.
- Automatic stream selection:

ffmpeg -i A.avi -i B.mp4 out1.mkv out2.wav -map 1:a -c:a copy out3.mov

# Three output files are specified; for the first two, no map options are set, so ffmpeg selects streams for them automatically.
# out1.mkv is a Matroska container file which accepts video, audio and subtitle streams, so ffmpeg tries to select one of each type.
# For video, it selects stream 0 from B.mp4, which has the highest resolution among all input video streams.
# For audio, it selects stream 3 from B.mp4, since it has the most channels.
# For subtitles, it selects stream 2 from B.mp4, the first subtitle stream among A.avi and B.mp4.
# out2.wav accepts only audio streams, so only stream 3 from B.mp4 is selected (it has the most channels).
# For out3.mov, a -map option is set, so no automatic stream selection occurs. -map 1:a selects all audio streams from the second input, B.mp4. No other streams are included in this output file.
- Automatic stream selection 2 (subtitles):

ffmpeg -i C.mkv out1.mkv -c:s dvdsub -an out2.mkv

# Although out1.mkv is a Matroska container file which accepts subtitle streams, only a video and an audio stream can be selected. The subtitle stream of C.mkv is image-based and the default subtitle encoder of the Matroska muxer is text-based, so the transcode operation for the subtitles is expected to fail and hence the stream isn't selected.
# However, in out2.mkv a subtitle encoder is specified in the command, so the subtitle stream is selected in addition to the video stream.
# The presence of -an disables audio stream selection for out2.mkv.
- Automatic stream selection 3 (unlabeled complex filtergraph outputs):

ffmpeg -i A.avi -i C.mkv -i B.mp4 -filter_complex "overlay" out1.mp4 out2.srt

# A filtergraph is set up here with the -filter_complex option, consisting of a single video filter. The overlay filter requires two video inputs, but none are specified, so the first two available video streams are used, those of A.avi and C.mkv. The output pad of the filter has no label and so is sent to the first output file out1.mp4.
# Due to this, automatic selection of the video stream is skipped, which would have selected the stream in B.mp4. The audio stream with the most channels, i.e. stream 3 in B.mp4, is chosen automatically. However, since the MP4 format has no default subtitle encoder registered and the user hasn't specified one, no subtitle stream is chosen.
- Automatic stream selection 4 (labeled complex filtergraph outputs):

ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0[outv];overlay;aresample" \
       -map '[outv]' -an out1.mp4 \
       out2.mkv \
       -map '[outv]' -map 1:a:0 out3.mkv

# The above command fails, as the output pad labelled [outv] has been mapped twice. None of the output files are processed.

ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0[outv];overlay;aresample" \
       -an out1.mp4 \
       out2.mkv \
       -map 1:a:0 out3.mkv

# The above command also fails, as the hue filter output has a label, [outv], that hasn't been mapped anywhere.

ffmpeg -i A.avi -i B.mp4 -i C.mkv -filter_complex "[1:v]hue=s=0,split=2[outv1][outv2];overlay;aresample" \
       -map '[outv1]' -an out1.mp4 \
       out2.mkv \
       -map '[outv2]' -map 1:a:0 out3.mkv

# The video stream from B.mp4 is sent to the hue filter, whose output is cloned once with the split filter, and both outputs are labelled. Each copy is then mapped to the first and third output files.
# The overlay filter, requiring two video inputs, uses the first two unused video streams, those from A.avi and C.mkv.
# The overlay output isn't labelled, so it is sent to the first output file out1.mp4, regardless of the presence of the -map option.
# The aresample filter is sent the first unused audio stream, that of A.avi. Since this filter output is also unlabelled, it too is mapped to the first output file out1.mp4.
# -an only suppresses automatic or manual selection of audio streams, not outputs sent from filtergraphs. Both these mapped streams are ordered before the mapped stream in out1.mp4.
# The video, audio and subtitle streams mapped to out2.mkv are entirely determined by automatic stream selection.
# out3.mkv consists of the cloned video output from the hue filter and the first audio stream from B.mp4 (1:a:0).
- Load an option value from a file:

ffmpeg -i INPUT -/filter:v filter.script OUTPUT

# Options that take arguments support a special syntax where the argument given on the command line is interpreted as a path to a file from which the actual argument value is loaded. To use this feature, add a forward slash '/' before the option name (after the leading dash).
# This loads a filtergraph description from the file named filter.script.
- Map all streams from the first input file to the output:

ffmpeg -i INPUT -map 0 output
- Map the second stream of the input to the (single) output stream in out.wav:

ffmpeg -i INPUT -map 0:1 out.wav
- Select all video and the third audio stream from the input file:

ffmpeg -i INPUT -map 0:v -map 0:a:2 OUTPUT
- Map all streams except the second audio stream, using negative mapping:

ffmpeg -i INPUT -map 0 -map -0:a:1 OUTPUT
- Map the video and audio streams from the first input, using a trailing ? so that the audio mapping is ignored if no audio stream exists in the first input:

ffmpeg -i INPUT -map 0:v -map 0:a? OUTPUT
- Pick the English audio stream:

ffmpeg -i INPUT -map 0:m:language:eng OUTPUT
- Set the PID of stream 0 to 33 and the PID of stream 1 to 36 in an mpegts output file:

ffmpeg -i inurl -streamid 0:33 -streamid 1:36 out.ts
- Apply the h264_mp4toannexb bitstream filter (which converts MP4-encapsulated H.264 to Annex B) to the input video stream:

ffmpeg -bsf:v h264_mp4toannexb -i h264.mp4 -c:v copy -an out.h264
- Apply the mov2textsub bitstream filter (which extracts text from MOV subtitles) to the output subtitle stream:

ffmpeg -i file.mov -an -vn -bsf:s mov2textsub -c:s copy -f rawvideo sub.txt
- Loopback decoder + complex filtergraph, example 1:

ffmpeg -i INPUT \
  -map 0:v:0 -c:v libx264 -crf 45 -f null - \
  -threads 3 -dec 0:0 \
  -filter_complex '[0:v][dec:0]hstack[stack]' \
  -map '[stack]' -c:v ffv1 OUTPUT

# 0:v:0: the first video stream of the first input.
# -dec: decodes the output of some encoder and feeds it back to a complex filtergraph. The -dec directive takes the index of the output stream to decode as a parameter.
# -filter_complex: defines a complex filtergraph.
# [file_index:stream_specifier]: connects an input stream; if stream_specifier matches multiple streams, the first one is used. For multiview video, the stream specifier may be followed by a view specifier.
# [dec:dec_idx]: a loopback decoder, where dec_idx is the index of the loopback decoder to connect to the given input. For multiview video, the decoder index may be followed by a view specifier.
# [stack]: the output label.
- Loopback decoder + complex filtergraph, example 2:

1 ffmpeg -i input.mkv \
2   -filter_complex '[0:v]scale=size=hd1080,split=outputs=2[for_enc][orig_scaled]' \
3   -c:v libx264 -map '[for_enc]' output.mkv \
4   -dec 0:0 \
5   -filter_complex '[dec:0][orig_scaled]hstack[stacked]' \
6   -map '[stacked]' -c:v ffv1 comparison.mkv

# Names wrapped in [] are labels.
# (line 2) scales the video to 1920x1080 and duplicates the result into two outputs, using a complex filtergraph with one input and two outputs;
# (line 3) encodes one scaled output with libx264 and writes the result to output.mkv;
# (line 4) decodes this encoded stream with a loopback decoder;
# (line 5) places the output of the loopback decoder (i.e. the libx264-encoded video) side by side (hstack) with the scaled original input;
# (line 6) the combined video is then losslessly encoded (ffv1) and written to comparison.mkv.
-
Overlay an image on top of a video:

```shell
ffmpeg -i video.mkv -i image.png -filter_complex '[0:v][1:v]overlay[out]' -map '[out]' out.mkv

# Assuming there is only one video stream in each input file, the input labels
# can be omitted:
ffmpeg -i video.mkv -i image.png -filter_complex 'overlay[out]' -map '[out]' out.mkv

# Furthermore, the output label can be omitted as well; the single output of
# the filtergraph is then added to the output file automatically:
ffmpeg -i video.mkv -i image.png -filter_complex 'overlay' out.mkv
```
-
Hardcode subtitles on top of a DVB-T recording stored in MPEG-TS format, delaying the subtitles by 1 second:

```shell
ffmpeg -i input.ts -filter_complex \
       '[#0x2ef] setpts=PTS+1/TB [sub] ; [#0x2d0] [sub] overlay' \
       -sn -map '#0x2dc' output.mkv

# 0x2d0, 0x2dc and 0x2ef are the MPEG-TS PIDs of the video, audio and subtitle
# streams respectively; 0:0, 0:3 and 0:7 would have worked as well.
# As a special exception, a bitmap subtitle stream can be used as an input: it
# is converted into a video with the same size as the largest video in the
# file, or 720x576 if no video is present.
# -sn as an input option keeps all subtitle streams of the file from being
#   filtered, auto-selected or mapped to any output; see the -discard option
#   to disable streams individually.
# As an output option, it disables subtitle recording, i.e. automatic
#   selection or mapping of any subtitle stream; see the -map option for full
#   manual control.
```
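The hexadecimal PIDs in the command above can be sanity-checked by converting them to decimal, since MPEG-TS tools often report PIDs in either base; a quick sketch:

```shell
# Convert the MPEG-TS PIDs used above from hex to decimal.
for pid in 0x2d0 0x2dc 0x2ef; do
    printf '%s = %d\n' "$pid" "$pid"
done
```

The decimal values (720, 732, 751) are the PIDs as a tool like `ffprobe -show_streams` would report them in its `id` field.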
-
Generate 5 seconds of pure red video using the lavfi color source:

```shell
ffmpeg -filter_complex 'color=c=red' -t 5 out.mkv
```
-
Write an ID3v2.3 header instead of the default ID3v2.4 to an MP3 file, using the id3v2_version private option of the MP3 muxer:

```shell
ffmpeg -i input.flac -id3v2_version 3 out.mp3
```
-
Add an attachment:

```shell
ffmpeg -i INPUT -attach DejaVuSans.ttf -metadata:s:2 mimetype=application/x-truetype-font out.mkv
```
-
Extract attachments:

```shell
ffmpeg -dump_attachment:t:0 out.ttf -i INPUT
# Extract the first attachment to a file named out.ttf.

ffmpeg -dump_attachment:t "" -i INPUT
# Extract all attachments to files determined by their filename tag.
```
-
Show the autodetected sources of an input device:

```shell
ffmpeg -sources pulse,server=192.168.0.4
```
-
Show the autodetected sinks of an output device:

```shell
ffmpeg -sinks pulse,server=192.168.0.4
```
-
Enable repeated log output:

```shell
ffmpeg -loglevel repeat+level+verbose -i input output

# Allow repeated log output without affecting the current prefix flags or the
# log level:
ffmpeg [...] -loglevel +repeat
```
-
Write a report to a file named ffreport.log using a log level of 32 (an alias for the log level info):

```shell
FFREPORT=file=ffreport.log:level=32 ffmpeg -i input output
```
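The value 32 comes from ffmpeg's numeric log levels (quiet=-8, panic=0, fatal=8, error=16, warning=24, info=32, verbose=40, debug=48, trace=56, per the -loglevel documentation). A small lookup sketch with a hypothetical helper function:

```shell
# Map a numeric ffmpeg log level to its name (values from the -loglevel docs).
level_name() {
    case "$1" in
        -8) echo quiet   ;;  0) echo panic   ;;  8) echo fatal ;;
        16) echo error   ;; 24) echo warning ;; 32) echo info  ;;
        40) echo verbose ;; 48) echo debug   ;; 56) echo trace ;;
         *) echo unknown ;;
    esac
}
level_name 32
```

So `level=32` in FFREPORT selects the same verbosity as `-loglevel info`.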
-
Set and clear CPU flags; this option is intended for testing:

```shell
ffmpeg -cpuflags -sse+mmx ...
ffmpeg -cpuflags mmx ...
ffmpeg -cpuflags 0 ...
```
-
Override the detected CPU count; this option is intended for testing:

```shell
ffmpeg -cpucount 2
```
-
Log progress information to stdout:

```shell
ffmpeg -progress pipe:1 -i in.mkv out.mkv
```
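-progress emits key=value lines, with each update terminated by a `progress=` line. A sketch of parsing such output with awk; the sample lines piped in below are illustrative, not captured from a real run:

```shell
# Extract the last reported out_time from sample -progress output.
printf 'frame=10\nout_time=00:00:01.50\nprogress=continue\nframe=20\nout_time=00:00:03.00\nprogress=end\n' |
awk -F= '$1 == "out_time" { t = $2 } END { print t }'
```

In practice the same awk filter would be attached to the real command, e.g. `ffmpeg -progress pipe:1 ... | awk -F= ...`.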
Official examples
-
With the input format and device specified, ffmpeg can grab video and audio directly:

```shell
ffmpeg -f oss -i /dev/dsp -f video4linux2 -i /dev/video0 /tmp/out.mpg

# Use an ALSA audio source (mono input, card id 1) instead of OSS:
ffmpeg -f alsa -ac 1 -i hw:1 -f video4linux2 -i /dev/video0 /tmp/out.mpg
```
-
Grab an X11 display with ffmpeg:

```shell
ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0 /tmp/out.mpg
# 0.0 is the display.screen number of your X11 server, the same as the DISPLAY
# environment variable.

ffmpeg -f x11grab -video_size cif -framerate 25 -i :0.0+10,20 /tmp/out.mpg
# 0.0 is the display.screen number of your X11 server, the same as the DISPLAY
# environment variable; 10 is the x-offset and 20 the y-offset of the grab.
```
-
YUV files can be used as input:

```shell
ffmpeg -i /tmp/test%d.Y /tmp/out.mpg

# This will use the files:
# /tmp/test0.Y, /tmp/test0.U, /tmp/test0.V,
# /tmp/test1.Y, /tmp/test1.U, /tmp/test1.V, etc...
# The Y files use twice the resolution of the U and V files. They are raw
# files, without headers. They can be generated by all decent video decoders.
# You must specify the size of the image with the -s option if ffmpeg cannot
# guess it.
```
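Since the Y plane has twice the resolution of U and V in each dimension, each chroma file holds a quarter as many samples as the luma file. A quick check of the per-frame byte counts for 8-bit 4:2:0 data, using CIF (352x288) as an example resolution:

```shell
# Per-frame byte counts for 8-bit YUV with 2x2-subsampled chroma (YUV420P).
W=352 H=288                 # CIF resolution, chosen here as an example
Y=$((W * H))                # luma plane bytes
U=$((W / 2 * H / 2))        # each chroma plane: half width, half height
echo "Y=$Y U=$U V=$U total=$((Y + 2 * U))"
```

This is also how to sanity-check that a raw file's size is an exact multiple of the frame size before feeding it to ffmpeg with `-s`.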
-
Input from a raw YUV420P file:

```shell
ffmpeg -i /tmp/test.yuv /tmp/out.avi

# test.yuv is a file containing raw YUV planar data. Each frame is composed of
# the Y plane followed by the U and V planes at half vertical and horizontal
# resolution.
```
-
Output to a raw YUV420P file:

```shell
ffmpeg -i mydivx.avi hugefile.yuv
```
-
Set several input files and output files:

```shell
ffmpeg -i /tmp/a.wav -s 640x480 -i /tmp/a.yuv /tmp/a.mpg
# Converts the audio file a.wav and the raw YUV video file a.yuv to the MPEG
# file a.mpg.
```
-
Audio and video conversions can also be done at the same time:

```shell
ffmpeg -i /tmp/a.wav -ar 22050 /tmp/a.mp2
# Converts a.wav to MPEG audio at a 22050 Hz sample rate.
```
-
Encode to several formats at the same time and define a mapping from input streams to output streams:

```shell
ffmpeg -i /tmp/a.wav -map 0:a -b:a 64k /tmp/a.mp2 -map 0:a -b:a 128k /tmp/b.mp2
# Converts a.wav to a.mp2 at 64 kbit/s and to b.mp2 at 128 kbit/s.
# '-map file:index' specifies which input stream is used for each output
# stream, in the order output streams are defined.
```
-
Transcode a decrypted VOB:

```shell
ffmpeg -i snatch_1.vob -f avi -c:v mpeg4 -b:v 800k -g 300 -bf 2 -c:a libmp3lame -b:a 128k snatch.avi

# A typical DVD ripping example: the input is a VOB file, the output an AVI
# file with MPEG-4 video and MP3 audio.
# Note that in this command we use B-frames, so the MPEG-4 stream is DivX5
# compatible, and the GOP size is 300, which means one intra frame every
# 10 seconds for 29.97 fps input video.
# Furthermore, the audio stream is MP3-encoded, so you need to enable LAME
# support by configuring with --enable-libmp3lame. Mapping is particularly
# useful for DVD transcoding to get the desired audio language.
```
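The "one intra frame every 10 seconds" claim follows directly from GOP size divided by frame rate; a quick check:

```shell
# Seconds between keyframes = GOP size (-g) / frame rate.
awk 'BEGIN { printf "%.2f\n", 300 / 29.97 }'
```

The same arithmetic works in reverse: to target an N-second keyframe interval, set `-g` to roughly N times the frame rate.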
-
Extract images from a video:

```shell
ffmpeg -i foo.avi -r 1 -s WxH -f image2 foo-%03d.jpeg

# This extracts one video frame per second from the video and writes it to
# files named foo-001.jpeg, foo-002.jpeg, etc. Images are rescaled to fit the
# new WxH values.
# To extract only a limited number of frames, combine the command above with
# -frames:v or -t, or with -ss to start extracting from a certain point in
# time.
```
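The image2 pattern expands like C printf, so the file names it will produce can be previewed with the shell's own printf; a sketch for the first three frames:

```shell
# Preview the file names the foo-%03d.jpeg pattern produces for frames 1-3.
printf 'foo-%03d.jpeg\n' 1 2 3
```

`%03d` means a decimal number zero-padded to three digits, matching the comment on the pattern syntax below.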
-
Create a video from many images:

```shell
ffmpeg -f image2 -framerate 12 -i foo-%03d.jpeg -s WxH foo.avi

# The syntax foo-%03d.jpeg specifies a decimal number composed of three digits
# padded with zeroes to express the sequence number. It is the same syntax
# supported by the C printf function, but only formats accepting a plain
# integer are suitable.
# When importing an image sequence, -i also supports expanding shell-like
# wildcard patterns (globbing) internally, by selecting the image2-specific
# -pattern_type glob option:
ffmpeg -f image2 -pattern_type glob -framerate 12 -i 'foo-*.jpeg' -s WxH foo.avi
# Creates a video from the files matching the glob pattern foo-*.jpeg.
```
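One practical difference between the two patterns: the `%03d` sequence expects consecutive numbering, while the glob simply matches whatever files exist, gaps included. A sketch in a throwaway directory (file names are invented for illustration):

```shell
# A gap in the numbering: foo-002.jpeg is deliberately missing.
dir=$(mktemp -d)
cd "$dir"
touch foo-001.jpeg foo-003.jpeg foo-010.jpeg
printf '%s\n' foo-*.jpeg    # what the glob foo-*.jpeg would match, in order
```

The glob picks up all three files in lexicographic order; the zero-padding is what keeps that order numeric.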
-
Put many streams of the same type in the output:

```shell
ffmpeg -i test1.avi -i test2.avi -map 1:1 -map 1:0 -map 0:1 -map 0:0 -c copy -y test12.nut
# The resulting output file test12.nut will contain the first four streams
# from the input files in reverse order.
```
-
Force CBR video output:

```shell
ffmpeg -i myfile.avi -b 4000k -minrate 4000k -maxrate 4000k -bufsize 1835k out.m2v
```
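For rate-control options like these, the buffer depth in seconds is bufsize divided by maxrate; a quick check for the values above:

```shell
# Rate-control buffer depth in seconds = bufsize / maxrate (both in kbit).
awk 'BEGIN { printf "%.2f\n", 1835 / 4000 }'
```

So this command allows the encoder slightly under half a second of buffering around the 4000k constant rate.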
-
The four options lmin, lmax, mblmin and mblmax use 'lambda' units, but you can use the QP2LAMBDA constant to easily convert from 'q' units:

```shell
ffmpeg -i src.ext -lmax 21*QP2LAMBDA dst.ext
```
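QP2LAMBDA corresponds to the constant FF_QP2LAMBDA = 118 in libavutil, so the expression above resolves to a lambda of 21 * 118; a quick check:

```shell
# lambda = q * FF_QP2LAMBDA, with FF_QP2LAMBDA = 118 (libavutil).
echo $((21 * 118))
```

ffmpeg evaluates the `21*QP2LAMBDA` expression itself, so the multiplication never needs to be done by hand; this just shows what value the option ends up with.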
Scenarios
My current needs are simple: converting container formats and codecs, plus pushing streams.
Stream pushing
-
Push over RTP:

```shell
ffmpeg -re -i cw.ts -vcodec copy -acodec copy -f rtp_mpegts rtp://238.123.46.66:8001

ffmpeg -re -i chunwan.h264 -vcodec copy -f rtp rtp://233.233.233.223:6666 > test.sdp
# Sends the raw H.264 stream chunwan.h264 to rtp://233.233.233.223:6666.
# The trailing '> test.sdp' saves ffmpeg's SDP output into a file, which is
# then used to receive the RTP stream. Without it, ffmpeg prints the SDP
# information to the console; copying that text into a file with a .sdp
# extension also works for receiving the stream. With '> test.sdp', the SDP
# information is saved to the file directly.
```
-
Push over UDP:

```shell
ffmpeg -re -i cw.ts -vcodec copy -acodec copy -f mpegts udp://238.123.46.66:8001

ffmpeg -re -i chunwan.h264 -vcodec copy -f h264 udp://233.233.233.223:6666
# Sends the raw H.264 stream chunwan.h264 to udp://233.233.233.223:6666.
# -re is essential: it makes ffmpeg send at the native frame rate; without it,
#   ffmpeg pushes data out as fast as it can.
# -vcodec copy is also needed, otherwise ffmpeg re-encodes the input raw
#   H.264 stream.

ffmpeg -re -i chunwan.h264 -vcodec mpeg2video -f mpeg2video udp://233.233.233.223:6666
# Re-encodes chunwan.h264 to MPEG-2 video and sends it to
# udp://233.233.233.223:6666.
```