A fairly detailed, low-level article on the GStreamer architecture

The section on data transfer in particular made everything click for me. Recording it here.

Overview
--------

This part gives an overview of the design of GStreamer with references to
the more detailed explanations of the different topics.

This document is intended for people who want a global overview of
the inner workings of GStreamer.


Introduction
------------

GStreamer is a set of libraries and plugins that can be used to implement various
multimedia applications, ranging from desktop players and audio/video recorders
to multimedia servers and transcoders.

Applications are built by constructing a pipeline composed of elements. An element
is an object that performs some action on a multimedia stream, such as:

- read a file
- decode or encode between formats
- capture from a hardware device
- render to a hardware device
- mix or multiplex multiple streams

Elements have input and output pads, called sink and source pads in GStreamer. An
application links elements together on pads to construct a pipeline. Below is
an example of an ogg/vorbis playback pipeline.

 +-----------------------------------------------------------+
 |    ----------> downstream ------------------->            |
 |                                                           |
 | pipeline                                                  |
 | +---------+   +----------+   +-----------+   +----------+ |
 | | filesrc |   | oggdemux |   | vorbisdec |   | alsasink | |
 | |        src-sink        src-sink        src-sink       | |
 | +---------+   +----------+   +-----------+   +----------+ |
 |                                                           |
 |    <---------< upstream <-------------------<             |
 +-----------------------------------------------------------+

The filesrc element reads data from a file on disk. The oggdemux element parses
the data and sends the compressed audio data to the vorbisdec element. The
vorbisdec element decodes the compressed data and sends it to the alsasink
element. The alsasink element sends the samples to the audio card for playback.

Downstream and upstream are the terms used to describe the direction of flow in the
pipeline. From source to sink is called "downstream" and "upstream" is
from sink to source. Dataflow always happens downstream.

The task of the application is to construct a pipeline as above using existing
elements. This is further explained in the pipeline building topic.

The application does not have to manage any of the complexities of the
actual dataflow/decoding/conversions/synchronisation, etc. It only calls high
level functions on the pipeline object such as PLAY/PAUSE/STOP.

The application also receives messages and notifications from the pipeline, such
as metadata, warning, error and EOS messages.

If the application needs more control over the graph, it is possible to directly
access the elements and pads in the pipeline.


Design overview
---------------

GStreamer design goals include:

- Process large amounts of data quickly
- Allow fully multithreaded processing
- Ability to deal with multiple formats
- Synchronize different dataflows
- Ability to deal with multiple devices

The capabilities presented to the application depend on the number of elements
installed on the system and their functionality.

The GStreamer core is designed to be media agnostic but provides many features
to elements to describe media formats.


Elements
--------

The smallest building blocks in a pipeline are elements. An element provides a
number of pads, which can be source or sinkpads. Sourcepads provide data and
sinkpads consume data. Below is an example of an ogg demuxer element that has
one pad that takes (sinks) data and two source pads that produce data.

 +-----------+
 | oggdemux  |
 |          src0
sink        src1
 +-----------+

An element can be in four different states: NULL, READY, PAUSED, PLAYING. In the
NULL and READY states, the element is not processing any data. In the PLAYING state
it is processing data. The intermediate PAUSED state is used to preroll data in
the pipeline. A state change can be performed with gst_element_set_state().

An element always goes through all the intermediate state changes. This means that
when an element is in the READY state and is put to PLAYING, it will first go
through the intermediate PAUSED state.
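
As a small illustration, here is a minimal sketch of driving an element
through its states, assuming the GStreamer 0.10 API that this document
describes:

  #include <gst/gst.h>

  static void
  play_and_stop (GstElement *element)
  {
    /* READY -> PLAYING implicitly traverses the intermediate PAUSED state. */
    gst_element_set_state (element, GST_STATE_PLAYING);

    /* ... stream for a while ... */

    /* Going back down traverses PAUSED and READY before reaching NULL. */
    gst_element_set_state (element, GST_STATE_NULL);
  }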

An element state change to PAUSED will activate the pads of the element. First the
source pads are activated, then the sinkpads. When the pads are activated, the
pad activate function is called. Some pads will start a thread (GstTask) or some
other mechanism to start producing or consuming data.

The PAUSED state is special as it is used to preroll data in the pipeline. The purpose
is to fill all connected elements in the pipeline with data so that the subsequent
PLAYING state change happens very quickly. Some elements will therefore not complete
the state change to PAUSED before they have received enough data. Sink elements are
required to only complete the state change to PAUSED after receiving the first data.

Normally the state changes of elements are coordinated by the pipeline, as explained
in [part-states.txt].

Different categories of elements exist:

- source elements: these elements do not consume data but only provide data
  for the pipeline.
- sink elements: these elements do not produce data but render data to
  an output device.
- transform elements: these elements transform an input stream in a certain format
  into a stream of another format. Encoders/decoders/converters are examples.
- demuxer elements: these elements parse a stream and produce several output streams.
- mixer/muxer elements: these combine several input streams into one output stream.

Other categories of elements can be constructed (see part-klass.txt).


Bins
----

A bin is an element subclass and acts as a container for other elements so that multiple
elements can be combined into one element.

A bin coordinates its children's state changes as explained later. It also distributes
events and various other functionality to elements.

A bin can have its own source and sinkpads by ghostpadding one or more of its children's
pads to itself.

Below is a picture of a bin with two elements. The sinkpad of one element is ghostpadded
to the bin.

 +---------------------------+
 | bin                       |
 |    +--------+   +-------+ |
 |    |        |   |       | |
 |   /sink    src-sink     | |
 sink +--------+   +-------+ |
 +---------------------------+


Pipeline
--------

A pipeline is a special bin subclass that provides the following features to its
children:

- Select and manage a global clock for all its children.
- Manage running_time based on the selected clock. Running_time is the elapsed
  time the pipeline spent in the PLAYING state and is used for
  synchronisation.
- Manage latency in the pipeline.
- Provide means for elements to communicate with the application via the GstBus.
- Manage the global state of the elements, such as errors and end-of-stream.

Normally the application creates one pipeline that will manage all the elements
in the application.


Dataflow and buffers
--------------------

GStreamer supports two possible types of dataflow, the push and pull model. In the
push model, an upstream element sends data to a downstream element by calling a
method on a sinkpad. In the pull model, a downstream element requests data from
an upstream element by calling a method on a source pad.

The most common dataflow is the push model. The pull model can be used in specific
circumstances by demuxer elements. The pull model can also be used by low latency
audio applications.
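
To make the two models concrete, here is a minimal sketch of both transfer
calls as they look in the 0.10 API that this document describes; the pads,
offset and size are purely illustrative:

  #include <gst/gst.h>

  static GstFlowReturn
  transfer_examples (GstPad *srcpad, GstPad *sinkpad)
  {
    GstBuffer *buf;
    GstFlowReturn ret;

    /* Push model: upstream pushes a buffer out on its source pad,
     * which invokes the chain function of the linked sinkpad. */
    buf = gst_buffer_new_and_alloc (4096);
    ret = gst_pad_push (srcpad, buf);
    if (ret != GST_FLOW_OK)
      return ret;               /* no more data should be sent */

    /* Pull model: downstream pulls a range of data (here offset 0,
     * 4096 bytes) from the peer of its sinkpad. */
    ret = gst_pad_pull_range (sinkpad, 0, 4096, &buf);
    if (ret == GST_FLOW_OK)
      gst_buffer_unref (buf);

    return ret;
  }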

The data passed between pads is encapsulated in Buffers. A buffer contains a
pointer to the actual data and also metadata describing the data (see the sketch
after this list). This metadata includes:

- the timestamp of the data: the time instant at which the data was captured,
  or the time at which the data should be played back.
- the offset of the data: a media specific offset; this could be samples for audio or
  frames for video.
- the duration of the data in time.
- the media type of the data, described with caps: key/value pairs that
  describe the media type in a unique way.
- additional flags describing special properties of the data, such as
  discontinuities or delta units.
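
A minimal sketch of filling in this metadata on a buffer, using the 0.10 API;
the concrete values are only illustrative:

  #include <gst/gst.h>

  static GstBuffer *
  make_buffer (GstCaps *caps)
  {
    GstBuffer *buf = gst_buffer_new_and_alloc (4096);

    GST_BUFFER_TIMESTAMP (buf) = 0;                    /* capture/playback time */
    GST_BUFFER_DURATION (buf) = GST_SECOND / 100;      /* duration in time */
    GST_BUFFER_OFFSET (buf) = 0;                       /* e.g. sample number */
    gst_buffer_set_caps (buf, caps);                   /* the media type */
    GST_BUFFER_FLAG_SET (buf, GST_BUFFER_FLAG_DISCONT);/* e.g. discontinuity */

    return buf;
  }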

When an element wishes to send a buffer to another element, it does this using one
of the pads that is linked to a pad of the other element. In the push model, a
buffer is pushed to the peer pad with gst_pad_push(). In the pull model, a buffer
is pulled from the peer with the gst_pad_pull_range() function.

Before an element pushes out a buffer, it should make sure that the peer element
can understand the buffer contents. It does this by querying the peer element
for the supported formats and by selecting a suitable common format. The selected
format is then attached to the buffer with gst_buffer_set_caps() before pushing
out the buffer.

When an element pad receives a buffer, it has to check whether it understands the media
type of the buffer before it starts processing it. The GStreamer core does this
automatically and will call the gst_pad_set_caps() function of the element before
sending the buffer to the element.

Both gst_pad_push() and gst_pad_pull_range() have a return value indicating whether
the operation succeeded. An error code means that no more data should be sent
to that pad. A source element that initiates the dataflow in a thread typically
pauses the producing thread when this happens.

A buffer can be created with gst_buffer_new() or by requesting a usable buffer
from the peer pad using gst_pad_alloc_buffer(). Using the second method, it is
possible for the peer element to suggest that the element produce data in another
format, by attaching different media type caps to the buffer.

The process of selecting a media type and attaching it to the buffers is called
caps negotiation.
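
Here is a minimal sketch of the alloc-and-push pattern described above
(0.10 API); the buffer size is illustrative:

  #include <gst/gst.h>

  static GstFlowReturn
  push_negotiated (GstPad *srcpad, GstCaps *caps)
  {
    GstBuffer *buf;
    GstFlowReturn ret;

    /* Ask the peer for a buffer; the peer may attach different caps
     * to the buffer to suggest another format. */
    ret = gst_pad_alloc_buffer (srcpad, GST_BUFFER_OFFSET_NONE, 4096,
        caps, &buf);
    if (ret != GST_FLOW_OK)
      return ret;

    /* ... fill the buffer, possibly honouring GST_BUFFER_CAPS (buf) ... */

    return gst_pad_push (srcpad, buf);
  }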


Caps
----

A media type (Caps) is described using a generic list of key/value pairs. The key is
a string and the value can be a single value, a list, or a range of int/float/string.

Caps that have no ranges/lists or other variable parts are said to be fixed and
can be put on a buffer.

Caps with variables in them are used to describe the possible media types that can be
handled by a pad.
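
A minimal sketch of the difference (0.10 API; audio/x-raw-int is the 0.10
media type name for raw integer audio):

  #include <gst/gst.h>

  static void
  caps_examples (void)
  {
    /* Fixed caps: every field has a single value; can be put on a buffer. */
    GstCaps *fixed = gst_caps_new_simple ("audio/x-raw-int",
        "rate", G_TYPE_INT, 44100,
        "channels", G_TYPE_INT, 2,
        NULL);

    /* Caps with variable parts: describe what a pad could handle. */
    GstCaps *possible = gst_caps_new_simple ("audio/x-raw-int",
        "rate", GST_TYPE_INT_RANGE, 8000, 48000,
        "channels", GST_TYPE_INT_RANGE, 1, 2,
        NULL);

    gst_caps_unref (fixed);
    gst_caps_unref (possible);
  }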


Dataflow and events
-------------------

Parallel to the dataflow is a flow of events. Unlike the buffers, events can pass
both upstream and downstream. Some events only travel upstream, others only downstream.

The events are used to denote special conditions in the dataflow, such as EOS, or
to inform plugins of special events such as flushing or seeking.

Some events must be serialized with the buffer flow, others don't. Serialized
events are inserted between the buffers. Non-serialized events jump ahead
of any buffers currently being processed.

An example of a serialized event is a TAG event that is inserted between buffers
to mark metadata for those buffers.

An example of a non-serialized event is the FLUSH event.
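
A minimal sketch of creating and pushing such events out of a source pad
(0.10 API):

  #include <gst/gst.h>

  static void
  event_examples (GstPad *srcpad)
  {
    /* EOS is serialized with the dataflow: it follows the last buffer. */
    gst_pad_push_event (srcpad, gst_event_new_eos ());

    /* FLUSH_START is not serialized: it jumps ahead of queued buffers;
     * FLUSH_STOP re-enables dataflow afterwards. */
    gst_pad_push_event (srcpad, gst_event_new_flush_start ());
    gst_pad_push_event (srcpad, gst_event_new_flush_stop ());
  }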


Pipeline construction
---------------------

The application starts by creating a Pipeline element using gst_pipeline_new().
Elements are added to and removed from the pipeline with gst_bin_add() and
gst_bin_remove().

After adding the elements, the pads of an element can be retrieved with
gst_element_get_pad(). Pads can then be linked together with gst_pad_link().

Some elements create new pads when actual dataflow is happening in the pipeline.
With g_signal_connect() one can receive a notification when an element has created
a pad. These new pads can then be linked to other unlinked pads.

Some elements cannot be linked together because they operate on different,
incompatible data types. The possible datatypes a pad can provide or consume can
be retrieved with gst_pad_get_caps().

Below is a simple mp3 playback pipeline that we constructed; a construction
sketch follows the diagram. We will use this pipeline in further examples.

 +-------------------------------------------+
 | pipeline                                  |
 | +---------+   +----------+   +----------+ |
 | | filesrc |   |  mp3dec  |   | alsasink | |
 | |        src-sink        src-sink       | |
 | +---------+   +----------+   +----------+ |
 +-------------------------------------------+
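
A minimal construction sketch for this pipeline (0.10 API). Note that
"mp3dec" in the diagram stands for whatever MP3 decoder element is
installed; the "mad" factory name below is an assumption:

  #include <gst/gst.h>

  static GstElement *
  build_mp3_pipeline (const gchar *location)
  {
    GstElement *pipeline = gst_pipeline_new ("pipeline");
    GstElement *filesrc = gst_element_factory_make ("filesrc", "filesrc");
    GstElement *mp3dec = gst_element_factory_make ("mad", "mp3dec");
    GstElement *alsasink = gst_element_factory_make ("alsasink", "alsasink");

    g_object_set (filesrc, "location", location, NULL);

    gst_bin_add_many (GST_BIN (pipeline), filesrc, mp3dec, alsasink, NULL);

    /* Convenience wrapper around retrieving the pads and calling
     * gst_pad_link() on them. */
    gst_element_link_many (filesrc, mp3dec, alsasink, NULL);

    return pipeline;
  }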


Pipeline clock
--------------

One of the important functions of the pipeline is to select a global clock
for all the elements in the pipeline.

The purpose of the clock is to provide a strictly increasing value at the rate
of one GST_SECOND per second. Clock values are expressed in nanoseconds.
Elements use the clock time to synchronize the playback of data.

Before the pipeline is set to PLAYING, the pipeline asks each element whether it can
provide a clock. The clock is selected in the following order:

- If the application selected a clock, use that one.
- If a source element provides a clock, use that clock.
- Select a clock from any other element that provides a clock, starting with the
  sinks.
- If no element provides a clock, a default system clock is used for the pipeline.

In a typical playback pipeline this algorithm will select the clock provided by
a sink element, such as an audio sink.

In capture pipelines, this will typically select the clock of the data producer, which
in most cases cannot control the rate at which it produces data.


Pipeline states
---------------

When all the pads are linked and signals have been connected, the pipeline can
be put in the PAUSED state to start dataflow.

When a bin (and hence a pipeline) performs a state change, it will change the state
of all its children. The pipeline will change the state of its children from the
sink elements to the source elements, to make sure that no upstream element
produces data for an element that is not yet ready to accept it.

In the mp3 playback pipeline, the state of the elements is changed in the order
alsasink, mp3dec, filesrc.

All intermediate states are traversed for each element, resulting in the following
chain of state changes:

alsasink to READY:  the audio device is probed
mp3dec to READY:    nothing happens
filesrc to READY:   the file is probed
alsasink to PAUSED: the audio device is opened. alsasink is a sink and returns
                    ASYNC because it has not received data yet.
mp3dec to PAUSED:   the decoding library is initialized
filesrc to PAUSED:  the file is opened and a thread is started to push data to
                    mp3dec

At this point data flows from filesrc to mp3dec and alsasink. Since mp3dec is PAUSED,
it accepts the data from filesrc on the sinkpad and starts decoding the compressed
data to raw audio samples.

The mp3 decoder figures out the samplerate, the number of channels and other audio
properties of the raw audio samples, puts the decoded samples into a Buffer,
attaches the media type caps to the buffer and pushes this buffer to the next
element.

Alsasink then receives the buffer, inspects the caps and reconfigures itself to process
the buffer. Since it received the first buffer of samples, it completes the state change
to the PAUSED state. At this point the pipeline is prerolled and all elements have
samples. Alsasink is now also capable of providing a clock to the pipeline.

Since alsasink is now in the PAUSED state, it blocks while receiving the first buffer. This
effectively blocks both mp3dec and filesrc in their gst_pad_push().

Since all elements now return SUCCESS from the gst_element_get_state() function,
the pipeline can be put in the PLAYING state.

Before going to PLAYING, the pipeline selects a clock and samples the current time of
the clock. This is the base_time. It then distributes this time to all elements.
Elements can then synchronize against the clock using the buffer running_time +
base_time (see also part-synchronisation.txt).
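
A minimal sketch of how an element could wait on the clock for a buffer's
render time of running_time + base_time (0.10 API):

  #include <gst/gst.h>

  static void
  wait_for_render_time (GstElement *sink, GstClockTime running_time)
  {
    GstClock *clock = gst_element_get_clock (sink);
    GstClockTime base_time = gst_element_get_base_time (sink);
    GstClockID id;

    /* The absolute clock time at which the buffer should be rendered. */
    id = gst_clock_new_single_shot_id (clock, base_time + running_time);
    gst_clock_id_wait (id, NULL);

    gst_clock_id_unref (id);
    gst_object_unref (clock);
  }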

The following chain of state changes then takes place:

alsasink to PLAYING: the samples are played to the audio device
mp3dec to PLAYING:   nothing happens
filesrc to PLAYING:  nothing happens


Pipeline status
---------------

The pipeline informs the application of any special events that occur in the
pipeline via the bus. The bus is an object that the pipeline provides and that
can be retrieved with gst_pipeline_get_bus().

The bus can be polled or added to the GLib mainloop.
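
A minimal sketch of adding the bus to a GLib mainloop (0.10 API; the same
pattern works for any message type):

  #include <gst/gst.h>

  static gboolean
  bus_cb (GstBus *bus, GstMessage *msg, gpointer user_data)
  {
    GMainLoop *loop = user_data;

    switch (GST_MESSAGE_TYPE (msg)) {
      case GST_MESSAGE_ERROR:
      case GST_MESSAGE_EOS:
        g_main_loop_quit (loop);
        break;
      default:
        break;
    }
    return TRUE;                /* keep the watch installed */
  }

  static void
  watch_pipeline (GstElement *pipeline, GMainLoop *loop)
  {
    GstBus *bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));

    gst_bus_add_watch (bus, bus_cb, loop);
    gst_object_unref (bus);
  }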

The bus is distributed to all elements added to the pipeline. The elements use the bus
to post messages on. Various message types exist, such as ERROR, WARNING, EOS,
STATE_CHANGED, etc.

The pipeline handles EOS messages received from elements in a special way. It will
only forward the message to the application when all sink elements have posted an
EOS message.

Other methods for obtaining the pipeline status include the Query functionality, which
can be performed with gst_element_query() on the pipeline. This type of query
is useful for obtaining information about the current position and total time of
the pipeline. It can also be used to query for the supported seeking formats and
ranges.
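
A minimal sketch of a position/duration query on the pipeline (0.10 API,
where these helpers take a GstFormat pointer):

  #include <gst/gst.h>

  static void
  print_position (GstElement *pipeline)
  {
    GstFormat fmt = GST_FORMAT_TIME;
    gint64 pos, dur;

    if (gst_element_query_position (pipeline, &fmt, &pos) &&
        gst_element_query_duration (pipeline, &fmt, &dur))
      g_print ("%" GST_TIME_FORMAT " / %" GST_TIME_FORMAT "\n",
          GST_TIME_ARGS (pos), GST_TIME_ARGS (dur));
  }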


Pipeline EOS
------------

When the source filter encounters the end of the stream, it sends an EOS event to
the peer element. This event will then travel downstream to all of the connected
elements to inform them of the EOS. The element is not supposed to accept any more
data after receiving an EOS event on a sinkpad.

The element providing the streaming thread stops sending data after sending the
EOS event.

The EOS event will eventually arrive in the sink element. The sink will then post
an EOS message on the bus to inform the pipeline that a particular stream has
finished. When all sinks have reported EOS, the pipeline forwards the EOS message
to the application. The EOS message is only forwarded to the application in the
PLAYING state.

When in EOS, the pipeline remains in the PLAYING state; it is the application's
responsibility to set the pipeline to PAUSED or READY. The application can also issue
a seek, for example.


Pipeline READY
--------------

When a running pipeline is set from the PLAYING to the READY state, the following
actions occur in the pipeline:

alsasink to PAUSED: alsasink blocks and completes the state change on the
                    next sample. If the element was EOS, it does not wait for
                    a sample to complete the state change.
mp3dec to PAUSED:   nothing
filesrc to PAUSED:  nothing

Going to the intermediate PAUSED state will block all elements in the _push()
functions. This happens because the sink element blocks on the first buffer
it receives.

Some elements might be performing blocking operations in the PLAYING state that
must be unblocked when they go into the PAUSED state. This makes sure that the
state change happens very fast.

In the next PAUSED to READY state change, the pipeline has to shut down and all
streaming threads must stop sending data. This happens in the following sequence:

alsasink to READY:  alsasink unblocks from the _chain() function and returns a
                    WRONG_STATE return value to the peer element. The sinkpad is
                    deactivated and becomes unusable for sending more data.
mp3dec to READY:    the pads are deactivated and the state change completes when
                    mp3dec leaves its _chain() function.
filesrc to READY:   the pads are deactivated and the thread is paused.

The upstream elements finish their _chain() functions because the downstream element
returned an error code (WRONG_STATE) from the _push() functions. These error codes
are eventually returned to the element that started the streaming thread (filesrc),
which pauses the thread and completes the state change.

This sequence of events ensures that all elements are unblocked and all streaming
threads are stopped.


Pipeline seeking
----------------

Seeking in the pipeline requires a very specific order of operations to make
sure that the elements remain synchronized and that the seek is performed with
a minimal amount of latency.

An application issues a seek event on the pipeline using gst_element_send_event()
on the pipeline element. The event can be a seek event in any of the formats
supported by the elements.
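
A minimal sketch of such a seek, assuming a flushing time seek to 10 seconds
(0.10 API); gst_element_seek() is a wrapper that builds the seek event and
sends it to the element:

  #include <gst/gst.h>

  static gboolean
  seek_to_10_seconds (GstElement *pipeline)
  {
    return gst_element_seek (pipeline, 1.0, GST_FORMAT_TIME,
        GST_SEEK_FLAG_FLUSH,
        GST_SEEK_TYPE_SET, 10 * GST_SECOND,
        GST_SEEK_TYPE_NONE, GST_CLOCK_TIME_NONE);
  }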

The pipeline first pauses itself to speed up the seek operations.

The pipeline then issues the seek event to all sink elements. Each sink forwards
the seek event upstream until some element can perform the seek operation, which is
typically the source or demuxer element. All intermediate elements can transform the
requested seek offset to another format; this way a decoder element can transform a
seek on a frame number into a seek on a timestamp, for example.

When the seek event reaches an element that will perform the seek operation, that
element performs the following steps.

1) send a FLUSH_START event to all downstream and upstream peer elements.
2) make sure the streaming thread is not running. The streaming thread will
   always stop because of step 1).
3) perform the seek operation.
4) send a FLUSH done event to all downstream and upstream peer elements.
5) send a NEWSEGMENT event to inform all elements of the new position and to complete
   the seek.

In step 1) all downstream elements have to return from any blocking operations
and have to refuse any further buffers or events other than a FLUSH done.

The first step ensures that the streaming thread eventually unblocks and that
step 2) can be performed. At this point, dataflow is completely stopped in the
pipeline.

In step 3) the element performs the seek to the requested position.

In step 4) all peer elements are allowed to accept data again and streaming
can continue from the new position. A FLUSH done event is sent to all the peer
elements so that they accept new data again and restart their streaming threads.

Step 5) informs all elements of the new position in the stream. After that, the
event function returns back to the application and the streaming threads start
to produce new data.

Since the pipeline is still PAUSED, this will preroll the next media sample in the
sinks. The application can wait for this preroll to complete by performing a
_get_state() on the pipeline.
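
A minimal sketch of that wait (0.10 API); a real application would likely
pass a finite timeout:

  #include <gst/gst.h>

  static gboolean
  wait_for_preroll (GstElement *pipeline)
  {
    /* Blocks until the pending state change (and thus the preroll)
     * completes. */
    return gst_element_get_state (pipeline, NULL, NULL,
        GST_CLOCK_TIME_NONE) == GST_STATE_CHANGE_SUCCESS;
  }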

The last step in the seek operation is then to adjust the stream time of the pipeline
to 0 and to set the pipeline back to PLAYING.

The sequence of events in our mp3 playback example:

                                     | a) seek on pipeline
                                     | b) PAUSE pipeline
 +-----------------------------------V--------+
 | pipeline                          | c) seek on sink
 | +---------+   +----------+   +---V------+ |
 | | filesrc |   |  mp3dec  |   | alsasink | |
 | |        src-sink        src-sink       | |
 | +---------+   +----------+   +----|-----+ |
 +-----------------------------------|-------+
                                     |
            <------------------------+
             d) seek travels upstream

            --------------------------> 1) FLUSH event
            |                           2) stop streaming
            |                           3) perform seek
            --------------------------> 4) FLUSH done event
            --------------------------> 5) NEWSEGMENT event
            |
            | e) update stream time to 0
            | f) PLAY pipeline


Reposted from blog.csdn.net/liuweihui521/article/details/81019929