Extending ExoPlayer to Play Multiple Audio Tracks Simultaneously

What if we need to play two or more audio streams at the same time, for example an accompaniment track together with the original vocal track, while keeping all tracks in sync? The core problem is synchronization: each track has its own clock. ExoPlayer on Android does not support multi-track playback out of the box, but its architecture is highly extensible, so we can add the capability ourselves by extending TrackSelector, the audio Renderer, and RenderersFactory.

1. Custom AudioRenderer

Extend MediaCodecAudioRenderer, pass an index into the constructor to record which audio track this renderer handles, and override getMediaClock so that only the first track (index 0) supplies the synchronization clock. This resolves the core problem discussed above: all other tracks follow the first track's clock.

public class MultiTrackAudioRender extends MediaCodecAudioRenderer {

    // Index of the audio track handled by this renderer.
    private final int index;

    public MultiTrackAudioRender(int index,
                                 Context context,
                                 MediaCodecSelector mediaCodecSelector,
                                 @Nullable DrmSessionManager<FrameworkMediaCrypto> drmSessionManager,
                                 boolean playClearSamplesWithoutKeys,
                                 boolean enableDecoderFallback,
                                 @Nullable Handler eventHandler,
                                 @Nullable AudioRendererEventListener eventListener,
                                 AudioSink audioSink) {
        super(context, mediaCodecSelector, drmSessionManager, playClearSamplesWithoutKeys,
                enableDecoderFallback, eventHandler, eventListener, audioSink);
        this.index = index;
    }

    @Nullable
    @Override
    public MediaClock getMediaClock() {
        // Only the first renderer supplies the playback clock; the others
        // return null so they follow that clock instead of their own.
        if (index == 0) {
            return super.getMediaClock();
        }
        return null;
    }

}
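Why may only one renderer expose a clock? To my understanding, ExoPlayer's internal playback clock accepts at most one renderer-provided MediaClock and fails if a second enabled renderer also supplies one, which is exactly what returning null for index != 0 avoids. The following is a minimal pure-Java sketch of that selection rule; the ClockSelector class here is illustrative and not part of ExoPlayer's API:

```java
public class ClockSelectorDemo {

    // Illustrative stand-in for the player's internal clock handling:
    // at most one enabled renderer may provide the playback clock.
    static class ClockSelector {
        private Object rendererClock; // stands in for MediaClock

        void onRendererEnabled(Object mediaClockOrNull) {
            if (mediaClockOrNull == null) {
                return; // renderers returning null simply follow the chosen clock
            }
            if (rendererClock != null) {
                // The failure mode if the index == 0 check were missing.
                throw new IllegalStateException("Multiple renderer media clocks enabled.");
            }
            rendererClock = mediaClockOrNull;
        }

        boolean hasRendererClock() {
            return rendererClock != null;
        }
    }

    public static void main(String[] args) {
        ClockSelector selector = new ClockSelector();
        selector.onRendererEnabled(new Object()); // track 0: provides the clock
        selector.onRendererEnabled(null);         // track 1: returns null, no conflict
        System.out.println(selector.hasRendererClock()); // prints "true"
    }
}
```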

2. Custom RenderersFactory

Extend DefaultRenderersFactory and add the custom audio renderers to the output renderer list.

public class MultiTrackRenderFactory extends DefaultRenderersFactory {

    // Number of simultaneous audio tracks to support.
    private static final int TRACK_COUNT = 2;

    public MultiTrackRenderFactory(Context context) {
        super(context);
    }

    @Override
    protected void buildAudioRenderers(Context context, int extensionRendererMode, MediaCodecSelector mediaCodecSelector,
                                       @Nullable DrmSessionManager<FrameworkMediaCrypto> drmSessionManager,
                                       boolean playClearSamplesWithoutKeys, boolean enableDecoderFallback,
                                       AudioProcessor[] audioProcessors, Handler eventHandler,
                                       AudioRendererEventListener eventListener, ArrayList<Renderer> out) {

        for (int i = 0; i < TRACK_COUNT; i++) {
            // Each renderer gets its own sink and processor chain, because
            // AudioProcessor instances hold per-stream state and must not be
            // shared between sinks. Add any AudioProcessor you need here.
            DefaultAudioSink.DefaultAudioProcessorChain processorChain = new DefaultAudioSink.DefaultAudioProcessorChain(
                    new SilenceSkippingAudioProcessor(),
                    new SonicAudioProcessor());
            AudioSink audioSink = new DefaultAudioSink(AudioCapabilities.getCapabilities(context),
                    processorChain, true);
            // Use MultiTrackAudioRender so only renderer 0 exposes a MediaClock.
            MultiTrackAudioRender audioRender = new MultiTrackAudioRender(i, context, MediaCodecSelector.DEFAULT,
                    drmSessionManager, playClearSamplesWithoutKeys, enableDecoderFallback,
                    eventHandler, eventListener, audioSink);
            out.add(audioRender);
        }
    }
}

3. Custom TrackSelector

Extend DefaultTrackSelector: collect the indices of all audio renderers into renderList, then assign each audio track group to the next unused audio renderer. This is what makes multi-track playback work.

public class MultiTrackSelector extends DefaultTrackSelector {

    public MultiTrackSelector(Context context) {
        super(context);
    }

    @Override
    public TrackSelectorResult selectTracks(RendererCapabilities[] rendererCapabilities, TrackGroupArray trackGroups,
                                            MediaSource.MediaPeriodId periodId, Timeline timeline) throws ExoPlaybackException {
        if (trackGroups == null || rendererCapabilities == null) {
            return null;
        }

        LinkedList<Integer> renderList = new LinkedList<>();
        TrackSelection[] selections = new TrackSelection[rendererCapabilities.length];
        RendererConfiguration[] configs = new RendererConfiguration[rendererCapabilities.length];
        for (int i = 0; i < rendererCapabilities.length; i++) {
            // Add all audio tracks into renderList
            if(rendererCapabilities[i].getTrackType() == C.TRACK_TYPE_AUDIO) {
                renderList.add(i);
                configs[i] = RendererConfiguration.DEFAULT;
            }
        }
        for (int i = 0; i < trackGroups.length; i++) {
            if (trackGroups.get(i).getFormat(0) != null
                    && MimeTypes.isAudio(trackGroups.get(i).getFormat(0).sampleMimeType)) {
                Integer index = renderList.poll();
                if (index != null) {
                    selections[index] = new FixedTrackSelection(trackGroups.get(i), 0);
                }
            }
        }

        int[] rendererTrackGroupCounts = new int[rendererCapabilities.length + 1];
        TrackGroup[][] rendererTrackGroups = new TrackGroup[rendererCapabilities.length + 1][];
        @RendererCapabilities.Capabilities int[][][] rendererFormatSupports = new int[rendererCapabilities.length + 1][][];
        for (int i = 0; i < rendererTrackGroups.length; i++) {
            rendererTrackGroups[i] = new TrackGroup[trackGroups.length];
            rendererFormatSupports[i] = new int[trackGroups.length][];
        }

        // Determine the extent to which each renderer supports mixed mimeType adaptation.
        @RendererCapabilities.AdaptiveSupport
        int[] rendererMixedMimeTypeAdaptationSupports =
                getMixedMimeTypeAdaptationSupports(rendererCapabilities);

        for (int groupIndex = 0; groupIndex < trackGroups.length; groupIndex++) {
            TrackGroup group = trackGroups.get(groupIndex);
            // Associate the group to a preferred renderer.
            int rendererIndex = findRenderer(rendererCapabilities, group);
            // Evaluate the support that the renderer provides for each track in the group.
            @RendererCapabilities.Capabilities
            int[] rendererFormatSupport =
                    rendererIndex == rendererCapabilities.length
                            ? new int[group.length]
                            : getFormatSupport(rendererCapabilities[rendererIndex], group);
            // Stash the results.
            int rendererTrackGroupCount = rendererTrackGroupCounts[rendererIndex];
            rendererTrackGroups[rendererIndex][rendererTrackGroupCount] = group;
            rendererFormatSupports[rendererIndex][rendererTrackGroupCount] = rendererFormatSupport;
            rendererTrackGroupCounts[rendererIndex]++;
        }

        // Create a track group array for each renderer, and trim each rendererFormatSupports entry.
        TrackGroupArray[] rendererTrackGroupArrays = new TrackGroupArray[rendererCapabilities.length];
        int[] rendererTrackTypes = new int[rendererCapabilities.length];
        for (int i = 0; i < rendererCapabilities.length; i++) {
            int rendererTrackGroupCount = rendererTrackGroupCounts[i];
            rendererTrackGroupArrays[i] =
                    new TrackGroupArray(
                            Util.nullSafeArrayCopy(rendererTrackGroups[i], rendererTrackGroupCount));
            rendererFormatSupports[i] =
                    Util.nullSafeArrayCopy(rendererFormatSupports[i], rendererTrackGroupCount);
            rendererTrackTypes[i] = rendererCapabilities[i].getTrackType();
        }

        // Create a track group array for track groups not mapped to a renderer.
        int unmappedTrackGroupCount = rendererTrackGroupCounts[rendererCapabilities.length];
        TrackGroupArray unmappedTrackGroupArray =
                new TrackGroupArray(
                        Util.nullSafeArrayCopy(
                                rendererTrackGroups[rendererCapabilities.length], unmappedTrackGroupCount));

        // Package up the track information and selections.
        MappedTrackInfo mappedTrackInfo =
                new MappedTrackInfo(
                        rendererTrackTypes,
                        rendererTrackGroupArrays,
                        rendererMixedMimeTypeAdaptationSupports,
                        rendererFormatSupports,
                        unmappedTrackGroupArray);

        return new TrackSelectorResult(configs, selections, mappedTrackInfo);
    }

}

4. Wiring into SimpleExoPlayer

Create a MultiTrackSelector and a MultiTrackRenderFactory, and pass them into SimpleExoPlayer.Builder to construct the SimpleExoPlayer.

DefaultTrackSelector trackSelector = new MultiTrackSelector(context);
RenderersFactory renderersFactory = new MultiTrackRenderFactory(context);
// "handler" is a Handler attached to the Looper the player will be accessed from.
SimpleExoPlayer.Builder builder = new SimpleExoPlayer.Builder(context, renderersFactory, trackSelector,
        new DefaultLoadControl(), DefaultBandwidthMeter.getSingletonInstance(context), handler.getLooper(),
        new AnalyticsCollector(Clock.DEFAULT), true, Clock.DEFAULT);
SimpleExoPlayer simpleExoPlayer = builder.build();
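To actually feed the player two audio streams, one option is to merge two single-track sources so the timeline exposes two audio track groups, which MultiTrackSelector can then map to the two renderers. This is a sketch: the file paths and user-agent string are placeholders, and it assumes the same ExoPlayer 2.x MediaSource APIs as the code above.

```java
// Merge two single-audio sources so MultiTrackSelector sees two audio
// track groups and assigns each to its own MultiTrackAudioRender.
DataSource.Factory dataSourceFactory =
        new DefaultDataSourceFactory(context, "multi-track-demo");
MediaSource accompaniment = new ProgressiveMediaSource.Factory(dataSourceFactory)
        .createMediaSource(Uri.parse("file:///sdcard/accompaniment.mp3"));
MediaSource vocals = new ProgressiveMediaSource.Factory(dataSourceFactory)
        .createMediaSource(Uri.parse("file:///sdcard/vocals.mp3"));
MediaSource merged = new MergingMediaSource(accompaniment, vocals);

simpleExoPlayer.prepare(merged);
simpleExoPlayer.setPlayWhenReady(true);
```

MergingMediaSource expects the merged sources to cover the same playback period, which fits the accompaniment/vocal use case where both files have the same duration.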

With that, multiple audio tracks play simultaneously. ExoPlayer's architecture is well designed and easy to extend, which is something worth learning from.

Reprinted from blog.csdn.net/u011686167/article/details/121191525