Implementing an audio player with the audio API

There are many audio player libraries available, such as wavesurfer.js and howler.js, but none of them handle large audio files well: a file over 100 MB may crash the program. In short, they do not meet my current needs, so I decided to implement an audio player myself, so that any requirement stays technically under my control. Below I briefly introduce the implementations based on wavesurfer.js and howler.js, and then explain how to implement a custom audio player with the native audio API.

The full source code can be downloaded from GitHub.

wavesurfer.js

I initially chose wavesurfer.js mainly for its waveform rendering, which looks quite nice.
Here are the implementation steps:

  1. Initialization
this.playWavesurfer = WaveSurfer.create({
	container: '#waveform2',
	mediaType: 'audio',
	height: 43,
	scrollParent: false,
	hideScrollbar: true,
	waveColor: '#ed6c00',
	progressColor: '#dd5e98',
	cursorColor: '#ddd5e9',
	interact: true,
	cursorWidth: 1,
	barHeight: 1,
	barWidth: 1,
	plugins: [
		WaveSurfer.microphone.create()
	]
});
  2. Dynamically load the audio URL
this.playWavesurfer.load(this.audioUrl);
  3. Listen for loading progress, then compute the total duration once ready
this.playWavesurfer.on('loading', (percent, xhr) => {
	this.audioLoadPercent = percent - 1;
});
this.playWavesurfer.on('ready', () => {
	this.audioLoading = false;
	const duration = this.playWavesurfer.getDuration();
	this.duration = this.formatTime(duration);
	this.currentTime = this.formatTime(0);
});
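
The formatTime helper used throughout these snippets is never shown in the original post. A minimal sketch (an assumption on my part, matching the "00:00:00" reset value used later) could be:

```javascript
// Hypothetical formatTime helper (not shown in the original post):
// converts a duration in seconds to an "HH:MM:SS" string.
function formatTime(seconds) {
	const total = Math.floor(seconds || 0);
	const pad = (n) => String(n).padStart(2, '0');
	const h = Math.floor(total / 3600);
	const m = Math.floor((total % 3600) / 60);
	const s = total % 60;
	return `${pad(h)}:${pad(m)}:${pad(s)}`;
}
```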
  4. Update the elapsed time during playback
this.playWavesurfer.on('audioprocess', () => {
	const duration = this.playWavesurfer.getDuration();
	const currentTime = this.playWavesurfer.getCurrentTime();
	this.currentTime = this.formatTime(currentTime);
	this.duration = this.formatTime(duration);
	if (this.currentTime === this.duration) {
		this.audioPlayingFlag = false;
	}
});
  5. Play / pause
this.playWavesurfer.playPause();
  6. Fast forward / rewind
this.playWavesurfer.skip(15);
//this.playWavesurfer.skip(-15);
  7. Set the playback rate
this.playWavesurfer.setPlaybackRate(value, true);

With that, the basic features are in place.

Implementation with howler.js

  1. Initialize with the audio source
this.howler = new Howl({
	src: [this.audioUrl]
});
  2. Compute the total duration after loading
this.howler.on('load', () => {
	this.audioLoading = false;
	const duration = this.howler.duration();
	this.duration = this.formatTime(duration);
	this.currentTime = this.formatTime(0);
});

  3. Get the current time during playback
this.currentTime = this.formatTime(this.howler.seek());
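
howler.js has no timeupdate event, so the line above has to run inside a polling loop; the this.playTimer handle cancelled in the end callback presumably comes from such a loop. A sketch under that assumption (the helper and loop below are mine, not from the original post), with the display math factored into a pure function:

```javascript
// Hypothetical helper: computes the two display values used in this post
// from a howler-like player. The rAF polling loop that would call it each
// frame is sketched in the trailing comment.
function playbackProgress(player, formatTime) {
	const seek = player.seek() || 0;
	const duration = player.duration() || 0;
	return {
		currentTime: formatTime(seek),
		progressPercent: duration ? ((seek / duration) * 100) + '%' : '0%',
	};
}

// In the component, something like:
//   const step = () => {
//     Object.assign(this, playbackProgress(this.howler, this.formatTime));
//     if (this.howler.playing()) this.playTimer = requestAnimationFrame(step);
//   };
//   this.playTimer = requestAnimationFrame(step);
```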
  4. Handle the end of playback
this.howler.on('end', () => {
	this.audioPlayingFlag = false;
	this.siriWave2.stop();
	this.currentTime = "00:00:00";
	this.progressPercent = 0;
	cancelAnimationFrame(this.playTimer);
});
  5. Fast forward / rewind
this.howler.seek(this.howler.seek() + 15);
//this.howler.seek(this.howler.seek() - 15);
  6. Set the playback rate
this.howler.rate(value);
  7. Play / pause
this.howler.play();
// this.howler.pause();
  8. Seek manually via the progress bar
<div id="waveform2" ref="waveform2" @click="changProgress">
	<div class="bar" v-if="!audioLoading && !audioPlayingFlag"></div>
	<div class="progress" :style="{width: `${progressPercent}`}"></div>
</div>
changProgress(e) {
	if (this.howler.playing()) {
		this.howler.seek((e.offsetX / this.$refs['waveform2'].offsetWidth) * this.howler.duration());
	}
},
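
The seek target in changProgress is just the ratio of the click position to the bar width; pulling it into a standalone function (a hypothetical refactor, not in the original) makes the formula easier to see and test:

```javascript
// Hypothetical helper: maps a click at offsetX on a bar of the given width
// to a playback time within the given duration (seconds).
function clickToSeek(offsetX, barWidth, duration) {
	return (offsetX / barWidth) * duration;
}

// e.g. clicking one quarter of the way into a 2-minute file:
// clickToSeek(50, 200, 120) -> 30
```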

With that, the basic features are in place.

Implementing the player with the audio API

For the visual effect I am using the siriwave.js animation library for now.

First, define a hidden audio tag (it could also be created dynamically in JS):

<audio :src="audioUrl" style="display: none;" controls ref="audio"></audio>
this.audio = this.$refs['audio'];
  1. After obtaining the audio URL, load it
this.audio.load();
  2. Listen for the audio finishing loading
this.audio.addEventListener("canplaythrough", () => {
	this.audioLoading = false;
	console.log('music ready');
}, false);
  3. Compute the duration once the audio can play
this.audio.addEventListener("canplay", this.showTime, false);
showTime() {
	if (!isNaN(this.audio.duration)) {
		this.duration = this.formatTime(this.audio.duration);
		this.currentTime = this.formatTime(this.audio.currentTime);
	}
},
  4. Recompute the current time as playback progresses
this.audio.addEventListener("timeupdate", this.showTime, true);
  5. Listen for the play event
this.audio.addEventListener('play', () => {
	this.audioPlaying();
}, false);
  6. Handle the end of playback
this.audio.addEventListener('ended', () => {
	this.audioPlayingFlag = false;
	this.siriWave2.stop();
	this.currentTime = "00:00:00";
	this.progressPercent = 0;
	cancelAnimationFrame(this.playTimer);
}, false);
  7. Fast forward / rewind
this.audio.currentTime += 15;
// this.audio.currentTime -= 15;
  8. Set the playback rate
this.audio.playbackRate = value;
  9. Play / pause
this.audio.play();
// this.audio.pause();
  10. Seek within the audio
<div id="waveform2" ref="waveform2" @click="changProgress">
	<div class="bar" v-if="!audioLoading&&!audioPlayingFlag"></div>
	<div class="progress" :style="{width: `${progressPercent}`}"></div>
</div>

Compute the target time from the click position:

changProgress(e) {
	// if (this.audioPlayingFlag) {
	this.audio.currentTime = (e.offsetX / this.$refs['waveform2'].offsetWidth) * this.audio.duration;
	this.progressPercent = ((this.audio.currentTime / this.audio.duration) * 100) + '%';
	// }
},
  11. Siri-style wave animation
this.siriWave = new SiriWave({
	container: this.$refs['waveform'],
	height: 43,
	cover: true,
	color: '#ed6c00',
	speed: 0.03,
	amplitude: 1,
	frequency: 6
});

Start and stop the animation:

this.siriWave.start();
// this.siriWave.stop();

With that, the basic features are in place, and even large audio files no longer freeze the player.

Pitfalls

The big pitfall here concerns seeking with the built-in audio element: setting the playback position works in an external browser and in the VS Code main code, but inside a VS Code plugin the position snaps back to 0. Digging through the documentation, I found this description on MDN:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Configuring_servers_for_Ogg_media#Handle_HTTP_1.1_byte_range_requests_correctly

Handle HTTP 1.1 byte range requests correctly
In order to support seeking and playing back regions of the media that aren’t yet downloaded, Gecko uses HTTP 1.1 byte-range requests to retrieve the media from the seek target position. In addition, Gecko uses byte-range requests to seek to the end of the media (assuming you serve the Content-Length header) in order to determine the duration of the media.
Your server should accept the Accept-Ranges: bytes HTTP header if it can accept byte-range requests. It must return 206: Partial content to all byte range requests; otherwise, browsers can’t be sure you actually support byte range requests.
Your server must also return 206: Partial Content for the request Range: bytes=0- as well.

This confirmed that the problem is related to the response headers. I tested MP3 resources served with different response headers, with the following results:

IE
  • Content-Type must be set to audio/mpeg for playback; application/octet-stream does not work.
  • Content-Length is required. Accept-Ranges makes no difference.

Chrome
  • Content-Type does not matter; audio served as application/octet-stream still plays.
  • Content-Length and Accept-Ranges are both required for changing currentTime to work.

That is to say, IE needs a correct Content-Type (plus Content-Length), while Chrome needs both the Content-Length and Accept-Ranges headers.
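
To make the byte-range requirement concrete, here is a hypothetical sketch (not from the original post) of how a server could compute the status and headers for a Range: bytes=start-end request, following the MDN guidance quoted above:

```javascript
// Hypothetical sketch of minimal HTTP byte-range handling: respond
// 206 Partial Content with a Content-Range header when the client sends
// "Range: bytes=start-end". Suffix ranges ("bytes=-N") are not handled
// in this simplified version.
function rangeResponse(rangeHeader, totalLength) {
	if (!rangeHeader) {
		// No Range header: plain 200 with the full body.
		return {
			status: 200,
			headers: { 'Accept-Ranges': 'bytes', 'Content-Length': String(totalLength) },
		};
	}
	const match = /^bytes=(\d*)-(\d*)$/.exec(rangeHeader);
	const start = match && match[1] ? parseInt(match[1], 10) : 0;
	const end = match && match[2] ? parseInt(match[2], 10) : totalLength - 1;
	if (!match || start > end || end >= totalLength) {
		// Unsatisfiable range.
		return { status: 416, headers: { 'Content-Range': `bytes */${totalLength}` } };
	}
	return {
		status: 206,
		headers: {
			'Accept-Ranges': 'bytes',
			'Content-Range': `bytes ${start}-${end}/${totalLength}`,
			'Content-Length': String(end - start + 1),
		},
	};
}
```

The "Range: bytes=0-" request MDN calls out maps to a 206 covering the whole file, which is exactly what browsers probe with before allowing seeks.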

Then it occurred to me that the VS Code plugin system uses a Service Worker, whose response headers are built like this:

const headers = {
	'Content-Type': entry.mime,
	'Content-Length': entry.data.byteLength.toString(),
	'Access-Control-Allow-Origin': '*',
};

Sure enough, the Accept-Ranges field was missing, so I added it:

const headers = {
	'Content-Type': entry.mime,
	'Content-Length': entry.data.byteLength.toString(),
	'Access-Control-Allow-Origin': '*',
};

/**
 * @author lichangwei
 * @description Extra handling for audio; otherwise seeking does not work.
 * https://developer.mozilla.org/en-US/docs/Web/HTTP/Configuring_servers_for_Ogg_media#Handle_HTTP_1.1_byte_range_requests_correctly
 */
if (entry.mime === 'audio/mpeg') {
	headers['Accept-Ranges'] = 'bytes';
}

After adding it, seeking works.

Follow up

Take a look at Evernote's voice note implementation.
How does Evernote avoid processing large files?

  • During recording, the real waveform is drawn from the live audio data
  • Once recording ends, it is replaced with a fake, static waveform

Processing the audio data in real time during recording is actually fine and does not crash the browser. It is the finished recording that contains too much data to process at once: memory usage jumps by 2-3 GB, which crashes the program.
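
The real-time approach can be sketched as reducing each incoming audio chunk to a handful of peak values instead of keeping raw samples. This helper is hypothetical (not Evernote's actual code), but it shows why memory stays flat:

```javascript
// Hypothetical sketch of real-time waveform reduction: instead of keeping
// every raw sample (which is what blows memory up by gigabytes for long
// recordings), each incoming chunk is collapsed to a few peak values that
// are all the waveform drawing ever needs.
function chunkToPeaks(samples, peaksPerChunk) {
	const bucketSize = Math.ceil(samples.length / peaksPerChunk);
	const peaks = [];
	for (let i = 0; i < samples.length; i += bucketSize) {
		let max = 0;
		for (let j = i; j < Math.min(i + bucketSize, samples.length); j++) {
			const v = Math.abs(samples[j]);
			if (v > max) max = v;
		}
		peaks.push(max);
	}
	return peaks;
}
```

Called once per chunk as the recorder delivers data, only the peaks array grows, and it stays tiny compared to the raw PCM.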

The waveform rendering will be implemented in a follow-up.


Origin blog.csdn.net/woyebuzhidao321/article/details/131293758