There are many audio player libraries out there, such as wavesurfer.js and howler.js, but none of them handle large audio files well: a file over 100 MB can crash the program. In short, they didn't meet my needs, so I decided to implement an audio player myself, keeping every requirement technically under my control. Below I briefly introduce implementations based on wavesurfer.js and howler.js, and then explain how to build a custom voice player on top of the native audio API.
The full source can be downloaded from GitHub.
wavesurfer.js
I initially chose wavesurfer.js mainly for its waveform rendering.
The effect is as follows:
Pretty, isn't it?
Here are the implementation steps:
- initialization
this.playWavesurfer = WaveSurfer.create({
    container: '#waveform2',
    mediaType: 'audio',
    height: 43,
    scrollParent: false,
    hideScrollbar: true,
    waveColor: '#ed6c00',
    progressColor: '#dd5e98',
    cursorColor: '#ddd5e9',
    interact: true,
    cursorWidth: 1,
    barHeight: 1,
    barWidth: 1,
    plugins: [
        WaveSurfer.microphone.create()
    ]
});
- Dynamically load audio URL
this.playWavesurfer.load(this.audioUrl);
- Show loading progress, and compute the total duration once loading completes
this.playWavesurfer.on('loading', (percent, xhr) => {
    this.audioLoadPercent = percent - 1;
})
this.playWavesurfer.on('ready', () => {
    this.audioLoading = false;
    const duration = this.playWavesurfer.getDuration();
    this.duration = this.formatTime(duration);
    this.currentTime = this.formatTime(0);
})
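The formatTime helper used in these handlers is never shown in the article. Since the player later resets currentTime to "00:00:00", it presumably formats seconds as HH:MM:SS; a minimal sketch (my assumption, not the author's code) might look like this:

```javascript
// Hypothetical formatTime: seconds -> "HH:MM:SS" string,
// matching the "00:00:00" reset value used elsewhere in the article.
function formatTime(seconds) {
  const s = Math.floor(seconds);
  const pad = (n) => String(n).padStart(2, '0');
  const h = Math.floor(s / 3600);
  const m = Math.floor((s % 3600) / 60);
  return `${pad(h)}:${pad(m)}:${pad(s % 60)}`;
}
```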
- Calculate duration during playback
this.playWavesurfer.on('audioprocess', () => {
    const duration = this.playWavesurfer.getDuration();
    const currentTime = this.playWavesurfer.getCurrentTime();
    this.currentTime = this.formatTime(currentTime);
    this.duration = this.formatTime(duration);
    // Compare the formatted strings to detect the end of playback
    if (this.currentTime === this.duration) {
        this.audioPlayingFlag = false;
    }
});
- Play / pause
this.playWavesurfer.playPause.bind(this.playWavesurfer)();
- Fast-forward / rewind
this.playWavesurfer.skip(15);
//this.playWavesurfer.skip(-15);
- Set the playback rate
this.playWavesurfer.setPlaybackRate(value, true);
With that, the basic functions are in place.
howler.js
- Initialize and load the audio URL
this.howler = new Howl({
    src: [this.audioUrl]
});
- Calculate the total audio duration after loading
this.howler.on('load', () => {
    this.audioLoading = false;
    const duration = this.howler.duration();
    this.duration = this.formatTime(duration);
    this.currentTime = this.formatTime(0);
});
- Get the current time during playback
this.currentTime = this.formatTime(this.howler.seek());
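The article never shows the loop that drives this update (the playTimer later cancelled in the end handler). A plausible sketch polls howler once per animation frame, with the percentage computation pulled into a small helper; progressWidth is a name I made up, not part of the article's code:

```javascript
// Hypothetical helper: convert a playback position into the CSS width
// string bound to the .progress element, e.g. 30s of 120s -> "25%".
function progressWidth(currentTime, duration) {
  return ((currentTime / duration) * 100) + '%';
}

// Assumed polling loop inside the component (commented out because
// requestAnimationFrame only exists in the browser):
// step() {
//   const pos = this.howler.seek();
//   this.currentTime = this.formatTime(pos);
//   this.progressPercent = progressWidth(pos, this.howler.duration());
//   if (this.howler.playing()) {
//     this.playTimer = requestAnimationFrame(() => this.step());
//   }
// }
```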
- Playback finished
this.howler.on('end', () => {
    this.audioPlayingFlag = false;
    this.siriWave2.stop();
    this.currentTime = "00:00:00";
    this.progressPercent = 0;
    cancelAnimationFrame(this.playTimer);
})
- Fast-forward / rewind
this.howler.seek(this.howler.seek() + 15);
//this.howler.seek(this.howler.seek() - 15);
- Set the playback rate
this.howler.rate(value);
- Play / pause
this.howler.play();
// this.howler.pause();
- Seek manually by clicking the progress bar
<div id="waveform2" ref="waveform2" @click="changProgress">
    <div class="bar" v-if="!audioLoading&&!audioPlayingFlag"></div>
    <div class="progress" :style="{width: `${progressPercent}`}"></div>
</div>

changProgress(e) {
    if (this.howler.playing()) {
        this.howler.seek((e.offsetX / this.$refs['waveform2'].offsetWidth) * this.howler.duration());
    }
},
With that, the basic functions are in place.
Using the audio API to implement the player
Effect picture (the animation currently uses the siriwave.js library):
First, define a hidden audio tag (it could also be generated dynamically in JS):
<audio :src="audioUrl" style="display: none;" controls ref="audio"></audio>
this.audio = this.$refs['audio'];
- After obtaining the audio URL, load it dynamically
this.audio.load();
- Audio loaded
this.audio.addEventListener("canplaythrough", () => {
    this.audioLoading = false;
    console.log('music ready');
}, false);
- Listen for canplay and compute the audio duration
this.audio.addEventListener("canplay", this.showTime, false);

showTime() {
    if (!isNaN(this.audio.duration)) {
        this.duration = this.formatTime(this.audio.duration);
        this.currentTime = this.formatTime(this.audio.currentTime);
    }
},
- Update the current time as playback progresses
this.audio.addEventListener("timeupdate", this.showTime, true);
- Listen for playback events
this.audio.addEventListener('play', () => {
    this.audioPlaying();
}, false);
- Playback finished
this.audio.addEventListener('ended', () => {
    this.audioPlayingFlag = false;
    this.siriWave2.stop();
    this.currentTime = "00:00:00";
    this.progressPercent = 0;
    cancelAnimationFrame(this.playTimer);
}, false)
- Fast-forward / rewind
this.audio.currentTime += 15;
// this.audio.currentTime -= 15;
- Set the playback rate
this.audio.playbackRate = value;
- Play / pause
this.audio.play();
// this.audio.pause();
- Seeking in the audio
<div id="waveform2" ref="waveform2" @click="changProgress">
    <div class="bar" v-if="!audioLoading&&!audioPlayingFlag"></div>
    <div class="progress" :style="{width: `${progressPercent}`}"></div>
</div>
Compute the target time from the click position:
changProgress(e) {
    // if (this.audioPlayingFlag) {
    this.audio.currentTime = (e.offsetX / this.$refs['waveform2'].offsetWidth) * this.audio.duration;
    this.progressPercent = ((this.audio.currentTime / this.audio.duration) * 100) + '%';
    // }
},
- Siri-style animation
this.siriWave = new SiriWave({
    container: this.$refs['waveform'],
    height: 43,
    cover: true,
    color: '#ed6c00',
    speed: 0.03,
    amplitude: 1,
    frequency: 6
});
Start / stop the animation:
this.siriWave.start();
// this.siriWave.stop();
With that, the basic functions are in place, and even large audio files no longer freeze the player.
Pitfalls
The big pitfall concerned the built-in audio element's seeking: it worked in external browsers and in the VS Code main window, but inside a VS Code plugin webview seeking failed and the position always jumped back to 0. After digging through the documentation, I found this description on MDN:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Configuring_servers_for_Ogg_media#Handle_HTTP_1.1_byte_range_requests_correctly
Handle HTTP 1.1 byte range requests correctly

In order to support seeking and playing back regions of the media that aren't yet downloaded, Gecko uses HTTP 1.1 byte-range requests to retrieve the media from the seek target position. In addition, Gecko uses byte-range requests to seek to the end of the media (assuming you serve the Content-Length header) in order to determine the duration of the media.

Your server should accept the Accept-Ranges: bytes HTTP header if it can accept byte-range requests. It must return 206: Partial Content to all byte range requests; otherwise, browsers can't be sure you actually support byte range requests.

Your server must also return 206: Partial Content for the request Range: bytes=0- as well.
I verified that this is indeed related to the response headers. I tested different MP3 resources with different response headers, with the following results:

IE:
- Content-Type must be audio/mpeg for playback to work; application/octet-stream does not play.
- Content-Length is required; Accept-Ranges makes no difference.

Chrome:
- Content-Type makes no difference; even application/octet-stream is playable.
- Content-Length and Accept-Ranges are both required for setting currentTime to work.

In other words, IE needs a correct Content-Type, while Chrome needs the Content-Length and Accept-Ranges headers.
Then it occurred to me that the VS Code plugin system serves resources through a Service Worker:
const headers = {
    'Content-Type': entry.mime,
    'Content-Length': entry.data.byteLength.toString(),
    'Access-Control-Allow-Origin': '*',
};
Sure enough, the Accept-Ranges header was missing. Adding it:
const headers = {
    'Content-Type': entry.mime,
    'Content-Length': entry.data.byteLength.toString(),
    'Access-Control-Allow-Origin': '*',
};
/**
 * @author lichangwei
 * @description Extra handling for audio; otherwise the playback position cannot be adjusted.
 * https://developer.mozilla.org/en-US/docs/Web/HTTP/Configuring_servers_for_Ogg_media#Handle_HTTP_1.1_byte_range_requests_correctly
 */
if (entry.mime === 'audio/mpeg') {
    headers['Accept-Ranges'] = 'bytes';
}
After adding this, seeking works.
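Adding Accept-Ranges was enough for the VS Code case, but a server (or Service Worker) that fully satisfies the MDN requirements also has to answer Range requests with 206 Partial Content. As a sketch of the parsing involved (my own illustration, not VS Code's code; it only handles the simple `bytes=start-` and `bytes=start-end` forms and ignores suffix ranges like `bytes=-500`):

```javascript
// Hypothetical sketch: parse a "Range: bytes=start-end" header into the
// byte span to serve with a 206 response. Returns null for unsupported
// or unsatisfiable ranges.
function parseRange(rangeHeader, totalLength) {
  const m = /^bytes=(\d+)-(\d*)$/.exec(rangeHeader || '');
  if (!m) return null;
  const start = parseInt(m[1], 10);
  const end = m[2] ? parseInt(m[2], 10) : totalLength - 1;
  if (start > end || end >= totalLength) return null;
  return { start, end };
}

// A Service Worker could then slice entry.data to [start, end] and respond
// with status 206 plus a "Content-Range: bytes start-end/total" header.
```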
Follow up
Let's take a look at how Evernote implements voice notes. How does Evernote avoid processing large files?
- During recording, the real waveform is drawn from the live audio data
- After recording ends, it is replaced with a fake (static) waveform
Processing audio data in real time during recording is in fact fine and won't crash the browser. It's the file produced after recording that holds too much data to process in one go: memory usage jumps by 2-3 GB, which crashes the program.
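To sketch the real-time idea (my own illustration, not Evernote's code): each chunk of PCM samples can be reduced to one peak per waveform bucket as it arrives, so only the peaks are kept in memory instead of the full sample data:

```javascript
// Reduce raw samples to one peak (max absolute value) per bucket,
// the typical data needed to draw one waveform bar per pixel.
function computePeaks(samples, buckets) {
  const peaks = new Array(buckets).fill(0);
  const perBucket = Math.ceil(samples.length / buckets);
  for (let i = 0; i < samples.length; i++) {
    const b = Math.min(buckets - 1, Math.floor(i / perBucket));
    const v = Math.abs(samples[i]);
    if (v > peaks[b]) peaks[b] = v;
  }
  return peaks;
}
```

Running this per recorded chunk keeps memory proportional to the number of waveform bars, not the audio length.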
The waveform rendering will be covered in a follow-up post.