Android Audio/Video Task 2

Task 2: On the Android platform, use the AudioRecord and AudioTrack APIs to capture and play back PCM audio data, and implement reading and writing WAV audio files.

Remember to declare the required permissions in the manifest:

<uses-permission android:name="android.permission.RECORD_AUDIO"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

AudioRecord usage flow:

  1. Decide on the sample rate, channel configuration, encoding and recording source.
  2. Get the minimum acceptable buffer size from AudioRecord.getMinBufferSize.
  3. Create an AudioRecord object, audioRecord; its first parameter is the recording source. Note that both AudioRecord and AudioTrack take a channel parameter, but one uses the "in" variants and the other the "out" variants (CHANNEL_IN_* vs. CHANNEL_OUT_*), so don't mix them up.
  4. Start recording with audioRecord.startRecording(), and at the same time start a thread that opens an output stream on a new file, repeatedly reads the captured data into a byte array with audioRecord.read, and writes that byte array to the output stream.
  5. Pause recording: call audioRecord.stop(), but do not release; the output stream stops being written to but is not closed.
  6. Resume recording: call audioRecord.startRecording() again and keep reading from audioRecord and writing to the output stream (see the sketch after this list).
  7. Stop recording: call audioRecord.stop() and then audioRecord.release() to free resources; flush the output stream and close it.
  8. The audio recorded this way is raw PCM, i.e. a file with no header. To turn it into a WAV file, keep the data unchanged and prepend the WAV header at the start of the file.
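
The AudioRecorder class shown later only implements start and stop. As a rough, hypothetical sketch of the pause/resume idea in steps 5-6 (the class name, the hard-coded 16 kHz mono 16-bit format and the FileOutputStream kept open across pauses are all assumptions, not part of the code below):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

import java.io.FileOutputStream;
import java.io.IOException;

public class PausableRecorder {
    private final AudioRecord audioRecord;
    private final FileOutputStream os;   // stays open across pauses so data keeps appending
    private final byte[] buf;
    private volatile boolean isRecording = false;

    public PausableRecorder(String pcmPath) throws IOException {
        int size = AudioRecord.getMinBufferSize(16000,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, 16000,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, size);
        os = new FileOutputStream(pcmPath);
        buf = new byte[size];
    }

    // Start, and also resume after pause() (step 6): just start capturing and writing again.
    public void start() {
        audioRecord.startRecording();
        isRecording = true;
        new Thread(() -> {
            while (isRecording) {
                int read = audioRecord.read(buf, 0, buf.length);
                if (read > 0) {
                    try { os.write(buf, 0, read); } catch (IOException e) { e.printStackTrace(); }
                }
            }
        }).start();
    }

    // Step 5: stop capturing but keep both the recorder and the output stream alive.
    public void pause() {
        isRecording = false;
        audioRecord.stop();
    }

    // Step 7: release everything and finish the PCM file.
    public void stop() throws IOException {
        isRecording = false;
        audioRecord.stop();
        audioRecord.release();
        os.flush();
        os.close();
    }
}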

AudioTrack usage flow:

  1. Decide on the sample rate, channel configuration and encoding.
  2. Get the minimum acceptable buffer size from AudioTrack.getMinBufferSize.
  3. Create an AudioTrack object, audioTrack. Its first parameter describes the kind of audio being played, with options such as speech, music, movie, etc. (presumably so the system can optimize for each scenario). The last parameter, mode, selects how the track is fed (see the MODE_STATIC sketch after this list):
    1. MODE_STATIC: the audio data is transferred from Java to the native layer once, and only then does playback start.
    2. MODE_STREAM: the audio data is streamed from Java to the native layer while playback is running. (The two differ in the order of transfer and playback.)
  4. Start playback with audioTrack.play(), and at the same time start a thread that opens an input stream on the audio file (AudioTrack can play both raw PCM files and WAV files with a header), repeatedly reads the file into a byte array, and writes that byte array with audioTrack.write.
  5. Pause playback: call audioTrack.stop(), but do not release; the input stream stops being read but is not closed (it keeps its file position).
  6. Resume playback: call audioTrack.play() again and let the input stream continue reading from where it stopped.
  7. Stop playback: call audioTrack.stop() and then audioTrack.release() to free resources, and close the input stream (its file position is lost, so the next playback starts from the beginning).
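
For contrast with the MODE_STREAM playback used in the AudioTracker class below, a minimal MODE_STATIC sketch (the class name and the hard-coded 16 kHz mono 16-bit format are assumptions for illustration; the whole clip must fit in memory):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class StaticModeDemo {
    // MODE_STATIC: hand the entire clip to the track once, then start playback.
    public static AudioTrack playClip(byte[] pcm) {
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 16000,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                pcm.length, AudioTrack.MODE_STATIC);  // buffer size = size of the whole clip
        track.write(pcm, 0, pcm.length);              // data crosses to the native layer once
        track.play();                                 // playback starts only after the data is loaded
        return track;
    }
}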

Using AudioRecord:

import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.Environment;
import android.util.Log;
import android.view.View;
import android.widget.Button;

import com.example.lll.va.R;

import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;

public class AudioRecorder {

    private String tag = "AudioRecorder";
    // Audio input source: the microphone
    private final static int AUDIO_INPUT = MediaRecorder.AudioSource.MIC;
    // Sample rate. 44100 Hz is the current standard, but some devices still support
    // 22050, 16000 and 11025; common rates are 22.05 kHz, 44.1 kHz and 48 kHz.
    private final static int AUDIO_SAMPLE_RATE = 16000;
    // Channel configuration: mono
    private final static int AUDIO_CHANNEL = AudioFormat.CHANNEL_IN_MONO;
    // Encoding
    private final static int AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;

    private byte data[];
    private boolean isRecording = false;

    private AudioRecord audioRecord = null;  // the AudioRecord instance
    private int recordBufSize = 0; // minimum size of the record buffer
    private Button btnStart;
    private Button btnStop;

    public static void startTask2byAudioRecord(Activity activity){
        final AudioRecorder audioRecorder = new AudioRecorder();
        audioRecorder.createAudioRecord();
        audioRecorder.btnStart = activity.findViewById(R.id.btn_start_record_audio);
        audioRecorder.btnStop = activity.findViewById(R.id.btn_stop_record_audio);
        audioRecorder.btnStart.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                audioRecorder.startRecord();
            }
        });
        audioRecorder.btnStop.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                audioRecorder.stopRecord();
            }
        });
    }

    public void createAudioRecord() {
        recordBufSize = AudioRecord.getMinBufferSize(AUDIO_SAMPLE_RATE,
                AUDIO_CHANNEL, AUDIO_ENCODING);  // minimum buffer size AudioRecord will accept
        audioRecord = new AudioRecord(AUDIO_INPUT, AUDIO_SAMPLE_RATE, AUDIO_CHANNEL, AUDIO_ENCODING, recordBufSize);
        data = new byte[recordBufSize];
    }

    public void startRecord() {
        audioRecord.startRecording();
        isRecording = true;
        thread_w.start();
    }

    private String filename = Environment.getExternalStorageDirectory() + "/test";  // raw PCM output path (no header)
    Thread thread_w = new Thread(new Runnable() {
        @Override
        public void run() {

            FileOutputStream os = null;

            try {
                os = new FileOutputStream(filename);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            }

            if (null != os) {
                Log.d(tag, "isRecording = " + isRecording);
                while (isRecording) {
                    int read = audioRecord.read(data, 0, recordBufSize);
                    Log.d(tag, "read size = " + read);
                    // If the read did not return an error code, write the captured bytes to the file.
                    if (read > 0) {
                        try {
                            os.write(data, 0, read);  // write only the bytes actually read
                            Log.d(tag, "os write size = " + read);
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                }

                try {
                    os.flush();
                    os.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    });

    public void stopRecord() {
        isRecording = false;
        audioRecord.stop();   // makes a blocking read() return so the writer loop can exit
        try {
            thread_w.join();  // wait for the writer thread to flush and close the PCM file
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        audioRecord.release();
        // Prepend the WAV header to the raw PCM file.
        PcmToWavUtil util = new PcmToWavUtil(AUDIO_SAMPLE_RATE, AUDIO_CHANNEL, AUDIO_ENCODING);
        util.pcmToWav(filename, filename + ".wav");
    }
}
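
As a quick size check: at 16 kHz, mono, 16-bit PCM the data rate is 16000 × 1 × 2 = 32,000 bytes per second, so a 10-second recording produces roughly 320,000 bytes of raw PCM, and the WAV file written by the utility class below is exactly 44 bytes larger (only the header is prepended).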

Utility class that adds a WAV file header to the generated PCM file:


import android.media.AudioFormat;
import android.media.AudioRecord;

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class PcmToWavUtil {

    /**
     * Buffer size used when copying the audio data
     */
    private int mBufferSize;
    /**
     * Sample rate
     */
    private int mSampleRate;
    /**
     * Channel configuration
     */
    private int mChannel;


    /**
     * @param sampleRate sample rate in Hz
     * @param channel channel configuration
     * @param encoding audio data format (encoding)
     */
    PcmToWavUtil(int sampleRate, int channel, int encoding) {
        this.mSampleRate = sampleRate;
        this.mChannel = channel;
        this.mBufferSize = AudioRecord.getMinBufferSize(mSampleRate, mChannel, encoding);
    }


    /**
     * Convert a PCM file to a WAV file.
     *
     * @param inFilename source (PCM) file path
     * @param outFilename destination (WAV) file path
     */
    public void pcmToWav(String inFilename, String outFilename) {
        FileInputStream in;
        FileOutputStream out;
        long totalAudioLen;
        long totalDataLen;
        long longSampleRate = mSampleRate;
        int channels = mChannel == AudioFormat.CHANNEL_IN_MONO ? 1 : 2;
        long byteRate = 16 * mSampleRate * channels / 8;  // byteRate = sampleRate * channels * bitsPerSample / 8
        byte[] data = new byte[mBufferSize];
        try {
            in = new FileInputStream(inFilename);
            out = new FileOutputStream(outFilename);
            totalAudioLen = in.getChannel().size();
            totalDataLen = totalAudioLen + 36;

            writeWaveFileHeader(out, totalAudioLen, totalDataLen,
                    longSampleRate, channels, byteRate);
            int read;
            while ((read = in.read(data)) != -1) {
                out.write(data, 0, read);  // write only the bytes actually read
            }
            in.close();
            out.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }


    /**
     * Write the 44-byte WAV file header.
     */
    private void writeWaveFileHeader(FileOutputStream out, long totalAudioLen,
                                     long totalDataLen, long longSampleRate, int channels, long byteRate)
            throws IOException {
        byte[] header = new byte[44];
        // RIFF/WAVE header
        header[0] = 'R';
        header[1] = 'I';
        header[2] = 'F';
        header[3] = 'F';
        header[4] = (byte) (totalDataLen & 0xff);
        header[5] = (byte) ((totalDataLen >> 8) & 0xff);
        header[6] = (byte) ((totalDataLen >> 16) & 0xff);
        header[7] = (byte) ((totalDataLen >> 24) & 0xff);
        //WAVE
        header[8] = 'W';
        header[9] = 'A';
        header[10] = 'V';
        header[11] = 'E';
        // 'fmt ' chunk
        header[12] = 'f';
        header[13] = 'm';
        header[14] = 't';
        header[15] = ' ';
        // 4 bytes: size of 'fmt ' chunk
        header[16] = 16;
        header[17] = 0;
        header[18] = 0;
        header[19] = 0;
        // format = 1
        header[20] = 1;
        header[21] = 0;
        header[22] = (byte) channels;
        header[23] = 0;
        header[24] = (byte) (longSampleRate & 0xff);
        header[25] = (byte) ((longSampleRate >> 8) & 0xff);
        header[26] = (byte) ((longSampleRate >> 16) & 0xff);
        header[27] = (byte) ((longSampleRate >> 24) & 0xff);
        header[28] = (byte) (byteRate & 0xff);
        header[29] = (byte) ((byteRate >> 8) & 0xff);
        header[30] = (byte) ((byteRate >> 16) & 0xff);
        header[31] = (byte) ((byteRate >> 24) & 0xff);
        // block align = channels * bitsPerSample / 8
        header[32] = (byte) (channels * 16 / 8);
        header[33] = 0;
        // bits per sample
        header[34] = 16;
        header[35] = 0;
        //data
        header[36] = 'd';
        header[37] = 'a';
        header[38] = 't';
        header[39] = 'a';
        header[40] = (byte) (totalAudioLen & 0xff);
        header[41] = (byte) ((totalAudioLen >> 8) & 0xff);
        header[42] = (byte) ((totalAudioLen >> 16) & 0xff);
        header[43] = (byte) ((totalAudioLen >> 24) & 0xff);
        out.write(header, 0, 44);
    }
}
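
As a quick sanity check on the header written above, a small sketch that reads the first 44 bytes back and prints the key fields (the class name and the use of System.out are only for illustration; WAV header fields are little-endian):

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavHeaderCheck {
    // Dump the fields written by writeWaveFileHeader for a given WAV file.
    public static void dumpHeader(String wavPath) throws IOException {
        byte[] header = new byte[44];
        try (FileInputStream in = new FileInputStream(wavPath)) {
            if (in.read(header) != 44) throw new IOException("file too short for a WAV header");
        }
        ByteBuffer b = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN); // WAV is little-endian
        System.out.println("channels    = " + b.getShort(22));
        System.out.println("sample rate = " + b.getInt(24));
        System.out.println("byte rate   = " + b.getInt(28));
        System.out.println("data bytes  = " + b.getInt(40));
    }
}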

Using AudioTrack:

import android.app.Activity;
import android.media.AudioAttributes;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.os.Environment;
import android.util.Log;
import android.view.View;
import android.widget.Button;

import com.example.lll.va.R;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;

public class AudioTracker {
    private String tag = "AudioTracker";
    // Sample rate
    private final static int AUDIO_SAMPLE_RATE = 16000;
    // Channel configuration: mono (note the CHANNEL_OUT_* variant for playback)
    private final static int AUDIO_CHANNEL = AudioFormat.CHANNEL_OUT_MONO;
    // Encoding
    private final static int AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
    private AudioTrack audioTrack;
    private Activity activity;
    private int buffersize;
    private byte[] data;
    private boolean isPlay = false;
    private boolean isFirstPlay = true; // isFirstPlay combined with isPlay implements pause and resume

    public static void startTask2byAudioTrack(Activity activity) {
        final AudioTracker audioTracker = new AudioTracker();
        audioTracker.activity = activity;
        Button btnPlay = activity.findViewById(R.id.btn_play_audio);
        Button btnStop = activity.findViewById(R.id.btn_stop_audio);
        Button btnPause = activity.findViewById(R.id.btn_pause_audio);
        btnPlay.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                audioTracker.startPlay();
            }
        });
        btnStop.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                audioTracker.stopPlay();
            }
        });
        btnPause.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                audioTracker.pause();
            }
        });

    }

    // AudioAttributes replaces the stream-type concept (e.g. AudioManager.STREAM_MUSIC or
    // AudioManager.STREAM_ALARM) for describing playback behavior: attributes let an app
    // express more about its use case than a stream type can convey.


    public void initAudioTrack() {
        if (isFirstPlay) {
            buffersize = AudioTrack.getMinBufferSize(AUDIO_SAMPLE_RATE,
                    AUDIO_CHANNEL, AUDIO_ENCODING);  // minimum buffer size AudioTrack will accept
            // Use the AudioAttributes-based constructor (API 21+); the legacy constructor would
            // take a stream type such as AudioManager.STREAM_MUSIC as its first argument instead.
            audioTrack = new AudioTrack(
                    new AudioAttributes.Builder()
                            .setUsage(AudioAttributes.USAGE_MEDIA)
                            .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
                            .build(),
                    new AudioFormat.Builder()
                            .setSampleRate(AUDIO_SAMPLE_RATE)
                            .setChannelMask(AUDIO_CHANNEL)
                            .setEncoding(AUDIO_ENCODING)
                            .build(),
                    buffersize, AudioTrack.MODE_STREAM, AudioManager.AUDIO_SESSION_ID_GENERATE);
            data = new byte[buffersize];
        }
    }

    public void startPlay() {
        initAudioTrack();
        audioTrack.play();
        isPlay = true;
        // A Thread object can only be started once, so start a fresh one for every play/resume.
        new Thread(playRunnable).start();
    }

    public void pause() {
        isPlay = false;      // let the write loop exit first
        audioTrack.stop();   // stop but do not release; the input stream keeps its position
        Log.d(tag, "pause");
    }

    public void stopPlay() {
        isPlay = false;
        isFirstPlay = true;
        audioTrack.stop();
        audioTrack.release();
        Log.d(tag, "stop");
        if (is != null) {
            try {
                is.close();  // closing drops the file position, so the next play starts from the top
            } catch (IOException e) {
                e.printStackTrace();
            }
            is = null;
        }
    }

    FileInputStream is = null;
    String fileName = Environment.getExternalStorageDirectory() + "/test.wav";
    Runnable playRunnable = new Runnable() {
        @Override
        public void run() {
            File file = new File(fileName);

            try {
                if (isFirstPlay) { // on the first play (including playing again after a stop), open the stream
                    is = new FileInputStream(file);
                    isFirstPlay = false;
                }
                Log.d(tag, "is = " + is);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            }
            writeData();
        }
    };

    private void writeData() {
        if (is == null) return;
        while (isPlay) {
            try {
                int read = is.read(data);
                Log.d(tag, "read = " + read);
                if (read > 0) {
                    // If the input is a .wav file the 44-byte header is written too; at 16 kHz
                    // 16-bit mono that is under 2 ms of noise, or it can be skipped with is.skip(44).
                    audioTrack.write(data, 0, read);
                } else {
                    // End of file: stop and release so the next play starts from the beginning.
                    stopPlay();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

The layout is just a simple one with a few buttons that control start, pause and stop respectively, so it is not shown here.

Reprinted from blog.csdn.net/weixin_43752854/article/details/84672846