[Unity3D] Unity audio, recording, and real-time microphone playback

The relationship between Unity's AudioSource, Microphone, and AudioClip.

Take as an example a sound 7 seconds long. The actual data of a sound is essentially a list of sample points, and the number of sample points per second is the sampling frequency. In this example the sampling frequency is 10; in practice it is normally 44100, so set it according to your needs. When an AudioSource plays a sound, setting its timeSamples means playback starts from that sample index of the clip, so the way to offset the playback position is to set either timeSamples or time. timeSamples does not stay fixed during playback: as time passes it advances through the index of each corresponding sample point.
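For example (a minimal sketch, assuming an AudioSource field named aud that already has a clip assigned), jumping 2 seconds into the clip can be done either by sample index or by time:

// Jump 2 seconds into the clip before playing.
// timeSamples is a sample index, so multiply seconds by the clip's sampling frequency.
aud.timeSamples = 2 * aud.clip.frequency;
// Equivalent: aud.time = 2f;
aud.Play();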

Next is setting up the microphone. The principle is to first define a clip; once recording starts, the recorded sample values are continuously written into the corresponding sample points of that clip, in the same way an AudioSource's timeSamples advances. So if you want to play the recording back in real time, KTV-style, then while the AudioSource is playing, its timeSamples should equal the current recording sample position, or trail it by a small delay (as shown in the sketch below). That is what I did, though at first I did not know why the noise was so severe; also note that timeSamples must not be greater than the current recording sample position, otherwise there is no sound, for obvious reasons. After discussing it with my boss today, I finally understood the cause of the noise: the sound being played back was itself being picked up by the microphone in real time, producing an echo-like effect, so it is best to wear headphones while recording.
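A minimal sketch of the "small delay" idea, assuming the aud and device fields from the script below; the 1024-sample delay is an arbitrary illustration, not a value from the original post:

// Sketch: start real-time monitoring, trailing the live recording by a small delay.
void StartMonitoring()
{
    AudioClip micClip = Microphone.Start(device, true, 10, 44100);
    int delaySamples = 1024; // assumed buffer; playback must never run ahead of the recording position

    aud.clip = micClip;
    aud.loop = true;

    // Simple blocking wait until enough samples have been recorded (fine for a sketch).
    while (Microphone.GetPosition(device) < delaySamples) { }

    aud.Play();
    aud.timeSamples = Microphone.GetPosition(device) - delaySamples;
}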

Below is the code for real-time playback.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class MicroPhoneTest : MonoBehaviour
{
    public AudioSource aud;

    bool isHaveMicroPhone;
    string device;
    public Text text;

    //Debug Text
    public Text clipLength;      // length of the audio clip
    public Text devicePosition;  // current recording position of the device
    public Text audioTime;       // current playback time
    public Text audioSampleTime; // current playback sample index (timeSamples)

    
    // Start is called before the first frame update
    void Start()
    {
        aud = GetComponent<AudioSource>();
        string[] devices = Microphone.devices;

        if (devices.Length > 0)
        {
            isHaveMicroPhone = true;
            device = devices[0];
            text.text = devices[0];
        }
        else
        {
            isHaveMicroPhone = false;
            text.text = "No microphone detected";
        }
    }

    // "Start recording" button
    public void OnclickButton()
    {
        if (!isHaveMicroPhone) return;

        aud.clip = Microphone.Start(device, true, 10, 10000);
        //aud.Play();
        //aud.timeSamples = Microphone.GetPosition(device);
        //aud.timeSamples = 0;
        Debug.Log("Recording started");
    }

    // "Start playback" button
    public void OnPlay()
    {
        aud.Play();
        aud.timeSamples = Microphone.GetPosition(device); // setting this keeps playback almost in sync with the recording

        int min;
        int max;
        Microphone.GetDeviceCaps(device, out min, out max);
        //aud.timeSamples = 0;
        Debug.Log("Playback started " + min + " " + max);
    }




    private void Update()
    {
        //clipLength.text = "     clipLength:" + aud.clip.length;
        //devicePosition.text = " devicePosition:" + Microphone.GetPosition(device);
        //audioTime.text = "      audioTime:" + aud.time;
        //audioSampleTime.text = "audioSampleTime:" + aud.timeSamples;

        //Debug.Log("     clipLength:" + aud.clip.length);
        //Debug.Log(" devicePosition:" + Microphone.GetPosition(device));
        //Debug.Log("      audioTime:" + aud.time);
        //Debug.Log("audioSampleTime:" + aud.timeSamples);

        //aud.timeSamples = Microphone.GetPosition(device);
    }
}

Note that during real-time playback, the difference between the playback sample position and the recording sample position may grow larger and larger over time, so it may need to be corrected at regular intervals in the Update function.
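One possible way to handle that drift (a sketch only; the 4096 and 1024 sample thresholds are arbitrary assumptions) is to compare the two positions in Update and snap playback back when they diverge too far:

    private void Update()
    {
        if (!Microphone.IsRecording(device) || !aud.isPlaying) return;

        int micPos = Microphone.GetPosition(device);
        int drift = micPos - aud.timeSamples;

        // If playback has fallen too far behind, or the looping clip has wrapped so playback
        // is now "ahead" of the write position, snap back to just behind the recording position.
        if (drift < 0 || drift > 4096)
        {
            aud.timeSamples = Mathf.Max(micPos - 1024, 0);
        }
    }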

Origin blog.csdn.net/hackdjh/article/details/123810211