Introduction to the I2S protocol

Table of contents

Foreword

1. Overview

Analog audio

Digital audio

2. I2S interface

2.1 Features of I2S

2.2 The three main I2S signals

2.3 Master clock (MCLK)

2.4 Typical I2S signal timing diagrams

3. PDM interface

References


Foreword

This article summarizes the relevant concepts based on information gathered from the Internet; some of the content is quoted from the articles listed in the references at the end.

Learning objective: gain a basic understanding of the related concepts and protocols.

1. Overview

A Digital Audio Interface (DAI) is, as the name suggests, a way to transmit digital audio signals at the board level or between boards. Compared with an analog interface, a digital audio interface offers stronger noise immunity and a simpler hardware design, so DAIs are used more and more widely in audio circuit design. Figure 1 and Figure 2 compare a traditional analog audio signal chain with a digital audio signal chain.

Analog audio

In a traditional audio circuit (Figure 1), the microphone, preamplifier, analog-to-digital converter (ADC), digital-to-analog converter (DAC), output amplifier, and speaker are all connected with analog signals. As technology develops, and for performance reasons, the analog circuitry is gradually being integrated at the two ends of the signal chain, and more digital interfaces appear between the integrated circuits in between.

Digital audio

A DSP usually has a digital interface, while transducers (i.e., microphones and speakers) and amplifiers traditionally have only analog interfaces, although they are now gradually gaining digital interface functions as well. IC designers are integrating the ADC, DAC, and modulator into the transducers at the two ends of the signal chain, eliminating the need to route any analog audio signals on the PCB and reducing the number of components in the signal chain. Figure 2 shows an example of a fully digital audio interface.

2. I2S interface

   

2.1 Features of I2S

1. Supports full-duplex and half-duplex operation.

2. Supports master and slave modes.

3. Compared with PCM, I2S is better suited to stereo systems; variants of I2S also support time-division multiplexing, so multiple channels can be carried.

2.2 The three main I2S signals

I2S is a relatively simple digital interface protocol with no addressing or device-selection mechanism. On an I2S bus, only one master device can exist at any time. The master can be the transmitting device, the receiving device, or a separate controller that coordinates the transmitter and receiver.

In an I2S system, the device that provides the clocks (SCK and WS) is the master device.

Figure 3 is a block diagram of a typical I2S system. In high-end applications, the CODEC often acts as the I2S master so that it can precisely control the I2S data flow.

I2S carries the data of two channels (left/right); the left- and right-channel data are alternated under the control of the word select (WS) signal issued by the master device. Multi-channel applications can be realized by adding I2S interfaces or additional I2S devices.

In the I2S protocol, the data signal, the clock signal and the control signal are transmitted on separate lines.

The I2S protocol defines only three signal lines: the serial clock SCK (also called BCLK or SCLK), the serial data signal SD, and the left/right channel select (word select) signal WS.

(1) Serial clock (SCK)

The serial clock SCK, also called the bit clock BCLK, is the synchronization signal for the interface: in slave mode it is provided externally, and in master mode it is generated by the module itself. Different manufacturers may use different names for this clock, such as BCLK (Bit Clock) or SCL (Serial Clock).

For example, if the audio sampling frequency is 44.1 kHz, the frequency of the channel select signal (frame clock) WS must also be 44.1 kHz. If the quantization depth of each of the left/right channels is 16 bits, the I2S SCK frequency is: 44.1 kHz × 16 × 2 = 1.4112 MHz.

If 20-bit, 24-bit or 32-bit left/right channel data must be transmitted, the SCK frequency is increased accordingly; the required SCK frequency can be calculated from the formula above.
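As a quick check of the formula above, here is a minimal C sketch that computes the required SCK (bit clock) frequency from the sample rate and word length; the function name and the values printed are illustrative only.

```c
#include <stdio.h>

/* SCK (bit clock) frequency = sample rate * bits per channel * 2 channels */
static unsigned long i2s_sck_hz(unsigned long sample_rate_hz,
                                unsigned bits_per_channel)
{
    return sample_rate_hz * bits_per_channel * 2UL;
}

int main(void)
{
    /* 44.1 kHz, 16-bit stereo -> 1,411,200 Hz, as in the example above */
    printf("SCK = %lu Hz\n", i2s_sck_hz(44100, 16));
    /* 44.1 kHz, 32-bit stereo -> 2,822,400 Hz */
    printf("SCK = %lu Hz\n", i2s_sck_hz(44100, 32));
    return 0;
}
```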
 

(2) Word select (WS)

WS is also called the frame clock, or LRCLK (Left/Right Clock). The WS frequency equals the audio sampling rate. WS may change on either the rising or the falling edge of SCK, and the slave device samples WS on the rising edge of SCK. The MSB of the data is valid on the second SCK rising edge after WS changes (i.e., delayed by one SCK period), which gives the slave device enough time to store the data it has just received and get ready for the next word.

WS is the channel select signal, indicating which channel the transmitter is currently sending:

√ WS = 0: the left channel is selected
√ WS = 1: the right channel is selected

(3) Serial data (SD)

SD is the serial data line; in I2S the data is transmitted as two's-complement values, with the most significant bit (MSB) transmitted first. The MSB is sent first because the word lengths of the transmitter and the receiver may differ. If the received data is longer than the receiver's specified word length, the data is truncated: all bits after the LSB (Least Significant Bit) position of the specified word length are ignored. If the received data is shorter than the specified word length, the missing low-order bits are filled with zeros. In this way the most significant bits of the audio signal are always transmitted, which preserves the best possible listening quality.

√ Depending on whether it is an input or an output, SD on different chips may also be called SDATA, SDIN, SDOUT, DACDAT, ADCDAT, etc.
√ Data may be driven on either the rising or the falling edge of SCK, but the receiver samples on the rising edge, so the transmit timing must be chosen with this in mind.
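To make the MSB-first truncation/zero-padding rule above concrete, here is a minimal C sketch; the function name is hypothetical and not taken from any particular driver.

```c
#include <stdint.h>

/*
 * I2S transmits the MSB first. If the receiver's word length is shorter
 * than the sender's, the extra low-order bits are dropped (truncated);
 * if it is longer, the missing low-order bits are filled with zeros.
 */
static uint32_t i2s_adapt_word(uint32_t sample, unsigned tx_bits, unsigned rx_bits)
{
    if (rx_bits < tx_bits)
        return sample >> (tx_bits - rx_bits);   /* truncate: keep the MSBs   */
    else
        return sample << (rx_bits - tx_bits);   /* zero-pad the missing LSBs */
}

/* Example: a 24-bit sample 0x123456 received by a 16-bit device -> 0x1234;
 * a 16-bit sample 0x1234 received by a 24-bit device -> 0x123400.          */
```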

2.3 Master clock (MCLK)

In ADC/DAC systems with an I2S/PCM interface, the CODEC often requires the controller to provide an MCLK (master clock) in addition to SCK and WS. This requirement comes from the CODEC's internal delta-sigma (ΔΣ) architecture: such a CODEC usually has no crystal oscillator of its own to generate a working clock, so it needs an external clock to feed its internal PLL.

As shown in Figure 8 and Figure 9:


Figure 8. Wolfson WM8960 stereo audio CODEC block diagram


Figure 9. WM8960 clock block diagram
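As a rough illustration of how MCLK relates to the audio clocks, here is a small sketch. The 256×fs and 384×fs ratios are common conventions assumed for the example and are not stated in the text; the actual supported ratios must be taken from the CODEC datasheet.

```c
#include <stdint.h>

/*
 * Hypothetical helper: many CODECs accept an MCLK that is an integer
 * multiple of the sample rate (256*fs and 384*fs are common choices),
 * but the supported ratios are device specific -- check the datasheet
 * (e.g. the WM8960 clocking tree in Figure 9).
 */
static uint32_t codec_mclk_hz(uint32_t sample_rate_hz, uint32_t ratio)
{
    return sample_rate_hz * ratio;   /* e.g. 44100 * 256 = 11,289,600 Hz */
}
```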

2.4 Typical I2S signal timing diagrams

As the technology has developed, several different data formats have emerged. According to the position of the data relative to LRCLK and SCLK, they are divided into the standard I2S format (the format specified by Philips), left-justified (less common) and right-justified (the "Japanese" format, widely used). The transmitter and the receiver must use the same data format.

Standard I2S format:

 

Left-justified:

 

Right-justified:
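For reference, the comments in the following C sketch summarize how the three alignments differ; the enum names are illustrative and not tied to any particular driver API.

```c
/*
 * Data alignment of the three common formats (MSB transmitted first in all):
 *
 *  - Standard (Philips) I2S: the MSB is delayed by one SCK period after the
 *    WS transition; WS = 0 selects the left channel, WS = 1 the right.
 *  - Left-justified: the MSB is aligned with the WS transition (no delay).
 *  - Right-justified: the LSB is aligned with the end of the WS half-frame,
 *    so the MSB's start position depends on the word length.
 */
enum i2s_data_format {
    I2S_FMT_STANDARD,        /* Philips I2S, one-bit delay       */
    I2S_FMT_LEFT_JUSTIFIED,  /* MSB aligned with WS edge         */
    I2S_FMT_RIGHT_JUSTIFIED  /* LSB aligned with end of channel  */
};
```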

3. PDM interface

PDM (Pulse Density Modulation) is a modulation method that represents an analog signal with a digital one. As a way of converting analog quantities into digital quantities, PCM samples at equal intervals and represents the amplitude of each sample as an N-bit value (N = quantization depth), so each PCM sample is a word of N bits. PDM instead modulates the analog signal at a clock rate much higher than the PCM sampling rate and outputs only 1 bit per clock, either 0 or 1; digital audio represented this way is therefore also called oversampled 1-bit audio. Compared with the stream of 0s and 1s produced by PDM, the PCM quantization result is more intuitive and simpler to interpret.

At the receiving end of an application that uses PDM for analog-to-digital conversion, a decimation filter is required to convert the density information carried by the densely packed 0s and 1s into amplitude values, whereas PCM already delivers the amplitude directly as numeric values. Figure 17 shows a sine wave digitized with PDM.

Figure 17. Sine wave represented by PDM
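To illustrate the idea, here is a minimal first-order sigma-delta modulator in C that turns a sine wave into a 1-bit PDM stream; it is a sketch of the principle only, not how any particular microphone implements it.

```c
#include <math.h>
#include <stdio.h>

#define PI 3.14159265358979323846

/* First-order sigma-delta modulation of a sine wave into a 1-bit PDM stream.
 * The density of 1s tracks the instantaneous amplitude of the input.        */
int main(void)
{
    double integrator = 0.0;
    double feedback   = 0.0;

    for (int n = 0; n < 64; n++) {
        double x = sin(2.0 * PI * n / 64.0);     /* input in [-1, +1]    */
        integrator += x - feedback;              /* accumulate the error */
        int bit = (integrator >= 0.0) ? 1 : 0;   /* 1-bit quantizer      */
        feedback = bit ? 1.0 : -1.0;             /* feedback of the bit  */
        putchar(bit ? '1' : '0');
    }
    putchar('\n');
    return 0;
}
```

Counting the 1s over each block of output bits recovers a multi-bit amplitude value, which is essentially what the decimation filter at the receiving end does.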

The logic of the PCM method is simpler, but it needs three signal lines: the data (bit) clock, the sampling (frame) clock and the data line. The logic of the PDM method is more involved, but it needs only two signal lines: clock and data. PDM therefore has broad application prospects where space is tight, such as mobile phones and tablets. In the field of digital microphones, the PDM interface is the most widely used, followed by the I2S interface. Audio signals in PDM format can even be routed near circuits with strong noise interference, such as LCD screens.

With the PDM interface, only two signal lines are needed to transmit two-channel data. Figure 18 shows two PDM transmitting devices connected to the same receiving device; for example, Source 1 and Source 2 serve as the left- and right-channel microphones, so the two channels of captured data can be delivered to the receiver. The master device (the receiver in this example) provides the clock for the two slave devices and selects Source 1 or Source 2 as the data input on the rising and falling edges of the clock, respectively. Figure 19 shows the PDM interface timing requirements of Maxim's Class-D power amplifier MAX98358: it samples the left-channel data on the rising edge of PDM_CLK and the right-channel data on the falling edge of PDM_CLK.


Figure 18. Schematic diagram of PDM connection (2 sending devices + 1 receiving device)


Figure 19. PDM interface timing diagram
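Assuming the receiver has already captured alternate bits on the rising and falling clock edges into one interleaved buffer (a hypothetical layout, not specific to the MAX98358), splitting the stream back into left and right channels is straightforward:

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Illustrative demultiplexing of a stereo PDM stream: two microphones share
 * one data line, one driving it around the rising clock edge and the other
 * around the falling edge, so the captured bits simply alternate left/right.
 */
static void pdm_stereo_demux(const uint8_t *bits, size_t nbits,
                             uint8_t *left, uint8_t *right)
{
    for (size_t i = 0; i + 1 < nbits; i += 2) {
        left[i / 2]  = bits[i];     /* bit captured on the rising edge  */
        right[i / 2] = bits[i + 1]; /* bit captured on the falling edge */
    }
}
```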

A PDM-based architecture differs from I2S and TDM in that the decimation filter sits not in the transmitting device but inside the receiving device. The source outputs the raw, oversampled modulated data, such as the output of a sigma-delta modulator, rather than decimated data as in I2S (a digital microphone with an I2S output includes the decimation filter, so its output is already at a standard audio sample rate that is easy to interface to and process). A PDM-based design therefore reduces the complexity of the transmitting device, and because the receiving CODEC integrates the decimation filter, the overall system complexity drops considerably. For digital microphones, more efficient decimation filters can be achieved because CODECs and processors are fabricated in finer silicon processes than traditional microphones.
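As a minimal sketch of what such a receiver-side decimation filter does, the function below is only a boxcar average; real designs use CIC/FIR stages, and the names here are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/*
 * Very simple decimation: count the 1s in each block of 'factor' PDM bits
 * and map that density to one signed PCM-style amplitude sample.
 */
static void pdm_decimate(const uint8_t *pdm_bits, size_t nbits,
                         unsigned factor, int16_t *pcm_out)
{
    for (size_t i = 0; i + factor <= nbits; i += factor) {
        unsigned ones = 0;
        for (unsigned k = 0; k < factor; k++)
            ones += pdm_bits[i + k];              /* each element is 0 or 1 */
        /* Map the 1s count [0, factor] onto a signed 16-bit amplitude. */
        pcm_out[i / factor] = (int16_t)(((int)(2 * ones) - (int)factor)
                                        * 32767 / (int)factor);
    }
}
```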

References

1. Author's blog posts:
[Audio] I2S protocol timing and a rough explanation
[Essence post] Detailed explanation of digital audio interfaces: I2S, PCM/TDM and PDM interfaces
Digital audio interfaces (I2S, PCM/TDM, PDM)

Origin: blog.csdn.net/qq_22168673/article/details/128288954