"Linux Driver: Building Audio Device Drivers with the OSS Audio Device Driver Framework"

I. Introduction

OSS (Open Sound System) is a unified audio interface for Unix platforms. ALSA (Advanced Linux Sound Architecture) is the Linux kernel component that provides sound card drivers and has since replaced OSS. We analyze the simpler OSS first and study ALSA later: this article walks through the basic OSS framework and then summarizes, with an example, the general steps for implementing an audio device driver under OSS.

II. The framework

The sound/sound_core.c file in the kernel source tree is the OSS core layer. Upward, it provides a unified entry point for applications to access audio devices; downward, it provides registration interfaces for different audio devices. An audio driver is built on the bus-device-driver model for the specific audio codec and the SoC hardware resources; in its probe function the driver calls the registration interfaces provided by OSS, after which the audio devices are managed by OSS. The OSS architecture follows the file-system access model: sound operations are performed by calling open, read, and write on the device node files.

III. OSS implementation

3.1 OSS initialization

Enable sound support in the kernel configuration so that sound_core.c is compiled into the kernel; it is then loaded during kernel initialization, and init_soundcore is called.
make menuconfig
-> Device Drivers -> Sound
-> <*> Sound card support

// linux-2.6.22.6/include/linux/major.h
#define SOUND_MAJOR		14
// linux-2.6.22.6/sound/sound_core.c
static int __init init_soundcore(void)
{
	/* register a character device with the kernel */
	if (register_chrdev(SOUND_MAJOR, "sound", &soundcore_fops)==-1) {
		printk(KERN_ERR "soundcore: sound device already in use.\n");
		return -EBUSY;
	}
	/* create a device class: /sys/class/sound */
	sound_class = class_create(THIS_MODULE, "sound");
	if (IS_ERR(sound_class))
		return PTR_ERR(sound_class);

	return 0;
}

3.2 Register audio device with OSS

The OSS standard defines two basic audio devices: the mixer and the DSP (digital signal processor). The mixer combines or superimposes multiple signals; it is used to adjust the volume and select the sound source, and its exact capabilities vary between sound cards. The /dev/mixer device node, created when the driver registers with the OSS core layer, is the application's software interface to the mixer. The DSP device, also called the codec, implements recording and playback; its device node is /dev/dsp. Writing data to this node drives the sound card's D/A converter for playback, and reading from it drives the A/D converter for recording.
Besides the mixer and DSP there are several other audio device types, and OSS provides a registration interface for each of them.

int register_sound_mixer(const struct file_operations *fops, int dev);
int register_sound_midi(const struct file_operations *fops, int dev);
int register_sound_dsp(const struct file_operations *fops, int dev);
int register_sound_special(const struct file_operations *fops, int unit);

All of these interfaces ultimately call the sound_insert_unit function, which links the audio device into the list OSS keeps for that device type and creates the corresponding device node. The fops parameter is the device's set of file operations; dev is usually passed as -1, which lets OSS assign the minor device number automatically.

3.3 OSS manages audio devices

OSS manages one linked list per audio device type through an array of list-head pointers. When a driver registers an audio device, OSS inserts a sound_unit structure into the corresponding list.

/*
 *	Allocations
 *
 *	0	*16		Mixers
 *	1	*8		Sequencers
 *	2	*16		Midi
 *	3	*16		DSP
 *	4	*16		SunDSP
 *	5	*16		DSP16
 *	6	--		sndstat (obsolete)
 *	7	*16		unused
 *	8	--		alternate sequencer (see above)
 *	9	*16		raw synthesizer access
 *	10	*16		unused
 *	11	*16		unused
 *	12	*16		unused
 *	13	*16		unused
 *	14	*16		unused
 *	15	*16		unused
 */
static struct sound_unit *chains[SOUND_STEP];

For example, registering a DSP audio device:

register_sound_dsp(&smdk2410_audio_fops, -1) ->
    sound_insert_unit(&chains[3], fops, dev, 3, 131, "dsp", S_IWUSR | S_IRUSR, NULL) ->
        __sound_insert_unit(s, list, fops, index, low, top) ->
            s->unit_minor=n;
            s->unit_fops=fops;
            s->next=*list;
            *list=s;
        /* create the audio device with major 14 and minor s->unit_minor,
         * and create its device node file */
        device_create(sound_class, dev, MKDEV(SOUND_MAJOR, s->unit_minor),
                      s->name+6);
            

As mentioned above, when OSS initializes it registers a character device whose file operations are soundcore_fops. This provides only an open interface, used purely as a dispatcher: when an application opens an audio device node, soundcore_open takes the node's minor number (the s->unit_minor assigned at creation), finds the list for that device type in the chains array, and hands the file over to that device's own file operations.

static const struct file_operations soundcore_fops=
{
	/* We must have an owner or the module locking fails */
	.owner	= THIS_MODULE,
	.open	= soundcore_open,
};

int soundcore_open(struct inode *inode, struct file *file)
{
	int chain;
	int unit = iminor(inode);
	struct sound_unit *s;
	const struct file_operations *new_fops = NULL;

	chain=unit&0x0F;
	if(chain==4 || chain==5)	/* dsp/audio/dsp16 */
	{
		unit&=0xF0;
		unit|=3;
		chain=3;
	}

	spin_lock(&sound_loader_lock);
	s = __look_for_unit(chain, unit);
	if (s)
		new_fops = fops_get(s->unit_fops);
	if (!new_fops) {
		spin_unlock(&sound_loader_lock);
		request_module("sound-slot-%i", unit>>4);
		request_module("sound-service-%i-%i", unit>>4, chain);
		spin_lock(&sound_loader_lock);
		s = __look_for_unit(chain, unit);
		if (s)
			new_fops = fops_get(s->unit_fops);
	}
	if (new_fops) {
		int err = 0;
		const struct file_operations *old_fops = file->f_op;
		file->f_op = new_fops;
		spin_unlock(&sound_loader_lock);
		if(file->f_op->open)
			err = file->f_op->open(inode,file);
		if (err) {
			fops_put(file->f_op);
			file->f_op = fops_get(old_fops);
		}
		fops_put(old_fops);
		return err;
	}
	spin_unlock(&sound_loader_lock);
	return -ENODEV;
}


IV. Basic audio concepts

4.1 Sampling frequency

The sampling frequency is the rate at which the sound waveform is sampled, i.e. how many samples of the sound are recorded per second. The higher the frequency, the more faithful the sound when the recorded data is played back. For the human ear the rate does not need to be extreme: common sampling rates range from 8 kHz to 96 kHz, and at 96 kHz the sound is already very full.

4.2 Sampling accuracy

Sampling precision is the number of bits used to represent one sound sample. With 8-bit precision, each sample is represented by 8 bits, i.e. one byte; with 16-bit precision, each sample is represented by 16 bits, i.e. two bytes.

4.3 Left/Right Channel

In stereo sound the left and right channels carry different data, which makes the played-back sound more three-dimensional.

4.4 IIS interface

The IIS (I2S) interface transmits sound data over four lines: I2SSCLK, the data bit clock; I2SLRCK, the left/right channel select (sample) clock; I2SSDI, sound data input; and I2SSDO, sound data output.

4.5 Sound recording and playback

Recording: the microphone signal passes through the codec's A/D converter and is sent over the IIS interface into memory. Playback is the reverse path: data in memory is sent over IIS to the codec's D/A converter and out to the headphone or speaker.

4.6 Control interface

The IIS interface transmits the audio data, while the control interface transmits control commands from the host to the codec chip, such as setting the volume, switching the output channel, and setting the MIC gain. The host typically bit-bangs GPIO pins to read and write the codec's registers. Different codec chips use different control transports; the WM8976G, for example, offers two modes: a three-wire mode and an IIC (I2C) mode.

V. Implementing an audio device driver for the WM8976G

5.1 Hardware circuit

5.1.1 WM8976G related

L2/GPIO2: mono audio input, second microphone input, or general-purpose GPIO
LRC: left/right channel select (sample) clock
BCLK: sound data bit clock
ADCDAT: IIS data out of the codec (ADC/recording data, to the host's I2SSDI)
DACDAT: IIS data into the codec (DAC/playback data, from the host's I2SSDO)
MCLK: WM8976 master clock, provided by the host
MICBIAS: microphone bias voltage; adjusting it can reduce microphone recording distortion
LIP: microphone input (positive)
LIN: microphone input (negative/ground)
AUXL: auxiliary audio input, left channel (external audio)
AUXR: auxiliary audio input, right channel (external audio)
CSB/GPIO1: chip-select pin when the control interface uses 3-wire mode, or general-purpose GPIO
SCLK: control clock in 3-wire or IIC mode
SDIN: control data in 3-wire or IIC mode
MODE: selects the control interface mode; a high level selects 3-wire mode
VMID: reference (mid-rail) voltage; adjusting it can reduce noise during playback
ROUT1: audio output channel 1, right channel; can drive external headphones or a speaker
LOUT1: audio output channel 1, left channel; can drive external headphones or a speaker
ROUT2: audio output channel 2, right channel; can drive external headphones or a speaker
LOUT2: audio output channel 2, left channel; can drive external headphones or a speaker
OUT3: audio output channel 3
OUT4: audio output channel 4

5.1.2 S3C2440 related


5.2 Build the driver

The audio device driver is built on the bus-device-driver model: the driver's probe function initializes the hardware resources and then registers the DSP and mixer audio devices with the OSS core layer.

5.2.1 Register platform device

When the system boots, the s3c_device_iis platform device is registered. The loading and registration process has been analyzed many times in previous articles, so it is not repeated here.

// linux-2.6.22.6/arch/arm/plat-s3c24xx/devs.c
static struct resource s3c_iis_resource[] = {
	[0] = {
		.start = S3C24XX_PA_IIS,
		.end   = S3C24XX_PA_IIS + S3C24XX_SZ_IIS -1,
		.flags = IORESOURCE_MEM,
	}
};

static u64 s3c_device_iis_dmamask = 0xffffffffUL;

struct platform_device s3c_device_iis = {
	.name		  = "s3c2410-iis",
	.id		  = -1,
	.num_resources	  = ARRAY_SIZE(s3c_iis_resource),
	.resource	  = s3c_iis_resource,
	.dev              = {
		.dma_mask = &s3c_device_iis_dmamask,
		.coherent_dma_mask = 0xffffffffUL
	}
};

// linux-2.6.22.6/arch/arm/mach-s3c2440/mach-smdk2440.c
static struct platform_device *smdk2440_devices[] __initdata = {
	&s3c_device_usb,
	&s3c_device_lcd,
	&s3c_device_wdt,
	&s3c_device_i2c,
	&s3c_device_iis,
	&s3c2440_device_sdi,
};

static void __init smdk2440_machine_init(void)
{
	s3c24xx_fb_set_platdata(&smdk2440_lcd_cfg);

	platform_add_devices(smdk2440_devices, ARRAY_SIZE(smdk2440_devices));
	smdk_machine_init();
}

MACHINE_START(S3C2440, "SMDK2440")
	/* Maintainer: Ben Dooks <[email protected]> */
	.phys_io	= S3C2410_PA_UART,
	.io_pg_offst	= (((u32)S3C24XX_VA_UART) >> 18) & 0xfffc,
	.boot_params	= S3C2410_SDRAM_PA + 0x100,

	.init_irq	= s3c24xx_init_irq,
	.map_io		= smdk2440_map_io,
	.init_machine	= smdk2440_machine_init,
	.timer		= &s3c24xx_timer,
MACHINE_END

5.2.2 Register platform driver

// linux-2.6.22.6/sound/soc/s3c24xx/s3c2440-wm8976.c
extern struct bus_type platform_bus_type;
static struct device_driver s3c2410iis_driver = {
	.name = "s3c2410-iis",
	.bus = &platform_bus_type,
	.probe = s3c2410iis_probe,
	.remove = s3c2410iis_remove,
};

static int __init s3c2410_uda1341_init(void) {
	memzero(&input_stream, sizeof(audio_stream_t));
	memzero(&output_stream, sizeof(audio_stream_t));
	return driver_register(&s3c2410iis_driver);
}

5.2.3 Probe function analysis

The device/driver matching of the bus-device-driver model, and the call of the driver's probe function after a successful match, have also been analyzed in previous articles, so we go straight to the driver's probe function, s3c2410iis_probe.

5.2.3.1 Initialize hardware resources

......
iis_base = (void *)S3C24XX_VA_IIS ;

// enable the IIS controller clock
iis_clock = clk_get(dev, "iis");
clk_enable(iis_clock);

// configure the related pin functions
/* GPB 4: L3CLOCK, OUTPUT */
s3c2410_gpio_cfgpin(S3C2410_GPB4, S3C2410_GPB4_OUTP);
s3c2410_gpio_pullup(S3C2410_GPB4,1);
/* GPB 3: L3DATA, OUTPUT */
s3c2410_gpio_cfgpin(S3C2410_GPB3,S3C2410_GPB3_OUTP);
/* GPB 2: L3MODE, OUTPUT */
s3c2410_gpio_cfgpin(S3C2410_GPB2,S3C2410_GPB2_OUTP);
s3c2410_gpio_pullup(S3C2410_GPB2,1);
/* GPE 3: I2SSDI */
s3c2410_gpio_cfgpin(S3C2410_GPE3,S3C2410_GPE3_I2SSDI);
s3c2410_gpio_pullup(S3C2410_GPE3,0);
/* GPE 0: I2SLRCK */
s3c2410_gpio_cfgpin(S3C2410_GPE0,S3C2410_GPE0_I2SLRCK);
s3c2410_gpio_pullup(S3C2410_GPE0,0);
/* GPE 1: I2SSCLK */
s3c2410_gpio_cfgpin(S3C2410_GPE1,S3C2410_GPE1_I2SSCLK);
s3c2410_gpio_pullup(S3C2410_GPE1,0);
/* GPE 2: CDCLK */
s3c2410_gpio_cfgpin(S3C2410_GPE2,S3C2410_GPE2_CDCLK);
s3c2410_gpio_pullup(S3C2410_GPE2,0);
/* GPE 4: I2SSDO */
s3c2410_gpio_cfgpin(S3C2410_GPE4,S3C2410_GPE4_I2SSDO);
s3c2410_gpio_pullup(S3C2410_GPE4,0);

// initialize the IIS controller
init_s3c2410_iis_bus();

......

5.2.3.2 Initialize and set up the audio codec chip

.......
init_wm8976();
.......

static void init_wm8976(void)
{
	uda1341_volume = 57;
	uda1341_boost = 0;

	/* software reset */
	wm8976_write_reg(0, 0);

	/* enable the left/right channels of OUT2,
	 * enable the left/right output mixers,
	 * enable the left/right DACs
	 */
	wm8976_write_reg(0x3, 0x6f);

	wm8976_write_reg(0x1, 0x1f);//biasen,BUFIOEN.VMIDSEL=11b
	wm8976_write_reg(0x2, 0x185);//ROUT1EN LOUT1EN, input PGA enable, ADC enable

	wm8976_write_reg(0x6, 0x0);//SYSCLK=MCLK
	wm8976_write_reg(0x4, 0x10);//16bit
	wm8976_write_reg(0x2B,0x10);//BTL OUTPUT
	wm8976_write_reg(0x9, 0x50);//Jack detect enable
	wm8976_write_reg(0xD, 0x21);//Jack detect
	wm8976_write_reg(0x7, 0x01);//Jack detect
}

5.2.3.3 Initialize DMA channels for audio input and output

.......
output_stream.dma_ch = DMACH_I2S_OUT;
if (audio_init_dma(&output_stream, "UDA1341 out")) {
    audio_clear_dma(&output_stream,&s3c2410iis_dma_out);
    printk( KERN_WARNING AUDIO_NAME_VERBOSE
            ": unable to get DMA channels\n" );
    return -EBUSY;
}

input_stream.dma_ch = DMACH_I2S_IN;
if (audio_init_dma(&input_stream, "UDA1341 in")) {
    audio_clear_dma(&input_stream,&s3c2410iis_dma_in);
    printk( KERN_WARNING AUDIO_NAME_VERBOSE
            ": unable to get DMA channels\n" );
    return -EBUSY;
}
.......

5.2.3.4 Register audio device with OSS core layer

Once registered, the audio devices are managed and dispatched by the OSS core layer, exactly as analyzed in section III.

......
audio_dev_dsp = register_sound_dsp(&smdk2410_audio_fops, -1);
audio_dev_mixer = register_sound_mixer(&smdk2410_mixer_fops, -1);
......

VI. Application playback and recording

to be determined


Origin blog.csdn.net/qq_40709487/article/details/127594053