Multimedia Technology Chapter 2 Digital Audio (1)

Table of contents

2.1 Characteristics, types and processing of sound

Sound characteristics

Types of sound

Sound processing

2.2 Audio digitization

Why digitize sound?

Basic process of digitization

Common sound file formats

2.3 Electronic musical instrument digital interface

Introduction to MIDI

MIDI features


| Sound | Traditional carrier | Multimedia technology |
| --- | --- | --- |
| Audible sound | Records | Digital recording |
| Voice | Letters | Speech generation and recognition |
| Music | Sheet music | MIDI |
| Various audible sounds in nature | Recording equipment | Waveform audio |

2.1 Characteristics, types and processing of sounds

Sound characteristics

In nature, sound travels through the air as pressure variations. The eardrum detects these pressure changes and converts them into electrical signals that stimulate the auditory nerve, which the brain perceives as sound.

Auditory elements of sound: pitch, intensity, timbre

Physical elements of sound waves: period, frequency, bandwidth

Range of sound frequencies humans can hear: 20 Hz to 20 kHz

Sound characteristics: continuity, directionality, temporality (strong correlation between earlier and later segments)

Types of sound

| Sound type | Bandwidth |
| --- | --- |
| Infrasound | 0 – 20 Hz |
| Telephone speech | 200 Hz – 3.4 kHz |
| AM radio | 50 Hz – 15 kHz |
| FM radio | — |
| Ultrasound | above 20 kHz |

Natural sounds are analog signals, while computers store audio digitally. The conversions between the two are called analog-to-digital conversion (A/D conversion) and digital-to-analog conversion (D/A conversion).

Sound processing

Recording, playback, compression, transmission, editing, etc.

2.2 Audio digitization

  • Waveform audio: digital information obtained by sampling the sound waveform; typical file format is .WAV

  • Symbolic audio: MIDI is the typical representative — sound synthesized by an electronic synthesizer (e.g. an electronic keyboard); typical file format is .MID

Why digitize sound?
  • Computers can only process discrete digital quantities

  • Strong anti-interference and error-correctable

  • Good playback performance

  • Easy to handle

  • Can be compressed

Basic process of digitalization
  1. Sampling: measure the analog signal at discrete time instants

  2. Quantization: approximate each sampled value with a numeric level

  3. Encoding: store the quantized values in an appropriate form

How to reduce loss? Increase the sampling frequency or increase the quantization precision.

Nyquist sampling theorem: to reconstruct the waveform without loss, the sampling frequency must be at least twice the highest frequency present in the waveform.

The most commonly used sampling frequency: 44.1 kHz

Common quantization precision: 8 bits, 16 bits
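The steps above can be sketched in a few lines of Python. This is a minimal illustration, not production audio code; the function name and parameter values are invented for this example:

```python
import math

def sample_and_quantize(freq_hz, sample_rate_hz, bits, duration_s):
    """Sample a pure sine tone and quantize each sample to `bits` bits."""
    levels = 2 ** bits                       # number of quantization levels
    n_samples = int(sample_rate_hz * duration_s)
    codes = []
    for n in range(n_samples):
        t = n / sample_rate_hz               # sampling: discrete time instants
        x = math.sin(2 * math.pi * freq_hz * t)  # "analog" value in [-1, 1]
        # quantization: round to the nearest of `levels` uniform steps
        codes.append(round((x + 1) / 2 * (levels - 1)))
    return codes

# 1 ms of a 1 kHz tone sampled at 44.1 kHz with 8-bit precision
codes = sample_and_quantize(1000, 44100, 8, 0.001)
```

Encoding, the third step, would then pack these integer codes into a container such as a WAV file.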

Since the amplitude distribution of sound signals is uneven, sound can be quantized non-uniformly: the quantization intervals are made smaller where signal values occur with high probability and larger where they occur with low probability.
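A well-known instance of such non-uniform quantization is μ-law companding, used in the ITU-T G.711 telephone codec: the signal passes through a logarithmic curve before uniform quantization, which effectively gives the frequent small amplitudes finer quantization steps. A minimal sketch:

```python
import math

MU = 255  # mu-law parameter used in G.711

def mu_law_compress(x):
    """Compress x in [-1, 1]; small amplitudes get finer resolution."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of mu_law_compress: recover the original amplitude."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

Compressing, uniformly quantizing, then expanding yields smaller effective steps near zero, exactly the behavior described above.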

WAV file size (MB) = \frac{\text{sampling frequency} \times \text{quantization bits} \times \text{number of channels} \times \text{duration (s)}}{8 \times 1024 \times 1024}
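The size formula can be checked with a short calculation (the helper name is illustrative):

```python
def wav_size_mb(sample_rate_hz, bits, channels, duration_s):
    """Uncompressed PCM size in MB: rate * bits * channels * time / (8 * 1024^2)."""
    return sample_rate_hz * bits * channels * duration_s / (8 * 1024 * 1024)

# one minute of CD-quality stereo: 44.1 kHz, 16 bits, 2 channels -> about 10 MB
size = wav_size_mb(44100, 16, 2, 60)
```

This is why even a few minutes of uncompressed WAV audio is large, motivating the compressed formats listed next.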

Common sound file formats
  1. WAV

  2. MP3 (MPEG-1 Audio Layer 3): compression ratios of roughly 10:1 to 12:1 with little audible playback distortion

  3. WMA (Windows Media Audio): an encoded file format with a high compression ratio; supports streaming, i.e. playing while transmitting

  4. APE: lossless compression format

  5. AAC: technology developed under the MPEG-2 specification

2.3 Electronic musical instrument digital interface

Introduction to MIDI

MIDI (Musical Instrument Digital Interface) is a standard protocol for exchanging musical information among music synthesizers, musical instruments and computers. A MIDI file is not a sampled sound signal but a sequence of instructions — in effect, a musical score that a computer can read.

How are these commands turned into music?

  • One method is FM synthesis, i.e. frequency modulation synthesis: waveforms with different spectral distributions are produced by varying the modulating-wave frequency and the modulation index — a basic wave combined with parameters.

    • Sound envelope generator: the four ADSR parameters (attack, decay, sustain, release)

    • Low cost

    • Limited range of waveforms and low simulation quality

  • The other is musical-tone sample synthesis, also known as wavetable synthesis: short digital recordings (samples) of real instruments are stored and played back on demand, giving more realistic timbres.
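The FM synthesis described in the first bullet amounts to a carrier sine wave whose phase is modulated by a second sine wave: sin(2πf_c·t + I·sin(2πf_m·t)), where I is the modulation index. A minimal sketch (the frequencies and index are arbitrary example values):

```python
import math

def fm_sample(t, carrier_hz, modulator_hz, index):
    """One sample of an FM-synthesized waveform:
    sin(2*pi*fc*t + I * sin(2*pi*fm*t))."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + index * math.sin(2 * math.pi * modulator_hz * t))

# 1 ms of a 440 Hz carrier modulated at 220 Hz with index 2, at 44.1 kHz
wave = [fm_sample(n / 44100, 440, 220, 2) for n in range(44)]
```

Changing `modulator_hz` and `index` reshapes the spectrum, which is how one oscillator pair imitates different instruments.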

A MIDI file is divided into 16 logical channels; each channel can be assigned a different instrument voice of the synthesizer, so one device can play several parts at once.
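Because MIDI is a stream of instructions rather than samples, a single event is only a few bytes. For example, the standard 3-byte Note On message consists of a status byte (0x90 ORed with the 4-bit channel number) followed by the note number and velocity:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.
    Status byte: 0x90 ORed with the 4-bit channel (0-15)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# middle C (note 60) on channel 0 at velocity 100
msg = note_on(0, 60, 100)
```

Three bytes per note event, versus thousands of samples per second for waveform audio, is the root of the "small amount of data" feature listed below.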

MIDI features
  • Small amount of data

  • Can be played simultaneously with waveform sound

  • Easy to modify

  • Music quality is hardware related

  • Cannot be used to describe sounds in nature such as speech

Origin blog.csdn.net/weixin_61720360/article/details/132975642