Brief analysis of CAN bus data link layer & bus synchronization mechanism

The previous article covered the history, standards, and physical layer of the CAN bus. This one introduces the CAN bus data link layer and its synchronization mechanism. The article is reproduced from a Zhihu post on understanding the CAN bus.

1 CAN data link layer

In SPI communication, chip select, clock, data input, and data output each have their own signal line. CAN, by contrast, uses a single pair of differential lines that can carry only one signal at a time. Such a lean physical layer means CAN needs a more elaborate protocol: to deliver the same or even richer functionality over a single channel, data and commands are packed into frames.

1.1 Communication mechanism

1.1.1 Multi-Master

Safety-sensitive applications (such as automotive powertrains) place high demands on the reliability of the communication system. Making the operation of the whole bus depend on a single node would be dangerous. A more robust solution is to decentralize bus access so that every node is able to access the bus, which is why the CAN bus adopts a multi-master (Multi-Master) linear topology.

On the CAN bus, every node can transmit messages, transmissions do not have to follow any pre-set schedule, and communication is event-driven. The bus is busy only when new information actually needs to be transferred, so nodes gain access to the bus very quickly. The theoretical maximum data rate of the CAN bus is 1 Mbps; it responds quickly to asynchronous events and is generally adequate for millisecond-level real-time applications.

1.1.2 Addressing mechanism

Unlike many other buses, CAN does not assign addresses to nodes; instead, messages are distinguished by their identifiers. Although this adds some overhead to each message (the identifier), it lets nodes operate independently of one another without knowing each other's state. When a node is added to the bus, only the message types matter, not the state of the other nodes on the system, which makes extending the bus very flexible.

1.1.3 Bus access CSMA/CD+AMP

The CAN bus access scheme can be summarized as carrier-sense multiple access with collision detection and non-destructive arbitration on message priority (CSMA/CD+AMP). CSMA (Carrier Sense Multiple Access) means that every node must wait until the bus is idle before transmitting; CD+AMP (Collision Detection + Arbitration on Message Priority) means that when several nodes transmit at the same time, the message with the highest priority wins the bus.

  • Carrier sensing: all nodes attach to the same bus in a multi-point fashion and transmissions are broadcast. Before sending, each node checks whether the bus is busy: if data is already being transferred, it holds off and waits for the bus to become idle; if the bus is free, it transmits its prepared data immediately.
  • Collision detection: while transmitting, a node continuously compares the bits it sends with the bus level to detect collisions with other transmitters. If a collision occurs, the higher-priority message is guaranteed to be sent first.
  • Non-destructive arbitration: arbitration is performed on the ID; the smaller the ID value, the higher the message priority.

Bus access
After the node sending the low-priority message loses arbitration, it automatically retransmits its message the next time the bus becomes idle.

Automatic retransmission when the bus is idle
High-priority messages cannot interrupt the transmission of a low-priority message that is already in progress. A minimal arbitration sketch follows.
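
As an illustration of the non-destructive, bitwise arbitration described above, here is a minimal C sketch; the node IDs, node count, and the wired-AND bus model are this example's own assumptions, not part of any real driver:

```c
#include <stdint.h>
#include <stdio.h>

/* Minimal sketch of CSMA/CD+AMP arbitration: each node writes its 11-bit ID
 * MSB-first; the wired-AND bus carries a dominant 0 whenever any node sends 0.
 * A node that sends recessive 1 but reads back dominant 0 loses and backs off. */
static int arbitrate(const uint16_t *ids, int n_nodes)
{
    int still_in[8] = {0};
    for (int i = 0; i < n_nodes; i++) still_in[i] = 1;

    for (int bit = 10; bit >= 0; bit--) {          /* ID10 first (MSB) */
        int bus = 1;                               /* recessive by default */
        for (int i = 0; i < n_nodes; i++)
            if (still_in[i] && !((ids[i] >> bit) & 1))
                bus = 0;                           /* any dominant 0 wins the wire */
        for (int i = 0; i < n_nodes; i++)
            if (still_in[i] && ((ids[i] >> bit) & 1) && bus == 0)
                still_in[i] = 0;                   /* sent 1, saw 0: lost, retry later */
    }
    for (int i = 0; i < n_nodes; i++)
        if (still_in[i]) return i;                 /* the lowest ID remains */
    return -1;
}

int main(void)
{
    uint16_t ids[] = {0x65A, 0x123, 0x6FF};
    printf("winner: node %d\n", arbitrate(ids, 3)); /* node 1 (ID 0x123) */
    return 0;
}
```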

1.1.4 Packet acceptance filtering

Most CAN controllers can filter messages by ID, that is, receive only messages carrying certain IDs. The node filters each received message by comparing the relevant bits of the message ID with the acceptance filter (acceptance code and acceptance mask): if the selected bits match, the message is accepted; otherwise it is filtered out. A rough sketch of this filtering is given below.

Message acceptance filtering
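
The snippet below models mask-based acceptance filtering in C. Register layouts differ between controllers, so the function and parameter names (filter_id, filter_mask) are purely illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch of mask-based acceptance filtering. A mask bit of 1 means
 * "this ID bit must match the filter"; a mask bit of 0 means "don't care". */
static bool accept_frame(uint32_t rx_id, uint32_t filter_id, uint32_t filter_mask)
{
    return ((rx_id ^ filter_id) & filter_mask) == 0;
}

/* Example: accept only IDs 0x100-0x107 (the upper 8 of 11 ID bits must match).
 *   accept_frame(0x103, 0x100, 0x7F8) -> true
 *   accept_frame(0x200, 0x100, 0x7F8) -> false
 */
```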

1.2 CAN message types and structure

1.2.1 Types of messages

A transmission-start marker, an identification (addressing) marker, and control markers are prepended to the raw data, and a CRC check, an acknowledgement field, and a transmission-end marker are appended after it. Packing all of this in a specific format lets a single channel carry many kinds of signals, with the various fields working together to get the data across. When the whole packet reaches the other devices, they only need to parse it according to that format to recover the original data. A packet like this is called a CAN data frame.

To control communication more effectively, CAN defines a total of five frame types, also referred to as messages: the data frame, the remote frame, the error frame, the overload frame, and the interframe space.

Different types of CAN frames
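
For reference, the five frame types can be collected in a small C enum; the names are this sketch's own, not a standard API:

```c
/* The five CAN frame (message) types, gathered in an illustrative enum. */
typedef enum {
    CAN_FRAME_DATA,      /* carries 0-8 bytes of application data          */
    CAN_FRAME_REMOTE,    /* requests transmission of a data frame by ID    */
    CAN_FRAME_ERROR,     /* signals a detected error to all nodes          */
    CAN_FRAME_OVERLOAD,  /* asks for extra delay between frames            */
    CAN_FRAME_INTERFRAME /* interframe space separating consecutive frames */
} can_frame_type_t;
```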

1.2.2 Data Frame

The data frame is the most important and most complex frame in CAN communication. It begins with a single dominant bit (logic 0) and ends with seven consecutive recessive bits (logic 1). CAN data frames come in two variants: the standard format (Standard Format) and the extended format (Extended Format).

Standard frame and extended frame
The data frame can be divided into seven segments:

  • Start of Frame (SOF)
    Marks the beginning of a data frame and is a single fixed dominant bit.
    Start of frame
    It is used for synchronization: any recessive-to-dominant transition while the bus is idle causes the nodes to perform a hard synchronization. A node may transmit an SOF only when the bus is idle.

  • Arbitration Field
    The arbitration field mainly carries the ID of the data frame. Data frames come in standard and extended formats, which differ in the length of the ID: the standard format uses an 11-bit ID, the extended format a 29-bit ID. In the CAN protocol the ID determines the transmission priority of the data frame and also decides whether other devices will accept it.
    Arbitration field
    Besides the message ID, the arbitration field also contains the RTR, IDE, and SRR bits.

    Note: RTR (Remote Transmission Request), IDE (Identifier Extension), SRR (Substitute Remote Request).

  • Control segment
    In the control segment, r1 (reserved 1) and r0 (reserved 0) are reserved bits, transmitted as dominant by default. The most important part is the DLC (Data Length Code), a 4-bit binary code (DLC3-DLC0) that indicates how many bytes the data segment of this message contains, with legal values 0-8.
    Control segment

  • Data segment
    The core content of the data frame, with a length of 0-8 bytes, determined by DLC.
    Data segment

  • CRC segment
    To ensure correct transmission, a CAN message carries a 15-bit CRC check code. If the CRC calculated by the receiver differs from the CRC it received, an error frame is fed back to the sender and the message is retransmitted. CRC calculation and error handling are normally done by the CAN controller hardware, with software at most limiting the maximum number of retransmissions. The CRC code is followed by a CRC delimiter, a recessive bit whose main purpose is to separate the CRC code from the ACK segment that follows.
    CRC segment

  • ACK segment
    Contains an acknowledge slot (ACK slot) and a delimiter (Delimiter, DEL). The sender transmits the ACK slot as a recessive bit; a receiver that has received the message correctly overwrites it with a dominant bit. The DEL delimiter is likewise a recessive bit used for separation.
    ACK segment

  • End of Frame (EOF)
    Seven recessive bits sent by the transmitter to mark the end of the frame.
    End of frame
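
Putting the fields above together, a driver-level view of a classic CAN data frame might look like the C struct below. The field names are illustrative and omit the parts (SOF, stuff bits, CRC, ACK, EOF) that the controller hardware handles on the wire:

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative software view of one classic CAN data frame.
 * Real driver APIs and the on-wire bit layout differ in detail. */
typedef struct {
    uint32_t id;        /* 11-bit standard or 29-bit extended identifier */
    bool     extended;  /* IDE: extended (29-bit) format?                */
    bool     remote;    /* RTR: remote frame instead of data frame?      */
    uint8_t  dlc;       /* DLC: number of data bytes, 0-8                */
    uint8_t  data[8];   /* data segment, valid bytes given by dlc        */
} can_message_t;
```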

2 Synchronization

The CAN bus uses bit synchronization to ensure communication timing and correct sampling of the bus level.

2.1 Bit timing

Before talking about bit timing, first introduce a few basic concepts.

Time quantum (Tq): the smallest unit of time at which the CAN controller operates, usually derived by dividing down the system clock.
Time Quantum
Baud rate: the number of data bits transmitted per unit time (1 s), i.e. 1 / bit time. For example, with a 36 MHz system clock and a prescaler of 4, the CAN clock is 9 MHz and Tq = 1/(9 MHz). If one CAN bit contains 10 Tq, the bit period is T = 10 Tq, so the baud rate is 1/T = 0.9 Mbit/s.
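
The same arithmetic can be written out as a tiny C program; the variable names are illustrative:

```c
#include <stdio.h>

/* Reproduces the calculation in the text: a 36 MHz clock divided by a
 * prescaler of 4 gives a 9 MHz time-quantum clock; with 10 Tq per bit
 * the bit rate is 9 MHz / 10 = 900 kbit/s. */
int main(void)
{
    unsigned sysclk_hz  = 36000000u;   /* CAN peripheral input clock    */
    unsigned prescaler  = 4u;          /* clock divider -> Tq frequency */
    unsigned tq_per_bit = 10u;         /* SS + PTS + PBS1 + PBS2        */

    unsigned tq_hz   = sysclk_hz / prescaler;   /* 9 000 000 Hz          */
    unsigned bitrate = tq_hz / tq_per_bit;      /* 900 000 bit/s         */

    printf("bit rate = %u bit/s\n", bitrate);
    return 0;
}
```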

CAN bit time
To achieve bit synchronization, the CAN protocol divides the timing of each bit into the four segments shown in the figure below; together, these four segments make up one CAN data bit. A complete bit consists of 8-25 Tq.
Bit time detailed structure

  • Synchronization Segment (SS)
    The output of each bit starts with the synchronization segment. If a transition edge on the bus falls within the SS segment, the node's timing is considered synchronized with the bus; once synchronized, the level sampled at the sample point can be taken as the value of that bit. The SS segment is 1 Tq long.

  • Propagation Time Segment (PTS)
    Compensates for the physical propagation delay of the signal across the network and within the nodes; it covers twice the sum of the signal's propagation time on the bus, the input comparator delay, and the output driver delay. Usually 1-8 Tq.

  • Phase Buffer Segment 1 (PBS1)
    Mainly compensates for phase errors of the edges; its length can be lengthened during resynchronization. Initial size 1-8 Tq.

  • Phase Buffer Segment 2 (PBS2)
    Also compensates for phase errors of the edges; its length can be shortened during resynchronization. Initial size 2-8 Tq.
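
A small C sketch of such a bit-time budget follows, using illustrative segment values that respect the ranges listed above; the bus level is sampled at the PBS1/PBS2 boundary:

```c
#include <stdio.h>

/* Sketch of a bit time built from the four segments described above.
 * Values are illustrative (SS = 1 Tq, PTS 1-8 Tq, PBS1 1-8 Tq, PBS2 2-8 Tq,
 * 8-25 Tq total). */
int main(void)
{
    unsigned ss = 1, pts = 3, pbs1 = 4, pbs2 = 2;

    unsigned bit_tq = ss + pts + pbs1 + pbs2;          /* 10 Tq per bit       */
    /* The sample point sits at the end of PBS1. */
    unsigned sample_pct = 100u * (ss + pts + pbs1) / bit_tq;

    printf("bit time = %u Tq, sample point at %u%%\n", bit_tq, sample_pct);
    return 0;
}
```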

2.2 Synchronization

CAN synchronization is divided into hard synchronization and resynchronization.

Synchronization rules:

  • Only one synchronization adjustment is allowed within one bit time.
  • Any transition from "recessive" to "dominant" can be used for synchronization.
  • Hard synchronization occurs at the SOF: all receiving nodes adjust the synchronization segment of their current bit so that it falls within the transmitted SOF bit.
  • Resynchronization occurs during the other phases of a frame, i.e. whenever a transition edge falls outside the synchronization segment.

2.2.1 Hard synchronization

When a start-of-frame signal (a recessive-to-dominant edge) appears on the bus, the controllers of the other nodes adjust their bit timing according to that falling edge so that the edge falls inside their SS segment. Synchronization based on the start of frame in this way is called hard synchronization.

As can be seen, when the start-of-frame signal appears on the bus, the node's original bit timing is not synchronized with the bus timing, so data sampled at the sample point in this state would be incorrect. The node therefore performs a hard synchronization, shifting the SS segment of its own bit timing onto the part of the bit where the falling edge appeared. After that, the data sampled at the sample point is correct.
Hard sync

2.2.2 Resynchronization

Because hard synchronization only acts on the start-of-frame signal, it cannot by itself keep the subsequent bit timings aligned, so CAN also defines resynchronization. When a phase difference is detected between the bus timing and the node's own timing (i.e. a transition edge on the bus falls outside the node's SS segment), synchronization is regained by lengthening the PBS1 segment or shortening the PBS2 segment. This is called resynchronization.

There are two cases. In the first, the node detects from a bus edge that its own timing lags the bus timing by 2 Tq. The controller then lengthens the PBS1 segment by 2 Tq so that the node resynchronizes with the bus timing.
Resync 1
In the second case, the node detects from a bus edge that its own timing leads the bus timing by 2 Tq. The controller then shortens the PBS2 segment of the previous bit timing by 2 Tq to regain synchronization.
Resync 2
The maximum amount by which PBS1 may be lengthened or PBS2 shortened during resynchronization is called the resynchronization jump width (SJW); in the examples above SJW = 2 Tq. If SJW is set too small, resynchronization adjusts slowly; if it is set too large, the usable transmission rate suffers.
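
The resynchronization rule can be sketched in C as follows; the sign convention and variable names are this example's own, with the correction clipped to the programmed SJW:

```c
#include <stdio.h>

/* Sketch of resynchronization: a positive phase error (node lags the bus)
 * lengthens PBS1, a negative one (node leads the bus) shortens PBS2, and
 * either correction is limited to the programmed SJW. */
static int clip_to_sjw(int phase_error, int sjw)
{
    if (phase_error >  sjw) return  sjw;
    if (phase_error < -sjw) return -sjw;
    return phase_error;
}

int main(void)
{
    int sjw = 2;                              /* SJW = 2 Tq, as in the text */
    int pbs1 = 4, pbs2 = 3;                   /* nominal segment lengths    */

    int e = clip_to_sjw(+2, sjw);             /* node lags the bus by 2 Tq  */
    printf("lengthen PBS1: %d -> %d Tq\n", pbs1, pbs1 + e);

    e = clip_to_sjw(-2, sjw);                 /* node leads the bus by 2 Tq */
    printf("shorten  PBS2: %d -> %d Tq\n", pbs2, pbs2 + e);
    return 0;
}
```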

Source: blog.csdn.net/zztiger123/article/details/106642648