Embedded software design principles

There are usually five guiding principles to keep in mind when designing embedded software.

1. SRP Single Responsibility Principle

Each function or functional block should have only one responsibility, and therefore only one reason to change.
A function or feature should have only one reason to change. The single responsibility principle is the simplest to state but the most difficult to apply. It requires dividing large modules by responsibility: if a sub-module takes on too many responsibilities, those responsibilities become coupled, and a change driven by one responsibility can weaken or break the module's ability to fulfill the others. The criterion for division is that there is only one reason that drives the module to change; it does not simply mean that a module implements only one function. The same applies at the function level.

What is responsibility?
SRP defines a responsibility as "a reason for change". If there is more than one motivation for changing a sub-module, that module has more than one responsibility. This is sometimes hard to notice, because we are used to thinking of responsibilities in groups. Consider the following Modem interface, which most people would consider perfectly reasonable.

//Modem interface (violates SRP)
void connect();
void disconnect();
void send();
void recv();

However, this interface contains two responsibilities. The first is connection management and the second is data communication: connect and disconnect handle the modem connection, while send and recv handle the data transfer.
Should these two responsibilities be separated? That depends on how the application changes. If application changes affect the connection functions (for example, hot-plugging between a peripheral and the host) independently of the data sending and receiving after connection, the responsibilities should be separated. If, as with a socket, the connection state and the data exchange are bound together and application changes always affect both responsibilities at the same time, there is no need to separate them; forcing a separation would only introduce complexity.
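
A minimal sketch of the separated form in C (the function names below are illustrative, not taken from any particular driver): the two responsibilities become two independent groups of functions, so a caller that only transfers data no longer depends on connection management.

//connection-management responsibility
void modem_connect(void);
void modem_disconnect(void);

//data-communication responsibility
void modem_send(void);
void modem_recv(void);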

Separating coupled responsibilities
Coupling multiple responsibilities together is undesirable, but sometimes it is unavoidable: reasons tied to the hardware or the operating system can force things together that we would rather keep apart. The application part, however, should be separated and decoupled as much as possible. Much of early software module design is really about discovering responsibilities and separating them from one another.

2. OCP The Open-Closed Principle

Software should be open for extension and closed for modification.
Keep this in mind if you expect to develop software that is not abandoned after its first version. What kind of design can remain relatively stable in the face of changing requirements, so that the system can keep shipping new versions after the first one? The open-closed principle provides the guidance.

Software entities (modules, functions, etc.) should be extensible without being modified. If one change to a program triggers a chain reaction of changes in related modules, the design smells of rigidity. OCP recommends refactoring the system so that, when such changes come in the future, only new code needs to be added and code that already works correctly is left untouched.

Characteristics
A module designed according to the open-closed principle has two main characteristics.

  1. Open for extension: the behavior of the module can be extended, so that when application requirements change, the module can be extended to meet the new requirements.
  2. Closed for modification: extending the module's behavior does not require touching its existing source code; modification of working source code is not allowed.

The two characteristics seem to contradict each other: the usual way to extend a module's behavior is to modify its source code, and a module that may not be modified is usually assumed to have fixed behavior. How can a module's behavior be changed without changing its source code? The key is abstraction.

Abstract isolation
In an object-oriented language such as C++, you can create an abstraction that is itself fixed yet describes an unbounded set of possible behaviors: an abstract base class, with the possible behaviors represented by derived classes. A module that operates only on the abstraction depends on something fixed and can therefore be closed to modification, while its behavior can still be extended by deriving new implementations from the abstraction.
Polymorphism makes this easy in object-oriented languages, but what about embedded C? For a functional interface or function, do not hard-code the relevant logic; keep the concrete implementation details open and extensible, so that functionality can be added later without affecting existing functions.

Violation of OCP
An application needs to draw circles and squares in a window. The circles and squares are created into the same list, kept in the proper order, and the program walks the list and draws each of them in turn.
In a procedural C implementation that does not follow OCP, a set of data structures share the same first member but differ in the rest: the first member of each structure is a type code identifying whether the structure represents a circle or a square. The DrawAllShapes function iterates over an array of pointers to these structures and, based on the type code, calls the corresponding function (DrawCircle or DrawSquare).

typedef enum
{
    CIRCLE,
    SQUARE,
} ShapeType;

typedef struct
{
    ShapeType itsType;
} Shape;

typedef struct
{
    double x;
    double y;
} Point;

typedef struct
{
    ShapeType itsType;
    double itsSide;
    Point itsTopLeft;
} Square;

typedef struct
{
    ShapeType itsType;
    double itsRadius;
    Point itsCenter;
} Circle;

void DrawSquare(Square *);
void DrawCircle(Circle *);

void DrawAllShapes(Shape **list, int n)
{
    int i;
    Shape *s;

    for(i = 0; i < n; i++)
    {
        s = list[i];
        switch(s->itsType)
        {
            case SQUARE:
                DrawSquare((Square *)s);
                break;
            case CIRCLE:
                DrawCircle((Circle *)s);
                break;
        }
    }
}

The DrawAllShapes function does not comply with OCP. If the function must also draw triangles, it has to be changed: the switch must be extended with a triangle case. In fact, every time a new shape type is added this function must change, and adding a new shape to such an application means finding every function that contains a switch (or if-else chain) like the one above and adding a branch for the new type in each place.
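
As a rough sketch (not the book's exact code) of how the same example could follow OCP in C, each shape can carry its own draw routine through a function pointer, so the drawing loop never has to change when a new shape type is added:

typedef struct OcpShape OcpShape;
typedef void (*DrawFunc)(const OcpShape *);

struct OcpShape
{
    DrawFunc draw;              //each concrete shape supplies its own draw routine
};

typedef struct
{
    OcpShape base;              //placed first so an OcpCircle* can be used as an OcpShape*
    double itsRadius;
    Point itsCenter;
} OcpCircle;

static void DrawOcpCircle(const OcpShape *s)
{
    const OcpCircle *c = (const OcpCircle *)s;
    (void)c;                    //draw here using c->itsRadius and c->itsCenter
}

//closed for modification: a new shape only needs its own struct and draw function,
//this loop is never touched again
void DrawAllShapesOcp(OcpShape **list, int n)
{
    int i;
    for(i = 0; i < n; i++)
    {
        list[i]->draw(list[i]);
    }
}
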
In embedded data streams, data parsing is a common scenario. A novice may write one universal, very long function that does all of the parsing. For example, here is a version that handles the different data types in one place and violates OCP:

#include <stdint.h>   //fixed-width integer types (uint8_t, int16_t, ...)
#include <stddef.h>   //NULL

//Example that violates OCP
//The different data types are all handled in one switch-case; as with DrawAllShapes above, every later extension changes this existing function.
int16_t handle_cmd_body_v1(uint8_t type, uint8_t *data, uint16_t len)
{
    switch(type)
    {
        case 0:
            //handle0
            break;
        case 1:
            //handle1
            break;
        default:
            break;
    }
    return -1;
}

Follow OCP
The same data-parsing example, adjusted to follow OCP:

//Follows OCP
typedef int16_t (*cmd_handle_body)(uint8_t *data, uint16_t len);
typedef struct
{
    uint8_t type;
    cmd_handle_body hdlr;
} cmd_handle_table;

static int16_t cmd_handle_body_0(uint8_t *data, uint16_t len)
{
    //handle0
    return 0;
}

static int16_t cmd_handle_body_1(uint8_t *data, uint16_t len)
{
    //handle1
    return 0;
}

//Extending a new command only requires adding an entry here; previously written code is not affected
static cmd_handle_table cmd_handle_table_map[] =
{
    {0, cmd_handle_body_0},
    {1, cmd_handle_body_1}
};

int16_t handle_cmd_body_v2(uint8_t type, uint8_t *data, uint16_t len)
{
    int16_t ret = -1;
    uint16_t i = 0;
    uint16_t size = sizeof(cmd_handle_table_map) / sizeof(cmd_handle_table_map[0]);

    for(i = 0; i < size; i++)
    {
        if((type == cmd_handle_table_map[i].type) && (cmd_handle_table_map[i].hdlr != NULL))
        {
            ret = cmd_handle_table_map[i].hdlr(data, len);
        }
    }
    return ret;
}

Although C has no abstraction and polymorphism in the C++ sense, this achieves the effect of OCP overall: cmd_handle_table_map can be extended without modifying handle_cmd_body_v2. This pattern is essentially the general table-driven method.
OCP can also be achieved with a callback function: the bottom layer remains unchanged, and the application layer extends it by supplying the differentiated part itself.
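
A minimal sketch of the callback form, assuming a hypothetical low-level UART receive path (the uart_* names are illustrative): the bottom layer only stores and invokes a function pointer, and the application layer extends behavior by registering its own handler.

typedef void (*rx_callback_t)(const uint8_t *data, uint16_t len);

static rx_callback_t rx_cb = NULL;   //filled in by the application layer

//the application registers its own handler here
void uart_register_rx_callback(rx_callback_t cb)
{
    rx_cb = cb;
}

//the bottom layer never changes; it just forwards data to whatever was registered
void uart_on_rx(const uint8_t *data, uint16_t len)
{
    if(rx_cb != NULL)
    {
        rx_cb(data, len);
    }
}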

Strategic closure
The above example is not 100% closed. In general, no matter how "open-closed" a module is, there will always be some changes it is not closed against; no model fits every situation. Since complete closure is impossible, the issue must be approached strategically: the designer must choose which kinds of change the module will be closed against, estimate the most likely changes first, and then build abstractions to isolate the module from those changes. This requires some industry experience and predictive ability.
Following OCP also has a cost. Indiscriminate abstract isolation is not desirable from a software point of view: creating the abstraction consumes development time and code space and increases the complexity of the design. For example, when requirements are fixed or hardware resources are scarce, handle_cmd_body_v1 can be the more reasonable choice even though handle_cmd_body_v2 is better from the standpoint of the design principles; the former is more direct and suits scenarios where resources are tight and the requirements will not change. For embedded software, abstract the parts of the program that change frequently.

3. DIP Dependency Inversion Principle

High-level modules and low-level modules should both depend on an intermediate abstraction layer (i.e., interfaces), and details should depend on abstractions.
The dependency inversion principle means that the high-level module (the caller) should not depend on the low-level module (the callee); both should depend on abstractions.
Structured analysis and design tend to create structures in which high-level modules depend on low-level modules and policy depends on detail. This is the structure of most embedded software: from the business layer to the component layer and then down to the driver layer, a top-down design mindset. The dependency structure of a well-designed object-oriented program is "inverted" relative to the structure produced by the traditional procedural approach.
When high-level modules depend on low-level modules, changes in the low-level modules ripple directly into the high-level modules and force them to change in turn, which makes it difficult to reuse the high-level modules in different contexts.

Inverted interface ownership
"Don't call us, we'll call you." (Don't call us, we'll call you), low-level modules are implemented in high-level modules The interface declared and called by the high-level module means that the low-level module implements the functions according to the needs of the high-level module. Through this inverted interface ownership, high-level reuse is satisfied in any context. In fact, even for embedded software, the focus of development is high-level modules that change at any time. Generally, similar upper-level application software runs in different hardware environments, so the reuse of high-level software can improve software quality.

Sample comparison
Suppose software controlling a furnace regulator reads the current temperature from one external channel and turns the furnace heater on or off by sending commands to another channel. A structure that follows the data flow looks roughly like this:

//Scheduling algorithm of the temperature regulator
//When the current temperature is outside the set range, turn the furnace heater on or off
void temperature_regulate(int min_temp, int max_temp)
{
    int tmp;
    while(1)
    {
        tmp = read_temperature();   //read the temperature
        if(tmp < min_temp)
        {
            furnace_enable();       //start heating
        }
        else if(tmp > max_temp)
        {
            furnace_disable();      //stop heating
        }
        wait();
    }
}

The high-level intent of the algorithm is clear, but the implementation is littered with low-level details, so this control algorithm cannot be reused on different hardware at all. The code is small and the algorithm easy to implement, so this may not seem to do much harm. But what if a complex temperature-control algorithm has to be ported to different platforms, or the requirements change and an additional warning must be issued when the temperature is abnormal?

//Possible abstract interfaces the algorithm depends on (definitions assumed here for completeness)
typedef struct
{
    int (*read)(void);        //read the current temperature
} Thermometer;

typedef struct
{
    void (*enable)(void);     //start heating
    void (*disable)(void);    //stop heating
} Heater;

void temperature_regulate_v2(Thermometer *t, Heater *h, int min_temp, int max_temp)
{
    int tmp;
    while(1)
    {
        tmp = t->read();
        if(tmp < min_temp)
        {
            h->enable();
        }
        else if(tmp > max_temp)
        {
            h->disable();
        }
        wait();
    }
}

This inverts the dependencies: the high-level regulation strategy no longer depends on any thermometer-specific or furnace-specific details, so the algorithm is reusable and independent of those details.
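
For example, wiring one concrete board to the generic algorithm might look like the following sketch (the board_* helpers and entry function are invented placeholders for the real drivers):

//possible wiring for one concrete board
static int  board_read_temp(void)  { return 25; /* would read the real sensor here */ }
static void board_heater_on(void)  { /* drive the heater control line */ }
static void board_heater_off(void) { /* release the heater control line */ }

static Thermometer board_thermometer = { board_read_temp };
static Heater      board_heater      = { board_heater_on, board_heater_off };

void app_start_regulation(void)
{
    //the regulation algorithm above is reused unchanged on this board
    temperature_regulate_v2(&board_thermometer, &board_heater, 20, 25);
}
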
Dependency inversion is especially good at solving the reuse problems caused by frequent hardware changes in embedded software. Take the pedometer of a sports bracelet developed procedurally, with call relationships running from high to low: if the acceleration sensor is later replaced for sourcing or other reasons, the upper layer has to be modified. This is especially painful when there is no internal encapsulation and the application layer calls the driver interface directly, because every call site has to be replaced one by one; and if it is uncertain which sensor will eventually be used and the software has to adapt automatically to the sensor's characteristics, a large number of switch-case branches are needed.

app  -> drv_pedometer_a
//every call relationship has to be replaced with
app  -> drv_pedometer_b

If dependency inversion is used, both depend on abstraction:

app  -> get_pedometer_interface
//the bottom layer also depends on the abstraction
drv_pedometer_a  -> get_pedometer_interface
drv_pedometer_b  -> get_pedometer_interface

With dependency inversion, the different hardware drivers depend on an abstract interface, and the upper-layer business also depends on that abstraction layer. All development is designed around get_pedometer_interface, so hardware changes do not affect the reuse of the upper-layer software. This implementation is essentially a general proxy arrangement.
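
A possible shape for that abstraction layer in C (the interface layout and driver internals below are assumed, following the text's pedometer example):

typedef struct
{
    int (*init)(void);
    int (*read_steps)(uint32_t *steps);
} pedometer_interface_t;

//driver A implements the abstract interface (driver B would do the same)
static int drv_a_init(void)                  { /* configure sensor A */ return 0; }
static int drv_a_read_steps(uint32_t *steps) { *steps = 0; /* read sensor A */ return 0; }

static const pedometer_interface_t drv_pedometer_a =
{
    drv_a_init,
    drv_a_read_steps
};

//the application only ever talks to the abstraction returned here;
//selecting driver A or B (e.g. by a detected sensor ID) stays hidden inside
const pedometer_interface_t *get_pedometer_interface(void)
{
    return &drv_pedometer_a;
}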

Conclusion
With the dependency structure created by traditional procedural programming, policy depends on detail, so the policy is affected by changes to the details. In fact, it does not matter which language the program is written in: even in embedded C, if the program's dependencies are inverted, it embodies object-oriented design thinking.
The dependency inversion principle is a fundamental mechanism behind the claimed benefits of object-oriented technology. Applying it correctly is necessary for creating reusable frameworks and for building code that is resilient to change; and because abstractions and details are isolated from each other, the code is also easier to maintain.

4. ISP Interface Segregation Principle

Interfaces should be as fine-grained as possible and contain as few methods as possible. Do not try to build one all-powerful interface for every caller that depends on it.
Use multiple specialized interfaces instead of a single general-purpose one; that is, a client should not be forced to depend on interfaces it does not need. In object-oriented development, when an inherited base class contains interfaces a derived class does not need, interfaces originally added for one specific need become "universal" and every derived class is forced to implement meaningless methods. This is called interface pollution.

Interface pollution
The focus of "Interface Isolation Principle" is the word "interface". There are two understandings at the embedded C level:

1. If "interface" is understood as a set of API interfaces, it can be a series of interfaces for a certain sub-function. If some interfaces are only used by some callers, this part of the interface needs to be isolated and used by these callers alone without forcing other callers to rely on this part of the interface that will not be used. Similar to shopping, no bundling required, just buy what you need.
2. If "interface" is understood as a single API interface or function, some callers only need some functions in the function, and the function can be split into multiple functions with finer granularity, so that The caller relies only on the fine-grained functions it needs. That is, a function should not pass in too many parameter configurations. It would rather be split into multiple similar interfaces to simplify the call, rather than providing a universal interface that requires some irrelevant parameters. Do not over-encapsulate the external interface of the module. Too many parameters will make it difficult to read and use

Risks and Solutions
If a program depends on methods it does not use, it is exposed to changes caused by changes in those unused methods, which inadvertently couples all the related programs together. In other words, if one client depends on methods it does not use while other clients do use them, then when the other clients request changes to those methods, the first client is affected as well. This coupling should be avoided as much as possible, so the interfaces should be separated.

In embedded C, as a code base is upgraded iteratively, new features get bolted on: parameters are added directly to an existing function, or extra processing is added inside it, producing bloated interfaces that are unfriendly to callers of different versions (a genuine iterative upgrade of the same feature is fine; the problem arises when the variants should coexist side by side, i.e., when the relationship is horizontal rather than progressive). The cost and impact of a change then become unpredictable and the risks grow: modifying something apparently unrelated can still have side effects, so that on the surface you are changing feature A yet feature B breaks, like the proverbial fish in the moat that suffer when the city gate catches fire. Unit-test coverage for this is also hard to get right.

At the module level, irrelevant interfaces can be shielded behind precompiled macros, which also saves code space. At the function level, when extending new functionality, create a new interface: re-implement an extended or v2 version that is equivalent to the original function, and avoid merging behaviors by adding parameters to the old one, unless it is clear that the relationship between the two is progressive rather than parallel.
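
A sketch of both ideas with hypothetical names: an optional interface fenced off behind a precompiled macro, and a new requirement added as a parallel _v2 interface rather than an extra flag parameter on the old function.

#ifdef CONFIG_MODULE_DIAG   //hypothetical option: callers that never use diagnostics do not compile it in
int16_t module_dump_status(uint8_t *buf, uint16_t len);
#endif

//the original interface stays untouched for existing callers
int16_t module_send(const uint8_t *data, uint16_t len);

//extended behaviour lives in a parallel interface instead of a new flag on the old one
int16_t module_send_v2(const uint8_t *data, uint16_t len, uint32_t timeout_ms);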

5. LKP Least Knowledge Principle

A submodule should have minimal knowledge about other modules.
The Law of Demeter (LoD), also known as the principle of least knowledge: the less a function knows about the sub-functions it depends on, the better. No matter how complex the logic of a dependent sub-function is, try to encapsulate that logic internally. Put simply, when using a sub-module there should be no need to care about its internal implementation, and as few API interfaces as possible should have to be called.

For example, suppose operation A requires calling four interfaces in the order 1-2-3-4, while operation B requires the order 1-2-4-3. The caller has to know details inside the module in order to use it correctly. Such interfaces can simply be merged: encapsulate the two actions A and B, execute the specific details internally, keep them hidden and closed to the outside, and callers no longer need to pay attention to them.
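
A minimal sketch of that encapsulation, with invented names (step1..step4 stand for the module's internal interfaces):

//internal steps, hidden inside the module (bodies omitted here)
static void step1(void);
static void step2(void);
static void step3(void);
static void step4(void);

//callers only see the two operations; the calling order is an internal detail
void do_operation_A(void)
{
    step1(); step2(); step3(); step4();
}

void do_operation_B(void)
{
    step1(); step2(); step4(); step3();
}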

The original intention of the least knowledge principle (the Law of Demeter) is to reduce coupling between modules, hide information better, avoid overloading modules with information, and solidify and close off certain details. Excessive closure has its own drawback, however: once a customized requirement changes, for example a new operation C that needs the order 4-3-2-1, a new interface has to be added.

6. Refactoring

Refactoring is an ongoing activity, like cleaning the kitchen after a meal. The first meal goes faster if you skip the cleaning, but because the dishes and the kitchen were never cleaned, preparation takes longer the next day, which in turn encourages skipping the cleaning again. Yes, skipping the cleaning makes mealtime faster, but the mess keeps building up, and eventually a great deal of time goes into hunting for the right utensils, chipping hardened food off the dishes and scrubbing them clean. We have to eat every day; ignoring the cleaning does not really speed up the cooking, and a one-sided pursuit of speed will sooner or later tip things over: haste makes waste. The purpose of refactoring is to clean the code every day and keep it clean.
Most software development is iterative development on top of this kind of unclear, chaotic state, and all principles and patterns are worthless when applied to dirty code. Before applying the various design principles and design patterns, first learn to write clean code.

7. Random thoughts

There are many more object-oriented design principles, and all kinds of general guiding rules around class inheritance, encapsulation and polymorphism, but these design principles do not all apply to embedded C. Embedded C is structured, top-down programming, which shows its weaknesses when requirements change: it is fast but chaotic. Refactoring is therefore essential to improve the internal structure of the code without changing its external behavior; as for what style of change is appropriate, the five principles above can serve as a reference.

Today's embedded software development rarely has to split a single byte into eight pieces the way it did in the past. When resources are sufficient, embedded application development can borrow appropriately from the object-oriented approach to achieve higher-quality software. There are two concrete techniques: function pointers and abstract isolation. "There is no problem that cannot be solved by adding a layer of abstraction; if there is, add another layer."
