Game Programming Patterns - Event Queue

  "Decouple when a message or event is sent from when it is processed."

Motivation

  If you have ever done user interface programming, "events" are nothing new to you. Every time you click a button or open a drop-down menu, the system generates an event and throws it at your application; your job is to catch these events and hook your own custom behavior to them. To catch them, your code usually runs an event loop, something like this:

while (running)
{
    Event e = pollEvent();
    // handle event
}

  As you can see, this loop continuously fetches events and processes them. But what if another event arrives while we are still handling the previous one? We need somewhere for pending input to wait so that nothing gets missed, and that "waiting room" is a queue.

Central Event Bus

  Most games are not event-driven the way a GUI application is, but it is very common for a game to maintain its own event queue as the backbone of its nervous system. You will often hear descriptions like "central", "global", or "main". These queues exist for low-coupling communication between game modules. For example, say your game has a beginner tutorial that pops up a help box after certain in-game events. The simplest approach is to sprinkle checks through the game code: wherever an event of interest can occur, run the tutorial code there. But as the number of events grows, those checks multiply and the code becomes increasingly tangled. A central event queue replaces them: any game system can post events to the queue, and any system can receive events from it. The tutorial module registers itself with the queue and declares that it wants "enemy died" events. This way, the news that an enemy has died flows from the combat system to the tutorial module without the two ever interacting directly.

Possible problems

  Although a central event queue solves some problems, it brings problems of its own. Let's look at an example: playing a sound in the game.

class Audio
{
public:
    static void playSound(SoundId id, int volume);
};

void Audio::playSound(SoundId id, int volume)
{
    ResourceId resource = loadSound(id);
    int channel = findOpenChannel();
    if (channel == -1) return;
    startSound(resource, channel, volume);
}

  The UI code then calls it like this:

class Menu
{
public:
    void onSelect(int index)
    {
        Audio::playSound(SOUND_BLOOP, VOL_MAX);
        // other stuff...
    }
};

  The problems we are likely to face:

  •   The API call blocks the caller until the audio engine has completely processed the request

  Because playSound executes "synchronously", it does not return to the calling code until the request has been fully handled; if loading the sound resource is slow, the menu freezes while it waits.

  •   Requests cannot be batched

  Suppose our hero hits two monsters in the same frame: the monster's cry is requested twice. If you know a little about audio, you know that sounds played together are mixed by superimposing their waveforms. When the two waveforms are identical, the result sounds like the first sound alone but at twice the volume, which is harsh on the ears. And once a scene has many simultaneous sounds, the hardware's limited channel capacity is exceeded and sounds beyond the threshold get ignored or cut off. To deal with this, we need to see all pending sound requests together so we can aggregate and deduplicate them. But as the code above shows, each request is handled in isolation, so this is impossible.

  •   Requests are processed on the wrong thread

  There are many different subsystems across the code base that call playSound, and our games usually run on modern multicore hardware. To take advantage of the cores, those subsystems are spread across different threads. Because our API is synchronous, it executes on the caller's thread, so when different systems call playSound, we end up making concurrent calls into the audio engine from multiple threads.

  What these problems have in common is that the sound engine interprets a call to playSound as "drop everything and play this sound right now". That "process it now" is the problem. Other game systems call playSound at whatever time suits them, which may not be a time when the sound engine is able to handle the request. To fix this, we decouple receiving a request from processing it.

The Event Queue pattern

  An event queue is a FIFO queue that stores requests or notifications. Sending a notification enqueues the request and returns immediately; later, the request processor takes events off the queue and handles them, either directly or by routing them to modules interested in them. This pattern decouples the sender of a message from its receiver, both statically and in time.

When to use it

  If all you want is to decouple who sends a message from who receives it, patterns like Observer and Command handle that with less complexity. You only need a queue when you also want to decouple when the message is processed.

  Think of it in terms of push and pull: code block A wants code block B to do something. The natural way for A to issue the request is to push it to B. Meanwhile, it is just as natural for B to pull pending requests inside its own processing loop, when it is ready for them. When you have a push end and a pull end, you need a buffer between the two. That buffer is the queue, and it is what the queue gives you beyond the simpler decoupling patterns.

  Queues give control to the pulling side: the receiver can delay handling a request, aggregate requests, or discard them entirely. But they do this by taking control away from the sender: all the sender can do is drop a message on the queue and hope. This makes queues a poor fit when the sender needs a real-time response.

Precautions

  Unlike some simpler patterns, an event queue is complex and tends to have a broad, pervasive effect on the game's architecture. That means you must think hard about whether, and how, to apply this pattern.

  •   A central event queue is a global variable

  A common use of this pattern is as a "central hub" through which messages from every game module may pass. It is a powerful piece of infrastructure, but powerful does not mean good. We covered "global variables are bad" back in the Singleton pattern: a global variable can be touched by any part of the game, which quietly creates interdependencies between those parts. This pattern wraps that global state in a protocol with nicer manners, but it is still global state, with all the risk that carries.

  •   The state of the world can change under you

  Use an event queue with caution here. Because delivery and receipt are decoupled, an event sent to the queue is usually not processed immediately, which means the code handling an event cannot assume the current state of the world matches the state when the event was sent. For example, an entity being attacked might cause nearby companions to come to its aid; but if the event is processed later, and those companions have since moved elsewhere, the counterattack should not happen. This means events in a queued system tend to be heavier-weight than in a synchronous one: they have to capture the details of the moment they occurred, for use when they are processed later. A synchronous system only needs to signal that the event happened; the receiver can inspect the environment for the details itself.

  •   You can get feedback loops

    Any event or messaging system has to watch out for cycles:

  1. A sends an event;

  2. B receives it, and in handling it sends another event;

  3. That event happens to be one A cares about, so A receives it and, in response, sends an event...

  4. Go back to 2.

  When your messaging system is synchronous, you find these cycles quickly: they overflow the stack and crash the game. With a queue, the asynchronous processing unwinds the stack, so these spurious events can ping-pong around the system indefinitely while the game keeps running. A reliable rule for avoiding this is to never send events from within code that is handling an event. Putting a little debug logging into your event system is also a good idea.

The example code

  The code so far does something, but it is far from perfect. Let's improve it step by step.

  First, the problem that playSound is a synchronous call: we defer the work so that the call returns quickly. That means we need somewhere to store pending play requests; the simplest option is an array.

struct PlayMessage
{
    SoundId id;
    int volume;
};

class Audio
{
public:
    static void init() { numPending_ = 0; }

    static void playSound(SoundId id, int volume)
    {
        assert(numPending_ < MAX_PENDING);
        pending_[numPending_].id = id;
        pending_[numPending_].volume = volume;
        numPending_++;
    }

    static void update()
    {
        for (int i = 0; i < numPending_; i++)
        {
            ResourceId resource = loadSound(pending_[i].id);
            int channel = findOpenChannel();
            if (channel == -1) continue;
            startSound(resource, channel, pending_[i].volume);
        }
        numPending_ = 0;
    }

private:
    static const int MAX_PENDING = 16;

    static PlayMessage pending_[MAX_PENDING];
    static int numPending_;
};

  Here we have moved the actual processing into update(), which we can call at whatever time is appropriate: from the main game loop, or from a dedicated audio thread. This works well, but the code above assumes we can finish processing every pending sound within a single call to update(). If handling a request is itself asynchronous, say the sound resource loads in the background, that assumption breaks. For update() to process one request at a time, it must be able to pull a single request out of the buffer while leaving the others pending. In other words, we need an actual queue.

Ring buffer

  There are many ways to implement a queue, but a particularly nice one is a ring buffer. It keeps the advantages of an array while letting us remove items from the front of the queue without shifting everything over.

class Audio
{
public:
    static void init() { head_ = 0; tail_ = 0; }

    static void playSound(SoundId id, int volume)
    {
        // One slot is kept empty so a full queue can be
        // distinguished from an empty one.
        assert((tail_ + 1) % MAX_PENDING != head_);
        pending_[tail_].id = id;
        pending_[tail_].volume = volume;
        tail_ = (tail_ + 1) % MAX_PENDING;
    }

    static void update()
    {
        // If there are no pending requests, do nothing.
        if (head_ == tail_) return;

        ResourceId resource = loadSound(pending_[head_].id);
        int channel = findOpenChannel();
        if (channel == -1) return;
        startSound(resource, channel, pending_[head_].volume);

        head_ = (head_ + 1) % MAX_PENDING;
    }

private:
    static const int MAX_PENDING = 16;

    static PlayMessage pending_[MAX_PENDING];
    static int head_;
    static int tail_;
};

  A ring buffer is simple to implement: two indices, one pointing at the head of the queue and one at the tail. They start out equal, which means the queue is empty. Adding a request writes at the tail and moves it forward; processing a request reads at the head and moves it forward. When either index reaches the end of the array, it wraps back around to the beginning. Of course, before adding a request we must check whether the queue is already full, and drop the event if it is.

Aggregating requests

  Now that we have a queue, we can deal with the problem of identical sounds stacking on top of each other. The approach is to aggregate: merge a new request into a matching pending one.

void Audio::playSound(SoundId id, int volume)
{
    // Walk the pending requests; if this sound is already queued,
    // merge the two requests instead of enqueuing a duplicate.
    for (int i = head_; i != tail_; i = (i + 1) % MAX_PENDING)
    {
        if (pending_[i].id == id)
        {
            pending_[i].volume = max(pending_[i].volume, volume);
            return;
        }
    }

    // previous code...
}

  Here we simply check whether a request for the same sound is already pending; if so, we merge the two, keeping the larger of the two volumes. One thing to watch out for: the merge walks the entire queue, which can get slow if the queue is large. If that becomes a problem, the merging can be moved into update() instead.

  There is another important point: the window in which requests can be aggregated is only as large as the queue. If we process requests quickly, the queue stays small, and there are fewer opportunities to batch requests together. Likewise, if processing lags and the queue stays full, there are many more opportunities for requests to be merged.

  This pattern insulates the requester from knowing when the request gets processed, but once you start treating the whole queue as a data structure to walk and operate on, how a request is handled is visibly affected by the other requests around it. Make sure you are comfortable with that before you do it.

Across threads

  The last problem is thread synchronization, which must be addressed in today's era of multicore hardware. Since we already use the queue to decouple the code that enqueues requests from the code that processes them, all that remains is to make sure the queue itself is never modified by two threads at once. The simplest approach is to take a lock whenever a request is added to the queue, and to have the processing side wait on a condition variable so the CPU does not spin when there are no requests. For reference, see this thread pool implementation: https://blog.csdn.net/caoshangpa/article/details/80374651

Design decisions

  What goes in the queue

    So far, "event" and "message" have been used interchangeably, and mostly that is harmless: whatever you put in the queue, you get the same decoupling and aggregation capabilities. But the two concepts do differ.

  •   If you queue events
    • An "event" or "notification" describes something that has already happened. If that is what you queue, then:

    •   You probably allow multiple listeners. Since the queue holds things that already happened, the sender does not care who receives them. From its perspective, the event is fired and forgotten;
    •   The scope of the queue tends to be broader. Event queues are often used to broadcast events to any and all interested parties, and to give those parties maximum flexibility, these queues tend to be more globally visible.
  •   If you queue messages
    •   A "message" or "request" describes a behavior we would like to happen "in the future", like "play a sound". You can think of this as an asynchronous API to a service. Then:
    •   You are more likely to have a single listener. In the play-sound example, we do not want other parts of the game stealing messages off the queue.

  Who reads from the queue

    In the play-sound example, only the Audio class reads the queue. This is a single-cast queue; broadcast queues and work queues, which have multiple readers, also work.

  •   Single-cast queue

    Single-cast is the natural fit when the queue is part of a class's own API. As in the example above, from the caller's side there is nothing but a "playSound" function to call. Features of a single-cast queue:

    •   The queue becomes an implementation detail of the reader. All the sender knows is that it sent a message;
    •   The queue is more encapsulated. All other things being equal, more encapsulation is usually better;
    •   You do not have to worry about contention between listeners. With multiple listeners, you have to decide whether every listener gets every item (broadcast) or each item goes to exactly one listener (more like a work queue);
       
  •   Broadcast queue

    This is what most "event" systems do: when an event comes in, there are multiple listeners. Then:

    •   Events can get dropped. In most broadcast systems, if an event has no listeners at the moment it is processed, the event is discarded;
    •   You may need to filter events. Broadcast queues are usually visible across a wide swath of the system, so you can end up with a lot of listeners. Many listeners multiplied by many events means a large number of event handler invocations. To cut that down, most broadcast systems let a listener narrow the set of events it receives, for example accepting only UI events.
       
  •   Work queue

    Like a broadcast queue, you have multiple listeners; the difference is that each item in the queue goes to exactly one of them. This is a common pattern for parceling jobs out to a pool of concurrently running threads. Because the queue has to route each item to one listener, you need a scheduling policy. It may be as simple as round robin or random selection, or a more elaborate priority system.

  Who can write to the queue

    A queue may have one writer or many; combined with the reader options, this gives one-to-one, one-to-many, and many-to-many arrangements.

  •   One writer

    This style is closest to the synchronous Observer pattern: one privileged object generates events for the others to receive. Its characteristics:
    •   You implicitly know where the event came from. Since only one object adds to the queue, every listener can safely assume it is the sender;
    •   You usually allow multiple readers. A one-sender, one-receiver queue is possible, but then it stops looking like a communication system and looks more like a plain queue data structure.
  •   Multiple writers

    This is how our audio engine example works: any game module can add to the queue. "Global" or "central" event buses work this way too. For these:

    •   You have to watch out for feedback loops. Since anything can put something in the queue, it is easier to accidentally trigger a cycle;
    •   You will likely want some reference to the sender in the event itself. When a listener gets an event, it may want to know who sent it, so a reference to the sender is packed into the event object.

  Object lifetime of the items in the queue

  • Pass ownership

    This is the traditional approach when managing memory manually. When a message is queued, the sender no longer owns it; when it is processed, the receiver takes ownership and is responsible for releasing it.

  • Share ownership

    This is what smart pointers do: every holder of a reference to the message shares ownership of it, and the message frees itself automatically once nothing references it anymore.

  • The queue owns it

    Another option is for messages to always live inside the queue. Instead of allocating its own message, the sender asks the queue for one; the queue returns a reference to a message that already exists inside it, and the sender fills it in. Like an object pool.


Origin www.cnblogs.com/xin-lover/p/11689335.html