C++ Application Performance Optimization (IV): Performance Analysis of Common C++ Data Structures

This article analyzes the practical performance of common data structures for the typical operations (traversal, insertion, deletion, sorting, and searching), illustrated with examples.

I. Introduction to Common Data Structures

1. Arrays

The array is the most commonly used linear list. When the size of the data set is static or can be determined in advance, an array is the best choice for storage.
Arrays have three advantages: first, lookup is easy, since an index locates the desired element immediately; second, adding or deleting elements causes no memory fragmentation; third, no pointers need to be stored alongside the data. However, as a static data structure, an array suffers from low memory utilization and poor scalability: no matter how many elements it actually holds, the compiler always allocates the predefined capacity, and if that capacity is exceeded, a new array has to be created.

2. Linked Lists

Another commonly used linear list is the linked list, in which the data items are connected by pointers. Each node consists of a data field and a pointer field; the pointer generally points to the next node in the list, and is NULL if the node is the last one. A doubly linked list (Double Linked List) adds a second pointer that points to the previous node, and a skip list (Skip Linked List) adds a pointer field that points to an arbitrary associated node.

template <typename T>
class LinkedNode
{
public:
    LinkedNode(const T& e): data(e), pNext(NULL), pPrev(NULL), pConnection(NULL)
    {
    }
    LinkedNode<T>* Next()const
    {
        return pNext;
    }
    LinkedNode<T>* Prev()const
    {
        return pPrev;
    }
private:
    T data;
    LinkedNode<T>* pNext;// pointer to the next data node
    LinkedNode<T>* pPrev;// pointer to the previous data node
    LinkedNode<T>* pConnection;// pointer to an associated node (for a skip list)
};

Unlike a statically allocated array, a linked list has a variable length: as long as there is enough memory, the program can keep inserting new items into the list. Whereas all items of an array are stored in one contiguous block of memory, the items of a linked list are placed at arbitrary locations in memory.

3. Hash Tables

Arrays and linked lists each have their own strengths and weaknesses: an array can locate any item easily but scales poorly, while a linked list cannot locate an item quickly but makes insertion and deletion of any item very simple. When large data sets have to be processed, the advantages of arrays and linked lists usually need to be combined. By combining them, a hash table achieves both better scalability and higher access efficiency.
Although every developer can build a hash table in their own way, hash tables share a common basic structure.
Each entry of the hash array holds a pointer to a small linked list, and all data items that map to that entry are stored in the nodes of that list. When the program needs to access a data node, it does not traverse the whole hash table; it first locates the array entry and then searches that entry's linked list for the target node. Each of these sub-lists is called a bucket. Which bucket a node is stored in is determined jointly by the node's key field and the hash function. Many mapping methods exist, but the most common hash function is the division method, which has the following form:
F(k) = k % D
where k is the key of the data node, D is a predetermined constant, and F(k) is the bucket number (equivalent to the index into the hash array). A hash table can be implemented as follows:

// Data node definition
template <class E, class Key>
class LinkNode
{
public:
    LinkNode(const E& e, const Key& k): pNext(NULL), pPrev(NULL)
    {
        data = e;
        key = k;
    }
    void SetNextNode(LinkNode<E, Key>* next)
    {
        pNext = next;
    }
    LinkNode<E, Key>* Next()const
    {
        return pNext;
    }
    void SetPrevNode(LinkNode<E, Key>* prev)
    {
        pPrev = prev;
    }
    LinkNode<E, Key>* Prev()const
    {
        return pPrev;
    }

    const E& GetData()const
    {
        return data;
    }
    const Key& GetKey()const
    {
        return key;
    }
private:
    // Pointer fields
    LinkNode<E, Key>* pNext;
    LinkNode<E, Key>* pPrev;
    // Data fields
    E data;// data
    Key key;// key
};

#include <cstring> // for memset

// Hash table definition
template <class E, class Key>
class HashTable
{
private:
    typedef LinkNode<E, Key>* LinkNodePtr;
    LinkNodePtr* hashArray;// hash array
    int size;// size of the hash array
public:
    HashTable(int n = 100);
    ~HashTable();
    bool Insert(const E& data);
    bool Delete(const Key& k);
    bool Search(const Key& k, E& ret)const;
private:
    LinkNodePtr SearchNode(const Key& k)const;
    // Hash function
    int HashFunc(const Key& k)const
    {
        return k % size;
    }
};
// Hash table constructor
template <class E, class Key>
HashTable<E, Key>::HashTable(int n)
{
    size = n;
    hashArray = new LinkNodePtr[size];
    memset(hashArray, 0, size * sizeof(LinkNodePtr));
}
// Hash table destructor
template <class E, class Key>
HashTable<E, Key>::~HashTable()
{
    for(int i = 0; i < size; i++)
    {
        if(hashArray[i] != NULL)
        {
            // Release the memory of each bucket
            LinkNodePtr p = hashArray[i];
            while(p)
            {
                LinkNodePtr toDel = p;
                p = p->Next();
                delete toDel;
            }
        }
    }
    delete [] hashArray;
}

As the code shows, the hash function determines the efficiency and performance of a hash table.
When F(k) = k, every bucket contains exactly one node and the hash table degenerates into a one-dimensional array. The per-node pointers waste some memory, but search efficiency is maximal (time complexity O(1)).
When F(k) = c (a constant), all nodes are stored in a single bucket and the hash table degenerates into a linked list, with an essentially empty array added as pure overhead; finding a node then costs O(n), the worst possible efficiency.
Therefore, building a good hash table requires a hash function that distributes the data nodes as evenly as possible, and the structure of the table itself is also an important performance factor. For example, too few buckets lead to very long chains and poor search efficiency, while too many buckets waste memory. Before designing and implementing a hash table, the distribution of the nodes' key values should be analyzed to decide how large the hash array should be and what kind of hash function to use.
When implementing a hash table, the nodes of a bucket can be organized in various ways: a bucket is not limited to a linked list, it can also be a tree or even another hash table.
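
For keys that are not integers, the key is typically first folded into an integer and then mapped with the same division method. Below is a minimal sketch of this idea for string keys; the function name StringHash and the folding constant are illustrative assumptions, not part of the hash table above.

#include <string>

// A sketch of the division method for string keys: fold the characters
// into an integer, then map it to a bucket with F(k) = k % D.
unsigned int StringHash(const std::string& key, unsigned int bucketCount)
{
    unsigned int h = 0;
    for(size_t i = 0; i < key.size(); i++)
    {
        h = h * 31 + static_cast<unsigned char>(key[i]);// fold characters into an integer
    }
    return h % bucketCount;// bucket index in [0, bucketCount)
}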

4. Binary Trees

Another common data structure is the binary tree, usually used by developers as a binary search tree. In a binary search tree, the keys of all nodes in a node's left subtree are less than or equal to its own key, and the keys of all nodes in its right subtree are greater than or equal to its own key. Because searching a balanced binary search tree works on the same principle as binary search over a sorted array, its query efficiency, O(log n), is much higher than that of a linked list, which is O(n). However, since each node must store two pointers to its children, the space overhead is much higher than for arrays and singly linked lists, and because the nodes are not allocated contiguously, memory fragmentation can result. Still, the binary tree performs well for insertion, deletion, and search, which makes it one of the most commonly used data structures. A linked implementation of a binary tree node follows:

template <class T>
class TreeNode
{
public:
    TreeNode(const T& e): left(NULL), right(NULL)
    {
        data = e;
    }
    TreeNode<T>* Left()const
    {
        return left;
    }
    TreeNode<T>* Right()const
    {
        return right;
    }
private:
    T data;
    TreeNode<T>* left;
    TreeNode<T>* right;
};

II. Traversal Operations

1. Array Traversal

Traversing an array is very simple: it can be walked sequentially or in reverse, and the traversal can start from any position.

2. Linked List Traversal

Traversing a linked list only requires following the node pointers:

    LinkedNode<T>* pNode = pFirst;
    while(pNode)
    {
        // do something with pNode
        pNode = pNode->Next();
    }

A doubly linked list supports both forward and reverse traversal (a reverse-traversal sketch follows), and a skip list can speed up traversal by skipping over nodes that are not of interest.
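
For illustration, a minimal sketch of reverse traversal over the doubly linked node type defined earlier, assuming a pointer pLast to the tail node is available (pLast is not part of the original code):

    // Reverse traversal of a doubly linked list, starting from the tail
    LinkedNode<T>* pNode = pLast;
    while(pNode)
    {
        // do something with pNode
        pNode = pNode->Prev();// follow the backward pointer
    }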

3. Hash Table Traversal

If the keys of all nodes are known in advance, each non-empty bucket can be located through the keys and the hash function, and then the bucket's linked list can be traversed. Otherwise, the only option is to walk through every bucket of the hash array:

    for(int i = 0; i < size; i++)
    {
        LinkNodePtr pNode = hashArray[i];
        while(pNode != NULL)
        {
            // do something with pNode
            pNode = pNode->Next();
        }
    }

4. Binary Tree Traversal

A binary tree can be traversed in three orders: pre-order, in-order, and post-order. The three recursive traversals are implemented as follows:

// Pre-order traversal
template <class E>
void PreTraverse(TreeNode<E>* pNode)
{
    if(pNode != NULL)
    {
        // do something with the node
        doSomething(pNode);
        PreTraverse(pNode->Left());
        PreTraverse(pNode->Right());
    }
}

// In-order traversal
template <class E>
void InTraverse(TreeNode<E>* pNode)
{
    if(pNode != NULL)
    {
        InTraverse(pNode->Left());
        // do something with the node
        doSomething(pNode);
        InTraverse(pNode->Right());
    }
}

// Post-order traversal
template <class E>
void PostTraverse(TreeNode<E>* pNode)
{
    if(pNode != NULL)
    {
        PostTraverse(pNode->Left());
        PostTraverse(pNode->Right());
        // do something with the node
        doSomething(pNode);
    }
}

The main drawback of recursive traversal is that as the depth of the tree grows, the functions use more and more stack space; since stack space is limited, a recursive traversal may exhaust it. There are two solutions. The first is to implement pre-order, in-order, and post-order traversal non-recursively: an explicit stack mimics the call stack of the recursive version by recording the nodes on the current traversal path, and the next operation (pushing a child or popping a node) is decided by whether the node on top of the stack still has unvisited children. The second is to thread the binary tree, i.e., to add to each leaf node a pointer to its successor according to a chosen traversal order.
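
As an example of the first approach, here is a minimal sketch of a non-recursive in-order traversal that uses an explicit std::stack with the TreeNode class defined earlier; doSomething stands for the per-node processing, as in the recursive versions.

#include <stack>

// Non-recursive in-order traversal: the explicit stack records the nodes
// on the current path whose left subtrees are still being visited.
template <class E>
void InTraverseIterative(TreeNode<E>* pRoot)
{
    std::stack<TreeNode<E>*> s;
    TreeNode<E>* pNode = pRoot;
    while(pNode != NULL || !s.empty())
    {
        // Descend as far left as possible, stacking the nodes on the path
        while(pNode != NULL)
        {
            s.push(pNode);
            pNode = pNode->Left();
        }
        // Visit the node on top of the stack, then turn to its right subtree
        pNode = s.top();
        s.pop();
        doSomething(pNode);
        pNode = pNode->Right();
    }
}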

III. Insertion Operations

1. Array Insertion

Since all elements of an array are stored in contiguous memory, inserting a new element requires moving every element after the insertion position to make room, so that the new element can be copied into place. If the array is full, a new, larger array must be created and all elements of the original array copied into it. Compared with other data structures, insertion into an array therefore has a relatively high time complexity.
When inserting into an array that is not full, the best case is insertion at the end, with time complexity O(1); the worst case is insertion at the head, which requires moving every element, with time complexity O(n).
When inserting into a full array, the usual practice is to create a larger array, copy all elements of the original array into it, insert the new element, and then delete the original array; the time complexity is O(n). Before the original array is deleted, both arrays must coexist for a while, which incurs a large space overhead.
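
A minimal sketch of both cases for a heap-allocated int array; the function and parameter names (InsertIntoArray, count, capacity) are illustrative, not from the original text.

#include <cstring>

// Insert value at position pos; count is the current number of elements,
// capacity the allocated size. Returns the (possibly reallocated) array.
int* InsertIntoArray(int* array, int& count, int& capacity, int pos, int value)
{
    if(count == capacity)
    {
        // Array is full: allocate a larger array and copy the old contents (O(n))
        int newCapacity = (capacity == 0) ? 1 : capacity * 2;
        int* newArray = new int[newCapacity];
        memcpy(newArray, array, count * sizeof(int));
        delete [] array;
        array = newArray;
        capacity = newCapacity;
    }
    // Shift every element after pos one slot to the right (O(n) in the worst case)
    for(int i = count; i > pos; i--)
    {
        array[i] = array[i - 1];
    }
    array[pos] = value;
    ++count;
    return array;
}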

2. Linked List Insertion

Inserting a new node into a linked list is simple. For a singly linked list, only the pNext pointer of the node before the insertion position has to be redirected to the new node, and the new node's pNext pointer set to the following node (the previous node does not exist when inserting at the head, and the following node does not exist when inserting at the tail). For a doubly linked list or a skip list, the pointers of the related nodes must be adjusted as well. Insertion takes O(1) time regardless of the list length, although in practice it is usually preceded by locating the insertion position, which takes some time.
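
A minimal sketch of inserting a node after a given node of a doubly linked list; a plain Node struct is used here for self-containment rather than the LinkedNode class above.

struct Node
{
    int data;
    Node* pPrev;
    Node* pNext;
};

// Insert pNew immediately after pPos. Only the pointers of the
// neighbouring nodes are touched, so the cost is O(1) regardless
// of the list length.
void InsertAfter(Node* pPos, Node* pNew)
{
    pNew->pPrev = pPos;
    pNew->pNext = pPos->pNext;
    if(pPos->pNext != NULL)
    {
        pPos->pNext->pPrev = pNew;// old successor points back to the new node
    }
    pPos->pNext = pNew;
}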

3. Hash Table Insertion

Inserting a node into a hash table takes two steps: locating the bucket and inserting the node into the bucket's linked list.

template <class E, class Key>
bool HashTable<E, Key>::Insert(const E& data)
{
    Key k = data;// extract the key
    // Create a new node
    LinkNodePtr pNew = new LinkNode<E, Key>(data, k);
    int index = HashFunc(k);// locate the bucket
    LinkNodePtr p = hashArray[index];
    // If the bucket is empty, insert directly
    if(NULL == p)
    {
        hashArray[index] = pNew;
        return true;
    }
    // Insert the node at the head of the bucket's list
    hashArray[index] = pNew;
    pNew->SetNextNode(p);
    p->SetPrevNode(pNew);
    return true;
}

The insertion operation of a hash table has time complexity O(1). If the bucket lists are kept ordered, locating the insertion position inside the list takes additional time; with a chain of length M, the complexity is O(M).

4. Binary Tree Insertion

The shape of a binary tree directly affects insertion efficiency. For a balanced binary search tree, inserting a node takes O(log n) time. For an unbalanced binary tree the cost is higher: in the worst case the tree degenerates into a linked list (for example, every node has an empty left child), and inserting a new node then takes O(n) time.
When the number of nodes is large, insertion into a balanced binary tree is therefore much more efficient than into an unbalanced one. In engineering practice, unbalanced trees are usually avoided, or an unbalanced tree is converted into a balanced one. A simple way to do this is as follows (a sketch follows the list):
(1) Traverse the unbalanced binary tree in order and store pointers to all nodes in an array.
(2) Since the array now holds all elements in sorted order, it can be processed in binary-search fashion, building a balanced binary tree layer by layer from the top down.
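
A minimal sketch of step (2); the simplified BSTNode struct and the helper BuildBalanced are illustrative, and the input is assumed to be an array of keys already in sorted order. Picking the middle element as the root at every level keeps the two halves, and therefore the tree, balanced.

// Simplified node type used for the rebuilding sketch
struct BSTNode
{
    int key;
    BSTNode* left;
    BSTNode* right;
};

// Build a balanced binary search tree from sortedKeys[start..end]
BSTNode* BuildBalanced(const int sortedKeys[], int start, int end)
{
    if(start > end)
    {
        return NULL;
    }
    int mid = start + (end - start) / 2;// the middle element becomes the root
    BSTNode* pNode = new BSTNode;
    pNode->key = sortedKeys[mid];
    pNode->left = BuildBalanced(sortedKeys, start, mid - 1);
    pNode->right = BuildBalanced(sortedKeys, mid + 1, end);
    return pNode;
}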

IV. Deletion Operations

1. Array Deletion

To delete an element from a (non-empty) array, all elements after the deleted position have to be moved forward. In the worst case (deleting the first element) the time complexity is O(n); in the best case (deleting the last element) it is O(1).
In some cases (such as a dynamic array), if the array contains a large number of idle slots after deletions, it should be shrunk: a new, smaller array is created, all elements of the original array are copied into it, and the original array is deleted. This causes significant time and space overhead, so the array size should be chosen carefully to avoid wasting memory while also reducing the number of grow and shrink operations. A common technique is not to delete an element physically right away, but to set a bDelete flag at its position to true and forbid other code from using that slot; only when the number of flagged elements reaches a certain threshold are they all removed in one pass, which avoids repeated element moves and reduces the time cost. A sketch of this idea follows.
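
A minimal sketch of this lazy-deletion scheme; the Entry struct, the deletedCount counter and the threshold parameter are illustrative names, not from the original text.

// Lazy deletion: mark slots instead of moving elements, and compact
// the array only when enough slots have been marked.
struct Entry
{
    int value;
    bool bDelete;// true means the slot is logically deleted
};

void LazyDelete(Entry entries[], int& count, int pos, int& deletedCount, int threshold)
{
    entries[pos].bDelete = true;// O(1): nothing is moved yet
    ++deletedCount;
    if(deletedCount >= threshold)
    {
        // Compact the array in a single O(n) pass, dropping flagged slots
        int write = 0;
        for(int read = 0; read < count; read++)
        {
            if(!entries[read].bDelete)
            {
                entries[write++] = entries[read];
            }
        }
        count = write;
        deletedCount = 0;
    }
}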

2. Linked List Deletion

Deleting a node from a linked list is a pointer operation: the node before the one being deleted is pointed directly at the node after it, so deletion takes O(1) time.

3. Hash Table Deletion

Deleting a node from a hash table (with buckets implemented as linked lists) works as follows: first locate the node to be deleted via the hash function and a traversal of the bucket's chain, then remove the node and reconnect the pointers of its predecessor and successor. If the deleted node is the first node of the bucket, the bucket's head pointer is redirected to the successor node.

template <class E, class Key>
bool HashTable<E, Key>::Delete(const Key& k)
{
    // Find the node whose key matches
    LinkNodePtr p = SearchNode(k);
    if(NULL == p)
    {
        return false;
    }
    // Reconnect the pointers of the predecessor and successor nodes
    LinkNodePtr pPrev = p->Prev();
    LinkNodePtr pNext = p->Next();
    if(NULL == pPrev)
    {
        // The node has no predecessor, so it is the first node of the bucket:
        // point the hash array entry at the successor node.
        int index = HashFunc(k);
        hashArray[index] = pNext;
        if(pNext != NULL)
        {
            pNext->SetPrevNode(NULL);
        }
    }
    else
    {
        // Unlink the node by connecting its neighbours to each other
        pPrev->SetNextNode(pNext);
        if(pNext != NULL)
        {
            pNext->SetPrevNode(pPrev);
        }
    }
    delete p;
    return true;
}

4. Binary Tree Deletion

Deleting a node from a binary tree requires a case analysis (a sketch follows the list):
(1) If the node is a leaf, simply delete it.
(2) If the node has only one child, the child replaces the deleted node.
(3) If the node has both a left and a right child, each of which may have its own subtree, a suitable node must be chosen as the new root of that position and the two subtrees merged back into the original tree.
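
A minimal sketch of the three cases, reusing the simplified BSTNode struct from the balancing sketch above; the two-child case is handled here by copying the in-order successor's key into the node and then deleting the successor from the right subtree, which is one common way of merging the two subtrees.

// Delete the node with the given key from the tree rooted at pRoot,
// returning the (possibly new) root of that subtree.
BSTNode* DeleteNode(BSTNode* pRoot, int key)
{
    if(pRoot == NULL)
    {
        return NULL;
    }
    if(key < pRoot->key)
    {
        pRoot->left = DeleteNode(pRoot->left, key);
    }
    else if(key > pRoot->key)
    {
        pRoot->right = DeleteNode(pRoot->right, key);
    }
    else if(pRoot->left == NULL)
    {
        // Cases (1) and (2): no left child, the right child (possibly NULL) takes its place
        BSTNode* pChild = pRoot->right;
        delete pRoot;
        return pChild;
    }
    else if(pRoot->right == NULL)
    {
        // Case (2): only a left child, which replaces the deleted node
        BSTNode* pChild = pRoot->left;
        delete pRoot;
        return pChild;
    }
    else
    {
        // Case (3): two children - copy the in-order successor's key here,
        // then delete the successor from the right subtree
        BSTNode* pSucc = pRoot->right;
        while(pSucc->left != NULL)
        {
            pSucc = pSucc->left;
        }
        pRoot->key = pSucc->key;
        pRoot->right = DeleteNode(pRoot->right, pSucc->key);
    }
    return pRoot;
}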

V. Sorting Operations

1. Array Sorting

Arrays can be sorted with bubble sort, selection sort, insertion sort, and other methods. A helper for swapping two elements:

template <typename T>
void Swap(T& a, T& b)
{
  T temp;
  temp = a;
  a = b;
  b = temp;
}

A bubble sort implementation:

/**********************************************
* Sorting method: bubble sort
* array: the sequence
* len: number of elements in the sequence
* min2max: sort in ascending order if true
* *******************************************/
template <typename T>
static void Bubble(T array[], int len, bool min2max = true)
{
    bool exchange = true;
    // Visit all elements
    for(int i = 0; (i < len) && exchange; i++)
    {
        exchange = false;
        // Compare and swap adjacent elements from the tail towards position i
        for(int j = len - 1; j > i; j--)
        {
            if(min2max ? (array[j] < array[j-1]) : (array[j] > array[j-1]))
            {
                // Swap the two elements
                Swap(array[j], array[j-1]);
                exchange = true;
            }
        }
    }
}

Bubble sort has time complexity O(n^2) and is a stable sorting method.
A selection sort implementation:

/******************************************
* Sorting method: selection sort
* array: the sequence
* len: number of elements in the sequence
* min2max: sort in ascending order if true
* ***************************************/
template <typename T>
void Select(T array[], int len, bool min2max = true)
{
    for(int i = 0; i < len; i++)
    {
        int min = i;// start from the i-th element
        // Compare against the remaining unsorted elements
        for(int j = i + 1; j < len; j++)
        {
            // Choose the comparison according to the sort direction
            if(min2max ? (array[min] > array[j]) : (array[min] < array[j]))
            {
                min = j;
            }
        }
        if(min != i)
        {
            // Swap the elements
            Swap(array[i], array[min]);
        }
    }
}

Selection sort has time complexity O(n^2) and is an unstable sorting method.
An insertion sort implementation:

/******************************************
* Sorting method: insertion sort
* array: the sequence
* len: number of elements in the sequence
* min2max: sort in ascending order if true
* ***************************************/
template <typename T>
void Insert(T array[], int len, bool min2max = true)
{
    for(int i = 1; i < len; i++)
    {
        T key = array[i];// the element to insert into the sorted prefix
        int j = i - 1;
        // Shift elements of the sorted prefix that belong after key
        while(j >= 0 && (min2max ? (array[j] > key) : (array[j] < key)))
        {
            array[j + 1] = array[j];
            j--;
        }
        // Place key into its proper position
        array[j + 1] = key;
    }
}

Insertion sort has time complexity O(n^2) and is a stable sorting method.

2. Linked List Sorting

Although a linked list performs well for insertion and deletion, sorting it is expensive, especially for a singly linked list. Because reaching a node requires following pointers from other nodes rather than indexing by subscript, locating a node takes O(n) time, which makes sorting inefficient.
In engineering practice a pointer array can be used: when the list needs to be sorted, a pointer to each node of the list is stored in an array; the array then gives direct access to every node during sorting, and the nodes are exchanged or relinked through it. A sketch follows.
The pointer array provides convenient direct access to the list nodes, but it trades space for time; if a sorted linked list is required, it is usually better to insert each node into its proper position while the list is being built.
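
A minimal sketch of the pointer-array approach, reusing the plain Node struct from the insertion sketch earlier and using std::sort for brevity: the node pointers are collected into an array, the array is sorted by the nodes' data, and the list is relinked in the new order.

#include <algorithm>
#include <vector>

static bool CompareByData(const Node* a, const Node* b)
{
    return a->data < b->data;
}

// Sort a doubly linked list by sorting an array of node pointers and
// relinking the nodes in the resulting order. Costs O(n) extra space.
Node* SortListViaArray(Node* pHead)
{
    std::vector<Node*> nodes;
    for(Node* p = pHead; p != NULL; p = p->pNext)
    {
        nodes.push_back(p);// collect a pointer to every node
    }
    if(nodes.empty())
    {
        return NULL;
    }
    std::sort(nodes.begin(), nodes.end(), CompareByData);
    // Relink the nodes in sorted order
    for(size_t i = 0; i < nodes.size(); i++)
    {
        nodes[i]->pPrev = (i == 0) ? NULL : nodes[i - 1];
        nodes[i]->pNext = (i + 1 == nodes.size()) ? NULL : nodes[i + 1];
    }
    return nodes[0];// new head of the sorted list
}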

3. Hash Table Sorting

Since every bucket is reached through the hash function, sorting the hash array itself is meaningless. Locating a specific node, however, is done by searching the bucket's linked list (for buckets implemented as linked lists), so keeping each bucket's list sorted can improve the efficiency of node lookups.

4. Binary Tree Sorting

A binary search tree is ordered by construction: an in-order traversal outputs its nodes in sorted order. In an unsorted (arbitrary) binary tree, the nodes are organized essentially at random and locating a node takes O(n) time.

VI. Search Operations

1. Array Search

The greatest advantage of an array is that any element can be accessed by its index, without pointers or traversal, in O(1) time. If the index of the sought element is unknown, the array has to be scanned, which takes O(n) time. For a sorted array, the best search algorithm is binary search:

// Binary search in an ascending sorted array; returns the index of value,
// or -1 if it is not found or the range is invalid.
template <class E>
int BinSearch(E array[], const E& value, int start, int end)
{
    if(end - start < 0)
    {
        return -1;// invalid range
    }
    if(value == array[start])
    {
        return start;
    }
    if(value == array[end])
    {
        return end;
    }
    while(end > start + 1)
    {
        int temp = (end + start) / 2;
        if(value == array[temp])
        {
            return temp;
        }
        if(array[temp] < value)
        {
            start = temp;
        }
        else
        {
            end = temp;
        }
    }
    return -1;
}

Binary search has time complexity O(log n), the same query efficiency as a balanced binary search tree.
For an unsorted array, the only option is to search by traversal. In engineering practice a variable is often kept that records the index of the most recently found element; each query then starts traversing the array from that cached index, which is more efficient than always starting from the beginning. A sketch follows.
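
A minimal sketch of the cached-index idea; lastHit and SearchFromLastHit are illustrative names. The scan starts at the index of the previous hit and wraps around the array, which helps when successive queries tend to hit nearby elements.

// Linear search in an unsorted array, starting from the index of the
// previous successful lookup and wrapping around the whole array.
int SearchFromLastHit(const int array[], int len, int value, int& lastHit)
{
    if(len <= 0)
    {
        return -1;
    }
    for(int step = 0; step < len; step++)
    {
        int i = (lastHit + step) % len;
        if(array[i] == value)
        {
            lastHit = i;// remember the hit position for the next query
            return i;
        }
    }
    return -1;// not found after scanning the whole array
}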

2. Linked List Search

For a singly linked list, the worst case requires traversing the entire list to find the desired node, so the time complexity is O(n).
For an ordered list in which the positions of certain nodes are known in advance, the search can start from the known node closest to the target value, so the efficiency depends on how those known nodes are distributed. This works even better for an ordered doubly linked list: if the middle node is known, the query time is about O(n/2).
For a skip list, if the association pointers between nodes are established in advance, search efficiency is greatly improved.

3. Hash Table Search

Query efficiency in a hash table depends on the data structure of its buckets. With buckets implemented as linked lists, query efficiency is tied to the chain length: with a chain of length M, the time complexity is O(M). The search routine is implemented as follows:

template <class E, class Key>
LinkNode<E, Key>* HashTable<E, Key>::SearchNode(const Key& k)const
{
    int index = HashFunc(k);
    // Empty bucket: return immediately
    if(NULL == hashArray[index])
        return NULL;
    // Traverse the bucket's list and return the node whose key matches
    LinkNodePtr p = hashArray[index];
    while(p)
    {
        if(k == p->GetKey())
            return p;
        p = p->Next();
    }
    return NULL;
}

4. Binary Tree Search

Search efficiency in a binary tree depends on the shape of the tree. For a balanced binary tree it is O(log n); for a completely unbalanced binary tree it is O(n).
In engineering practice a binary tree is usually kept as balanced as possible to improve query performance, but insertions and deletions disturb the balance significantly, so the tree structure must be adjusted after nodes are inserted or deleted. Commonly, when insertions and deletions are frequent, the tree is not rebalanced after every single operation; instead it is rebalanced once, before a phase of intensive queries.

VII. Dynamic Arrays: Analysis and Implementation

1. Dynamic Array Overview

In engineering development the array is a common data structure. If all dimensions of an array are known at compile time, it can be defined statically; its size and location in memory are then fixed. For a global array the compiler allocates space in the static data area, and for a local array it allocates space on the stack. If the dimensions cannot be known in advance and the program only learns at run time how large the array needs to be, the C++ program can allocate the array dynamically on the heap, as in the minimal example below.
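
A minimal example of the three cases just described; the array names are illustrative.

int g_table[256];// global array: placed in the static data area

void Process(int n)
{
    int local[64];// local array: allocated on the stack, size fixed at compile time
    int* dynamic = new int[n];// size known only at run time: allocated on the heap

    // ... use g_table, local and dynamic ...

    delete [] dynamic;// heap memory must be released manually by the program
}
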
Dynamic arrays have the following advantages:
(1) Large blocks of space can be allocated. The stack has a size limit; on Linux it can be inspected with ulimit -s and typically defaults to 8 MB. Developers can raise it, but to keep the program efficient it is usually not set very large. The heap memory available for allocation is usually much larger, up to the GB level.
(2) Flexibility. Developers can determine the size and dimensions of the array according to actual needs.
Dynamic arrays have the following disadvantages:
(1) Allocation is less efficient than for static arrays. Static arrays are generally allocated on the stack, while dynamic arrays are allocated from the heap. The stack is a data structure supported by the machine itself: the stack-top address is kept in a dedicated register and dedicated push and pop instructions are provided, which makes it very efficient. The heap is provided by the C++ runtime library, and its allocation mechanism is much more complex: to allocate a block of memory, the library searches the heap for a sufficiently large free region according to some algorithm, and if none is found it asks the kernel to enlarge the program's data segment so that a large enough block can be handed out. The heap is therefore less efficient than the stack.
(2) Memory leaks are easy to introduce. Dynamic memory must be allocated and freed manually by the developer, and an oversight easily leads to a leak.

2. Implementing a Dynamic Array

In a real-time video system, the video server buffers and forwards video data. Generally, the server opens a separate cache of a certain size for each camera. Video frames are written into this buffer and read out again at some later point to be forwarded to clients. The buffering is temporary: because the buffer size is limited and video data arrives continuously, frame data is overwritten by newer frames some time after it is written. How long a frame stays buffered is determined by the frame size and the buffer size.
Because the volume of video frame data is huge and a single server typically has to support dozens or even hundreds of cameras, the design of the cache structure is an important part of the system. On the one hand, if a fixed amount of memory is pre-allocated and cannot be grown or shrunk at run time, the server can only support a fixed number of cameras, which is inflexible; on the other hand, the video server would then occupy a large chunk of memory right at startup, degrading overall system performance. A dynamic array is therefore a good fit for the video buffer.
First, the server allocates a cache block of a certain size for each camera, implemented by the class CamBlock. Each CamBlock object owns two dynamic arrays: _data, which stores the video data, and _frameIndex, which stores the index information of the video frames. Whenever the program writes a frame into the buffer or reads one out, the CamBlock object locates the frame's position in _data through the frame index table _frameIndex, and then writes or reads the data. _data is used as a circular queue, generally read in FIFO order: a new frame is written right after the most recently written frame at the tail of _data, and when the end of the array is reached, writing wraps around and overwrites data from the head.

#include <cstring>   // for memset
#include <stdexcept> // for std::invalid_argument

// Data structure of a video frame
typedef struct
{
    unsigned short idCamera;// camera ID
    unsigned long length;// data length
    unsigned short width;// image width
    unsigned short height;// image height
    unsigned char* data; // address of the image data
} Frame;

// Cache block for a single camera
class CamBlock
{
public:
    CamBlock(int id, unsigned long len, unsigned short numFrames):
        _frameIndex(NULL), _data(NULL), _length(0), _idCamera(-1),
        _numFrames(0), _lastFrameIndex(0)
    {
        // Make sure the buffer size does not exceed the thresholds
        if(len > MAX_LENGTH || numFrames > MAX_FRAMES)
        {
            throw std::invalid_argument("CamBlock: buffer size exceeds limit");
        }
        try
        {
            // Allocate the frame index table
            _frameIndex = new Frame[numFrames];
            // Allocate the requested amount of memory for the camera
            _data = new unsigned char[len];
        }
        catch(...)
        {
            delete [] _frameIndex;
            throw;
        }
        memset(_data, 0, len);
        _length = len;
        _idCamera = id;
        _numFrames = numFrames;
    }
    ~CamBlock()
    {
        delete [] _frameIndex;
        delete [] _data;
    }
    // Store a video frame into the buffer according to the index table
    bool SaveFrame(const Frame* frame);
    // Locate a frame via the index table and read it out
    bool ReadFrame(Frame* frame);
private:
    Frame* _frameIndex;// frame index table
    unsigned char* _data;// buffer holding the image data
    unsigned long _length;// buffer size
    unsigned short _idCamera;// camera ID
    unsigned short _numFrames;// number of frames that can be stored
    unsigned long _lastFrameIndex;// position of the last frame
};

To manage each camera's memory block independently and quickly locate the cache of any one camera, an index table, CameraArray, is needed to manage all CamBlock objects.

class CameraArray
{
    typedef CamBlock* BlockPtr;
    BlockPtr* cameraBufs;// per-camera video caches
    unsigned short cameraNum;// number of cameras currently connected
    unsigned short maxNum;// capacity of cameraBufs
    unsigned short increaseNum;// growth increment of cameraBufs
public:
    CameraArray(unsigned short max, unsigned short inc);
    ~CameraArray();
    // Insert a camera
    CamBlock* InsertBlock(unsigned short idCam, unsigned long size, unsigned short numFrames);
    // Remove a camera
    bool RemoveBlock(unsigned short idCam);
private:
    // Return a camera's index in the array from its camera ID
    unsigned short GetPosition(unsigned short idCam);
};

CameraArray::CameraArray(unsigned short max, unsigned short inc):
    cameraBufs(NULL), cameraNum(0), maxNum(0), increaseNum(0)
{
    // If a parameter is out of range, throw an exception
    if(max > MAX_CAMERAS || inc > MAX_INCREMENTS)
        throw std::invalid_argument("CameraArray: parameter out of range");
    try
    {
        cameraBufs = new BlockPtr[max];
    }
    catch(...)
    {
        throw;
    }
    maxNum = max;
    increaseNum = inc;
}

CameraArray::~CameraArray()
{
    for(int i = 0; i < cameraNum; i++)
    {
        delete cameraBufs[i];
    }
    delete [] cameraBufs;
}

Typically each camera is assigned an integer ID. Inside CameraArray, the program keeps the CamBlock objects ordered by ascending camera ID for lookup. When a new camera joins the system, the program finds the appropriate position in CameraArray from its ID and creates a new CamBlock object at that position; when a camera disconnects, the program uses its ID to find the corresponding cache block and deletes it.

CamBlock* CameraArray::InsertBlock(unsigned short idCam, unsigned long size,
                                   unsigned short numFrames)
{
    // Find the proper insertion position in the array
    int pos = GetPosition(idCam);
    // If the array is already full, it has to be enlarged
    if(cameraNum == maxNum)
    {
        // Define a new array pointer with the enlarged dimension
        BlockPtr* newBufs = NULL;
        try
        {
            newBufs = new BlockPtr[maxNum + increaseNum];
        }
        catch(...)
        {
            throw;
        }
        // Copy the contents of the original array into the new one
        memcpy(newBufs, cameraBufs, maxNum * sizeof(BlockPtr));
        // Release the memory of the original array
        delete [] cameraBufs;
        maxNum += increaseNum;
        // Update the array pointer
        cameraBufs = newBufs;
    }
    if(pos != cameraNum)
    {
        // Inserting a block requires moving all following pointers back by one slot
        memmove(cameraBufs + pos + 1, cameraBufs + pos, (cameraNum - pos) * sizeof(BlockPtr));
    }
    CamBlock* newBlock = new CamBlock(idCam, size, numFrames);
    cameraBufs[pos] = newBlock;
    ++cameraNum;
    return cameraBufs[pos];
}

If the number of cameras connected to the system exceeds the capacity CameraArray was originally created with, then for the sake of scalability the length of cameraBufs should be increased, as long as the hardware allows it.

bool CameraArray::RemoveBlock(unsigned short idCam)
{
    if(cameraNum < 1)
        return false;
    // Find the position of the cache block of the camera to be removed
    int pos = GetPosition(idCam);
    cameraNum--;
    BlockPtr deleteBlock = cameraBufs[pos];
    delete deleteBlock;
    if(pos != cameraNum)
    {
        // Move all pointers after pos forward by one slot
        memmove(cameraBufs + pos, cameraBufs + pos + 1, (cameraNum - pos) * sizeof(BlockPtr));
    }
    // If the array has too many idle slots, shrink it
    if(maxNum - cameraNum > increaseNum)
    {
        // Recompute the array length
        unsigned short len = (cameraNum / increaseNum + 1) * increaseNum;
        // Define a new array pointer
        BlockPtr* newBufs = NULL;
        try
        {
            newBufs = new BlockPtr[len];
        }
        catch(...)
        {
            throw;
        }
        // Copy the data of the original array into the new one
        memcpy(newBufs, cameraBufs, cameraNum * sizeof(BlockPtr));
        delete [] cameraBufs;
        cameraBufs = newBufs;
        maxNum = len;
    }
    return true;
}

If, after a camera's cache block is deleted, the array is found to contain too many idle slots, the corresponding free space is released.

Source: blog.51cto.com/9291927/2406342