# A summary of the complexity of common data structure operations and algorithms

[TOC]

Blog: blog.shinelee.me | 博客园 (cnblogs) | CSDN

## Time complexity

How do we evaluate the running time of an algorithm?

The actual running time of an algorithm is hard to evaluate: it is affected by the input, the CPU clock speed, the memory, the data-transfer speed, whether other programs are competing for resources, and so on. To compare the efficiency of different algorithms fairly, we need an abstract mathematical description that is detached from these physical conditions. Among all of these factors, the problem size is usually the most important one in determining the running time. We therefore define the time complexity $T(n)$ of an algorithm, which describes how the execution time grows as the input size grows, i.e., its growth rate.

When the input size is small, the running time is short and different algorithms differ little. Time complexity therefore focuses on the trend of the running time as the input size $n$ becomes large, known as the asymptotic complexity, and uses big-$O$ notation to express an asymptotic upper bound: if there exist a constant $c$ and a function $f(n)$ such that for all $n \gg 2$

\[ T(n) \leq c \cdot f(n) \]

then we write

\[ T(n) = O(f(n)) \]

Informally, $O(f(n))$ can be read as "the running time is proportional to $f(n)$"; for example, $O(n^2)$ means the running time is proportional to the square of the input size. This is not strictly accurate, but it is harmless in most situations.
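As a rough illustration (a sketch of my own, not from the original post), the two Python functions below perform on the order of $n$ and $n^2$ basic operations respectively, which is why their running times are described as $O(n)$ and $O(n^2)$:

```python
def sum_linear(a):
    """O(n): one pass over the input, about n additions."""
    total = 0
    for x in a:          # executes n times
        total += x
    return total

def sum_pairs(a):
    """O(n^2): the nested loops execute about n * n times."""
    total = 0
    for x in a:          # n iterations
        for y in a:      # n iterations for each x
            total += x * y
    return total
```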

**When $n$ is large, the constant $c$ matters little; what we care about is how $f(n)$ differs between algorithms.** For example, when $n \gg 100$, $n^2$ is much larger than $100n$, so it is enough to focus on the difference in growth rate between $n^2$ and $n$.
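As a short worked example (my own, for illustration): suppose an algorithm performs exactly $T(n) = n^2 + 100n$ basic operations. For all $n \geq 100$ we have

\[ T(n) = n^2 + 100n \leq n^2 + n \cdot n = 2 n^2 , \]

so with $c = 2$ and $f(n) = n^2$ the definition above gives $T(n) = O(n^2)$: the $100n$ term and the constant factor do not change the asymptotic growth rate.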

The growth of different time complexities is contrasted below; the image is from the Big-O Cheat Sheet Poster.

Big-O Complexity

Besides the big-$O$ notation there are also the big-$\Omega$ and big-$\Theta$ notations, which denote an asymptotic lower bound and an asymptotic tight bound, respectively:

\[ \Omega(f(n)) : \ c \cdot f(n) \leq T(n) \]

\[ \Theta(f(n)) : \ c_1 \cdot f(n) \leq T(n) \leq c_2 \cdot f(n) \]

Their relationship is illustrated below; the image is from Deng Junhui (邓俊辉), *Data Structures in C++*, 3rd edition.

Relationship between the different asymptotic notations

## Complexity of common data structure operations and sorting algorithms

The tables below, excerpted from the cited sources (see the References), summarize the complexity of operations on common data structures and of sorting algorithms. They include the worst-case time complexity, average time complexity, and space complexity, and for the sorting algorithms the best-case time complexity as well.

Common Data Structure Operations

Graph and Heap Operations

Array Sorting Algorithms
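To make one row of the sorting table concrete (again a sketch of my own, following the standard textbook analysis): insertion sort runs in $O(n^2)$ time in the worst and average cases, $O(n)$ in the best case (an already-sorted input), and uses $O(1)$ extra space.

```python
def insertion_sort(a):
    """In-place insertion sort.
    Worst/average time O(n^2), best case O(n), extra space O(1)."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # shift larger elements one slot to the right
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```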

The links are attached here:

Array Stack Queue Singly-Linked List Doubly-Linked List Skip List Hash Table Binary Search Tree Cartesian Tree B-Tree Red-Black Tree Splay Tree AVL Tree KD Tree

Quicksort Mergesort Timsort Heapsort Bubble Sort Insertion Sort Selection Sort Tree Sort Shell Sort Bucket Sort Radix Sort Counting Sort Cubesort

as well as Data Structures on GeeksforGeeks.
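Similarly, for the data-structure table, here is a minimal sketch of my own (with illustrative names) contrasting membership tests: scanning an unsorted list is $O(n)$ on average, while a hash table (Python's `set` here) averages $O(1)$ per lookup.

```python
def contains_list(items, target):
    """Linear search in an unsorted list: O(n) average and worst case."""
    for x in items:
        if x == target:
            return True
    return False

def contains_hash(item_set, target):
    """Hash-table membership test: O(1) on average, O(n) in the worst case."""
    return target in item_set

data = list(range(100000))
lookup = set(data)                   # build the hash table once: O(n)
print(contains_list(data, 99999))    # scans the whole list
print(contains_hash(lookup, 99999))  # a single hash probe on average
```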

## When the input size is small

Asymptotic complexity analyzes the case where the input size is large. What about when the input size is small?

When the input size is small, the role of the constant $c$ can no longer be ignored so easily, as shown below (image from Growth Rates Review): a complexity that grows faster may, for small inputs, still be smaller than one that grows more slowly.

complexity growth rates

So when choosing an algorithm, we cannot blindly reach for the seemingly faster advanced data structures and algorithms; each problem has to be analyzed on its own, because the implementations of advanced data structures and algorithms often carry extra computational overhead, and if the gain they bring cannot offset that hidden cost, they may do more harm than good.
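A rough, machine-dependent experiment along these lines (my own sketch; the exact numbers will vary and are not from the original post): on a very small sorted array, a plain linear scan and a binary search are close in cost, and for small enough inputs the simple scan may even win, because binary search does more work per step (a larger constant $c$).

```python
import timeit

def linear_search(a, target):
    for i, x in enumerate(a):        # O(n), but very little work per step
        if x == target:
            return i
    return -1

def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:                  # O(log n), but more work per step
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

a = list(range(8))                   # a tiny, sorted input
for fn in (linear_search, binary_search):
    t = timeit.timeit(lambda: fn(a, 5), number=100_000)
    print(f"{fn.__name__}: {t:.4f}s")
```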

This also points to two directions for optimizing code:

- optimize $f(n)$, for example by using more advanced algorithms and data structures;
- optimize the constant $c$, for example by moving unnecessary index calculations and repeated computations out of loops (a small sketch follows below).
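A minimal sketch of the second point (a hypothetical example of my own): both functions below are $O(n)$, but the second reduces the constant $c$ by hoisting a value that does not change between iterations out of the loop.

```python
import math

def scale_slow(values, factor):
    out = []
    for v in values:
        # math.sqrt(factor) is recomputed on every iteration
        out.append(v * math.sqrt(factor))
    return out

def scale_fast(values, factor):
    s = math.sqrt(factor)       # computed once, outside the loop
    out = []
    for v in values:
        out.append(v * s)
    return out
```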

That's all.

## References


Origin: www.cnblogs.com/shine-lee/p/11913229.html