C++ Advanced: Hash

Table of contents

1. Unordered series of associative containers

2. unordered_map

 1. Introduction to the documentation of unordered_map

 2. Interface description of unordered_map

   2.1 The capacity of unordered_map

   2.2 Iterator of unordered_map 

   2.3 element access of unordered_map 

   2.4 Unordered_map query

   2.5 Unordered_map modification operation 

   2.6 Bucket operation of unordered_map

3. The underlying structure

 1. Hash concept

 2. Hash Collision

 3. Hash function

 4. Common hash functions

 5. Closed hashing

  5.1 Linear probing

  5.2 Structure

  5.3 Insert

  5.4 Search

  5.5 Delete

  5.6 Functors

  5.7 Quadratic probing

 6. Open hash (hash bucket)

  6.1 Structure

  6.2 Insertion

  6.3 Delete

  6.4 Search

 7. Comparison between open hash and closed hash

4. Transforming the hash table

 1. Iterators

 2. Encapsulate unordered_map

 3. Transform the function of the hash bucket


 

1. Unordered series of associative containers

In C++98, STL provided a series of associative containers whose underlying layer is a red-black tree structure. Query efficiency is O(log_2 N): in the worst case, a number of comparisons equal to the height of the red-black tree is needed, which is not ideal when the tree holds many nodes. The best search finds an element with as few comparisons as possible. Therefore, in C++11, STL added four unordered associative containers. These four containers are used in basically the same way as the red-black-tree-based associative containers, but their underlying structures are different.


2. unordered_map

 1. Introduction to the documentation of unordered_map

  1. unordered_map is an associative container that stores key-value pairs and allows fast retrieval of the value corresponding to a key.
  2. In unordered_map, the key is usually used to uniquely identify the element, while the mapped value is an object whose content is associated with that key. The key and the mapped value may be of different types.
  3. Internally, unordered_map does not sort its elements in any particular order. To find the value corresponding to a key in constant time on average, unordered_map places key-value pairs with the same hash value into the same bucket.
  4. An unordered_map container is faster than a map for accessing individual elements by key, although it is generally less efficient for range iteration over a subset of elements.
  5. unordered_map implements the direct access operator (operator[]), which allows the value to be accessed directly using the key as an argument.
  6. Its iterators are forward iterators only; there are no reverse iterators.

 2. Interface description of unordered_map

   2.1 The capacity of unordered_map

   2.2 Iterator of unordered_map 

   2.3 element access of unordered_map 

 Note: operator[] actually calls the hash bucket's insert operation, constructing a default value from the parameters key and V() and inserting it into the underlying hash bucket. If the key is not yet in the bucket, the insertion succeeds and the freshly constructed V() is returned; if the insertion fails, the key is already in the bucket, and the value mapped to that key is returned.

   2.4 Unordered_map query

Note: a key in unordered_map cannot repeat, so the return value of the count function is at most 1.

   2.5 Unordered_map modification operation 

    2.6 Bucket operation of unordered_map

 The interfaces of set and unordered_set are similar to the above; detailed documentation can be found at: unordered_set - C++ Reference (cplusplus.com)

set - C++ Reference (cplusplus.com)

unordered_map - C++ Reference (cplusplus.com)

map - C++ Reference (cplusplus.com)


3. The underlying structure

The reason why the unordered series of associative containers are more efficient is that the underlying layer uses a hash structure.

 1. Hash concept

        In sequential structures and balanced trees, there is no mapping between an element's key and its storage location, so searching for an element requires multiple key comparisons. Sequential search has time complexity O(N); in a balanced tree, the cost is the height of the tree, O(log_2 N). In both cases, search efficiency is determined by the number of comparisons performed during the search.

Ideal search method: the element to be searched can be directly obtained from the table at one time without any comparison.

        If a storage structure is constructed in which a mapping can be established between an element's storage location and its key through some function (hashFunc), then the element can be found quickly through this function when searching.

When adding to this structure:

  • Insert an element: compute the storage location from the key of the element to be inserted using this function, and store the element at that location
  • Search for an element: apply the same function to the element's key, take the resulting value as the element's storage location, and compare the element at that position in the structure; if the keys are equal, the search succeeds

This approach is called the hash method; the conversion function used in it is called the hash function, and the structure constructed this way is called a hash table.

The hash function here is: hash(key) = key % capacity, where capacity is the total size of the underlying storage array.
Taking key % capacity yields the subscript the key maps to in the array; the key is then stored at that subscript.

 2. Hash Collision

        When different keys produce the same hash address through the same hash function, the phenomenon is called a hash collision (or hash conflict). Data elements with different keys but the same hash address are called "synonyms".
One cause of hash collisions is a hash function whose design is not refined enough. So how are hash collisions resolved?

Two common methods for resolving hash collisions are: closed hashing and open hashing

 3. Hash function

Hash function design principles:

  • The domain of the hash function must include all keys that need to be stored; if the hash table allows m addresses, the function's range must lie between 0 and m-1
  • The addresses calculated by the hash function can be evenly distributed in the entire space
  • The hash function should be relatively simple

 4. Common hash functions

1. Direct addressing method -- (commonly used)

Take a linear function of the key as the hash address: Hash(Key) = A*Key + B

Advantages: simple and uniform
Disadvantages: the distribution of the keys must be known in advance
Use scenario: suitable when the keys are relatively small and continuous

2. Division method -- (commonly used)

Let the hash table allow m addresses. Take a prime number p that is closest to, but not greater than, m as the divisor, and convert the key into a hash address according to the hash function: Hash(key) = key % p (p <= m).

3. Mid-square method

Square the key and extract the middle digits as the hash address. Suppose the key is 1234; its square is 1522756, and the middle 3 digits, 227, are extracted as the hash address. For the key 4321, its square is 18671041, and the middle 3 digits, 671 (or 710), are extracted as the hash address. The mid-square method is suitable when the distribution of the keys is not known and the number of digits is not very large.

4. Folding method

Split the key, from left to right, into several parts of equal length (the last part can be shorter), sum these parts, and, according to the length of the hash table, take the last few digits of the sum as the hash address. The folding method is suitable when the key has many digits and its distribution need not be known in advance.

5. Random number method

Choose a random function and take the random-function value of the key as its hash address, that is, H(key) = random(key), where random is a random-number function. This method is usually used when the key lengths differ.

6. Digital analysis method

Suppose there are n keys, each a d-digit number in which every digit position can hold one of r different symbols. The frequency with which these r symbols appear in each digit position is not necessarily the same: in some positions the symbols are evenly distributed, each appearing with equal probability, while in other positions the distribution is uneven and certain symbols appear more frequently. According to the size of the hash table, several digit positions in which the symbols are evenly distributed can be selected to form the hash address. The digital analysis method is usually suitable when the number of keys is relatively large, the distribution of the keys is known in advance, and several digit positions of the keys are fairly evenly distributed.

Note: the more refined the design of the hash function, the lower the likelihood of hash collisions, but collisions cannot be avoided entirely.


 5. Closed hashing

        Closed hashing, also known as open addressing: when a hash collision occurs, as long as the hash table is not full, there must still be an empty slot somewhere in it, so the key can be stored in the next empty slot after the position where the collision occurred. How, then, do we find that next empty position?

  5.1 Linear probing

        Linear probing: starting from the position where the collision occurred, probe backward slot by slot until the next empty position is found.

  5.2 Structure

    enum State // the three states a slot can be in
	{
		EMPTY,  // empty: has never held data
		EXIST,  // holds live data
		DELETE, // held data that has since been erased
	};
    template<class K, class V>
	struct HashData
	{
		pair<K, V> _data;
		State state = EMPTY;
	};
    template<class K, class V, class Hash = HashFunc<K>>
	class HashTable
	{
		typedef HashData<K, V> Data;
	public:
		HashTable()
			:_n(0)
		{	// reserve space up front so the load-factor check on the
			// first insert never divides by zero
			_tables.resize(10);
		}
    private:
		vector<Data> _tables;
		int _n; // number of live elements in the table
	};

  5.3 Insert

        Still taking the K, V structure as an example. Setting aside expansion and storage for the moment, first find the subscript the key maps to. The key may or may not be an integer type. If it is an integer, the modulus can be taken directly. What if it is not? This is where the functor comes in handy: use it to convert other types into an integer, then take the modulus to find the subscript. Why % size here rather than % capacity? Because operator[] will be used later, and vector's operator[] checks that the subscript is less than size; if we took % capacity, a subscript beyond size would trigger an error when using operator[].

        For expansion, the load factor is used as the trigger. A smaller load factor means fewer collisions, but if the load factor grows too large, inserts keep probing backward and efficiency drops. Reinserting everything after expansion changes the original mapping relationship and disperses the data.

		bool insert(const pair<K, V>& data)
		{	// do not insert if the key already exists
			if (Find(data.first))
			{
				return false;
			}
			// expand when the load factor exceeds 0.7; floating-point values
			// are awkward to compare, so multiply by 10 and compare as integers
			if (_n * 10 / _tables.size() > 7)
			{
				HashTable newHT;
				newHT._tables.resize(_tables.size() * 2);
				for (auto& e : _tables) // open a table twice the size and reinsert the old data
				{
					if (e.state == EXIST)
					{
						newHT.insert(e._data);
					}
				}
				_tables.swap(newHT._tables);
			}
			Hash hf; // functor
			// compute the mapped subscript; other key types are first converted
			// to an integer by the functor, then the modulus is taken
			size_t hashi = hf(data.first) % _tables.size();
			while (_tables[hashi].state == EXIST)
			{
				++hashi;
				hashi %= _tables.size();
			}
			_tables[hashi]._data = data;
			_tables[hashi].state = EXIST;
			++_n;
			return true;
		}

  5.4  Search

        Searching is very similar to inserting: compute the subscript, then probe for the data. Because of deletion, an extra check on the state is added: only slots whose state is EXIST can be found. Deletion here merely changes the state, so without this condition, data could still be found after being deleted. The return type is Data* so the found element can be modified directly, which is very convenient.

		Data* Find(const K& key)
		{
			Hash hf;
			size_t hashi = hf(key) % _tables.size(); // compute the subscript
			size_t start = hashi;
			while (_tables[hashi].state != EMPTY)
			{	// a slot can only be found if its state is EXIST
				if (_tables[hashi].state == EXIST && _tables[hashi]._data.first == key)
				{
					return &_tables[hashi];
				}
				++hashi;
				hashi %= _tables.size();
				// in the extreme case no slot is EMPTY (all are EXIST or DELETE);
				// without this check the loop would never terminate
				if (start == hashi)
					break;
			}
			return nullptr;
		}

  5.5 Delete

        Deletion is very simple: obtain the element's address via Find and change its state to DELETE.

		bool erase(const K& key)
		{
			Data* ret = Find(key);
			if (ret)
			{
				ret->state = DELETE;
				_n--;
				return true;
			}
			else
			{
				return false;
			}
		}

  5.6 Functors

	template<class K>
	struct HashFunc
	{	// convert the key to an integer
		size_t operator()(const K& key)
		{
			return (size_t)key;
		}
	};
	template<> // template specialization for strings
	struct HashFunc<string>
	{	// convert a string to an integer using the BKDR algorithm
		size_t operator()(const string& key)
		{
			size_t num = 0;
			for (auto& ch : key)
			{
				num *= 131; // weighting by 131 reduces the probability of collisions
				num += ch;
			}
			return num;
		}
	};

Advantage of linear probing: the implementation is very simple.
Disadvantage of linear probing: once a hash collision occurs, the conflicting elements pile up in consecutive slots, producing data "clustering": keys with different hash addresses occupy each other's available slots, so locating a given key can require many comparisons, and search efficiency drops.

  Complete code: HashTable/HashTable/HashTable(closed).h, in the author's Gitee repository: https://gitee.com/auspicious-jing/job-library/blob/master/HashTable/HashTable/HashTable(%E9%97%AD).h

  5.7 Quadratic probing

        The defect of linear probing is that conflicting data piles up together, which stems from the way the next empty position is found (probing backward one slot at a time).

        To avoid this problem, quadratic probing finds the next empty position with: H_i = (H_0 + i^2) % m, or: H_i = (H_0 - i^2) % m, where i = 0, 1, 2, 3, ..., H_0 is the position obtained by applying the hash function Hash(x) to the element's key, and m is the size of the table.

Research shows that when the length of the table is a prime number and the load factor a does not exceed 0.5, a new entry can always be inserted and no position is probed twice. Therefore, as long as half the positions in the table are empty, the table can never become full. Fullness can be ignored when searching, but the load factor a must be kept at or below 0.5 when inserting; once it exceeds 0.5, the table must be expanded.


The biggest defect of closed hashing is its relatively low space utilization, which is also the main drawback of this form of hashing.


 6. Open hash (hash bucket)

        The open hash method is also called the chained address method (open chaining). First the hash function is applied to the key set to compute hash addresses; the keys that share the same address form a subset called a bucket. The elements of each bucket are linked together in a singly linked list, and the head node of each list is stored in the hash table.

   6.1 Structure

        The open hash structure uses an array to store the mapped positions, treats each mapped position as a bucket, and hangs elements with the same mapped position on a singly linked list. The functor for open hashing is the same as for closed hashing, so it is not repeated here.

// singly-linked list node
template<class T>
struct HashBucketNode
{
	HashBucketNode(const T& data)
		:_data(data)
		,_next(nullptr)
	{}
	T _data;
	HashBucketNode<T>* _next;
};

template<class K,class T,class Hash = HashFunc<K>>
class HashBucket
{
	typedef HashBucketNode<T> Data;
public:
	
	HashBucket()
		:_n(0)
	{
		_tables.resize(10);
	}
private:
	vector<Data*> _tables;    
	size_t _n; // number of elements stored in the hash table
};

   6.2 Insertion

        Setting aside duplicate elements and expansion, insertion is very simple. First convert the element to an integer with the functor and obtain its mapped position, create a new singly-linked-list node holding the element, and insert it at the head of that position's list (tail insertion is less efficient).

        Duplicate elements can be handled once search is implemented. The essence is to check whether the element is already in the table: if it exists, return its node; if not, proceed with the insertion.

        Expansion here differs from closed hashing. In closed hashing the load factor cannot be too large or efficiency suffers. In open hashing the best case is exactly one element under each mapped position, so the table is expanded when the load factor reaches 1 (too small wastes space, too large degrades efficiency; the standard likewise specifies a default maximum load factor of 1).

        Create a new array twice the size of the old one, then take the nodes of the old array's lists one by one, recompute each node's mapped position, insert it into the new array, and set the old slot to null. Once everything has been moved, the old array holds no lists, it is just a bare array; swapping the new and old arrays completes the expansion.

ps: The old array's nodes are reused here for two reasons. First, destroying the old array would not destroy the lists hanging off it, which would cause a memory leak. Second, the old array is going away anyway, so reusing its nodes directly avoids reallocating them and saves the cost of re-creating each node. Note, however, that a bucket's list cannot be moved into the new array as-is, because the mapping relationship changes after expansion. So pay attention!

Data* Insert(const T& data)
	{
		Data* node = Find(data);
		if (node)
			return node;
		// expand when the load factor reaches 1 (one element per bucket on average)
		if (_tables.size() == _n)
		{
			vector<Data*> newHT;
			newHT.resize(_tables.size() * 2);
			for (size_t i = 0; i < _tables.size(); ++i)
			{
				Data* cur = _tables[i];
				
				while (cur)
				{
					Data* next = cur->_next;
					size_t hashi = Hash()(cur->_data) % newHT.size();
					cur->_next = newHT[hashi];
					newHT[hashi] = cur;
					cur = next;
				}
				_tables[i] = nullptr;
			}
			_tables.swap(newHT);
		}
		// head-insert the new node at its mapped position
		size_t hashi = Hash()(data) % _tables.size();
		Data* newNode = new Data(data);
		newNode->_next = _tables[hashi];
		_tables[hashi] = newNode;
		++_n;
		return newNode;
	}

  6.3 Delete

        Deleting a value means deleting its node from the singly linked list at the mapped position. A node in a singly linked list cannot be unlinked on its own, so a prev pointer tracking the previous node must be added. Traverse the list looking for the node to delete; when it is found, point the previous node's next at the current node's next (or, if it is the head node, make its next the new head), delete the current node, and return true. Return false if the value is not found.

	bool Erase(const K& key)
	{
		size_t hashi = Hash()(key) % _tables.size();
		Data* cur = _tables[hashi];
		Data* prev = nullptr;
		while (cur)
		{
			if (cur->_data == key)
			{
				// unlink: head node vs interior node
				if (cur == _tables[hashi])
				{
					_tables[hashi] = cur->_next;
				}
				else
				{
					prev->_next = cur->_next;
				}
				delete cur;
				--_n; // keep the element count in sync with insertion
				return true;
			}
			else
			{
				prev = cur;
				cur = cur->_next;
			}

		}
		return false;
	}

  6.4 Search

        Search is very simple: first find the mapped position, then walk that position's list looking for a node whose value equals the one being searched for. When found, return the current node; when not found, return null, which proves the value is not in the table.

Data* Find(const K& key)
	{
		size_t hashi = Hash()(key) % _tables.size();
		Data* cur = _tables[hashi];
		while (cur)
		{
			if (cur->_data == key)
			{
				return cur;
			}
			cur = cur->_next;
		}
		return nullptr;
	}

 7. Comparison between open hash and closed hash

        Using the chained address method to handle overflow requires adding link pointers, which seems to increase storage overhead. In fact, because the open addressing method must keep a large amount of free space to guarantee search efficiency (quadratic probing, for example, requires a load factor a <= 0.7), and a table entry is much larger than a pointer, the chained address method actually saves storage space compared with open addressing.


4. Transforming the hash table

 1. Iterators

        For convenience, the iterator stores a pointer to the hash table itself, which will be used in operator++. The constructor therefore initializes both members.

        Because the structure of the hash table is special, its buckets are singly linked lists, the iterator is forward-only: there is no operator--. Implementing operator++ is not difficult. Traversing with iterators essentially walks every singly linked list in the table. If the current node's next is not null, move to the next node; if it is null, find the next non-empty bucket. We only have the node, so how do we find the mapped position? Simple: the current node's value gives the current mapped position, so finding the next non-empty bucket is easy. Note that if all the buckets have been traversed, the end has been reached; as with the red-black tree, end is null, which marks the end of iteration.

// forward declaration
template<class K, class T, class Hash, class KeyOfT>
class HashBucket;

template<class K, class T, class Hash, class KeyOfT>
struct _IteratorHash
{
	typedef HashBucketNode<T> Data;
	typedef _IteratorHash<K, T, Hash, KeyOfT> Self;
	// for simplicity, the iterator class of the hash bucket uses the HashBucket itself
	typedef HashBucket<K, T, Hash, KeyOfT> HT;

	_IteratorHash(Data* node,HT* ht)
		:_node(node)
		,_ht(ht)
	{}
	T& operator*()
	{
		return _node->_data;
	}
	T* operator->()
	{
		return &_node->_data;
	}
	Self& operator++()
	{	// if the bucket still has nodes, move to the next node
		if (_node->_next)
		{
			_node = _node->_next;
		}
		else
		{
			Hash hs;
			KeyOfT kot;
			size_t hashi = hs(kot(_node->_data)) % _ht->_tables.size(); // mapped position of the current bucket
			while (++hashi < _ht->_tables.size())	// find the next non-empty bucket
			{
				if (_ht->_tables[hashi])
				{
					_node = _ht->_tables[hashi];
					break;
				}

			}
			if (hashi == _ht->_tables.size())	// the array is exhausted; every bucket has been visited
				_node = nullptr;
		}
		return *this;
	}
    // postfix ++ (commented out: same logic, returns the old value)
	//Self operator++(int)
	//{
	//	Self ret = *this;
	//	if (_node->_next)
	//	{
	//		_node = _node->_next;
	//	}
	//	else
	//	{
	//		Hash hs;
	//		KeyOfT kot;
	//		size_t hashi = hs(kot(_node->_data)) % _ht->_tables.size();
	//		while (++hashi < _ht->_tables.size())
	//		{
	//			if (_ht->_tables[hashi])
	//			{
	//				_node = _ht->_tables[hashi];
	//				break;
	//			}

	//		}
	//		if (hashi == _ht->_tables.size())
	//			_node = nullptr;
	//	}
	//	return ret;
	//}

	bool operator!=(const Self& s)const
	{
		return _node != s._node;
	}

	bool operator==(const Self& s)const
	{
		return _node == s._node;
	}
	HT* _ht;
	Data* _node;
};

 begin: find the first non-empty bucket; the head node of that bucket is the first node.

 end: end is null; when the iterator reaches null, the traversal of all nodes is complete.

	template<class K, class T, class Hash, class KeyOfT>
	friend struct _IteratorHash;	// friend declaration: the iterator reads _tables directly

public:
	typedef _IteratorHash<K, T, Hash, KeyOfT> iterator;
	

	iterator begin()
	{
		for (size_t i = 0; i < _tables.size(); ++i)
		{    // find the first non-empty bucket
			if (_tables[i])
				return iterator(_tables[i], this);
		}
		return iterator(nullptr, this);
	}
	iterator end()
	{
		return iterator(nullptr, this);
	}

 2. Encapsulate unordered_map

        Several interfaces are wrapped here so the container is easy to use; each of them forwards to the hash bucket.

template<class K,class V ,class Hash = HashFunc<K>>
	class unordered_map
	{
		struct mapKeyOfT    
		{    // extract the key from the key-value pair
			const K& operator()(const pair<K,V>& kv)
			{
				return kv.first;
			}
		};
	public:
		typedef HashBucket<K, pair<const K,V>, Hash, mapKeyOfT> HaSh;
		typedef typename HashBucket<K, pair<const K, V>, Hash, mapKeyOfT>::iterator iterator;
        iterator begin()
		{
			return _ht.begin();
		}
		iterator end()
		{
			return _ht.end();
		}

		pair<iterator, bool> Insert(const pair<K,V>& kv)
		{
			return _ht.Insert(kv);
		}
		iterator Erase(iterator pos)
		{
			return _ht.Erase(pos->first);
		}
        //bool Erase(const K& key)
        //{
        //    return _ht.Erase(key);
        //}
		V& operator[](const K& key)
		{
			pair<iterator, bool> ret = _ht.Insert(make_pair(key,V()));
			return ret.first->second;
		}

		iterator Find(const K& key)
		{
			return _ht.Find(key);
		}
	private:
		HaSh _ht;
	};

 The interface of unordered_set is basically the same as that of unordered_map. The const iterator is not written here; if you are interested, look at the STL source code or at other write-ups online.

 3. Transform the function of the hash bucket

Because of the encapsulation, the hash bucket's functions need some modification. Find's return type is changed to an iterator (Erase can be changed or not, as you prefer). Insert is no different from the red-black tree version apart from one extra template parameter: the search returns an iterator, from which the return value can be constructed. The original bool-returning Erase is replaced by one that deletes the node and returns an iterator; here the return value can only be the node before the deleted one.

// insert
pair<iterator,bool> Insert(const T& data)
	{
		KeyOfT kot;
		Hash hs;
		 // check whether the value is already in the table
		iterator node = Find(kot(data));
		if (node != end())
			return make_pair(node,false);
		// expansion
		if (_tables.size() == _n)
		{
			vector<Data*> newHT;
			newHT.resize(_tables.size() * 2);
			for (size_t i = 0; i < _tables.size(); ++i)
			{
				Data* cur = _tables[i];
				
				while (cur)
				{
					Data* next = cur->_next;
					size_t hashi = hs(kot(cur->_data)) % newHT.size();
					cur->_next = newHT[hashi];
					newHT[hashi] = cur;
					cur = next;
				}
				_tables[i] = nullptr;
			}
			_tables.swap(newHT);
		}
		// insert
		size_t hashi = hs(kot(data)) % _tables.size();
		Data* newNode = new Data(data);
		newNode->_next = _tables[hashi];
		_tables[hashi] = newNode;
		++_n;
		return make_pair(iterator(newNode, this), true);
	}
// erase
iterator Erase(const K& key)
	{
		size_t hashi = Hash()(key) % _tables.size();
		Data* cur = _tables[hashi];
		Data* prev = nullptr;
		while (cur)
		{
			if (KeyOfT()(cur->_data) == key)
			{
				// unlink: head node vs interior node
				if (cur == _tables[hashi])
				{
					_tables[hashi] = cur->_next;
				}
				else
				{
					prev->_next = cur->_next;
				}
				delete cur;
				--_n; // keep the element count in sync with insertion
				return iterator(prev,this);
			}
			else
			{
				prev = cur;
				cur = cur->_next;
			}
		}
		return iterator(nullptr,this);
	}
// find
	iterator Find(const K& key)
	{
		size_t hashi = Hash()(key) % _tables.size();
		Data* cur = _tables[hashi];
		while (cur)
		{
			if (KeyOfT()(cur->_data) == key)
			{
				return iterator(cur, this);
			}
			cur = cur->_next;
		}
		return iterator(nullptr,this);
	}

Full code:


Origin blog.csdn.net/weixin_68993573/article/details/129254654