Huffman Coding: Max-Heap and Min-Heap Analysis

Huffman coding
encodes data according to the frequency with which each symbol occurs, thereby compressing the original data.

For example, consider a text file in which the characters appear with the following frequencies:

a: 10
b: 20
c: 40
d: 80

Each character can be converted into a binary code: for example, a becomes 00, b becomes 01, c becomes 10, and d becomes 11. This is the simplest encoding scheme, but it ignores the weight (frequency of occurrence) of each character. Huffman coding instead uses a greedy strategy: the most frequent character gets the shortest code, which guarantees the shortest overall encoded length.
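To see the saving concretely, here is a minimal sketch (the class name HuffmanCost and the bit-counting approach are mine, not from the original post). It compares the 2-bit fixed-length encoding against the Huffman total for the frequencies above, using the fact that the total weighted code length equals the sum of all merge costs:

```java
import java.util.PriorityQueue;

public class HuffmanCost {
    public static void main(String[] args) {
        int[] freqs = {10, 20, 40, 80}; // a, b, c, d

        // Fixed-length coding: 2 bits per character, regardless of frequency.
        int fixedBits = 0;
        for (int f : freqs) fixedBits += 2 * f;

        // Huffman total length = sum of all merge costs: each merge adds
        // one bit to the code of every character in the merged subtree.
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        for (int f : freqs) minHeap.add(f);
        int huffmanBits = 0;
        while (minHeap.size() > 1) {
            int merged = minHeap.poll() + minHeap.poll();
            huffmanBits += merged;
            minHeap.add(merged);
        }

        System.out.println(fixedBits);   // 300
        System.out.println(huffmanBits); // 250
    }
}
```

With these frequencies, Huffman coding needs 250 bits versus 300 for the fixed-length scheme, because d (frequency 80) receives a 1-bit code while the rare a receives a 3-bit code.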

First, generate the Huffman tree. During construction, repeatedly select the two nodes with the lowest frequencies and create a new node as their parent; the new node's frequency is the sum of the two. Selecting the minimum-frequency nodes first places them in the lower layers of the tree, where the code length is longer. Since these are the least frequent characters, the total encoded length is minimized.

import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class hufman {
	private class Node implements Comparable<Node> {

		char ch;
		int frequency;
		boolean isleaf;
		Node left;
		Node right;

		// Leaf node holding a character and its frequency.
		public Node(char a, int f) {
			ch = a;
			frequency = f;
			isleaf = true;
		}

		// Internal node whose frequency is the sum of its children's.
		public Node(Node left, Node right, int frequency) {
			this.left = left;
			this.right = right;
			this.frequency = frequency;
		}

		@Override
		public int compareTo(Node o) {
			// Ascending order by frequency, so the PriorityQueue acts as a min-heap.
			return this.frequency - o.frequency;
		}
	}

	public Map<Character, String> encode(Map<Character, Integer> frquencyChar) {
		PriorityQueue<Node> p = new PriorityQueue<>();
		for (char c : frquencyChar.keySet()) {
			p.add(new Node(c, frquencyChar.get(c)));
		}
		// Repeatedly merge the two lowest-frequency nodes until one tree remains.
		while (p.size() != 1) {
			Node n1 = p.poll();
			Node n2 = p.poll();
			p.add(new Node(n1, n2, n1.frequency + n2.frequency));
		}
		return Encode(p.poll());
	}

	public Map<Character, String> Encode(Node root) {
		Map<Character, String> m = new HashMap<>();
		Encode(root, "", m);
		return m;
	}

	// Walk the tree, appending "0" for left and "1" for right; record the code at each leaf.
	public void Encode(Node node, String a, Map<Character, String> m) {
		if (node.isleaf) {
			m.put(node.ch, a);
			return;
		}
		Encode(node.left, a + "0", m);
		Encode(node.right, a + "1", m);
	}

}

Test.java

import java.util.HashMap;
import java.util.Map;

public class Test {

	public static void main(String[] args) {
		Map<Character, Integer> frquencyChar = new HashMap<>();
		frquencyChar.put('a', 1);
		frquencyChar.put('c', 19);
		frquencyChar.put('2', 13);
		frquencyChar.put('5', 14);

		hufman a = new hufman();
		Map<Character, String> b = a.encode(frquencyChar);

		for (char m : b.keySet()) {
			System.out.println(m + b.get(m));
		}
	}

}

Java's PriorityQueue is a min-heap (small-root heap) by default. To store Node objects in it, Node implements the Comparable interface and overrides compareTo, comparing nodes by their frequency weights. Because compareTo returns this.frequency - o.frequency, the nodes are ordered by ascending frequency, so each poll returns the node with the smallest frequency, which is exactly what the merging step that builds the sum nodes needs.
My understanding of the comparison result: if this.fre - o.fre > 0, the two elements are out of order and must be swapped (the smaller, later-arriving element moves up); if the result is < 0, no swap is needed.
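As a quick sanity check, a stripped-down Node (the class and field names here are illustrative stand-ins for the ones above) shows that this compareTo makes PriorityQueue behave as a min-heap:

```java
import java.util.PriorityQueue;

public class NodeOrderDemo {
    // Minimal stand-in for the blog's Node class.
    static class Node implements Comparable<Node> {
        int frequency;
        Node(int f) { frequency = f; }
        @Override
        public int compareTo(Node o) {
            return this.frequency - o.frequency; // ascending: min-heap behavior
        }
    }

    public static void main(String[] args) {
        PriorityQueue<Node> p = new PriorityQueue<>();
        p.add(new Node(19));
        p.add(new Node(1));
        p.add(new Node(14));
        System.out.println(p.poll().frequency); // 1: smallest frequency polled first
    }
}
```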

PriorityQueue<Integer> maxHeap = new PriorityQueue<Integer>(new Comparator<Integer>() {

	@Override
	public int compare(Integer o1, Integer o2) {
		// Reverse the natural order so the largest element sits at the root.
		return o2.compareTo(o1);
	}

});

By rewriting the comparator like this, the default small-root (min) heap can be turned into a large-root (max) heap.
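A small demo (the class name HeapDemo is my own) contrasting the default min-heap with the reversed-comparator max-heap:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class HeapDemo {
    public static void main(String[] args) {
        int[] values = {5, 1, 9, 3};

        // Default PriorityQueue: min-heap, smallest element polled first.
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        // Reversed comparator: max-heap, largest element polled first.
        PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Comparator.reverseOrder());

        for (int v : values) {
            minHeap.add(v);
            maxHeap.add(v);
        }

        StringBuilder minOrder = new StringBuilder();
        StringBuilder maxOrder = new StringBuilder();
        while (!minHeap.isEmpty()) minOrder.append(minHeap.poll()).append(" ");
        while (!maxHeap.isEmpty()) maxOrder.append(maxHeap.poll()).append(" ");

        System.out.println(minOrder.toString().trim()); // 1 3 5 9
        System.out.println(maxOrder.toString().trim()); // 9 5 3 1
    }
}
```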
https://blog.csdn.net/liou825/article/details/21322403
https://blog.csdn.net/zcf1784266476/article/details/68961473

Origin blog.csdn.net/poppyl917/article/details/89452964