Experiment No. 1 DCE (Huffman Encoding)
July 19, 2022 | Author: Anonymous | Category: N/A
EXPERIMENT NO. 1
NAME : Aditi Khot
STD : TE EXTC A
BATCH : A3
ROLL NO : 117A2051
AIM :
To Study Huffman Encoding Technique.
CODE :
probabilities = [0.4 0.3 0.2 0.1];
% Normalise the probabilities
probabilities = probabilities/sum(probabilities);
% For each probability...
for index = 1:length(probabilities)
    % ...create an empty codeword
    codewords{index} = []
    % Create a set containing only this codeword
    set_contents{index} = index
    % Store the probability associated with this set
    set_probabilities(index) = probabilities(index)
end
disp('-------------------------------------------------------------------------');
disp('The sets of symbols and their probabilities are:');
for set_index = 1:length(set_probabilities)
    disp([num2str(set_probabilities(set_index)),' ',num2str(set_contents{set_index})]);
end
% Keep going until all the sets have been merged into one
while length(set_contents) > 1
    % Determine which sets have the lowest total probabilities
    [temp, sorted_indices] = sort(set_probabilities)
    % Get the set having the lowest probability
    zero_set = set_contents{sorted_indices(1)}
    % Get that probability
    zero_probability = set_probabilities(sorted_indices(1))
    % For each codeword in the set...
    for codeword_index = 1:length(zero_set)
        % ...append a zero
        codewords{zero_set(codeword_index)} = [codewords{zero_set(codeword_index)}, 0]
    end
    % Get the set having the second lowest probability
    one_set = set_contents{sorted_indices(2)}
    % Get that probability
    one_probability = set_probabilities(sorted_indices(2))
    % For each codeword in the set...
    for codeword_index = 1:length(one_set)
        % ...append a one
        codewords{one_set(codeword_index)} = [codewords{one_set(codeword_index)}, 1]
    end
    disp('The symbols, their probabilities and the allocated bits are:');
    % For each codeword...
    for index = 1:length(codewords)
        % ...display its bits
        disp([num2str(index),' ',num2str(probabilities(index)),' ',num2str(codewords{index})]);
    end
    % Remove the two sets having the lowest probabilities...
    set_contents(sorted_indices(1:2)) = []
    % ...and merge them into a new set
    set_contents{length(set_contents)+1} = [zero_set, one_set]
    % Remove the two lowest probabilities...
    set_probabilities(sorted_indices(1:2)) = []
    % ...and give their sum to the new set
    set_probabilities(length(set_probabilities)+1) = zero_probability + one_probability
    disp('The sets and their probabilities are:');
    for set_index = 1:length(set_probabilities)
        disp([num2str(set_probabilities(set_index)),' ',num2str(set_contents{set_index})]);
    end
end
disp('-------------------------------------------------------------------------');
disp('The symbols, their probabilities and the allocated Huffman codewords are:');
% For each codeword...
for index = 1:length(codewords)
    % ...display its bits in reverse order
    disp([num2str(index),' ',num2str(probabilities(index)),' ',num2str(codewords{index}(length(codewords{index}):-1:1))]);
end
% Calculate the symbol entropy
entropy = sum(probabilities.*log2(1./probabilities))
% Calculate the average Huffman codeword length
av_length = 0
for index = 1:length(codewords)
    av_length = av_length + probabilities(index)*length(codewords{index})
end
disp(['The symbol entropy is: ',num2str(entropy)])
disp(['The average Huffman codeword length is: ',num2str(av_length)])
disp(['The Huffman coding rate is: ',num2str(entropy/av_length)])
OUTPUT :
>> kunalhuffman
codewords =
    []    [1x2 double]    [1x3 double]    [1x3 double]
set_contents =
    [1]
set_probabilities =
    0.4000
codewords =
    []    []    [1x3 double]    [1x3 double]
set_contents =
    [1]    [2]
set_probabilities =
    0.4000    0.3000
codewords =
    []    []    []    [1x3 double]
set_contents =
    [1]    [2]    [3]
set_probabilities =
    0.4000    0.3000    0.2000
codewords =
    []    []    []    []
set_contents =
    [1]    [2]    [3]    [4]
set_probabilities =
    0.4000    0.3000    0.2000    0.1000
-------------------------------------------------------------------------
The sets of symbols and their probabilities are:
0.4 1
0.3 2
0.2 3
0.1 4
temp =
    0.1000    0.2000    0.3000    0.4000
sorted_indices =
    4    3    2    1
zero_set =
    4
zero_probability =
    0.1000
codewords =
    []    []    []    [0]
one_set =
    3
one_probability =
    0.2000
codewords =
    []    []    [1]    [0]
The symbols, their probabilities and the allocated bits are:
1 0.4
2 0.3
3 0.2 1
4 0.1 0
set_contents =
    [1]    [2]
set_contents =
    [1]    [2]    [1x2 double]
set_probabilities =
    0.4000    0.3000
set_probabilities =
    0.4000    0.3000    0.3000
The sets and their probabilities are:
0.4 1
0.3 2
0.3 4 3
temp =
    0.3000    0.3000    0.4000
sorted_indices =
    2    3    1
zero_set =
    2
zero_probability =
    0.3000
codewords =
    []    [0]    [1]    [0]
one_set =
    4    3
one_probability =
    0.3000
codewords =
    []    [0]    [1]    [1x2 double]
codewords =
    []    [0]    [1x2 double]    [1x2 double]
The symbols, their probabilities and the allocated bits are:
1 0.4
2 0.3 0
3 0.2 1 1
4 0.1 0 1
set_contents =
    [1]
set_contents =
    [1]    [1x3 double]
set_probabilities =
    0.4000
set_probabilities =
    0.4000    0.6000
The sets and their probabilities are:
0.4 1
0.6 2 4 3
temp =
    0.4000    0.6000
sorted_indices =
    1    2
zero_set =
    1
zero_probability =
    0.4000
codewords =
    [0]    [0]    [1x2 double]    [1x2 double]
one_set =
    2    4    3
one_probability =
    0.6000
codewords =
    [0]    [1x2 double]    [1x2 double]    [1x2 double]
codewords =
    [0]    [1x2 double]    [1x2 double]    [1x3 double]
codewords =
    [0]    [1x2 double]    [1x3 double]    [1x3 double]
The symbols, their probabilities and the allocated bits are:
1 0.4 0
2 0.3 0 1
3 0.2 1 1 1
4 0.1 0 1 1
set_contents =
    Empty cell array: 1-by-0
set_contents =
    [1x4 double]
set_probabilities =
    Empty matrix: 1-by-0
set_probabilities =
    1.0000
The sets and their probabilities are:
1 1 2 4 3
-------------------------------------------------------------------------
The symbols, their probabilities and the allocated Huffman codewords are:
1 0.4 0
2 0.3 1 0
3 0.2 1 1 1
4 0.1 1 1 0
entropy =
    1.8464
av_length =
    0
av_length =
    0.4000
av_length =
    1.0000
av_length =
    1.6000
av_length =
    1.9000
The symbol entropy is: 1.8464
The average Huffman codeword length is: 1.9
The Huffman coding rate is: 0.97181
>>
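The entropy, average length, and coding rate printed above can be checked independently. A minimal Python sketch (variable names are our own, not part of the MATLAB script), using the codeword lengths 1, 2, 3, 3 found for symbols 1 to 4:

```python
import math

# Symbol probabilities from the experiment
probabilities = [0.4, 0.3, 0.2, 0.1]
# Huffman codeword lengths obtained above: 0, 10, 111, 110
code_lengths = [1, 2, 3, 3]

# Symbol entropy H = sum p * log2(1/p)
entropy = sum(p * math.log2(1 / p) for p in probabilities)
# Average codeword length L = sum p * len(codeword)
av_length = sum(p * l for p, l in zip(probabilities, code_lengths))

print(round(entropy, 4))              # 1.8464
print(round(av_length, 4))            # 1.9
print(round(entropy / av_length, 5))  # 0.97181
```

All three values agree with the MATLAB output, and the coding rate H/L is close to 1, as expected for a good prefix code.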
CONCLUSION : Huffman encoding is a lossless compression technique that assigns variable-length codes to characters according to how frequently they occur in the given text: frequent symbols receive short codewords and rare symbols receive long ones. Here we have successfully implemented Huffman coding in MATLAB and also performed a dry run to verify that the codewords obtained are correct.
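The same merge-the-two-lowest-probability-sets procedure can be cross-checked with a short Python sketch using a priority queue. `huffman_codes` is a hypothetical helper of our own, not part of the MATLAB script; like the script, it prefixes 0 onto the lowest-probability set and 1 onto the second lowest at each merge:

```python
import heapq
from itertools import count

def huffman_codes(probs):
    """Build Huffman codewords for a list of symbol probabilities."""
    tie = count()  # tie-breaker so the heap never compares dicts
    # Each heap entry: (probability, tie, {symbol: partial codeword})
    heap = [(p, next(tie), {i: ""}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, set0 = heapq.heappop(heap)  # lowest probability -> prefix 0
        p1, _, set1 = heapq.heappop(heap)  # second lowest -> prefix 1
        merged = {s: "0" + c for s, c in set0.items()}
        merged.update({s: "1" + c for s, c in set1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

codes = huffman_codes([0.4, 0.3, 0.2, 0.1])
print(sorted(codes.items()))
# [(0, '0'), (1, '10'), (2, '111'), (3, '110')]
```

With symbols renumbered 0 to 3, this reproduces the codewords 0, 10, 111, 110 from the MATLAB dry run (the MATLAB script appends bits and reverses at the end; prepending here is equivalent).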