# Heapsort

Heapsort is a solid algorithm that first partially presorts the data and then repeatedly selects the largest remaining element in order to move it to the end of the array.

### Mapping of an array onto a binary tree (not a heap)

Heapsort works on an array, but logically regards it as a binary tree in which every father node is at least as large as its two son nodes.

### Mapping of an array onto a binary tree (a heap)

Therefore, the largest element is always at the root of the binary tree, which means that it can be sorted out; only the remaining elements have to move up in such a way that this partial order is maintained. To do this, the depth of the tree must be traversed once, which means an effort of the order of log2(N). Since this has to be done for each of the N elements, the total effort is bounded by N · log2(N).

If the array elements are numbered from 1 to N, the element field[k] has the direct successor elements field[2·k] and field[2·k + 1].
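This index arithmetic, together with the 0-based variant that the Java code later in the text uses, can be captured in a few helper functions. This is an illustrative sketch; the class and method names are made up:

```java
public class HeapIndex {
    // 1-based indexing as in the text: the sons of k are 2k and 2k+1.
    static int leftSon1(int k)  { return 2 * k; }
    static int rightSon1(int k) { return 2 * k + 1; }
    static int father1(int k)   { return k / 2; }

    // 0-based indexing as in the Java code in the text: the sons of k
    // are 2k+1 and 2k+2, and the father of k is (k-1)/2.
    static int leftSon0(int k)  { return 2 * k + 1; }
    static int father0(int k)   { return (k - 1) / 2; }

    public static void main(String[] args) {
        System.out.println(leftSon1(3)); // 6
        System.out.println(father1(7));  // 3
        System.out.println(leftSon0(2)); // 5
    }
}
```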

Show that with this definition every element of the array appears as a node in the tree and that no node is a descendant of two different ancestors.

A partial order on this tree is given if the following relationships hold for every node and its descendants:

field[k] ≥ field[2·k]
field[k] ≥ field[2·k + 1]
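These two conditions can be verified mechanically over a whole array. A small sketch, assuming a plain int array and 0-based indexing (so the sons of k sit at 2k+1 and 2k+2); the helper name isHeap is made up:

```java
public class HeapCheck {
    // Returns true if field satisfies the partial order above,
    // i.e. every father is >= both of its sons (0-based indexing).
    static boolean isHeap(int[] field) {
        int end = field.length - 1;
        for (int father = 0; 2 * father + 1 <= end; father++) {
            int sonL = 2 * father + 1, sonR = sonL + 1;
            if (field[father] < field[sonL]) return false;
            if (sonR <= end && field[father] < field[sonR]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isHeap(new int[]{9, 7, 8, 3, 5, 6})); // true
        System.out.println(isHeap(new int[]{3, 7, 8}));          // false
    }
}
```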

A tree with such an order is also called a heap. A heap is built by combining smaller heaps: first, the elements of the last half of the array are regarded as single-element heaps;

then the elements of the preceding quarter are added, for each of which the heap property must be established together with its two successor elements;

and so on, up to the last element at the root of the binary tree.

In order to establish the heap property, a procedure sink is used which arranges a node and all of its descendants in such a way that the heap property holds for all nodes of this subtree.

```
declare
    integer array field[1..end];
procedure sink(integer node, end);
declare
    integer father, son, sonL, sonR;
do
    father := node;                        -- start of the heap pass
    for run do
        sonL := 2 * father;                -- left son
        if sonL > end then return; endif;  -- end of the array reached
        sonR := sonL + 1;                  -- right son
        son := sonL;                       -- "son" becomes the larger son
        if sonR <= end then
            if field[sonL] < field[sonR] then son := sonR; endif;
        endif;
        if field[son] <= field[father] then return;  -- if ok, return
        else                               -- swap father <-> son
            field[son], field[father] := field[father], field[son];
            father := son;                 -- continue searching in the son's path
        endif;
    endfor run;
done procedure sink;
do
    for node := end/2 to 1 by -1 do sink(node, end); endfor;
    ...
```

In Java this method could be formulated as follows.

```java
public void sink(CDataSet[] field, int father, int end) {
    while (true) {
        int sonL = father * 2 + 1;   // left son (0-based indexing)
        if (sonL > end) return;      // end of the array reached
        int sonR = sonL + 1;         // right son
        int son = sonL;              // "son" becomes the larger son
        if (sonR <= end && field[sonL].key < field[sonR].key) son = sonR;
        if (field[son].key <= field[father].key) return;
        CDataSet swap = field[son];  // swap father <-> son
        field[son] = field[father];
        field[father] = swap;
        father = son;                // continue searching in the son's path
    }
}

public boolean heap(CDataSet[] field) {
    int end = field.length - 1;
    for (int node = end / 2; node >= 0; node--) sink(field, node, end);
    ...
}
```
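Since the text only ever accesses a .key field of CDataSet, these methods can be exercised with a minimal stand-in for that class. A runnable sketch (the CDataSet layout and the demo class are assumptions; the sort phase follows the complete solution given later in the text, and heap returns void here for simplicity):

```java
// Minimal stand-in: CDataSet is assumed to be a record with an int key.
class CDataSet {
    int key;
    CDataSet(int key) { this.key = key; }
}

public class HeapsortDemo {
    public static void sink(CDataSet[] field, int father, int end) {
        while (true) {
            int sonL = father * 2 + 1;
            if (sonL > end) return;
            int son = sonL, sonR = sonL + 1;
            if (sonR <= end && field[sonL].key < field[sonR].key) son = sonR;
            if (field[son].key <= field[father].key) return;
            CDataSet swap = field[son];
            field[son] = field[father];
            field[father] = swap;
            father = son;
        }
    }

    public static void heap(CDataSet[] field) {
        int end = field.length - 1;
        // Phase 1: build the heap bottom-up.
        for (int node = end / 2; node >= 0; node--) sink(field, node, end);
        // Phase 2: repeatedly swap the root to the end and re-sink.
        for (int node = end; node >= 1; node--) {
            CDataSet swap = field[node];
            field[node] = field[0];
            field[0] = swap;
            sink(field, 0, node - 1);
        }
    }

    public static void main(String[] args) {
        CDataSet[] field = { new CDataSet(4), new CDataSet(1),
                             new CDataSet(7), new CDataSet(3) };
        heap(field);
        for (CDataSet d : field) System.out.print(d.key + " "); // 1 3 4 7
    }
}
```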

The effort for building a heap is linear even in the worst case.

If the tree has depth T, there are between 2^(T−1) and 2^T − 1 elements in the tree. For a node at depth k, at most O(T − k) comparisons and swaps must be carried out so that it and its successors become a heap; at depth k there are at most 2^(k−1) nodes. This results in the following estimate of the effort:

• Calculate the sum. Hint: use mathematical induction on T.

• Explain why this proves that the effort increases linearly with the number of elements.
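The estimate can plausibly be reconstructed from the two statements above (a sketch, not the original formula): with at most 2^(k−1) nodes at depth k, each costing at most c · (T − k) steps for some constant c,

```latex
\sum_{k=1}^{T} 2^{\,k-1}\, c\,(T-k) \;=\; c\left(2^{T} - T - 1\right) \;\le\; 2\,c\,N ,
```

since N ≥ 2^(T−1) implies 2^T ≤ 2N. The effort for building the heap is therefore linear in N.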

Since the largest element is now at the beginning of the heap, it can be brought to the end of the array, where it obviously belongs, and the element that was previously at the end is swapped to the front. Now the heap property is violated, so this element has to be sunk into the tree, for which the above procedure sink can be used.

Since the smaller elements sit at the end of a heap, each such element will usually travel a long way down the tree; this way is bounded by the height of the tree.

```
...
for node := end to 2 by -1 do
    field[node], field[1] := field[1], field[node];
    sink(1, node - 1);
endfor node;
```

The complete solution in Java then looks like this:

```java
public boolean heap(CDataSet[] field) {
    int end = field.length - 1;
    // Phase 1: build the heap bottom-up.
    for (int node = end / 2; node >= 0; node--) sink(field, node, end);
    // Phase 2: swap the root to the end, then restore the heap property.
    for (int node = end; node >= 1; node--) {
        CDataSet swap = field[node];
        field[node] = field[0];
        field[0] = swap;
        sink(field, 0, node - 1);
    }
    return TestField(field);
}
```

Here each call of the procedure sink must traverse the depth of the remaining tree, so this effort is proportional to log2(N). Since this has to be done for each element, the overall effort is bounded by N · log2(N), which means a reasonable computing time even for very large N.

If each element always sinks through the entire heap down to a leaf, the effort can be calculated exactly. If N is the number of elements and T is the depth of the tree, so that N ≤ 2^T − 1, then in the worst case the following sum must obviously be formed:
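The sum itself is not preserved here; a plausible reconstruction (an assumption, not the original formula): the element removed in step i sinks through a heap of about i elements, i.e. through roughly ⌊log2 i⌋ levels, so in the worst case

```latex
\sum_{i=2}^{N} \lfloor \log_2 i \rfloor \;\le\; N \log_2 N .
```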

Heapsort is a very interesting algorithm for a number of reasons. First, it uses a data structure that avoids many of the disadvantages of a tree built from linked lists: no additional memory is required in the elements for pointers to the successors; the ancestors and successors of an element can easily be determined; and the tree can easily be traversed level by level, since this corresponds to the arrangement of the elements in the array. Whenever the maximum depth of a tree is known in advance, this data structure can be used to advantage.

Heapsort can be improved even further, for example by using a ternary or quaternary tree instead of a binary one. In addition, Heapsort is an algorithm whose maximum computing time, unlike Quicksort's, is bounded by N · log(N).
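For such a d-ary heap the index arithmetic generalizes directly. A hedged sketch (0-based indexing; class and method names are made up): a larger d makes the tree flatter, so sink traverses log_d(N) levels instead of log2(N), at the price of more comparisons per level.

```java
public class DaryHeap {
    // In a d-ary heap (0-based), the sons of node k are d*k+1 .. d*k+d,
    // and the father of node k is (k-1)/d.
    static int son(int d, int k, int i) { return d * k + i; } // i in 1..d
    static int father(int d, int k)     { return (k - 1) / d; }

    public static void main(String[] args) {
        System.out.println(son(3, 0, 1));  // 1: first son of the root (ternary)
        System.out.println(son(3, 2, 3));  // 9: third son of node 2
        System.out.println(father(3, 9));  // 2
    }
}
```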