2.6k views

The usual $\Theta(n^2)$ implementation of Insertion Sort to sort an array uses linear search to identify the position where an element is to be inserted into the already sorted part of the array. If, instead, we use binary search to identify the position, the worst case running time will

1. remain $\Theta(n^2)$
2. become $\Theta(n (\log n)^2)$
3. become $\Theta(n \log n)$
4. become $\Theta(n)$
0
Now for the best case, it will take $O(n \log n)$ time.
0
For the best case...

Searching takes $\log n$ and a swap takes 1, so the total $(\log n + 1)$ becomes constant, and for $n$ elements it becomes $n \cdot 1$. So in total, the best case becomes $n$.

Correct me if I am wrong?
0
The best-case input for insertion sort is an already sorted array, hence we only find the place of the current element in the sorted part of the array; we don't need to do any swaps, so it will be $n \log n$ for $n$ elements.
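For reference, the two best-case counts can be sanity-checked with a minimal Python sketch (the helper name `best_case_comparisons` is illustrative, not from the question): on an already-sorted input, linear search stops after one failed comparison per element, $\Theta(n)$ total, while binary search still probes about $\lceil \log_2(i+1) \rceil$ times for the $i$-th element, $\Theta(n \log n)$ total.

```python
def best_case_comparisons(n):
    """Comparisons insertion sort makes on an already-sorted input of n elements.

    With linear search, the very first comparison (arr[i-1] > key) fails,
    so each of the n-1 insertions costs exactly 1 comparison.
    With binary search, locating the position among i sorted elements
    still takes about ceil(log2(i+1)) probes, which equals i.bit_length().
    """
    linear = n - 1
    binary = sum(i.bit_length() for i in range(1, n))
    return linear, binary

print(best_case_comparisons(16))   # (15, 49): linear grows as n, binary as n log n
```

So binary search makes the *best* case worse, even though it never changes the worst case.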

In insertion sort, with linear search, it takes

(worst case) $n$ comparisons for searching the right position, and $n$ swaps to make room to place the element.

Hence for n elements, a total of $n\times(n+n)$; $n$ for search and $n$ for swaps.

$= \Theta (2n^2) = \Theta (n^2)$

If we replace it with binary search, it takes

(worst case) $\log n$ comparisons for searching the right position, and $n$ swaps to make room to place the element.

Hence for $n$ elements, a total of $n\times(\log n+n)$; $\log n$ for search and $n$ for swaps.

$= \Theta (n \times \log n + n^2) = \Theta (n^2)$
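To make this concrete, here is a hedged Python sketch of insertion sort with binary search, using the standard `bisect` module (the comparison and shift counters are mine, added for illustration). Binary search cuts the comparisons to about $\log i$ per element, but making room still shifts up to $i$ elements, so the worst case stays $\Theta(n^2)$:

```python
import bisect

def binary_insertion_sort(arr):
    """Insertion sort that finds the insert position with binary search.

    Returns (comparisons, shifts): binary search costs ~log2(i) comparisons
    per element, but making room still costs up to i shifts.
    """
    comparisons = 0
    shifts = 0
    for i in range(1, len(arr)):
        key = arr[i]
        # bisect_right does about ceil(log2(i+1)) comparisons to find the spot.
        pos = bisect.bisect_right(arr, key, 0, i)
        comparisons += i.bit_length()
        # Shifting the larger elements one slot right takes i - pos moves.
        arr[pos + 1:i + 1] = arr[pos:i]
        shifts += i - pos
        arr[pos] = key
    return comparisons, shifts

data = list(range(10, 0, -1))            # reverse-sorted: worst case for shifts
comps, moves = binary_insertion_sort(data)
# For n = 10 reversed, shifts = 1 + 2 + ... + 9 = 45, i.e. Theta(n^2) overall.
```

The search term ($n \log n$) is dominated by the shifting term ($n^2$), exactly as in the derivation above.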

+2
I think this explanation is correct and clearer.
+3
Replace the word "swap" with "shifting".
0
Why have you taken it for $n$ elements? Nothing is specified for the number of elements. Please explain.
0
Then what do you think 'n' stands for in the given question?
0

@ryan sequeira You mean for every element in the array we need to do max. n comparisons to search for the right position?

But if you see the code, in each iteration the max. no. of comparisons occurs when the inner loop runs from j=(i-1) down to j=1, i.e. (i-1)-1+1 = i-1 comparisons, where i∈[2,n].

InsertionSort(arr, n)
{
    for i = 2 to n
    {
        key = arr[i]
        j = i - 1
        while (j > 0 && arr[j] > key)   // runs from j = (i-1) down to at most j = 1
        {
            arr[j+1] = arr[j]
            j = j - 1
        }
        arr[j+1] = key
    }
}

So what I mean to say is: the number of comparisons is not $n$ for every element, and neither is the number of shifts; it depends on the element's position in the array. Isn't it so? Correct me if this is wrong.

The answer is still the same though.
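The claim above can be checked by instrumenting the while loop. A small Python sketch (the counter names are mine) that returns the number of comparisons made in each outer iteration:

```python
def insertion_sort_counts(arr):
    """Classic insertion sort; returns the number of while-loop
    comparisons made in each outer iteration (i = 1 .. n-1)."""
    per_iteration = []
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        comps = 0
        while j >= 0 and arr[j] > key:   # runs from j = i-1 down to at most j = 0
            comps += 1
            arr[j + 1] = arr[j]
            j -= 1
        if j >= 0:
            comps += 1                   # the failed comparison that ends the loop
        arr[j + 1] = key
        per_iteration.append(comps)
    return per_iteration

# Worst case (reverse-sorted): iteration i does exactly i comparisons.
print(insertion_sort_counts([5, 4, 3, 2, 1]))   # [1, 2, 3, 4]
```

So the per-iteration cost indeed depends on the element's position; only the *worst case* pushes every iteration to its maximum, which is what the $\Theta(n^2)$ bound describes.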

0

Explanation of how we get $\Theta(n^2)$ time complexity in insertion sort:

$T(n) = 0+0+1+1+2+2+\dots+(n-1)+(n-1) = 2\,[0+1+2+\dots+(n-1)] = \Theta(n^2)$

Why have I written every number twice? Because the first one represents the number of comparisons and the second one the number of shifts... is my understanding correct?
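The pairing in that sum can be checked empirically. A minimal Python sketch (assuming reverse-sorted input, the worst case) counts comparisons and shifts separately; on $n$ reversed elements both come out to $n(n-1)/2$, for a total of $n(n-1)$:

```python
def insertion_sort_work(arr):
    """Insertion sort that counts comparisons and shifts separately."""
    comparisons = shifts = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            comparisons += 1
            arr[j + 1] = arr[j]      # one shift per successful comparison
            shifts += 1
            j -= 1
        if j >= 0:
            comparisons += 1         # the comparison that ends the loop
        arr[j + 1] = key
    return comparisons, shifts

n = 8
comps, moves = insertion_sort_work(list(range(n, 0, -1)))
# Reverse-sorted input: comparisons = shifts = 0 + 1 + ... + (n-1) = n(n-1)/2,
# so the total work is n(n-1), matching the T(n) sum above.
```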

0
nice explanation
0
In this problem, for the time complexity the comparisons do not matter; only the shifts determine the result.

A. The complexity remains the same: $\Theta(n^2)$.

To place an element $x$ in its correct position, we first find that position in the sorted part using binary search, but we still have to make room for it by shifting elements to the right, which in the worst case may be as many as the size of the sorted part.
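That single step can be sketched in Python (`insert_into_sorted_prefix` is an illustrative helper name, not standard code): the binary search is $O(\log k)$, but making room can move all $k$ elements of the sorted prefix.

```python
import bisect

def insert_into_sorted_prefix(arr, prefix_len):
    """Place arr[prefix_len] into the sorted prefix arr[0:prefix_len].

    bisect finds the position in O(log prefix_len) comparisons, but
    shifting the larger elements one slot right can touch all of them.
    """
    key = arr[prefix_len]
    pos = bisect.bisect_right(arr, key, 0, prefix_len)
    shifts = prefix_len - pos                 # elements that must move right
    arr[pos + 1:prefix_len + 1] = arr[pos:prefix_len]
    arr[pos] = key
    return shifts

a = [2, 4, 6, 8, 1]       # sorted prefix of length 4, then the new key 1
moved = insert_into_sorted_prefix(a, 4)
# key 1 belongs at position 0, so all 4 prefix elements shift: moved == 4
```

Summing this worst-case shift count over all $n$ insertions gives $1 + 2 + \dots + (n-1) = \Theta(n^2)$, which is why the answer is option 1.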