We have a set of appointments, {1, 2, ..., n}, where the i-th appointment corresponds to an interval spanning from a starting time, call it s_i, to a finishing time, call it f_i. We say that a subset of the intervals (a collection of said intervals) is compatible if no two of the intervals overlap in time. Our goal is to accept as large a compatible subset as possible.
```
Let I be the set of all intervals
Let A be the set of accepted intervals, initially empty
While I is not empty:
    Choose the interval i from I with the smallest finishing time
    Add interval i to A
    Remove all intervals not compatible with i from I
EndWhile
Return the set of all accepted intervals, A
```
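As a concrete sketch, the pseudocode translates almost directly into Python; representing each interval as a (start, finish) tuple is my own choice for illustration:

```python
def interval_scheduling(intervals):
    # intervals: a list of (start, finish) tuples
    I = list(intervals)
    A = []  # accepted intervals, initially empty
    while I:
        # choose the interval with the smallest finishing time
        i = min(I, key=lambda iv: iv[1])
        A.append(i)
        # keep only the intervals compatible with i, i.e. those
        # that finish before i starts or start after i finishes
        I = [j for j in I if j[1] <= i[0] or j[0] >= i[1]]
    return A

print(interval_scheduling([(1, 3), (2, 5), (4, 7)]))  # [(1, 3), (4, 7)]
```

Note that finding the minimum and filtering inside the loop makes this direct translation quadratic in the worst case; we will see below how to do better.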
How do we know that the algorithm returns an optimal set of intervals? Let theta denote an optimal set of intervals. It is tempting to prove that A = theta, but there may be several different optimal solutions, and at best A equals one of them. We should rather simply prove that the sizes of the two sets are equal, i.e. |A| = |theta|. We do so by proving that, writing A = {i_1, ..., i_k} and theta = {j_1, ..., j_m} in the order of their finishing times, the following holds for all indices r: f(i_r) <= f(j_r). In other words, the greedy solution always "stays ahead" of the optimal one. The base case is immediate, since the algorithm starts by choosing the interval with the smallest finishing time of all: f(i_1) <= f(j_1).
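For the curious, here is a compact sketch of the remaining induction step, in the notation above; this is the standard "greedy stays ahead" argument, which the post itself does not spell out:

```latex
% Induction step: assume f(i_{r-1}) \le f(j_{r-1}).
% theta is compatible, so s(j_r) \ge f(j_{r-1}) \ge f(i_{r-1}),
% which means j_r is still available when the algorithm chooses i_r.
% The algorithm picks the smallest available finishing time, hence
f(i_r) \le f(j_r).
% Finally, if theta had more intervals than A (m > k), then j_{k+1}
% would satisfy s(j_{k+1}) \ge f(j_k) \ge f(i_k) and would still be
% available after i_k was accepted, contradicting that the algorithm
% stopped. Hence m = k, i.e. |A| = |theta|.
```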
The algorithm runs in O(n log(n)) time, which we learned is linearithmic time in the previous blog post (part 1). Let us learn why this is the case by considering an implementation of said algorithm. This will get quite technical, but I merely want to give you an idea of the process. First, we sort the intervals in order of finishing time, which takes O(n log(n)) time. Then, we construct an array S, which contains the value s_i of interval i at position S[i] in the array, which takes O(n) time. Finally, we make a single pass over the sorted intervals, accepting each interval whose starting time is no earlier than the finishing time of the most recently accepted interval; this takes O(n) time, which leaves us with a total running time of O(n log(n)) + O(n) + O(n) = O(n log(n)).
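A minimal sketch of such an implementation, under the same tuple representation as above; note that instead of indexing into the array S, this version simply remembers the finishing time of the last accepted interval:

```python
def interval_scheduling_fast(intervals):
    # sort the intervals by finishing time: O(n log n)
    ordered = sorted(intervals, key=lambda iv: iv[1])
    A = []
    last_finish = float("-inf")
    for s, f in ordered:  # a single O(n) pass
        if s >= last_finish:  # compatible with everything accepted so far
            A.append((s, f))
            last_finish = f
    return A
```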
Next, let us look at the Fibonacci numbers: a series of numbers where each number is the sum of the two preceding numbers, i.e. 1, 1, 2, 3, 5, 8, 13, 21, ... The definition translates directly into a recursive algorithm:
```python
def fib(n):
    if n == 1 or n == 2:  # (1) base case
        return 1
    else:
        return fib(n - 2) + fib(n - 1)  # (2) recursive step
```
Consider the call tree for n = 4, which means we go down the else-branch of our algorithm, in which we return the sum of the results of two new calls to the same function, i.e. our recursive step (step 2). To figure out the result of these two function calls, we look at each function call individually, which is represented by the second layer of our tree. The branches bottom out at the base cases, fib(1) = 1 and fib(2) = 1. Notice that the tree contains several identical calls to fib(n), i.e. fib(n) = fib(n - 2) + fib(n - 1) expands the same subproblems over and over.
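To see the repeated work concretely, here is a small instrumented variant of the naive function; the Counter bookkeeping is my own addition, not part of the original post:

```python
from collections import Counter

calls = Counter()  # how many times fib is called for each n

def fib(n):
    calls[n] += 1
    if n == 1 or n == 2:  # (1) base case
        return 1
    else:
        return fib(n - 2) + fib(n - 1)  # (2) recursive step

fib(6)
print(calls)  # fib(2), for instance, is computed five times
```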
These repeated results are exactly what we memoize, to avoid computing the same result multiple times.

```python
cache = {}  # (1) cache for memoization (a dict keyed by n)

def fib(n):
    if n in cache:
        return cache[n]  # (2) reuse old memoized/cached computation
    else:
        if n == 1 or n == 2:  # (3) base case
            cache[n] = 1  # (4.1) memoize result
        else:
            cache[n] = fib(n - 2) + fib(n - 1)  # (4.2) memoize result
        return cache[n]
```
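As a quick sanity check of the memoized version (my own usage example):

```python
print(fib(10))  # 55: each of fib(1) through fib(10) is computed only once
```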
With memoization, each of the values fib(1) through fib(n) is computed only once, so a call to fib(n) triggers at most O(n) computations, which intuitively gives us a linear-time algorithm.

Next, let us turn to mergesort, a classic divide-and-conquer sorting algorithm:

```python
def mergesort(l):
    if len(l) <= 1:
        return l  # base case (a list of one element is already sorted)
    mid = len(l) // 2
    left = l[:mid]  # divide
    right = l[mid:]  # divide
    a, b = mergesort(left), mergesort(right)  # conquer
    return merge(a, b)  # combine

def merge(a, b):  # combine function
    l = list()
    i, j = 0, 0
    while i < len(a) and j < len(b):  # 1. still elements in both lists
        if a[i] < b[j]:
            l.append(a[i])
            i += 1
        else:
            l.append(b[j])
            j += 1
    while i < len(a):  # 2. still elements in a
        l.append(a[i])
        i += 1
    while j < len(b):  # 3. still elements in b
        l.append(b[j])
        j += 1
    return l
```
We call the mergesort() function recursively with the two halves of the list, until we reach our base case. We focus on the left branch, as the process is similar for both branches. Once the two halves contain only one element each, the two recursive mergesort() calls (line 7, the conquer step) can return two sorted lists, as they contain only one element. We will call these two lists a and b. Then, we pass these two lists to our merge() function, to merge the two lists (line 8). The merged, sorted list is in turn returned to the enclosing mergesort() function call, again setting the two lists a and b:

```python
a, b = mergesort(left), mergesort(right)
```

In this way, we merge the sorted list from the left branch, [27, 38], with the sorted list from the other branch, [21, 42], which eventually gives us our final merged, sorted list, [21, 27, 38, 42].
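Judging by the two halves above, the example list appears to be [38, 27, 42, 21] (my reconstruction of the figure's input), which gives a quick check of the implementation:

```python
print(mergesort([38, 27, 42, 21]))  # [21, 27, 38, 42]
```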
If there are still elements left to be added from both lists, i.e. i < len(a) and j < len(b):

```python
while i < len(a) and j < len(b):  # 1. still elements in both lists
    if a[i] < b[j]:
        l.append(a[i])
        i += 1
    else:
        l.append(b[j])
        j += 1
```
In the example, a[i] < b[j] is true, which is why we add 1 as the first element to the combined list.

If there are only elements left to be added from list a, in which case j < len(b) is no longer true, as j has reached len(b):

```python
while i < len(a):  # 2. still elements in a
    l.append(a[i])
    i += 1
```
If there are only elements left to be added from list b, in which case i < len(a) is no longer true, as i has reached len(a):

```python
while j < len(b):  # 3. still elements in b
    l.append(b[j])
    j += 1
```
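Putting the three cases together on the two sorted halves from the mergesort example above, the merge proceeds as follows (the trace is my own illustration):

```python
print(merge([27, 38], [21, 42]))  # [21, 27, 38, 42]
# case 1: 27 < 21 is false -> append 21  (l = [21])
# case 1: 27 < 42 is true  -> append 27  (l = [21, 27])
# case 1: 38 < 42 is true  -> append 38  (l = [21, 27, 38])
# case 3: only b has elements left -> append 42  (l = [21, 27, 38, 42])
```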
Before analyzing mergesort, consider a simpler, naive sorting algorithm: iterate over the list from a[1] to a[n], and for each element a[i], compare it with the elements a[1] to a[i-1], shifting every element greater than a[i] up one position, essentially moving them to the right of a[i]. Consider, for example, n = 8, i = 6, and a[i] = 5, for which two elements are to be moved to correctly position the element a[i]. In the worst case, we compare a[i] with one more element in every iteration, until we finally compare an element with all but one element, i.e. n - 1 elements. This means that, on average, we compare each element with n/2 elements, which gives us a total of n * n/2 comparisons. In asymptotic terms, this means O(n^2) comparisons, i.e. a quadratic running time. From the first blog post, we know that this is not a favorable running time.
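What is described here is essentially insertion sort; a minimal Python sketch of it (0-indexed, whereas the description above is 1-indexed):

```python
def insertion_sort(a):
    for i in range(1, len(a)):
        current = a[i]
        j = i - 1
        # shift every element greater than a[i] one position to the right
        while j >= 0 and a[j] > current:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = current  # insert a[i] in its correct position
    return a

print(insertion_sort([8, 3, 5, 1]))  # [1, 3, 5, 8]
```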
Now, how fast is mergesort? Since we halve the list in every recursive step, the list can be halved at most O(log(n)) times before we reach lists of a single element; that is, the recursion is O(log(n)) levels deep. Merging two lists takes O(n) time: recall that our mergesort algorithm simply iterates over the two lists to combine them, and in the worst case the two lists are of length n/2, in which case it takes a total of O(n) time to iterate and combine the two lists. Since the list is halved O(log(n)) times, we must recursively merge lists at that many levels to reach the final, sorted list. Therefore, the overall running time must be O(n * log(n)). Again, this might not be very intuitive, but in reality, the process of the analysis should be the real takeaway here.
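For readers who prefer recurrences, the same analysis is often summarized in one line (standard notation, not from the post itself); the two recursive calls each sort half the list, and the O(n) term is the merge:

```latex
T(n) = 2\,T(n/2) + O(n) \quad \Longrightarrow \quad T(n) = O(n \log n)
```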