Understanding time/space complexity

12 minutes

Transcript

All right, now that we're done with abstract data types, we need to have a quick look at the wild world of computational complexity, to understand the performance our data structures are providing. As programmers, we often find ourselves asking the same two questions over and over and over again: how much time does this algorithm need to finish, and how much space does this algorithm need for my computation? If your program takes a lifetime of the universe to finish, then it's no good. Similarly, if your program runs in constant time but requires space equal to the sum of all the bytes of all the files on the internet, your algorithm is also useless.

So, to standardize a way of talking about how much time and how much space is required for an algorithm to run, theoretical computer scientists have invented big O notation, amongst other things like Big Theta, Big Omega, and so on. But we're interested in big O because it tells us about the worst case: big O notation only cares about the worst case. So if your algorithm sorts numbers, imagine the input is the worst possible arrangement of numbers for your particular sorting algorithm. Or, as a concrete example, suppose you have an unordered list of unique numbers, and you're searching for the number seven, or the position where seven occurs in your list.

The worst case isn't when seven is at the beginning or in the middle of the list; it's when seven is the very last element of the list. For that particular example, the time complexity would be linear with respect to the number of elements in your list, because you have to traverse every single element until you find seven. The same concept applies to space: you just consider the worst possible amount of space your algorithm is going to need for that particular input.
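
As an illustration of that search example, here is a minimal Python sketch (the function name and the list are illustrative, not from the lesson's slides):

```python
def find_index(values, target):
    # Scan left to right; in the worst case the target is the
    # very last element, so we examine all n elements: O(n).
    for i, v in enumerate(values):
        if v == target:
            return i
    return -1  # not found

print(find_index([3, 1, 9, 4, 7], 7))  # worst case: 7 is last -> 4
```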

There's also the fact that big O really only cares about what happens when your input becomes arbitrarily large; we're not interested in what happens when the input is small. For this reason, you'll see that we ignore things like constants and multiplicative factors in big O notation, so you'll often see those simplifications coming up again and again. To clarify one thing: when I say n, n is always going to be the size of the input coming into your algorithm; there's always going to be some notion of input size. Constant time is written as a one wrapped in a big O: O(1).

If your algorithm takes a logarithmic amount of time to finish, we say that's O(log n). If it's linear, then we just say O(n). If it takes quadratic time or cubic time, then we say that's n raised to that power: O(n^2) or O(n^3). If it's exponential, that's usually something like 2^n, 3^n, or in general b^n for any number b greater than one. And then we also have O(n!), n factorial. But we also have things in between these, like square root of n, log(log n), n to the fifth, and so on. Actually, almost any mathematical expression containing n can be wrapped in a big O, and it's valid big O notation.
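
For reference, these classes ordered from smallest to largest growth form the standard hierarchy (implied by the lesson, written out here in LaTeX):

```latex
O(1) \subset O(\log n) \subset O(\sqrt{n}) \subset O(n) \subset O(n \log n)
\subset O(n^2) \subset O(n^3) \subset O(2^n) \subset O(n!)
```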

Now we want to talk about some properties of big O. To reiterate what we just saw in the last two slides, big O only really cares about what happens when the input becomes really big: we're not interested in what happens when n is small, only in what happens as n goes to infinity. This is how and why we get the first two properties. The first is that we can simply remove constant values added in our big O notation, because if you're adding a constant to infinity, well, that's just infinity; and if you're multiplying a constant by infinity, that's still infinity. So we can ignore constants.
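
Written out, the two properties just described say that for any constant c greater than zero:

```latex
O(n + c) = O(n), \qquad O(c \cdot n) = O(n)
```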

But of course, this is all theoretical. In the real world, if your constant is of size two billion, that's probably going to have a substantial impact on the running time. Now, let's look at a function f, which I've defined over some input size n. If f(n) = 7 log(n)^3 + 15n^2 + 2n^3 + 8, well, O(f(n)) is just O(n^3), because n^3 is the biggest, most dominant term in this function.

All right, now let's look at some concrete examples of how big O is used. Both of the following run in constant time with respect to the input size, because they don't depend on n at all. On the left, where we're just evaluating some mathematical formula, yes, that's constant time. On the right, we're doing a loop, but the loop itself doesn't depend on n, so it also runs in a constant amount of time. As our input size gets arbitrarily large, that loop is still going to run in the same amount of time regardless.
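
The slides aren't reproduced in this transcript, but the two constant-time snippets being described might look something like this in Python (variable names are illustrative):

```python
# Left: a fixed arithmetic formula -- no dependence on n, so O(1).
a = 1
b = 2
c = a + 5 * b

# Right: a loop with fixed bounds -- it runs the same number of
# iterations no matter how large n gets, so it is also O(1).
i = 0
while i < 11:
    i += 1
```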

Now let's look at a linear example. Both of the following actually run in linear time with respect to the input size n, because we do a constant amount of work n times. On the left, we're incrementing the counter i by one each time, so f(n) = n, and clearly when we wrap this in a big O, we get O(n). On the right, it's a little bit more complicated: we're not incrementing by one, we're incrementing by three, so we're going to finish that loop three times faster. So f(n) = n/3, but since we ignore multiplicative constants, that's still O(n).
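
A Python sketch of what those two loops look like (not the lesson's exact slide code):

```python
n = 10_000

# Left: i goes up by one, so the body runs n times -> f(n) = n, O(n).
i = 0
while i < n:
    i += 1

# Right: i goes up by three, so the body runs about n/3 times.
# f(n) = n/3, and dropping the constant factor still gives O(n).
i = 0
while i < n:
    i += 3
```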

Our next example is two algorithms that run in quadratic time. The first one seems obvious: we do n work n times, so n times n is O(n^2). But observe the second one: I changed the zero to an i, so the inner loop starts at i instead of at zero. Pause the video and try to figure out why that's also O(n^2). Okay, let's go over the solution. First, just focus on the second loop; the first loop isn't as important. Since i goes from zero to n, the amount of looping done in the inner loop is directly related to what i is, and remember, i is changing in the outer loop. So if we fix i to be zero, we do n work.

If we fix i to be one, we do n - 1 work. If we fix i to be two, we do n - 2 work, et cetera. So the question then becomes: what is n + (n - 1) + (n - 2) + (n - 3) + ...? Well, this is a well-known identity, and it turns out to be n(n + 1)/2. So if we wrap this in a big O and split the equation into n^2/2 + n/2, we can see that this is O(n^2), since n^2 is the dominant term.
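
A Python sketch of the two quadratic loops (the second is the one where the zero was changed to i):

```python
n = 500
count = 0

# First: the inner loop always does n iterations, n times -> O(n^2).
for i in range(n):
    for j in range(n):
        count += 1

# Second: the inner loop starts at i, doing n, n-1, n-2, ... iterations.
# The total is n + (n-1) + ... + 1 = n(n+1)/2, which is still O(n^2).
for i in range(n):
    for j in range(i, n):
        count += 1
```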

Now let's look at perhaps a more complicated example. Earlier, you may have been wondering: how do we ever get these logarithmic or linearithmic time complexities? Well, here's a classic algorithm, binary search, which yields a logarithmic time complexity when run on a sorted array. What this algorithm does is start by making two pointers, one at the very start and one at the very end of the array. Then it selects a midpoint between the two and checks whether the value we're looking for was found at the midpoint. If it has found the value, it stops; otherwise, it discards one half of the array and adjusts either the high or the low pointer.

Remark that even in the worst case, we're still discarding half of the array each iteration, so very, very quickly we're going to run out of range to check. If you do the math, the worst case is that you will do exactly log base 2 of n iterations, meaning that binary search runs in logarithmic time. A very cool, very powerful algorithm.
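
A Python sketch of binary search on a sorted list (the function name is illustrative):

```python
def binary_search(sorted_values, target):
    # Two pointers at the very start and very end of the array.
    lo, hi = 0, len(sorted_values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # midpoint between the two pointers
        if sorted_values[mid] == target:
            return mid        # found it, stop
        elif sorted_values[mid] < target:
            lo = mid + 1      # discard the lower half
        else:
            hi = mid - 1      # discard the upper half
    return -1  # not found; at most about log2(n) iterations happened

print(binary_search([1, 3, 4, 7, 9, 12], 7))  # -> 3
```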

Here's a slightly different example worth going over. First, notice that there is an outer loop with a counter i that does n work. Then notice that there are two inner loops, one that does 3n work and another that does 2n work. The rule we use to determine the complexity of this algorithm is: multiply loops on different levels, and add loops that are on the same level, generally speaking. Using that rule, it takes n work to do the outer loop, multiplied by 3n + 2n for both inner loops, which gives us f(n) = 5n^2, which is O(n^2).
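
A Python sketch of that loop structure (names are illustrative):

```python
n = 300
count = 0

for i in range(n):          # outer loop: n iterations
    for j in range(3 * n):  # first inner loop: 3n iterations
        count += 1
    for j in range(2 * n):  # second inner loop: 2n iterations
        count += 1

# Inner loops on the same level add (3n + 2n); the outer loop
# multiplies: f(n) = n * 5n = 5n^2, so this is O(n^2).
print(count == 5 * n * n)  # True
```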

All right, so this next one looks very similar, but it's actually quite different. On the outer loop, i goes from zero to 3n, so there's 3n work done on the outside. We then have to multiply that by what's going on inside. Inside, the first loop's j goes from 10 to 50, so it does exactly 40 iterations every time; that's a constant 40 amount of work. Then the second loop: j goes up to n^3, but j increases as j = j + 2, so it's accelerated a little and we finish that loop twice as fast, doing n^3/2 work. So on the inside we get 40 + n^3/2, and we have to multiply that by the outer 3n, giving f(n) = 3n * (40 + n^3/2). Wrapping that in a big O, O(f(n)) gives us O(n^4), because n^4 is the dominant term in our function f(n).
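
A Python sketch of that structure (again illustrative, not the slide's exact code):

```python
n = 20
count = 0

for i in range(3 * n):       # outer loop: 3n iterations
    for j in range(10, 50):  # exactly 40 iterations, a constant
        count += 1
    j = 0
    while j < n ** 3:        # up to n^3, stepping by 2 -> n^3 / 2 iterations
        count += 1
        j += 2

# f(n) = 3n * (40 + n^3 / 2); the dominant term is (3/2) n^4, so O(n^4).
```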

Some other classic examples of big O: if we have to find all the subsets of a set, that takes an exponential amount of time, because there are 2^n subsets. Finding all permutations of a string takes O(n!). Another classic one is merge sort, which takes n times log n, so O(n log n). And if we have to iterate over all the cells of an array of size n by n, we have O(n^2) time.
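
As a quick sanity check of those counts, using Python's itertools (purely illustrative; the lesson doesn't include this code):

```python
from itertools import combinations, permutations

s = [1, 2, 3, 4]
n = len(s)

# All subsets: each element is either in or out -> 2^n of them.
num_subsets = sum(1 for r in range(n + 1) for _ in combinations(s, r))
print(num_subsets == 2 ** n)  # True: 16 subsets

# All permutations of a string of length n -> n! of them.
num_perms = sum(1 for _ in permutations("abcd"))
print(num_perms)  # 24 = 4!
```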
