Edmonds Karp | Network Flow

Graph Theory Algorithms
9 minutes

Transcript

Hello and welcome. My name is William, and today we're going to probe even further into network flow. We're going to be talking about a specific implementation of the Ford Fulkerson method, which is the Edmonds Karp algorithm. Edmonds Karp is another maximum flow algorithm which uses a different technique to find augmenting paths through the flow graph. Before we get started, let me give you a refresher on what we're trying to do. We are trying to find the maximum flow on a flow graph, because we know that finding the maximum flow is really useful for finding bipartite matchings and for solving a whole host of other problems.

So far, we've looked at one other technique to find the maximum flow, which is to use the Ford Fulkerson method with a depth first search. At a high level, all we want to do is repeatedly find augmenting paths from the source to the sink, augment the flow, and then repeat this process until no more paths exist. The key takeaway here is that the Ford Fulkerson method does not specify how to actually find these augmenting paths, so this is where we can optimize the algorithm. A few videos ago, we saw that the Ford Fulkerson method can be implemented with a depth first search to find the maximum flow. However, the pitfall with that technique was that the time complexity depended on the capacity values of the edges in the graph.
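
To make that loop concrete, here is a minimal sketch of the Ford Fulkerson method in Python. This is not the source code from the video; the names and the adjacency-matrix representation are my own, and the part left as a parameter, find_augmenting_path, is exactly where a depth first search and a breadth first search implementation differ.

```python
def ford_fulkerson(capacity, source, sink, find_augmenting_path):
    """Generic Ford Fulkerson loop over an n x n capacity matrix.

    `find_augmenting_path` should return a source-to-sink path as a list of
    (u, v) edges with positive remaining capacity, or None if no path exists.
    How it searches (DFS vs BFS) is deliberately left to the caller.
    """
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]   # flow[u][v] = flow currently on edge u -> v
    max_flow = 0
    while True:
        path = find_augmenting_path(capacity, flow, source, sink)
        if path is None:                  # no augmenting path left -> flow is maximal
            break
        # bottleneck = smallest remaining capacity along the path
        bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:
            flow[u][v] += bottleneck      # push flow along the forward edge
            flow[v][u] -= bottleneck      # which opens up the residual edge v -> u
        max_flow += bottleneck
    return max_flow
```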

This is because the depth first search picks edges to traverse in such a way that we might only ever be able to push one unit of flow in each iteration. This is really bad and can kill the time complexity. Even though it's highly unlikely to happen in practice, it's absolutely something we want to avoid should it happen. Right now, the time complexity of Ford Fulkerson with a depth first search is big O of E times f, where E is the number of edges and f is the maximum flow. The idea behind Edmonds Karp is that instead of using a depth first search to find augmenting paths, we should use a breadth first search to get a better time complexity. Big O of V times E squared may not look like a better time complexity, but it actually is. What's different is that the time complexity, while it might not look great, does not depend on the capacity value of any edge in the flow graph, which is crucial.
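
As a concrete illustration of why the capacity dependence hurts, here is the classic textbook worst case for a depth first search based implementation. This small graph is my own illustration, not the one from the video.

```python
# Two wide routes s -> a -> t and s -> b -> t, plus a 1-capacity cross edge a -> b.
# An unlucky DFS can alternate between s -> a -> b -> t and s -> b -> a -> t
# (the second path uses the residual of the cross edge), pushing only one unit
# of flow per iteration, so it needs about 2 * C iterations on a 4-node graph.
C = 1_000_000
s, a, b, t = 0, 1, 2, 3
capacity = [
    # s  a  b  t
    [0, C, C, 0],   # edges out of s
    [0, 0, 1, C],   # edges out of a (a -> b is the cross edge)
    [0, 0, 0, C],   # edges out of b
    [0, 0, 0, 0],   # t has no outgoing edges
]
# The maximum flow here is 2 * C, matching the O(E * f) behaviour described above.
```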

We call such an algorithm, one that doesn't depend on the actual input values, a strongly polynomial algorithm, and that's exactly what Edmonds Karp is and why it was so revolutionary at the time. Edmonds Karp can also be thought of as an algorithm which, in each iteration, finds the shortest augmenting path from s to t, that is, shortest in terms of the number of edges used. Using a breadth first search during Edmonds Karp ensures that we find the shortest path. This is a consequence of each edge being unweighted. When I say unweighted, I mean that as long as an edge has a positive capacity, we don't distinguish between one edge being any better or worse than any other edge. Now, let's look at why we might care about using Edmonds Karp.
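
Here is what that breadth first search could look like in Python, using the same adjacency-matrix representation as the earlier sketch. Again, this is illustrative code rather than the video's implementation; because BFS explores the residual graph level by level over unweighted edges, the first time it reaches the sink it has found a shortest augmenting path in terms of edge count.

```python
from collections import deque

def bfs_augmenting_path(capacity, flow, source, sink):
    """Find a shortest augmenting path (fewest edges) in the residual graph.

    Returns the path as a list of (u, v) edges, or None if the sink cannot
    be reached using only edges with positive remaining capacity.
    """
    n = len(capacity)
    parent = [-1] * n                 # parent[v] = node from which we discovered v
    parent[source] = source
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in range(n):
            # only take an edge whose remaining capacity is greater than zero
            if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                parent[v] = u
                if v == sink:                      # stop as soon as the sink is reached
                    path = []
                    while v != source:             # retrace the path via parent links
                        path.append((parent[v], v))
                        v = parent[v]
                    return path[::-1]
                queue.append(v)
    return None
```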

Suppose we have this flow graph and we want to find what the maximum flow is. If we're using a depth first search, we might do something like this: start at the source and do a random depth first search forwards. So after a while of zigzagging through the flow graph, we are able to find the sink. As we just saw, a depth first search has the chance to cause long augmenting paths, and longer paths are generally undesirable because the longer the path, the higher the chance for a small bottleneck value, which results in a longer runtime. Finding the shortest path from s to t, again in terms of the number of edges, is a great approach to avoid the depth first search worst case scenario and reduce the length of augmenting paths. To find the shortest path from s to t, do a breadth first search starting at the source and ending at the sink while exploring the flow graph.

Remember that we can only take an edge if the remaining capacity of that edge is greater than zero. In this example, all the edges outwards from the source have a remaining capacity greater than zero, so we can add all the neighbors to the queue when we're doing the breadth first search step. And then we keep going forwards, so add all reachable neighbors to the queue and continue. And now the breadth first search has reached the sink, so we can stop. In the real algorithm, we would stop as soon as any of the edges reaches the sink.

But just for symmetry, I show three edges here entering the sink, while in reality we would stop as soon as one of them reaches the sink. If we assume that the bottom edge made it to the sink first and we retrace the path, we get the following augmenting path. But we didn't just find any augmenting path; we found a shortest length augmenting path. So to augment the flow, do the usual: find the bottleneck value by finding the smallest remaining capacity of all the edges along the path, then augment the flow values along the path by the bottleneck. So that was the first path. However, we're not done yet.
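
That bottleneck-and-augment step is the same in every iteration, so in code it can be pulled out into a small helper. This follows the same conventions as the sketches above, and the name augment is mine, not from the video.

```python
def augment(capacity, flow, path):
    """Push the bottleneck value of `path` through the flow; return the bottleneck."""
    # the bottleneck is the smallest remaining capacity of all edges along the path
    bottleneck = min(capacity[u][v] - flow[u][v] for u, v in path)
    for u, v in path:
        flow[u][v] += bottleneck   # augment the flow along the forward edge
        flow[v][u] -= bottleneck   # update the residual edge so later paths can undo flow
    return bottleneck
```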

Let's continue finding paths until the entire graph is saturated. Recall that while exploring the flow graph, we can only reach a node if the remaining capacity of the edge to get to that node is greater than zero. For instance, the reachable neighbors of the source node in this case do not include the bottom left node, because the edge from the source to the bottom left node has a remaining capacity of zero. All right, keep exploring until the sink is reached. And now we've reached the sink once more, so find the bottleneck value along this path.

Then use the bottleneck value to update the flow along the augmenting path. Don't forget to update the residual edges. And we're still not done, because there still exists another augmenting path. Now there only exists one edge outwards from the source with a remaining capacity greater than zero, so it's the only edge we can take, so we follow it. There's also only one edge to follow from the second node, because the other edges have a remaining capacity of zero. And now that the breadth first search has reached the sink, we can trace back the edges that were used.

We can find the bottleneck by finding the minimum remaining capacity along the path and augment the flow once more. And now you can see that there are no more augmenting paths left to be found, because all the edges leading outwards from the source have a remaining capacity of zero. However, more generally, we know to stop Edmonds Karp when there are no more augmenting paths from s to t, because we know we cannot increase the flow anymore. If this is the case, the maximum flow we get from running Edmonds Karp is the sum of the bottleneck values. If you recall, in the first iteration we were able to push five units of flow, in the second iteration ten units, and in the last iteration five units, for a total of twenty units of flow. Another way to find the maximum flow is to sum the flow values going into the sink, which I have circled in red.
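
Putting the pieces together, Edmonds Karp is just the Ford Fulkerson loop driven by the breadth first search, and the maximum flow accumulates as the sum of the bottleneck values. The graph below is a small made-up example, not the 20-unit graph from the video, and it reuses the bfs_augmenting_path and augment helpers sketched earlier.

```python
def edmonds_karp(capacity, source, sink):
    """Maximum flow via Edmonds Karp: repeat BFS + augment until no path remains."""
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    max_flow = 0
    while True:
        path = bfs_augmenting_path(capacity, flow, source, sink)
        if path is None:            # no more augmenting paths from s to t -> stop
            break
        max_flow += augment(capacity, flow, path)   # add this iteration's bottleneck
    return max_flow

# Small made-up example: two routes from s to t plus a cross edge.
capacity = [
    # s   x   y   t
    [0, 10,  5,  0],   # s
    [0,  0,  3, 10],   # x
    [0,  0,  0,  5],   # y
    [0,  0,  0,  0],   # t
]
print(edmonds_karp(capacity, source=0, sink=3))   # prints 15
```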

In summary, this is what we learned. Using a depth first search on a flow graph can sometimes find long, windy paths from the source to the sink. This is usually undesirable, because the longer the path, the smaller the bottleneck value, and the longer the runtime. Edmonds Karp tries to resolve this problem by finding the shortest length augmenting paths from the source to the sink using a breadth first search. However, more importantly, the big achievement of Edmonds Karp is that its time complexity of big O of V times E squared is independent of the max flow, so it doesn't depend on the capacity values of the flow graph.

And that's Edmonds Karp in a nutshell. Thank you for watching. In the next video, we'll cover some source code. For now, please like this video if you learned something, and subscribe for more mathematics and computer science videos. Thank you.
