Bellman-Ford algorithm

Graph Theory Algorithms
15 minutes

Transcript

Of all the shortest path algorithms in graph theory, Bellman-Ford is definitely one of the simplest. Yet I struggled as an undergrad trying to learn this algorithm, which is part of the reason I'm making this video. So what is the Bellman-Ford algorithm? In short, it's a single source shortest path algorithm. This means that it can find the shortest path from the starting node to all other nodes in the graph. As you can imagine, this is very useful.

However, Bellman-Ford is not the ideal choice for most single source shortest path problems, because it has a much worse time complexity than Dijkstra's algorithm. In practice, Bellman-Ford runs in a time complexity proportional to the product of the number of edges and the number of vertices, while Dijkstra's can do much better, at around O(E + V log V) with a binary heap. So when would we ever use the Bellman-Ford algorithm? The answer is when Dijkstra's fails, and this can happen when the graph has negative edge weights. When a graph has negative edge weights, it is possible that a negative cycle can manifest itself, and when it does, it is of critical importance that we are able to detect it. If this happens and we're using Dijkstra's to find the shortest path, we'll get stuck in an infinite loop because the algorithm will keep finding better and better paths.

A neat application of Bellman-Ford and negative cycles is in finance and economics, when performing an arbitrage between two or more markets. I'm not an expert, but this is when prices between different markets are such that you can cycle through each market with a security such as a stock or currency and end up with more than you originally started with, essentially getting risk-free gains. Let's look at how negative cycles can arise, because this seems important. Here is a graph I made with directed edges, some of which are negative. I've labeled our starting node to be node zero.
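The video doesn't show how arbitrage maps onto negative cycles, but the standard trick is to model each market or currency as a node and each exchange rate r as an edge with weight -log(r): a cycle whose rates multiply to more than 1 then has a total weight below zero, which is exactly a negative cycle that Bellman-Ford can detect. A minimal sketch with invented rates (the numbers and class name are mine, purely for illustration):

import java.util.Arrays;

public class ArbitrageWeights {
  public static void main(String[] args) {
    // Invented exchange rates, rows = from currency, columns = to currency.
    double[][] rate = {
      { 1.00, 0.90, 0.80 },  // from currency 0
      { 1.11, 1.00, 0.91 },  // from currency 1
      { 1.27, 1.12, 1.00 }   // from currency 2
    };
    int n = rate.length;
    double[][] weight = new double[n][n];
    for (int i = 0; i < n; i++)
      for (int j = 0; j < n; j++)
        // Rates multiplying to more than 1 around a cycle is equivalent to the
        // -log weights summing to less than 0, i.e. a negative cycle.
        weight[i][j] = -Math.log(rate[i][j]);
    System.out.println(Arrays.deepToString(weight));
  }
}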

And our goal would be to find the distance from zero to every other node in a single source shortest path context, but right now we are only interested in detecting negative cycles. I will label blue nodes as regular nodes, red nodes as nodes directly involved in a negative cycle, and yellow nodes as those reachable by a negative cycle. One way negative cycles can emerge is through negative self loops. What happens is that once we reach a self loop, we can stay in that loop for a near infinite amount of time before exiting. As a result, everywhere reachable by the cycle has a best cost of negative infinity, which, depending on your problem, may either be good or bad. In this graph, nodes 2, 3, 4 and 5 are all reachable by node one.

So they all have a best cost of negative infinity with regards to the single source shortest path problem. Let's look at another example. In this graph, a negative cycle manifests itself, but not as the result of a negative self loop. Instead, it arises through a cycle of nodes whose net gain is less than zero. If you add up the edge values 1, 4 and -6 attached to the nodes 1, 2 and 3, the net change is -1. If we look at where the cycle can reach, we can see that the entire right side of the graph is affected.

So hardly any nodes are safe from this negative cycle. Now let's look at the actual steps involved in the Bellman-Ford algorithm. First, we'll need to define a few variables. Let E be the number of edges in the graph. Let V be the number of vertices. Let S be the ID of the starting node.

In this case, S is short for start. And lastly, let D be an array of size V that tracks the best distance from S to each node. The first thing we'll want to do is set every entry in D to positive infinity. This is because the distance to each node is initially unknown, and we have no idea how far away each node is. Next, we'll want to set the distance to the starting node to be zero, because we're already there. The last part of the algorithm is to relax each edge V - 1 times. Relaxing an edge simply means taking an edge and trying to update the value from where the edge starts to where it ends.

In terms of code, this is all we need to do: we loop V - 1 times, and for each edge, we relax the edge. In the relaxing step, what we do is look at the value of where the edge starts, add the edge cost, and see if that's better than where we're trying to go, and if so, update it with the shorter path value. To actually detect negative cycles, we don't do anything too special.
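As a rough sketch of that main loop in Java, assuming an Edge type with public from, to and cost fields (like the directed edge the video defines later) and the edges, V and start names described above:

// Sketch of the first pass: initialize the distance array and relax every
// edge V - 1 times. Edge is assumed to have 'from', 'to' and 'cost' fields.
static double[] relaxAllEdges(Edge[] edges, int V, int start) {
  double[] dist = new double[V];
  java.util.Arrays.fill(dist, Double.POSITIVE_INFINITY);
  dist[start] = 0;
  for (int i = 0; i < V - 1; i++) {
    for (Edge edge : edges) {
      // If the path through edge.from plus the edge cost is shorter, take it.
      if (dist[edge.from] + edge.cost < dist[edge.to]) {
        dist[edge.to] = dist[edge.from] + edge.cost;
      }
    }
  }
  return dist;
}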

All we do is run the algorithm a second time. What we're doing in the second pass is checking for any nodes that update to a better value than the known best value, and if they do, then they're part of a negative cycle and we want to mark that node as having a cost of negative infinity.
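Continuing the sketch above, that second detection pass might look like this, operating on the dist array produced by the first pass:

// Second pass: anything that can still be relaxed is affected by a negative
// cycle, so pin its best value to negative infinity instead of improving it.
for (int i = 0; i < V - 1; i++) {
  for (Edge edge : edges) {
    if (dist[edge.from] + edge.cost < dist[edge.to]) {
      dist[edge.to] = Double.NEGATIVE_INFINITY;
    }
  }
}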

Let's look at a full example with a graph I made. Again, we will start on node zero and find the shortest path to every other node. On the right, I have illustrated the distance array D. Watch the values in this array change as the algorithm executes. Right now, all the values in the array are set to positive infinity, as per the first step of the algorithm. In the second step, we set the starting node's value to zero. Now the algorithm starts and we are on the first iteration, where we attempt to relax each edge. But before we continue, I have an important note at the bottom of the screen which says that the edges do not need to be processed in any particular order.

I may process the edges with one ordering and you may process them with another ordering, and we may not end up with the same values on each iteration, but we will get the same result in the distance array at the very end. I will highlight the edge currently being processed in orange and update the distance array whenever appropriate. Right now, the best value to node one is five, because a distance of five is better than a distance of positive infinity. Then node two gets its value updated to 25, because node one had a value of five from before, and the edge from node one to node two is 20, for a total of 25. Then a similar thing happens to node five and node six as well. By the way, any edge in dark gray has already been processed in this iteration. Next up, node three gets its value updated from infinity to 35.

This is because the best value to node two so far is 25, and 25 plus the edge cost of 10 is 35. Then the edge from two to four updates node four to a best value of 100. Up next is an interesting edge, because it was able to update node two's best value from 25 to 20 by taking the value in node three, which is currently 35, adding a weight of minus 15 and giving us a better value of 20. So this is all the algorithm does: it processes each edge, performing relaxation operations. I'll let the animation play for the rest of this iteration. So iteration one is over, and there are eight more iterations to go. But for the sake of simplicity, I'll just play one more iteration.

To give you an idea of how the algorithm works, we reset all the edges and start processing the edges again. You'll notice that a lot less updating happens in the distance array this round, particularly because I unintentionally selected the edges to be processed in a more or less optimal order. So that's the end of the second iteration. If we fast forward to the end, here's the resulting distance array. However, we're not done; we still need to find the negative cycles.

Let's execute the algorithm a second time. Same procedure as usual: just relax each edge, but when we are able to relax an edge, update the node's value to negative infinity instead. Let's process some edges until something interesting happens. So it appears that when we went from node two to node three, we were able to relax the edge and obtain a better value for node three than was previously there. So node three is part of some negative cycle, therefore I will mark it as red.

Similarly, node four is connected to a negative cycle, although indirectly. In the distance table, I do not distinguish between nodes which are reachable by a negative cycle and those which are directly involved in one, so there's no way to tell them apart; feel free to add some logic to the Bellman-Ford algorithm if you need to make this distinction. Continuing on, node two is also trapped in the cycle. And the last node also affected by the cycle is node nine on the right. Let's finish up this iteration by processing the rest of the edges. So that's it for the first iteration; there are another eight iterations to perform. In this example, we happened to detect all cycles on the first iteration.

But this was a coincidence. In general, you really do need the other eight iterations. This is because you want the negative cycle's minus infinity values to propagate throughout the graph. The propagation is highly dependent on the order in which the edges are being processed, but having V - 1 iterations ensures that this propagation occurs correctly. All right, now I want to have a look at some source code.

You can find a link in the description below, or you can go to github.com/williamfiset/algorithms. Here we are on GitHub in my algorithms repository. Now if you scroll down and look for Bellman-Ford under the graph theory section, you can see that currently there are two different implementations: one for a graph represented as an edge list, and another for a graph represented as an adjacency list. Today we'll have a look at the edge list implementation.

So in the edge list implementation, the first thing I do is define a directed edge, which simply consists of an edge that goes from a node to a node with a certain cost. Next, let's have a look at the actual algorithm itself. So for Bellman-Ford, what we need is, well, a graph. Since this is an edge list, we just pass in all the edges. We'll also need the number of vertices in the graph and some starting node.

And what we're returning is that distance array. All right, so let's initialize the distance array and populate it with the special value Double.POSITIVE_INFINITY, then set dist[start] to be zero. And then, just as the pseudocode said, loop V - 1 times, and for each edge, relax the edge. So that's what we're doing here. Now, the second pass of the algorithm is to detect negative cycles.

So we run the algorithm a second time: loop V - 1 times, and for each edge, relax the edge, but this time, instead of updating the node's value, we set it to Double.NEGATIVE_INFINITY. This is a special value defined in Java that represents negative infinity, and no matter what value you add to Double.NEGATIVE_INFINITY, it will still be negative infinity, unless you add Double.POSITIVE_INFINITY, which I think gives you Double.NaN, or something like that. And that's the entire algorithm; we just return the distance array. If you look in the main method, it shows you how to actually create a graph.

Add some edges, and then run Bellman-Ford to find the distance from a starting node to all other nodes in the graph. And that is Bellman-Ford. Guys, thank you for watching, and please leave your comments below. I'll catch you next time.
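For reference, here is a minimal self-contained sketch of an edge-list Bellman-Ford in the same spirit as the repository code, including a small main method showing how to build a graph and run the algorithm. It is a sketch rather than a verbatim copy, so the class name and the example graph are my own:

import java.util.Arrays;

public class BellmanFordEdgeList {

  // A directed edge from one node to another with a given cost.
  static class Edge {
    int from, to;
    double cost;
    Edge(int from, int to, double cost) {
      this.from = from;
      this.to = to;
      this.cost = cost;
    }
  }

  // Returns the shortest distance from 'start' to every node; nodes affected
  // by a negative cycle end up with a value of Double.NEGATIVE_INFINITY.
  static double[] bellmanFord(Edge[] edges, int V, int start) {
    double[] dist = new double[V];
    Arrays.fill(dist, Double.POSITIVE_INFINITY);
    dist[start] = 0;

    // First pass: relax every edge V - 1 times.
    for (int i = 0; i < V - 1; i++)
      for (Edge edge : edges)
        if (dist[edge.from] + edge.cost < dist[edge.to])
          dist[edge.to] = dist[edge.from] + edge.cost;

    // Second pass: anything that can still be relaxed touches a negative cycle.
    for (int i = 0; i < V - 1; i++)
      for (Edge edge : edges)
        if (dist[edge.from] + edge.cost < dist[edge.to])
          dist[edge.to] = Double.NEGATIVE_INFINITY;

    return dist;
  }

  public static void main(String[] args) {
    // A small made-up graph with one negative edge and no negative cycle.
    int V = 4, start = 0;
    Edge[] edges = {
      new Edge(0, 1, 5), new Edge(1, 2, -3),
      new Edge(2, 3, 4), new Edge(0, 3, 10)
    };
    double[] dist = bellmanFord(edges, V, start);
    for (int i = 0; i < V; i++)
      System.out.printf("Distance from %d to %d is %.2f%n", start, i, dist[i]);
  }
}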
