Bidirectional Iterative Deepening Depth First Search: The shortest path algorithm with the longest name.

Say this one 10 times fast: Bidirectional Iterative Deepening Depth Limited Depth First Search

Bidirectional Iterative Deepening Depth First Search (BDIDDFS), sometimes also called Bidirectional Iterative Deepening Depth Limited Depth First Search by those among us who like unnecessarily long names, is a graph traversal/search technique. It was originally developed by Richard Korf at Columbia University and described in his 1985 paper on iterative deepening search algorithms. It can be viewed as the culmination of several different but related graph and tree traversal algorithms thrown in a blender and poured out over ice. Here's the recipe for a BDIDDFS cocktail: take one part Depth First Search, one part Depth Limited Search, and one part Bidirectional Search. Shake well, serve over crushed data. Just kidding, but when the techniques described above are combined as outlined in Korf's paper, you get one seriously interesting shortest path algorithm that makes a strong argument as a memory efficient competitor to Breadth First Search.

In itself, Depth First Search is a staple classic, often the first tree/graph traversal technique one encounters. In its iterative deepening form, you gain the ability to perform what is, in effect, a breadth first search carried out via DFS. This affords us better memory efficiency than BFS can provide by eliminating the need to keep the entire frontier of discovered vertices in memory at once. If that wasn't enough, we're now starting TWO of these searches from opposite ends and working towards each other to gain even more efficiency: the overall search gets faster, and each search only needs to go about half as deep, shrinking its memory footprint further.

As if not being able to decide between two names wasn't enough, it's also difficult to categorize this algorithm exactly. Though one thing is for sure: it finds a shortest path.

One could argue that it's technically a single source (or, more precisely, single pair) shortest path algorithm, because that is in fact what it does: find the shortest path connecting one vertex to another in a graph. On the other hand, it works by starting separate searches from two different sources and ending them when they cross paths. I'll let you decide.

How it works

No matter what you want to name it, or how you want to categorize it, it is one seriously sweet algorithm. To fully understand what makes it so freaking awesome, you have to understand the algorithms it's pieced together from, which I named above but will cover in greater detail here:

  1. Depth First Search - DFS is the prototypical tree/graph traversal technique: you start at your chosen node (the root vertex), then explore its adjacent nodes, and their adjacent nodes, ad infinitum, using a stack: either implicitly via the call stack using recursion, or explicitly using a stack data structure and iteration (a minimal explicit-stack sketch follows this list). It is probably one of the most well studied of all tree/graph traversal techniques. Interestingly, it was invented in the late 19th century for the purpose of solving mazes.
  2. Iterative Deepening DFS - IDDFS uses a combination of a loop and an ever increasing depth limit - hence, iterative deepening - with the end result being what is effectively a Breadth First Search performed by manipulating the branching of a Depth First Search. While this technique is most often described for trees and directed acyclic graphs, it is possible to adapt it for undirected graphs containing cycles (as I will show here).
  3. Bidirectional Search - Bidirectional Search is more of a "technique" that can be layered on top of any number of search algorithms: one could implement a Bidirectional Dijkstra's Algorithm if one really wanted to. The important thing to know is that it uses two searches: one starting from the source vertex looking for the target vertex, and another starting from the intended target vertex looking for the source vertex of the first search. When the two searches find a common vertex, the search is over, as a path between the source and target vertices has been found.
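
Since item 1 mentions that DFS can be driven either by the call stack (recursion) or by an explicit stack, here is a minimal sketch of the explicit-stack version for comparison. The recursive form is what the rest of this article builds on; this sketch assumes the same Graph class (from better-graph.h, linked below) whose adjacency list entries expose a .vertex member, just like the code later in the article.

#include <map>
#include <stack>
#include <string>
#include "better-graph.h"   // the Graph class used throughout this article
using namespace std;

// Sketch: depth first search driven by an explicit stack instead of the call stack.
bool iterativeDfs(Graph& G, string start, string target)
{
  stack<string> st;                    // the explicit stack stands in for recursion
  map<string,string> path;             // visited set + predecessor links, as in the recursive version below
  st.push(start);
  path[start] = start;
  while (!st.empty())
  {
    string current = st.top(); st.pop();
    if (current == target)
      return true;                     // reached the goal
    for (auto next : G.adjList[current])
    {
      if (path.find(next.vertex) == path.end())
      {
        path[next.vertex] = current;   // remember how we got here
        st.push(next.vertex);
      }
    }
  }
  return false;                        // exhausted every vertex reachable from start
}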

We're going to implement this algorithm piece by piece, so you can really see how it comes together as the sum of its parts, starting from regular DFS and building up to BDIDDFS. To represent the graph I'll be using the data structure we created for the Dijkstra's Algorithm article, which you can obtain from my github here:

 https://github.com/maxgoren/examples/blob/main/better-graph.h

The first thing we'll need to do is implement a regular Depth First Search algorithm for graphs as our starting point. To focus more on the algorithm and less on implementation details, we're going to use the recursive version of DFS:

#include <algorithm>   // std::reverse, used later for path reconstruction
#include <iostream>
#include <map>
#include <string>
#include <vector>
#include "better-graph.h"   // Graph class from the Dijkstra's Algorithm article
using namespace std;

bool dfs(Graph& G, string current, string target, map<string, string>& path)
{
  cout<<current<<" -> ";
  if (current==target)
  {
    cout<<"found!\n";
    return true;
  }
  for (auto next : G.adjList[current])
  {
    if (path.find(next.vertex) == path.end())     // only visit vertices we haven't seen yet
    {
      path[next.vertex] = current;                // remember how we reached this vertex
      if (dfs(G, next.vertex, target, path))      // if this branch fails, keep trying the remaining neighbors
        return true;
    }
  }
  return false;
}

void depthFirstSearch(Graph& G, string start, string target)
{
  map<string, string> path;
  path[start] = start;
  dfs(G, start, target, path);
}

Not much to comment on there, pretty much standard operating procedure for a recursive depth first search graph traversal. The only thing I will point out is the use of the "path" variable (passed by reference) to keep track of which vertices we've already been to: this lets us traverse undirected graphs without getting caught in endless cycles, and it also lets us reconstruct our path afterwards.
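
As a quick illustration of that last point, here is a small sketch of how those predecessor links could be followed backwards to print the route once dfs() has reached the target (the bidirectional version later in this article does this properly in its buildpath() function); it only needs the headers we already included above:

void printRoute(map<string,string>& path, string start, string target)
{
  // Sketch: assumes dfs() returned true, so 'path' contains an entry for 'target'.
  vector<string> route;
  for (string v = target; v != start; v = path[v])
    route.push_back(v);                    // collect the route back-to-front by following predecessors
  route.push_back(start);
  reverse(route.begin(), route.end());     // flip it so it reads start ... target
  for (auto v : route)
    cout << v << " ";
  cout << endl;
}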

With that in place, it's a simple exercise to convert DFS to its iterative deepening counterpart. We'll need to add 'current_depth' and 'max_depth' variables, and of course the deepening loop from which the technique garners its name:

bool depthLimitedDepthFirstSearch(Graph& G, string current_vertex, string target, int depth, int max_depth, std::map<string,string>& path)
{
  int current_depth = depth;
  cout<<current_vertex<<" -> ";
  if (current_vertex == target)
  {
    cout<<"Found!\n";
    return true;
  }
  if (current_depth < max_depth)      // only branch deeper while we're under the depth limit
  {
   for (auto next : G.adjList[current_vertex])
   {
    if (path.find(next.vertex) == path.end())
    {
      path[next.vertex] = current_vertex;
      if (depthLimitedDepthFirstSearch(G, next.vertex, target, current_depth + 1, max_depth, path))
        return true;                  // otherwise fall through and try this vertex's next neighbor
    }
   }
  }
  return false;
}

The iterative deepening part is what really separates this from regular depth first search by changing the order in which vertices are discovered. It's this arbitrary depth limit that turns our depth first search into a de facto breadth first search: by capping the length of each probe, it prematurely forces the backtracking part of DFS to turn around and explore each vertex's other adjacent vertices instead of continuing depth-wise. Smart.

void iterativeDeepeningDepthFirstSearch(Graph& G, string start, string finish, int max_depth) {
   for (int depth_limit = 1; depth_limit <= max_depth; depth_limit++) {
      std::map<string,string> path;    // fresh visited/predecessor map for each depth limit
      path[start] = start;
      cout<<"Starting Traversal. Depth Limit: "<<depth_limit<<endl;
      if (depthLimitedDepthFirstSearch(G, start, finish, 0, depth_limit, path)) {
         cout<<endl;
         return;                       // target found at this depth, no need to deepen further
      }
      cout<<endl;
   }
}
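
Before we make this bidirectional, here is how the mono-directional version above might be invoked; this assumes a Graph G built the same way as in the main() function shown at the end of this article:

// Probe from "A" toward "G", retrying with depth limits 1 through 10.
iterativeDeepeningDepthFirstSearch(G, "A", "G", 10);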
 

And now we come to the crux of our changes: converting our algorithm into a bidirectional search. Our depthLimitedDepthFirstSearch() function requires no changes. The conversion happens in the control loop, plus the addition of a function to check whether our search paths have crossed:

void iterativeDeepeningBidirectionalDepthFirstSearch(Graph& G, string start, string finish, int max_depth)
{
  for (int depth_limit = 1; depth_limit <= max_depth; depth_limit++)
  {
    std::map<string, string> path1, path2;
    path1[start] = start;
    path2[finish] = finish;
    cout<<"Starting Traversal. Depth Limit: "<<depth_limit<<endl;
    depthLimitedDepthFirstSearch(G, start, finish, 0, depth_limit, path1);
    cout<<endl;
    depthLimitedDepthFirstSearch(G, finish, start, 0, depth_limit, path2);
    cout<<endl;
    if (pathsCrossed(start, finish, path1, path2))
    {
      cout<<"Brilliant!\n";
      break;
    }
  }
}

As you can see we are now keeping two 'path' maps, because we are conducting two searches: one from our intended source to our intended target, the other from our intended target back towards our original source. The clever part of this algorithm is this: neither search needs to actually reach its target; instead we check whether the two searches have discovered a common vertex. If a path intersection occurs, then our search is over, because a common vertex is a link that bridges the two search paths into one path! Here is the code for checking path intersection:

bool pathsCrossed(string start, string finish, map<string,string>& path1, map<string,string>& path2)
{
  for (auto check : path1) {
     if (path2.find(check.first) != path2.end()) {    // a vertex reached by both searches
         string intersect = check.first;
         cout<<"Found path intersection at: "<<intersect<<"\n";
         return buildpath(start, finish, intersect, path1, path2);
      }
  }
  return false;
}

Now, we've technically accomplished what we set out to do, but just checking whether a path exists is not nearly as useful as also being able to show what that path actually is, so here is the code to reconstruct the full route from the two partial paths meeting at the intersection vertex. Remember: we're going to need to reverse the first portion of our path before we concatenate the two!

bool buildpath(string start, string finish, string intersect, map<string, string>& path1, map<string, string>& path2)
{
    vector<string> finalpath;
    string front = intersect;
    finalpath.push_back(intersect);
    while (front != start)                         // walk path1's predecessor links back to the start
    {
        front = path1[front];
        finalpath.push_back(front);
    }
    reverse(finalpath.begin(), finalpath.end());   // that half was collected backwards
    string back = intersect;
    while (back != finish)                         // walk path2's predecessor links out to the finish
    {
        back = path2[back];
        finalpath.push_back(back);
    }
    finalpath.pop_back();                          // drop 'finish'; it gets printed separately below
    for (auto p : finalpath)
    {
        cout<<p<<" -> ";
    }
    cout<<finish<<" [Goal]\n";
    return true;
}
 

We now have our full implementation of Bidirectional Iterative Deepening Depth Limited Depth First Search! Let's take it for a test drive and see how it does:

int main()
{
  Graph G("Graph");
  G.addEdge("A", "B",1);
  G.addEdge("A", "D",3);
  G.addEdge("A", "E",3);
  G.addEdge("B", "C",2);
  G.addEdge("B", "D",2);
  G.addEdge("C", "E",5);
  G.addEdge("C", "F",3);
  G.addEdge("F", "G",1);
  G.showAdjList();
  iterativeDeepeningBidirectionalDepthFirstSearch(G, "A", "G", 10);
}
Output:

Graph
A: B, D, E, 
B: A, C, D, 
C: B, E, F, 
D: A, B, 
E: A, C, 
F: C, G, 
G: F, 

Starting Traversal. Depth Limit: 1
A -> B -> D -> E -> 
G -> F -> 
Starting Traversal. Depth Limit: 2
A -> B -> C -> D -> E -> 
G -> F -> C -> 
Found path intersection at: C
A -> B -> C -> F -> G [Goal]
Brilliant!

I must say, as far as graph searching algorithms go, this one is pretty fun, and pretty awesome. So where do we go from here? Well, both this and its mono-directional analog can be guided by a heuristic à la A* search. Sounds like the makings of a future article!

The source code for the examples shown in this article is available on my github:

I've included below two versions of this algorithm: one uses std::vector to keep track of visited nodes and reconstruct the path, the other uses std::map. std::vector may offer a slight improvement in memory efficiency over std::map, but I haven't tested this thoroughly.

Using std::map:

https://github.com/maxgoren/examples/blob/main/IDBDDFS.cpp

Using std::vector: 

https://github.com/maxgoren/examples/blob/main/IDBDDFSv2.cpp

https://github.com/maxgoren/examples/blob/main/better-graph.h
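
For the curious, here is a rough sketch of one way visited vertices and their predecessors could be tracked with a std::vector instead of a std::map. This is just an illustration of the idea, not the actual code from IDBDDFSv2.cpp: a linear scan replaces the map lookup, trading logarithmic-time searches for a flat, contiguous layout.

#include <string>
#include <utility>
#include <vector>
using namespace std;

// Each entry pairs a visited vertex with its predecessor on the search path.
typedef vector<pair<string,string>> PathList;

// Linear scan in place of map::find(); fine for small graphs like the example above.
bool visited(PathList& path, string v)
{
  for (auto& entry : path)
    if (entry.first == v)
      return true;
  return false;
}

// Record that we reached vertex 'v' by way of vertex 'from'.
void markVisited(PathList& path, string v, string from)
{
  path.push_back(make_pair(v, from));
}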

 

