Solving the N Queens Problem with Breadth First Search

I often like to circle back to things I've previously explored. It's beneficial to see them from a fresh perspective, especially when it comes to thinking algorithmically. The N queens problem is often used to introduce computer science students to the idea of combinatorial explosion, and by extension the algorithmic technique of backtracking. I know my previous post was also about solving a classic backtracking problem by a different means - but that post was about an improvement on the backtracking solution. Today's post is not an improvement, it's an alternative method. If anything, solving the N queens problem with breadth first search should drive home why we prefer backtracking over BFS for a certain class of problem (spoiler alert: memory usage).

Like the backtracking algorithm for solving N queens, the BFS solution also makes use of a promising function. Without it we'd simply be doing a brute force enumeration of the problem, and while that might (eventually) lead to a result using depth first search, the memory requirements of doing it breadth first would be completely infeasible.

The N Queens Problem as a Shortest Path Problem

Traditionally, the N queens problem is framed as a constraint satisfaction problem, and viewed from the angle of combinatorial optimization, backtracking is a natural solution. When distilled down to its most basic components, we find that backtracking relies on two techniques to arrive at its solution: a "promising" function to determine if the current node being examined is worth expanding, and depth first search, which is used to traverse the nodes as they are expanded. If we were to remove the promising function from the picture, we'd have the brute force solution - it would find the answer, but it would take a long time doing it. Still, this is important because we know that the N queens problem at its heart reduces to a depth first search, and if we can find a simple path with depth first search, then we can find a shortest path with breadth first search. If we change our depth first search to a breadth first search and keep the promising function, we can speed up our breadth first search in the same way we sped up our depth first search!

I will be using a simple 2D vector to represent the chessboard, though more compact representations are not only possible, but highly recommended in order to help alleviate the aforementioned high memory usage (a sketch of one such representation follows the code below).

#include <vector>
#include <queue>
#include <utility>
using namespace std;

typedef vector<vector<int>> ChessBoard;

// initialize an n x n board: 0 marks an empty square, 1 marks a queen
void init(ChessBoard& board, int n) {
    board = vector<vector<int>>(n, vector<int>(n, 0));
}
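
As an aside, here is a minimal sketch of one such compact alternative - my own illustration rather than code from this post - which exploits the fact that a valid placement has exactly one queen per row, so a single column index per row is enough:

// Hypothetical compact representation: board[r] holds the column of the
// queen in row r, or -1 if no queen has been placed in that row yet.
// A node is then n ints instead of an n x n grid, which matters when the
// queue is holding many thousands of boards at once.
typedef vector<int> CompactBoard;

void initCompact(CompactBoard& board, int n) {
    board = vector<int>(n, -1);
}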

Pruning the Search Tree with our Promising Function

The promising function is used to determine whether the move we are considering is worth taking. It checks to see if placing a queen at that position would put her under attack from any queens that have been placed previously. This means we have to check if any queens have already been placed in the same column or row, and if any queens can attack her from the diagonals. We can approach this task in two ways: row-wise or column-wise. They proceed identically, except the moves are rotated 90 degrees. The algorithms and descriptions below assume we are proceeding in row-wise order (top to bottom).

When a queen is placed in a position, if there is another queen in any of the places marked with an 'X', then the node is not promising and we can't place our queen there.


| X |   |   |   | X |   |   |   | X |   |
|   | X |   |   | X |   |   | X |   |   |
|   |   | X |   | X |   | X |   |   |   |
|   |   |   | X | X | X |   |   |   |   |
| X | X | X | X | Q | X | X | X | X | X |
|   |   |   | X | X | X |   |   |   |   |
|   |   | X |   | X |   | X |   |   |   |
|   | X |   |   | X |   |   | X |   |   |
| X |   |   |   | X |   |   |   | X |   |
|   |   |   |   | X |   |   |   |   | X |

If you're looking at the above diagram and thinking that it's pretty ridiculous if we have to scan what seems to be half of the positions on the board for every move, you can breathe a sigh of relief. Thankfully it's not as bad as it first appears: we don't actually need to check each spot. We can observe that some spots couldn't possibly be occupied yet, and so we don't need to bother checking them. Proceeding row-wise, we place a queen in a fresh row at each step, so we don't have to check whether any other queens are in our row - that would be impossible. Likewise, for checking whether any queens have been placed in the same column, we only need to check the column up to the last queen placed. So for checking the row and column, out of these positions:

|   |   |   |   | X |   |   |   |   |   |
|   |   |   |   | X |   |   |   |   |   |
|   |   |   |   | X |   |   |   |   |   |
|   |   |   |   | X |   |   |   |   |   |
| X | X | X | X | Q | X | X | X | X | X |
|   |   |   |   | X |   |   |   |   |   |
|   |   |   |   | X |   |   |   |   |   |
|   |   |   |   | X |   |   |   |   |   |
|   |   |   |   | X |   |   |   |   |   |
|   |   |   |   | X |   |   |   |   |   |

The only positions we actually need to check are these:


|   |   |   |   | X |   |   |   |
|   |   |   |   | X |   |   |   |
|   |   |   |   | X |   |   |   |
|   |   |   |   | X |   |   |   |
|   |   |   |   | Q |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |

The same observation holds true for the diagonal checks as well! We only need to check the diagonal squares above the current row, where queens could already have been placed:


| X |   |   |   |   |   |   |   |
|   | X |   |   |   |   |   | X |
|   |   | X |   |   |   | X |   |
|   |   |   | X |   | X |   |   |
|   |   |   |   | Q |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |
|   |   |   |   |   |   |   |   |

This gives us the following promising function, which runs in O(q) time, where q is the number of queens placed so far:

bool promising(ChessBoard& cb, int row, int col) {
    // check the column above the current row
    for (int k = 0; k < row; k++)
        if (cb[k][col] == 1)
            return false;

    // check the upper-left diagonal
    int dx = row - 1, dy = col - 1;
    while (dx >= 0 && dy >= 0)
        if (cb[dx--][dy--] == 1)
            return false;

    // check the upper-right diagonal
    dx = row - 1;
    int dz = col + 1;
    while (dx >= 0 && dz < (int)cb.size())
        if (cb[dx--][dz++] == 1)
            return false;

    return true;
}
 

Before placing a queen and pushing the node onto the queue, we check it with the promising function, and if it comes back false, we don't bother enqueueing the node. In this way we avoid filling up the queue with nodes that couldn't possibly lead to a solution.

Searching For the Path

With our promising function set up, we can start putting our search together. Breadth first search of a tree uses a FIFO queue to completely process one level of the tree before processing that level's children, and so on, which is why, when applied to a tree, breadth first search is sometimes called level order traversal. For our search tree, every time we place a queen, that is a new node in the tree. If, when we pop the next node off of the queue during our traversal, we have already placed N queens, then that node is a valid solution. Otherwise we try to place a queen in the next row and continue our search.

At every iteration we have to know which queens have already been placed where, and which row we are trying to place a queen on for the current iteration. To ensure this, we use the following data structure to represent a Node in our queue. This way, when we pop a Node off the queue, we have that information available to us immediately. Then, to start the search, we enqueue a blank board and row 0 for our node, as we are placing the first queen.

// a Node pairs the row we are trying to fill with the board state built so far
typedef pair<int,ChessBoard> Node;

bool solveBFS(ChessBoard& cb, int row) {
    bool found = false;
    queue<Node> fq;
    fq.push(make_pair(row, cb));

We then continue until there are no more nodes to explore, popping a node from the queue, checking which of its child nodes are promising, and enqueueing those that can possibly lead to a valid solution. By only expanding promising nodes we can have confidence that when N queens have been placed, we have a valid solution.

 
    while (!fq.empty()) {
        Node cr = fq.front(); fq.pop();
        int row = cr.first;
        ChessBoard cb = cr.second;
        if (row == (int)cb.size()) {
            // all N queens placed; validate() and printBoard() are defined
            // elsewhere (a sketch of both follows this listing)
            if (validate(cb)) {
                printBoard(cb);
                soln++;        // running count of solutions found
                found = true;
            }
        } else {
            for (int i = 0; i < (int)cb.size(); i++) {
                if (promising(cb, row, i)) {
                    cb[row][i] = 1;                  // tentatively place a queen
                    fq.push(make_pair(row+1, cb));   // enqueue a copy of the board
                    cb[row][i] = 0;                  // remove it and try the next column
                }
            }
        }
    }
    return found;
}
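
The validate() and printBoard() helpers used above aren't shown in the excerpt here, and soln is a running solution counter. For completeness, here is one way they might look - a sketch under those assumptions, not necessarily the original implementations. In a full program these declarations would appear before solveBFS.

#include <iostream>

int soln = 0;  // assumed global counter of solutions found by solveBFS()

// A possible validate(): re-check every placed queen with the promising
// function; since queens are placed one per row, this catches any conflicts.
bool validate(ChessBoard& cb) {
    for (int r = 0; r < (int)cb.size(); r++)
        for (int c = 0; c < (int)cb.size(); c++)
            if (cb[r][c] == 1 && !promising(cb, r, c))
                return false;
    return true;
}

// A possible printBoard(): pipe-delimited rows like the diagrams above.
void printBoard(ChessBoard& cb) {
    for (auto& row : cb) {
        for (int sq : row)
            cout << "| " << (sq == 1 ? 'Q' : ' ') << " ";
        cout << "|\n";
    }
    cout << "\n";
}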

It's actually quite similar to the inner loop of the backtracking solution, and with good reason. Those with enquiring minds may find it interesting to know that we can swap the FIFO queue out for a LIFO stack to arrive back at an iterative version of the backtracking algorithm, albeit one which explores the tree in a slightly different order than the recursive version. Something to keep in mind, however, is that when we are using the recursive backtracking algorithm, we are making and then undoing changes to one single instance of the board, which is passed around by reference. This is not possible when using breadth first search, or even the iterative version of the backtracking algorithm. The reason is that we need to enqueue the current state of the board, and if we enqueued references instead of copies of the board, we'd have a queue or stack full of references to identical boards!
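
To make that swap concrete, here is what the LIFO variant could look like - a sketch I'm adding for illustration, not code from this post - identical to solveBFS apart from the container, and it still copies the board into every node, unlike the recursive version:

#include <stack>

// Same pruned search, but with a LIFO stack: the tree is now explored depth
// first, giving an iterative form of the backtracking algorithm.
bool solveDFSIterative(ChessBoard& cb, int row) {
    bool found = false;
    stack<Node> st;
    st.push(make_pair(row, cb));
    while (!st.empty()) {
        Node cr = st.top(); st.pop();
        int r = cr.first;
        ChessBoard board = cr.second;
        if (r == (int)board.size()) {
            if (validate(board)) {
                printBoard(board);
                found = true;
            }
        } else {
            for (int i = 0; i < (int)board.size(); i++) {
                if (promising(board, r, i)) {
                    board[r][i] = 1;
                    st.push(make_pair(r+1, board));  // still a copy per node
                    board[r][i] = 0;
                }
            }
        }
    }
    return found;
}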

Solution 92
|   |   |   |   |   |   |   | Q |
|   |   |   | Q |   |   |   |   |
| Q |   |   |   |   |   |   |   |
|   |   | Q |   |   |   |   |   |
|   |   |   |   |   | Q |   |   |
|   | Q |   |   |   |   |   |   |
|   |   |   |   |   |   | Q |   |
|   |   |   |   | Q |   |   |   |

So, now we know that we can find our solution using breadth first search, and we know why the backtracking version is preferable from a memory usage point of view. But is there something to suggest using BFS instead? Is it any faster at the cost of all that memory? Err... Well... no. Timing both algorithms to find all solutions to the 8 queens problem shows the backtracking algorithm to be roughly 2.5x faster than breadth first search.*

N-Queens BFS: 19.4765ms, Solutions: 92
N-Queens DFS: 7.38989ms, Solutions: 92

* However, when you compile using gcc's -O3 level of optimization, BFS speeds up considerably while backtracking stays fairly stable. One can speculate that should somebody focus on optimizing the board allocator, BFS could potentially be faster than backtracking.
