For many people, the task of solving the maze is central to the micromouse robot problem. After a while they come to realise that it is secondary to actually making the robot move quickly and reliably through the maze. Even so, the maze has to be solved. My micromouse software expects to be able to test the maze for a complete solution every time it enters a cell while exploring. Not everyone does this but, if they do, how long does it take? I benchmarked my software to find out.

What is ‘solved’?

Before getting to the results, I should explain what I mean by ‘solved’. For me, the maze is solved when enough of it has been explored that I can be sure the optimal path does not pass through any unexplored cells. Actually, you only need to be sure not to pass through undiscovered walls, but I will leave that distinction for another time. To make that test, the maze is flooded twice. First, a flood is done assuming that all unknown walls are absent – this is the ‘open’ maze. Next, another flood is done assuming all unknown walls are present – this is the ‘closed’ maze.

Note that, until the goal has been found, the closed maze has no path to the goal.

The result of each flood is a value for the cost of the route. For the simplest kind of flood, that will be the Manhattan distance from start to goal. For my flood, the cost is a calculation based on how long the mouse is expected to take to get to the goal using the chosen path. The assumption is that any path that has to go around unexplored cells will be longer, and so the cost will be higher.

In the closed maze, unexplored cells are virtually walled off and so the path has to go around them. In the open maze, unexplored cells are available because they have virtually open walls. Thus, the closed cost will be higher than the open cost as long as there are unexplored cells on the optimal path. Once the open cost and the closed cost are the same, the maze can be considered solved and the micromouse can return to the start for speed runs.
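As a concrete illustration, here is a minimal sketch of that test in C++. It is not my mouse code: it uses a plain breadth-first flood whose cost is just a cell count (the simple Manhattan-style cost mentioned above, not my speed-based cost), and it assumes the walls are stored as two per-cell bitmasks – walls for walls known to be present and seen for walls whose state is known at all – kept consistent on both sides of each shared wall.

```cpp
#include <array>
#include <cstdint>
#include <queue>

constexpr int SIZE = 32;                  // 16 for a classic maze
constexpr uint16_t UNREACHED = 0xFFFF;

enum Wall : uint8_t { NORTH = 1, EAST = 2, SOUTH = 4, WEST = 8 };

// Assumed storage: for each cell, 'walls' has bits set where a wall is known
// to be present and 'seen' has bits set where the state of that wall is known.
std::array<uint8_t, SIZE * SIZE> walls{};
std::array<uint8_t, SIZE * SIZE> seen{};

inline int cellIndex(int x, int y) { return y * SIZE + x; }

// May the flood pass through this wall? Unknown walls are treated as absent
// in the 'open' maze and as present in the 'closed' maze.
bool canPass(int cell, Wall wall, bool closedMaze) {
  if (seen[cell] & wall) {
    return (walls[cell] & wall) == 0;
  }
  return !closedMaze;
}

// Plain breadth-first flood. The cost here is just the number of cells on the
// shortest path, returned as UNREACHED when there is no route - for example,
// the closed maze before the goal has been found.
uint16_t floodCost(int startX, int startY, int goalX, int goalY, bool closedMaze) {
  std::array<uint16_t, SIZE * SIZE> cost;
  cost.fill(UNREACHED);
  std::queue<int> frontier;
  cost[cellIndex(startX, startY)] = 0;
  frontier.push(cellIndex(startX, startY));
  while (!frontier.empty()) {
    int here = frontier.front();
    frontier.pop();
    int x = here % SIZE;
    int y = here / SIZE;
    const struct { Wall wall; int dx, dy; } moves[] = {
        {NORTH, 0, 1}, {EAST, 1, 0}, {SOUTH, 0, -1}, {WEST, -1, 0}};
    for (auto move : moves) {
      int nx = x + move.dx;
      int ny = y + move.dy;
      if (nx < 0 || nx >= SIZE || ny < 0 || ny >= SIZE) {
        continue;  // never step outside the maze
      }
      int next = cellIndex(nx, ny);
      if (cost[next] == UNREACHED && canPass(here, move.wall, closedMaze)) {
        cost[next] = cost[here] + 1;
        frontier.push(next);
      }
    }
  }
  return cost[cellIndex(goalX, goalY)];
}

// The maze counts as solved when the closed maze has a route to the goal and
// the open maze cannot offer anything better through unexplored cells.
bool isSolved(int goalX, int goalY) {
  uint16_t openCost = floodCost(0, 0, goalX, goalY, false);
  uint16_t closedCost = floodCost(0, 0, goalX, goalY, true);
  return closedCost != UNREACHED && closedCost == openCost;
}
```

The same flood runs twice, with closedMaze selecting how unknown walls are treated; because the open maze can never cost more than the closed maze, equal costs mean no unexplored cell can improve the route.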

Performance requirements

All of that means that a test for a solution requires the maze to be flooded twice and I need to do that every time I enter a cell while exploring. Clearly, the flood needs to be efficient. The mouse is moving and there is a limited amount of time available to make a decision about what to do next.

I never worried too much about it until I started work on a half-size mouse. Now the issue is potentially more serious. The cells are half the size and so, for the same speed, there is half the time available to test for a solution. Worse, the maze has four times as many cells to test in the available time. For a little extra challenge, the physically smaller processor is running at a maximum of 100MHz rather than the 168MHz I am used to.

It turned out that the method I was using for the classic contest was just too slow. Much too slow. A little refactoring, and a small compromise in the quality of the routes found, reduced the time to an acceptable level.

Criteria

What is acceptable? Well, suppose that the mouse finishes a turn 5mm before the boundary of a cell. It samples the walls, updates the map and tests for a solution. A decision about its next move must be made before it gets 5mm inside the next cell. It has 10mm of travel to decide. Also assume it is moving at a fairly optimistic 600mm/s. To move 10mm at 600mm/s takes about 17ms so the mouse has that long to flood the maze twice, compare the two costs and decide what to do next.

Results

I ran the revised solution test on the mazes from recent All Japan half-size micromouse contests, as well as on an empty maze with no interior walls. To get an idea of the worst-case solution time, I ran the test using each cell in turn as the goal and recorded the shortest and longest test times.

Time for test (microseconds)

Maze        Shortest  Longest
Japan2010       1595     6676
Japan2011       1649     8274
Japan2012       1587     5828
Japan2013       1598     6361
Japan2014       1581     6306
Japan2015       1741     6845
Japan2016       1568     6730
Japan2017       1590     6800
emptyMaze       4501     5396

For the tests, the mouse software was not doing all of its normal motion and sensor processing so it would be best to add about 15% to these times to reflect that additional load. It seems safe then to assume that the maze solution can be tested in less than about 10ms during normal running. That is comfortably within the available 17ms.

While doing these tests I was surprised to find that the longest time for a real maze was nearly twice the best time for an empty maze. I am guessing there are mazes with longer times. Also, I need to record the solution time while the mouse is exploring to find out how partially-known mazes affect the timings.

Notice also that the shortest solution times are fairly consistent at about 1600µs. These will be for solutions attempting to reach completely walled-off areas. The fact that there is apparently an irreducible minimum tells me that there may be some overhead common to all tests that could be reduced.

Optimisation

I should note that these results are from building the code with optimisation at -O3. Anything less is a disaster waiting to happen. For example, testing the empty 32×32 maze with no optimisation (-O0) takes up to 37.8ms – that is 6 times as long!
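For anyone reproducing this, the difference is nothing more than the optimisation flag passed to the compiler. Assuming the usual GNU Arm toolchain – the file name here is just a placeholder and any target-specific flags are omitted – the two builds look like this:

```sh
# optimised build, as used for the timings above
arm-none-eabi-g++ -O3 -c maze.cpp -o maze.o

# unoptimised build - the same flood takes several times longer
arm-none-eabi-g++ -O0 -c maze.cpp -o maze.o
```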
