NEVER trust the autorouter

You just finished your latest design: after simulating your circuit several times, spending countless hours looking for the right parts, reading datasheets and creating libraries for your components, the design is finally done!

Now the boring part begins: placing components on a PCB and connecting all the nets, trying to minimise the area (or to fit everything into a predefined area) with the fewest possible layers to keep your manufacturing costs low.

[Image: a solved maze]

Connecting points from A to B in a complicated maze sounds more like a kid's game than the job of an engineer, a perfect task to delegate to a computer. After all, computers are much better than humans at many optimisation tasks.

For example, compilers can read a text file and not only translate it into machine code, but also optimise the operations for the specific instruction set of a given architecture, all of that from languages as old as C. Likewise, synthesizers can take complicated behavioural HDL and translate it into gate-level circuitry, something that is really hard even for skilled engineers.

However, we are told not to rely on autorouters for our PCBs. Why? With all the computing power available today, how is it possible that algorithms haven't been able to outperform humans at such a simple task? Or is it that old engineers are afraid of losing their jobs and being replaced by machines?

About Optimisation Algorithms

In a nutshell, optimisation algorithms have two parts. The first part proposes a solution following some logic, and the second part assesses the work done by the first. The algorithm tries again and again until the second part decides the result is good enough.
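As a rough illustration only (a minimal Python sketch of the idea, not how any real autorouter is written), the two parts map to a "propose" step and an "evaluate" step wrapped in a loop:

    def optimise(propose, evaluate, good_enough, max_iterations=10_000):
        """Generic optimisation loop: keep proposing candidates and keep the
        best one until the evaluator says the result is good enough."""
        best, best_score = None, float("inf")
        for _ in range(max_iterations):
            candidate = propose(best)       # part 1: do something following some logic
            score = evaluate(candidate)     # part 2: assess the work done by part 1
            if score < best_score:
                best, best_score = candidate, score
            if good_enough(best_score):
                break
        return best, best_score

The loop itself is the easy part; everything hinges on what evaluate() actually measures.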

Here we can start to see where the fundamental limitation of autorouters comes from. We can have a perfect algorithm to place components and find paths for every connection, but if we don't have a good way to assess that work, there is nothing we can do to get a good result.

Figure of Merit

The compilers and synthesizers mentioned before have a set of well-defined, measurable parameters that can serve as a figure of merit for how well the optimiser is doing: the number of instructions or the memory usage in the case of the compiler, or the number of transistors and the area in the case of the synthesizer. But for PCB design, the outcome is far less predictable because of all the assumptions already made in the schematic: lumped parameters, traces with no resistance, coupling, inductance or capacitance, and so on.
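To make the contrast concrete, here is a hedged sketch (the weights and field names are invented for illustration) of the kind of figure of merit an autorouter can realistically compute. It rewards short traces, few vias and a small board, but knows nothing about crosstalk, return paths, impedance or any of the intent buried in the schematic:

    from dataclasses import dataclass

    @dataclass
    class Layout:
        # Hypothetical summary of a routed board, for illustration only.
        total_trace_length_mm: float
        via_count: int
        board_area_mm2: float
        unrouted_nets: int

    def naive_figure_of_merit(layout: Layout) -> float:
        """Lower is 'better', but only by the metrics that are easy to measure."""
        return (1.0 * layout.total_trace_length_mm
                + 5.0 * layout.via_count
                + 0.01 * layout.board_area_mm2
                + 1000.0 * layout.unrouted_nets)  # unrouted nets dominate everything

Nothing in that number knows that one of those nets is a high-speed clock or a noisy switching node.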

In other words, compilers and synthesizers have well-defined building blocks to work with. The behaviour of an instruction is completely defined, with no ambiguity, whereas in PCB design a schematic is more like a declaration of intent about how we want the actual circuit to behave in terms of electrical parameters: a document intended to be understood by a human, not a computer.

It is true that many components have SPICE models and can be simulated, and that the geometry and materials of a PCB can be simulated to predict its parasitic characteristics. That works for passive components, but what about a CPU? An ADC? An FPGA?

What is Good Enough?

Even if we had models of everything, for a complex real-world system there is often no solution that satisfies all the requirements. The engineer is constantly making trade-offs and modifying the requirements on the fly. The final result is never a perfect system, just one that is good enough. How can you teach an optimisation algorithm to bend the appropriate rules in a sensible way?

It is fun!

And last but not least, it is fun! Once you understand the complexity of the problem and start routing, you will find that it is actually fun, because you are constantly facing small challenges. Every section you finish is rewarding, and you start to be surprised at how beautiful and professional your own design looks 🙂

 


By Rodrigo Maureira

Melbourne University Electrical Engineering Club

2016
