The X and Y coordinate differences are squared and summed to calculate the squared Euclidean distance. For Manhattan distance you just sum the absolute coordinate differences.
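A minimal standalone sketch of the two formulas (not OpenTTD's actual implementation; the names just mirror the NoAI API):

```cpp
#include <cstdint>
#include <cstdlib>

// Squared Euclidean distance: sum of squared coordinate differences.
// Keeping it squared avoids the sqrt when you only compare distances.
static int64_t DistanceSquare(int x1, int y1, int x2, int y2)
{
    const int64_t dx = (int64_t)x1 - x2;
    const int64_t dy = (int64_t)y1 - y2;
    return dx * dx + dy * dy;
}

// Manhattan distance: sum of the absolute coordinate differences.
static int64_t DistanceManhattan(int x1, int y1, int x2, int y2)
{
    return llabs((int64_t)x1 - x2) + llabs((int64_t)y1 - y2);
}
```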
When I started writing the first version of this patch, NoAI did not exist yet, so there were no such problems. Back then the overflow occurred sooner, because it happened in the internal coordinates, which have finer precision than tiles (I think 1/16th of a tile), since they also need to store the exact position of a train "inside" a tile. I think AIs have no means to access these precise coordinates; they only see tile numbers.
A problematic case could be, for example, AIMap::DistanceSquare - it returns the squared distance, so map sides larger than 32K would cause trouble ....
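To make the 32K threshold concrete: with plain 32-bit signed arithmetic, dx*dx + dy*dy overflows once the coordinate difference exceeds 32767, because 2 * 32768^2 equals 2^31, one past INT32_MAX. A hedged sketch of the fix (the function name is mine, not OpenTTD's): widen to 64 bits before multiplying.

```cpp
#include <cstdint>

// Illustrative only, not OpenTTD's code. Widening the differences to
// 64 bits BEFORE squaring keeps the result exact even for opposite
// corners of a huge map, where 32-bit dx*dx + dy*dy would overflow
// (signed overflow is undefined behaviour in C++, so the 32-bit
// variant is not even shown here).
int64_t DistanceSquareSafe(int32_t x1, int32_t y1, int32_t x2, int32_t y2)
{
    const int64_t dx = (int64_t)x1 - x2;
    const int64_t dy = (int64_t)y1 - y2;
    return dx * dx + dy * dy;
}
```

On a 64K x 64K map the corner-to-corner squared distance is 2 * 65535^2 = 8589672450, roughly four times INT32_MAX, so 32-bit results would be garbage there.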
There are two versions of the patch: one posted on FlySpray with "reasonable limits" as requested by a dev (max 4M tiles, max map side 8K), and this "less limited" version (max 64M tiles, max map side 1M - the maximum you can have while still being able to run on 32-bit architectures; at the top sizes it is quite slow even on the fastest currently available computers). The point is that once this gets into trunk, you can change both limits by merely changing two constants and recompiling.
I think reasonable limits lie somewhere in between - perhaps 8M map tiles maximum and a 32K max map side, where things should still work fine (even if a NoAI script does this calculation manually inside its code) and also reasonably fast on a modern PC. AIs not using Euclidean distance should work fine even with the 1M map side :), though when (if) this gets into trunk, it will be the devs who set the default limits. Unless they've changed their minds over the last year or so, we can expect the same maximum number of tiles, but a larger maximum map size in one dimension, so we'll get extra noodly sizes like 4K x 1K and 8K x 512. Could be nice for maps like Chile and Italia... :)
The most speed-limiting part at large map sizes is currently the tile loop: even at the current 4M-tile maximum and on a decent CPU (Core 2 Duo) it eats about 12% of your CPU, so at 64M tiles (16 times as many) it would need about 200%, which means your CPU is not fast enough and the game will run at reduced speed ....
A solution would be to rewrite the tile loop to either use more CPU cores (not easy), use the GPU (even harder), or be more efficient even when single-threaded (I looked briefly at the code and there is nothing you can easily simplify to speed things up, so this would also be pretty hard). Or change the tile-loop mechanism to run less often, but that would change the game ...
Or perhaps change the order in which the tiles are processed. Currently it processes one tile and then skips to the tile 16 positions later, which is quite unfriendly to the CPU cache.