It must be understood that the memory available to applications is not the physical memory but the virtual memory, i.e. the commit limit: roughly the size of the pagefile plus the physical RAM. Of course, the more physical memory there is, the less swapping between RAM and disk, and thus the better the performance. So the RAM size is a performance matter, while the amount of memory available to applications is determined primarily by the pagefile size.

Applications, however, allocate memory in the so-called Virtual Address Space. For a 32-bit application running under 64-bit Windows, the address space is limited to 2 GB, or 4 GB if the executable is built large-address-aware. An application can also allocate memory through memory-mapping techniques. Whether the applications of interest (Terraformer etc.) use memory mapping, either directly or indirectly (through the DirectX module or the Python engine), I don't know.

As for the hardware architecture (channels, northbridge etc.), it should not make any difference, at least in theory, except for performance. The same goes for memory stick sizes: memory management is virtual, so an application can allocate a large block as long as enough memory is available, and the memory does not even need to be physically contiguous. For example, you can allocate 1.5 GB of memory (literally: address space) and have 500 MB in a contiguous block on stick B, 500 MB in the pagefile, and another 500 MB scattered across fragmented pages on stick A. The application sees all of this as one contiguous block accessed via a pointer: p[n] reads the n-th byte of the virtual block.
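To make the memory-mapping point concrete, here is a minimal Win32 sketch of the technique; "terrain.dat" is just a placeholder file name, and this is only an illustration of the mechanism, not a claim about how Terraformer or DirectX actually work internally:

```c
/* Minimal sketch: map a file into the virtual address space and read it
   through a plain pointer. The OS pages data in on demand, so the whole
   file never has to sit in RAM at once. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE file = CreateFileA("terrain.dat", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (mapping == NULL) { CloseHandle(file); return 1; }

    /* p behaves like an ordinary array over the whole file. */
    const unsigned char *p = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    if (p == NULL) { CloseHandle(mapping); CloseHandle(file); return 1; }

    printf("first byte: %d\n", p[0]);   /* p[n] touches the n-th byte */

    UnmapViewOfFile(p);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```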
Another consideration is the actual amount of memory needed. An 80x80 km map is 20x20 large city tiles, containing 20x20x256x256 ≈ 26M vertices. That is about 52 MB if each vertex is stored as a 2-byte integer (elevation in tenths of a meter), or about 104 MB if it is stored as a 4-byte float. This is nowhere near the GBs you mention.
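A quick sanity check of that arithmetic, under the same assumptions as above (20x20 city tiles, 256x256 elevation samples per tile, 2 or 4 bytes per sample):

```c
/* Back-of-the-envelope check of the figures above. */
#include <stdio.h>

int main(void)
{
    long long vertices = 20LL * 20 * 256 * 256;                    /* 26,214,400 */
    printf("vertices: %lld\n", vertices);
    printf("as 2-byte integers: %.1f MB\n", vertices * 2 / 1e6);   /* ~52 MB  */
    printf("as 4-byte floats:   %.1f MB\n", vertices * 4 / 1e6);   /* ~105 MB */
    return 0;
}
```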
I wouldn't bother with overclocking at all, as reliability is the problem here; even the performance gain may be smaller than you think, or nonexistent. Raising the CPU speed by 20% or so cuts pure computation time by about 17% at best, but the bottleneck may well not be computation but I/O (i.e. reading/writing, disk swapping and/or memory mapping), as is the case for most applications today. On top of that there is the risk of unreliable operation or even hardware damage.
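To illustrate why the gain is capped, here is a toy calculation; the 50/50 split between computation and I/O is a made-up assumption for the sake of the example, not a measurement of any of these tools:

```c
/* Only the CPU-bound fraction of the run scales with clock speed. */
#include <stdio.h>

int main(void)
{
    double cpu_fraction = 0.5;   /* assumed share of time spent computing */
    double clock_gain   = 1.2;   /* +20% CPU speed from overclocking      */

    double new_time = cpu_fraction / clock_gain + (1.0 - cpu_fraction);
    printf("overall run time: %.0f%% of the original\n", new_time * 100);
    /* With half the time spent on I/O, total time drops only ~8%. */
    return 0;
}
```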