The answer is that, for the types of problems we want to solve, GPUs would not currently run our code faster than CPUs.
The computationally expensive part of our code is evaluating the energy and forces on a given atomic configuration. We typically have only a couple hundred atoms in each configuration. However, we want to calculate the energy and forces for many different configurations. If we had thousands, instead of hundreds, of atoms per configuration then we might see a speedup when running on GPUs.
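To make the cost concrete, here is a minimal sketch of the kind of per-configuration energy evaluation described above, using a Lennard-Jones pair potential as a stand-in (the actual potential and code are not specified in the text; the function name and parameters are illustrative):

```python
import numpy as np

def lj_energy(positions, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy of one configuration.

    The pair sum is O(N^2): with only a couple hundred atoms that is
    roughly 20,000 pair terms -- a small amount of arithmetic per call.
    """
    diff = positions[:, None, :] - positions[None, :, :]
    r2 = np.sum(diff * diff, axis=-1)
    iu = np.triu_indices(len(positions), k=1)  # unique pairs only
    s6 = (sigma * sigma / r2[iu]) ** 3
    return float(np.sum(4.0 * epsilon * (s6 * s6 - s6)))

# One "configuration" of 200 atoms in a 10x10x10 box.
rng = np.random.default_rng(0)
config = rng.uniform(0.0, 10.0, size=(200, 3))
print(lj_energy(config))
```

Each call like this is cheap in isolation; the expense comes from making the call for many different configurations, which is the part that does not map naturally onto a single GPU kernel.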
A simplified picture of a GPU is to think of it as having many more processing elements than a CPU, but with each one running at a lower clock rate. If it is possible to keep all of the processing elements on the GPU busy at all times, then it can be much faster than a CPU. The difficulty is that not all programs can be parallelized in a way that keeps the GPU fully busy, or even used at high efficiency. It is not as simple as porting your program to OpenCL or CUDA and seeing a 10× speedup; some problems and algorithms are not a good match for GPU hardware.
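A back-of-envelope calculation illustrates the point. The figure of 40,000 in-flight work items below is an assumed, order-of-magnitude number for what a modern GPU needs to hide memory latency, not a measured value; the pair counts follow from the O(N²) interaction sum:

```python
# Illustrative occupancy estimate -- assumed numbers, not a benchmark.
GPU_PARALLEL_SLOTS = 40_000  # assumption: work items needed to keep a GPU busy

for atoms in (200, 2000):
    pairs = atoms * (atoms - 1) // 2  # independent pair interactions per call
    occupancy = min(1.0, pairs / GPU_PARALLEL_SLOTS)
    print(f"{atoms} atoms -> {pairs} pair terms -> ~{occupancy:.0%} of GPU busy")
```

Under these assumptions, a 200-atom configuration cannot saturate the device even if every pair interaction runs in its own thread, while a few thousand atoms can, which is consistent with the observation that larger configurations might see a GPU speedup.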