One of the more challenging tasks in deep learning is designing and tuning a neural network for a specific task. The process is often more art than science, requiring deep-learning expertise and a significant amount of trial and error. We present the use of genetic algorithms to automate the tuning of the hyper-parameters of recurrent neural networks, including the size of the network, the number of time-steps through which to backpropagate, and the learning rate. This approach combines the model parallelism of GPU-based neural-network training with the data parallelism of genetic algorithms. We show that it lowers the barrier to entry for using neural networks and is faster than other automated network-tuning approaches.
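The genetic-algorithm loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the search space, operator choices (truncation selection, uniform crossover, per-gene mutation), and the surrogate fitness function are all assumptions made for the example; in the real system each genome would be scored by training an RNN on a GPU.

```python
import random

# Hypothetical discrete search space over the hyper-parameters named above.
SPACE = {
    "hidden_size": [64, 128, 256, 512],
    "bptt_steps": [8, 16, 32, 64],
    "learning_rate": [1e-1, 1e-2, 1e-3, 1e-4],
}

def random_genome(rng):
    """Sample one hyper-parameter setting uniformly from the space."""
    return {k: rng.choice(v) for k, v in SPACE.items()}

def crossover(a, b, rng):
    """Uniform crossover: each gene comes from either parent."""
    return {k: rng.choice([a[k], b[k]]) for k in SPACE}

def mutate(genome, rng, rate=0.2):
    """Resample each gene with probability `rate`."""
    return {k: (rng.choice(SPACE[k]) if rng.random() < rate else genome[k])
            for k in SPACE}

def evolve(fitness, generations=10, pop_size=12, seed=0):
    """Run a simple generational GA and return the best genome found.
    `fitness` scores a genome; higher is better. Fitness evaluations are
    independent, which is where the data parallelism of the GA comes from."""
    rng = random.Random(seed)
    pop = [random_genome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]  # truncation selection
        children = [
            mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in fitness for demonstration only: prefers a particular setting.
def surrogate_fitness(g):
    return (-abs(g["hidden_size"] - 256)
            - abs(g["bptt_steps"] - 32)
            - abs(g["learning_rate"] - 1e-2) * 1000)

best = evolve(surrogate_fitness)
print(best)
```

Because every genome in a generation can be evaluated independently, the expensive fitness calls (full network training runs) can be farmed out to multiple GPUs at once.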