We propose a novel memory architecture for deep reinforcement learning (DRL) agents. Memory is a critical component for intelligent reasoning in partially observable tasks, yet DRL agents have so far used simple memory architectures, typically either a temporal convolution over the past k frames or an LSTM layer. We develop the neural map, which uses a spatially structured 2D memory image together with an adaptable sparse write operator to learn to store information about the environment over long time lags. Using the NVIDIA DGX-1, we demonstrate empirically that the neural map surpasses previous DRL memory architectures on a set of challenging 2D and 3D maze environments and generalizes to environments not seen during training.
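The core mechanism can be illustrated with a minimal sketch: the memory is a feature map indexed by the agent's spatial position, and a write touches only the cell the agent currently occupies. This is an illustrative NumPy sketch, not the paper's actual implementation; the function name, the interpolation weight `alpha`, and the hard positional indexing are all assumptions for exposition.

```python
import numpy as np

def sparse_write(memory, pos, write_vec, alpha=1.0):
    """Sparsely update a spatially structured 2D memory at one grid cell.

    memory:    (C, H, W) feature map -- the 2D memory image.
    pos:       (y, x) grid cell corresponding to the agent's position.
    write_vec: (C,) feature vector to store at that cell.
    alpha:     interpolation weight (1.0 overwrites the cell entirely).
    Names and signature are illustrative, not the paper's API.
    """
    y, x = pos
    memory = memory.copy()
    # Only one spatial location is modified; all other cells persist,
    # which is what lets information survive over long time lags.
    memory[:, y, x] = (1.0 - alpha) * memory[:, y, x] + alpha * write_vec
    return memory

# Toy usage: store a feature vector at cell (2, 3) of an 8x8 map.
M = np.zeros((4, 8, 8))
w = np.ones(4)
M2 = sparse_write(M, (2, 3), w)
print(M2[:, 2, 3])  # the written cell now holds w
```

In the actual architecture the write vector is produced by a learned network conditioned on the current observation and the memory's previous contents, so the sparsity pattern is spatial (tied to position) while the content is adaptive.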