Recurrent neural networks (RNNs) based on long short-term memory (LSTM) have been successfully developed for single-channel source separation. Temporal information is learned through dynamic states that evolve over time and are stored as an internal memory. However, separation performance is constrained because this internal memory cannot sufficiently preserve the long-term characteristics of different sources. This study addresses the limitation by incorporating an external memory into the RNN and accordingly presents a memory augmented neural network for source separation. In particular, we employ a neural Turing machine to learn a separation model for sequential speech and noise signals in the presence of different speakers and noise types. Experiments show that speech enhancement based on the memory augmented neural network consistently outperforms that using a deep neural network or an LSTM in terms of the short-time objective intelligibility (STOI) measure.
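The core idea of the external memory can be illustrated with the neural Turing machine's addressing mechanism: a controller reads from and writes to a memory matrix via soft attention weights, so information about different sources can persist across long sequences. A minimal sketch of content-based reading and the erase-then-add write (function names and the toy dimensions are illustrative, not from the paper) might look like:

```python
import numpy as np

def content_read(memory, key, beta=1.0):
    """Content-based addressing: cosine similarity between the read key
    and each memory slot, sharpened by beta and normalized by softmax."""
    # memory: (N, M) matrix of N slots; key: (M,) read key
    sims = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    )
    w = np.exp(beta * sims)
    w /= w.sum()                      # attention weights over slots
    return w @ memory                 # read vector: weighted sum of slots

def ntm_write(memory, w, erase, add):
    """Erase-then-add write, as in the neural Turing machine."""
    memory = memory * (1 - np.outer(w, erase))  # selectively erase slots
    return memory + np.outer(w, add)            # selectively add content

# Toy usage: store two "source characteristics" in separate slots,
# then retrieve the first one by a matching key.
M = np.zeros((4, 3))
M = ntm_write(M, np.array([1.0, 0, 0, 0]), np.ones(3), np.array([1.0, 0, 0]))
M = ntm_write(M, np.array([0, 1.0, 0, 0]), np.ones(3), np.array([0, 1.0, 0]))
r = content_read(M, np.array([1.0, 0, 0]), beta=10.0)
# r is dominated by the first stored vector
```

In the separation setting, such read vectors would augment the recurrent state, allowing speaker and noise characteristics to be retrieved long after they would have decayed from an LSTM's internal memory.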