Monaural source separation based on a recurrent neural network learns to characterize the sequential patterns in source signals through dynamic states that are propagated through time. The hidden states are assumed to be deterministic along a single path, where a shared long short-term memory (LSTM) is used. Such assumptions may not faithfully reflect the randomness and the variety of temporal features in mixed signals. To strengthen the capability of the LSTM in source separation, we propose a stochastic Markov LSTM in which the regression from a mixed signal to its source signals is learned with a stochastic Markov state indicator that selects the state-dependent LSTM for signal separation at each time step. A set of LSTMs is learned to capture the structural diversity of temporal signals as well as the stochastic trajectory of state transitions for sequential prediction. A new state machine is constructed to learn the complicated latent semantics in the heterogeneous and structured mappings between mixed signals and source signals. Gumbel-softmax sampling is implemented so that the backpropagation algorithm can handle the discrete Markov states. Experiments on speech enhancement illustrate the merit of the proposed stochastic Markov LSTM in terms of the short-time objective intelligibility (STOI) measure of the separated speech.
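To make the state-selection mechanism concrete, the following is a minimal sketch of Gumbel-softmax sampling, which produces a differentiable relaxation of a one-hot draw over discrete Markov states; the resulting weights could then gate a set of state-dependent LSTMs. The logits, temperature value, and three-state setup here are hypothetical illustrations, not the paper's actual configuration.

```python
import math
import random

def gumbel_softmax(logits, tau=1.0, rng=random):
    """Relaxed one-hot sample over len(logits) discrete states.

    Adds Gumbel(0, 1) noise to each logit and applies a
    temperature-scaled softmax; as tau -> 0 the output
    approaches a hard one-hot state indicator.
    """
    # Sample Gumbel(0, 1) noise via -log(-log(U)), U ~ Uniform(0, 1).
    gumbels = [-math.log(-math.log(rng.random() + 1e-20) + 1e-20)
               for _ in logits]
    z = [(l + g) / tau for l, g in zip(logits, gumbels)]
    # Numerically stable softmax over the perturbed, scaled logits.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical state scores at one time step for K = 3 LSTMs;
# the weights lie on the simplex and soften toward one-hot as tau shrinks.
logits = [2.0, 0.5, -1.0]
weights = gumbel_softmax(logits, tau=0.5)
print([round(w, 3) for w in weights])
```

Because the relaxation is continuous in the logits, gradients can flow through the state indicator during backpropagation even though the underlying Markov state is discrete.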