For the multilayer perceptron (MLP), there is a theorem giving the maximum number of separable regions (M) in the d-dimensional input space as a function of the number of hidden nodes (H). We propose a recurrence relation in the high-dimensional space and prove the theorem by expanding the recurrence relation instead of by induction. The MLP model has an input layer, one hidden layer, and an output layer. To test the number of hidden nodes determined by the theorem, we apply different MLP models to well log data inversion. The inputs are first-order, second-order, and third-order features; a higher-order neural network (HONN) provides a more nonlinear mapping. The experiments use 31 simulated well logs: 25 are used for training and 6 for testing. The experimental results support the number of hidden nodes determined by the theorem.
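The abstract cites the region-counting theorem without stating it. Assuming it takes the standard hyperplane-arrangement form (the classical bound on the number of regions into which H hyperplanes can partition d-dimensional space, which a recurrence of the kind mentioned above expands to), it can be sketched as:

```python
from math import comb

def max_regions(H: int, d: int) -> int:
    """Maximum number of regions produced by H hyperplanes in
    d-dimensional space: sum of C(H, k) for k = 0..min(H, d).
    Assumed to match the theorem referenced in the abstract."""
    return sum(comb(H, k) for k in range(min(H, d) + 1))

# Example: 3 lines can divide the plane into at most 7 regions.
print(max_regions(3, 2))  # 7
# When H <= d, the bound reaches 2**H (every dichotomy realizable).
print(max_regions(2, 5))  # 4
```

This bound is what links the hidden-layer width H to the network's capacity to separate the input space, which is why it can guide the choice of the number of hidden nodes.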