Generative adversarial networks (GANs) have been successfully developed as generative models in which artificial data drawn from the generator are misrecognized as real samples by a discriminator. Although GANs achieve desirable performance, mode collapse easily happens in the joint optimization of the generator and discriminator. This study copes with this challenge by improving model regularization through representing weight uncertainty in the GAN. A new Bayesian GAN is formulated and implemented to learn a regularized model from diverse data, where strong modes are flattened via marginalization and the issues of mode collapse and gradient vanishing are alleviated. In particular, we present a variational GAN (VGAN) in which the encoder, generator, and discriminator are jointly estimated according to variational Bayesian inference. Experiments on image generation over two tasks (MNIST and CelebA) demonstrate the superiority of the proposed VGAN over the variational autoencoder, the standard GAN, and the sampling-based Bayesian GAN. Both learning efficiency and generation performance are evaluated.