Traditionally, speech recognition systems are built on the assumption that acoustic and linguistic information sources are independent: the parameters of the hidden Markov model and the n-gram language model are estimated individually and then plugged into a maximum a posteriori classification rule. However, acoustic and linguistic features are inherently correlated, and modeling performance is limited accordingly. This study aims to relax the independence assumption and achieve more sophisticated acoustic and linguistic modeling for speech recognition. We propose an integrated approach based on the maximum entropy (ME) principle, in which acoustic and linguistic features are optimally merged in a unified framework. The correlations between acoustic and linguistic features are explored and properly represented in the integrated models. Owing to the flexibility of the ME model, we can further incorporate other high-level linguistic features. In the experiments, we apply the proposed methods to broadcast news transcription using the MATBN database, and we obtain significant improvements over a conventional speech recognition system trained with individual maximum likelihood estimation.
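The ME integration described above can be sketched, in outline, as a log-linear model over recognition hypotheses: acoustic and linguistic scores enter as weighted features of a single normalized model rather than as independently trained components. The feature names, weights, and hypothesis values below are purely illustrative assumptions, not the paper's actual feature set or training procedure:

```python
import math

def me_posteriors(hypotheses, weights):
    """Score hypotheses with a log-linear (maximum entropy) model.

    Each hypothesis carries feature values (e.g. an acoustic
    log-likelihood and an n-gram log-probability); the ME model
    combines them with weights lambda_i instead of treating the two
    knowledge sources as independent plug-in scores.
    """
    # Unnormalized log-linear score: sum_i lambda_i * f_i(hypothesis)
    logits = [sum(weights[f] * v for f, v in h["features"].items())
              for h in hypotheses]
    # Softmax normalization yields posterior probabilities over the
    # competing hypotheses (the ME model's partition function)
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Two competing transcriptions with hypothetical feature values
hyps = [
    {"text": "news tonight", "features": {"acoustic": -12.0, "ngram": -3.1}},
    {"text": "new site",     "features": {"acoustic": -11.5, "ngram": -6.0}},
]
weights = {"acoustic": 1.0, "ngram": 0.8}  # lambdas; trained in practice
posteriors = me_posteriors(hyps, weights)
best = hyps[max(range(len(hyps)), key=lambda i: posteriors[i])]["text"]
```

In a real system the weights would be estimated jointly (e.g. by conditional maximum likelihood), which is where correlated acoustic and linguistic evidence can be modeled together rather than independently.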