Deep Bayesian data mining

Jen Tzung Chien*

*Corresponding author for this work

Research output: Conference contribution › peer-reviewed

Abstract

This tutorial addresses the fundamentals and advances in deep Bayesian mining and learning for natural language, with ubiquitous applications ranging from speech recognition [7, 55] to document summarization [8], text classification [5, 75], text segmentation [18], information extraction [50], image caption generation [69, 72], sentence generation [25, 46], dialogue control [22, 76], sentiment classification, recommendation systems, question answering [58] and machine translation [2], to name a few. Traditionally, "deep learning" is taken to be a learning process where the inference or optimization is based on a real-valued deterministic model. The "semantic structure" in words, sentences, entities, actions and documents drawn from a large vocabulary may not be well expressed or correctly optimized in mathematical logic or computer programs. The "distribution function" in discrete or continuous latent variable models for natural language may not be properly decomposed or estimated. This tutorial addresses the fundamentals of statistical models and neural networks, and focuses on a series of advanced Bayesian models and deep models, including the hierarchical Dirichlet process [61], Chinese restaurant process [4], hierarchical Pitman-Yor process [60], Indian buffet process [35], recurrent neural network (RNN) [26, 41, 48, 65], long short-term memory, sequence-to-sequence model [59], variational auto-encoder (VAE) [44], generative adversarial network (GAN) [36], attention mechanism [27, 56], memory-augmented neural network [39, 58], skip neural network [6], temporal difference VAE [40], stochastic neural network [3, 47], stochastic temporal convolutional network [1], predictive state neural network [31], and policy neural network [49, 74]. Enhancing the prior/posterior representation is also addressed [53, 62]. We present how these models are connected and why they work for a variety of applications on symbolic and complex patterns in natural language.
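As a concrete illustration of one of the models listed above, the variational auto-encoder (VAE) [44] learns by maximizing a lower bound on the data log-likelihood via the reparameterization trick. The following is a minimal sketch, not the tutorial's implementation: linear encoder/decoder maps, toy dimensions, and random weights are all hypothetical choices made only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only).
x_dim, z_dim = 8, 2

# Encoder: linear maps from x to the mean and log-variance of q(z|x).
W_mu = rng.normal(scale=0.1, size=(z_dim, x_dim))
W_logvar = rng.normal(scale=0.1, size=(z_dim, x_dim))
# Decoder: linear map from z back to Bernoulli logits for p(x|z).
W_dec = rng.normal(scale=0.1, size=(x_dim, z_dim))

def elbo(x):
    """One-sample Monte Carlo estimate of the evidence lower bound."""
    mu, logvar = W_mu @ x, W_logvar @ x
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so the sampling step stays differentiable with respect to mu, logvar.
    eps = rng.standard_normal(z_dim)
    z = mu + np.exp(0.5 * logvar) * eps
    # Bernoulli reconstruction term log p(x|z) from decoder logits.
    logits = W_dec @ z
    log_px = np.sum(x * logits - np.logaddexp(0.0, logits))
    # Analytic KL divergence KL(q(z|x) || N(0, I)).
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return log_px - kl

x = rng.integers(0, 2, size=x_dim).astype(float)
print(elbo(x))  # a finite scalar lower bound on log p(x)
```

Training would ascend this bound with stochastic gradients over a corpus; the sketch only evaluates it once to show the two competing terms (reconstruction minus KL regularization).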
Variational inference and sampling methods are formulated to tackle the optimization of complicated models [54]. Word and sentence embeddings, clustering and co-clustering are merged with linguistic and semantic constraints. A series of case studies, tasks and applications are presented to tackle different issues in deep Bayesian mining, searching, learning and understanding. Finally, we point out a number of directions and outlooks for future study. This tutorial serves to introduce novices to major topics within deep Bayesian learning, to motivate and explain a topic of emerging importance for data mining and natural language understanding, and to present a novel synthesis combining distinct lines of machine learning work.
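For reference, the variational inference mentioned above typically optimizes the evidence lower bound (ELBO); the generic form below is standard textbook material, not a derivation specific to this tutorial. Here $q_\phi(z\mid x)$ is the variational posterior and $p_\theta$ the generative model:

```latex
\log p_\theta(x)
  \;\ge\; \mathcal{L}(\theta,\phi;x)
  \;=\; \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]
  \;-\; \mathrm{KL}\!\left(q_\phi(z\mid x)\,\|\,p_\theta(z)\right),
```

with the gap between the two sides equal to $\mathrm{KL}(q_\phi(z\mid x)\,\|\,p_\theta(z\mid x))$, so maximizing the bound simultaneously fits the data and tightens the approximation; when the expectation is intractable it is estimated by sampling.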

Original language: English
Title of host publication: WSDM 2020 - Proceedings of the 13th International Conference on Web Search and Data Mining
Publisher: Association for Computing Machinery, Inc
Pages: 865-868
Number of pages: 4
ISBN (electronic): 9781450368223
DOIs
Publication status: Published - 20 Jan 2020
Event: 13th ACM International Conference on Web Search and Data Mining, WSDM 2020 - Houston, United States
Duration: 3 Feb 2020 - 7 Feb 2020

Publication series

Name: WSDM 2020 - Proceedings of the 13th International Conference on Web Search and Data Mining

Conference

Conference: 13th ACM International Conference on Web Search and Data Mining, WSDM 2020
Country: United States
City: Houston
Period: 3/02/20 - 7/02/20
