
deep learning book - Chapter 3: Probability and Information Theory

Tags: deep learning book


A few git links:

A few questions:

  • 1. What is entropy, and what does it measure? (see the entropy/KL sketch after this list)
  • 2. How are entropy and KL divergence related?
  • 3. What does KL divergence measure, and why is it asymmetric?
  • 4. What is the difference between directed and undirected probabilistic graphical models? (see the factorization note below)
  • 5. What does the value of a probability density function actually mean? (see the density sketch below)
  • 6. "We can thus think of the normal distribution as being the one that inserts the least amount of prior knowledge into a model." How should this sentence from the book be understood? (see the maximum-entropy note below)
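
For questions 1–3, a minimal NumPy sketch (the distributions `p` and `q` below are made-up examples, not from the book) that computes the entropy of a discrete distribution, shows that KL divergence is not symmetric, and checks the identity H(P, Q) = H(P) + D_KL(P‖Q):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(P) = -sum_i p_i log p_i, in nats (natural log)."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

def kl_divergence(p, q):
    """D_KL(P || Q) = sum_i p_i log(p_i / q_i): the extra nats incurred when
    samples from P are encoded with a code optimized for Q."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sum(p * np.log(p / q))

p = np.array([0.7, 0.2, 0.1])   # made-up "data" distribution P
q = np.array([0.4, 0.4, 0.2])   # made-up "model" distribution Q

print("H(P)         =", entropy(p))
print("D_KL(P || Q) =", kl_divergence(p, q))
print("D_KL(Q || P) =", kl_divergence(q, p))   # differs: KL is asymmetric
# Cross-entropy H(P, Q) = H(P) + D_KL(P || Q) = -sum_i p_i log q_i
print("H(P, Q)      =", entropy(p) + kl_divergence(p, q))
print("-sum p log q =", -np.sum(p * np.log(q)))
```

Since H(P) does not depend on Q, minimizing D_KL(P‖Q) over Q is the same as minimizing the cross-entropy H(P, Q). The asymmetry means D_KL(P‖Q) and D_KL(Q‖P) penalize different kinds of mismatch; the book illustrates this with the example of fitting a single Gaussian to a bimodal mixture.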
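
For question 4, the practical difference is in how the joint distribution factorizes; a sketch in (roughly) the notation of Section 3.14:

```latex
% Directed model: one conditional distribution per variable,
% given its parents in the graph G.
p(\mathbf{x}) = \prod_i p\big(x_i \mid Pa_{\mathcal{G}}(x_i)\big)

% Undirected model: a product of non-negative clique potentials \phi,
% normalized by the partition function Z. The potentials are not
% conditional probabilities, so the explicit normalization is needed.
p(\mathbf{x}) = \frac{1}{Z} \prod_i \phi^{(i)}\big(\mathcal{C}^{(i)}\big)
```

Directed models factorize into conditional distributions, which is natural when there is a clear generative ordering of the variables; undirected models only specify which groups of variables interact directly, at the cost of having to handle the normalizing constant Z.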
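
For question 5, the value p(x) of a density is not a probability; only its integral over a set is. A small SciPy sketch (the narrow Gaussian is just an illustrative choice):

```python
import numpy as np
from scipy.stats import norm

# A narrow Gaussian: the density at the mean exceeds 1,
# which is fine because a density value is not a probability.
dist = norm(loc=0.0, scale=0.1)
print(dist.pdf(0.0))                      # ~3.99, greater than 1

# Probabilities come from integrating the density over a set:
# P(-0.05 <= x <= 0.05) = CDF(0.05) - CDF(-0.05)
print(dist.cdf(0.05) - dist.cdf(-0.05))   # ~0.38, always in [0, 1]

# For a tiny interval of width dx, P(x in [x0, x0 + dx]) ~ p(x0) * dx
dx = 1e-4
print(dist.pdf(0.0) * dx)
```

So a density can exceed 1 at a point, and the probability of landing in an infinitesimally small region of volume δx around x is approximately p(x)·δx.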
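
For question 6, the sentence refers to the maximum-entropy property of the Gaussian stated just before it in Section 3.9: among all distributions with the same variance, the normal distribution encodes the most uncertainty. A sketch of the standard result:

```latex
% Differential entropy of X ~ N(\mu, \sigma^2):
h(X) = \frac{1}{2}\ln\!\big(2\pi e\,\sigma^{2}\big)

% Maximum-entropy property: for any density p with variance \sigma^2,
h(p) \le \frac{1}{2}\ln\!\big(2\pi e\,\sigma^{2}\big),
% with equality if and only if p is Gaussian.
```

Because the Gaussian maximizes entropy for a given variance, choosing it commits to nothing beyond the first two moments, which is the sense in which it "inserts the least amount of prior knowledge into a model".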

Table of contents:

  • 3.1 Why Probability?
  • 3.2 Random Variables
  • 3.3 Probability Distributions
    • Discrete Variables and Probability Mass Functions
    • Continuous Variables and Probability Density Functions
  • 3.4 Marginal Probability
  • 3.5 Conditional Probability
  • 3.6 The Chain Rule of Conditional Probabilities
  • 3.7 Independence and Conditional Independence
  • 3.8 Expectation, Variance and Covariance
  • 3.9 Common Probability Distributions
    • Bernoulli Distribution
    • Multinoulli Distribution
    • Gaussian Distribution
    • Exponential and Laplace Distributions
    • The Dirac Distribution and Empirical Distribution
    • Mixtures of Distributions
  • 3.10 Useful Properties of Common Functions
  • 3.11 Bayes’ Rule
  • 3.12 Technical Details of Continuous Variables
  • 3.13 Information Theory
  • 3.14 Structured Probabilistic Models

Original article; please credit the source when reposting!
Permalink: http://daiwk.github.io/posts/dl-dlbook-chap3.html
Previous: deep learning book - Chapter 2: Linear Algebra
Next: deep learning book - Chapter 4: Numerical Computation
