
SP Module 10 Connected Speech & HMM Training


From subword units to n-grams: hierarchy of models

We define a hierarchy of models: phone (subword) HMMs are compiled into word HMMs via a pronunciation lexicon, and word models are in turn combined under a language model (e.g. an n-gram) into models of whole utterances.
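
As a concrete illustration, here is a minimal Python sketch of the phone-to-word compilation step, assuming a toy pronunciation lexicon and a simple left-to-right topology with self-loops (the lexicon entries, state counts and probabilities are all invented for illustration):

```python
import numpy as np

# Hypothetical pronunciation lexicon: each word maps to a phone sequence,
# and each phone is modelled by a small left-to-right HMM.
LEXICON = {"cat": ["k", "ae", "t"], "sat": ["s", "ae", "t"]}

def word_hmm(word, states_per_phone=3, self_loop=0.6):
    """Compile a word-level HMM by concatenating the word's phone HMMs.

    Returns the emitting-state labels and a left-to-right transition
    matrix: each state either self-loops or advances to the next state.
    (The missing probability mass on the final state is the implicit
    exit probability into the next word in the utterance network.)
    """
    states = [f"{ph}_{i}" for ph in LEXICON[word] for i in range(states_per_phone)]
    n = len(states)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = self_loop
        if i + 1 < n:
            A[i, i + 1] = 1.0 - self_loop
    return states, A

states, A = word_hmm("cat")
print(states)   # ['k_0', 'k_1', 'k_2', 'ae_0', ..., 't_2']
print(A.shape)  # (9, 9)
```

The same gluing operation repeats one level up: word HMMs are joined into an utterance network, with the language model supplying the word-to-word transition probabilities.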

During decoding we can prune the search: tokens whose path probability has fallen well below the current best are discarded as decoding proceeds, which reduces the computational cost. A heuristic such as beam search trades a small risk of search error for this speed-up.
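
A minimal sketch of beam pruning in a token-passing decoder, assuming tokens are simply (log-probability, state) pairs and using an invented beam width:

```python
def prune(tokens, beam=10.0):
    """Beam pruning: discard any token whose log probability is more
    than `beam` below the current best, before the next frame is
    processed. A narrower beam is faster but risks pruning the path
    that would eventually have won (a search error)."""
    best = max(log_p for log_p, _ in tokens)
    return [(log_p, s) for log_p, s in tokens if log_p >= best - beam]

tokens = [(-5.0, "s1"), (-7.5, "s2"), (-40.0, "s3")]
print(prune(tokens))  # the hopeless token in state s3 is dropped
```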

Conditional independence and the forward algorithm

We use the Markov property of HMMs (i.e. their conditional independence assumptions) to make computing the probability of an observation sequence tractable: the sum over all possible state sequences factorises into a frame-by-frame recursion, the forward algorithm.
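
A minimal sketch of the forward algorithm for a discrete-output HMM, showing how the conditional independence assumptions turn the exponential sum over state paths into an O(T·N²) recursion (the toy model parameters are invented for illustration):

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: P(o_1..o_T) for a discrete-output HMM.

    pi:  (N,)   initial state probabilities
    A:   (N,N)  transition probabilities, A[i, j] = P(j | i)
    B:   (N,M)  emission probabilities,  B[j, k] = P(symbol k | state j)
    obs: length-T sequence of observation indices
    """
    alpha = pi * B[:, obs[0]]          # alpha_1(j) = pi_j * b_j(o_1)
    for o in obs[1:]:
        # alpha_t(j) = [sum_i alpha_{t-1}(i) * a_ij] * b_j(o_t)
        alpha = (alpha @ A) * B[:, o]
        # (in practice alpha is rescaled here to avoid underflow)
    return alpha.sum()                 # sum over final states

# Toy 2-state, 2-symbol model
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward(pi, A, B, [0, 1, 0]))
```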

HMM training with the Baum-Welch algorithm

HMM training uses the Baum-Welch algorithm. This gives a very high-level overview of forward and backward probability calculation on HMMs, and of Expectation-Maximisation (EM) as a way to optimise the model parameters. The maths is in the readings (but not examinable).
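
A minimal sketch of one Baum-Welch re-estimation step for a discrete-output HMM, assuming a single observation sequence and omitting the scaling needed in practice to avoid numerical underflow:

```python
import numpy as np

def baum_welch_step(pi, A, B, obs):
    """One EM (Baum-Welch) re-estimation step for a discrete-output HMM.

    E-step: forward (alpha) and backward (beta) probabilities give the
    state occupancies gamma and expected transition counts xi.
    M-step: re-estimate pi, A and B from those expected counts.
    """
    N, T = len(pi), len(obs)
    alpha = np.zeros((T, N)); beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                       # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    likelihood = alpha[-1].sum()

    gamma = alpha * beta / likelihood           # gamma[t, i] = P(q_t = i | O)
    xi = np.zeros((T - 1, N, N))                # xi[t, i, j] = P(q_t = i, q_{t+1} = j | O)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi[t] /= likelihood

    new_pi = gamma[0]
    new_A = xi.sum(0) / gamma[:-1].sum(0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[np.array(obs) == k].sum(0) / gamma.sum(0)
    return new_pi, new_A, new_B, likelihood

pi = np.array([0.5, 0.5])
A  = np.array([[0.6, 0.4], [0.3, 0.7]])
B  = np.array([[0.8, 0.2], [0.3, 0.7]])
obs = [0, 1, 1, 0, 0]
for _ in range(5):
    pi, A, B, ll = baum_welch_step(pi, A, B, obs)
    print(ll)  # the likelihood never decreases across EM iterations
```

Iterating this step cannot decrease the data likelihood, which is the EM property that makes Baum-Welch training converge to a local optimum of the model parameters.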

Origin: Module 10 – Speech Recognition – Connected speech & HMM training. Translated and edited by YangSier (Homepage).
