
SP Module 10 Connected Speech & HMM Training

杨丝儿
Published 2022-12-22 18:13:49

From subword units to n-grams: hierarchy of models

Defining a hierarchy of models: we can compile subword HMMs into models of words, and word models into models of whole utterances.
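This compilation step can be sketched very simply: look each word up in a pronunciation lexicon and concatenate the phone HMMs into one longer state sequence. The phone set, lexicon, and probabilities below are illustrative, not from the course materials.

```python
# Hypothetical phone HMMs: each phone is a list of emitting states,
# where a state is (self_loop_prob, move_on_prob).
PHONE_HMMS = {
    "k":  [(0.6, 0.4), (0.5, 0.5), (0.3, 0.7)],
    "ae": [(0.7, 0.3), (0.6, 0.4), (0.4, 0.6)],
    "t":  [(0.5, 0.5), (0.5, 0.5), (0.2, 0.8)],
}

# Hypothetical pronunciation lexicon mapping words to phone sequences.
LEXICON = {"cat": ["k", "ae", "t"]}

def compile_word_hmm(word):
    """Compile a word-level HMM by concatenating its phone HMMs."""
    states = []
    for phone in LEXICON[word]:
        states.extend(PHONE_HMMS[phone])
    return states

cat_hmm = compile_word_hmm("cat")
print(len(cat_hmm))  # 9: three phones, three emitting states each
```

The same idea repeats one level up: utterance models are built by connecting word models according to the language model (e.g. an n-gram), giving the hierarchy named in the heading.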


As decoding proceeds we can prune: tokens whose scores fall far behind the current best are discarded, which reduces the computational cost. Heuristics, such as a beam width, can also help here.
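The pruning idea can be sketched as beam pruning over the set of active tokens: drop any token whose log probability falls more than a beam width behind the best, then optionally cap the number of survivors (histogram pruning). The token representation, beam width, and cap below are all illustrative choices.

```python
def prune_tokens(tokens, beam=10.0, max_tokens=3):
    """Beam pruning: keep tokens within `beam` of the best log probability,
    then cap the count (histogram pruning). Tokens are (state, log_prob)."""
    best = max(log_prob for _, log_prob in tokens)
    survivors = [t for t in tokens if t[1] >= best - beam]
    survivors.sort(key=lambda t: t[1], reverse=True)
    return survivors[:max_tokens]

tokens = [("s1", -12.0), ("s2", -3.0), ("s3", -30.0), ("s4", -8.0)]
print(prune_tokens(tokens))  # "s3" is far outside the beam and is dropped
```

A tighter beam saves more computation but risks pruning away the token that would have led to the best complete path, so the beam width is a speed/accuracy trade-off.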

Conditional independence and the forward algorithm

We use the Markov property of HMMs (i.e. the conditional independence assumptions) to make computing the probability of an observation sequence tractable.
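The forward algorithm exploits exactly this property: because the state at time t depends only on the state at time t-1, the sum over all state paths collapses into a recursion over a single vector of forward probabilities. A minimal sketch, with toy parameters chosen only for illustration:

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward algorithm: P(observation sequence), summed over all paths.
    pi: initial state probs (N,), A: transition matrix (N, N),
    B: emission probs (N, M), obs: list of observation symbol indices."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        # Markov property: the update needs only the previous alpha vector,
        # not the full history of states.
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

# Toy 2-state model with 2 observation symbols (illustrative numbers).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
print(forward(pi, A, B, [0, 1, 0]))
```

The naive sum over paths costs O(N^T); the recursion costs O(N^2 T), which is what makes recognition over long observation sequences feasible.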


HMM training with the Baum-Welch algorithm

This section gives a very high-level overview of forward and backward probability calculation on HMMs, and of Expectation-Maximisation as a way to optimise the model parameters. The maths is in the readings (but not examinable).
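One Baum-Welch iteration can be sketched as: run the forward and backward passes (E-step) to get expected state-occupancy and transition counts, then re-estimate parameters from those counts (M-step). For brevity this sketch re-estimates only the transition matrix; the toy parameters are illustrative.

```python
import numpy as np

def forward_backward(pi, A, B, obs):
    """Compute forward (alpha) and backward (beta) probability tables."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    beta = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return alpha, beta

def baum_welch_step(pi, A, B, obs):
    """One EM iteration: E-step collects expected counts (gamma, xi);
    M-step re-estimates the transition matrix from them."""
    alpha, beta = forward_backward(pi, A, B, obs)
    T, N = len(obs), len(pi)
    likelihood = alpha[-1].sum()
    # xi[t, i, j]: expected count of the transition i -> j at time t.
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        xi[t] = (alpha[t][:, None] * A *
                 (B[:, obs[t + 1]] * beta[t + 1])[None, :]) / likelihood
    gamma = alpha * beta / likelihood  # state occupancy probabilities
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    return new_A, likelihood

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
new_A, lik = baum_welch_step(pi, A, B, [0, 1, 0, 1])
print(new_A)  # re-estimated transitions; each row sums to 1
```

Iterating this update never decreases the likelihood of the training data, which is the EM guarantee the lecture alludes to; a full implementation would re-estimate pi and B in the same pass.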


Origin: Module 10 – Speech Recognition – Connected speech & HMM training. Translation and editing: YangSier (Homepage).

Originally published 2022-11-17 on the author's personal site/blog; shared via the Tencent Cloud self-media syndication programme.
