A complete record of running the Kaldi TIMIT recipe

Machine: i5-7300H, 4 GB RAM, a 1050 with 2 GB of VRAM; the whole run took close to four hours.

wxy@HP-WXY:~/kaldi/egs/timit/s5$ ./run.sh
============================================================================
                Data & Lexicon & Language Preparation
============================================================================
wav-to-duration --read-entire-file=true scp:train_wav.scp ark,t:train_dur.ark
LOG (wav-to-duration[5.4.208~1-6f214]:main():wav-to-duration.cc:92) Printed duration for 3696 audio files.
LOG (wav-to-duration[5.4.208~1-6f214]:main():wav-to-duration.cc:94) Mean duration was 3.06336, min and max durations were 0.91525, 7.78881
wav-to-duration --read-entire-file=true scp:dev_wav.scp ark,t:dev_dur.ark
LOG (wav-to-duration[5.4.208~1-6f214]:main():wav-to-duration.cc:92) Printed duration for 400 audio files.
LOG (wav-to-duration[5.4.208~1-6f214]:main():wav-to-duration.cc:94) Mean duration was 3.08212, min and max durations were 1.09444, 7.43681
wav-to-duration --read-entire-file=true scp:test_wav.scp ark,t:test_dur.ark
LOG (wav-to-duration[5.4.208~1-6f214]:main():wav-to-duration.cc:92) Printed duration for 192 audio files.
LOG (wav-to-duration[5.4.208~1-6f214]:main():wav-to-duration.cc:94) Mean duration was 3.03646, min and max durations were 1.30562, 6.21444
Data preparation succeeded
LOGFILE:/dev/null
$bin/ngt -i="$inpfile" -n=$order -gooout=y -o="$gzip -c > $tmpdir/ngram.${sdict}.gz" -fd="$tmpdir/$sdict" $dictionary $additional_parameters >> $logfile 2>&1
$scr/build-sublm.pl $verbose $prune $prune_thr_str $smoothing "$additional_smoothing_parameters" --size $order --ngrams "$gunzip -c $tmpdir/ngram.${sdict}.gz" -sublm $tmpdir/lm.$sdict $additional_parameters >> $logfile 2>&1
inpfile: data/local/lm_tmp/lm_phone_bg.ilm.gz
outfile: /dev/stdout
loading up to the LM level 1000 (if any)
dub: 10000000
OOV code is 50
OOV code is 50
Saving in txt format to /dev/stdout
Dictionary & language model preparation succeeded
utils/prepare_lang.sh --sil-prob 0.0 --position-dependent-phones false --num-sil-states 3 data/local/dict sil data/local/lang_tmp data/lang
Checking data/local/dict/silence_phones.txt ...
--> reading data/local/dict/silence_phones.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/silence_phones.txt is OK

Checking data/local/dict/optional_silence.txt ...
--> reading data/local/dict/optional_silence.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/optional_silence.txt is OK

Checking data/local/dict/nonsilence_phones.txt ...
--> reading data/local/dict/nonsilence_phones.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/nonsilence_phones.txt is OK

Checking disjoint: silence_phones.txt, nonsilence_phones.txt
--> disjoint property is OK.

Checking data/local/dict/lexicon.txt
--> reading data/local/dict/lexicon.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/lexicon.txt is OK

Checking data/local/dict/lexiconp.txt
--> reading data/local/dict/lexiconp.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/lexiconp.txt is OK

Checking lexicon pair data/local/dict/lexicon.txt and data/local/dict/lexiconp.txt
--> lexicon pair data/local/dict/lexicon.txt and data/local/dict/lexiconp.txt match

Checking data/local/dict/extra_questions.txt ...
--> reading data/local/dict/extra_questions.txt
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/local/dict/extra_questions.txt is OK
--> SUCCESS [validating dictionary directory data/local/dict]

fstaddselfloops data/lang/phones/wdisambig_phones.int data/lang/phones/wdisambig_words.int
prepare_lang.sh: validating output directory
utils/validate_lang.pl data/lang
Checking data/lang/phones.txt ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/lang/phones.txt is OK

Checking words.txt: #0 ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/lang/words.txt is OK

Checking disjoint: silence.txt, nonsilence.txt, disambig.txt ...
--> silence.txt and nonsilence.txt are disjoint
--> silence.txt and disambig.txt are disjoint
--> disambig.txt and nonsilence.txt are disjoint
--> disjoint property is OK

Checking sumation: silence.txt, nonsilence.txt, disambig.txt ...
--> summation property is OK

Checking data/lang/phones/context_indep.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.int corresponds to data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.csl corresponds to data/lang/phones/context_indep.txt
--> data/lang/phones/context_indep.{txt, int, csl} are OK

Checking data/lang/phones/nonsilence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 47 entry/entries in data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.int corresponds to data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.csl corresponds to data/lang/phones/nonsilence.txt
--> data/lang/phones/nonsilence.{txt, int, csl} are OK

Checking data/lang/phones/silence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang/phones/silence.txt
--> data/lang/phones/silence.int corresponds to data/lang/phones/silence.txt
--> data/lang/phones/silence.csl corresponds to data/lang/phones/silence.txt
--> data/lang/phones/silence.{txt, int, csl} are OK

Checking data/lang/phones/optional_silence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.int corresponds to data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.csl corresponds to data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.{txt, int, csl} are OK

Checking data/lang/phones/disambig.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 2 entry/entries in data/lang/phones/disambig.txt
--> data/lang/phones/disambig.int corresponds to data/lang/phones/disambig.txt
--> data/lang/phones/disambig.csl corresponds to data/lang/phones/disambig.txt
--> data/lang/phones/disambig.{txt, int, csl} are OK

Checking data/lang/phones/roots.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 48 entry/entries in data/lang/phones/roots.txt
--> data/lang/phones/roots.int corresponds to data/lang/phones/roots.txt
--> data/lang/phones/roots.{txt, int} are OK

Checking data/lang/phones/sets.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 48 entry/entries in data/lang/phones/sets.txt
--> data/lang/phones/sets.int corresponds to data/lang/phones/sets.txt
--> data/lang/phones/sets.{txt, int} are OK

Checking data/lang/phones/extra_questions.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 2 entry/entries in data/lang/phones/extra_questions.txt
--> data/lang/phones/extra_questions.int corresponds to data/lang/phones/extra_questions.txt
--> data/lang/phones/extra_questions.{txt, int} are OK

Checking optional_silence.txt ...
--> reading data/lang/phones/optional_silence.txt
--> data/lang/phones/optional_silence.txt is OK

Checking disambiguation symbols: #0 and #1
--> data/lang/phones/disambig.txt has "#0" and "#1"
--> data/lang/phones/disambig.txt is OK

Checking topo ...

Checking word-level disambiguation symbols...
--> data/lang/phones/wdisambig.txt exists (newer prepare_lang.sh)
Checking data/lang/oov.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang/oov.txt
--> data/lang/oov.int corresponds to data/lang/oov.txt
--> data/lang/oov.{txt, int} are OK

--> data/lang/L.fst is olabel sorted
--> data/lang/L_disambig.fst is olabel sorted
--> SUCCESS [validating lang directory data/lang]
Preparing train, dev and test data
utils/validate_data_dir.sh: Successfully validated data-directory data/train
utils/validate_data_dir.sh: Successfully validated data-directory data/dev
utils/validate_data_dir.sh: Successfully validated data-directory data/test
Preparing language models for test
arpa2fst --disambig-symbol=#0 --read-symbol-table=data/lang_test_bg/words.txt - data/lang_test_bg/G.fst
LOG (arpa2fst[5.4.208~1-6f214]:Read():arpa-file-parser.cc:94) Reading \data\ section.
LOG (arpa2fst[5.4.208~1-6f214]:Read():arpa-file-parser.cc:149) Reading \1-grams: section.
LOG (arpa2fst[5.4.208~1-6f214]:Read():arpa-file-parser.cc:149) Reading \2-grams: section.
WARNING (arpa2fst[5.4.208~1-6f214]:ConsumeNGram():arpa-lm-compiler.cc:313) line 60 [-3.26717    <s> <s>] skipped: n-gram has invalid BOS/EOS placement
LOG (arpa2fst[5.4.208~1-6f214]:RemoveRedundantStates():arpa-lm-compiler.cc:359) Reduced num-states from 50 to 50
fstisstochastic data/lang_test_bg/G.fst
0.000510126 -0.0763018
utils/validate_lang.pl data/lang_test_bg
Checking data/lang_test_bg/phones.txt ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/lang_test_bg/phones.txt is OK

Checking words.txt: #0 ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> data/lang_test_bg/words.txt is OK

Checking disjoint: silence.txt, nonsilence.txt, disambig.txt ...
--> silence.txt and nonsilence.txt are disjoint
--> silence.txt and disambig.txt are disjoint
--> disambig.txt and nonsilence.txt are disjoint
--> disjoint property is OK

Checking sumation: silence.txt, nonsilence.txt, disambig.txt ...
--> summation property is OK

Checking data/lang_test_bg/phones/context_indep.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang_test_bg/phones/context_indep.txt
--> data/lang_test_bg/phones/context_indep.int corresponds to data/lang_test_bg/phones/context_indep.txt
--> data/lang_test_bg/phones/context_indep.csl corresponds to data/lang_test_bg/phones/context_indep.txt
--> data/lang_test_bg/phones/context_indep.{txt, int, csl} are OK

Checking data/lang_test_bg/phones/nonsilence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 47 entry/entries in data/lang_test_bg/phones/nonsilence.txt
--> data/lang_test_bg/phones/nonsilence.int corresponds to data/lang_test_bg/phones/nonsilence.txt
--> data/lang_test_bg/phones/nonsilence.csl corresponds to data/lang_test_bg/phones/nonsilence.txt
--> data/lang_test_bg/phones/nonsilence.{txt, int, csl} are OK

Checking data/lang_test_bg/phones/silence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang_test_bg/phones/silence.txt
--> data/lang_test_bg/phones/silence.int corresponds to data/lang_test_bg/phones/silence.txt
--> data/lang_test_bg/phones/silence.csl corresponds to data/lang_test_bg/phones/silence.txt
--> data/lang_test_bg/phones/silence.{txt, int, csl} are OK

Checking data/lang_test_bg/phones/optional_silence.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.int corresponds to data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.csl corresponds to data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.{txt, int, csl} are OK

Checking data/lang_test_bg/phones/disambig.{txt, int, csl} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 2 entry/entries in data/lang_test_bg/phones/disambig.txt
--> data/lang_test_bg/phones/disambig.int corresponds to data/lang_test_bg/phones/disambig.txt
--> data/lang_test_bg/phones/disambig.csl corresponds to data/lang_test_bg/phones/disambig.txt
--> data/lang_test_bg/phones/disambig.{txt, int, csl} are OK

Checking data/lang_test_bg/phones/roots.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 48 entry/entries in data/lang_test_bg/phones/roots.txt
--> data/lang_test_bg/phones/roots.int corresponds to data/lang_test_bg/phones/roots.txt
--> data/lang_test_bg/phones/roots.{txt, int} are OK

Checking data/lang_test_bg/phones/sets.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 48 entry/entries in data/lang_test_bg/phones/sets.txt
--> data/lang_test_bg/phones/sets.int corresponds to data/lang_test_bg/phones/sets.txt
--> data/lang_test_bg/phones/sets.{txt, int} are OK

Checking data/lang_test_bg/phones/extra_questions.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 2 entry/entries in data/lang_test_bg/phones/extra_questions.txt
--> data/lang_test_bg/phones/extra_questions.int corresponds to data/lang_test_bg/phones/extra_questions.txt
--> data/lang_test_bg/phones/extra_questions.{txt, int} are OK

Checking optional_silence.txt ...
--> reading data/lang_test_bg/phones/optional_silence.txt
--> data/lang_test_bg/phones/optional_silence.txt is OK

Checking disambiguation symbols: #0 and #1
--> data/lang_test_bg/phones/disambig.txt has "#0" and "#1"
--> data/lang_test_bg/phones/disambig.txt is OK

Checking topo ...

Checking word-level disambiguation symbols...
--> data/lang_test_bg/phones/wdisambig.txt exists (newer prepare_lang.sh)
Checking data/lang_test_bg/oov.{txt, int} ...
--> text seems to be UTF-8 or ASCII, checking whitespaces
--> text contains only allowed whitespaces
--> 1 entry/entries in data/lang_test_bg/oov.txt
--> data/lang_test_bg/oov.int corresponds to data/lang_test_bg/oov.txt
--> data/lang_test_bg/oov.{txt, int} are OK

--> data/lang_test_bg/L.fst is olabel sorted
--> data/lang_test_bg/L_disambig.fst is olabel sorted
--> data/lang_test_bg/G.fst is ilabel sorted
--> data/lang_test_bg/G.fst has 50 states
fstdeterminizestar data/lang_test_bg/G.fst /dev/null
--> data/lang_test_bg/G.fst is determinizable
--> utils/lang/check_g_properties.pl successfully validated data/lang_test_bg/G.fst
--> utils/lang/check_g_properties.pl succeeded.
--> Testing determinizability of L_disambig . G
fstdeterminizestar
fsttablecompose data/lang_test_bg/L_disambig.fst data/lang_test_bg/G.fst
--> L_disambig . G is determinizable
--> SUCCESS [validating lang directory data/lang_test_bg]
Succeeded in formatting data.
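
At this point data/lang and data/lang_test_bg are fully prepared. A quick sanity check I like to run here (my own habit, not part of run.sh; it assumes you are in egs/timit/s5 and have sourced path.sh so the OpenFst tools are on the PATH):

. ./path.sh
# 48 phones plus the disambiguation symbols and <eps> should be listed here
wc -l data/lang/phones.txt
# the bigram grammar compiled by arpa2fst; the log above reports 50 states
fstinfo data/lang_test_bg/G.fst | grep -E '# of (states|arcs)'
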
============================================================================
         MFCC Feature Extration & CMVN for Training and Test set
============================================================================
steps/make_mfcc.sh --cmd run.pl --nj 4 data/train exp/make_mfcc/train mfcc
steps/make_mfcc.sh: moving data/train/feats.scp to data/train/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/train
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for train
steps/compute_cmvn_stats.sh data/train exp/make_mfcc/train mfcc
Succeeded creating CMVN stats for train
steps/make_mfcc.sh --cmd run.pl --nj 4 data/dev exp/make_mfcc/dev mfcc
steps/make_mfcc.sh: moving data/dev/feats.scp to data/dev/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/dev
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for dev
steps/compute_cmvn_stats.sh data/dev exp/make_mfcc/dev mfcc
Succeeded creating CMVN stats for dev
steps/make_mfcc.sh --cmd run.pl --nj 4 data/test exp/make_mfcc/test mfcc
steps/make_mfcc.sh: moving data/test/feats.scp to data/test/.backup
utils/validate_data_dir.sh: Successfully validated data-directory data/test
steps/make_mfcc.sh: [info]: no segments file exists: assuming wav.scp indexed by utterance.
Succeeded creating MFCC features for test
steps/compute_cmvn_stats.sh data/test exp/make_mfcc/test mfcc
Succeeded creating CMVN stats for test
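
Each data directory now has a feats.scp and cmvn.scp. If you want to convince yourself the features look sane (again just a check of mine, assuming path.sh is sourced), feat-to-dim and copy-feats are enough:

# should print 13 with the stock conf/mfcc.conf
feat-to-dim scp:data/train/feats.scp -
# dump the first frames of the first utterance as text
copy-feats scp:data/train/feats.scp ark,t:- | head -n 3
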
============================================================================
                     MonoPhone Training & Decoding
============================================================================
steps/train_mono.sh --nj 4 --cmd run.pl data/train data/lang exp/mono
steps/train_mono.sh: Initializing monophone system.
steps/train_mono.sh: Compiling training graphs
steps/train_mono.sh: Aligning data equally (pass 0)
steps/train_mono.sh: Pass 1
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 2
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 3
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 4
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 5
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 6
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 7
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 8
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 9
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 10
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 11
steps/train_mono.sh: Pass 12
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 13
steps/train_mono.sh: Pass 14
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 15
steps/train_mono.sh: Pass 16
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 17
steps/train_mono.sh: Pass 18
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 19
steps/train_mono.sh: Pass 20
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 21
steps/train_mono.sh: Pass 22
steps/train_mono.sh: Pass 23
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 24
steps/train_mono.sh: Pass 25
steps/train_mono.sh: Pass 26
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 27
steps/train_mono.sh: Pass 28
steps/train_mono.sh: Pass 29
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 30
steps/train_mono.sh: Pass 31
steps/train_mono.sh: Pass 32
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 33
steps/train_mono.sh: Pass 34
steps/train_mono.sh: Pass 35
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 36
steps/train_mono.sh: Pass 37
steps/train_mono.sh: Pass 38
steps/train_mono.sh: Aligning data
steps/train_mono.sh: Pass 39
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/mono
steps/diagnostic/analyze_alignments.sh: see stats in exp/mono/log/analyze_alignments.log
2 warnings in exp/mono/log/align.*.*.log
exp/mono: nj=4 align prob=-99.15 over 3.12h [retry=0.0%, fail=0.0%] states=144 gauss=985
steps/train_mono.sh: Done training monophone system in exp/mono
tree-info exp/mono/tree
tree-info exp/mono/tree
fstminimizeencoded
fsttablecompose data/lang_test_bg/L_disambig.fst data/lang_test_bg/G.fst
fstdeterminizestar --use-log=true
fstpushspecial
fstisstochastic data/lang_test_bg/tmp/LG.fst
-0.00841336 -0.00928521
fstcomposecontext --context-size=1 --central-position=0 --read-disambig-syms=data/lang_test_bg/phones/disambig.int --write-disambig-syms=data/lang_test_bg/tmp/disambig_ilabels_1_0.int data/lang_test_bg/tmp/ilabels_1_0.28685
fstisstochastic data/lang_test_bg/tmp/CLG_1_0.fst
-0.00841336 -0.00928521
make-h-transducer --disambig-syms-out=exp/mono/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_1_0 exp/mono/tree exp/mono/final.mdl
fsttablecompose exp/mono/graph/Ha.fst data/lang_test_bg/tmp/CLG_1_0.fst
fstminimizeencoded
fstdeterminizestar --use-log=true
fstrmsymbols exp/mono/graph/disambig_tid.int
fstrmepslocal
fstisstochastic exp/mono/graph/HCLGa.fst
0.000381709 -0.00951555
add-self-loops --self-loop-scale=0.1 --reorder=true exp/mono/final.mdl
steps/decode.sh --nj 4 --cmd run.pl exp/mono/graph data/dev exp/mono/decode_dev
decode.sh: feature type is delta
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/mono/graph exp/mono/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/mono/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(5,25,122) and mean=59.2
steps/diagnostic/analyze_lats.sh: see stats in exp/mono/decode_dev/log/analyze_lattice_depth_stats.log
 steps/decode.sh --nj 4 --cmd run.pl exp/mono/graph data/test exp/mono/decode_test
decode.sh: feature type is delta
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/mono/graph exp/mono/decode_test
steps/diagnostic/analyze_lats.sh: see stats in exp/mono/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(5,27,140) and mean=71.3
steps/diagnostic/analyze_lats.sh: see stats in exp/mono/decode_test/log/analyze_lattice_depth_stats.log
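
You do not have to wait for the whole run to finish to see how a stage did. TIMIT scores with sclite, so the numbers live in score_*/*.sys rather than wer_* files, and the same one-liner used at the end of this post works per stage. For the monophone system:

grep Sum exp/mono/decode_dev/score_*/*.sys 2>/dev/null | utils/best_wer.sh
grep Sum exp/mono/decode_test/score_*/*.sys 2>/dev/null | utils/best_wer.sh
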
============================================================================
           tri1 : Deltas + Delta-Deltas Training & Decoding
============================================================================
steps/align_si.sh --boost-silence 1.25 --nj 4 --cmd run.pl data/train data/lang exp/mono exp/mono_ali
steps/align_si.sh: feature type is delta
steps/align_si.sh: aligning data in data/train using model from exp/mono, putting alignments in exp/mono_ali
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/mono_ali
steps/diagnostic/analyze_alignments.sh: see stats in exp/mono_ali/log/analyze_alignments.log
steps/align_si.sh: done aligning data.
steps/train_deltas.sh --cmd run.pl 2500 15000 data/train data/lang exp/mono_ali exp/tri1
steps/train_deltas.sh: accumulating tree stats
steps/train_deltas.sh: getting questions for tree-building, via clustering
steps/train_deltas.sh: building the tree
steps/train_deltas.sh: converting alignments from exp/mono_ali to use current tree
steps/train_deltas.sh: compiling graphs of transcripts
steps/train_deltas.sh: training pass 1
steps/train_deltas.sh: training pass 2
steps/train_deltas.sh: training pass 3
steps/train_deltas.sh: training pass 4
steps/train_deltas.sh: training pass 5
steps/train_deltas.sh: training pass 6
steps/train_deltas.sh: training pass 7
steps/train_deltas.sh: training pass 8
steps/train_deltas.sh: training pass 9
steps/train_deltas.sh: training pass 10
steps/train_deltas.sh: aligning data
steps/train_deltas.sh: training pass 11
steps/train_deltas.sh: training pass 12
steps/train_deltas.sh: training pass 13
steps/train_deltas.sh: training pass 14
steps/train_deltas.sh: training pass 15
steps/train_deltas.sh: training pass 16
steps/train_deltas.sh: training pass 17
steps/train_deltas.sh: training pass 18
steps/train_deltas.sh: training pass 19
steps/train_deltas.sh: training pass 20
steps/train_deltas.sh: aligning data
steps/train_deltas.sh: training pass 21
steps/train_deltas.sh: training pass 22
steps/train_deltas.sh: training pass 23
steps/train_deltas.sh: training pass 24
steps/train_deltas.sh: training pass 25
steps/train_deltas.sh: training pass 26
steps/train_deltas.sh: training pass 27
steps/train_deltas.sh: training pass 28
steps/train_deltas.sh: training pass 29
steps/train_deltas.sh: training pass 30
steps/train_deltas.sh: aligning data
steps/train_deltas.sh: training pass 31
steps/train_deltas.sh: training pass 32
steps/train_deltas.sh: training pass 33
steps/train_deltas.sh: training pass 34
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri1
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri1/log/analyze_alignments.log
41 warnings in exp/tri1/log/update.*.log
69 warnings in exp/tri1/log/init_model.log
1 warnings in exp/tri1/log/compile_questions.log
exp/tri1: nj=4 align prob=-95.29 over 3.12h [retry=0.0%, fail=0.0%] states=1864 gauss=15029 tree-impr=5.43
steps/train_deltas.sh: Done training system with delta+delta-delta features in exp/tri1
tree-info exp/tri1/tree
tree-info exp/tri1/tree
fstcomposecontext --context-size=3 --central-position=1 --read-disambig-syms=data/lang_test_bg/phones/disambig.int --write-disambig-syms=data/lang_test_bg/tmp/disambig_ilabels_3_1.int data/lang_test_bg/tmp/ilabels_3_1.4198
fstisstochastic data/lang_test_bg/tmp/CLG_3_1.fst
0 -0.00928518
make-h-transducer --disambig-syms-out=exp/tri1/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/tri1/tree exp/tri1/final.mdl
fstrmepslocal
fstdeterminizestar --use-log=true
fsttablecompose exp/tri1/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstrmsymbols exp/tri1/graph/disambig_tid.int
fstminimizeencoded
fstisstochastic exp/tri1/graph/HCLGa.fst
0.000475639 -0.0175772
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/tri1/final.mdl
steps/decode.sh --nj 4 --cmd run.pl exp/tri1/graph data/dev exp/tri1/decode_dev
decode.sh: feature type is delta
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/tri1/graph exp/tri1/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/tri1/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(3,12,43) and mean=19.4
steps/diagnostic/analyze_lats.sh: see stats in exp/tri1/decode_dev/log/analyze_lattice_depth_stats.log
steps/decode.sh --nj 4 --cmd run.pl exp/tri1/graph data/test exp/tri1/decode_test
decode.sh: feature type is delta
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/tri1/graph exp/tri1/decode_test
steps/diagnostic/analyze_lats.sh: see stats in exp/tri1/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(3,12,50) and mean=22.0
steps/diagnostic/analyze_lats.sh: see stats in exp/tri1/decode_test/log/analyze_lattice_depth_stats.log
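
The two numbers handed to steps/train_deltas.sh (2500 and 15000 above) are the target number of tree leaves and the total number of Gaussians; the summary line "states=1864 gauss=15029" shows what training actually converged to. Growing the triphone model is just a matter of re-running the stage with larger targets, roughly like this (a sketch only; exp/tri1_big is a made-up output directory):

steps/train_deltas.sh --cmd run.pl 3500 25000 \
  data/train data/lang exp/mono_ali exp/tri1_big
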
============================================================================
                 tri2 : LDA + MLLT Training & Decoding
============================================================================
steps/align_si.sh --nj 4 --cmd run.pl data/train data/lang exp/tri1 exp/tri1_ali
steps/align_si.sh: feature type is delta
steps/align_si.sh: aligning data in data/train using model from exp/tri1, putting alignments in exp/tri1_ali
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri1_ali
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri1_ali/log/analyze_alignments.log
steps/align_si.sh: done aligning data.
steps/train_lda_mllt.sh --cmd run.pl --splice-opts --left-context=3 --right-context=3 2500 15000 data/train data/lang exp/tri1_ali exp/tri2
steps/train_lda_mllt.sh: Accumulating LDA statistics.
steps/train_lda_mllt.sh: Accumulating tree stats
steps/train_lda_mllt.sh: Getting questions for tree clustering.
steps/train_lda_mllt.sh: Building the tree
steps/train_lda_mllt.sh: Initializing the model
steps/train_lda_mllt.sh: Converting alignments from exp/tri1_ali to use current tree
steps/train_lda_mllt.sh: Compiling graphs of transcripts
Training pass 1
Training pass 2
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 3
Training pass 4
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 5
Training pass 6
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 7
Training pass 8
Training pass 9
Training pass 10
Aligning data
Training pass 11
Training pass 12
steps/train_lda_mllt.sh: Estimating MLLT
Training pass 13
Training pass 14
Training pass 15
Training pass 16
Training pass 17
Training pass 18
Training pass 19
Training pass 20
Aligning data
Training pass 21
Training pass 22
Training pass 23
Training pass 24
Training pass 25
Training pass 26
Training pass 27
Training pass 28
Training pass 29
Training pass 30
Aligning data
Training pass 31
Training pass 32
Training pass 33
Training pass 34
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri2
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri2/log/analyze_alignments.log
1 warnings in exp/tri2/log/compile_questions.log
80 warnings in exp/tri2/log/init_model.log
114 warnings in exp/tri2/log/update.*.log
exp/tri2: nj=4 align prob=-47.91 over 3.12h [retry=0.0%, fail=0.0%] states=2016 gauss=15024 tree-impr=5.60 lda-sum=28.37 mllt:impr,logdet=1.61,2.20
steps/train_lda_mllt.sh: Done training system with LDA+MLLT features in exp/tri2
tree-info exp/tri2/tree
tree-info exp/tri2/tree
make-h-transducer --disambig-syms-out=exp/tri2/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/tri2/tree exp/tri2/final.mdl
fstminimizeencoded
fstrmsymbols exp/tri2/graph/disambig_tid.int
fstrmepslocal
fsttablecompose exp/tri2/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstdeterminizestar --use-log=true
fstisstochastic exp/tri2/graph/HCLGa.fst
0.000485452 -0.0175772
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/tri2/final.mdl
steps/decode.sh --nj 4 --cmd run.pl exp/tri2/graph data/dev exp/tri2/decode_dev
decode.sh: feature type is lda
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/tri2/graph exp/tri2/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/tri2/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(2,8,28) and mean=13.3
steps/diagnostic/analyze_lats.sh: see stats in exp/tri2/decode_dev/log/analyze_lattice_depth_stats.log
steps/decode.sh --nj 4 --cmd run.pl exp/tri2/graph data/test exp/tri2/decode_test
decode.sh: feature type is lda
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/tri2/graph exp/tri2/decode_test
steps/diagnostic/analyze_lats.sh: see stats in exp/tri2/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(2,9,33) and mean=14.5
steps/diagnostic/analyze_lats.sh: see stats in exp/tri2/decode_test/log/analyze_lattice_depth_stats.log
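
tri2 splices 7 frames of MFCCs (±3) and estimates an LDA+MLLT transform on top of them; the --splice-opts value is a single quoted argument even though the echoed command above drops the quotes. Asking for a wider context would look roughly like this (a sketch; exp/tri2_wide is just an illustrative name):

steps/train_lda_mllt.sh --cmd run.pl \
  --splice-opts "--left-context=5 --right-context=5" \
  2500 15000 data/train data/lang exp/tri1_ali exp/tri2_wide
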
============================================================================
              tri3 : LDA + MLLT + SAT Training & Decoding
============================================================================
steps/align_si.sh --nj 4 --cmd run.pl --use-graphs true data/train data/lang exp/tri2 exp/tri2_ali
steps/align_si.sh: feature type is lda
steps/align_si.sh: aligning data in data/train using model from exp/tri2, putting alignments in exp/tri2_ali
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri2_ali
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri2_ali/log/analyze_alignments.log
steps/align_si.sh: done aligning data.
steps/train_sat.sh --cmd run.pl 2500 15000 data/train data/lang exp/tri2_ali exp/tri3
steps/train_sat.sh: feature type is lda
steps/train_sat.sh: obtaining initial fMLLR transforms since not present in exp/tri2_ali
steps/train_sat.sh: Accumulating tree stats
steps/train_sat.sh: Getting questions for tree clustering.
steps/train_sat.sh: Building the tree
steps/train_sat.sh: Initializing the model
steps/train_sat.sh: Converting alignments from exp/tri2_ali to use current tree
steps/train_sat.sh: Compiling graphs of transcripts
Pass 1
Pass 2
Estimating fMLLR transforms
Pass 3
Pass 4
Estimating fMLLR transforms
Pass 5
Pass 6
Estimating fMLLR transforms
Pass 7
Pass 8
Pass 9
Pass 10
Aligning data
Pass 11
Pass 12
Estimating fMLLR transforms
Pass 13
Pass 14
Pass 15
Pass 16
Pass 17
Pass 18
Pass 19
Pass 20
Aligning data
Pass 21
Pass 22
Pass 23
Pass 24
Pass 25
Pass 26
Pass 27
Pass 28
Pass 29
Pass 30
Aligning data
Pass 31
Pass 32
Pass 33
Pass 34
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri3
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri3/log/analyze_alignments.log
17 warnings in exp/tri3/log/update.*.log
1 warnings in exp/tri3/log/compile_questions.log
43 warnings in exp/tri3/log/init_model.log
steps/train_sat.sh: Likelihood evolution:
-50.2363 -49.3655 -49.1625 -48.9579 -48.3156 -47.6139 -47.143 -46.8836 -46.6649 -46.1464 -45.8874 -45.5681 -45.3851 -45.2436 -45.1131 -44.9983 -44.8893 -44.7822 -44.6776 -44.5167 -44.3818 -44.2918 -44.2077 -44.1263 -44.0484 -43.9711 -43.8974 -43.8247 -43.7526 -43.6564 -43.5809 -43.5552 -43.5391 -43.5278
exp/tri3: nj=4 align prob=-47.14 over 3.12h [retry=0.0%, fail=0.0%] states=1960 gauss=15021 fmllr-impr=4.02 over 2.79h tree-impr=8.78
steps/train_sat.sh: done training SAT system in exp/tri3
tree-info exp/tri3/tree
tree-info exp/tri3/tree
make-h-transducer --disambig-syms-out=exp/tri3/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/tri3/tree exp/tri3/final.mdl
fsttablecompose exp/tri3/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstminimizeencoded
fstrmepslocal
fstrmsymbols exp/tri3/graph/disambig_tid.int
fstdeterminizestar --use-log=true
fstisstochastic exp/tri3/graph/HCLGa.fst
0.000474884 -0.0175772
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/tri3/final.mdl
steps/decode_fmllr.sh --nj 4 --cmd run.pl exp/tri3/graph data/dev exp/tri3/decode_dev
steps/decode.sh --scoring-opts  --num-threads 1 --skip-scoring false --acwt 0.083333 --nj 4 --cmd run.pl --beam 10.0 --model exp/tri3/final.alimdl --max-active 2000 exp/tri3/graph data/dev exp/tri3/decode_dev.si
decode.sh: feature type is lda
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/tri3/graph exp/tri3/decode_dev.si
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_dev.si/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(2,9,33) and mean=15.2
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_dev.si/log/analyze_lattice_depth_stats.log
steps/decode_fmllr.sh: feature type is lda
steps/decode_fmllr.sh: getting first-pass fMLLR transforms.
steps/decode_fmllr.sh: doing main lattice generation phase
steps/decode_fmllr.sh: estimating fMLLR transforms a second time.
steps/decode_fmllr.sh: doing a final pass of acoustic rescoring.
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/tri3/graph exp/tri3/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(1,5,16) and mean=7.7
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_dev/log/analyze_lattice_depth_stats.log
steps/decode_fmllr.sh --nj 4 --cmd run.pl exp/tri3/graph data/test exp/tri3/decode_test
steps/decode.sh --scoring-opts  --num-threads 1 --skip-scoring false --acwt 0.083333 --nj 4 --cmd run.pl --beam 10.0 --model exp/tri3/final.alimdl --max-active 2000 exp/tri3/graph data/test exp/tri3/decode_test.si
decode.sh: feature type is lda
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/tri3/graph exp/tri3/decode_test.si
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_test.si/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(2,10,36) and mean=16.2
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_test.si/log/analyze_lattice_depth_stats.log
steps/decode_fmllr.sh: feature type is lda
steps/decode_fmllr.sh: getting first-pass fMLLR transforms.
steps/decode_fmllr.sh: doing main lattice generation phase
steps/decode_fmllr.sh: estimating fMLLR transforms a second time.
steps/decode_fmllr.sh: doing a final pass of acoustic rescoring.
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/tri3/graph exp/tri3/decode_test
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(1,5,18) and mean=8.5
steps/diagnostic/analyze_lats.sh: see stats in exp/tri3/decode_test/log/analyze_lattice_depth_stats.log
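
steps/decode_fmllr.sh runs a speaker-independent first pass (the decode_dev.si and decode_test.si directories above), estimates per-speaker fMLLR transforms, and then rescores with them; those transforms are what the later SGMM2 and DNN decodes pick up via --transform-dir. To check that they were actually written (my own check; the trans.1 ... trans.4 naming assumes the default one-archive-per-decoding-job layout):

ls exp/tri3/decode_dev/trans.*
# each archive holds one fMLLR transform matrix per speaker
copy-matrix ark:exp/tri3/decode_dev/trans.1 ark,t:- | head -n 5
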
============================================================================
                        SGMM2 Training & Decoding
============================================================================
steps/align_fmllr.sh --nj 4 --cmd run.pl data/train data/lang exp/tri3 exp/tri3_ali
steps/align_fmllr.sh: feature type is lda
steps/align_fmllr.sh: compiling training graphs
steps/align_fmllr.sh: aligning data in data/train using exp/tri3/final.alimdl and speaker-independent features.
steps/align_fmllr.sh: computing fMLLR transforms
steps/align_fmllr.sh: doing final alignment.
steps/align_fmllr.sh: done aligning data.
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/tri3_ali
steps/diagnostic/analyze_alignments.sh: see stats in exp/tri3_ali/log/analyze_alignments.log
steps/train_ubm.sh --cmd run.pl 400 data/train data/lang exp/tri3_ali exp/ubm4
steps/train_ubm.sh: feature type is lda
steps/train_ubm.sh: using transforms from exp/tri3_ali
steps/train_ubm.sh: clustering model exp/tri3_ali/final.mdl to get initial UBM
steps/train_ubm.sh: doing Gaussian selection
Pass 0
Pass 1
Pass 2
steps/train_sgmm2.sh --cmd run.pl 7000 9000 data/train data/lang exp/tri3_ali exp/ubm4/final.ubm exp/sgmm2_4
steps/train_sgmm2.sh: feature type is lda
steps/train_sgmm2.sh: using transforms from exp/tri3_ali
steps/train_sgmm2.sh: accumulating tree stats
steps/train_sgmm2.sh: Getting questions for tree clustering.
steps/train_sgmm2.sh: Building the tree
steps/train_sgmm2.sh: Initializing the model
steps/train_sgmm2.sh: doing Gaussian selection
steps/train_sgmm2.sh: compiling training graphs
steps/train_sgmm2.sh: converting alignments
steps/train_sgmm2.sh: training pass 0 ...
steps/train_sgmm2.sh: training pass 1 ...
steps/train_sgmm2.sh: training pass 2 ...
steps/train_sgmm2.sh: training pass 3 ...
steps/train_sgmm2.sh: training pass 4 ...
steps/train_sgmm2.sh: training pass 5 ...
steps/train_sgmm2.sh: re-aligning data
steps/train_sgmm2.sh: training pass 6 ...
steps/train_sgmm2.sh: training pass 7 ...
steps/train_sgmm2.sh: training pass 8 ...
steps/train_sgmm2.sh: training pass 9 ...
steps/train_sgmm2.sh: training pass 10 ...
steps/train_sgmm2.sh: re-aligning data
steps/train_sgmm2.sh: training pass 11 ...
steps/train_sgmm2.sh: training pass 12 ...
steps/train_sgmm2.sh: training pass 13 ...
steps/train_sgmm2.sh: training pass 14 ...
steps/train_sgmm2.sh: training pass 15 ...
steps/train_sgmm2.sh: re-aligning data
steps/train_sgmm2.sh: training pass 16 ...
steps/train_sgmm2.sh: training pass 17 ...
steps/train_sgmm2.sh: training pass 18 ...
steps/train_sgmm2.sh: training pass 19 ...
steps/train_sgmm2.sh: training pass 20 ...
steps/train_sgmm2.sh: training pass 21 ...
steps/train_sgmm2.sh: training pass 22 ...
steps/train_sgmm2.sh: training pass 23 ...
steps/train_sgmm2.sh: training pass 24 ...
steps/train_sgmm2.sh: building alignment model (pass 25)
steps/train_sgmm2.sh: building alignment model (pass 26)
steps/train_sgmm2.sh: building alignment model (pass 27)
216 warnings in exp/sgmm2_4/log/update_ali.*.log
1867 warnings in exp/sgmm2_4/log/update.*.log
1 warnings in exp/sgmm2_4/log/compile_questions.log
Done
tree-info exp/sgmm2_4/tree
tree-info exp/sgmm2_4/tree
make-h-transducer --disambig-syms-out=exp/sgmm2_4/graph/disambig_tid.int --transition-scale=1.0 data/lang_test_bg/tmp/ilabels_3_1 exp/sgmm2_4/tree exp/sgmm2_4/final.mdl
fstrmsymbols exp/sgmm2_4/graph/disambig_tid.int
fsttablecompose exp/sgmm2_4/graph/Ha.fst data/lang_test_bg/tmp/CLG_3_1.fst
fstrmepslocal
fstdeterminizestar --use-log=true
fstminimizeencoded
fstisstochastic exp/sgmm2_4/graph/HCLGa.fst
0.000474884 -0.0175772
HCLGa is not stochastic
add-self-loops --self-loop-scale=0.1 --reorder=true exp/sgmm2_4/final.mdl
steps/decode_sgmm2.sh --nj 4 --cmd run.pl --transform-dir exp/tri3/decode_dev exp/sgmm2_4/graph data/dev exp/sgmm2_4/decode_dev
steps/decode_sgmm2.sh: feature type is lda
steps/decode_sgmm2.sh: using transforms from exp/tri3/decode_dev
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/sgmm2_4/graph exp/sgmm2_4/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/sgmm2_4/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(2,6,20) and mean=9.3
steps/diagnostic/analyze_lats.sh: see stats in exp/sgmm2_4/decode_dev/log/analyze_lattice_depth_stats.log
steps/decode_sgmm2.sh --nj 4 --cmd run.pl --transform-dir exp/tri3/decode_test exp/sgmm2_4/graph data/test exp/sgmm2_4/decode_test
steps/decode_sgmm2.sh: feature type is lda
steps/decode_sgmm2.sh: using transforms from exp/tri3/decode_test
steps/diagnostic/analyze_lats.sh --cmd run.pl exp/sgmm2_4/graph exp/sgmm2_4/decode_test
steps/diagnostic/analyze_lats.sh: see stats in exp/sgmm2_4/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(2,6,23) and mean=10.7
steps/diagnostic/analyze_lats.sh: see stats in exp/sgmm2_4/decode_test/log/analyze_lattice_depth_stats.log
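
For steps/train_sgmm2.sh the two numbers (7000 and 9000 above) are the number of tree leaves and the total number of substates, and the 400 given to steps/train_ubm.sh is the UBM size. Scaling the SGMM system up would look roughly like this (a sketch; exp/ubm5 and exp/sgmm2_5 are made-up directory names):

steps/train_ubm.sh --cmd run.pl 600 data/train data/lang exp/tri3_ali exp/ubm5
steps/train_sgmm2.sh --cmd run.pl 9000 12000 \
  data/train data/lang exp/tri3_ali exp/ubm5/final.ubm exp/sgmm2_5
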
============================================================================
                    MMI + SGMM2 Training & Decoding
============================================================================
steps/align_sgmm2.sh --nj 4 --cmd run.pl --transform-dir exp/tri3_ali --use-graphs true --use-gselect true data/train data/lang exp/sgmm2_4 exp/sgmm2_4_ali
steps/align_sgmm2.sh: feature type is lda
steps/align_sgmm2.sh: using transforms from exp/tri3_ali
steps/align_sgmm2.sh: aligning data in data/train using model exp/sgmm2_4/final.alimdl
steps/align_sgmm2.sh: computing speaker vectors (1st pass)
steps/align_sgmm2.sh: computing speaker vectors (2nd pass)
steps/align_sgmm2.sh: doing final alignment.
steps/align_sgmm2.sh: done aligning data.
steps/diagnostic/analyze_alignments.sh --cmd run.pl data/lang exp/sgmm2_4_ali
steps/diagnostic/analyze_alignments.sh: see stats in exp/sgmm2_4_ali/log/analyze_alignments.log
steps/make_denlats_sgmm2.sh --nj 4 --sub-split 4 --acwt 0.2 --lattice-beam 10.0 --beam 18.0 --cmd run.pl --transform-dir exp/tri3_ali data/train data/lang exp/sgmm2_4_ali exp/sgmm2_4_denlats
steps/make_denlats_sgmm2.sh: Making unigram grammar FST in exp/sgmm2_4_denlats/lang
steps/make_denlats_sgmm2.sh: Compiling decoding graph in exp/sgmm2_4_denlats/dengraph
tree-info exp/sgmm2_4_ali/tree
tree-info exp/sgmm2_4_ali/tree
fstminimizeencoded
fstdeterminizestar --use-log=true
fsttablecompose exp/sgmm2_4_denlats/lang/L_disambig.fst exp/sgmm2_4_denlats/lang/G.fst
fstpushspecial
fstisstochastic exp/sgmm2_4_denlats/lang/tmp/LG.fst
1.27271e-05 1.27271e-05
fstcomposecontext --context-size=3 --central-position=1 --read-disambig-syms=exp/sgmm2_4_denlats/lang/phones/disambig.int --write-disambig-syms=exp/sgmm2_4_denlats/lang/tmp/disambig_ilabels_3_1.int exp/sgmm2_4_denlats/lang/tmp/ilabels_3_1.11851
fstisstochastic exp/sgmm2_4_denlats/lang/tmp/CLG_3_1.fst
1.27657e-05 0
make-h-transducer --disambig-syms-out=exp/sgmm2_4_denlats/dengraph/disambig_tid.int --transition-scale=1.0 exp/sgmm2_4_denlats/lang/tmp/ilabels_3_1 exp/sgmm2_4_ali/tree exp/sgmm2_4_ali/final.mdl
fstdeterminizestar --use-log=true
fsttablecompose exp/sgmm2_4_denlats/dengraph/Ha.fst exp/sgmm2_4_denlats/lang/tmp/CLG_3_1.fst
fstminimizeencoded
fstrmsymbols exp/sgmm2_4_denlats/dengraph/disambig_tid.int
fstrmepslocal
fstisstochastic exp/sgmm2_4_denlats/dengraph/HCLGa.fst
0.000495123 -0.000486612
add-self-loops --self-loop-scale=0.1 --reorder=true exp/sgmm2_4_ali/final.mdl
steps/make_denlats_sgmm2.sh: feature type is lda
steps/make_denlats_sgmm2.sh: using fMLLR transforms from exp/tri3_ali
steps/make_denlats_sgmm2.sh: Merging archives for data subset 1
steps/make_denlats_sgmm2.sh: Merging archives for data subset 2
steps/make_denlats_sgmm2.sh: Merging archives for data subset 3
steps/make_denlats_sgmm2.sh: Merging archives for data subset 4
steps/make_denlats_sgmm2.sh: done generating denominator lattices with SGMMs.
steps/train_mmi_sgmm2.sh --acwt 0.2 --cmd run.pl --transform-dir exp/tri3_ali --boost 0.1 --drop-frames true data/train data/lang exp/sgmm2_4_ali exp/sgmm2_4_denlats exp/sgmm2_4_mmi_b0.1
steps/train_mmi_sgmm2.sh: feature type is lda
steps/train_mmi_sgmm2.sh: using transforms from exp/tri3_ali
steps/train_mmi_sgmm2.sh: using speaker vectors from exp/sgmm2_4_ali
steps/train_mmi_sgmm2.sh: using Gaussian-selection info from exp/sgmm2_4_ali
Iteration 0 of MMI training
Iteration 0: objf was 0.501034926978036, MMI auxf change was 0.0162451692399604
Iteration 1 of MMI training
Iteration 1: objf was 0.515739493397113, MMI auxf change was 0.0023044336753427
Iteration 2 of MMI training
Iteration 2: objf was 0.518331482425786, MMI auxf change was 0.000590636926876495
Iteration 3 of MMI training
Iteration 3: objf was 0.519183080917834, MMI auxf change was 0.000380760350739628
MMI training finished
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 1 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it1
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/1.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 1 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it1
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/1.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 2 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it2
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/2.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 2 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it2
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/2.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 3 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it3
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/3.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 3 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it3
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/3.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 4 --transform-dir exp/tri3/decode_dev data/lang_test_bg data/dev exp/sgmm2_4/decode_dev exp/sgmm2_4_mmi_b0.1/decode_dev_it4
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_dev
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_dev
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/4.mdl
steps/decode_sgmm2_rescore.sh --cmd run.pl --iter 4 --transform-dir exp/tri3/decode_test data/lang_test_bg data/test exp/sgmm2_4/decode_test exp/sgmm2_4_mmi_b0.1/decode_test_it4
steps/decode_sgmm2_rescore.sh: using speaker vectors from exp/sgmm2_4/decode_test
steps/decode_sgmm2_rescore.sh: feature type is lda
steps/decode_sgmm2_rescore.sh: using transforms from exp/tri3/decode_test
steps/decode_sgmm2_rescore.sh: rescoring lattices with SGMM model in exp/sgmm2_4_mmi_b0.1/4.mdl
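
Each MMI iteration only rescores the SGMM2 lattices, so there is one decode directory per iteration (decode_dev_it1 ... it4, and likewise for test). Once scoring has run, a small loop compares them with the same best_wer.sh trick:

for it in 1 2 3 4; do
  grep Sum exp/sgmm2_4_mmi_b0.1/decode_dev_it${it}/score_*/*.sys 2>/dev/null | utils/best_wer.sh
done
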
============================================================================
                    DNN Hybrid Training & Decoding
============================================================================
steps/nnet2/train_tanh.sh --mix-up 5000 --initial-learning-rate 0.015 --final-learning-rate 0.002 --num-hidden-layers 2 --num-jobs-nnet 4 --cmd run.pl data/train data/lang exp/tri3_ali exp/tri4_nnet
steps/nnet2/train_tanh.sh: calling get_lda.sh
steps/nnet2/get_lda.sh --transform-dir exp/tri3_ali --splice-width 4 --cmd run.pl data/train data/lang exp/tri3_ali exp/tri4_nnet
steps/nnet2/get_lda.sh: feature type is lda
steps/nnet2/get_lda.sh: using transforms from exp/tri3_ali
feat-to-dim 'ark,s,cs:utils/subset_scp.pl --quiet 2500 data/train/split4/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split4/1/utt2spk scp:data/train/split4/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split4/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- |' -
splice-feats --left-context=3 --right-context=3 ark:- ark:-
transform-feats --utt2spk=ark:data/train/split4/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:-
apply-cmvn --utt2spk=ark:data/train/split4/1/utt2spk scp:data/train/split4/1/cmvn.scp scp:- ark:-
transform-feats exp/tri4_nnet/final.mat ark:- ark:-
WARNING (feat-to-dim[5.4.208~1-6f214]:Close():kaldi-io.cc:515) Pipe utils/subset_scp.pl --quiet 2500 data/train/split4/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split4/1/utt2spk scp:data/train/split4/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split4/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | had nonzero return status 36096
feat-to-dim 'ark,s,cs:utils/subset_scp.pl --quiet 2500 data/train/split4/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split4/1/utt2spk scp:data/train/split4/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split4/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | splice-feats --left-context=4 --right-context=4 ark:- ark:- |' -
splice-feats --left-context=3 --right-context=3 ark:- ark:-
transform-feats exp/tri4_nnet/final.mat ark:- ark:-
apply-cmvn --utt2spk=ark:data/train/split4/1/utt2spk scp:data/train/split4/1/cmvn.scp scp:- ark:-
transform-feats --utt2spk=ark:data/train/split4/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:-
splice-feats --left-context=4 --right-context=4 ark:- ark:-
WARNING (feat-to-dim[5.4.208~1-6f214]:Close():kaldi-io.cc:515) Pipe utils/subset_scp.pl --quiet 2500 data/train/split4/1/feats.scp | apply-cmvn  --utt2spk=ark:data/train/split4/1/utt2spk scp:data/train/split4/1/cmvn.scp scp:- ark:- | splice-feats --left-context=3 --right-context=3 ark:- ark:- | transform-feats exp/tri4_nnet/final.mat ark:- ark:- | transform-feats --utt2spk=ark:data/train/split4/1/utt2spk ark:exp/tri3_ali/trans.1 ark:- ark:- | splice-feats --left-context=4 --right-context=4 ark:- ark:- | had nonzero return status 36096
steps/nnet2/get_lda.sh: Accumulating LDA statistics.
steps/nnet2/get_lda.sh: Finished estimating LDA
steps/nnet2/train_tanh.sh: calling get_egs.sh
steps/nnet2/get_egs.sh --transform-dir exp/tri3_ali --splice-width 4 --samples-per-iter 200000 --num-jobs-nnet 4 --stage 0 --cmd run.pl --io-opts --max-jobs-run 5 data/train data/lang exp/tri3_ali exp/tri4_nnet
steps/nnet2/get_egs.sh: feature type is lda
steps/nnet2/get_egs.sh: using transforms from exp/tri3_ali
steps/nnet2/get_egs.sh: working out number of frames of training data
utils/data/get_utt2dur.sh: segments file does not exist so getting durations from wave files
utils/data/get_utt2dur.sh: successfully obtained utterance lengths from sphere-file headers
utils/data/get_utt2dur.sh: computed data/train/utt2dur
feat-to-len 'scp:head -n 10 data/train/feats.scp|' ark,t:-
steps/nnet2/get_egs.sh: Every epoch, splitting the data up into 1 iterations,
steps/nnet2/get_egs.sh: giving samples-per-iteration of 283054 (you requested 200000).
Getting validation and training subset examples.
steps/nnet2/get_egs.sh: extracting validation and training-subset alignments.
copy-int-vector ark:- ark,t:-
LOG (copy-int-vector[5.4.208~1-6f214]:main():copy-int-vector.cc:83) Copied 3696 vectors of int32.
Getting subsets of validation examples for diagnostics and combination.
Creating training examples
Generating training examples on disk
steps/nnet2/get_egs.sh: rearranging examples into parts for different parallel jobs
steps/nnet2/get_egs.sh: Since iters-per-epoch == 1, just concatenating the data.
Shuffling the order of training examples
(in order to avoid stressing the disk, these won't all run at once).
steps/nnet2/get_egs.sh: Finished preparing training examples
steps/nnet2/train_tanh.sh: initializing neural net
Training transition probabilities and setting priors
steps/nnet2/train_tanh.sh: Will train for 15 + 5 epochs, equalling
steps/nnet2/train_tanh.sh: 15 + 5 = 20 iterations,
steps/nnet2/train_tanh.sh: (while reducing learning rate) + (with constant learning rate).
Training neural net (pass 0)
Training neural net (pass 1)
Training neural net (pass 2)
Training neural net (pass 3)
Training neural net (pass 4)
Training neural net (pass 5)
Training neural net (pass 6)
Training neural net (pass 7)
Training neural net (pass 8)
Training neural net (pass 9)
Training neural net (pass 10)
Training neural net (pass 11)
Training neural net (pass 12)
Mixing up from 1960 to 5000 components
Training neural net (pass 13)
Training neural net (pass 14)
Training neural net (pass 15)
Training neural net (pass 16)
Training neural net (pass 17)
Training neural net (pass 18)
Training neural net (pass 19)
Setting num_iters_final=5
Getting average posterior for purposes of adjusting the priors.
Re-adjusting priors based on computed posteriors
Done
Cleaning up data
steps/nnet2/remove_egs.sh: Finished deleting examples in exp/tri4_nnet/egs
Removing most of the models
steps/nnet2/decode.sh --cmd run.pl --nj 4 --num-threads 6 --transform-dir exp/tri3/decode_dev exp/tri3/graph data/dev exp/tri4_nnet/decode_dev
steps/nnet2/decode.sh: feature type is lda
steps/nnet2/decode.sh: using transforms from exp/tri3/decode_dev
steps/diagnostic/analyze_lats.sh --cmd run.pl --iter final exp/tri3/graph exp/tri4_nnet/decode_dev
steps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_dev/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(4,15,67) and mean=30.3
steps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_dev/log/analyze_lattice_depth_stats.log
score best paths
score confidence and timing with sclite
Decoding done.
steps/nnet2/decode.sh --cmd run.pl --nj 4 --num-threads 6 --transform-dir exp/tri3/decode_test exp/tri3/graph data/test exp/tri4_nnet/decode_test
steps/nnet2/decode.sh: feature type is lda
steps/nnet2/decode.sh: using transforms from exp/tri3/decode_test
steps/diagnostic/analyze_lats.sh --cmd run.pl --iter final exp/tri3/graph exp/tri4_nnet/decode_test
steps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_test/log/analyze_alignments.log
Overall, lattice depth (10,50,90-percentile)=(4,17,80) and mean=36.2
steps/diagnostic/analyze_lats.sh: see stats in exp/tri4_nnet/decode_test/log/analyze_lattice_depth_stats.log
score best paths
score confidence and timing with sclite
Decoding done.
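
The nnet2 "tanh" network trained here is deliberately small (2 hidden layers, mixed up to 5000 components) so that it finishes on a laptop CPU. To see what was actually built, nnet-am-info prints a summary of the acoustic model (just a post-hoc check):

nnet-am-info exp/tri4_nnet/final.mdl | head -n 20
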
============================================================================
                    System Combination (DNN+SGMM)
============================================================================

============================================================================
               DNN Hybrid Training & Decoding (Karel's recipe)
============================================================================
steps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --transform-dir exp/tri3/decode_test data-fmllr-tri3/test data/test exp/tri3 data-fmllr-tri3/test/log data-fmllr-tri3/test/data
steps/nnet/make_fmllr_feats.sh: feature type is lda_fmllr
utils/copy_data_dir.sh: copied data from data/test to data-fmllr-tri3/test
utils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/test
steps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/test --> data-fmllr-tri3/test, using : raw-trans None, gmm exp/tri3, trans exp/tri3/decode_test
steps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --transform-dir exp/tri3/decode_dev data-fmllr-tri3/dev data/dev exp/tri3 data-fmllr-tri3/dev/log data-fmllr-tri3/dev/data
steps/nnet/make_fmllr_feats.sh: feature type is lda_fmllr
utils/copy_data_dir.sh: copied data from data/dev to data-fmllr-tri3/dev
utils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/dev
steps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/dev --> data-fmllr-tri3/dev, using : raw-trans None, gmm exp/tri3, trans exp/tri3/decode_dev
steps/nnet/make_fmllr_feats.sh --nj 10 --cmd run.pl --transform-dir exp/tri3_ali data-fmllr-tri3/train data/train exp/tri3 data-fmllr-tri3/train/log data-fmllr-tri3/train/data
steps/nnet/make_fmllr_feats.sh: feature type is lda_fmllr
utils/copy_data_dir.sh: copied data from data/train to data-fmllr-tri3/train
utils/validate_data_dir.sh: Successfully validated data-directory data-fmllr-tri3/train
steps/nnet/make_fmllr_feats.sh: Done!, type lda_fmllr, data/train --> data-fmllr-tri3/train, using : raw-trans None, gmm exp/tri3, trans exp/tri3_ali
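(These three make_fmllr_feats.sh calls dump fMLLR speaker-adapted features into data-fmllr-tri3/{test,dev,train}; Karel's DNN recipe trains on these rather than on the raw MFCCs. A minimal sanity check of the dump, assuming the default output layout shown above:)

# check that the fMLLR feature dump exists and has a sane dimension (40 with the default LDA setup)
head -n 1 data-fmllr-tri3/train/feats.scp
feat-to-dim scp:data-fmllr-tri3/train/feats.scp -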
Speakers, src=462, trn=416, cv=46 /tmp/wxy_ldRFl/speakers_cv
utils/data/subset_data_dir.sh: reducing #utt from 3696 to 3328
utils/data/subset_data_dir.sh: reducing #utt from 3696 to 368
local/nnet/run_dnn.sh: line 56: exp/dnn4_pretrain-dbn/log/pretrain_dbn.log: No such file or directory
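(This is where Karel's recipe dies on this machine. The bash error pattern — "line 56: exp/dnn4_pretrain-dbn/log/pretrain_dbn.log: No such file or directory" — is what you get when the $cuda_cmd variable in front of the log path is empty, so bash tries to execute the log filename itself; it can also simply mean the GPU/CUDA side is not set up, since the RBM pretraining in steps/nnet/pretrain_dbn.sh expects a CUDA-enabled Kaldi build. Either way the exp/dnn4* models are never trained, which is why no Karel-recipe DNN results appear in the tables below; the nnet2 (tri4_nnet), SGMM and combination results are unaffected. A hedged sketch of what to check before re-running:)

# 1) make sure cmd.sh defines cuda_cmd, e.g. for a single local GPU:
export cuda_cmd="run.pl --gpu 1"
# 2) confirm the GPU is visible and that Kaldi was configured with CUDA support:
nvidia-smi
grep -i "CUDA" ~/kaldi/src/kaldi.mk    # expect a "CUDA = true" line in a CUDA-enabled build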


wxy@HP-WXY:~/kaldi/egs/timit/s5$ for x in exp/{mono,tri,sgmm,dnn,combine}*/decode*; do [ -d $x ] && echo $x | grep "${1:-.*}" >/dev/null && grep WER $x/wer_* 2>/dev/null | utils/best_wer.sh; done
wxy@HP-WXY:~/kaldi/egs/timit/s5$ for x in exp/{mono,tri,sgmm,dnn,combine}*/decode*; do [ -d $x ] && echo $x | grep "${1:-.*}" >/dev/null && grep Sum $x/score_*/*.sys 2>/dev/null | utils/best_wer.sh; done
============================================================================
                     MonoPhone Training & Decoding
============================================================================
%WER 31.6 | 400 15057 | 71.9 19.3 8.8 3.5 31.6 100.0 | -0.477 | exp/mono/decode_dev/score_5/ctm_39phn.filt.sys
%WER 32.6 | 192 7215 | 70.2 19.5 10.3 2.9 32.6 100.0 | -0.210 | exp/mono/decode_test/score_6/ctm_39phn.filt.sys
============================================================================
           tri1 : Deltas + Delta-Deltas Training & Decoding
============================================================================
%WER 24.8 | 400 15057 | 79.1 15.6 5.4 3.9 24.8 99.8 | -0.114 | exp/tri1/decode_dev/score_10/ctm_39phn.filt.sys
%WER 26.1 | 192 7215 | 77.8 16.6 5.6 3.9 26.1 100.0 | -0.086 | exp/tri1/decode_test/score_10/ctm_39phn.filt.sys
============================================================================
                 tri2 : LDA + MLLT Training & Decoding
============================================================================
%WER 22.9 | 400 15057 | 80.8 14.4 4.8 3.6 22.9 99.5 | -0.241 | exp/tri2/decode_dev/score_10/ctm_39phn.filt.sys
%WER 23.7 | 192 7215 | 79.9 14.8 5.3 3.6 23.7 99.5 | -0.294 | exp/tri2/decode_test/score_10/ctm_39phn.filt.sys
============================================================================
               tri3 : LDA + MLLT + SAT Training & Decoding
============================================================================
%WER 20.6 | 400 15057 | 82.5 12.9 4.5 3.1 20.6 99.5 | -0.573 | exp/tri3/decode_dev/score_10/ctm_39phn.filt.sys
%WER 23.6 | 400 15057 | 79.8 15.0 5.2 3.4 23.6 99.8 | -0.181 | exp/tri3/decode_dev.si/score_10/ctm_39phn.filt.sys
%WER 21.8 | 192 7215 | 82.2 13.6 4.3 4.0 21.8 99.5 | -1.123 | exp/tri3/decode_test/score_7/ctm_39phn.filt.sys
%WER 24.4 | 192 7215 | 79.0 15.3 5.6 3.4 24.4 99.5 | -0.222 | exp/tri3/decode_test.si/score_10/ctm_39phn.filt.sys

============================================================================
                     DNN Hybrid Training & Decoding
============================================================================
%WER 19.8 | 400 15057 | 82.4 12.1 5.5 2.3 19.8 99.8 | -0.271 | exp/tri4_nnet/decode_dev/score_8/ctm_39phn.filt.sys
%WER 21.3 | 192 7215 | 81.6 12.9 5.5 3.0 21.3 100.0 | -0.639 | exp/tri4_nnet/decode_test/score_6/ctm_39phn.filt.sys
============================================================================
                        SGMM2 Training & Decoding
============================================================================
%WER 18.2 | 400 15057 | 84.9 11.2 3.9 3.1 18.2 99.0 | -0.464 | exp/sgmm2_4/decode_dev/score_7/ctm_39phn.filt.sys
%WER 19.7 | 192 7215 | 83.2 12.2 4.6 2.9 19.7 99.5 | -0.327 | exp/sgmm2_4/decode_test/score_8/ctm_39phn.filt.sys
============================================================================
                    MMI + SGMM2 Training & Decoding
============================================================================
%WER 18.4 | 400 15057 | 85.0 11.4 3.6 3.3 18.4 98.8 | -0.428 | exp/sgmm2_4_mmi_b0.1/decode_dev_it1/score_7/ctm_39phn.filt.sys
%WER 18.4 | 400 15057 | 84.7 11.5 3.8 3.1 18.4 98.8 | -0.324 | exp/sgmm2_4_mmi_b0.1/decode_dev_it2/score_8/ctm_39phn.filt.sys
%WER 18.5 | 400 15057 | 84.7 11.6 3.7 3.2 18.5 99.0 | -0.332 | exp/sgmm2_4_mmi_b0.1/decode_dev_it3/score_8/ctm_39phn.filt.sys
%WER 18.6 | 400 15057 | 84.6 11.6 3.8 3.2 18.6 99.0 | -0.341 | exp/sgmm2_4_mmi_b0.1/decode_dev_it4/score_8/ctm_39phn.filt.sys
%WER 19.8 | 192 7215 | 83.5 12.4 4.1 3.3 19.8 99.5 | -0.342 | exp/sgmm2_4_mmi_b0.1/decode_test_it1/score_8/ctm_39phn.filt.sys
%WER 19.8 | 192 7215 | 83.4 12.4 4.1 3.3 19.8 99.5 | -0.365 | exp/sgmm2_4_mmi_b0.1/decode_test_it2/score_8/ctm_39phn.filt.sys
%WER 19.9 | 192 7215 | 83.9 12.4 3.7 3.9 19.9 99.5 | -0.525 | exp/sgmm2_4_mmi_b0.1/decode_test_it3/score_7/ctm_39phn.filt.sys
%WER 19.8 | 192 7215 | 83.9 12.4 3.7 3.7 19.8 100.0 | -0.549 | exp/sgmm2_4_mmi_b0.1/decode_test_it4/score_7/ctm_39phn.filt.sys
============================================================================
                    System Combination (DNN+SGMM)
============================================================================
%WER 16.8 | 400 15057 | 86.2 10.9 2.9 3.0 16.8 99.0 | -0.430 | exp/combine_2/decode_dev_it1/score_5/ctm_39phn.filt.sys
%WER 16.9 | 400 15057 | 86.4 10.9 2.7 3.3 16.9 99.0 | -0.632 | exp/combine_2/decode_dev_it2/score_4/ctm_39phn.filt.sys
%WER 16.9 | 400 15057 | 86.1 11.0 2.9 3.0 16.9 99.0 | -0.425 | exp/combine_2/decode_dev_it3/score_5/ctm_39phn.filt.sys
%WER 16.9 | 400 15057 | 85.6 11.1 3.3 2.5 16.9 99.3 | -0.152 | exp/combine_2/decode_dev_it4/score_7/ctm_39phn.filt.sys
%WER 18.1 | 192 7215 | 84.8 12.0 3.2 2.9 18.1 99.5 | -0.271 | exp/combine_2/decode_test_it1/score_6/ctm_39phn.filt.sys
%WER 18.1 | 192 7215 | 84.8 11.9 3.2 2.9 18.1 99.5 | -0.280 | exp/combine_2/decode_test_it2/score_6/ctm_39phn.filt.sys
%WER 18.2 | 192 7215 | 84.4 12.1 3.4 2.6 18.2 99.5 | -0.163 | exp/combine_2/decode_test_it3/score_7/ctm_39phn.filt.sys
%WER 18.1 | 192 7215 | 84.5 12.1 3.4 2.6 18.1 99.5 | -0.169 | exp/combine_2/decode_test_it4/score_7/ctm_39phn.filt.sys
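(For reference: everything in this listing comes from the second one-liner. The first loop greps wer_* files and prints nothing here because the TIMIT recipe scores with sclite, so the per-system results live in score_*/*.sys instead. To check a single system by hand, the same helper can be used directly — a small sketch:)

grep Sum exp/tri3/decode_test/score_*/*.sys | utils/best_wer.sh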

Reposted from blog.csdn.net/sun___shy/article/details/82430604