
Please use this identifier to cite or link to this item: https://dyhuir.dyhu.edu.tw/ir/handle/987654321/823

Title: Embedded Speech System Based on Neuro-Fuzzy Theory
Authors: 許志旭
Keywords: speech recognition; feature extraction; wavelet transform; neuro-fuzzy; embedded system
Date: 2009
Issue Date: 2012-01-02T04:13:56Z
Abstract: The objective of this project is to build an effective speech-recognition middleware for embedded systems. A set of features is extracted from each speech frame by wavelet-transform decomposition, and a hyper-rectangular neuro-fuzzy system (HRNFS) serves as the classifier. The HRNFS forms complex decision boundaries; its weights are tuned by error propagation, and the network is trained with the supervised decision-directed learning (SDDL) algorithm, which extracts classification rules in if-then form. To reduce confusion among speech classes, fuzzy adaptive resonance theory (FART) is applied to cluster the wavelet-domain features of each frame. Experiments use the Texas Instruments / Massachusetts Institute of Technology (TIMIT) acoustic-phonetic corpus of read speech.
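The pipeline above (wavelet features fed to a hyper-rectangular if-then rule) can be sketched minimally as follows. Two hedges: the project does not name the wavelet family, so a single-level Haar transform is assumed here, and a real HRNFS learns many rules with soft boundaries via SDDL, whereas this sketch shows one crisp rule.

```python
def haar_features(frame):
    """One level of the Haar discrete wavelet transform.

    Returns the approximation (low-pass) coefficients of the frame,
    used here as a compact feature vector for the classifier.
    """
    s = 2 ** 0.5
    return [(frame[i] + frame[i + 1]) / s for i in range(0, len(frame) - 1, 2)]

def hyper_rect_rule(features, lows, highs):
    """If every feature lies inside its [low, high] interval, the rule fires.

    This is the if-then form of a hyper-rectangular rule with crisp (0/1)
    membership; a trained HRNFS would soften these boundaries.
    """
    return all(lo <= f <= hi for f, lo, hi in zip(features, lows, highs))

# Toy frame of four samples -> two approximation coefficients.
feats = haar_features([0.2, 0.4, 0.1, 0.3])
fires = hyper_rect_rule(feats, lows=[0.0, 0.0], highs=[1.0, 1.0])
```

In a full system, SDDL would adjust each rule's `lows`/`highs` (and associated weights) from labeled frames, and FART clustering would group similar feature vectors before rule extraction.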
Appears in Collections: [Department of Information and Multimedia Applications] On-campus research projects

Files in This Item:

index.html (0Kb, HTML)



 

