The major challenge in distant-talking speech recognition is the corruption of the speech signal by both interfering sounds and reverberation. While a range of successful techniques for combating additive noise and short-term convolutive distortions has been developed since the beginnings of speech recognition research, compensating for the long-term distortion caused by reverberation received little attention until recently. This thesis further develops an uncertainty decoding approach, named REverberation MOdeling for Speech recognition (REMOS), which adapts the acoustic model of a conventional hidden Markov model (HMM)-based recognizer to reverberant environments. By incorporating a convolutive observation model, the Viterbi decoder is extended so that it implicitly provides a state-wise estimate of the late reverberation, thereby relaxing the HMMs' conditional independence assumption. The experimental evaluation confirms that REMOS achieves strong recognition performance under noisy and reverberant conditions and, moreover, enables rapid adaptation to changing acoustic conditions.
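The convolutive observation model mentioned above can be illustrated schematically as follows. This is a minimal sketch, not the thesis's exact formulation: the symbols $\mathbf{y}_t$, $\mathbf{x}_t$, $\mathbf{h}_l$, and $\mathbf{n}_t$ are assumed for illustration.

```latex
% Sketch of a feature-domain convolutive observation model:
% the observed reverberant feature vector at frame t depends on
% the current and L-1 preceding clean-speech feature vectors.
\mathbf{y}_t \approx \sum_{l=0}^{L-1} \mathbf{h}_l \odot \mathbf{x}_{t-l}
               \;+\; \mathbf{n}_t
% y_t : observed (reverberant) feature vector at frame t
% x_t : clean-speech feature vector at frame t
% h_l : frame-wise reverberation weights (room impulse response model)
% n_t : additive noise term
% \odot : element-wise product
```

Because $\mathbf{y}_t$ depends on past clean-speech frames $\mathbf{x}_{t-l}$, the observations are no longer conditionally independent given the current HMM state alone, which is why the Viterbi decoder must be extended to carry a state-wise reverberation estimate.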