Software decoding of Dolby Digital (an adaptive transform-based coder using a frequency-linear, critically sampled filterbank) allows Dolby Digital to become a baseline capability on the personal computer (PC), with greater flexibility than a hardware approach. Intel's MMX technology provides instructions that can significantly speed up execution of the Dolby Digital decoder, freeing the processor to perform other tasks such as video decoding and/or audio enhancement.

MMX instructions operate on 8-, 16-, and 32-bit data; the smaller the data size, the more operations each instruction performs. However, using 16-bit accuracy uniformly throughout a Dolby Digital decoder is insufficient to pass the test suite. The challenge was therefore to obtain both good execution speed and good audio quality. Although 32-bit floating-point arithmetic could be used throughout the data path, with MMX technology reserved only for bit manipulation, this would not be the most processor-efficient approach. Instead, we used 16-bit SIMD (single instruction, multiple data) operations throughout much of the decoder and 8- and 32-bit SIMD operations in certain sections. While we discuss a particular use of MMX technology, the MMX instruction set is general purpose in nature.

We provide a description of MMX technology, then describe the major functional blocks of a Dolby Digital decoder and the special techniques used to take advantage of MMX technology. We also describe the precision enhancements implemented to maintain accuracy and the other performance enhancements that were made. We conclude with results in terms of efficient processor utilization, numerical accuracy, and audio quality.