Abstract

Bandwidth of wireless multichannel neural recording systems is one of the most significant limitations on increasing the number of monitored channels. Data compression is an effective way to process multichannel recordings within this bandwidth budget. This paper explores discrete wavelet transform (DWT) processor architectures suited to compressing electroneurograms (ENGs) and thus increasing the number of channels. Low power consumption, low silicon area, and the specific requirements of multichannel neural recording systems are considered in this investigation. Six architectures were implemented and compared, all performing a 3-level Daubechies-4 wavelet decomposition. This comparative study leads to the conclusion that an excellent trade-off between power consumption and silicon area is obtained with a DWT polyphase structure using a careful balance of parallelism and folding. It also shows that multiplexing several channels onto a shared DWT processor yields the best savings in both power and area.
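
As a point of reference for the decomposition the architectures implement, the sketch below shows a floating-point 3-level Daubechies-4 analysis in Python. It assumes the 4-tap D4 filter with periodic boundary extension and a 256-sample frame; the paper's fixed-point, polyphase and folded hardware structures are not reproduced here.

```python
import numpy as np

# 4-tap Daubechies (D4) filter coefficients; "Daubechies-4" is assumed
# here to mean this 4-coefficient wavelet.
S3 = np.sqrt(3.0)
H = np.array([1 + S3, 3 + S3, 3 - S3, 1 - S3]) / (4.0 * np.sqrt(2.0))  # low-pass
G = np.array([H[3], -H[2], H[1], -H[0]])                               # high-pass

def dwt_level(x):
    """One analysis level: circular filtering with H/G, then downsampling by 2."""
    n = len(x)
    approx = np.zeros(n // 2)
    detail = np.zeros(n // 2)
    for i in range(n // 2):
        for k in range(4):
            idx = (2 * i + k) % n          # periodic boundary extension
            approx[i] += H[k] * x[idx]
            detail[i] += G[k] * x[idx]
    return approx, detail

def dwt3(x):
    """3-level decomposition: returns [a3, d3, d2, d1]."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(3):
        a, d = dwt_level(a)
        details.append(d)
    return [a] + details[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.standard_normal(256)       # stand-in for one channel's sample frame
    a3, d3, d2, d1 = dwt3(frame)
    print(len(a3), len(d3), len(d2), len(d1))  # 32 32 64 128
```

In a multichannel setting, the same routine could simply be called on each channel's frame in turn, which is the software analogue of time-multiplexing several channels onto one shared DWT processor.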
