Abstract

The deployment of connectionist models on resource-constrained, low-power embedded systems raises specific implementation issues. This paper presents a design strategy, aimed at low-end reconfigurable devices, for implementing the prediction operation supported by a single hidden-layer feedforward neural network (SLFN). The paper first shows that considerable efficiency can be obtained when hard-limiter thresholding operators implement the neurons' activation functions. Secondly, the analysis highlights the advantages of random basis networks, thanks to their limited memory requirements. Finally, the paper presents two architectural approaches to the effective support of SLFNs on CPLDs and low-end FPGAs. The alternatives adopt different trade-offs between area utilization and latency. Experiments confirm the effectiveness of both schemes, yielding two viable implementation options that satisfy the respective constraints, namely effective area utilization or low latency.
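As a minimal illustrative sketch (not the paper's hardware design), the prediction step described above can be expressed as a forward pass through an SLFN whose hidden neurons apply a hard-limiter (step) activation and whose hidden weights are random and fixed, as in random basis networks; all names and values below are hypothetical:

```python
def hard_limiter(v):
    """Step activation: 1 if the pre-activation is non-negative, else 0."""
    return 1 if v >= 0 else 0

def slfn_predict(x, hidden_weights, hidden_biases, output_weights):
    """SLFN prediction: fixed random hidden layer + linear output layer."""
    # Hidden layer: each neuron thresholds a weighted sum of the inputs,
    # so only comparisons and additions are needed in hardware.
    h = [hard_limiter(sum(wi * xi for wi, xi in zip(w, x)) + b)
         for w, b in zip(hidden_weights, hidden_biases)]
    # Output: weighted sum of the binary hidden activations; in a
    # random-basis network only these output weights are trained.
    return sum(bo * hi for bo, hi in zip(output_weights, h))

# Toy example: 2 inputs, 3 hidden neurons (weights chosen arbitrarily)
W = [[0.5, -0.2], [-1.0, 0.3], [0.1, 0.9]]   # random, fixed hidden weights
b = [0.0, 0.2, -0.5]                          # hidden biases
beta = [1.0, -2.0, 0.5]                       # trained output weights
y = slfn_predict([1.0, 1.0], W, b, beta)      # -> 1.5
```

Because the hidden activations are binary, the output layer reduces to a conditional accumulation of the output weights, which is what makes this formulation attractive on CPLDs and low-end FPGAs.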

