Abstract

Standard cell delays are characterized using delay and transition-time lookup tables together with voltage waveforms. More accurate models explode cell library size and degrade design flow performance. Our proposed deep-learning non-linear delay model (DL-NLDM) outperformed the 7×7 NLDM LUT in average percentage error, achieving at most 1.4% error relative to SPICE, and outperformed a non-standard 100×100 NLDM LUT in maximum percentage error. The proposed DL autoencoder-based waveform compression outperformed singular value decomposition by 1.79×. Additionally, a novel DL waveform-delay model (DL-WFDM) models cell delays using encoded waveforms instead of delay and transition times; DL-WFDM outperformed DL-NLDM in maximum delay percentage error.
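The abstract does not detail the network architecture, but a DL-NLDM-style model can be pictured as a small regression network trained on the same (input slew, output load, delay) samples that would otherwise populate an NLDM lookup table, then queried at arbitrary off-grid points instead of interpolating. The sketch below is a hypothetical PyTorch illustration under that assumption; the layer sizes, training loop, and placeholder data are not the paper's actual setup.

```python
# Hypothetical sketch of a DL-NLDM-style delay model: a small MLP fit to
# (slew, load) -> delay samples in place of a 7x7 NLDM lookup table.
# All sizes and data below are illustrative placeholders, not the paper's.
import torch
import torch.nn as nn

# Placeholder characterization data: 49 (slew, load) points, as in a 7x7 LUT,
# with delays that would come from SPICE characterization in practice.
slew_load = torch.rand(49, 2)   # normalized input slew, output load
delay = torch.rand(49, 1)       # placeholder SPICE-characterized delays

model = nn.Sequential(
    nn.Linear(2, 16), nn.Tanh(),
    nn.Linear(16, 16), nn.Tanh(),
    nn.Linear(16, 1),           # predicted delay
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fit the network to the LUT-style samples.
for _ in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(slew_load), delay)
    loss.backward()
    optimizer.step()

# Query the trained model at an off-grid point instead of bilinear
# interpolation over the table.
print(model(torch.tensor([[0.3, 0.7]])).item())
```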
