Abstract

We demonstrate that an optimization-based model predictive pulse pattern controller can be designed with a complexity, in terms of hardware resource usage on a field-programmable gate array (FPGA), that is comparable to that of a conventional controller. The keys to the superior performance and lower resource usage compared with existing solution methods for model predictive pulse pattern control are an appropriate problem reformulation and a newly derived result for the projection onto the truncated monotone cone that constitutes the feasible set in this application. Using a cold-started classic gradient method in fixed-point arithmetic, a numerically stable implementation is shown to require fewer than 300 clock cycles to meet the stringent accuracy specification for problems with at most three switching transitions per phase. For the case of four (five) transitions, only about 550 (690) cycles are required. At the same time, merely two digital signal processor (DSP)-type multipliers on an FPGA are used for all problem sizes. For the case of three transitions per phase, these results indicate a tenfold speed improvement and a seventeenfold reduction in DSP-type multipliers compared with an existing solution method. When there are four or five transitions per phase, the resource reduction is even greater.
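The paper's exact formulation and its newly derived projection result are not reproduced in this abstract, but the following sketch illustrates the general structure of a cold-started projected gradient method whose feasible set is a truncated monotone cone, i.e. switching instants that must be ordered and confined to an interval. The sketch assumes a quadratic objective with Hessian H and gradient offset g, and assumes that the projection decomposes into an isotonic regression (pool-adjacent-violators) followed by clipping to the interval; the names project_truncated_monotone and projected_gradient are illustrative and not taken from the paper, and the actual FPGA implementation operates in fixed-point arithmetic rather than the floating-point NumPy used here.

```python
import numpy as np

def isotonic_regression(y):
    """Pool-adjacent-violators: Euclidean projection onto {t : t1 <= t2 <= ... <= tn}."""
    n = len(y)
    vals, weights = [], []          # merged blocks: (average value, block size)
    for i in range(n):
        vals.append(float(y[i])); weights.append(1.0)
        # merge backwards while the monotonicity constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            merged = (vals[-2] * weights[-2] + vals[-1] * weights[-1]) / (weights[-2] + weights[-1])
            w = weights[-2] + weights[-1]
            vals.pop(); weights.pop()
            vals[-1], weights[-1] = merged, w
    out, pos = np.empty(n), 0
    for v, w in zip(vals, weights):
        cnt = int(w)
        out[pos:pos + cnt] = v
        pos += cnt
    return out

def project_truncated_monotone(y, lo, hi):
    """Projection onto {t : lo <= t1 <= ... <= tn <= hi}
    (assumed decomposition: isotonic regression, then clipping to [lo, hi])."""
    return np.clip(isotonic_regression(y), lo, hi)

def projected_gradient(H, g, lo, hi, iters=50):
    """Cold-started classic (projected) gradient method for
    min 0.5 t'Ht + g't  s.t.  lo <= t1 <= ... <= tn <= hi."""
    n = len(g)
    t = np.clip(np.zeros(n), lo, hi)        # cold start: no warm-started previous solution
    step = 1.0 / np.linalg.norm(H, 2)       # 1/L, with L the largest eigenvalue of H (H symmetric PSD)
    for _ in range(iters):
        t = project_truncated_monotone(t - step * (H @ t + g), lo, hi)
    return t
```

Because the projection reduces to a pool-adjacent-violators pass plus elementwise clipping, each gradient iteration needs only a small, fixed number of multiplications, which is consistent with the abstract's claim that the design fits in very few DSP-type multipliers; the specific reformulation and fixed-point analysis that achieve the reported cycle counts are given in the paper itself.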
