Complex real-time video processing chains with strict throughput constraints are commonly found in healthcare applications. The video processing chain is implemented as Field-Programmable Gate Array (FPGA) accelerators (processing blocks) communicating through a number of First-In First-Out (FIFO) buffers. The FIFO buffers are implemented in Block RAM (BRAM), which is available only in limited quantity. Therefore, a key design question is how to size the FIFO buffers optimally with respect to the throughput constraint. In this paper, we use model-driven analysis and detailed hardware-level simulation to address buffer dimensioning in an efficient way. Using a Cyclo-Static Dataflow (CSDF) model and an optimization method, we identify the relevant FIFO buffers and optimize their sizes. The results are confirmed using detailed hardware-level simulation and validated by comparison with VHDL simulations. The technique is illustrated on a use case from Philips Healthcare Image Guided Therapy (IGT): the imaging pipeline of an Interventional X-Ray (iXR) system.