Visual sensors can retrieve still images or video streams in a wide range of monitoring applications, providing valuable information about the monitored environment. Unlike scalar sensors, camera-enabled sensors are directional: the area they view depends on their orientation. Since the positions and orientations of the embedded cameras may not be optimal after deployment, the area viewed by the sensors can be adjusted by optimization algorithms. Although coverage optimization has been addressed in previous work, coverage maximization remains challenging when the number of camera views over a set of targets must be maximized, both for higher availability and for an increased number of viewed perspectives of each target. This paper proposes centralized algorithms that compute orientations of rotatable cameras to maximize such coverage in wireless visual sensor networks.
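To make the problem concrete, the following is a minimal sketch (not the paper's actual algorithms) of orientation selection for rotatable cameras: each camera is modeled with an assumed circular-sector field of view, and a greedy routine picks, from a discrete set of candidate pan angles, the orientation that covers the most targets. The sector parameters (`fov_deg`, `radius`) and the candidate angle step are illustrative assumptions.

```python
import math

def covers(cam, target, fov_deg=60.0, radius=30.0):
    """Return True if a camera (x, y, pan-angle in degrees) sees a target
    point, under an assumed sector-shaped sensing model with the given
    field of view and sensing radius."""
    cx, cy, pan = cam
    dx, dy = target[0] - cx, target[1] - cy
    if math.hypot(dx, dy) > radius:
        return False  # target beyond sensing range
    angle = math.degrees(math.atan2(dy, dx))
    # Smallest signed angular difference between target bearing and pan.
    diff = (angle - pan + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0

def greedy_orientations(positions, targets, candidate_pans=range(0, 360, 30)):
    """For each fixed camera position, choose the candidate pan angle that
    covers the most targets. Because the total number of (camera, target)
    views decomposes per camera, this per-camera choice maximizes the total
    view count over the discrete candidate set."""
    chosen = []
    for (x, y) in positions:
        best = max(candidate_pans,
                   key=lambda pan: sum(covers((x, y, pan), t) for t in targets))
        chosen.append((x, y, best))
    return chosen
```

For example, a camera at the origin with targets at (10, 0) and (10, 5) would be rotated to pan angle 0, since that orientation's 60-degree sector contains both targets. Note that this sketch optimizes the total number of views; balancing views across targets (so every target gains multiple perspectives) would require a joint objective rather than this per-camera decomposition.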