Nuclei segmentation plays a crucial role in disease understanding and diagnosis. In whole slide images, cell nuclei often appear overlapping and densely packed, with ambiguous boundaries arising from the underlying 3D structure of histopathology samples. Instance segmentation via deep neural networks with object clustering can detect individual instances in crowded nuclei, but it suffers from a limited field of view and does not support amodal segmentation. In this work, we introduce a dense feature pyramid network with a feature mixing module that enlarges the field of view of the segmentation model while preserving pixel-level detail. We further improve output quality with a multi-scale self-attention guided refinement module that sequentially adjusts predictions as resolution increases. Finally, we enable clusters to share pixels by separating the instance clustering objective from other pixel-level tasks, and we introduce supervision on occluded areas to guide learning. To evaluate amodal nuclear segmentation, we also adapt metrics from standard modal segmentation so that overlapping masks can be scored, and we mitigate over-penalization via a novel unique matching algorithm. Our experiments demonstrate consistent performance gains across multiple datasets, with significantly improved segmentation quality.
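
To make the evaluation idea concrete, below is a minimal sketch of unique one-to-one matching between possibly overlapping predicted and ground-truth masks, using IoU scores and Hungarian assignment. This is an illustrative approximation under stated assumptions, not the paper's exact algorithm: the function names, the `iou_threshold` parameter, and the use of `scipy.optimize.linear_sum_assignment` are all assumptions introduced here for clarity.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU between two boolean masks; masks are allowed to overlap."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union > 0 else 0.0


def unique_match(preds, gts, iou_threshold=0.5):
    """Match each predicted mask to at most one ground-truth mask (and
    vice versa), maximizing total IoU, so a single prediction is never
    penalized against several overlapping ground truths.

    `preds` and `gts` are lists of boolean arrays of equal shape.
    Returns the matched (pred_idx, gt_idx, iou) triples plus TP/FP/FN
    counts. Hypothetical helper, sketched for illustration only.
    """
    if not len(preds) or not len(gts):
        return [], 0, len(preds), len(gts)

    # Pairwise IoU matrix between predictions and ground truths.
    iou = np.array([[mask_iou(p, g) for g in gts] for p in preds])

    # Hungarian assignment on the negated matrix maximizes total IoU
    # while enforcing a strictly one-to-one (unique) matching.
    rows, cols = linear_sum_assignment(-iou)
    matches = [(r, c, iou[r, c]) for r, c in zip(rows, cols)
               if iou[r, c] >= iou_threshold]

    tp = len(matches)
    fp = len(preds) - tp
    fn = len(gts) - tp
    return matches, tp, fp, fn
```

Because the assignment is one-to-one, an amodal prediction that overlaps several ground-truth nuclei is scored against exactly one of them, which is the over-penalization issue the unique matching is meant to mitigate.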