Applicative languages have been proposed for defining algorithms for parallel architectures because they are implicitly parallel and free of side effects. However, straightforward implementations of applicative-language compilers may introduce large amounts of copying to preserve program semantics. This unnecessary copying of data can increase both the execution time and the memory requirements of an application. To eliminate it, the Sisal compiler uses build-in-place and update-in-place analyses, which remove unnecessary array copy operations at compile time. Both analyses assume hierarchical ragged arrays, i.e., the vector-of-vectors array model. Although this model is convenient for certain applications, it precludes many optimizations, such as vectorization. To compensate for this deficiency, newer languages, such as Sisal 2.0, provide extended array models that support high-level array operations while admitting efficient implementations. In this article, we introduce a new method for update-in-place analysis that applies to arrays held in either hierarchical or contiguous storage. Consequently, the array model appropriate for an application can be selected without loss of performance. Moreover, our analysis is better suited to distributed-memory architectures and large software systems.
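
As a minimal illustration of the copy problem the abstract refers to, the C sketch below (hypothetical, and not drawn from the Sisal compiler's generated code) contrasts the naive applicative semantics of replacing one array element, which must yield a fresh array and leave the original intact, with the in-place form a compiler may substitute once update-in-place analysis proves that no other reference to the operand remains live.

```c
#include <stdlib.h>
#include <string.h>

/* Naive applicative semantics: "replace element i of a with v" returns a
 * new array and leaves a unchanged, so every update pays an O(n) copy. */
double *replace_copy(const double *a, size_t n, size_t i, double v)
{
    double *b = malloc(n * sizeof *b);
    if (b == NULL)
        return NULL;
    memcpy(b, a, n * sizeof *b);   /* full copy to preserve a */
    b[i] = v;
    return b;
}

/* If compile-time analysis shows that a has no other readers after the
 * update, the compiler may emit this in-place form instead: O(1), no copy. */
double *replace_in_place(double *a, size_t i, double v)
{
    a[i] = v;
    return a;
}
```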