Abstract
A failure in an overloaded system is a complex engineering problem. Arrays of pillars belong to a group of modern nanodevices composed of a large number of identical parts that function as a unit: the pillars are fixed on a flat support, and interactions among them emerge from the support's rigidity. When a growing load is applied to the pillars, it induces a sequence of failures among them, degrades the device's performance, and eventually triggers a catastrophic avalanche of failures. A key aspect of how such critical destruction evolves is the so-called load transfer rule from destroyed pillars to the intact ones, where a particular transfer rule reflects the distance over which the pillars interact effectively. A common approach to studying the crushing of nanopillar arrays is to employ an appropriate load transfer rule within computer simulations. In our simulations we use different load transfer rules and analyze the amount of allocated memory (m) and the CPU time (t) consumed in handling these rules for arrays with an increasing number of pillars (N). Specifically, we discuss the distributions of m and t for the employed rules, as well as the corresponding mean values: m ∼ N and t ∼ N^(b + c ln N).
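For illustration only, the sketch below shows how a load transfer rule might be handled in such a quasi-static loading simulation. It is a minimal sketch, not the authors' implementation: it assumes a one-dimensional array of pillars, uniformly random strength thresholds, and two generic rules (equal redistribution to all intact pillars, or redistribution to the nearest intact neighbours); the function name, geometry, and parameters are assumptions made for this example.

```python
import numpy as np

def simulate_crushing(N, rule="global", rng=None):
    """Quasi-statically load an array of N pillars until all of them fail.

    rule -- "global": the load of a broken pillar is shared equally by all
            intact pillars; "local": it is shared by the nearest intact
            neighbours in a 1D chain. Returns the number of external load
            increments needed to crush the whole array.
    """
    rng = rng or np.random.default_rng()
    thresholds = rng.random(N)          # random pillar strength thresholds
    load = np.zeros(N)                  # load currently carried by each pillar
    intact = np.ones(N, dtype=bool)
    steps = 0

    while intact.any():
        # Raise the load just enough to break the weakest intact pillar.
        load[intact] += (thresholds - load)[intact].min()
        steps += 1

        # Redistribute load from broken pillars until no further failures occur.
        while True:
            broken_now = intact & (load >= thresholds - 1e-12)
            if not broken_now.any():
                break
            intact[broken_now] = False
            for i in np.flatnonzero(broken_now):
                if rule == "global":
                    receivers = np.flatnonzero(intact)
                else:  # "local": nearest intact neighbour on each side
                    left = next((j for j in range(i - 1, -1, -1) if intact[j]), None)
                    right = next((j for j in range(i + 1, N) if intact[j]), None)
                    receivers = np.array([j for j in (left, right) if j is not None])
                if receivers.size:                  # otherwise the load is lost
                    load[receivers] += load[i] / receivers.size
                load[i] = 0.0
    return steps

print(simulate_crushing(1000, rule="local"))
```

Measuring the memory allocated and the CPU time spent by such runs for growing N is the kind of experiment to which the reported scalings m ∼ N and t ∼ N^(b + c ln N) refer.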