Recently, Hopfield and Krotov introduced the concept of dense associative memories (DAM), close to spin glasses with $P$-wise interactions in the jargon of disordered statistical mechanics: they proved a number of remarkable features these networks share and suggested their use to (partially) explain the success of the new generation of artificial intelligence. Thanks to a remarkable ante litteram analysis by Baldi & Venkatesh, among these properties, it is known that these networks can handle a maximal amount of stored patterns $K$ scaling as $K \sim N^{P-1}$. In this paper, once we introduce a minimal dense associative network as one of the most elementary cost functions falling in this class of DAM, we sacrifice this high-load regime, namely we force the storage of solely a linear amount of patterns, i.e. $K = \alpha N$ (with $\alpha > 0$), to prove that, in this regime, these networks can correctly perform pattern recognition even if the pattern signal is $O(1)$ and is embedded in a sea of noise $O(\sqrt{N})$, also in the large-$N$ limit. To prove this statement, by extremizing the quenched free energy of the model over its natural order parameters (the various magnetizations and overlaps), we derive its phase diagram at the replica-symmetric level of description and in the thermodynamic limit: as a sideline, we stress that, to achieve this task and aiming at cross-fertilization among disciplines, we pave two hegemonic routes in the statistical mechanics of spin glasses, namely the replica trick and the interpolation technique. Both approaches reach the same conclusion: there is a non-empty region of the noise $T$ versus load $\alpha$ phase diagram where these networks can actually work in this challenging regime; in particular, we obtain a quite high critical (linear) load $\alpha_c$ in the (fast-)noiseless case $T \to 0$.
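As a minimal numerical sketch of the retrieval mechanism described above (not part of the paper's analysis), the following illustrates pattern recognition in a degree-$P$ dense associative network: $K$ random binary patterns are stored, the state is initialized as a corrupted copy of one pattern, and zero-temperature asynchronous dynamics driven by the local field of the $P$-spin cost function recovers it. All sizes, the corruption level, and the choice $P = 4$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, P = 200, 10, 4  # neurons, stored patterns, interaction degree (illustrative)

xi = rng.choice([-1, 1], size=(K, N))   # K random binary patterns
sigma = xi[0].copy()                    # start from pattern 0 ...
flip = rng.random(N) < 0.25             # ... corrupted on ~25% of its sites
sigma[flip] *= -1

def mattis(s):
    """Mattis magnetizations m_mu = (1/N) xi^mu . s."""
    return xi @ s / N

# Zero-temperature asynchronous dynamics: for the degree-P cost function
# E = -(N/P) * sum_mu m_mu^P, the local field on spin i reads
# h_i = sum_mu xi_i^mu * m_mu^(P-1), and each spin aligns with its field.
for _ in range(5):
    for i in rng.permutation(N):
        h = xi[:, i] @ mattis(sigma) ** (P - 1)
        sigma[i] = 1 if h >= 0 else -1

print(f"retrieval overlap with pattern 0: {mattis(sigma)[0]:.2f}")
```

With these (low-load, strong-signal) settings the dynamics drives the overlap with the corrupted pattern close to one, while the overlaps with the other stored patterns stay small; this is only a toy regime, far from the challenging $O(1)$-signal setting analyzed in the paper.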