“Data mining” for “knowledge discovery in databases”, together with the associated computational operations first introduced in the mid-1990s, can no longer cope with the analytical challenges posed by so-called “big data”. The term big data refers to large volumes of diverse, dynamic, complex, longitudinal and/or distributed data generated from instruments, sensors, Internet transactions, email, video, click streams and other noisy, structured and/or unstructured digital sources available today and in the future, at speeds and on scales never seen before in human history. Big data, commonly characterised by three Vs, namely volume, variety and velocity (with a fourth V for “veracity” and, more recently, a fifth for “value”), requires a set of new technologies: high performance (i.e., exascale) computing, distributed or grid architectures, algorithms (for data clustering and for generating association rules, as sketched below), programming languages, and automated, scalable software tools to uncover hidden patterns, unknown correlations and other useful information, lately referred to as “actionable knowledge” or “data products”, in massive volumes of complex raw data. Against this background, the paper introduces the synergistic challenges of “data-intensive” science and “exascale” computing for resolving “big data analytics” and “data science” issues in four main disciplines, namely computer science, computational science, statistics and mathematics. The basic problems that must be addressed in each of these disciplines to realise the identified foundational aspects of an effective cyberinfrastructure are outlined. Finally, the paper examines five scientific research projects that urgently need high performance computing; this contrasts with earlier periods, in which private business enterprises were the drivers of newer and faster technologies.
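
To make one of the algorithm classes named above concrete, the following is a minimal sketch of data clustering with Lloyd's k-means (the association-rule half is omitted), assuming NumPy and synthetic two-dimensional data; the function and its parameters are illustrative only and are not drawn from any system discussed in the paper.

    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        # Lloyd's k-means: the classic clustering algorithm that big data
        # platforms must scale to massive, distributed inputs.
        rng = np.random.default_rng(seed)
        # Initialise centroids by sampling k distinct data points.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            # Assign each point to its nearest centroid.
            dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Recompute each centroid as the mean of its cluster;
            # keep the old centroid if a cluster empties out.
            new_centroids = centroids.copy()
            for j in range(k):
                members = X[labels == j]
                if len(members):
                    new_centroids[j] = members.mean(axis=0)
            if np.allclose(new_centroids, centroids):
                break  # converged
            centroids = new_centroids
        return labels, centroids

    # Toy usage: two well-separated Gaussian blobs.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    labels, centroids = kmeans(X, k=2)
    print(centroids)  # roughly (0, 0) and (5, 5)

At big data scale the same assign-and-update structure survives, but the assignment step is typically parallelised across distributed partitions of X, which is precisely the kind of scaling concern the paper attributes to exascale architectures.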