Guest Editorial

As our industry pushes the frontiers of hydrocarbon recovery, complexity and risk management are increasingly pervasive dimensions of the landscape. High-performance computing (HPC) is providing new ways to address complexity and risk by opening more workflows to the "real time" world of operations.

Supercomputers emerged in the 1960s, and Cray machines, named for designer Seymour Cray, became synonymous with this new breed of computer. Supercomputer performance is measured in floating-point operations per second (FLOPS), the number of floating-point calculations a machine performs each second. The figure is generally quoted with an International System of Units (SI) prefix such as mega-, giga-, tera-, or peta- to describe the machine's power or class. Today's supercomputers are "petascale" machines, capable of performing one quadrillion (10^15) floating-point operations per second. However, they are the size of several rooms and require megawatts of power to operate, making them impractical and costly for all but the most esoteric and specialized problems.

Computer clusters have become a more common way to build this processing power. By combining high-speed local networks with associated software, or "middleware," massively parallel systems can be built that rival supercomputer power. Cloud computing is the latest extension of this concept, in which computing resources, processing, and memory can be assembled temporarily into the required computing framework. Add in new job-scheduling software, advanced virtualization techniques, and dynamic architecture configurations, and you have the rapidly expanding discipline of high-performance computing. Our industry is turning to high-performance and cloud computing as a cost-effective way to build on-demand supercomputer capability for our most complex or intensive problems.

At Baker Hughes, a three-tier approach has been designed to unlock this processing power for our research and development groups.

The Dedicated Tier is an array of high-performance central processing units and high-memory servers joined into clusters with fast interconnects to provide massive total compute power. This configuration is used mainly for applications that require dedicated, uninterrupted access to hardware and that benefit from parallelization in a traditional high-performance computing environment. The available performance capacity is often in the tens of teraFLOPS.

The Opportunistic Tier is essentially "cycle scavenging," the use of idle assets such as desktop computers and unused capacity on servers. Using Wake-on-LAN (WOL) technology, these computers can be powered on or off as needed. This approach is suitable for short-run applications with small footprints. It is useful where synchronization among parallel jobs can be loosely coupled, and when a user is far from traditional HPC resources but close to underused, powerful desktops and servers. The available power scales linearly with the number of personal computers and nondedicated servers; it is not unusual to see capabilities in the hundreds of teraFLOPS or more at a Fortune 250 company.
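To make the cycle-scavenging mechanics concrete, the following is a minimal sketch of the standard Wake-on-LAN magic packet (6 bytes of 0xFF followed by the target MAC address repeated 16 times, broadcast over UDP). The MAC and broadcast addresses are hypothetical placeholders; the editorial does not say which WOL tooling is actually used.

import socket

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Build and broadcast a standard WOL magic packet:
    6 bytes of 0xFF followed by the target MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError(f"Expected a 6-byte MAC address, got {mac!r}")
    packet = b"\xff" * 6 + mac_bytes * 16

    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    # Hypothetical MAC address of an idle desktop to be recruited for overnight work.
    send_magic_packet("00:1a:2b:3c:4d:5e")

The target machine must have WOL enabled in its firmware and sit on a subnet reachable by the broadcast for this to work.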
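As a rough illustration of the linear scaling noted above, the short calculation below uses assumed node counts and per-node throughput figures chosen only for illustration; none of these numbers come from the editorial or represent Baker Hughes data.

# Back-of-envelope estimate of opportunistic-tier capacity (all figures assumed).
desktops = 10_000          # idle desktops at a large enterprise (assumed)
gflops_per_desktop = 50    # sustained gigaFLOPS per desktop (assumed)
servers = 500              # underused servers (assumed)
gflops_per_server = 200    # sustained gigaFLOPS per server (assumed)

total_gflops = desktops * gflops_per_desktop + servers * gflops_per_server
print(f"Aggregate capacity: {total_gflops / 1_000:.0f} teraFLOPS")
# Prints "Aggregate capacity: 600 teraFLOPS"; capacity grows linearly with
# the number of machines recruited, consistent with the hundreds of teraFLOPS
# cited for a Fortune 250 company.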