We introduce a new and efficient sorting technique, referred to as BT-Sort after the Bayer balanced-tree (B-tree) method on which it is based. It is fast and parsimonious in its use of memory and scratch space, which makes it very attractive for sorting very large datasets. One property of BT-Sort is that it minimizes the random I/O needed to access the scratch disk, regardless of the size of the dataset being sorted. Furthermore, it uses sequential I/O to and from the scratch disks very efficiently and does not require large memory. With these features, sorting 1.2 TB of input shot gathers into ordered common-midpoint gathers with BT-Sort would require only three days on four computing nodes (each with, say, 32 GB of memory and a 375 MHz RISC architecture), whereas the same exercise would typically take six to eight weeks using conventional sorting techniques.

From a seismic operational point of view, a rapid increase in the number of recorded seismic traces may require changes to some of the algorithms used in processing. Taking Saudi Aramco as an example, seismic-data acquisition there currently stands at 10 seismic crews, eight of which are 3D. The average count is about 2500 channels per crew. With an average daily production rate of some 2000 vibration points, the corresponding volume of seismic data flowing into processing is about 40 million traces per day. Consequently, some of the processing algorithms are constantly being updated to handle these massively increasing seismic volumes.

Frequent sorting of seismic data into various domains (e.g. common-midpoint, common-receiver) has traditionally been avoided, mainly because of the lack of efficient sorting algorithms. In the extreme case, sorting very large datasets with only limited computer resources (a few GB of memory) is hugely inefficient, as the operation could take months to complete. As a compromise, geophysicists have at times reformulated existing processing algorithms so that the seismic data could be kept in a preferred order throughout the processing sequence. One example is the reformulation of the dip-moveout operator by Biondi and Ronen (1987) so that it can be applied directly to recorded shot profiles rather than common-offset gathers when the data cannot be sorted to common offset (for cost or other practical reasons), thereby avoiding the cost-prohibitive sorting step.

A more serious shortcoming is the inability to apply certain domain-specific processing algorithms, such as noise suppression and Radon multiple removal, which must be implemented in the common-receiver and common-midpoint (CMP) domains, respectively. In the case of applying the parabolic Radon transform (PRT) to normal-moveout-corrected CMP gathers, for example, geophysicists have attempted to speed up the process by optimizing the PRT operator (Kelamis and Chiburis 1992; Beylkin and Vassiliou 1998); this multiple-elimination process as a whole would benefit even further if the data could be sorted from shot to CMP gathers much faster, as proposed here. Unfortunately, without a fast sort, the seismic data may be left with unnecessary noise contamination whenever the expensive (slow) sort is avoided (Al Dossary et al. 1998).

With conventional (quicksort or heapsort) algorithms, the cost can in theory be estimated as N log₂ N operations (Press et al. 1988), where N is the number of traces.
On this basis, 500 million traces, for example, would require about 15 billion operations to sort. In practice, however, the difficulty is not just the formidable number of operations: the entire dataset must physically reside in memory (internal sorting) on a stand-alone system, a requirement most computer-hardware configurations cannot meet.
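To make the contrast with internal sorting concrete, the sketch below shows a generic external merge sort in Python: sorted runs are written to scratch files using sequential I/O only and then merged under a fixed memory budget. This is only an illustration of the general out-of-core approach, under simplifying assumptions (integer keys, a tiny run size, plain-text scratch files); it is not the BT-Sort algorithm described in this paper.

```python
# Generic external merge sort sketch (an illustration of out-of-core sorting in
# general, NOT the authors' BT-Sort): sorted runs are written to scratch files
# with sequential I/O only, then merged with bounded memory. The run size,
# temporary-file handling and integer keys are simplifying assumptions.
import heapq
import os
import tempfile


def _write_run(sorted_keys):
    """Write one sorted run to a scratch file sequentially; return its path."""
    fd, path = tempfile.mkstemp(text=True)
    with os.fdopen(fd, "w") as f:
        f.writelines(f"{k}\n" for k in sorted_keys)
    return path


def external_sort(keys, run_size=4):
    """Yield `keys` in sorted order while holding at most `run_size` in memory."""
    run_paths, buffer = [], []
    for key in keys:
        buffer.append(key)
        if len(buffer) == run_size:                       # memory budget reached:
            run_paths.append(_write_run(sorted(buffer)))  # flush a sorted run
            buffer = []
    if buffer:
        run_paths.append(_write_run(sorted(buffer)))

    # k-way merge: every run is read front to back, i.e. sequentially.
    run_files = [open(p) for p in run_paths]
    try:
        yield from heapq.merge(*((int(line) for line in f) for f in run_files))
    finally:
        for f in run_files:
            f.close()
        for p in run_paths:
            os.remove(p)


# Example: sort a small stream of trace keys with only four keys in memory.
print(list(external_sort([7, 3, 9, 1, 8, 2, 6, 5, 4])))
```

A production sort would of course stream binary trace records and size the runs from the available memory, but the I/O pattern illustrated here, long sequential writes and reads of scratch runs rather than random accesses, is the property the text emphasizes.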