With the continuous development of exploration technology, the volume of seismic data keeps growing, and seismic data access efficiency has become the main bottleneck in seismic data processing. The cluster architecture separates computing from storage: data is stored on a disk array and processed on the computing nodes. The maximum network speed between each computing node and the management node is 1 GB/s, and the total maximum network speed across all computing nodes is 5 GB/s (peak). For the disk array, the maximum read speed (more than 1 GB/s) exceeds the network bandwidth and can be ignored as a limiting factor, while the maximum write speed is 350 MB/s. During processing, data must flow between the disk array and the computing nodes, so its transfer rate is constrained by the network bandwidth. The theoretical maximum I/O speed of the system is therefore 1 GB/s for reading and 350 MB/s for writing. To build a multi-format seismic data model, note that before optimization the system cannot load the original seismic data formats directly, so the original seismic data (such as SEGY and SEGD) must be converted into the system's internal ATT format. In this process, the entire original file is read in a loop, a new file is written in a loop, and the new file is then parsed so that its header information can be saved into a table in the database. Export is the reverse process, converting the ATT format back to SEGY or SEGD.
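The import path described above (read the original file in a loop, write the internal file in a loop, then parse trace headers into a database table) can be sketched roughly as follows. This is a minimal illustration only: the ATT record layout, the trace_header table schema, and the fixed-size SEGY reading (no variable trace lengths or IBM-float handling) are simplifying assumptions, not the system's actual implementation.

```python
import struct
import sqlite3

SEGY_TEXT_HEADER = 3200   # textual file header (bytes)
SEGY_BIN_HEADER = 400     # binary file header (bytes)
TRACE_HEADER = 240        # per-trace header (bytes)

def import_segy_to_att(segy_path, att_path, db_path,
                       samples_per_trace, sample_bytes=4):
    """Convert a SEGY file into a (hypothetical) internal ATT file and
    store selected trace-header fields in a database table, mirroring
    the import workflow: read in a loop, write in a loop, parse headers."""
    trace_bytes = TRACE_HEADER + samples_per_trace * sample_bytes

    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS trace_header "
                "(trace_no INTEGER, inline INTEGER, xline INTEGER)")

    with open(segy_path, "rb") as src, open(att_path, "wb") as dst:
        src.read(SEGY_TEXT_HEADER + SEGY_BIN_HEADER)   # skip file headers
        trace_no = 0
        while True:
            buf = src.read(trace_bytes)                # read one trace per pass
            if len(buf) < trace_bytes:
                break
            header, samples = buf[:TRACE_HEADER], buf[TRACE_HEADER:]
            # Bytes 189-192 / 193-196 of the trace header hold the
            # inline / crossline numbers in the SEGY rev.1 convention
            # (big-endian int32).
            inline, xline = struct.unpack(">ii", header[188:196])
            dst.write(samples)                         # write internal file
            con.execute("INSERT INTO trace_header VALUES (?,?,?)",
                        (trace_no, inline, xline))
            trace_no += 1
    con.commit()
    con.close()
```

Export would invert this loop, reading ATT records plus the database table and emitting SEGY or SEGD; the same per-trace streaming keeps both directions bounded by the 1 GB/s read and 350 MB/s write limits noted above.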