
Inspur High-Performance Solution for the Petroleum Industry

[summary]

Overview: high-performance computing demand in the petroleum industry comes mainly from migration processing (prestack migration, poststack migration, RNA, etc.), reservoir simulation, and similar applications. For years such data processing relied mainly on CPU-based clusters; today GPUs are increasingly adopted in many new techniques to raise processing speed dramatically.

I. Requirements analysis

In petroleum production, drilling is extremely expensive: a single well can cost hundreds of millions of yuan. If an oil company located reserves only by drilling during early-stage exploration, its losses would be immense. Common practice today is therefore to use sophisticated seismic imaging to build a detailed 3D map of hidden oil-bearing rock and natural gas reserves before drilling, raising the success rate of drilling and reducing exploration risk. The internationally recognized state of the art is migration imaging, which produces accurate structural images and has been applied extensively in petroleum exploration and development. At the same time, the greatly increased computing and data-processing load places exponentially growing demands on high-performance computing.

Petroleum exploration and high-performance computing

Why does seismic data processing require a high-performance computer? First, the computational load is enormous. Processing a single 10,000 m 2D seismic line normally requires about 28.8 million calculations, which scales to nearly 300 million calculations for a 100,000 m line. Nationwide, well over 10,000 km of 2D seismic lines require processing each year, each line is usually processed more than once, and more than one processing method is applied, so the total computational load is massive. Second, even a simple processing flow involves more than 20 kinds of processing methods, some of which, such as convolution, deconvolution, wave-equation methods, and iterative algorithms, are mathematically complicated and costly in time and labor. Third, subsurface geology is becoming more complex and seismic exploration more difficult, so newer and more elaborate processing methods must be adopted, often requiring data to be reprocessed repeatedly. Without high-performance computers, this processing could hardly be completed.
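To make one of these algorithm types concrete, here is a minimal, illustrative sketch of convolution and a naive frequency-domain deconvolution on a synthetic trace. It is not taken from any production seismic package; the wavelet, water level, and trace parameters are all arbitrary choices.

```python
# Illustrative only: convolve a sparse reflectivity series with a wavelet,
# then attempt to undo it with water-level (regularized) spectral division.
import numpy as np

rng = np.random.default_rng(0)

# Sparse synthetic reflectivity: a few spikes along a 1000-sample trace.
reflectivity = rng.standard_normal(1000) * (rng.random(1000) > 0.97)

def ricker(n=64, f=25.0, dt=0.002):
    """A simple Ricker wavelet standing in for the seismic source."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

wavelet = ricker()

# Forward model: the recorded trace is reflectivity convolved with the wavelet.
trace = np.convolve(reflectivity, wavelet)

# Naive deconvolution: spectral division with a "water level" so we never
# divide by near-zero wavelet spectrum values. Recovery is band-limited by
# the wavelet, so the estimate is a smoothed version of the true series.
n = len(trace)
W = np.fft.rfft(wavelet, n)
T = np.fft.rfft(trace, n)
water = 1e-2 * np.abs(W).max()
estimate = np.fft.irfft(T * np.conj(W) / (np.abs(W) ** 2 + water ** 2), n)
estimate = estimate[: len(reflectivity)]

print("correlation with true reflectivity:",
      np.corrcoef(reflectivity, estimate)[0, 1])
```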

Data processing flow

The term "processing flow" comes up constantly in seismic data processing, so it is worth defining. It is analogous to the production process flow needed to build an automobile: the flow spells out the scope and quality standards of each piece of work and organizes a complicated production job scientifically and in order. Seismic data processing is also a production process, and in fact a highly complicated one that draws on many disciplines. To ensure the order and quality of the work, production process flows are formulated according to the characteristics of field acquisition and the requirements of the geological tasks; in professional usage this is called the processing flow. To control quality at each step, mandatory quality checkpoints are set at critical stages of the flow: the next stage may start only after the current stage's output passes acceptance testing, which effectively guarantees production quality at every step. A minimal sketch of such a checkpointed flow follows.
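The stage names and acceptance checks below are hypothetical placeholders, not any specific commercial system's workflow; the point is only the gating structure.

```python
# Hypothetical sketch: a staged processing flow in which each stage's output
# must pass its acceptance test (QC checkpoint) before the next stage runs.
import numpy as np

def run_flow(data, stages):
    for name, process, accept in stages:
        data = process(data)
        if not accept(data):
            raise RuntimeError(f"QC checkpoint failed after stage '{name}'")
        print(f"stage '{name}': passed QC")
    return data

# Placeholder stages: remove the mean, then normalize; checks are deliberately simple.
stages = [
    ("denoise",   lambda d: d - d.mean(),        lambda d: np.isfinite(d).all()),
    ("normalize", lambda d: d / np.abs(d).max(), lambda d: np.abs(d).max() <= 1.0),
]

result = run_flow(np.random.default_rng(0).standard_normal(1024), stages)
```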

The seismic processing flow is not fixed. Separate flows are compiled for 2D and 3D seismic data according to the characteristics of field acquisition, and routine and special processing flows are established for different geological tasks. Within a flow, the terrain and the characteristics of the noise at the worksite may be taken into account and more targeted processing methods adopted. As processing technology develops, new techniques and methods are folded into the flow to keep improving processing quality and to supply ever more accurate data for interpretation. All of this shows that seismic data processing is an extremely complicated job.

Routine processing of seismic data

Seismic data collected in the field must first undergo routine processing. Routine processing is the basic stage: a set of processing methods whose goal is a clear representation of subsurface strata and the various geological conditions.

The main stages of routine processing are preprocessing, horizontal stacking, and post-stack migration.

Preprocessing: after the field records arrive at the processing center, they must be converted into a format the computer can read; this is the main job of the preprocessing stage, along with other fundamental preparatory work.

Horizontal stacking: traces from repeated observations that share the same subsurface reflection point are summed into a single trace, and this is repeated for every reflection point along the line. The result is a seismic section that reflects the configuration of the subsurface strata: the horizontally stacked section. (A minimal sketch of this stacking step appears after these definitions.)

Post-stack migration: performed on the stacked section, it corrects the mispositioning of subsurface strata in the horizontally stacked section. The steeper the dip of the strata, the larger this positional error and the more necessary the correction, which is why migration can also be viewed as a correction step. Only after migration does one obtain a migrated section that reflects the true configuration of the subsurface strata.
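Here is that stacking sketch. The fold, reflector positions, and noise level are arbitrary, and the NMO correction that precedes stacking in practice is assumed to have been applied already.

```python
# Illustrative CMP stacking: traces sampling the same reflection point are
# averaged into one trace, suppressing random noise by roughly sqrt(fold).
import numpy as np

rng = np.random.default_rng(1)
fold, nsamples = 24, 500                 # 24 traces in this midpoint gather

signal = np.zeros(nsamples)
signal[[120, 260, 410]] = 1.0            # three reflectors at this midpoint

# Each trace = signal + independent noise (NMO correction assumed done).
gather = signal + 0.5 * rng.standard_normal((fold, nsamples))

stacked = gather.mean(axis=0)            # the stacked trace for this midpoint

print("single-trace noise std:", gather[0][signal == 0].std())
print("stacked-trace noise std:", stacked[signal == 0].std())
```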

Features of high-performance applications in the petroleum industry

By the nature of their computation, the high-performance computing applications in seismic-method petroleum exploration fall mainly into seismic data processing and reservoir simulation; industry practitioners often treat computational visualization as a separate, workstation-class application. Commonly used software includes:

Type                          Application            Supplier
Seismic data processing       ProMax, SeisSpace      Landmark
                              Geodepth, Focus        Paradigm
                              Omega                  WesternGeco
                              Geocluster             CGG
Reservoir simulation          VIP/Nexus              Landmark
                              Eclipse/Intersect      Schlumberger
                              RMS                    Roxar
Computational visualization   Geoprobe               Landmark
                              Petrel                 Schlumberger
                              VoxelGeo, GoCad        Paradigm

Based on years of application experience in the petroleum industry and on communication with its key users, we draw the following conclusions. High-performance computing demand in the petroleum industry comes mainly from migration processing (prestack migration, poststack migration, RNA, etc.), reservoir simulation, and similar applications. For years such processing relied mainly on CPU-based clusters; today GPUs are increasingly adopted in many new techniques to raise processing speed dramatically.

Applications in the petroleum industry generally share the following features:

Massive computational load

As noted above, processing a single 10,000 m 2D seismic line normally requires about 28.8 million calculations, scaling to nearly 300 million for a 100,000 m line; nationwide, well over 10,000 km of such lines require processing every year. Since each line is processed more than once and with more than one method, the total computational load on seismic data is massive. A back-of-envelope sketch of this scaling follows.
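In the sketch, the per-line figure comes from the text, while the pass and method counts are hypothetical.

```python
# The quoted workload scales linearly with line length, and repeated passes
# with multiple methods multiply it further (pass/method counts are made up).
CALCS_PER_10KM_LINE = 28_800_000        # figure quoted for a 10,000 m 2D line

def line_calcs(length_m, passes=1, methods=1):
    return CALCS_PER_10KM_LINE * (length_m / 10_000) * passes * methods

print(f"100 km line, one pass : {line_calcs(100_000):.3g}")      # ~2.88e8, i.e. ~300 million
print(f"100 km, 3 passes x 2  : {line_calcs(100_000, 3, 2):.3g}")
```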

Communication-intensive workloads

Routine processing places only modest demands on the network, but as algorithms improve, network requirements keep rising, and many oilfields have begun to deploy 10 GbE or InfiniBand networks. For large-scale oilfield exploration workloads we recommend a high-bandwidth, low-latency InfiniBand network; a simple way to compare fabrics is a ping-pong benchmark such as the sketch below.
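This minimal mpi4py sketch assumes an MPI stack and mpi4py are installed on the cluster; the hostnames in the run command are placeholders.

```python
# Ping-pong between two ranks to estimate round-trip latency and bandwidth.
# Run with e.g.: mpirun -np 2 --host nodeA,nodeB python pingpong.py
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
buf = np.zeros(1 << 20, dtype=np.uint8)   # 1 MiB message
reps = 100

comm.Barrier()
t0 = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
    else:
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
elapsed = time.perf_counter() - t0

if rank == 0:
    rtt = elapsed / reps
    print(f"avg round trip: {rtt * 1e6:.1f} us, "
          f"effective bandwidth: {2 * buf.nbytes / rtt / 1e9:.2f} GB/s")
```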

II. Inspur's targeted solution for the petroleum industry

The analysis above shows that petroleum exploration computing is, at its core, high-performance computing, with demanding requirements on CPU performance, I/O, and the network. Drawing on years of experience, we offer customers targeted, professional solutions.

Inspur's high-performance application cluster addresses four main problems in petroleum applications:

High performance

Given the floating-point throughput and overall CPU performance that petroleum-industry workloads demand, Intel processors are recommended for high-performance computing: they offer not only strong processing capacity but also clear advantages in energy efficiency, memory support, and CPU architecture.

With the development of GPU technology, some petroleum software can now run its parallel computation on GPUs. On current GPU architectures a single GPU delivers trillions of floating-point operations per second, which can raise computational efficiency substantially; a minimal sketch of offloading one kernel to a GPU follows.
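The sketch assumes an NVIDIA GPU with the CuPy library installed (CuPy mirrors the NumPy API); the filter cutoff and array sizes are arbitrary.

```python
# Batched FFT filtering of many traces on the GPU via CuPy.
import numpy as np
import cupy as cp

# 4096 traces of 2048 samples each, generated on the host.
traces_cpu = np.random.standard_normal((4096, 2048)).astype(np.float32)

traces_gpu = cp.asarray(traces_cpu)            # host -> device copy
spectra = cp.fft.rfft(traces_gpu, axis=1)      # batched FFTs run on the GPU
spectra[:, 512:] = 0                           # crude low-pass filter
filtered = cp.fft.irfft(spectra, n=2048, axis=1)

result = cp.asnumpy(filtered)                  # device -> host copy
print(result.shape, result.dtype)
```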

Network bandwidth problem

Petroleum software places very high demands on network latency and bandwidth. A 10 GbE or InfiniBand solution satisfies the computation and data-exchange needs of all nodes to the greatest extent and keeps latency low. Inspur provides fully linear, non-blocking network interconnects across the board, letting every node deliver its full performance.

Storage bandwidth problem

An excellent storage system satisfies the software's bandwidth requirements. Inspur provides not only professional direct-attached storage but also Fibre Channel storage systems. Dedicated storage nodes are used to build a Lustre parallel file system attached to Ethernet or even 56 Gb/s InfiniBand, so CPUs do not sit idle waiting for data and computational efficiency rises substantially. One common Lustre tuning step is sketched below.
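The directory path and stripe parameters below are placeholders; the sketch assumes a Lustre client mount with the standard lfs utility available.

```python
# Stripe new files in a directory across 8 OSTs in 4 MiB chunks so large
# sequential reads/writes are served by several storage servers in parallel.
import subprocess

target_dir = "/lustre/seismic/project01"   # hypothetical Lustre directory

subprocess.run(["lfs", "setstripe", "-c", "8", "-S", "4M", target_dir],
               check=True)

# Inspect the resulting layout.
subprocess.run(["lfs", "getstripe", target_dir], check=True)
```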

Inspur can also provide a highly stable commercial parallel file system to increase the bandwidth and stability of the storage system substantially.

High system stability

A highly stable system makes petroleum applications more convenient and faster, enables efficient data processing, and keeps jobs running without interruption. Through unified cluster monitoring, management, and job scheduling, backed by Inspur's high-performance servers, Inspur safeguards the stability of the whole system, markedly improves operational stability for the user, and reduces the failure rate, providing uninterrupted support for growing user productivity.

III. Inspur solution advantages and customer value

Seismic data processing involves massive volumes of data, computation, and communication, and therefore places very high requirements on a computer's storage capacity, computing power, and I/O. This is exactly where high-performance computers excel.

High performance: compute nodes are built on the latest Inspur server platform, featuring dual-socket rack compute nodes, a 10 GbE core computing network, and a GbE core management network.

Energy efficiency: compute nodes use high-efficiency power supplies, cutting electricity consumption by more than 20% per year.

Ease of use: the cluster is configured with an easy-to-operate cluster management system for convenient remote management.

High scalability: demand for computing capacity in high-performance computing grows without limit, so the system is designed to meet defined scalability requirements and to scale out easily.
