Abstract

An enormous amount of petrotechnical data has been acquired during the 50 years of oil exploration and production (E&P) in Kuwait. More data continues to flow in daily, and will for the foreseeable future, as Kuwait remains a major oil producer with reserves expected to last well into the next century. The data classes include seismic, geological, petrophysical, drilling, production operations, production and injection volumes, surface and downhole facilities, PVT, and well surveillance data. Historically, these were stored on various media (paper, magnetic tapes, diskettes, and spreadsheets) and scattered across several departments and private files. Data quality was not uniformly high, and data access and sharing by geoscientists and planners were problematic. Data duplication, data corruption, data loss, and media deterioration were not uncommon. Data acquisition was further disrupted during and after the Iraqi invasion of Kuwait in 1990/91. In 1998, Kuwait Oil Company (KOC) embarked on the implementation of an integrated E&P database system to serve as a secure corporate repository for its "core business" data. The project was intended to take advantage of rapid advances in computer technology, align KOC with the high-efficiency trends in the international oil industry, and promote the concept of "data as a company asset". The company selected a commercial data model called "FINDER", which is built on "Oracle" relational database software. This paper focuses on some of the unique features of the project, the technical problems encountered and resolved, and future directions.
The project accomplishments so far include:
• successful gathering of historical data from disparate sources, data cleaning, batch loading, and post-loading data validation and quality control;
• standardisation of the naming and coding of objects: wells, fields, reservoirs, facilities, symbols, etc.;
• standardisation of templates (formats) for incoming well surveillance data;
• replacement of the legacy production "back allocation" system with a more robust, year-2000-compliant one;
• creation of a wide variety of forms and reports for data entry and retrieval;
• extension of the computing network to all 20 remote oil and gas gathering centers (GCs);
• formulation of distinct dataflow processes for capturing each data class;
• customisation of the data model to represent complex dual completions, selective multi-reservoir completions, and well rerouting from one gathering center to another;
• data transfer to a number of engineering and geological software packages used in KOC;
• interfacing with the distributed control system (DCS)/supervisory control and data acquisition (SCADA) system.

The current and future tasks include:
• extension to archive petrophysical log traces;
• extension to manage physical assets such as core plugs, crude samples, tapes, photographs, documents, and reports;
• migration of forms and reports to the intranet using Java technology;
• development of decision support systems for converting stored data into information.

Judged by industry standards, KOC has attained a high level of maturity in its data management activities.
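The production "back allocation" mentioned above is, in general terms, the apportioning of a volume metered at a gathering center back to its contributing wells, typically pro rata by each well's most recent test rate. The paper does not describe KOC's actual algorithm; the following is only a minimal sketch of the general idea, with illustrative function and well names.

```python
def back_allocate(gc_measured_volume, well_test_rates):
    """Allocate a gathering-center volume to wells pro rata by test rate.

    gc_measured_volume: total volume metered at the GC for the period.
    well_test_rates: dict mapping well name -> most recent well-test rate.
    Returns a dict mapping well name -> allocated volume.
    """
    total_test = sum(well_test_rates.values())
    if total_test == 0:
        raise ValueError("no non-zero well tests to allocate against")
    # Each well receives a share proportional to its tested contribution.
    return {
        well: gc_measured_volume * rate / total_test
        for well, rate in well_test_rates.items()
    }

# Example: 10,000 units metered at the GC, two wells tested at 600 and 400.
allocated = back_allocate(10_000, {"W-1": 600, "W-2": 400})
# -> {"W-1": 6000.0, "W-2": 4000.0}
```

A real allocation system would also handle shut-in periods, gas/water phases, and meter corrections, which this sketch omits.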
