Research Article: Efficient big data assimilation through sparse representation: A 3D benchmark case study in petroleum engineering

Date Published: July 27, 2018

Publisher: Public Library of Science

Author(s): Xiaodong Luo, Tuhin Bhakta, Morten Jakobsen, Geir Nævdal, Leonie Anna Mueck.

http://doi.org/10.1371/journal.pone.0198586

Abstract

Data assimilation is an important discipline in the geosciences that aims to combine the information content of prior geophysical models with that of observational data (observations) to obtain improved model estimates. Ensemble-based methods are among the state-of-the-art algorithms in the data assimilation community. When ensemble-based methods are applied to assimilate big geophysical data, substantial computational resources are needed to compute and/or store certain quantities (e.g., the Kalman-gain-type matrix), given the large model and data sizes. In addition, uncertainty quantification of the observational data, e.g., estimating the observation error covariance matrix, also becomes computationally challenging, if not infeasible. To tackle these challenges in the presence of big data, in a previous study the authors proposed a wavelet-based sparse representation procedure for 2D seismic data assimilation problems (also known as history matching problems in petroleum engineering). In the current study, we extend the sparse representation procedure to 3D problems, an important step towards real field case studies. To demonstrate the efficiency of the extended sparse representation procedure, we apply an ensemble-based seismic history matching framework equipped with it to a 3D benchmark case, the Brugge field. In this benchmark case study, the total number of seismic data points is on the order of 10^6. We show that the wavelet-based sparse representation procedure is extremely efficient in reducing the size of the seismic data while preserving their salient features. Moreover, even with a substantial data-size reduction through sparse representation, the ensemble-based seismic history matching framework can still achieve good estimation accuracy.

Partial Text

Data assimilation is an important discipline in the geosciences that aims to combine the information content of prior geophysical models with that of observational data (observations) to obtain improved model estimates [1]. The advance of modern technologies has led to a massive growth of high-resolution observational data in the geosciences [2, 3]. For instance, in the petroleum industry, Permanent Reservoir Monitoring (PRM) systems are a cutting-edge technology used to collect 4-dimensional (4D) seismic data. The frequent multiple vintages of 4D seismic surveys result in huge datasets, with the total data size often on the order of hundreds of millions of data points, or even higher. Therefore, there is high demand from the petroleum industry for efficient big data analytics methods to extract, analyze and utilize the information in big 4D seismic data. Similar problems are also faced in many other fields that involve abundant data obtained through, for example, satellite remote sensing [4], medical imaging [5], geophysical surveys [6, 7], and so on. As a result, big (geophysical) data assimilation has become an important topic in practice.

The proposed framework consists of three key components (see Fig 3), namely, forward AVA simulation, sparse representation (in terms of leading wavelet coefficients) of both observed and simulated AVA data, and the history matching algorithm. It is expected that the proposed framework can also be extended to other types of seismic data, and more generally, geophysical data with spatial correlations.
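To make the history matching component concrete, the sketch below implements a generic ensemble smoother update, in which the Kalman-gain-type matrix mentioned above appears explicitly. This is an illustrative sketch only, not the authors' exact algorithm; the toy linear forward model `G` stands in for the AVA simulation, and all variable and function names are assumptions. Note that the size of the Kalman-gain-type matrix scales with the number of data points, which is precisely why reducing the data size via sparse representation matters.

```python
import numpy as np

def ensemble_smoother_update(M, D, d_obs, obs_var, rng):
    """One generic ensemble-smoother update (illustrative sketch).

    M:       (Nm, Ne) ensemble of model parameters
    D:       (Nd, Ne) ensemble of simulated data, e.g. leading wavelet
             coefficients of simulated AVA attributes
    d_obs:   (Nd,) observed data in the same sparse representation
    obs_var: (Nd,) observation-error variances
    """
    Ne = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)   # parameter anomalies
    dD = D - D.mean(axis=1, keepdims=True)   # simulated-data anomalies
    C_md = dM @ dD.T / (Ne - 1)              # cross-covariance
    C_dd = dD @ dD.T / (Ne - 1)              # simulated-data covariance
    # Kalman-gain-type matrix; its size grows with the data size Nd,
    # hence the benefit of shrinking Nd through sparse representation.
    K = C_md @ np.linalg.inv(C_dd + np.diag(obs_var))
    # Perturbed observations, one noise realization per ensemble member.
    E = np.sqrt(obs_var)[:, None] * rng.standard_normal((d_obs.size, Ne))
    return M + K @ (d_obs[:, None] + E - D)

# Toy linear forward model as a stand-in for the AVA simulation.
rng = np.random.default_rng(42)
Nm, Nd, Ne = 5, 8, 500
G = rng.standard_normal((Nd, Nm))
m_true = rng.standard_normal(Nm)
d_obs = G @ m_true + 0.1 * rng.standard_normal(Nd)

M_prior = rng.standard_normal((Nm, Ne))
M_post = ensemble_smoother_update(M_prior, G @ M_prior, d_obs,
                                  0.01 * np.ones(Nd), rng)

misfit_prior = np.linalg.norm(G @ M_prior.mean(axis=1) - d_obs)
misfit_post = np.linalg.norm(G @ M_post.mean(axis=1) - d_obs)
print(f"data misfit: prior {misfit_prior:.2f} -> posterior {misfit_post:.2f}")
```

In this linear toy setting the update pulls the ensemble mean toward the observations, reducing the data misfit; iterative ensemble smoothers used in practice repeat such updates with inflated observation errors.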

We demonstrate the performance of the proposed workflow through a 3D Brugge benchmark case study. Table 1 summarizes the key information of the experimental settings. Readers are referred to [68] for more information on the benchmark case study.

In this work, we extend the 2D wavelet-based sparse representation procedure used in [8] to handle 3D seismic datasets, and integrate it into an ensemble-based seismic history matching framework. To demonstrate the efficiency of the integrated workflow, we apply it to a 3D benchmark case, the Brugge field case. The seismic data used in this study are near- and far-offset amplitude versus angle (AVA) attributes, with more than 7 million data points in total.
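As a minimal illustration of the idea of 3D wavelet-based sparse representation, the sketch below applies a multilevel 3D Haar transform to a small synthetic attribute cube, keeps only the leading (largest-magnitude) wavelet coefficients, and measures the reconstruction error. This is a self-contained toy written with a plain Haar wavelet; the wavelet family, decomposition depth, thresholding rule, and all names here are assumptions, not the authors' implementation.

```python
import numpy as np

def haar_step(x, axis):
    # Single-level Haar split along one axis: first half approximations,
    # second half details (orthonormal scaling by 1/sqrt(2)).
    x = np.swapaxes(x, axis, -1)
    a = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)
    d = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)
    return np.swapaxes(np.concatenate([a, d], axis=-1), axis, -1)

def inv_haar_step(y, axis):
    y = np.swapaxes(y, axis, -1)
    n = y.shape[-1] // 2
    a, d = y[..., :n], y[..., n:]
    x = np.empty_like(y)
    x[..., ::2] = (a + d) / np.sqrt(2)
    x[..., 1::2] = (a - d) / np.sqrt(2)
    return np.swapaxes(x, axis, -1)

def haar3d_multi(x, levels):
    # Multilevel 3D transform: recurse on the approximation corner.
    x = x.copy()
    n = np.array(x.shape)
    for _ in range(levels):
        sub = tuple(slice(0, s) for s in n)
        block = x[sub]
        for ax in range(3):
            block = haar_step(block, ax)
        x[sub] = block
        n //= 2
    return x

def inv_haar3d_multi(c, levels):
    c = c.copy()
    n = np.array(c.shape) // 2 ** (levels - 1)
    for _ in range(levels):
        sub = tuple(slice(0, s) for s in n)
        block = c[sub]
        for ax in range(3):
            block = inv_haar_step(block, ax)
        c[sub] = block
        n *= 2
    return c

def sparsify(coeffs, keep_frac=0.05):
    # Keep the leading (largest-magnitude) coefficients, zero the rest.
    flat = np.abs(coeffs).ravel()
    k = max(1, int(keep_frac * flat.size))
    thresh = np.partition(flat, -k)[-k]
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

# Smooth synthetic "attribute cube" (spatially correlated, plus weak noise).
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 32)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
cube = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y) * Z \
       + 0.01 * rng.standard_normal((32, 32, 32))

coeffs = haar3d_multi(cube, levels=3)
sparse = sparsify(coeffs, keep_frac=0.05)   # 20x data-size reduction
recon = inv_haar3d_multi(sparse, levels=3)
rel_err = np.linalg.norm(recon - cube) / np.linalg.norm(cube)
print(f"relative reconstruction error with 5% of coefficients: {rel_err:.3f}")
```

Because the cube is spatially correlated, its energy concentrates in a few wavelet coefficients, so retaining roughly 5% of them preserves the salient structure; in the workflow above, only these leading coefficients (of both observed and simulated data) would enter the history matching step.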

 
