A recently released voxel model quantifying aggregate resources of the Belgian part of the North Sea includes lithological properties of all Quaternary sediments and modelling-related uncertainty. As the underlying borehole data come from various sources and cover a long time span, data-related uncertainties should be accounted for as well. A tiered data-uncertainty assessment was applied to a composite lithology dataset with uniform, standardized lithological descriptions and rigorously completed metadata fields, and uncertainties were qualified and quantified for positioning, sampling and vintage. The uncertainty on horizontal positioning combines navigational errors, on-board and off-deck offsets, and underwater drift. Sampling-gear uncertainty evaluates the suitability of each instrument in terms of its efficiency of sediment yield per lithological class. Vintage uncertainty provides a likelihood of temporal change since the moment of sampling, using the mobility of fine-scale bedforms as an indicator. For each uncertainty component, quality flags from 1 (very uncertain) to 5 (very certain) were defined and converted into corresponding uncertainty percentages meeting the input requirements of the voxel model. Inevitably, an uncertainty-based data selection procedure, aimed at improving the confidence of data products, reduces data density. Whether or not this density reduction is detrimental to the spatial coverage of data products will depend on their intended use. At the very least, demonstrable reductions in spatial coverage will help to highlight the need for future data acquisition and to optimize survey plans. By opening up our subsurface model with associated data uncertainties in a public decision-support application, policy makers and other end users are better able to visualize overall confidence and to identify areas where coverage is insufficient for their needs. Engineering geologists and geospatial analysts in particular, who have to work with a borehole dataset that becomes increasingly sparse with depth below the seabed, will benefit from a better visualization of data-related uncertainty.
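The conversion from ordinal quality flags to model-ready uncertainty percentages can be sketched in a few lines of Python. The flag-to-percentage mapping and the simple averaging of the three components below are illustrative assumptions, not the values or combination rule used in the published workflow.

# Minimal sketch (Python): converting per-component quality flags
# (1 = very uncertain, 5 = very certain) into uncertainty percentages for the
# voxel-model input. The percentage mapping and the simple averaging below
# are illustrative assumptions, not the published values or combination rule.

FLAG_TO_UNCERTAINTY_PCT = {1: 80.0, 2: 60.0, 3: 40.0, 4: 20.0, 5: 5.0}

def combined_uncertainty(position_flag, sampling_flag, vintage_flag):
    """Combine positioning, sampling-gear and vintage uncertainty components."""
    components = [
        FLAG_TO_UNCERTAINTY_PCT[position_flag],
        FLAG_TO_UNCERTAINTY_PCT[sampling_flag],
        FLAG_TO_UNCERTAINTY_PCT[vintage_flag],
    ]
    return sum(components) / len(components)

# Example: an accurately positioned sample taken with suitable gear in a
# mobile-bedform area (vintage flag 2).
print(combined_uncertainty(position_flag=4, sampling_flag=5, vintage_flag=2))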
Multi-model ensembles for sea surface temperature (SST), sea surface salinity (SSS), sea surface currents (SSC), and water transports have been developed for the North Sea and the Baltic Sea using outputs from several operational ocean forecasting models provided by different institutes. The individual models differ in model code, resolution, boundary conditions, atmospheric forcing, and data assimilation. The ensembles are produced on a daily basis. Daily statistics are calculated for each parameter, characterizing the spread of the forecasts through the standard deviation, ensemble mean and median, and coefficient of variation. High forecast uncertainty, particularly for SSS and SSC, was found in the Skagerrak, the Kattegat (the Transition Area between the North Sea and the Baltic Sea), and the Norwegian Channel. Based on the data collected, longer-term statistical analyses have been carried out, such as a comparison with satellite SST data and an evaluation of the deviation between forecasts on temporal and spatial scales. Regions of high forecast uncertainty for SSS and SSC were detected in the Transition Area and the Norwegian Channel, where a large spread between the models might evolve due to differences in simulating the frontal structures and their movements. A distinct seasonal pattern could be distinguished for SST, with high uncertainty between the forecasts during summer. Forecasts with relatively high deviation from the multi-model ensemble (MME) products or from the other individual forecasts were detected for each region and each parameter. The comparison with satellite data showed that the error of the MME products is lower than that of the individual ensemble members.
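The daily ensemble statistics described above (standard deviation, ensemble mean and median, and coefficient of variation across the member forecasts) could be computed along the following lines. This is a minimal numpy sketch with a hypothetical array layout, assuming all member fields have already been regridded to a common grid; it is not the operational MME code.

import numpy as np

# Minimal sketch of daily multi-model ensemble (MME) statistics for one
# parameter (e.g. SST): ensemble mean, median, spread (standard deviation)
# and coefficient of variation. 'forecasts' is a hypothetical array of shape
# (n_members, ny, nx) holding one day's field from each contributing model.
def mme_daily_statistics(forecasts):
    mean = np.nanmean(forecasts, axis=0)      # ensemble mean
    median = np.nanmedian(forecasts, axis=0)  # ensemble median
    spread = np.nanstd(forecasts, axis=0)     # ensemble spread (std. dev.)
    # Coefficient of variation: spread relative to the absolute mean, masked
    # where the mean is near zero (e.g. current components around slack water).
    with np.errstate(divide="ignore", invalid="ignore"):
        cv = np.where(np.abs(mean) > 1e-6, spread / np.abs(mean), np.nan)
    return {"mean": mean, "median": median, "std": spread, "cv": cv}

# Example: five member forecasts on a 100 x 120 grid of SST values (degC).
stats = mme_daily_statistics(20.0 + np.random.randn(5, 100, 120))
print(float(stats["std"].mean()))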