Data-driven and Physically-based Models for Characterization of Processes in Hydrology, Hydraulics, Oceanography and Climate Change
(6 - 28 Jan 2008)
~ Abstracts ~
Linear stability theory is a classical theory for the analysis of flow instability and is well established in the community. As such, it is widely used for the study and prediction of flow instability and turbulent transition. However, the theory has been successful for only a few problems (Rayleigh-Benard convection, Taylor-Couette flow) and fails for most others (plane Poiseuille flow, pipe Poiseuille flow, plane Couette flow, and boundary layer flow). To date, there are no experimental data showing that linear instability is related to turbulent transition.
Recently, we proposed a new mechanism for flow instability and turbulent transition. In this mechanism, for the first time, a theory derived strictly from physics shows that flow instability under a finite-amplitude disturbance leads to turbulent transition. The proposed theory is named the “energy gradient theory.” It is demonstrated that the transverse energy gradient amplifies the disturbance, while the energy loss due to viscosity along the streamline damps it. The disturbance-amplitude threshold obtained scales with the Reynolds number with an exponent of -1, which exactly explains recent experimental results for pipe flow. This result resolves a long-standing controversy surrounding speculation and analysis of parallel flows by many groups. It follows from this analysis that the critical value of the so-called energy gradient parameter Kmax is constant at turbulent transition in parallel flows, which is confirmed by experiments for pipe Poiseuille flow, plane Poiseuille flow, and plane Couette flow. It is also inferred from the proposed theory that the transverse energy gradient can serve as the power source for the self-sustaining process of wall-bounded turbulence.
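For reference, the central quantities as commonly stated in this line of work (notation is a hedged sketch; exact definitions are the authors'):

```latex
% Energy gradient parameter: transverse energy gain vs. streamwise viscous loss,
% with E the total mechanical energy and H the energy loss along the streamline:
K = \frac{\partial E / \partial n}{\partial H / \partial s},
\qquad E = p + \tfrac{1}{2}\rho V^2,
% and the threshold amplitude of the normalized disturbance for transition:
\frac{A_c}{U} \propto \mathrm{Re}^{-1}.
```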
The proposed “energy gradient theory,” which physically explains the phenomena of flow instability and turbulent transition in shear flows and has been shown to be valid for parallel flows, is extended to curved flows in this study.
In this talk, I will discuss various multiscale simulation techniques for porous media flow and transport.
Next, I will describe how these multiscale methods can be extended to problems without scale separation. In this case, some type of limited global information is incorporated into the basis functions in order to accurately represent the large-scale effects. I will discuss the advantages of these approaches and present numerical results which demonstrate the importance of incorporating limited global information. Some applications to statistical downscaling will also be discussed.
With increasing interest in accurate prediction of subsurface properties, subsurface characterization based on dynamic data takes on greater importance.
In this talk, I will describe how coarse-scale models can be used to speed up uncertainty quantification in subsurface flows. The proposed techniques are implemented within Markov chain Monte Carlo methods. Theoretical results will be presented. We will also discuss the use of coarse-scale models in Ensemble Kalman Filter methods. Numerical results will be presented to demonstrate the efficiency of the proposed methodologies.
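One hedged sketch of how a coarse-scale model can accelerate MCMC: a two-stage Metropolis step in which a cheap coarse-scale likelihood screens proposals before the expensive fine-scale solver is run. All function names below are illustrative placeholders, not the authors' code, and a symmetric proposal is assumed.

```python
import numpy as np

def two_stage_step(theta, coarse_loglik, fine_loglik, proposal, rng):
    """One two-stage Metropolis-Hastings step (symmetric proposal assumed).

    `coarse_loglik` and `fine_loglik` stand in for log-posteriors evaluated
    with a coarse-scale and a fine-scale flow solver, respectively.
    """
    theta_new = proposal(theta, rng)
    # Stage 1: cheap screening with the coarse-scale model.
    a1 = min(1.0, np.exp(coarse_loglik(theta_new) - coarse_loglik(theta)))
    if rng.random() >= a1:
        return theta  # rejected cheaply; the fine solver never runs
    # Stage 2: correction with the fine-scale model, so the chain still
    # targets the fine-scale posterior (detailed balance is preserved).
    a2 = min(1.0, np.exp((fine_loglik(theta_new) - fine_loglik(theta))
                         - (coarse_loglik(theta_new) - coarse_loglik(theta))))
    return theta_new if rng.random() < a2 else theta
```

The design gain is that most rejections happen in stage 1, where only the coarse solver is evaluated.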
The various numerical codes which are used throughout the world to compute the propagation of tsunamis across the oceans must be informed by initial conditions. It is shown that the classical approach, consisting of translating the frozen deformation of the sea bottom towards the free surface and letting it propagate, has some drawbacks.
The generation of freak waves is a vast topic. Here we describe the formation of freak waves by directional focusing. Ocean data, laboratory experiments, and numerical experiments will be presented. In particular, during the past ten years, efficient numerical wave tanks which closely mimic laboratory wave tanks have been developed.
Analytical and asymptotic methods still have an essential role to play despite the dominance of numerical methods. Mathematical approaches for the study of waves in dispersive systems will be presented. Very recently, new systems of equations for the study of water waves in the presence of viscous dissipation have been derived.
The sloshing of a free liquid inside a closed container leads to impacts on its walls, which can be damaging. More generally, extreme waves lead to various types of impacts. New pressure impact formulas which show the importance of compressibility effects will be described.
Inference regarding complex physical systems (e.g. subsurface aquifers, charged particle accelerators, physics experiments) is typically plagued by a lack of information available from relevant experimental data. What data are available are usually limited and inform only indirectly about the phenomena of interest. However, when the physical system is amenable to computer simulation, these simulations can be combined with experimental observations. An encompassing framework for carrying out such analyses has the potential to shed light on a number of important issues in simulation-based predictive investigations:
- combining information from multiple experiments;
The details of any analysis will likely vary with a given application. However, common aspects of any analysis are bound to include the following:
- selection of input settings over which to carry out simulation runs;
This talk will discuss the above aspects, focusing on applications relevant to investigations at Los Alamos National Laboratory.
As computing capabilities expand, we are tempted to model ever more detailed and complicated phenomena. Utilizing these next-generation models for prediction and scientific discovery will likely require that we go beyond the black-box approaches that dominate current analysis methods.
Embedding uncertainty quantification tools into high performance computing codes (and platforms) has the potential to reduce the overall computational burden while delivering more accurate results for parameter optimization, sensitivity analysis, and error estimation for simulation-based predictions. One step in this direction is the augmentation of forward models with adjoint routines to facilitate the search for input settings that best match experimental data. But broader progress in embedding UQ tools within computational models has been slow because the design of new computing architectures, simulation codes/models, and analysis methods/tools is not done in concert. In this talk I will highlight some potential opportunities at the interface between computing, modeling, and analysis.
Many problems of river management and civil protection consist of the evaluation of the maximum water levels and discharges that may be attained at particular locations during the development of an exceptional meteorological event. There is another category of catastrophic events whose effects also fall within the civil protection area: the prediction of the scenario following the almost instantaneous release of a great volume of liquid, as in the breaking of a man-made dam. In many countries, the determination of the parameters of the wave likely to be produced after the failure of a dam is required by law (Molinaro, 1991; Betamio de Almeida and Bento Franco, 1993), and systematic studies are mandatory. There are works based on scaled physical models of natural valleys, but these represent expensive efforts that are not devoid of difficulties. There is therefore a need to develop adequate numerical models able to reproduce situations originated by the irregularities of a non-prismatic bed, and to delimit their applicability, given the difficulty of developing a model capable of producing solutions of the complete equations over an irregular river bed.
In recent years, finite volume techniques have been successfully applied in hydrology to simulate two-dimensional free surface flows. In river basin modelling, bottom friction and bed irregularities have been shown to drastically influence not only the behaviour of flood waves but also the performance of numerical methods. Recently, two-dimensional numerical models have been developed as tools to design and manage river basin systems. In this work, a finite volume based upwind scheme is used to build a simulation model that accounts for differences in bottom level. The discretization is made on triangular unstructured grids, and the source terms of the equations are given a special treatment.
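As a rough one-dimensional sketch of the two ingredients just named, an upwind-type flux plus a discrete bed-slope source term: the actual scheme is two-dimensional on triangular unstructured grids, and every name and value below is illustrative, not the authors' implementation.

```python
import numpy as np

g = 9.81  # gravitational acceleration

def flux(h, q):
    """Physical flux of the 1D shallow water equations (h: depth, q: discharge)."""
    return np.array([q, q**2 / h + 0.5 * g * h**2])

def rusanov(hL, qL, hR, qR):
    """Local Lax-Friedrichs (Rusanov) flux: the simplest upwind-type choice."""
    s = max(abs(qL / hL) + np.sqrt(g * hL), abs(qR / hR) + np.sqrt(g * hR))
    return 0.5 * (flux(hL, qL) + flux(hR, qR)) - 0.5 * s * np.array([hR - hL, qR - qL])

def step(h, q, z, dx, dt):
    """One explicit finite-volume step; the bed-slope source term -g*h*dz/dx
    receives a centred discrete treatment (interior cells only)."""
    hn, qn = h.copy(), q.copy()
    for i in range(1, len(h) - 1):
        fL = rusanov(h[i-1], q[i-1], h[i], q[i])
        fR = rusanov(h[i], q[i], h[i+1], q[i+1])
        hn[i] = h[i] - dt / dx * (fR[0] - fL[0])
        qn[i] = (q[i] - dt / dx * (fR[1] - fL[1])
                 - dt * g * h[i] * (z[i+1] - z[i-1]) / (2 * dx))
    return hn, qn
```

Balancing the source-term discretization against the flux discretization (well-balancing) is precisely where the "special treatment" mentioned above matters.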
The study of open channel flows is fundamental to hydraulics. Such flows occur in irrigation channels and rivers and are characterized by a variety of complexities. The complexities arise from possible curvature of a channel and variable cross-sections of varying shape. The channel bed may be erodible, consisting of clay, sand, gravel, and boulders. In such a scenario, suitable mathematical modeling is required for the different phenomena that arise in practice. To start with, a channel is assumed straight and wide enough for the mean flow to be treated as one-dimensional. The theories of such flows in the literature are essentially phenomenological and require greater precision. This talk approaches these questions in a mathematically systematic manner. Noting that the bed causes turbulence in the fluid, we proceed with the Navier-Stokes equations and average them over time to study the mean flow and the associated Reynolds stress. The resulting Reynolds Averaged Navier-Stokes (RANS) equations form an underdetermined system, falling short by one equation. For steady, fully developed flow the traditional approach is to invoke Prandtl's mixing length hypothesis, which is based on several assumptions about the flow. This is derived here in a precise manner. Alternatively, a turbulence closure assumption is proposed: that the Reynolds stress contributes to the forward momentum equation far in excess of the viscous stress. It leads to the same logarithmic velocity profile as obtained by the traditional method and observed in actual experiments. The near-bed viscous sublayer and the intermediate layer between it and the logarithmic layer are treated rigorously to obtain a single fifth-degree expression for the forward velocity in the inner layer close to the bed. The expressions for the corresponding Reynolds stress are obtained rigorously for the two layers combined. The problem of accelerated flows is examined next for a wide channel. The traditional way is to use the St. Venant equations derived from energy considerations. Here we use the depth-averaging technique, which is applicable in principle, to derive a generalized equation. The generalized St. Venant equations are applied to some special problems.
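For orientation, the classical closure and the profile it yields, in standard notation (the talk derives these more rigorously):

```latex
% Prandtl mixing-length closure for the Reynolds stress near a wall:
-\overline{u'w'} = \ell^2 \left(\frac{du}{dz}\right)^2, \qquad \ell = \kappa z,
% which, in a constant-stress layer, integrates to the logarithmic profile
\frac{u}{u_*} = \frac{1}{\kappa} \ln\frac{z\,u_*}{\nu} + B,
% with von Karman constant \kappa \approx 0.41 and B \approx 5.0 for smooth beds.
```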
In this talk we consider channel flows over erodible beds. In such flows, finer particles travel with the stream, never coming into contact with the bed. Such sediment transport is known as wash load.
The classical extreme value theory will be reviewed with the aim of highlighting its key results and applications to environmental problems. The generalized extreme value (GEV) distribution is frequently used to model a data set that can be viewed as a realization of a sequence of independent and identically distributed random variables (block maxima). Methods of parameter identification, estimation, and inference will be discussed. Hydrological and water quality examples will be used to illustrate the application of the theory.
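As a hedged illustration of the block-maxima workflow, a minimal sketch using scipy; the data and all parameter values are synthetic placeholders.

```python
import numpy as np
from scipy import stats

# Synthetic "annual maxima" series standing in for, e.g., annual peak flows.
rng = np.random.default_rng(0)
annual_max = stats.genextreme.rvs(c=-0.1, loc=100.0, scale=25.0,
                                  size=60, random_state=rng)

# Maximum-likelihood fit of the GEV (scipy's shape `c` is minus the usual xi).
c, loc, scale = stats.genextreme.fit(annual_max)

# The T-year return level is the (1 - 1/T) quantile of the fitted GEV.
T = 100
return_level = stats.genextreme.isf(1.0 / T, c, loc, scale)
print(f"estimated {T}-year return level: {return_level:.1f}")
```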
The drawback of the block maxima approach is the serious loss of information due to basing the analysis on one or very few extreme order statistics. To increase the amount of usable data, a threshold is established and all values above the threshold are considered in the analysis. This approach is known as the peaks-over-threshold (POT) method, and this talk will be dedicated to describing its theory and applications. Some issues related to threshold selection, lack of independence, and the inclusion of explanatory variables will also be discussed. Applications to water quantity and quality data will be used for illustration.
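A matching hedged sketch of the POT workflow, again on synthetic data; the threshold choice and return-level formula follow standard practice, and real series would first need declustering.

```python
import numpy as np
from scipy import stats

# Synthetic "daily" series standing in for a water quantity or quality record.
rng = np.random.default_rng(1)
series = rng.gamma(shape=2.0, scale=10.0, size=5000)

u = np.quantile(series, 0.95)          # threshold choice is itself a key issue
exceed = series[series > u] - u

# Fit a generalized Pareto distribution (GPD) to the exceedances over u.
shape, _, scale = stats.genpareto.fit(exceed, floc=0.0)

# Return level exceeded on average once per T observations,
# with zeta_u = P(X > u) estimated empirically (assumes shape != 0).
zeta_u = exceed.size / series.size
T = 10000
x_T = u + (scale / shape) * ((T * zeta_u) ** shape - 1.0)
print(f"{T}-observation return level: {x_T:.1f}")
```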
In this talk we consider weakly nonlinear long waves. Here the basic paradigm is the well-known Korteweg-de Vries equation and its solitary wave solution. We present a brief historical discussion, followed by a typical derivation in the context of internal and surface water waves. Then we describe two extensions, the first to the variable-coefficient Korteweg-de Vries equation for the description of solitary waves in a variable environment, and the second to the forced Korteweg-de Vries equation and the theory of undular bores.
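For reference, one standard normalization of the equation and its solitary wave solution:

```latex
% Korteweg-de Vries equation and its right-travelling solitary wave of speed c:
u_t + 6\,u\,u_x + u_{xxx} = 0,
\qquad
u(x,t) = \frac{c}{2}\,\mathrm{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - ct)\right),
% so the amplitude of the solitary wave is proportional to its speed.
```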
In the coastal oceans, the interaction of currents (such as the barotropic tide) with topography can generate large-amplitude, horizontally propagating internal solitary waves. These waves often occur in regions where the waveguide properties vary in the direction of propagation. We consider the modelling of these waves by nonlinear evolution equations of the Korteweg-de Vries type with variable coefficients, and we describe how these models are used to describe the shoaling of internal solitary waves over the continental shelf and slope. The theories are compared with various numerical simulations.
Although the talks are essentially independent, there will be some small overlap in the material covered.
Analyzing input and structural uncertainty of a hydrological model with stochastic, time-dependent parameters
A recently developed technique for identifying continuous-time, time-dependent, stochastic parameters of dynamic models is applied within a systematic framework for identifying the causes of bias in the results of a simple hydrological model. Model parameters are sequentially replaced by stochastic processes, and the degree of bias reduction in model output is analyzed for each parameter. In a next step, the identified time-dependences of all parameters are analyzed for dependences on external influence factors and model states. If significant relationships between time-dependent parameters and influence factors or states are found, the deterministic model must be improved. Otherwise, or after improving the deterministic model in a first step, the description of uncertainty in model predictions can be improved by replacing selected model parameters by stochastic processes. The application of this framework to a simple 8-parameter conceptual hydrological model demonstrates its capabilities. Different time-dependent parameters (including additional parameters for input and output modification) have significantly different potential for bias reduction. The degree of achievable bias reduction identifies the soil and runoff sub-model as the one with the highest potential for improvement. Attempts at reducing the deficits of the deterministic runoff model lead to a considerable improvement of the fit, particularly for outliers during strong storm events. After improving this sub-model, the dominant fraction of the remaining bias is attributed to random input uncertainty of rainfall and described by a stochastic input modification factor.
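A hedged sketch of the core device: a constant model parameter is replaced by a mean-reverting stochastic process, here taken to be Ornstein-Uhlenbeck for illustration; the paper's exact process and all numbers below are illustrative assumptions.

```python
import numpy as np

def ou_path(theta_bar, tau, sigma, dt, n, rng):
    """Euler-Maruyama simulation of the time-dependent parameter process
    d(theta) = -(theta - theta_bar)/tau dt + sigma dW."""
    theta = np.empty(n)
    theta[0] = theta_bar
    for k in range(1, n):
        theta[k] = (theta[k-1]
                    - (theta[k-1] - theta_bar) / tau * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
    return theta

rng = np.random.default_rng(2)
# One year of daily values for a parameter with mean 1, correlation time 20 days.
path = ou_path(theta_bar=1.0, tau=20.0, sigma=0.05, dt=1.0, n=365, rng=rng)
```

The inferred time course of such a parameter is then what gets screened for systematic dependence on inputs and model states.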
Emulators provide computationally efficient interpolation between outputs of simulator runs available at design points in input space. For this reason, they are very important tools to make systems analytical techniques that require many model evaluations, such as optimization, sensitivity analysis, or statistical inference, available for computationally demanding simulation models. So far, the dominant tool for developing such emulators have been priors in the form of Gaussian processes that were conditioned with the design data set. These emulators do not consider our knowledge of the structure of the simulation model and run into numerical difficulties if there is a large number of closely spaced input points. This is usually the case in the time dimension of dynamic models. To address these problems, a new concept of developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state-space model of the temporal evolution of the simulation model with Gaussian processes in the other input dimensions. Conditioning of this prior to the design data set is done by Kalman smoothing. The feasibility of the approach is demonstrated by the application to a simple hydrological model.
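For context, a minimal sketch of the conventional baseline described above: a Gaussian process emulator conditioned on design-point runs. The kernel choice, zero prior mean, and hyperparameters are illustrative assumptions, not the proposed method.

```python
import numpy as np

def sq_exp(X1, X2, ell=1.0, var=1.0):
    """Squared-exponential covariance between two sets of input points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return var * np.exp(-0.5 * d2 / ell**2)

def gp_emulate(X_design, y_design, X_new, nugget=1e-8):
    """Posterior mean of a zero-mean GP at new inputs, given simulator
    outputs y_design at the design points X_design."""
    K = sq_exp(X_design, X_design) + nugget * np.eye(len(X_design))
    k_star = sq_exp(X_new, X_design)
    return k_star @ np.linalg.solve(K, y_design)
```

The ill-conditioning of the matrix K for many closely spaced design points, e.g. along the time axis of a dynamic model, is exactly the difficulty the proposed state-space/Kalman-smoothing construction is designed to avoid.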
Statistics for ordinary and stochastic differential equation models
Physically based models in the environmental sciences are usually formulated as differential equations. If they contain unknown parameters, these can be estimated by nonlinear least squares. However, in practically all applications the deviations between model output and observations show systematic patterns, so the assumption of i.i.d. observation noise is not tenable. In particular, obtaining reliable uncertainty measures for parameter estimates and model predictions and identifying model deficits become difficult. To address these problems, researchers have introduced either a nonparametric bias term in the model, time-varying stochastic inputs and parameters, or a noise term in the differential equation itself, leading to stochastic differential equations. I will discuss the differences and advantages of these approaches.
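Since several of these approaches hinge on simulating the stochastic differential equation itself, here is a minimal hedged sketch of the standard Euler-Maruyama scheme; the drift f and diffusion g below are illustrative placeholders.

```python
import numpy as np

def euler_maruyama(f, g, x0, dt, n, rng):
    """Simulate dX = f(X) dt + g(X) dW on a regular grid of n points."""
    x = np.empty(n)
    x[0] = x0
    for k in range(1, n):
        dW = np.sqrt(dt) * rng.standard_normal()   # Brownian increment
        x[k] = x[k-1] + f(x[k-1]) * dt + g(x[k-1]) * dW
    return x

rng = np.random.default_rng(3)
# Example: a mean-reverting process with constant diffusion.
path = euler_maruyama(f=lambda x: -0.5 * x, g=lambda x: 0.2,
                      x0=1.0, dt=0.01, n=1000, rng=rng)
```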
In the main part of my talks, I review and discuss statistical techniques for stochastic differential equations. This will include the following topics:
Water quality has been the major concern in areas where water scarcity is not an issue. Monitoring programs were developed, and when data records were of sufficient length, analysis of the data for detection and estimation of trends was undertaken. Trend detection and estimation methods are reviewed. Methods which model low- and high-frequency variability may also provide useful characterizations of attribute measurements for developing models used in hydrological prediction. The relevance of these methods is also considered for the contrasting case of an over-abundance of measurements obtained when automatic monitoring systems are used.
Determination of regions homogeneous with respect to attribute variables may be accomplished by cluster analysis; an example of this, for the purpose of reducing the number of sampling stations, is considered. Such methods may be useful in determining homogeneous sub-regions of catchments. Several methods are reviewed.
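As a hedged illustration of the station-reduction idea, a minimal clustering sketch; the feature construction, cluster count, and reduction rule are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder attribute vectors: 40 stations described by 5 summary
# statistics of their water quality series (real features would be chosen
# by the analyst).
rng = np.random.default_rng(4)
station_features = rng.normal(size=(40, 5))

# Group stations into homogeneous clusters.
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(station_features)
labels = km.labels_

# One candidate reduction: keep only the station nearest each cluster centroid.
keep = [np.argmin(((station_features - c) ** 2).sum(axis=1))
        for c in km.cluster_centers_]
```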
An established method for addressing structural uncertainty in hydrological models is to incorporate information from several models at once. We describe an approach that goes beyond simple model averaging by using mixture-of-experts models. In this framework the catchment is assumed to exist in one of a finite number of states, with different rainfall-runoff models in each state capturing different dominant physical processes. A statistical model describes uncertainty about the states probabilistically and how this varies according to catchment indicators (predictors). We discuss issues of model comparison, computation using adaptive Monte Carlo methods, and the predictive performance and interpretability of the methods in a number of examples.
The cultures of physical and statistical modellers differ greatly. However, a search for reconciliation has begun, driven by the practical requirements of handling processes over very large space-time domains, and the risks attached to them. In this talk and its sequel, I will explore some of the emerging directions in a near frontier for statistical science.
My first talk derives from the experience I have had with my UBC co-researchers, Nhu Le, Yiping Dou, and Zhong Liu, with much input from Douw Steyn, an atmospheric scientist. We have been examining hourly ground-level ozone concentrations.
Francis Zwiers posed the problem that this talk addresses. He asked how the return values for the annual maxima of precipitation over 312 grid cells covering Canada could be jointly specified. Since precipitation is not measured over much of that surface, the "data" were in fact simulated from a numerical coupled climate model (CCM3). [That model also enables the future to be projected under various assumptions about the parameters, such as CO2 emissions, that determine climate.]
The approach taken to model the field of precipitation extreme values is hierarchical Bayes, since the current state of multivariate extreme value theory does not seem capable of handling fields of such high dimension (312) as that under consideration. Moreover, the approach yields a log-multivariate t model, one that offers the practical advantage of tractability and promise of applicability in other contexts. (In fact, it worked well for modelling air pollution extremes in London, England.) That distribution is generated from a log-Gaussian process with an unknown spatial covariance matrix. To assess the performance in the application suggested by Zwiers, we use
Time permitting, I will also discuss some of the problems associated with the design of a network that could measure extremes.
Joint work with Audrey Fu (University of Washington) and Nhu D. Le (BC Cancer Agency).
ABSTRACT: I will discuss methods for estimation and interpolation of parameters of geophysics-based statistical models for predicting nonstationary and nonhomogeneous space-time marked point processes. Spline functions are used to characterize the evolution and variation of the parameters in time and space, respectively. Since many coefficients of the spline functions are required, I use a penalized log-likelihood with standard roughness penalties for the spline functions to obtain sensible estimates. The penalized log-likelihood is interpreted in a Bayesian framework, and the weights of the penalties are adjusted objectively by maximizing the integrated posterior function. The comparison of priors includes isotropic versus anisotropic roughness penalties. These methods and models have recently been applied to the early forecasting of aftershock probability, where the data are only partially available immediately after the main shock.
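In generic, hedged form, the estimator combines the log-likelihood with weighted roughness penalties (the specific penalties and weights are the author's):

```latex
% Penalized log-likelihood for the spline coefficients theta, with roughness
% penalties R_k and weights w_k tuned by maximizing the integrated posterior:
\ell_{\mathrm{pen}}(\theta) = \log L(\theta) - \sum_k w_k\, R_k(\theta).
```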
ABSTRACT: Seismic quiescence and activation, as precursors to large earthquakes, have attracted much attention among seismologists. Of particular interest is the hypothesis that stress changes transferred from a rupture or silent slip in one region can cause seismicity changes in other regions. However, the clustering feature of earthquakes prevents us from detecting a seismicity change caused by a stress change transferred from another region, because most successive earthquakes are triggered by nearby events in heterogeneous, complex media. Nevertheless, we can use statistical empirical laws as a practical method for predicting earthquake clusters. The objective of this talk is to demonstrate that diagnostic analysis based on fitting the Epidemic Type Aftershock Sequence (ETAS) model to regional seismicity can be helpful in detecting exogenous stress changes there. In particular, the changes due to silent slips on a fault are usually so slight that one can barely recognize systematic anomalies in seismicity without the aid of the ETAS model. The space-time version of this model captures various regional physical characteristics of the crust and is well suited to detecting anomalies in seismic activity.
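For reference, one standard temporal form of the ETAS conditional intensity, against which such anomalies are measured:

```latex
% Background rate mu plus Omori-Utsu aftershock clustering triggered by
% every past event i of magnitude M_i (M_c is the reference magnitude):
\lambda(t \mid \mathcal{H}_t) = \mu + \sum_{i:\, t_i < t}
  \frac{K\, e^{\alpha (M_i - M_c)}}{(t - t_i + c)^{p}},
% anomalies are flagged where observed rates depart from this fitted baseline.
```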
Regional climate model downscaling of USA present climate and future projection: uncertainty and dimension reduction
Mesoscale regional climate model (RCM) integrations driven by the NCEP-DOE AMIP-II reanalysis, the NCAR Parallel Climate Model (PCM), and the Hadley Centre HadCM3 GCMs (fully coupled atmosphere-ocean general circulation models) simulations for the present climate are inter-compared with observations to study the RCM downscaling skill and uncertainty. The comparison indicates that the RCM, with its finer resolution (30-km grid spacing) and more detailed physics, simulates the present U.S. climate with greater accuracy than the driving reanalysis and GCM output, especially for precipitation and surface temperature, including annual and diurnal cycles as well as interannual and daily variability. The RCM downscaling skill, however, is very sensitive to the parameterization of cumulus convection. In particular, the RCM using the Grell versus Kain-Fritsch cumulus schemes produces substantially different downscaling results, depending on climate regime, temporal scale, and driving source. Their ensemble mean with statistically optimized weights captures most of the observed precipitation variations.
Applying data assimilation methods in Delft-FEWS to improve real-time forecasting
Numerical study of wave and submerged breakwater interaction
Dynamical and statistical downscaling of New Zealand climate and linking to impact
Measuring uncertainty in spatial data via Bayesian melding