
Data-driven and Physically-based Models for Characterization of Processes in Hydrology, Hydraulics, Oceanography and Climate Change
(6 - 28 Jan 2008)
Jointly organized with Pacific Institute for Mathematical Sciences, UBC
~ Abstracts ~
Basic equations of open channel flows Sujit K. Bose, S.N. Bose National Centre for Basic Sciences, India
The study of open channel flows is fundamental to hydraulics. Such flows occur in irrigation channels and rivers and are characterized by a variety of complexities, arising from the possible curvature of a channel and from variable cross sections of varying shape. The channel bed may be erodible, consisting of clay, sand, gravel and boulders. In such a scenario suitable mathematical modeling is required for the different phenomena that crop up in practice. To start with, a channel is assumed straight and wide enough for the mean flow to be treated as one dimensional. The theories of such flows in the literature are essentially phenomenological and require some precision. This talk approaches these questions in a mathematically systematic manner. Noting that the bed causes turbulence in the fluid, we proceed from the Navier-Stokes equations and average them over time to study the mean flow and the associated Reynolds stress. The resulting Reynolds-averaged Navier-Stokes (RANS) equations form an underdetermined system, falling short by one equation. For steady, fully developed flow the traditional approach is to invoke Prandtl's mixing length hypothesis, which is based on several assumptions about the flow; this is derived here in a precise manner. Alternatively, a turbulence closure assumption is proposed: that the Reynolds stress contributes to the forward momentum equation far in excess of the viscous stress. It leads to the same logarithmic velocity profile as obtained by the traditional method and observed in actual experiments. The near-bed viscous sublayer and the intermediate layer between the two are treated rigorously to obtain a single fifth-degree expression for the forward velocity in the inner layer close to the bed. The expressions for the corresponding Reynolds stress are obtained rigorously for the two layers combined. The problem of accelerated flows is examined next for a wide channel. The traditional way is to use St. Venant's equations derived from energy considerations. Here we use the depth-averaging technique, which is applicable in principle, to derive a generalized equation. The generalized St. Venant equations are applied to some special problems.
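The logarithmic velocity profile mentioned above can be evaluated directly. The sketch below assumes the standard law-of-the-wall form u(y) = (u*/kappa) ln(y/y0) with the von Karman constant kappa ~ 0.41; the shear velocity and roughness length used in the example are illustrative inputs, not values from the talk.

```python
import math

KAPPA = 0.41  # von Karman constant

def log_law_velocity(y, u_star, y0):
    """Mean streamwise velocity u(y) from the logarithmic law of the wall.

    y      : height above the bed (m), must exceed y0
    u_star : shear (friction) velocity (m/s)
    y0     : roughness length at which u = 0 (m)
    """
    if y <= y0:
        raise ValueError("log law applies only above the roughness length y0")
    return (u_star / KAPPA) * math.log(y / y0)

# Example: u* = 0.05 m/s, y0 = 1 mm, velocities at three heights
profile = [(y, log_law_velocity(y, 0.05, 0.001)) for y in (0.01, 0.1, 0.5)]
```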
Some equations of sediment transport with application to sand waves Sujit K. Bose, S.N. Bose National Centre for Basic Sciences, India
In this talk we consider channel flows over erodible beds. In such flows the finest particles travel with the stream, never coming in contact with the bed; such sediment transport is known as wash load.
On the other hand, bigger sediment particles move along the bed as bed load. Other particles are lifted off and go into suspension in the flowing fluid; this is known as suspended load. In this case the particles occasionally return to the bed, to be lifted off again. Models of bed load transport are described here at some length. Suspended load transport is described adequately by the diffusion-advection model systematically derived here. Some solution models are considered for steady and unsteady states. The theory is applied to the problem of dune and antidune propagation of sand due to the flow of fluid over a sandy bed. For this purpose appropriate generalized St. Venant equations over an undulating bed are developed by the depth-averaging method. The empirical Meyer-Peter formula is used in place of the momentum equation for bed load transport. The depth-averaged form of the diffusion-advection equation for suspended load transport supplies the remaining equation for the problem. The instability of the bed leading to the formation of dunes and antidunes is examined by seeking bounded sinusoidal solutions of the system of equations so developed. The criteria for dune and antidune formation are obtained as the solution of a fourth-degree equation, whose numerical solution leads to results in complete agreement with experiment.
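The empirical Meyer-Peter formula invoked above for bed load is usually written in the Meyer-Peter and Mueller (1948) form q* = 8 (theta - theta_c)^{3/2}, with theta the Shields parameter. A minimal sketch, in which the grain size, specific gravity and critical Shields value are illustrative defaults rather than values from the talk:

```python
import math

def mpm_bedload(theta, theta_c=0.047, s=2.65, g=9.81, d=1e-3):
    """Volumetric bedload transport rate per unit width (m^2/s)
    from the Meyer-Peter & Mueller (1948) formula.

    theta   : dimensionless bed shear stress (Shields parameter)
    theta_c : critical Shields parameter for initiation of motion
    s       : sediment specific gravity
    d       : grain diameter (m)
    """
    excess = theta - theta_c
    if excess <= 0.0:
        return 0.0  # no transport below the threshold of motion
    q_star = 8.0 * excess ** 1.5               # dimensionless transport rate
    return q_star * math.sqrt((s - 1.0) * g * d ** 3)
```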
Physically-based models for the generation, propagation and impact of water waves - part 2 Frédéric Dias, Ecole Normale Supérieure de Cachan, France
Analytical and asymptotic methods still have an essential role to play despite the dominant role played by numerical methods. Mathematical approaches for the study of waves in dispersive systems will be presented. Very recently, new systems of equations for the study of water waves in the presence of viscous dissipation have been derived.
The sloshing of a free liquid inside a closed container leads to impacts on its walls, which can be damaging. More generally, extreme waves lead to various types of impacts. New pressure impact formulas, which show the importance of compressibility effects, will be described.
Physically-based models for the generation, propagation and impact of water waves - part 1 Frédéric Dias, Ecole Normale Supérieure de Cachan, France
The various numerical codes which are used throughout the world to compute the propagation of tsunamis across the oceans must be informed by initial conditions. It is shown that the classical approach, consisting of translating the frozen deformation of the sea bottom towards the free surface and letting it propagate, has some drawbacks.
The generation of freak waves is a vast topic. Here we describe the formation of freak waves by directional focusing. Ocean data, laboratory experiments and numerical experiments will be presented. In particular, during the past ten years, efficient numerical wave tanks, which mimic laboratory wave tanks, have been developed.
Routes of transition to turbulence Hua-Shu Dou, National University of Singapore
Linear stability theory is a classical theory for the analysis of flow instability and is well established in the community. As such, it is widely used for the study and prediction of flow instability and turbulent transition. However, the theory succeeds only for a few problems (Rayleigh-Bénard convection, Taylor-Couette flow) and fails for most others (plane Poiseuille flow, pipe Poiseuille flow, plane Couette flow, and boundary layer flow). Up to now, there are no experimental data showing that linear instability is related to turbulent transition.
Recently, we proposed a new mechanism for flow instability and turbulent transition. In this mechanism, for the first time, a theory derived strictly from physics is proposed to show that flow instability under finite-amplitude disturbance leads to turbulent transition. The proposed theory is named the "energy gradient theory." It is demonstrated that it is the transverse energy gradient that amplifies the disturbance, while the disturbance is damped by the energy loss due to viscosity along the streamline. The threshold of disturbance amplitude obtained scales with the Reynolds number with an exponent of -1, which exactly explains recent experimental results for pipe flow. This result resolves a long-standing controversy pursued through speculation and analysis of parallel flows by many groups. Following from this analysis, it can be demonstrated that the critical value of the so-called energy gradient parameter Kmax is constant at turbulent transition in parallel flows, and this is confirmed by experiments for pipe Poiseuille flow, plane Poiseuille flow, and plane Couette flow. It is also inferred from the proposed theory that the transverse energy gradient can serve as the power for the self-sustaining process of wall-bounded turbulence.
The proposed "energy gradient theory," which physically explains the phenomena of flow instability and turbulent transition in shear flows and has been shown to be valid for parallel flows, is extended to curved flows in this study.
Three important theorems for flow instability in curved shear flows are then deduced: (1) potential flow (inviscid and irrotational) is stable; (2) inviscid rotational (non-zero vorticity) flow is unstable; (3) a velocity profile with an inflection point is unstable when there is no work input to or output from the system, for both inviscid and viscous flows. These theorems are deduced for the first time and are of great significance for understanding the generation of turbulence and for explaining complex flows. It is demonstrated that the existence of an inflection point on the velocity profile is a sufficient condition, but not a necessary condition, for flow instability, for both inviscid and viscous flows. The physical mechanism sustaining a tornado is well explained by the theory. The azimuthal velocity distribution in a tornado can be expressed by a combined Rankine vortex: a forced vortex and a free vortex. The free vortex takes up most of the domain and has a uniform energy field. A disturbance cannot be amplified in such a uniform energy field. Thus, the free vortex is always stable and can last a long time, until the energy it contains is consumed. It is then proved that the classical Rayleigh theorem on inflectional velocity instability, which states that a necessary condition for the instability of inviscid flow is the existence of an inflection point on the velocity profile, is incorrect: it does not account for the three-dimensionality of the disturbance when the amplification rate is larger than zero. It is shown that a disturbance amplified in two-dimensional inviscid flow is necessarily three-dimensional. After the breakdown of Tollmien-Schlichting waves in 2D parallel flows, the disturbance becomes a type of spiral wave proceeding along the streamwise direction.
This spiral propagation of traveling waves in shear flows is the origin of the streamwise vortex and hairpin vortex, as well as of the other events in the transition.
In the following, we discuss the route of transition from laminar flow to turbulence in parallel flows, and clarify the relation of linear instability to turbulent transition. From our findings, turbulent transition in parallel flows can be classified as:
(1) Small disturbance: base flow + Tollmien-Schlichting waves -> linear instability -> spiral waves -> streamwise vortex (3D laminar flow) -> Re increase / disturbance development (nonlinear) -> hairpin vortex -> nonlinear instability -> turbulence.
(2) Large disturbance (bypass): base flow + disturbance -> 3D disturbance -> spiral waves -> streamwise vortex (3D laminar flow) -> Re increase / disturbance development (nonlinear) -> hairpin vortex -> nonlinear instability -> turbulence.
Thus, for small disturbances, transition in parallel flows requires two instabilities: the first is the linear instability, which turns the 2D laminar flow into a 3D laminar flow; the second is the nonlinear instability, which turns the 3D laminar flow into 3D turbulent flow. For large disturbances, transition in parallel flows requires only one instability, the nonlinear instability, which turns the 3D laminar flow into 3D turbulent flow. This consists of the development of the 3D disturbance, the streamwise vortex, the hairpin vortex, and the nonlinear instability.
From the above discussion, it is clear that linear instability has nothing to do with turbulent transition. It only causes secondary flow in laminar flows, leading to the streamwise vortex. Linear instability merely carries the laminar flow from one state to another.
The role of linear instability can be explained as follows: linear instability cannot change the transverse distribution of the total mechanical energy of the mean flow, since a linear disturbance is an infinitesimal disturbance, whereas the generation of turbulence requires a marked change in that distribution. Such a change requires a disturbance of finite amplitude, which is a nonlinear process.
The origin of the streamwise vortex rests on the occurrence of a 3D disturbance and on the vortex merging process. The 3D disturbance can be generated by the instability of a 2D disturbance, or can be input directly as a 3D disturbance.
The following conclusions can be summarized for parallel flows:
(1) Turbulent transition results from 3D nonlinear instability.
(2) Linear instability is not necessary for turbulent transition; linear instability has nothing to do with turbulence if there is no further change of the mean flow.
(3) Before turbulent transition sets in, a streamwise vortex must exist.
(4) Turbulent transition in parallel flows consists of two steps: 2D laminar flow becomes 3D laminar flow; 3D laminar flow becomes 3D turbulence.
Uncertainty quantification using coarse-scale models Yalchin Efendiev, Texas A&M University, USA
With increasing interest in accurate prediction of subsurface properties, subsurface characterization based on dynamic data takes on greater importance. Uncertainties in the detailed description of subsurface porosity and permeability are large contributors to the overall uncertainty in performance forecasting. This uncertainty can be reduced by integrating additional data into subsurface modeling. Integration of data from different sources is a nontrivial task, because different data sources scan different length scales of heterogeneity and can have different degrees of precision.
In this talk, I will describe how coarse-scale models can be used to speed up uncertainty quantification in subsurface flows. The proposed techniques are implemented within Markov chain Monte Carlo methods. Theoretical results will be presented. We will also discuss the use of coarse-scale models in ensemble Kalman filter methods. Numerical results will be presented to demonstrate the efficiency of the proposed methodologies.
Multiscale techniques for porous media flows Yalchin Efendiev, Texas A&M University, USA
In this talk, I will discuss various multiscale simulation techniques for porous media flow and transport. First, I will describe multiscale methods that use local information to coarsen flow and transport properties. The main idea of these approaches is to use basis functions that contain the appropriate small-scale information. The basis functions are coupled via a variational formulation of the global problem. These approaches share some similarities with traditional upscaling methods and can also be used for downscaling flow and transport properties.
Next, I will describe how these multiscale methods can be extended to problems without scale separation. In this case, some limited global information is incorporated into the basis functions in order to represent the large-scale effects accurately. I will discuss the advantages of these approaches and present numerical results demonstrating the importance of incorporating limited global information. Some applications to statistical downscaling will also be discussed.
Models for environmental extremes II Abdel El-Shaarawi, The National Water Research Institute, Canada
The drawback of block maxima theory is the serious loss of information due to basing the analysis on one or very few extreme order statistics. To make use of more of the data, a threshold is established and all values above the threshold are considered in the analysis. This approach is known as the peaks over threshold (POT) method, and this talk will be dedicated to describing its theory and applications. Some issues related to threshold selection, lack of independence and the inclusion of explanatory variables will also be discussed. Applications to water quantity and quality data will be used for illustration.
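As a concrete sketch of the POT idea, the snippet below fits a generalized Pareto distribution (GPD) to threshold exceedances by the method of moments; this is one of several standard estimators (maximum likelihood or probability-weighted moments would typically be preferred in practice). The demo data are deterministic exponential quantiles, for which the true GPD parameters are shape 0 and scale 1.

```python
import math

def fit_gpd_moments(data, threshold):
    """Method-of-moments fit of the generalized Pareto distribution
    to exceedances over a threshold (peaks-over-threshold approach).

    Returns (shape xi, scale sigma, number of exceedances).
    """
    excesses = [x - threshold for x in data if x > threshold]
    n = len(excesses)
    if n < 2:
        raise ValueError("need at least two exceedances")
    m = sum(excesses) / n
    v = sum((e - m) ** 2 for e in excesses) / (n - 1)
    xi = 0.5 * (1.0 - m * m / v)   # moment estimator of the shape
    sigma = m * (1.0 - xi)         # moment estimator of the scale
    return xi, sigma, n

# Demo on deterministic exponential(1) quantiles: the excess distribution
# over any threshold is again exponential(1), i.e. GPD with xi = 0, sigma = 1.
sample = [-math.log(1.0 - (i + 0.5) / 2000.0) for i in range(2000)]
xi, sigma, n = fit_gpd_moments(sample, threshold=0.5)
```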
Models for environmental extremes I Abdel El-Shaarawi, The National Water Research Institute, Canada
Classical extreme value theory will be reviewed with the aim of highlighting its key results and applications to environmental problems. The generalized extreme value (GEV) distribution is frequently used to model a data set that can be viewed as a realization of a sequence of independent and identically distributed random variables (block maxima). Methods of parameter identification, estimation and inference will be discussed. Hydrological and water quality examples will be used to illustrate the application of the theory.
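A standard quantity derived from a fitted GEV is the T-year return level, the level exceeded on average once every T blocks. A minimal sketch using the usual closed-form inversion of the GEV distribution function (the parameter values in the example are placeholders, not fitted estimates):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-block return level of a GEV(mu, sigma, xi) distribution,
    i.e. the (1 - 1/T) quantile of the block-maximum distribution."""
    if T <= 1:
        raise ValueError("return period must exceed one block")
    y = -math.log(1.0 - 1.0 / T)
    if abs(xi) < 1e-9:                         # Gumbel (xi -> 0) limit
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)
```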
Determining homogeneous regions: considerations for water quality management Sylvia Esterby, University of British Columbia, Canada
Determination of regions that are homogeneous with respect to attribute variables may be accomplished by cluster analysis, and an example of this, for the purpose of reducing the number of sampling stations, is considered. Such methods may be useful in determining homogeneous subregions of catchments. Several methods are reviewed.
Trend analysis: considerations for water quality management Sylvia Esterby, University of British Columbia, Canada
Water quality has been the major concern in areas where water scarcity is not an issue. Monitoring programs were developed, and when data records were of sufficient length, analysis of the data for the detection and estimation of trends was undertaken. Trend detection and estimation methods are reviewed. Methods that model low- and high-frequency variability may also provide useful characterizations of attribute measurements in the development of models used for hydrological predictions. The relevance of these methods is also considered for the contrasting case of an overabundance of measurements obtained from automatic monitoring systems.
Measuring uncertainty in spatial data via Bayesian melding Matthew Falk, Queensland University of Technology, Australia
It is expensive and time-consuming to collect enough data for water quality monitoring and measurement of catchment areas. Deterministic simulation models are thus used to understand the environmental processes involved and to guide policy development by decision makers. Making informed decisions requires uncertainty to be incorporated into the modelling. We use a method known as Bayesian melding, which allows prior distributions and likelihoods to be attached to each input and to the output, representing the uncertainty around their true values. This information is combined using Bayes' theorem, and Monte Carlo simulation is used to generate a posterior distribution. The application given in this presentation is to the Revised Universal Soil Loss Equation (RUSLE), which uses spatially variable Geographic Information System images as inputs. The output of the RUSLE is also an image, and the presentation of uncertainty within this image is also discussed.
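The Monte Carlo step can be sketched as importance weighting: draw inputs from their prior, push them through the deterministic model, weight each draw by the likelihood of the observed output, and resample. This is a deliberately simplified illustration: a toy quadratic model stands in for RUSLE, and the full melding of induced and direct output priors in Poole and Raftery's formulation is omitted.

```python
import math
import random

def model(x):
    # toy stand-in for a deterministic simulator such as RUSLE
    return x * x

def melding_posterior(n_draws, obs, obs_sd, seed=1):
    """Importance-sampling sketch: prior on the input, Gaussian likelihood
    on the observed output, weighted resampling approximates the posterior."""
    rng = random.Random(seed)
    draws = [rng.gauss(1.0, 0.5) for _ in range(n_draws)]    # input prior
    weights = [math.exp(-0.5 * ((model(x) - obs) / obs_sd) ** 2)
               for x in draws]
    total = sum(weights)
    probs = [w / total for w in weights]
    # weighted resample -> approximate posterior sample of the input
    return rng.choices(draws, weights=probs, k=n_draws)

# Observed output 4.0 pulls the input posterior toward x ~ 2
posterior = melding_posterior(2000, obs=4.0, obs_sd=0.5)
```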
Korteweg-de Vries equation: applications Roger Grimshaw, Loughborough University, UK
In this talk we consider weakly nonlinear long waves. Here the basic paradigm is the well-known Korteweg-de Vries equation and its solitary wave solution. We present a brief historical discussion, followed by a typical derivation in the context of internal and surface water waves. Then we describe two extensions: first, the variable-coefficient Korteweg-de Vries equation for the description of solitary waves in a variable environment, and second, the forced Korteweg-de Vries equation and the theory of undular bores.
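For reference, the Korteweg-de Vries equation in its standard form, together with its one-parameter family of solitary wave solutions, is usually written as:

```latex
% Korteweg-de Vries equation in standard (normalized) form
u_t + 6\, u\, u_x + u_{xxx} = 0 .
% Solitary wave (soliton) solution with speed c > 0 and amplitude c/2,
% so that taller waves travel faster:
u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left[\frac{\sqrt{c}}{2}\left(x - c\,t - x_0\right)\right].
```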
Internal solitary waves in the atmosphere and ocean Roger Grimshaw, Loughborough University, UK
In the coastal oceans, the interaction of currents (such as the barotropic tide) with topography can generate large-amplitude, horizontally propagating internal solitary waves. These waves often occur in regions where the waveguide properties vary in the direction of propagation. We consider the modelling of these waves by nonlinear evolution equations of Korteweg-de Vries type with variable coefficients, and we describe how these models are used to describe the shoaling of internal solitary waves over the continental shelf and slope. The theories are compared with various numerical simulations.
Although the talks are essentially independent, there will be some small overlap in the material covered.
Towards entwining computing, modeling and analysis David Higdon, Los Alamos National Laboratory, USA
As computing capabilities expand, we're tempted to model more and more detailed and complicated phenomena. Utilizing these next generation models for prediction and scientific discovery will likely require that we go beyond the black box approaches that dominate current analysis methods. Embedding uncertainty quantification (UQ) tools into high performance computing codes (and platforms) has the potential to reduce the overall computational burden while delivering more accurate results for parameter optimization, sensitivity analysis and error estimation for simulation-based predictions. One step in this direction is the augmentation of forward models with adjoint routines to facilitate the search for input settings that best match experimental data. But broader progress in embedding UQ tools within computational models has been slow, because the design of new computing architectures, simulation codes/models and analysis methods/tools is not done in concert. In this talk I'll highlight some potential opportunities at the interface between computing, modeling and analysis.
A framework for uncertainty quantification combining detailed computer simulations and experimental data David Higdon, Los Alamos National Laboratory, USA
Inference regarding complex physical systems (e.g. subsurface aquifers, charged particle accelerators, physics experiments) is typically plagued by a lack of information available from relevant experimental data. What data are available are usually limited and inform only indirectly about the phenomena of interest. However, when the physical system is amenable to computer simulation, these simulations can be combined with experimental observations to give useful information regarding calibration parameters, prediction uncertainty, and model inadequacy. This talk discusses general methodology for carrying out such analyses.
An encompassing framework for carrying out such analyses has the potential to shed light on a number of important issues in simulation-based predictive investigations:
- combining information from multiple experiments;
- uncertainty quantification for simulation-based predictions;
- high-dimensional calibration / baselining of model parameters;
- assessment of the value of various types of experimental data;
- assessment of discrepancy between simulation output and experimental data.
The details of any analysis will likely vary with the application. However, common aspects of any analysis are bound to include the following components:
- selection of input settings over which to carry out simulation runs;
- sensitivity analysis, i.e. understanding which simulation inputs impact the simulation output;
- response surface modeling of the simulation output, i.e. finding parsimonious models in very high dimensional settings;
- constraining the range of possible simulation output with experimental data;
- accounting for systematic discrepancies between the simulation output and experimental data.
This talk will discuss the above aspects focusing on applications relevant to investigations at Los Alamos National Laboratory.
Part II: Filtering and sequential estimation Hans-Rudolf Künsch, ETH Zürich, Switzerland
Physically based models in the environmental sciences are usually formulated as differential equations. If they contain unknown parameters, these can be estimated by nonlinear least squares. However, in practically all applications the deviations between model output and observations show systematic patterns, so the assumption of i.i.d. observation noise is not tenable. In particular, obtaining reliable uncertainty measures for parameter estimates and model predictions, and identifying model deficits, become difficult. To address these problems, people have introduced either a nonparametric bias term in the model, time-varying stochastic inputs and parameters, or a noise term in the differential equation, leading to stochastic differential equations. I will discuss the differences and advantages of these approaches.
In the main part of my talks, I review and discuss statistical techniques for stochastic differential equations. This will include the following topics:
Estimating equations, Monte Carlo maximum likelihood and the Monte Carlo EM algorithm, MAP and MCMC methods, extended Kalman filter, particle filters and ensemble filters.
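Of the filters listed, the particle filter is the easiest to sketch. Below is a minimal bootstrap particle filter for a toy linear-Gaussian state-space model; the model, noise levels and particle count are illustrative choices, not taken from the talks. In this linear case the exact answer is given by the Kalman filter, which makes the sketch easy to check.

```python
import math
import random

def bootstrap_filter(obs, n_particles=500, proc_sd=1.0, obs_sd=1.0, seed=2):
    """Minimal bootstrap particle filter for the toy model
        x_t = 0.9 * x_{t-1} + N(0, proc_sd^2),   y_t = x_t + N(0, obs_sd^2).
    Returns the filtered mean of x_t at each time step."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in obs:
        # propagate each particle through the state equation
        particles = [0.9 * x + rng.gauss(0.0, proc_sd) for x in particles]
        # weight by the Gaussian observation likelihood
        w = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in particles]
        total = sum(w)
        means.append(sum(x * wi for x, wi in zip(particles, w)) / total)
        # multinomial resampling
        particles = rng.choices(particles, weights=w, k=n_particles)
    return means

# Constant observations: the filtered mean settles near its steady state
filtered = bootstrap_filter([5.0] * 20)
```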
Part I: Modeling issues and offline estimation Hans-Rudolf Künsch, ETH Zürich, Switzerland
Physically based models in the environmental sciences are usually formulated as differential equations. If they contain unknown parameters, these can be estimated by nonlinear least squares. However, in practically all applications the deviations between model output and observations show systematic patterns, so the assumption of i.i.d. observation noise is not tenable. In particular, obtaining reliable uncertainty measures for parameter estimates and model predictions, and identifying model deficits, become difficult. To address these problems, people have introduced either a nonparametric bias term in the model, time-varying stochastic inputs and parameters, or a noise term in the differential equation, leading to stochastic differential equations. I will discuss the differences and advantages of these approaches.
In the main part of my talks, I review and discuss statistical techniques for stochastic differential equations. This will include the following topics:
Estimating equations, Monte Carlo maximum likelihood and the Monte Carlo EM algorithm, MAP and MCMC methods, extended Kalman filter, particle filters and ensemble filters.
Regional climate model downscaling of USA present climate and future projection: uncertainty and dimension reduction Xin-Zhong Liang, Illinois State Water Survey, Illinois Department of Natural Resources, and University of Illinois at Urbana-Champaign, USA
Mesoscale regional climate model (RCM) integrations driven by the NCEP-DOE AMIP-II reanalysis and by present-climate simulations of the NCAR Parallel Climate Model (PCM) and the Hadley Centre HadCM3 (fully coupled atmosphere-ocean general circulation models, GCMs) are intercompared with observations to study the RCM downscaling skill and uncertainty. The comparison indicates that the RCM, with its finer resolution (30-km grid spacing) and more detailed physics, simulates the present U.S. climate more accurately than the driving reanalysis and GCM output, especially for precipitation and surface temperature, including annual and diurnal cycles as well as interannual and daily variability. The RCM downscaling skill is, however, very sensitive to the parameterization of cumulus convection. In particular, the RCM using the Grell versus the Kain-Fritsch cumulus scheme produces substantially different downscaling results, depending on climate regime, temporal scale and driving source. Their ensemble mean with statistically optimized weights captures most of the observed precipitation variations. A parallel RCM run driven by the GCMs' future projections is then compared to determine the downscaling impact on local-regional climate change. It is shown that the RCM generates very different patterns of U.S. climate change projections, with more spatially variable precipitation changes and smaller temperature increases than the driving GCMs. The RCM downscaling significantly reduces the driving GCMs' present-climate biases and narrows inter-model differences in representing climate sensitivity, and hence in simulating the present and future climates. Very high spatial pattern correlations of the RCM-minus-GCM differences in precipitation and surface temperature between the present and future climates indicate that major model present-climate biases are systematically propagated into future-climate projections at regional-local scales.
The total impact of the biases on trend projections also depends strongly on region and cannot be removed linearly. The result suggests that the RCM improves projections of future changes, and points to the necessity of developing model fidelity metrics based on localized climate characteristics to constrain the projection likelihood. We seek collaboration with statisticians to reduce the uncertainty and dimensionality of the future climate change projections by GCMs and RCMs.
Dynamical and statistical downscaling of New Zealand climate and linking to impact assessment studies Brett Mullan, National Institute of Water and Atmospheric Research, New Zealand
Climate change scenarios for New Zealand have been developed based on statistical downscaling of a suite of global climate models run for the IPCC A1B emissions scenario. The models used are a subset of those available from the PCMDI multi-model dataset archive, selected on the basis of how well they simulate the current climate. NIWA also runs the UK Met Office Unified Model, both as a weather prediction tool and for climate variability and climate change studies. Dynamical downscaling over New Zealand is carried out with a regional climate model whose boundary conditions are provided by a high-resolution global model (HadAM3P).
Recently, we have been comparing the climate changes derived from the statistically and dynamically downscaled results. This talk will discuss our methodology and our experience to date. Climate data are also being used to drive downstream models that assess impacts on streamflow, droughts, agricultural productivity and land use. Examples from these studies will be given, along with a brief discussion of other NIWA modelling capabilities.
Numerical simulation of shallow flows: 2D approach Pilar Garcia Navarro, University of Zaragoza, Spain
In recent years, finite volume techniques have been successfully applied in hydrology to simulate two-dimensional free surface flows. In river basin modelling, bottom friction and bed irregularities have been shown to influence drastically not only the behaviour of flood waves but also the performance of numerical methods. Recently, two-dimensional numerical models have been developed as tools to design and manage river basin systems. In this work, a finite-volume-based upwind scheme is used to build a simulation model that accounts for differences in bottom level. The discretization is made on triangular unstructured grids, and the source terms of the equations are given a special treatment.
Numerical simulation of shallow flows: 1D approach Pilar Garcia Navarro, University of Zaragoza, Spain
Many problems of river management and civil protection consist of evaluating the maximum water levels and discharges that may be attained at particular locations during an exceptional meteorological event. Another category of events of catastrophic nature, whose effects also fall within the civil protection area, is the almost instantaneous release of a great volume of liquid: the breaking of a man-made dam. In many countries, the determination of the parameters of the wave likely to be produced after the failure of a dam is required by law (Molinaro, 1991; Betamio de Almeida and Bento Franco, 1993), and systematic studies are mandatory. There are works based on scaled physical models of natural valleys, but they represent expensive efforts that are not devoid of difficulties. There is therefore a need to develop adequate numerical models able to reproduce situations caused by the irregularities of a non-prismatic bed. It is also necessary to delineate their applicability, considering the difficulty of developing a model capable of producing solutions of the complete equations despite the irregular character of the river bed.
Mixtures of experts approaches in rainfall-runoff modelling David Nott, National University of Singapore
An established method for addressing structural uncertainty in hydrological models is to incorporate information from several models at once. We describe an approach that goes beyond simple model averaging by using mixture-of-experts models. In this framework the catchment is assumed to exist in one of a finite number of states, with different rainfall-runoff models in each state capturing different dominant physical processes. A statistical model describes uncertainty about the states probabilistically, and how this uncertainty varies with catchment indicators (predictors). We discuss issues of model comparison, computation using adaptive Monte Carlo methods, and the predictive performance and interpretability of the methods in a number of examples.
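The mixture-of-experts idea can be sketched as follows. The two "expert" models, the logistic gate, and all numerical values below are illustrative stand-ins, not the talk's actual rainfall-runoff models:

```python
import numpy as np

def gate(indicator, w0=0.0, w1=1.0):
    """Probability of the catchment being in state 1 (say, a 'wet' regime)
    as a logistic function of a catchment indicator; weights are illustrative."""
    return 1.0 / (1.0 + np.exp(-(w0 + w1 * indicator)))

def expert_dry(rain):
    """Toy runoff model for the dry state (strong losses)."""
    return 0.1 * rain

def expert_wet(rain):
    """Toy runoff model for the wet state (fast response)."""
    return 0.7 * rain

def predict_runoff(rain, indicator):
    """Mixture-of-experts predictive mean: a gate-weighted combination
    of the state-specific expert models."""
    p_wet = gate(indicator)
    return (1 - p_wet) * expert_dry(rain) + p_wet * expert_wet(rain)

q = predict_runoff(rain=10.0, indicator=2.0)
```

The point of the construction is that the gate makes the weighting of the experts depend on the predictors, rather than being a fixed model-averaging weight.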
Standard point-process models for prediction and diagnosis of earthquake activity Yosihiko Ogata, The Institute of Statistical Mathematics, Japan
ABSTRACT: Seismic quiescence and activation, as precursors to large earthquakes, have attracted much attention among seismologists. Of particular interest is the hypothesis that stress changes transferred from a rupture or silent slip in one region can cause seismicity changes in other regions. However, the clustering feature of earthquakes prevents us from detecting a seismicity change caused by stress transferred from another region, because most successive earthquakes are triggered by nearby events in heterogeneous, complex media. Nevertheless, we can use statistical empirical laws as a practical method for predicting earthquake clusters. The objective of this talk is to demonstrate that diagnostic analysis based on fitting the Epidemic Type Aftershock Sequence (ETAS) model to regional seismicity can help detect exogenous stress changes there. In particular, the changes due to silent slips on a fault are usually so slight that one can barely recognize systematic anomalies in seismicity without the aid of the ETAS model. The space-time version of this model reveals various regional physical characteristics of the crust, and can be used effectively to detect anomalies in seismic activity.
CONTENT: Aftershocks, Epidemic Type Aftershock Sequence (ETAS) model, Seismicity anomalies as a sensor of stress change in the crust, Space-time ETAS model, Hierarchical space-time ETAS model, Relationship of the parameters to geophysical characteristics.
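The temporal ETAS model referred to above has a standard conditional intensity (Ogata, 1988): a background rate plus aftershock contributions that decay in time according to the modified Omori law and scale exponentially with magnitude. A minimal sketch with placeholder parameter values:

```python
import numpy as np

def etas_intensity(t, times, mags, mu=0.1, K=0.02, alpha=1.5, c=0.01, p=1.1, M0=3.0):
    """Conditional intensity of the temporal ETAS model:
        lambda(t) = mu + sum over past events i of
                    K * exp(alpha * (M_i - M0)) / (t - t_i + c)**p
    where mu is the background rate and M0 a reference (cutoff) magnitude.
    Parameter values here are placeholders, not fitted estimates."""
    times = np.asarray(times, dtype=float)
    mags = np.asarray(mags, dtype=float)
    past = times < t
    contrib = K * np.exp(alpha * (mags[past] - M0)) / (t - times[past] + c) ** p
    return mu + contrib.sum()

# usage: intensity at t = 10 given three past events (times, magnitudes)
lam = etas_intensity(t=10.0, times=[1.0, 5.0, 9.5], mags=[4.0, 5.5, 4.5])
```

Fitting this model to a catalogue and comparing observed with model-predicted rates is what allows residual (anomaly) analysis of the kind described in the abstract.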
Modeling of heterogeneous datasets Yosihiko Ogata, The Institute of Statistical Mathematics, Japan
ABSTRACT: I will discuss methods for estimating and interpolating the parameters of geophysics-based statistical models for predicting non-stationary and non-homogeneous space-time marked point processes. Spline functions are used to characterize the evolution and variation of the parameters in time and space, respectively. Since many spline coefficients are required, I use the penalized log-likelihood, with standard roughness penalties for the spline functions, to obtain sensible estimates. The penalized log-likelihood is interpreted in a Bayesian framework, and the weights of the penalties are adjusted objectively by maximizing the integrated posterior function. The comparison of priors includes isotropic versus anisotropic roughness penalties. These methods and models have recently been applied to the early forecasting of aftershock probability, where the data are only partially available immediately after the main shock.
CONTENT: Bayesian analysis of biases, Non-stationary and anisotropic statistical models with non-uniform detection rates in time and space, Early forecasting of aftershock probability.
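Schematically, the estimation approach described above maximizes a penalized log-likelihood of the generic form (the symbols here are chosen to match the description, not the talk's exact notation):

```latex
\ell_{\mathrm{pen}}(\theta \mid \mathbf{w}) \;=\; \ell(\theta) \;-\; \sum_{k} w_k\, R_k(\theta),
```

where \(\ell(\theta)\) is the log-likelihood of the marked point process with spline-parameterized parameters \(\theta\), the \(R_k\) are roughness penalties on the spline functions, and the penalty weights \(w_k\) are adjusted objectively by maximizing the integrated posterior. The Bayesian interpretation comes from reading \(\exp(\ell_{\mathrm{pen}})\) as proportional to a posterior density under the prior \(\pi(\theta \mid \mathbf{w}) \propto \exp(-\sum_k w_k R_k(\theta))\).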
Numerical study of wave and submerged breakwater interaction Dang Hieu Phung, Institute of Meteorology, Hydrology and Environment, Vietnam
Presented is a study on the numerical modeling of the highly nonlinear interaction between waves and structures, using the 2D Navier-Stokes equations together with the volume-of-fluid (VOF) method. The focus is placed on the characteristics of porous submerged breakwaters (porosity, reflection and dissipation coefficients), which dissipate wave energy effectively.
Analyzing input and structural uncertainty of a hydrological model with stochastic, time-dependent parameters Peter Reichert, Swiss Federal Institute of Aquatic Science and Technology (Eawag), Switzerland
A recently developed technique for identifying continuous-time, time-dependent, stochastic parameters of dynamic models is applied within a systematic framework for identifying the causes of bias in the results of a simple hydrological model. Model parameters are sequentially replaced by stochastic processes, and the degree of bias reduction in the model output is analyzed for each parameter. In a next step, the identified time dependences of all parameters are analyzed for dependences on external influence factors and model states. If significant relationships between time-dependent parameters and influence factors or states are found, the deterministic model must be improved. Otherwise, or after improving the deterministic model in a first step, the description of uncertainty in model predictions can be improved by replacing selected model parameters by stochastic processes. The application of this framework to a simple 8-parameter conceptual hydrological model demonstrates its capabilities. Different time-dependent parameters (including additional parameters for input and output modification) have significantly different potential for bias reduction. The degree of achievable bias reduction identifies the soil and runoff submodel as the one with the highest potential for improvement. Attempts to reduce the deficits of the deterministic runoff model lead to a considerable improvement of the fit, particularly for outliers during strong storm events. After improving this submodel, the dominant fraction of the remaining bias is attributed to random input uncertainty of rainfall and is described by a stochastic input modification factor.
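One standard way to turn a constant model parameter into a continuous-time stochastic process, as described above, is to let it follow a mean-reverting (Ornstein-Uhlenbeck) process; the specific choice of process and all numerical values below are illustrative:

```python
import numpy as np

def simulate_ou(theta0, mean, tau, sigma, dt, n, seed=0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process,
    a common choice for a continuous-time, time-dependent model parameter
    that fluctuates around a mean value:
        d(theta) = -(theta - mean)/tau * dt + sigma * dW
    tau is the correlation time, sigma the noise intensity (illustrative)."""
    rng = np.random.default_rng(seed)
    theta = np.empty(n)
    theta[0] = theta0
    for i in range(1, n):
        drift = -(theta[i - 1] - mean) / tau
        theta[i] = theta[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return theta

# usage: a parameter path fluctuating around its nominal value 1.0
path = simulate_ou(theta0=1.0, mean=1.0, tau=5.0, sigma=0.2, dt=0.1, n=500)
```

In the framework of the abstract, such a path would replace a constant parameter, and its inferred time course would then be inspected for correlations with external influence factors and model states.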
Physically-based emulation of dynamic models - concept and application in hydrology Peter Reichert, Swiss Federal Institute of Aquatic Science and Technology (Eawag), Switzerland
Emulators provide computationally efficient interpolation between the outputs of simulator runs available at design points in input space. They are therefore important tools for making systems-analytical techniques that require many model evaluations, such as optimization, sensitivity analysis, or statistical inference, available for computationally demanding simulation models. So far, the dominant tool for developing such emulators has been a prior in the form of a Gaussian process conditioned on the design data set. Such emulators do not exploit our knowledge of the structure of the simulation model, and they run into numerical difficulties when there is a large number of closely spaced input points, as is usually the case in the time dimension of dynamic models. To address these problems, a new concept for developing emulators of dynamic models is proposed. It is based on a prior that combines a simplified linear state-space model of the temporal evolution of the simulation model with Gaussian processes in the other input dimensions. Conditioning this prior on the design data set is done by Kalman smoothing. The feasibility of the approach is demonstrated by application to a simple hydrological model.
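For contrast with the proposed state-space approach, the "classical" Gaussian-process emulator mentioned above conditions a GP prior on the simulator outputs at the design points. A minimal one-dimensional sketch (the kernel choice, hyperparameters, and toy "simulator" are illustrative):

```python
import numpy as np

def gp_emulator(x_design, y_design, x_new, ell=1.0, sf=1.0, noise=1e-8):
    """Condition a zero-mean Gaussian-process prior (squared-exponential
    kernel) on simulator runs at design points; returns the predictive
    mean at new inputs. Kernel and hyperparameters are illustrative."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)
    K = k(x_design, x_design) + noise * np.eye(len(x_design))  # jitter for stability
    return k(x_new, x_design) @ np.linalg.solve(K, y_design)

# usage: emulate a toy "simulator" (here just sin) from four design runs
xd = np.array([0.0, 1.0, 2.0, 3.0])
yd = np.sin(xd)                          # pretend simulator outputs
mean = gp_emulator(xd, yd, np.array([1.5]))
```

The numerical difficulty alluded to in the abstract appears when `x_design` contains many closely spaced points (as in the time dimension of a dynamic model): the covariance matrix `K` becomes nearly singular.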
The numerical simulation of hydrodynamic free surface flows Guus Stelling, Delft University of Technology, The Netherlands
Contains:
Hydrodynamic pressure correction; 1D, 2DH, 2DV and 3D equations
Undulating bores, wave run-up, surf beat
The numerical simulation of hydrostatic free surface flows Guus Stelling, Delft University of Technology, The Netherlands
Contains:
The hydrostatic pressure assumption; 1D, 2D and 3D equations
Coastal flows, estuarine flows, river flows
Rapidly varying flows, flooding and drying, tsunami run-up
Applying data assimilation methods in DelftFEWS to improve real-time forecasting Albrecht Weerts, WL | Delft Hydraulics, The Netherlands
Real-time water level and discharge (flood) forecasting makes use of interlinked hydrological and hydrodynamic models that are embedded in a data-management environment (such as DelftFEWS). The model chains are run in two principal operational modes: (i) historical mode and (ii) forecast mode. In the first mode, the models are forced by meteorological observations over a limited time period prior to the onset of the forecast. In the second mode, the models are forced by quantitative precipitation and temperature forecasts, whereby the internal model states at the end of the historical run are taken as initial conditions for the forecast run.
Data assimilation is a key element of real-time (flood) forecasting, and most forecasting systems apply some form of it. For instance, error correction is a very effective and simple method to improve the forecast, and it is available in DelftFEWS.
A more sophisticated way of improving flow and water level forecasts is to update the state of the hydrological or hydraulic models through sequential data assimilation. In sequential data assimilation, the prior probability density function (pdf) of the model state is forecast; this prior estimate is subsequently updated using the available measurements, resulting in the posterior pdf of the model state, which can be used during the next forecasting step(s).
Operational sequential data assimilation, such as Ensemble Kalman Filtering (EnKF) and Residual Resampling Filtering (RRF), is becoming more and more feasible through enhanced computing power and the availability of generic data assimilation modules/packages. The main advantage of state updating (filtering) over error correction is that model and data uncertainties can be taken into account explicitly. However, specifying the model and data uncertainties is also the main difficulty in implementing such a filter, because these uncertainties are generally poorly known.
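The analysis step of a stochastic EnKF, in which the forecast ensemble is updated with perturbed observations using a gain estimated from the ensemble itself, can be sketched as follows (all dimensions and values are illustrative, not those of an operational system):

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_var, seed=0):
    """Stochastic ensemble Kalman filter analysis step.
    ensemble: (n_state, n_members) forecast ensemble;
    obs: observed value(s); H: linear observation operator matrix;
    obs_var: observation error variance (assumed known).
    Covariances and the gain are estimated from the ensemble itself."""
    rng = np.random.default_rng(seed)
    n_state, n_mem = ensemble.shape
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    Pf = A @ A.T / (n_mem - 1)                      # sample forecast covariance
    S = H @ Pf @ H.T + obs_var * np.eye(H.shape[0])  # innovation covariance
    K = Pf @ H.T @ np.linalg.inv(S)                  # Kalman gain
    # each member is updated with its own perturbed observation
    perturbed = obs[:, None] + np.sqrt(obs_var) * rng.standard_normal((H.shape[0], n_mem))
    return ensemble + K @ (perturbed - H @ ensemble)

# usage: 20-member ensemble of a 2-state model, observing only state 0
ens = np.random.default_rng(1).normal([[1.0], [0.5]], 0.3, size=(2, 20))
post = enkf_update(ens, obs=np.array([1.4]), H=np.array([[1.0, 0.0]]), obs_var=0.01)
```

Note that the observation error variance `obs_var` and the forecast spread encode exactly the model and data uncertainties whose specification the abstract identifies as the main practical difficulty.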
After an introduction to DelftFEWS and DATools (a generic data assimilation package), examples will show how the available data assimilation methods can improve water level and discharge forecasts. The examples cover both error correction (applied to tributaries of the Rhine) and sequential data assimilation (EnKF applied to a SOBEK hydrodynamic model of the Rhine).
Modeling annual precipitation outputs from a deterministic model: the problem of extremes Jim Zidek, University of British Columbia, Canada
Francis Zwiers posed the problem that this talk addresses. He asked how the return values for the annual maxima of precipitation over 312 grid cells covering Canada could be jointly specified. Since precipitation is not measured over much of that surface, the "data" were in fact simulated from a numerical coupled climate model (CCM3). [That model also enables the future to be projected under various assumptions about the parameters, such as CO2 emissions, that determine climate.]
The approach taken to model the field of precipitation extreme values is hierarchical Bayes, since the current state of multivariate extreme value theory does not seem capable of handling a field of such high dimension (312) as the one under consideration. Moreover, the approach yields a log-multivariate t model, which offers the practical advantages of tractability and promise of applicability in other contexts. (In fact, it worked well for modelling air pollution extremes in London, England.) That distribution is generated from a log-Gaussian process with an unknown spatial covariance matrix. To assess performance in the application suggested by Zwiers, we use cross-validation to find the model's predictive credibility ellipsoids and confirm that their coverage fractions are close to their nominal credibility levels. Return values are estimated from the cell marginal distributions. The result is a model that can predict not only the number of cells that sustain, say, a 100-year rain, but also the uncertainty associated with that prediction.
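For a single cell's marginal distribution of annual maxima, a T-year return value is the 1 - 1/T quantile of the fitted extreme-value distribution. A generic GEV-based sketch with purely illustrative parameters (the talk's marginals derive from the hierarchical log-multivariate t model, not from a fitted GEV):

```python
import numpy as np

def gev_return_level(T, loc, scale, shape):
    """T-year return level of a GEV(loc, scale, shape) distribution of
    annual maxima: the level exceeded once in T years on average.
    Uses the standard quantile formula for shape != 0:
        z_T = loc + (scale/shape) * ((-log(1 - 1/T))**(-shape) - 1)
    Parameter values in the usage line below are purely illustrative."""
    y = -np.log(1.0 - 1.0 / T)
    return loc + scale / shape * (y ** (-shape) - 1.0)

# usage: 100-year return level for an illustrative GEV fit
z100 = gev_return_level(T=100, loc=50.0, scale=10.0, shape=0.1)
```

Return levels computed cell by cell in this way are marginal summaries; the point of the joint model in the talk is to quantify, in addition, how many cells exceed such a level simultaneously and with what uncertainty.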
Time permitting, I will also discuss some of the problems associated with the design of a network that could measure extremes.
Joint work with Audrey Fu, University of Washington, and Nhu D. Le, BC Cancer Agency.
Reconciling physical & statistical approaches to modelling Jim Zidek, University of British Columbia, Canada
The cultures of physical and statistical modellers differ greatly. However, a search for reconciliation has begun, driven by the practical requirements of handling processes over very large space-time domains, and by the risks attached to them. In this talk and its sequel, I will explore some emerging directions at a near frontier for statistical science.
My first talk derives from the experience I have had with my UBC co-researchers, Nhu Le, Yiping Dou, and Zhong Liu, with much input from Douw Steyn, an atmospheric scientist. We have been examining hourly ground-level ozone concentrations over a very large part of the eastern USA. In particular, we have been investigating how to reconcile simulated data from MAQSIP, a very large deterministic chemical transport model for that field, with data from about 300 monitoring sites, produced over about 120 days in a single summer. I will describe our approaches and some of the results. However, much of the discussion will be devoted to more fundamental issues arising from the differences between these two cultures. Although the context of our work is air pollution, I believe the lessons learned will be of value in other contexts as well.

