Spring 2009

Exact Filtering of Measurement Errors in Dynamical Systems

Chris Wingard

Department of Mathematics & Statistics

University of Nevada Reno

Tuesday, May 5, Location TBA at 2:30 pm

Abstract: Measurements of values in dynamical systems are often incomplete or accompanied by error. Incomplete measurements occur when not all of the relevant quantities in the system are measured. We provide a technique that recovers the full state of a dynamical system from an incomplete measurement. One type of error we consider is constant shifting, caused by systematic bias in the measuring device. Two other types of measuring-device errors are amplification and attenuation, both of which result in a constant multiplicative rescaling of the measurement. Lastly, we consider periodic noise, which occurs when a measured signal is obscured by another signal. For each kind of error, we provide a filtering method that recovers the full state of the dynamical system.
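As a concrete illustration, the sketch below (Python, with made-up signal and parameter values) shows only how the three error types corrupt a measurement, not the recovery technique presented in the talk:

```python
import numpy as np

t = np.linspace(0.0, 4.0 * np.pi, 2001)
x = np.sin(t)                        # stand-in for the true measured quantity

bias = 0.7                           # hypothetical systematic offset
gain = 1.3                           # > 1: amplification, < 1: attenuation
hum  = 0.2 * np.sin(15.0 * t)        # obscuring periodic signal

y_shifted  = x + bias                # constant shifting
y_rescaled = gain * x                # amplification / attenuation
y_noisy    = x + hum                 # periodic noise

# Naive de-biasing, assuming the true signal averages to (roughly) zero over
# long times -- an illustration only, not the filtering method of the talk.
bias_estimate = y_shifted.mean()
x_debiased = y_shifted - bias_estimate
print(abs(bias_estimate - bias))     # close to 0 for this toy signal
```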


Some bacterial growth models with randomness

Benito Chen

Department of Mathematics, University of Texas at Arlington

Friday, April 24, AB 102, 2:30 pm

Abstract: In mathematical modeling of population growth, and in particular of bacterial growth, parameters are either measured directly or determined by curve fitting. These parameters have large variability that depends on the experimental method and its inherent error, on differences in the actual population sample size used, and on other factors that are difficult to account for. In this work the parameters that appear in the Monod kinetics growth model are considered random variables with specified distributions. A stochastic spectral representation of the parameters is used, together with the polynomial chaos method, to obtain a system of differential equations, which is integrated numerically to obtain the evolution of the mean and higher-order moments with respect to time.
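As a rough illustration of the idea (plain Monte Carlo sampling rather than the polynomial chaos machinery of the talk, with hypothetical parameter distributions and values), uncertainty can be propagated through the Monod growth equations as follows:

```python
import numpy as np
from scipy.integrate import solve_ivp

def monod_rhs(t, y, mu_max, Ks, Y):
    """Monod kinetics: biomass b grows at rate mu_max*s/(Ks+s), consuming substrate s."""
    b, s = y
    growth = mu_max * s / (Ks + s) * b
    return [growth, -growth / Y]

rng = np.random.default_rng(0)
n_samples = 2000
t_eval = np.linspace(0.0, 10.0, 101)
biomass = np.empty((n_samples, t_eval.size))

for i in range(n_samples):
    mu_max = rng.lognormal(mean=np.log(0.5), sigma=0.2)   # hypothetical distribution
    Ks = rng.lognormal(mean=np.log(2.0), sigma=0.2)       # hypothetical distribution
    sol = solve_ivp(monod_rhs, (0.0, 10.0), [0.05, 10.0],
                    args=(mu_max, Ks, 0.5), t_eval=t_eval)
    biomass[i] = sol.y[0]

mean_b = biomass.mean(axis=0)     # evolution of the mean in time
var_b = biomass.var(axis=0)       # second central moment
```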


The existence of outer automorphisms of the Calkin algebra is undecidable in ZFC

N. Christopher Phillips

Department of Mathematics, University of Oregon

Friday, April 24, AB 110, at 1:00 pm

Abstract: Let H be a separable infinite dimensional Hilbert space. The Calkin algebra is the quotient of the algebra of all bounded operators on H by the ideal of compact operators on H. In 1977, in connection with extension theory for C*-algebras, the question of the existence of outer automorphisms of the Calkin algebra was raised. It turns out that this question is undecidable in ZFC (Zermelo-Fraenkel set theory plus the axiom of choice). Assuming the Continuum Hypothesis, joint work with Nik Weaver shows that outer automorphisms exist. Ilijas Farah has recently shown that it is consistent with ZFC that all automorphisms of the Calkin algebra are inner.
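For readers unfamiliar with the terminology, the standard definitions (background only, not part of the new results) are: writing B(H) for the bounded operators and K(H) for the compact operators on H,

```latex
Q(H) \;=\; B(H)/K(H), \qquad
\alpha \in \operatorname{Aut}\bigl(Q(H)\bigr) \text{ is inner}
\iff \alpha(x) = u\,x\,u^{*} \text{ for some unitary } u \in Q(H),
```

and an automorphism is outer if it is not inner.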

This talk will primarily concentrate on the result with Weaver. It is intended for a general audience.


hp-Adaptive Finite Elements for the Schroedinger Equation

William Mitchell

National Institute of Standards and Technology

Thursday, April 23, AB 102, 2:30 pm

Abstract: Recently the hp-version of the finite element method for solving partial differential equations has received increasing attention. This is an adaptive finite element approach in which adaptivity occurs both in the size, h, of the elements (spatial or h adaptivity) and in the order, p, of the approximating piecewise polynomials (order or p adaptivity). The objective is to determine a distribution of h and p that minimizes the error using the least amount of work in some measure. The main attraction of hp adaptivity is that, in theory, the discretization error can decrease exponentially with the number of degrees of freedom, n, even in the presence of singularities, and this rate of convergence has been observed in practice. We apply adaptive finite element methods to a Schroedinger equation that models the interaction of two trapped atoms. Ultra-cold atoms can be held in the cells of an optical trap. The barriers between the cells can be lowered to allow the atoms to interact, causing entanglement and providing one possible realization of a quantum gate for quantum computers. We present some preliminary computations with this model.
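For context, a standard estimate from the hp-FEM literature (stated here as background, not as a result of the talk): in two space dimensions, with piecewise-analytic data and a suitably graded hp distribution, the energy-norm error satisfies

```latex
\| u - u_{hp} \|_{E} \;\le\; C \exp\!\bigl( -b\, n^{1/3} \bigr),
\qquad C, b > 0 \text{ independent of } n,
```

which is the exponential rate in the number of degrees of freedom n referred to above; pure h-refinement at fixed order p yields only an algebraic rate.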


Smooth structures on 4-manifolds

Ron Fintushel

Department of Mathematics, Michigan State University

Thursday, April 16, AB 106, 2:30 pm

Abstract: I will talk about some of the basic problems in 4-manifold theory and approaches for trying to solve them. In the past few years the subject has had some major advances. I'll describe the history that led to these new results and show some ways to recreate them. This talk will be suitable for a general mathematical audience.


MOOSE: A Parallel Solution Framework for Complex Multiscale Multiphysics Applications

Glen Hansen

Multiphysics Methods Group, Idaho National Laboratory

Wednesday, April 15, AB 106, 4:00 pm

Abstract: The Multiphysics Methods Group is developing a software framework called MOOSE (Multiphysics Object Oriented Simulation Environment). MOOSE is based on a physics-based preconditioned Jacobian-free Newton-Krylov (JFNK) approach to support rapid application development for engineering analysis and design. The framework is designed for the tightly coupled solution of finite element problems, and provides a finite element library, input and output capabilities, mesh adaptation, and a set of parallel nonlinear solution methods. The JFNK abstraction results in a clean architecture for implementing a variety of multiphysics and multiscale problems.
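The core of the Jacobian-free idea is that a Krylov solver only needs Jacobian-vector products, which can be approximated by differencing the residual. Below is a minimal sketch using SciPy's newton_krylov on a toy two-equation system (not MOOSE's C++ API, and without the physics-based preconditioning mentioned above):

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Toy coupled nonlinear system F(u) = 0 standing in for a multiphysics residual."""
    x, y = u
    return np.array([x**2 + y**2 - 4.0,
                     np.exp(x) + y - 1.0])

# newton_krylov never assembles the Jacobian J; inside the Krylov iteration it
# approximates J(u) v by the finite difference (F(u + eps*v) - F(u)) / eps.
u0 = np.array([1.0, -1.0])
u = newton_krylov(residual, u0, f_tol=1e-10)
print(u, residual(u))
```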

This talk begins with an overview of the architecture of MOOSE and a presentation of the JFNK solution method. Two representative examples are considered in detail: BISON, a nuclear fuel performance application, and PRONGHORN, a pebble-bed nuclear reactor simulation code. BISON is a quasi-steady and transient application that currently couples models for fuel thermomechanics, oxygen diffusion, and fission product swelling. Further, BISON incorporates a mesoscale phase-field simulation for the calculation of fuel thermal conductivity in a nonlinearly consistent manner. PRONGHORN couples a neutron diffusion solution to a porous media flow model to simulate the behavior of a pebble-bed reactor.


Unilateral small deviations of self-similar Gaussian processes

George Molchan

Russian Academy of Science, Moscow

Friday, April 10, AB-209 at 1:00 pm

Abstract: Let x(s), with x(0) = E[x(s)] = 0, be a real-valued Gaussian self-similar random process with Hurst parameter H and d-dimensional time. We consider the asymptotic behavior of the probability p(T) that x(s) does not exceed a fixed positive level in a star-shaped expanding domain TG as T goes to infinity; here G is a fixed domain containing 0. Typically p(T) has the unusual asymptotics

log p(T) = -θ (log T)^D (1 + o(1)),   T >> 1,

and the problem reduces to the following questions: existence, estimation, and explicit values of (θ, D).

We give a complete solution of the problem for fractional Brownian motion (FBM). In this case D = 1, and θ = 1-H if G = [0,1], or θ = d if G = {s : |s| < 1}. We prove the Li and Shao hypothesis on the existence of θ for the fractional Brownian sheet in the case d = D = 2 and G = [0,1]x[0,1]. We discuss the hypothesis that, for integrated FBM, one has D = 1 and θ = H(1-H) if G = [0,1], and θ = 1-H if G = [-1,1]. This hypothesis is important for the analysis of the 1-d inviscid Burgers equation with random initial data. The proof is known for H = 1/2 only. Methods for analyzing the problem are presented as well.
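For reference, the fractional Brownian motion above can be specified by its standard covariance (this is background assumed here, not a statement from the talk):

```latex
\mathbb{E}\bigl[ x(s)\, x(t) \bigr]
  \;=\; \tfrac{1}{2}\Bigl( |s|^{2H} + |t|^{2H} - |s-t|^{2H} \Bigr),
\qquad s, t \in \mathbb{R}^{d}, \quad 0 < H < 1,
```

so that x is H-self-similar with stationary increments.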


Algorithms in Compressed Sensing

Deanna Needell

Department of Mathematics, UC Davis

Thursday, April 9, AB-106 at 2:30 pm

Abstract: Compressed sensing is a new and fast-growing field of applied mathematics that addresses the shortcomings of conventional signal compression. Given a signal with few nonzero coordinates relative to its dimension, compressed sensing seeks to reconstruct the signal from few nonadaptive linear measurements. As work in this area developed, two major approaches to the problem emerged, each with its own set of advantages and disadvantages. The first approach, L1-Minimization, provided strong results but lacked the speed of the second, the greedy approach. The greedy approach, while providing a fast runtime, lacked stability and uniform guarantees. This gap between the approaches has led researchers to seek an algorithm that could provide the benefits of both. Recently, we bridged this gap and provided a breakthrough algorithm called Regularized Orthogonal Matching Pursuit (ROMP). ROMP is the first algorithm to provide stability and uniform guarantees similar to those of L1-Minimization, while providing the speed of a greedy approach. After analyzing these results, we developed the algorithm Compressive Sampling Matching Pursuit (CoSaMP), which improved upon the guarantees of ROMP. CoSaMP is the first algorithm to have provably optimal guarantees in every important aspect. This talk will provide an introduction to the area of compressed sensing and a discussion of these two recent developments.
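To make the "greedy approach" concrete, here is a minimal sketch of plain Orthogonal Matching Pursuit, the simpler predecessor on which ROMP and CoSaMP build (this is not ROMP or CoSaMP itself; the problem sizes are arbitrary):

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: repeatedly pick the column of A most
    correlated with the current residual, then least-squares fit on that support."""
    m, n = A.shape
    support, x = [], np.zeros(n)
    residual = y.copy()
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coeffs
        residual = y - A @ x
    return x

# Usage: recover a 5-sparse signal in dimension 200 from 60 random measurements.
rng = np.random.default_rng(1)
n, m, s = 200, 60, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = A @ x_true
x_hat = omp(A, y, s)
print(np.linalg.norm(x_hat - x_true))
```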


Covering links and the slicing of Bing doubles

Cornelia Van Cott

Department of Mathematics, University of San Francisco

Thursday, February 26, AB-102 at 2:30 pm

Abstract: A link is slice if its components bound disjoint smooth disks in B^4. Showing that links are (or are not) slice is a difficult problem with a long history of deep results. In this talk, we will overview the history and motivation behind the study of slice links. Then we will focus on a particular class of links: iterated Bing doubles. We will see that many of the classical tools for showing links are slice break down for Bing doubles, but new results involving branched covers of S^3 have yielded progress.


Predicting Earthquake Shaking in Complex 3D Geology

John Louie, Professor

Geophysics, College of Science, University of Nevada, Reno

Thursday, February 19, AB-102 at 2:30 pm

Abstract: Predicting the strength of ground shaking caused by an earthquake scenario is a task that depends on complex but fortunately mostly linear phenomena at the earthquake source, along the path between the earthquake and the urban area, and within the urban area. While Nevada structural geologists work on identifying possible earthquake sources and their likelihood, my work has involved assessing the effects of the many geological basins that pock the Nevada landscape, and evaluating near-surface seismic properties in the urban areas. Two essential tools for modeling earthquake scenarios are a Community Seismic Velocity Model, which assembles geological, geophysical, and geotechnical knowledge into 3D grids suitable for seismic computations, and a viscoelastic wave-propagation code implemented on a computing cluster.

Developing and adapting such tools for use in Nevada, I have examined wave-propagation phenomena occurring in likely earthquake scenarios. Nonlinear effects may have minimal influence, though current efforts are assessing whether soil weakening that accompanies strong earthquake shaking may be computed in a fully separated manner. In realistic models of the Nevada crust, the largest shaking amplitudes are carried by Rayleigh surface waves, and are thus fundamentally affected by basin-edge geometry and shallow geotechnical properties. Earthquake-rupture directivity has a very strong effect on shaking predictions, adding uncertainty to hazard assessments. The many basins that are present between Nevada urban areas and potential earthquake source zones diffract and spread earthquake-source effects. The expected strong correlation between shaking amplitude and total basin depth can appear, but basin depths are not predictive of the areas of strongest shaking within basins. To date, our models of shaking from the Feb. 21, 2008 earthquake in Wells can be validated against some simple aspects of the data recorded from that event. Attempts to validate our models of the April 25, 2008 Mogul event have so far been completely unsuccessful.

Additional information and downloadable wave-propagation movies for computers and cell phones are available at www.seismo.unr.edu/ma.


A short excursion into Math, Philosophy and Logic

Henrik Nordmark

Institute for Logic, Language and Computation, Universiteit van Amsterdam, Netherlands

Wednesday, January 28, EJCH 108H at 4:00 pm

Abstract: In empirical sciences such as physics, biology or psychology, truth can be established by testing theories against reality. We conduct experiments which either refute or confirm our hypotheses. However, in mathematics truth is traditionally established via logical deductions from axioms, which we simply assume to be true. Now, if these axiom systems are supposed to describe some sort of mathematical reality in the same way that scientific theories describe physical reality, how is it that we know anything about this intangible mathematical reality in the first place? On the other hand, if we do not wish to commit ourselves to a mysterious platonic universe, then what is mathematics actually about? And why does it seem to be so useful and pervasive in science, engineering & finance? This talk is essentially an overview of some of the problems that arise in philosophy of mathematics and some of the attempts that have been made to answer these questions by different philosophical camps. No prior knowledge of philosophy of mathematics is presumed as I shall build up everything more or less from scratch. Some prior familiarity with set theory and symbolic logic is useful but not indispensable.

Henrik Nordmark is a graduate student at the Institute for Logic, Language and Computation at the Universiteit van Amsterdam in the Netherlands. As an undergraduate, Henrik studied Mathematics, Psychology and Philosophy at the University of Nevada, Reno.