This detailed introduction to distribution theory is designed as a text for the probability portion of the first-year statistical theory sequence for Master's and PhD students in statistics, biostatistics, and related fields.
The estimation of noisily observed states from a sequence of data has traditionally incorporated ideas from Hilbert spaces and calculus-based probability theory. As conditional expectation is the key concept, the correct setting for filtering theory is that of a probability space. Graduate engineers, mathematicians and those working in quantitative finance wishing to use filtering techniques will find in the first half of this book an accessible introduction to measure theory, stochastic calculus, and stochastic processes, with particular emphasis on martingales and Brownian motion. Exercises are included. The book then provides an excellent users' guide to filtering: basic theory is followed by a thorough treatment of Kalman filtering, including recent results which extend the Kalman filter to provide parameter estimates. These ideas are then applied to problems arising in finance, genetics and population modelling in three separate chapters, making this a comprehensive resource for practitioners and researchers alike.
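To give a flavour of the filtering recursion the blurb refers to, here is a minimal one-dimensional predict/update sketch of the Kalman filter. The model coefficients and noise variances (a, h, q, r) are illustrative placeholders, not the book's notation:

```python
import numpy as np

def kalman_1d(zs, a=1.0, h=1.0, q=1e-4, r=0.5, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: state x_t = a*x_{t-1} + noise(q),
    observation z_t = h*x_t + noise(r). Returns filtered state estimates."""
    x, p = x0, p0
    estimates = []
    for z in zs:
        # Predict: propagate the mean and variance through the state model.
        x, p = a * x, a * p * a + q
        # Update: blend prediction with the observation via the Kalman gain.
        k = p * h / (h * p * h + r)
        x = x + k * (z - h * x)
        p = (1.0 - k * h) * p
        estimates.append(x)
    return np.array(estimates)

# Example: recover a constant state observed in heavy noise.
rng = np.random.default_rng(0)
zs = 2.0 + rng.normal(0.0, np.sqrt(0.5), size=200)
print(kalman_1d(zs)[-1])  # close to 2.0
```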
Point-to-point versus hub-and-spoke: questions of network design are real and involve many billions of dollars. Yet little is known about optimising design - nearly all work concerns optimising flow assuming a given design. This foundational book tackles optimisation of network structure itself, deriving comprehensible and realistic design principles. With fixed material cost rates, a natural class of models implies the optimality of direct source-destination connections, but considerations of variable load and environmental intrusion then enforce trunking in the optimal design, producing an arterial or hierarchical net. Its determination requires a continuum formulation, which can, however, be simplified once a discrete structure begins to emerge. Connections are made with the masterly work of Bendsøe and Sigmund on optimal mechanical structures, and also with neural, processing and communication networks, including those of the Internet and the World Wide Web. Technical appendices are included.
Starting around the late 1950s, several research communities began relating the geometry of graphs to stochastic processes on these graphs. This book, twenty years in the making, ties together research in the field, encompassing work on percolation, isoperimetric inequalities, eigenvalues, transition probabilities, and random walks. Written by two leading researchers, the text emphasizes intuition, while giving complete proofs and more than 850 exercises. Many recent developments, in which the authors have played a leading role, are discussed, including percolation on trees and Cayley graphs, uniform spanning forests, the mass-transport technique, and connections between random walks on graphs and embeddings in Hilbert space. This state-of-the-art account of probability on networks will be indispensable for graduate students and researchers alike.
This modern and comprehensive guide to long-range dependence and self-similarity starts with rigorous coverage of the basics, then moves on to cover more specialized, up-to-date topics central to current research. These topics include, but are not limited to, physical models that give rise to long-range dependence and self-similarity; central and non-central limit theorems for long-range dependent series, and the limiting Hermite processes; fractional Brownian motion and its stochastic calculus; several celebrated decompositions of fractional Brownian motion; multidimensional models for long-range dependence and self-similarity; and maximum likelihood estimation methods for long-range dependent time series. Designed for graduate students and researchers, each chapter of the book is supplemented by numerous exercises, some designed to test the reader's understanding, while others invite the reader to consider some of the open research problems in the field today.
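To make the central object concrete: fractional Brownian motion with Hurst index H is the Gaussian process with covariance Cov(B_H(s), B_H(t)) = ½(s^{2H} + t^{2H} − |s − t|^{2H}); values of H above 1/2 give long-range dependence. A minimal simulation sketch via Cholesky factorisation of this covariance (an illustration, not the book's code):

```python
import numpy as np

def fbm_sample(n=500, hurst=0.7, horizon=1.0, seed=0):
    """Sample one path of fractional Brownian motion on (0, horizon]
    by Cholesky-factorising its covariance matrix."""
    t = np.linspace(horizon / n, horizon, n)
    s, u = np.meshgrid(t, t, indexing="ij")
    # Covariance of fBm with Hurst index H; H = 0.5 recovers Brownian motion.
    cov = 0.5 * (s**(2 * hurst) + u**(2 * hurst) - np.abs(s - u)**(2 * hurst))
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    rng = np.random.default_rng(seed)
    return t, chol @ rng.standard_normal(n)

t, path = fbm_sample()
```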
This rigorous introduction to network science presents random graphs as models for real-world networks. Such networks have distinctive empirical properties, and a wealth of new models has emerged to capture them. Classroom-tested for over ten years, this text places recent advances in a unified framework to enable systematic study. Designed for a master's-level course, where students may only have a basic background in probability, the text covers such important preliminaries as convergence of random variables, probabilistic bounds, coupling, martingales, and branching processes. Building on this base - and motivated by many examples of real-world networks, including the Internet, collaboration networks, and the World Wide Web - it focuses on several important models for complex networks and investigates key properties, such as the connectivity of nodes. Numerous exercises allow students to develop intuition and experience in working with the models.
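As an example of the kind of connectivity property studied, the Erdős–Rényi graph G(n, p) becomes connected with high probability once p exceeds log(n)/n. A small empirical check around that threshold (a sketch using the networkx library, not code from the book):

```python
import math
import networkx as nx

n = 2000
for c in (0.5, 1.0, 1.5):            # p = c * log(n) / n around the threshold
    p = c * math.log(n) / n
    hits = sum(nx.is_connected(nx.gnp_random_graph(n, p, seed=s))
               for s in range(20))
    print(f"c = {c}: connected in {hits}/20 trials")
```

For c well below 1 the graph is almost never connected; for c well above 1 it almost always is.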
The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, and quantum stochastic differential equations.
This book was first published in 2004. Many observed phenomena, from the changing health of a patient to values on the stock market, are characterised by quantities that vary over time: stochastic processes are designed to study them. This book introduces practical methods of applying stochastic processes to an audience knowledgeable only in basic statistics. It covers almost all aspects of the subject and presents the theory in an easily accessible form that is highlighted by application to many examples. These examples arise from dozens of areas, from sociology through medicine to engineering. Complementing these are exercise sets, making the book well suited for introductory courses in stochastic processes. Software for the freely available R system (downloadable from www.cambridge.org) is provided so that the reader can apply it to all the models presented.
Many electronic and acoustic signals can be modelled as sums of sinusoids and noise. However, the amplitudes, phases and frequencies of the sinusoids are often unknown and must be estimated in order to characterise the periodicity or near-periodicity of a signal and consequently to identify its source. This book presents and analyses several practical techniques used for such estimation. The problem of tracking the slowly changing frequency of a very noisy sinusoid over time is also considered. Rigorous analyses are presented via asymptotic or large sample theory, together with physical insight. The book focuses on achieving extremely accurate estimates when the signal-to-noise ratio is low but the sample size is large. Each chapter begins with a detailed overview, and many applications are given. Matlab code for the estimation techniques is also included. The book will thus serve as an excellent introduction and reference for researchers analysing such signals.
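A standard technique of this kind is maximisation of the periodogram: the frequency estimate is the location of the peak of the squared FFT magnitude. A minimal sketch (an illustration in Python, not the book's Matlab code):

```python
import numpy as np

def periodogram_freq(x, fs):
    """Estimate the dominant frequency of a noisy sinusoid by locating
    the peak of the periodogram (squared magnitude of the FFT)."""
    spec = np.abs(np.fft.rfft(x))**2
    spec[0] = 0.0                      # ignore the DC component
    k = np.argmax(spec)
    return k * fs / len(x)

# Example: a 50 Hz sinusoid in heavy noise, sampled at 1 kHz.
fs, n, f0 = 1000.0, 8192, 50.0
t = np.arange(n) / fs
rng = np.random.default_rng(1)
x = np.cos(2 * np.pi * f0 * t + 0.3) + rng.normal(0.0, 2.0, n)
print(periodogram_freq(x, fs))         # close to 50.0
```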
All scientific disciplines prize predictive success. Conventional statistical analyses, however, treat prediction as secondary, instead focusing on modeling and hence estimation, testing, and detailed physical interpretation, tackling these tasks before the predictive adequacy of a model is established. This book outlines a fully predictive approach to statistical problems based on studying predictors; the approach does not require that predictors correspond to a model, although this important special case is included in the general approach. Throughout, the point is to examine predictive performance before considering conventional inference. These ideas are traced through five traditional subfields of statistics, helping readers to refocus and adopt a directly predictive outlook. The book also considers prediction via contemporary 'black box' techniques and emerging data types and methodologies where conventional modeling is so difficult that good prediction is the main criterion available.
This comprehensive guide to stochastic processes gives a complete overview of the theory and addresses the most important applications. Pitched at a level accessible to beginning graduate students and researchers from applied disciplines, it is both a course book and a rich resource for individual readers. Subjects covered include Brownian motion, stochastic calculus, stochastic differential equations, Markov processes, weak convergence of processes and semigroup theory. Applications include the Black–Scholes formula for the pricing of derivatives in financial mathematics, the Kalman–Bucy filter used in the US space program and also theoretical applications to partial differential equations and analysis. Short, readable chapters aim for clarity rather than full generality. More than 350 exercises are included to help readers put their new-found knowledge to the test and to prepare them for tackling the research literature.
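For instance, the Black–Scholes price of a European call with spot S, strike K, rate r, volatility σ and maturity T is C = S·Φ(d₁) − K·e^{−rT}·Φ(d₂), where d₁ = (ln(S/K) + (r + σ²/2)T)/(σ√T) and d₂ = d₁ − σ√T. A self-contained sketch of this formula (illustrative, not code from the book):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, maturity):
    """Black-Scholes price of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

print(black_scholes_call(spot=100, strike=100, rate=0.05, vol=0.2, maturity=1.0))
# roughly 10.45
```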
This book is about the statistical principles behind the design of effective experiments and focuses on the practical needs of applied statisticians and experimenters engaged in design, implementation and analysis. Emphasising the logical principles of statistical design, rather than mathematical calculation, the authors demonstrate how all available information can be used to extract the clearest answers to many questions. The principles are illustrated with a wide range of examples drawn from real experiments in medicine, industry, agriculture and many other experimental disciplines. Numerous exercises are given to help the reader practise techniques and to appreciate the difference that good design can make to an experimental research project. Based on Roger Mead's excellent Design of Experiments, this new edition is thoroughly revised and updated to include modern methods relevant to applications in industry, engineering and modern biology. It also contains seven new chapters.
In nonparametric and high-dimensional statistical models, the classical Gauss–Fisher–Le Cam theory of the optimality of maximum likelihood estimators and Bayesian posterior inference does not apply, and new foundations and ideas have been developed in the past several decades. This book gives a coherent account of the statistical theory in infinite-dimensional parameter spaces. The mathematical foundations include self-contained 'mini-courses' on the theory of Gaussian and empirical processes, approximation and wavelet theory, and the basic theory of function spaces. The theory of statistical inference in such models - hypothesis testing, estimation and confidence sets - is presented within the minimax paradigm of decision theory. This includes the basic theory of convolution kernel and projection estimation, but also Bayesian nonparametrics and nonparametric maximum likelihood estimation. In a final chapter the theory of adaptive inference in nonparametric models is developed.
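As a concrete instance of convolution kernel estimation, the kernel density estimator is f̂(x) = (1/(nh)) Σᵢ K((x − Xᵢ)/h) for a kernel K and bandwidth h. A minimal sketch with a Gaussian kernel and a common rule-of-thumb bandwidth (illustrative only, not the book's treatment):

```python
import numpy as np

def kde(xs, data, bandwidth=None):
    """Gaussian kernel density estimate f_hat(x) = (1/(n*h)) * sum_i K((x - X_i)/h)."""
    n = len(data)
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian kernel.
        bandwidth = 1.06 * np.std(data) * n ** (-1 / 5)
    u = (xs[:, None] - data[None, :]) / bandwidth
    kernel = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return kernel.sum(axis=1) / (n * bandwidth)

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, size=1000)
grid = np.linspace(-3, 3, 7)
print(kde(grid, data))   # approximates the standard normal density on the grid
```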