This book was first published in 2004. Many observed phenomena, from the changing health of a patient to values on the stock market, are characterised by quantities that vary over time: stochastic processes are designed to study them. This book introduces practical methods of applying stochastic processes to an audience knowledgeable only in basic statistics. It covers almost all aspects of the subject and presents the theory in an easily accessible form that is highlighted by application to many examples. These examples arise from dozens of areas, from sociology through medicine to engineering. Complementing these are exercise sets, making the book suitable for introductory courses in stochastic processes. Software for the freely available R system, downloadable from www.cambridge.org, is provided so that readers can apply it to all the models presented.
The estimation of noisily observed states from a sequence of data has traditionally incorporated ideas from Hilbert spaces and calculus-based probability theory. As conditional expectation is the key concept, the correct setting for filtering theory is that of a probability space. Graduate engineers, mathematicians and those working in quantitative finance wishing to use filtering techniques will find in the first half of this book an accessible introduction to measure theory, stochastic calculus, and stochastic processes, with particular emphasis on martingales and Brownian motion. Exercises are included. The book then provides an excellent users' guide to filtering: basic theory is followed by a thorough treatment of Kalman filtering, including recent results which extend the Kalman filter to provide parameter estimates. These ideas are then applied to problems arising in finance, genetics and population modelling in three separate chapters, making this a comprehensive resource for readers wishing to apply filtering techniques in practice.
Point-to-point vs hub-and-spoke. Questions of network design are real and involve many billions of dollars. Yet little is known about optimising design - nearly all work concerns optimising flow assuming a given design. This foundational book tackles optimisation of network structure itself, deriving comprehensible and realistic design principles. With fixed material cost rates, a natural class of models implies the optimality of direct source-destination connections, but considerations of variable load and environmental intrusion then enforce trunking in the optimal design, producing an arterial or hierarchical net. Its determination requires a continuum formulation, which can however be simplified once a discrete structure begins to emerge. Connections are made with the masterly work of Bendsøe and Sigmund on optimal mechanical structures and also with neural, processing and communication networks, including those of the Internet and the World Wide Web. Technical appendices are included.
The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, and quantum stochastic differential equations.
This eagerly awaited textbook covers everything the graduate student in probability wants to know about Brownian motion, as well as the latest research in the area. Starting with the construction of Brownian motion, the book then proceeds to sample path properties like continuity and nowhere differentiability. Notions of fractal dimension are introduced early and are used throughout the book to describe fine properties of Brownian paths. The relation of Brownian motion and random walk is explored from several viewpoints, including a development of the theory of Brownian local times from random walk embeddings. Stochastic integration is introduced as a tool and an accessible treatment of the potential theory of Brownian motion clears the path for an extensive treatment of intersections of Brownian paths. An investigation of exceptional points on the Brownian path and an appendix on SLE processes, by Oded Schramm and Wendelin Werner, lead directly to recent research themes.
Discover what you can do with R! Introducing the R system, covering standard regression methods, then tackling more advanced topics, this book guides users through the practical, powerful tools that the R system provides. The emphasis is on hands-on analysis, graphical display, and interpretation of data. The many worked examples, from real-world research, are accompanied by commentary on what is done and why. The companion website has code and datasets, allowing readers to reproduce all analyses, along with solutions to selected exercises and updates. Assuming basic statistical knowledge and some experience with data analysis (but not R), the book is ideal for research scientists, final-year undergraduate or graduate-level students of applied statistics, and practising statisticians. It is both for learning and for reference. This third edition expands upon topics such as Bayesian inference for regression, errors in variables, generalized linear mixed models, and random forests.
Exact statistical inference may be employed in diverse fields of science and technology. As problems become more complex and sample sizes become larger, mathematical and computational difficulties can arise that require the use of approximate statistical methods. Such methods are justified by asymptotic arguments but are still based on the concepts and principles that underlie exact statistical inference. With this in mind, this book presents a broad view of exact statistical inference and the development of asymptotic statistical inference, providing a justification for the use of asymptotic methods for large samples. Methodological results are developed on a concrete and yet rigorous mathematical level and are applied to a variety of problems that include categorical data, regression, and survival analyses. This book is designed as a textbook for advanced undergraduate or beginning graduate students in statistics, biostatistics, or applied statistics but may also be used as a reference for researchers and practitioners.
When is a random network (almost) connected? How much information can it carry? How can you find a particular destination within the network? And how do you approach these questions - and others - when the network is random? The analysis of communication networks requires a fascinating synthesis of random graph theory, stochastic geometry and percolation theory to provide models for both structure and information flow. This book is the first comprehensive introduction for graduate students and scientists to techniques and problems in the field of spatial random networks. The selection of material is driven by applications arising in engineering, and the treatment is both readable and mathematically rigorous. Though mainly concerned with information-flow-related questions motivated by wireless data networks, the models developed are also of interest in a broader context, ranging from engineering to social networks, biology, and physics.
Bootstrap methods are computer-intensive methods of statistical analysis, which use simulation to calculate standard errors, confidence intervals, and significance tests. The methods apply for any level of modelling, and so can be used for fully parametric, semiparametric, and completely nonparametric analysis. This 1997 book gives broad and up-to-date coverage of bootstrap methods, with numerous applied examples, developed in a coherent way with the necessary theoretical basis. Applications include stratified data; finite populations; censored and missing data; linear, nonlinear, and smooth regression models; classification; time series and spatial problems. Special features of the book include: extensive discussion of significance tests and confidence intervals; material on various diagnostic methods; and methods for efficient computation, including improved Monte Carlo simulation. Each chapter includes both practical and theoretical exercises. S-Plus programs for implementing the methods described in the text are also provided.
Recent years have witnessed an explosion in the volume and variety of data collected in all scientific disciplines and industrial settings. Such massive data sets present a number of challenges to researchers in statistics and machine learning. This book provides a self-contained introduction to the area of high-dimensional statistics, aimed at the first-year graduate level. It includes chapters that are focused on core methodology and theory - including tail bounds, concentration inequalities, uniform laws and empirical processes, and random matrices - as well as chapters devoted to in-depth exploration of particular model classes - including sparse linear models, matrix models with rank constraints, graphical models, and various types of non-parametric models. With hundreds of worked examples and exercises, this text is intended both for courses and for self-study by graduate students and researchers in statistics, machine learning, and related fields who must understand, apply, and adapt modern statistical methods suited to large-scale data.
This detailed introduction to distribution theory is designed as a text for the probability portion of the first-year statistical theory sequence for Master's and PhD students in statistics, biostatistics, and related fields.
Bayesian nonparametrics works - theoretically, computationally. The theory provides highly flexible models whose complexity grows appropriately with the amount of data. Computational issues, though challenging, are no longer intractable. All that is needed is an entry point: this intelligent book is the perfect guide to what can seem a forbidding landscape. Tutorial chapters by Ghosal, Lijoi and Prünster, Teh and Jordan, and Dunson advance from theory, to basic models and hierarchical modeling, to applications and implementation, particularly in computer science and biostatistics. These are complemented by companion chapters by the editors and Griffin and Quintana, providing additional models, examining computational issues, identifying future growth areas, and giving links to related topics. This coherent text gives ready access both to underlying principles and to state-of-the-art practice. Specific examples are drawn from information retrieval, NLP, machine vision, computational biology, biostatistics, and bioinformatics.
In nonparametric and high-dimensional statistical models, the classical Gauss–Fisher–Le Cam theory of the optimality of maximum likelihood estimators and Bayesian posterior inference does not apply, and new foundations and ideas have been developed in the past several decades. This book gives a coherent account of the statistical theory in infinite-dimensional parameter spaces. The mathematical foundations include self-contained 'mini-courses' on the theory of Gaussian and empirical processes, approximation and wavelet theory, and the basic theory of function spaces. The theory of statistical inference in such models - hypothesis testing, estimation and confidence sets - is presented within the minimax paradigm of decision theory. This includes the basic theory of convolution kernel and projection estimation, but also Bayesian nonparametrics and nonparametric maximum likelihood estimation. In a final chapter the theory of adaptive inference in nonparametric models is developed.
Starting around the late 1950s, several research communities began relating the geometry of graphs to stochastic processes on these graphs. This book, twenty years in the making, ties together research in the field, encompassing work on percolation, isoperimetric inequalities, eigenvalues, transition probabilities, and random walks. Written by two leading researchers, the text emphasizes intuition, while giving complete proofs and more than 850 exercises. Many recent developments, in which the authors have played a leading role, are discussed, including percolation on trees and Cayley graphs, uniform spanning forests, the mass-transport technique, and connections between random walks on graphs and embeddings in Hilbert space. This state-of-the-art account of probability on networks will be indispensable for graduate students and researchers alike.