Quantitative Methods in Linguistics introduces the general strategies and methods of quantitative analysis. The book dedicates individual chapters to phonetics, psycholinguistics, sociolinguistics, hi
Covering the general process of data analysis, from finding and collecting data to organizing and presenting it, this book offers a complete introduction to the fundamentals of data analysis. Using real-wor
This book surveys current research at the intersection of national security and statistical sciences, offering general reviews of quantitative approaches to counterterrorism, for policy makers, as wel
Quantitative genetics offers a general theory of the development of individual differences that suggests novel concepts and research strategies: for example, the idea that genetic influences operate in age-to-age change as well as in continuity. Quantitative genetics also provides powerful methods to address questions of change and continuity, including model-fitting approaches that test the fit between a specific model of genetic and environmental influences and observed correlations among family members, which are here helpfully introduced. A simple parent and offspring model is extended to include longitudinal and multivariate analyses. Longitudinal quantitative genetic research is essential to the understanding of developmental change and continuity. The largest and longest longitudinal adoption study is the Colorado Adoption Project, which has generated much of the rich data on the progress from infancy to early childhood on which the authors draw throughout this 1988 book. Their c
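The model-fitting idea mentioned above can be illustrated with a minimal sketch. This is not the authors' actual model: it assumes a purely additive genetic model (expected biological parent-offspring correlation of h²/2, zero expected adoptive-pair correlation, shared environment ignored), and the observed correlations are hypothetical numbers chosen for illustration.

```python
# Minimal sketch of quantitative-genetic model fitting (not the authors'
# models): fit heritability h^2 by least squares to hypothetical observed
# family correlations under a simple additive model.

observed = {"parent_offspring": 0.22, "adoptive_parent_offspring": 0.04}

def expected(h2):
    # Additive model: biological parent-offspring pairs share half their
    # genes, so the expected correlation is h^2 / 2; adoptive pairs share
    # no genes, so the model predicts zero (shared environment ignored).
    return {"parent_offspring": h2 / 2, "adoptive_parent_offspring": 0.0}

def fit_h2():
    # Grid search over h^2 in [0, 1], minimizing the squared discrepancy
    # between model-implied and observed correlations.
    best = min(
        (sum((observed[k] - expected(h2)[k]) ** 2 for k in observed), h2)
        for h2 in (i / 1000 for i in range(1001))
    )
    return best[1]

h2_hat = fit_h2()  # 0.44 for these hypothetical correlations
```

The same logic scales to the longitudinal and multivariate extensions the book describes: more observed correlations, more model parameters, and a fit statistic comparing the two.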
The book consists of an outline of combined qualitative and quantitative methods, designed on the basis of Bourdieu’s general social theory. The methods are focused on individual and collective actors
Recent trends suggest that international economic law may be witnessing a renaissance of convergence – both parallel and intersectional. The adjudicative process also reveals signs of convergence. These diverse claims of convergence are of legal, empirical and normative interest. Yet, convergence discourse also warrants scepticism. This volume contributes to both the general debate on the fragmentation of international law and the narrower discourse concerning the interplay between international trade and investment, focusing on dispute settlement. It moves beyond broad observations or singular case studies to provide an informed and wide-reaching assessment by investigating multiple standards, processes, mechanisms and behaviours. Methodologically, a normative stance is largely eschewed in favour of a range of 'doctrinal,' quantitative and qualitative methods that are used to address the research questions. Furthermore, in determining the extent of convergence or divergence, it is imp
Probability and Statistics are as much about intuition and problem solving as they are about theorem proving. Because of this, students can find it very difficult to make a successful transition from lectures to examinations to practice, since the problems involved can vary so much in nature. Since the subject is critical in many modern applications such as mathematical finance, quantitative management, telecommunications, signal processing, bioinformatics, as well as traditional ones such as insurance, social science and engineering, the authors have rectified deficiencies in traditional lecture-based methods by collecting together a wealth of exercises with complete solutions, adapted to the needs and skills of students. Following on from the success of Probability and Statistics by Example: Basic Probability and Statistics, the authors here concentrate on random processes, particularly Markov processes, emphasising models rather than general constructions. Basic mathematical facts are sup
We elaborate a general workflow of weighting-based survey inference, decomposing it into two main tasks. The first is the estimation of population targets from one or more sources of auxiliary information. The second is the construction of weights that calibrate the survey sample to the population targets. We emphasize that these tasks are predicated on models of the measurement, sampling, and nonresponse process whose assumptions cannot be fully tested. After describing this workflow in abstract terms, we then describe in detail how it can be applied to the analysis of historical and contemporary opinion polls. We also discuss extensions of the basic workflow, particularly inference for causal quantities and multilevel regression and poststratification.
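The second task of the workflow above (weight construction) can be sketched in a deliberately simplified setting: one categorical auxiliary variable whose population distribution is assumed known exactly, handled by cell weighting. The sample, cell names, and target shares below are all hypothetical.

```python
# Minimal sketch of weight construction via poststratification: weight
# each respondent so the weighted sample matches known population shares
# of one auxiliary variable. All numbers are hypothetical.
from collections import Counter

def poststratification_weights(sample_cells, population_shares):
    """Return one weight per respondent: (population share of the
    respondent's cell) / (sample share of that cell)."""
    n = len(sample_cells)
    sample_counts = Counter(sample_cells)
    return [population_shares[c] / (sample_counts[c] / n) for c in sample_cells]

# Hypothetical poll where college graduates are over-represented:
sample = ["college"] * 60 + ["no_college"] * 40
targets = {"college": 0.35, "no_college": 0.65}  # assumed census targets

w = poststratification_weights(sample, targets)
weighted_college_share = (
    sum(wi for wi, c in zip(w, sample) if c == "college") / sum(w)
)
# The weighted sample now reproduces the population target (0.35).
```

The first task (estimating the targets themselves) and the untestable measurement, sampling, and nonresponse assumptions the abstract emphasizes sit outside this sketch; with several auxiliary variables, calibration methods such as raking generalize the same idea.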
In the political fight over copyright, Internet advocacy has reshaped the playing field. This was shown in the 2012 'SOPA blackout', when the largest online protest in history stopped two copyright bills in their tracks. This protest was the culmination of an intellectual and political evolution more than a decade in the making. This book examines the debate over digital copyright, from the late 1980s through early 2012, and the new tools of political communication involved in the advocacy around the issue. Drawing on methods from legal studies, political science and communications, it explores the rise of a coalition seeking more limited copyright, as well as how these early-adopting, technology-savvy policy advocates used online communication to shock the world. It compares key bills, congressional debates, and offline and online media coverage using quantitative and qualitative methods to create a rigorous study for researchers that is also accessible to a general audience.
The book’s content is focused on rigorous and advanced quantitative methods for the pricing and hedging of counterparty credit and funding risk. The new general theory that is required for this method
Based on a starter course for beginning graduate students, Core Statistics provides concise coverage of the fundamentals of inference for parametric statistical models, including both theory and practical numerical computation. The book considers both frequentist maximum likelihood and Bayesian stochastic simulation while focusing on general methods applicable to a wide range of models and emphasizing the common questions addressed by the two approaches. This compact package serves as a lively introduction to the theory and tools that a beginning graduate student needs in order to make the transition to serious statistical analysis: inference; modeling; computation, including some numerics; and the R language. Aimed also at any quantitative scientist who uses statistical methods, this book will deepen readers' understanding of why and when methods work and explain how to develop suitable methods for non-standard situations, such as in ecology, big data and genomics.
Introducing the essentials of modern geochemistry for students across the Earth and environmental sciences, this new edition emphasises the general principles of this central discipline. Focusing on inorganic chemistry, Francis Albarède's refreshing approach is brought to topics that range from measuring geological time to the understanding of climate change. The author leads the student through the necessary mathematics to understand the quantitative aspects of the subject in an easily understandable manner. The early chapters cover the principles and methods of physics and chemistry that underlie geochemistry, to build the students' understanding of concepts such as isotopes, fractionation, and mixing. These are then applied across many of the environments on Earth, including the solid Earth, rivers, and climate, and then extended to processes on other planets. Three new chapters have been added – on stable isotopes, biogeochemistry, and environmental geochemistry. End-of-chapter stu
Acculturation is the process of group and individual changes in culture and behaviour that result from intercultural contact. These changes have been taking place forever, and continue at an increasing pace as more and more peoples of different cultures move, meet and interact. Variations in the meanings of the concept, and some systematic conceptualisations of it are presented. This is followed by a survey of empirical work with indigenous, immigrant and ethnocultural peoples around the globe that employed both ethnographic (qualitative) and psychological (quantitative) methods. This wide-ranging research has been undertaken in a quest for possible general principles (or universals) of acculturation. This Element concludes with a short evaluation of the field of acculturation; its past, present and future.
This Element discusses how shiny, an R package, can help instructors teach quantitative methods more effectively by way of interactive web apps. The interactivity increases instructors' effectiveness by making students more active participants in the learning process, allowing them to engage with otherwise complex material in an accessible, dynamic way. The Element offers four detailed apps that cover two fundamental linear regression topics: estimation methods (least squares, maximum likelihood) and the classic linear regression assumptions. It includes a summary of what the apps can be used to demonstrate, detailed descriptions of the apps' full capabilities, vignettes from actual class use, and example activities. Two other apps pertain to a more advanced topic (LASSO), with similar supporting material. For instructors interested in modifying the apps, the Element also documents the main apps' general code structure, highlights some of the more likely modifications, and goes through
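The two estimation methods the apps cover are closely related, and the connection can be sketched without the apps themselves. The following is not the Element's code (its apps are written with the R shiny package); it is a stdlib-Python illustration, on simulated data with assumed true parameters, of the fact that for the Gaussian linear model the least-squares fit is also the maximum-likelihood fit.

```python
# Minimal sketch: for y = a + b*x + Gaussian noise, the least-squares
# estimates of (a, b) also minimize the negative log-likelihood, so
# least squares and maximum likelihood coincide. Data are simulated
# with hypothetical true values a=1, b=2, sigma=0.5.
import math
import random

random.seed(0)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y = [1.0 + 2.0 * xi + random.gauss(0, 0.5) for xi in x]

# Closed-form least-squares slope and intercept.
xbar, ybar = sum(x) / n, sum(y) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
intercept = ybar - slope * xbar

def neg_log_lik(a, b, sigma=0.5):
    """Negative Gaussian log-likelihood (up to an additive constant)."""
    rss = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
    return rss / (2 * sigma ** 2) + n * math.log(sigma)

# Perturbing the least-squares fit in any direction raises the
# negative log-likelihood, i.e. lowers the likelihood.
for da, db in [(0.1, 0.0), (0.0, 0.1), (-0.05, 0.05)]:
    assert neg_log_lik(intercept, slope) < neg_log_lik(intercept + da, slope + db)
```

This equivalence holds because the Gaussian log-likelihood depends on (a, b) only through the residual sum of squares, which is exactly what least squares minimizes; the interactive apps let students see the same point by moving the fitted line and watching both criteria respond.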