The document discusses constructing confidence intervals for the difference between two population parameters using independent random samples from two populations. Several types of confidence intervals can be constructed, including intervals for the difference between two means (μ1 − μ2), with the standard deviations known or unknown, and for the difference between two proportions (p1 − p2). Examples are provided to illustrate how to construct confidence intervals for the difference between means and proportions.
2. Independent Samples and Dependent Samples

In this section, we will use samples from two populations to create confidence intervals for the difference between population parameters. We need a sample from each population. Samples may be independent or dependent, according to how they are selected.
3. Independent Samples and Dependent Samples

Two samples are independent if the sample data drawn from one population are completely unrelated to the selection of sample data from the other population.

Examples:
Sample 1: test scores for 35 statistics students
Sample 2: test scores for 42 biology students

Sample 1: heights of 27 adult females
Sample 2: heights of 27 adult males

Sample 1: the SAT scores for 35 high school students who did not take an SAT preparation course
Sample 2: the SAT scores for 40 high school students who did take an SAT preparation course
4. Independent Samples and Dependent Samples

Two samples are dependent if each data value in one sample can be paired with a corresponding data value in the other sample.

Examples:
Sample 1: resting heart rates of 35 individuals before drinking coffee
Sample 2: resting heart rates of the same individuals after drinking two cups of coffee

Sample 1: midterm exam scores of 14 chemistry students
Sample 2: final exam scores of the same 14 chemistry students

Sample 1: the fuel mileage of 10 cars
Sample 2: the fuel mileage of the same 10 cars using a fuel additive
5. Independent Samples and Dependent Samples

Dependent samples and data pairs occur very naturally in "before and after" situations in which the same object or item is measured twice.

Independent samples occur very naturally when we draw two random samples, one from the first population and one from the second population.

All the examples of this section will involve independent random samples.
6. Confidence Intervals for the Difference Between Two Population Parameters

There are several types of confidence intervals for the difference between two population parameters:

Confidence intervals for μ1 − μ2 (σ1 and σ2 known)
Confidence intervals for μ1 − μ2 (σ1 and σ2 unknown)
Confidence intervals for μ1 − μ2 (σ1 = σ2)
Confidence intervals for p1 − p2
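The slides between this list and Example 8 (slides 7 through 10) are missing from the transcript, so the formulas themselves are not captured. As a sketch, the standard textbook forms of these intervals, in the usual notation (x̄ for sample means, s for sample standard deviations, p̂ for sample proportions with q̂ = 1 − p̂, and z_c or t_c for the critical value at confidence level c), are:

```latex
% sigma_1 and sigma_2 known (z-interval):
(\bar{x}_1 - \bar{x}_2) \pm z_c \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}

% sigma_1 and sigma_2 unknown (t-interval; a conservative convention
% uses df = smaller of n_1 - 1 and n_2 - 1):
(\bar{x}_1 - \bar{x}_2) \pm t_c \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}

% sigma_1 = sigma_2 (pooled t-interval with df = n_1 + n_2 - 2):
(\bar{x}_1 - \bar{x}_2) \pm t_c \, s_p \sqrt{\frac{1}{n_1} + \frac{1}{n_2}},
\qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}}

% p_1 - p_2 (large-sample z-interval):
(\hat{p}_1 - \hat{p}_2) \pm z_c \sqrt{\frac{\hat{p}_1 \hat{q}_1}{n_1} + \frac{\hat{p}_2 \hat{q}_2}{n_2}}
```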
11. Example 8 Page 375
Confidence interval for μ1 − μ2, σ1 and σ2 known

In the summer of 1988, Yellowstone National Park had some major fires that destroyed large tracts of old timber near many famous trout streams. Fishermen were concerned about the long-term effects of the fires on these streams. However, biologists claimed that the new meadows that would spring up under dead trees would produce a lot more insects, which would in turn mean better fishing in the years ahead. Guide services registered with the park provided data about the daily catch for fishermen over many years. Ranger checks on the streams also provided data about the daily number of fish caught by fishermen. Yellowstone Today (a national park publication) indicates that the biologists' claim is basically correct and that Yellowstone anglers are delighted by their average increased catch. Suppose you are a biologist studying fishing data from Yellowstone streams before and after the fire. Fishing reports include the number of trout caught per day per fisherman.

A random sample of n1 = 167 reports from the period before the fire showed that the average catch was x̄1 = 5.2 trout per day. Assume that the standard deviation of daily catch per fisherman during this period was σ1 = 1.9. Another random sample of n2 = 125 fishing reports 5 years after the fire showed that the average catch per day was x̄2 = 6.8 trout. Assume that the standard deviation during this period was σ2 = 2.3.
12. Example 8 Page 375
Confidence interval for μ1 − μ2, σ1 and σ2 known

Solution:
The population for the first sample is the number of trout caught per day by fishermen before the fire. The population for the second sample is the number of trout caught per day after the fire. Both samples were random samples taken in their respective time periods. There was no effort to pair individual data values. Therefore, the samples can be thought of as independent samples.

A normal distribution is appropriate for the x̄1 − x̄2 distribution because the sample sizes are sufficiently large and we know both σ1 and σ2.
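In symbols, the standard large-sample fact being used here (not shown on the captured slides) is that, for independent samples with known standard deviations,

```latex
\bar{x}_1 - \bar{x}_2 \;\sim\; N\!\left(\mu_1 - \mu_2,\; \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}\right)
\quad \text{(approximately, for large } n_1 \text{ and } n_2\text{)}
```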
13. Example 8 Page 375
Confidence interval for μ1 − μ2, σ1 and σ2 known
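The computation that appeared on this slide was not captured in the transcript. As a sketch, assuming the 95% confidence level referenced on slide 14, the interval follows directly from the values given on slide 11:

```python
from math import sqrt

# Example 8 data (slide 11): trout catch before vs. after the fire
n1, xbar1, sigma1 = 167, 5.2, 1.9   # before the fire
n2, xbar2, sigma2 = 125, 6.8, 2.3   # 5 years after the fire

z_c = 1.96  # critical z value for 95% confidence (level assumed from slide 14)

# Standard error of xbar1 - xbar2 when sigma1 and sigma2 are known
se = sqrt(sigma1**2 / n1 + sigma2**2 / n2)

diff = xbar1 - xbar2   # point estimate of mu1 - mu2
margin = z_c * se      # margin of error
print(f"{diff - margin:.2f} < mu1 - mu2 < {diff + margin:.2f}")
# prints: -2.10 < mu1 - mu2 < -1.10, an interval of all negative values,
# consistent with the interpretation on slide 14
```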
14. Example 8 Page 375
Confidence interval for μ1 − μ2, σ1 and σ2 known

c) Interpretation: What is the meaning of the confidence interval computed in part (b)?

Solution:
Since the confidence interval contains only negative values, we are 95% sure that μ1 − μ2 < 0. This means we are 95% sure that μ1 < μ2.
18. Example 9 Page 377
Confidence interval for μ1 − μ2, σ1 and σ2 unknown

Alexander Borbely is a professor at the Medical School of the University of Zurich, where he is director of the Sleep Laboratory. Dr. Borbely and his colleagues are experts on sleep, dreams, and sleep disorders. In his book Secrets of Sleep, Dr. Borbely discusses brain waves, which are measured in hertz, the number of oscillations per second. Rapid brain waves (wakefulness) are in the range of 16 to 25 hertz. Slow brain waves (sleep) are in the range of 4 to 8 hertz. During normal sleep, a person goes through several cycles (each cycle is about 90 minutes) of brain waves, from rapid to slow and back to rapid. During deep sleep, brain waves are at their slowest. In his book, Professor Borbely comments that alcohol is a poor sleep aid. In one study, a number of subjects were given 1/2 liter of red wine before they went to sleep. The subjects fell asleep quickly but did not remain asleep the entire night. Toward morning, between 4 and 6 A.M., they tended to wake up and have trouble going back to sleep.

Suppose that a random sample of 29 college students was randomly divided into two groups. The first group of n1 = 15 people was given 1/2 liter of red wine before going to sleep. The second group of n2 = 14 people was given no alcohol before going to sleep. Everyone in both groups went to sleep at 11 P.M. The average brain wave activity (4 to 6 A.M.) was determined for each individual in the groups. Assume the average brain wave distribution in each group is mound-shaped and symmetric.
19–21. Example 9 Page 377
Confidence interval for μ1 − μ2, σ1 and σ2 unknown
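The computations on slides 19 through 21 appeared as images that were not captured. As a minimal sketch, assuming the conservative smaller-degrees-of-freedom convention (statistical software often uses the larger Satterthwaite approximation instead), the interval could be computed as follows; the function and its name are illustrative, and the sample means and standard deviations from the missing slides would be supplied as arguments:

```python
from math import sqrt
from scipy import stats

def t_interval_diff(xbar1, s1, n1, xbar2, s2, n2, conf=0.90):
    """Confidence interval for mu1 - mu2 when sigma1 and sigma2 are unknown.

    Uses the conservative convention df = min(n1, n2) - 1.
    """
    df = min(n1, n2) - 1
    t_c = stats.t.ppf(0.5 + conf / 2, df)   # two-tailed critical t value
    se = sqrt(s1**2 / n1 + s2**2 / n2)      # standard error of xbar1 - xbar2
    diff = xbar1 - xbar2                    # point estimate of mu1 - mu2
    return diff - t_c * se, diff + t_c * se
```

For this example, the group sizes are n1 = 15 and n2 = 14, and slide 22 reports the resulting 90% interval as 11.8 to 14.3 hertz.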
22. Example 9 Page 377
Confidence interval for μ1 − μ2, σ1 and σ2 unknown

c) Interpretation: What is the meaning of the confidence interval you computed in part (b)?

Solution:
We are 90% confident that the interval between 11.8 and 14.3 hertz is one that contains the difference μ1 − μ2. Since the confidence interval from 11.8 to 14.3 contains only positive values, we could express this by saying that we are 90% confident that μ1 − μ2 is positive. This means that μ1 − μ2 > 0. Thus, we are 90% confident that μ1 > μ2.