The document discusses Chi-square tests and their applications. Chi-square tests are non-parametric tests used to analyze categorical data, and they come in three main types: 1) goodness-of-fit tests, which determine whether a sample fits a hypothesized distribution; 2) independence tests, which determine whether two categorical variables are associated; and 3) homogeneity tests, which determine whether a categorical variable is distributed identically across populations. Chi-square tests involve calculating expected frequencies, observed frequencies, and a test statistic to determine whether the null hypothesis can be rejected.
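For readers working in R, a minimal sketch of the two most common chi-square calls is shown below; the contingency table and the goodness-of-fit counts are invented purely for illustration.

```r
# Hypothetical 2 x 3 contingency table: handedness by field of study
observed <- matrix(c(30, 45, 25,
                     10, 12,  8),
                   nrow = 2, byrow = TRUE,
                   dimnames = list(hand = c("right", "left"),
                                   field = c("arts", "science", "other")))
chisq.test(observed)                        # test of independence

# Goodness of fit: do three invented category counts fit a uniform distribution?
chisq.test(c(18, 22, 20), p = rep(1/3, 3))
```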
This document discusses theories of language structure and processing. It begins by describing Noam Chomsky's critique of behaviorism and introduction of concepts like universal grammar and the poverty of stimulus. It then covers topics like the types of words in language, sentence structure rules, properties of language like creativity and arbitrariness, and theories of language processing including lexical access and categorical perception. Research methods discussed include studies of language acquisition, disorders, reaction times, brain imaging, and cross-cultural comparisons.
This document discusses functions in R language and data analysis. It explains control structures like if/else statements, the ... argument which allows a variable number of arguments, function arguments and defaults, lazy evaluation of arguments, and how the ... argument is used when the number of arguments is unknown. Examples are provided to illustrate if/else logic, formals() to view function arguments, and how ... passes variable arguments to functions like paste() and cat().
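A short, hypothetical illustration of these pieces (if/else, default arguments, formals(), and forwarding ... to cat()) is sketched below; the function names are made up for the example.

```r
# if/else control flow
classify <- function(x) {
  if (x > 0) "positive" else if (x < 0) "negative" else "zero"
}

# a default argument plus ... forwarded to cat() for a variable number of values
announce <- function(label = "values", ...) {
  cat(label, ":", ..., "\n")
}

formals(announce)            # view the formal arguments and their defaults
classify(-3)                 # "negative"
announce("scores", 1, 2, 3)  # ... passes 1, 2, 3 through to cat()
```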
The document discusses functions in R and their arguments. It explains that functions have formal arguments that may have default values, and arguments can be matched by position or by name. It also demonstrates using the psych package to calculate descriptive statistics and visualize the iris data grouped by species.
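The sketch below shows positional versus named argument matching and the grouped descriptives; it assumes the psych package is installed and uses the built-in iris data.

```r
# argument matching: by position vs. by name
round(3.14159, 2)                 # positional
round(digits = 2, x = 3.14159)    # named arguments can appear in any order

# descriptive statistics for iris, grouped by Species (psych package)
library(psych)
describeBy(iris[, 1:4], group = iris$Species)
```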
R provides vectorized operations that allow performing calculations efficiently on entire vectors and matrices at once. Functions like addition, subtraction, multiplication, and division work element-wise across vectors of the same length. Matrix operations like multiplication can also be performed. R uses factors to represent categorical data, which are treated specially in modeling functions. Factors have levels and can be ordered. Random samples can be drawn from vectors and matrices constructed to represent categorical data.
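A brief sketch of these ideas, using small made-up vectors:

```r
x <- c(1, 2, 3, 4)
y <- c(10, 20, 30, 40)
x + y                      # element-wise addition
x * y                      # element-wise multiplication

A <- matrix(1:4, nrow = 2)
B <- matrix(5:8, nrow = 2)
A %*% B                    # true matrix multiplication (A * B is element-wise)

# a factor for categorical data, built from a random sample
grp <- factor(sample(c("low", "high"), size = 10, replace = TRUE),
              levels = c("low", "high"), ordered = TRUE)
levels(grp); table(grp)
```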
R is a programming language for data analysis and statistics. It allows users to enter commands at the prompt ">" to perform calculations and manipulate numeric and other objects like vectors and matrices. Basic objects in R include numeric, integer, character, complex, and logical values. Vectors are the most basic data structure and can contain elements of the same type. Matrices are two-dimensional vectors that store values in rows and columns. Functions like c(), seq(), and rep() can be used to create, combine and replicate vectors and sequences of values.
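For example (values chosen arbitrarily):

```r
v <- c(2, 4, 6)                        # combine values into a numeric vector
s <- seq(from = 1, to = 10, by = 2)    # 1 3 5 7 9
r <- rep(c("a", "b"), times = 3)       # replicate a character vector
m <- matrix(1:6, nrow = 2, ncol = 3)   # 2 x 3 matrix, filled column-wise
class(v); class(r); dim(m)
```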
This document discusses different statistical modeling techniques including one-way ANOVA, two-way ANOVA, linear regression, logistic regression, support vector machines, and artificial neural networks. It provides information on the arguments and functions used for one-way and two-way ANOVA, and explains the key differences between linear regression and logistic regression models. Support vector machines are introduced for categorical prediction and support vector regression for modeling linear relationships. Artificial neural networks are also listed briefly.
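The sketch below contrasts a few of these models in R; it uses the built-in mtcars and iris data as stand-ins and assumes the e1071 package for the support vector machine.

```r
# linear regression (continuous outcome) vs. logistic regression (binary outcome)
lin <- lm(mpg ~ wt, data = mtcars)
log_fit <- glm(am ~ wt, data = mtcars, family = binomial)
summary(lin); summary(log_fit)

# one-way and two-way ANOVA on the same data
a1 <- aov(mpg ~ factor(cyl), data = mtcars)
a2 <- aov(mpg ~ factor(cyl) * factor(am), data = mtcars)
summary(a1); summary(a2)

# support vector machine for categorical prediction (e1071 package)
library(e1071)
svm_fit <- svm(Species ~ ., data = iris)
table(predicted = predict(svm_fit, iris), actual = iris$Species)
```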
This document discusses statistical computing in R, including generating random numbers from distributions, probability density functions, cumulative distribution functions, and quantile functions. It also covers loops and if/else conditional statements in R. Specifically, it shows how to generate random normals, calculate normal densities and CDFs, and take quantiles. It also demonstrates for loops, if/else statements, and nested if/else statements.
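For instance:

```r
set.seed(1)
rnorm(5, mean = 0, sd = 1)   # five random normal draws
dnorm(0)                     # density of N(0, 1) at 0
pnorm(1.96)                  # cumulative probability, about 0.975
qnorm(0.975)                 # quantile, about 1.96

x <- rnorm(3)
for (i in seq_along(x)) {    # a for loop with a nested if/else
  if (x[i] > 0) {
    print("positive")
  } else {
    print("non-positive")
  }
}
```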
This document discusses statistical computing in RStudio. It covers importing and browsing data, data types, and hands-on exercises. It also demonstrates basic math operations, using packages, getting help, and best practices for creating R documents.
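A small, self-contained illustration (the CSV here is inline so the example runs anywhere; a real analysis would read a file path instead):

```r
# install.packages("readr")                    # install a package once
dat <- read.csv(text = "id,score\n1,10\n2,12") # tiny inline stand-in for a data file
str(dat)                                       # check the imported data types
head(dat)                                      # browse the first rows
# ?read.csv                                    # open the help page for a function
```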
A multiple regression analysis was conducted to predict body fat percentage using triceps skinfold thickness, thigh circumference, and midarm circumference as predictor variables. The analysis found that triceps skinfold thickness alone accounted for some of the variation in body fat percentage. Adding thigh circumference and midarm circumference as additional predictors further reduced error and increased the accuracy of predictions, as shown through calculations of sums of squares. Multiple regression allows determining the contribution of each predictor variable both individually and in combination with other predictors.
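The body fat data themselves are not reproduced here, so the sketch below uses the built-in mtcars data as a stand-in to show how the extra-sums-of-squares comparison works when predictors are added.

```r
fit_1 <- lm(mpg ~ wt, data = mtcars)                 # one predictor
fit_3 <- lm(mpg ~ wt + hp + disp, data = mtcars)     # two additional predictors
anova(fit_1, fit_3)                                  # do the added predictors reduce SSE?
summary(fit_3)$r.squared - summary(fit_1)$r.squared  # gain in explained variation
```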
This document discusses different types of analysis of variance (ANOVA) models including Type III ANOVA with fixed and random factors, two-way ANOVA, and simple main effects as well as regression models.
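As a concrete (if generic) example, a Type III two-way ANOVA can be requested through the car package; ToothGrowth is used here only as convenient built-in data.

```r
library(car)
tg <- ToothGrowth
tg$dose <- factor(tg$dose)
m <- lm(len ~ supp * dose, data = tg,
        contrasts = list(supp = contr.sum, dose = contr.sum))  # sum-to-zero coding
Anova(m, type = 3)   # Type III sums of squares for the two-way design
```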
This document provides an overview of essential methods for analyzing EEG/MEG signal data. It discusses (1) what EEG and MEG are and how they relate to brain activity, (2) common analytic steps like epoching, artifact rejection, averaging, and measuring amplitudes, (3) examples of ERP components and how to avoid overlap, (4) advanced approaches like source analysis, time-frequency analysis, and multiscale entropy analysis, and (5) the importance of EEG/MEG for studying human cognitive functions and brain mechanisms.
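As a rough illustration of the epoching-and-averaging logic only (simulated single-channel data, not a real EEG pipeline):

```r
set.seed(42)
n_epochs <- 100; n_samples <- 300                # simulated epoched data
epochs <- matrix(rnorm(n_epochs * n_samples), nrow = n_epochs)
keep <- apply(abs(epochs), 1, max) < 3           # crude amplitude-based artifact rejection
erp <- colMeans(epochs[keep, ])                  # averaging across epochs yields the ERP
plot(erp, type = "l", xlab = "sample", ylab = "amplitude (a.u.)")
```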
This document provides an overview of APA style formatting. It discusses what APA style is and the fields that commonly use it. It then covers the basic sections of an APA paper including the title page, abstract, introduction, method, results, discussion, and references page. It also details formatting guidelines for headings, numbers, lists, punctuation, quotations, paraphrasing, and citing sources. The document aims to explain the key rules and conventions for writing academic papers in APA style.
Here are the R commands to create the requested graph from the MASS leuk dataset and save it as MASSleuk.jpeg:
```r
library(MASS)                      # the leuk data set lives in the MASS package
data(leuk)
windows()                          # opens a graphics window (Windows; use x11()/quartz() elsewhere)
par(mfrow = c(2, 2))               # 2 x 2 panel layout
plot(leuk$time, main = "Scatter plot of time", ylab = "time")
hist(leuk$time, main = "Histogram of time", xlab = "time")
boxplot(leuk$time, main = "Boxplot of time")
qqnorm(leuk$time); qqline(leuk$time)
dev.copy(jpeg, "MASSleuk.jpeg")    # copy the screen device to a JPEG file
dev.off()                          # close the JPEG device so the file is written
```
This will open a graphics window, draw the four plots in a 2 x 2 layout, and then copy the display to MASSleuk.jpeg; dev.off() closes the file device so the image is actually written to disk.
31–33. Tarkiainen et al. (1999)
[Figure: M100 and M170 response waveforms]
# M100 response varies in intensity with visual noise
# M170 response varies in intensity with string length
# M170 response shows the difference between symbols and letters
34. Reading-Related N170 response
• 150–200 ms after stimulus onset
• well established in both ERP and MEG studies
• generated from the fusiform gyrus
  – lateralized to the left-hemisphere fusiform gyrus (the visual word form area; Cohen et al., 2000)
• orthographic word-form detection (Bentin et al., 1999)
36–37. • In studies of alphabetic languages, there are different measures for different aspects of orthographic properties
  – e.g., letter length and bigram frequency
• In Chinese orthography, the number of strokes is highly correlated with many factors (3,967 phonograms):
  – strokes and frequency: r = -.14***
  – strokes and phonetic combinability: r = -.14***
  – strokes and semantic combinability: r = -.19***
• The N170/M170 can reflect:
  – letter length (Tarkiainen et al., 2002)
  – bigram frequency (Hauk et al., 2006)
  – transition probability (Solomyak and Marantz, 2010)
  – expertise with words (Bentin et al., 1999; Wong et al., 2005)
limitations of factorial design
38. • Solutions:
  – single-trial analyses (Dambacher, Kliegl, Hofmann, & Jacobs, 2006; Hauk et al., 2006; Solomyak & Marantz, 2009)
  – linear mixed models (Baayen et al., 2008)
  – measurement of MEG source activation by minimum-norm estimation
39. Experimental Design
• 400 real characters
• 400 pseudo-characters and non-characters
• Task: lexical decision
• Subjects:
  – 10 native Chinese speakers, error rate: 9% (SD: 3%)
  – 5 English speakers, error rate: 50% (range: 45–54%)
40. • Advantages of linear mixed models
  – can estimate fixed effects and random effects simultaneously
  – use maximum likelihood to estimate the effects of the predictor variables and their variance
• can handle missing values and comparisons with unequal sample sizes
  – Baayen et al. (2008) recommend Markov chain Monte Carlo sampling, which controls Type I error and is not affected by sample size
[Figure: Type I error rates across different methods (64 observations)]
[Figure: Type I error rates across different methods (800 observations)]
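Since the slides recommend linear mixed models for single-trial data, a minimal lme4 sketch follows; the data frame and variable names in the commented line (trials, rt, strokes, subject, item) are hypothetical, and the runnable part uses the sleepstudy data shipped with lme4 as a stand-in.

```r
library(lme4)
# hypothetical single-trial model: fixed effect of strokes, crossed random
# intercepts for subjects and items, as in Baayen et al. (2008)
# fit <- lmer(rt ~ strokes + (1 | subject) + (1 | item), data = trials)

# runnable stand-in using the sleepstudy data shipped with lme4
fit <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
summary(fit)   # fixed effects and random-effect variances are estimated together
```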