---
title: "Basics: Convergence rates"
subtitle: "Simulation notebook"
author: "Michael Knaus"
date: "`r format(Sys.time(), '%m/%y')`"
output: 
  html_notebook:
    toc: true
    toc_float: true
    code_folding: show
---

<br>

Goals:

- Illustrate convergence rates

- Illustrate the difference between convergence of coefficients and convergence of root mean squared error

- Illustrate the differences between OLS and kernel regression


<br>

## Linear world

We start in a world where the conditional expectation function (CEF) is actually linear and OLS should converge at $\sqrt{N}$.

### Data generating process

Consider a data generating process (DGP) with: 

$Y = \underbrace{\beta_0 + \beta_1 X}_{CEF} + U$, where $\beta_0 = -1$, $\beta_1 = 2$, $X \sim uniform(0,1)$ and $U \sim uniform(-1,1)$.

For illustration, we plot a random draw with $N=500$ and the true CEF:

```{r message=FALSE, warning = F}
# Load the packages required for later
if (!require("tidyverse")) install.packages("tidyverse", dependencies = TRUE); library(tidyverse)
if (!require("ggridges")) install.packages("ggridges", dependencies = TRUE); library(ggridges)
if (!require("latex2exp")) install.packages("latex2exp", dependencies = TRUE); library(latex2exp)
if (!require("np")) install.packages("np", dependencies = TRUE); library(np)

set.seed(1234)  # For replicability
n = 500         # Sample size for illustration

x = runif(n)    # Draw covariate

# Define population parameters
b0 = -1
b1 = 2

# Define conditional expectation function
cef = function(x){b0 + b1*x}

# Generate outcome variable
y = cef(x) + runif(n,-1,1)

# Plot sample
df = data.frame(x=x,y=y)
ggplot(df) + stat_function(fun=cef,size=1) + 
            geom_point(aes(x=x,y=y),color="blue",alpha = 0.4)
```

Applying OLS to the plotted sample shows that it nearly recovers the true coefficients, but not exactly due to sampling noise (as expected).

```{r}
summary(lm(y ~ x))
```

<br>
<br>

### Convergence rate of OLS - coefficients

We know that $$\sqrt{N}(\hat{\beta}_N - \beta) \overset{d}{\rightarrow} \mathcal{N}(0,\Sigma)$$ 

This means that, once we multiply it by $\sqrt{N}$, the difference between the estimated coefficients and the true values is (asymptotically) normally distributed **with constant variance**.

<br>

Let's unpack this in the following way:

1. Inspect distributions of $\hat{\beta}_N$ for increasing $N$

2. Inspect distributions of $\hat{\beta}_N - \beta$ for increasing $N$

3. Inspect distributions of $\sqrt{N}(\hat{\beta}_N - \beta)$ for increasing $N$

<br>


The tool we use is a so-called Monte Carlo simulation study. Concretely, we follow these steps (for a nice intro, see [Ch. 15 of "The Effect"](https://theeffectbook.net/ch-Simulation.html)):

1. Draw a random sample from the DGP defined above of size $N$.

2. Run OLS and save the coefficients.

3. Repeat 1. and 2. $R$ times.

4. Repeat 1. to 3. for a sequence of sample sizes $N$ increasing by a factor of four in each step.

<br>

We start with $N=10$, look at six sample sizes, set $R=5,000$ and focus on $\beta_1$.

```{r}
# Define parameters of the simulation
r = 5000
seq = 5
start = 10

# Initialize empty matrix to fill in estimated coefficient b1
results = matrix(NA,r,seq+1)
colnames(results) = rep("Temp",seq+1) # To be automatically replaced in loop below


for (i in 0:seq) { # Outer loop over sample sizes 
  n = 4^i * start # sample size for this loop
  colnames(results)[i+1] = paste0("n=",toString(n)) # save sample size
  for (j in 1:r) { # Inner loop over replications
    # Draw sample
    x = runif(n)
    y = cef(x) + runif(n,-1,1)
    # Estimate and directly save b1 coefficient
    results[j,i+1] = lm(y ~ x)$coefficients[2]
  }
}
```

<br>

#### $\hat{\beta}_N$

Now let's plot the distributions of the resulting coefficients for different sample sizes:


```{r}
# Get the data ready
tib = as_tibble(results) %>% pivot_longer(cols = starts_with("n="), names_to = "N", names_prefix = "n=", values_to = "betas1") %>% mutate(N = as_factor(N))
# Plot
tib %>% ggplot(aes(x = betas1, y = fct_rev(N), fill = N)) +
  geom_density_ridges() + ylab("N") + xlab(TeX(r'($\hat{\beta}_1$)')) +
  geom_vline(xintercept = b1)

```

As a side effect, this picture provides intuition about two other important concepts:

1. **Unbiasedness:** The estimator distribution is centered around the true value $\Rightarrow E[\hat{\beta}_{1N}] = \beta_1$ for every sample size (each row in the graph).

2. **Consistency:** The distributions concentrate more and more tightly around the true value as the sample size increases (see the quick check right after this list).
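
A quick numerical check of both points (a sketch, not part of the original notebook): the column means of the saved coefficients should be close to $\beta_1 = 2$ for every sample size (unbiasedness), while their standard deviations should shrink as $N$ grows (consistency).

```{r}
# Sketch: means close to b1 = 2 illustrate unbiasedness,
# shrinking standard deviations illustrate consistency
round(rbind(mean = colMeans(results), sd = apply(results, 2, sd)), 3)
```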

<br>

#### $\hat{\beta}_N - \beta$

Now we center the distributions around zero:

```{r}
tib %>% mutate(dev = betas1 - b1) %>% ggplot(aes(x = dev, y = fct_rev(N), fill = N)) +
  geom_density_ridges() + ylab("N") + 
    xlab(TeX(r'($\hat{\beta}_1 - \beta_1$)')) +
  geom_vline(xintercept = 0)
```

Of course, this does not change the pattern that the distributions become more compact with larger sample size.

<br>

#### $\sqrt{N}(\hat{\beta}_N - \beta)$

However, if we multiply each distribution by the square root of the respective sample size, tadaaa

```{r, message = FALSE}
tib %>% mutate(sqrtn_dev = sqrt(as.numeric(as.character(N))) * (betas1 - b1)) %>% 
  ggplot(aes(x = sqrtn_dev, y = fct_rev(N), fill = N)) +
  geom_density_ridges() + ylab("N") +
   xlab(TeX(r'($\sqrt{N}(\hat{\beta}_1 - \beta_1)$)'))
```

they become very similar-looking normal distributions.

This means $\sqrt{N}$ is exactly the factor that offsets the speed at which the distributions above become more compact: $\hat{\beta}_N - \beta$ shrinks towards zero at rate $\sqrt{N}$, so blowing it up by $\sqrt{N}$ is exactly what is needed to keep its distribution stable.
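
As an additional sanity check (a small sketch that is not part of the original notebook): under this DGP the asymptotic standard deviation of $\sqrt{N}(\hat{\beta}_1 - \beta_1)$ is $\sqrt{Var(U)/Var(X)} = \sqrt{(1/3)/(1/12)} = 2$, so the simulated standard deviations should be close to 2, especially for larger $N$.

```{r}
# Sketch: compare the simulated standard deviation of sqrt(N) * (b1_hat - b1)
# to its asymptotic value sqrt(Var(U)/Var(X)) = sqrt((1/3)/(1/12)) = 2
tib %>%
  mutate(sqrtn_dev = sqrt(as.numeric(as.character(N))) * (betas1 - b1)) %>%
  group_by(N) %>%
  summarise(sd_sqrtn_dev = sd(sqrtn_dev))
```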

To illustrate this further, vary `delta` in the following code snippet and observe that the distributions still narrow if it is smaller than 0.5, e.g. 0.3 ...

```{r, message = FALSE}
delta = 0.3
  tib %>% mutate(dev = (as.numeric(as.character(N)))^delta * (betas1 - b1)) %>% 
    ggplot(aes(x = dev, y = fct_rev(N), fill = N)) +
    geom_density_ridges() + ylab("N") + 
    xlab(TeX(r'($N^{3/10}(\hat{\beta}_1 - \beta_1)$)'))
```

... and that the distributions explode if we set it larger than 0.5, e.g. 0.7:

```{r, message = FALSE}
delta = 0.7
  tib %>% mutate(dev = (as.numeric(as.character(N)))^delta * (betas1 - b1)) %>% 
    ggplot(aes(x = dev, y = fct_rev(N), fill = N)) +
    geom_density_ridges() + ylab("N") +
    xlab(TeX(r'($N^{7/10}(\hat{\beta}_1 - \beta_1)$)'))
```

<br>

Another way to illustrate how multiplying by $\sqrt{N}$ stabilizes the distribution of the deviations is to look at the standard deviation of $N^\delta(\hat{\beta}_1 - \beta_1)$, which is $\sqrt{Var(N^\delta(\hat{\beta}_1 - \beta_1))} = \sqrt{N^{2\delta} \cdot Var(\hat{\beta}_1 - \beta_1)} = N^\delta \cdot sd(\hat{\beta}_1 - \beta_1)$. The following graph shows how $N^\delta \cdot sd(\hat{\beta}_1 - \beta_1)$ evolves for different $\delta$ as $N$ increases:

```{r}
new_tib = tib %>% mutate(dev = betas1 - b1, N = as.numeric(as.character(N)))  %>% group_by(N) %>% summarise(sd_dev = sqrt(var(dev))) 

delta_seq = c(0,0.3,0.4,0.5,0.55,0.6)
new_tib = cbind(as.matrix(new_tib), matrix(NA,nrow = dim(as.matrix(new_tib))[1],ncol =  length(delta_seq)))
colnames(new_tib) = c("N", "sd", rep("Temp",length(delta_seq)))

for (i in 1:length(delta_seq)) {
  delta = delta_seq[i]
  colnames(new_tib)[i+2] = paste0("d=",toString(delta))
  new_tib[,i+2] = new_tib[,1]^(delta) * new_tib[,2]
}

new_tib = as_tibble(new_tib) %>% pivot_longer(cols = starts_with("d="), names_to = "delta", names_prefix = "d=", values_to = "value") %>% mutate(delta = as_factor(delta))

new_tib %>% ggplot(aes(x = N, y = value, color = delta))+
  geom_line(aes(size = delta)) + scale_size_manual(values = c(0.5,0.5, 0.5, 2,0.5,0.5)) + 
    ylab(TeX(r'($N^\delta \cdot sd(\hat{\beta_1} - \beta_1)$)'))

```

With $\delta < 1/2$, $N^\delta \cdot sd(\hat{\beta}_1 - \beta_1)$ still goes to zero. Only when multiplying by $N^{1/2}$ (thick line) does the standard deviation remain constant across sample sizes $\Rightarrow$ the distribution is stabilized. Multiplying by $N^\delta$ with $\delta > 1/2$ makes the standard deviation of the product grow with the sample size $\Rightarrow$ the distributions explode asymptotically.

<br>

#### $N(\hat{\beta}_1 - \beta_1)^2$

Note similarly that $N$ stabilizes the distribution of the squared difference

```{r}
tib %>% mutate(n_dev_sq = as.numeric(as.character(N)) * (betas1 - b1)^2) %>% 
  ggplot(aes(x = n_dev_sq, y = fct_rev(N), fill = N)) +
  geom_density_ridges() + ylab("N") +
  xlab(TeX(r'($N(\hat{\beta}_1 - \beta_1)^2$)'))
```

<br>

#### $\sqrt{N(\hat{\beta}_1 - \beta_1)^2}$

... and $\sqrt N$ stabilizes the distribution of the square root of the squared difference:

```{r}
tib %>% mutate(sqrtn_dev = sqrt( as.numeric(as.character(N)) * (betas1 - b1)^2) ) %>% 
  ggplot(aes(x = sqrtn_dev, y = fct_rev(N), fill = N)) +
  geom_density_ridges() + ylab("N") +
   xlab(TeX(r'($\sqrt{N(\hat{\beta}_1 - \beta_1)^2}$)'))

```

<br>


### Convergence rate of OLS - RMSE of fitted/predicted values

In this course, what matters is how fast the fitted/predicted values of a method converge to the CEF, not how fast a single coefficient converges.

Therefore note that $\sqrt{N}$ convergence of the parameters implies $\sqrt{N}$ convergence of the root mean squared error (RMSE) $$RMSE = \sqrt{\frac{1}{N}\sum_i(X_i'\hat{\beta}_N - X_i'\beta)^2}$$ because $X_i'\hat{\beta}_N - X_i'\beta = X_i'(\hat{\beta}_N - \beta)$, so the RMSE inherits the $N^{-1/2}$ rate of $\hat{\beta}_N - \beta$.
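
A minimal sketch (not part of the original notebook) that verifies this link on the last simulated sample still in memory: the RMSE of the fitted values equals the quadratic form $\sqrt{(\hat{\beta}_N - \beta)'\frac{X'X}{N}(\hat{\beta}_N - \beta)}$, which is of order $N^{-1/2}$ whenever $\hat{\beta}_N - \beta$ is.

```{r}
# Sketch: RMSE of fitted values as a quadratic form in (beta_hat - beta).
# Uses the x, y, cef, b0, b1 objects left over from the loop above.
fit = lm(y ~ x)
rmse_direct = sqrt(mean((fitted(fit) - cef(x))^2))
X = cbind(1, x)                              # design matrix with intercept
dev = coef(fit) - c(b0, b1)                  # beta_hat - beta
rmse_quadform = sqrt(drop(t(dev) %*% (crossprod(X) / length(y)) %*% dev))
c(direct = rmse_direct, quadratic_form = rmse_quadform)  # identical up to rounding
```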

To illustrate this, we repeat the simulation above, but instead of saving the coefficient, we save the RMSE of each run.

```{r}
# Initialize empty matrix to fill in RMSE
results = matrix(NA,r,seq+1)
colnames(results) = rep("Temp",seq+1) # To be automatically replaced in loop below

for (i in 0:seq) { # Outer loop over sample sizes 
  n = 4^i * start # sample size for this loop
  colnames(results)[i+1] = paste0("n=",toString(n)) # save sample size
  for (j in 1:r) { # Inner loop over replications
    # Draw sample
    x = runif(n)
    y = cef(x) + runif(n,-1,1)
    # Save RMSE of this replication
    results[j,i+1] = sqrt( mean( (lm(y ~ x)$fitted.values - cef(x))^2 ) )
  }
}
```

Plot the RMSE for different sample sizes:

```{r}
tib = as_tibble(results) %>% pivot_longer(cols = starts_with("n="), names_to = "N", names_prefix = "n=", values_to = "rmse") %>% mutate(N = as_factor(N))
tib %>% ggplot(aes(x = rmse, y = fct_rev(N), fill = N)) +
  geom_density_ridges() + ylab("N") + xlab("RMSE")
```

The RMSE distributions move closer to zero as the sample size grows.

To see that the RMSE distributions also converge at rate $\sqrt{N}$, we once more blow up the raw distributions by $\sqrt{N}$ and observe that they stabilize:

```{r}
tib %>% ggplot(aes(x = sqrt( as.numeric(as.character(N)) ) * rmse , y = fct_rev(N), fill = N)) +
  geom_density_ridges() + ylab("N") + xlab(TeX(r'($\sqrt{N}\times RMSE$)'))
```

<br>

This illustrates the (for me) most intuitive implication of $\sqrt{N}$ convergence: the RMSE halves when we quadruple the sample size.

Check the means of the RMSE distributions

```{r}
mrmse = round(colMeans(results),3)
mrmse
```

and realize that they indeed nearly perfectly halve:

```{r}
round(mrmse/lag(mrmse),3)
```
 
<br>
<br>

*Ideas for DIY extensions:*

- Use different sequences of $N$

- Check also $\beta_0$

- Illustrate that the mean squared error (MSE) converges at rate $N$ (a minimal sketch follows after this list)

- Adapt the DGP, e.g. different parameter values, more than one covariate, different error term distribution, ...
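
A minimal sketch for the MSE bullet above (it reuses the `mrmse` object computed two chunks earlier): since MSE = RMSE$^2$ and the RMSE roughly halves when $N$ quadruples, the mean MSE should roughly drop to a quarter.

```{r}
# Sketch: rate-N convergence of the MSE - quadrupling N should roughly
# quarter the mean MSE (compare to the halving of the RMSE above)
mmse = mrmse^2
round(mmse / lag(mmse), 3)
```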

<br>
<br>

### Kernel regression convergence rates

Recall that we specified a linear CEF and therefore OLS is supposed to work very well. However, we could also use nonparametric methods that do not assume linearity. They differ from OLS in several ways:

- They do not estimate global parameters. Convergence in terms of $\beta$s can therefore not be investigated.

- They converge more slowly than $\sqrt{N}$ because they do not impose the assumption that the world is linear and have to figure out the shape of the CEF on their own.

- They are computationally more expensive, so we run fewer replications and sample sizes than with OLS.

We run the same simulation study as above, but using kernel regression with cross-validated bandwidth, which should converge at rate $N^{2/5}$. We print the fitted curve of the first replication of each sample size to illustrate the fit.


```{r, results='hide'}
# Set simulation parameters
r = 100 # Decrease for faster running time, increase for more precise results
seq = 4

# Initialize empty matrix to fill in RMSE
results = matrix(NA,r,seq+1)
colnames(results) = rep("Temp",seq+1) # To be automatically replaced in loop below

for (i in 0:seq) { # Outer loop over sample sizes 
  n = 4^i * start # sample size for this loop
  colnames(results)[i+1] = paste0("n=",toString(n)) # save sample size
  for (j in 1:r) { # Inner loop over replications
    # Draw sample    
    x = runif(n)
    y = cef(x) + runif(n,-1,1)
    
    # Cross-validate bandwidth, but only once for computational reasons
    if(j==1) bwobj = npregbw(ydat = y, xdat = x,  regtype = 'lc', bwmethod = 'cv.ls')
    # Run Kernel regression
    model = npreg(tydat = y, txdat = x, bws=bwobj$bw, regtype = 'lc')
    if(j==1) plot(model, main = paste0("N=",toString(n)))
    results[j,i+1] = sqrt( mean( (fitted(model) - cef(x))^2 ) )
  }
}
```

The RMSE of kernel regression also decreases with higher $N$

```{r}
mrmse = round(colMeans(results),3)
mrmse
```

BUT at a slower rate (as expected)
 
```{r}
round(mrmse/lag(mrmse),3)
```


Note that the theoretical decrease for quadrupling sample size is
```{r}
# RMSE ~ N^(-2/5), so quadrupling N multiplies it by (4n)^(-2/5) / n^(-2/5) = 4^(-2/5)
n^(2/5) / (4*n)^(2/5)
```

which is roughly what we observe (the more replications you can computationally afford, the closer it should get).

<br>
<br>

## Non-linear world

So far, we created a world where OLS shines and shows (the best possible) $\sqrt{N}$ convergence of coefficients and RMSE.

BUT why should the world be linear? To make the point, let's push it and assume the following definitely non-linear DGP: $$Y = \underbrace{\sin(X)}_{CEF} + U$$
where 
$$X \sim uniform(-1.43 \pi,1.43 \pi);~U \sim \mathcal{N}(0,1)$$


```{r}
n = 500        # Sample size for illustration

# Define conditional expectation function
cef = function(x){sin(x)}

# Draw sample
x = runif(n,-pi*1.43,pi*1.43)
y = cef(x) + rnorm(n,0,1)

# Plot sample
df = data.frame(x=x,y=y)
ggplot(df) + stat_function(fun=cef,size=1) + 
           geom_point(aes(x=x,y=y),color="blue",alpha = 0.4)
```


The DGP is engineered to trick OLS into estimating $\beta_0 = \beta_1 = 0$, exploiting basically no information in the dataset:

```{r}
summary(lm(y~ x))
```
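
Why does this work? Over the symmetric interval $[-1.43\pi, 1.43\pi]$, the population regression slope $Cov(X, \sin(X))/Var(X)$ is approximately zero ($1.43\pi$ is very close to a root of $\tan(a) = a$). A quick numerical sketch (not part of the original notebook):

```{r}
# Sketch: approximate the population OLS slope of sin(X) on X for
# X ~ uniform(-1.43*pi, 1.43*pi) using a fine grid over the support
a = 1.43 * pi
xx = seq(-a, a, length.out = 100000)
cov(xx, sin(xx)) / var(xx)   # essentially zero
```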

<br>

Now we run a simulation study with this DGP and compare the evolution of the RMSE between OLS and kernel regression:

```{r, results='hide'}
r = 100 # Decrease for faster running time, increase for more precise results
seq = 4

# Initialize empty matrices to fill in RMSE
results_ols = results_kr = matrix(NA,r,seq+1)
Ns = rep(NA,seq+1)

for (i in 0:seq) { # Outer loop over sample sizes 
  n = 4^i * start # sample size for this loop
  Ns[i+1] = n
  for (j in 1:r) { # Inner loop over replications
    # Draw sample
    x = runif(n,-pi*1.43,pi*1.43)
    y = cef(x) + rnorm(n,0,1)
    
    # OLS
    results_ols[j,i+1] = sqrt( mean( (lm(y ~ x)$fitted.values - cef(x))^2 ) )
    
    # Kernel
    bwobj = npregbw(ydat = y, xdat = x, regtype = 'lc', bwmethod = 'cv.ls')
    model = npreg(tydat = y, txdat = x, bws=bwobj$bw, regtype = 'lc')
    if(j==1) plot(model, main = paste0("N=",toString(n)))
    results_kr[j,i+1] = sqrt( mean( (fitted(model) - cef(x))^2 ) )
  }
}
```


OLS with only the main effect has no chance and converges to a completely wrong CEF, such that the RMSE does not decrease with increasing sample size. Kernel regression, on the other hand, steadily but slowly improves as we increase the sample size:

```{r}
tibble(N = Ns, ols = colMeans(results_ols), kernel = colMeans(results_kr)) %>%
      pivot_longer(names_to = "Estimator",values_to = "rmse", cols = -N) %>%
      ggplot(aes(x=N,y=rmse,color = Estimator)) + geom_line()  + geom_point() +
      geom_hline(yintercept = 0)
```

Indeed, the convergence rate of kernel regression in this simulation is close to the theoretical `r round(n^(2/5) / (4*n)^(2/5),3)` (and would probably come closer if we increased `r`):
 
```{r}
round(colMeans(results_kr)/lag(colMeans(results_kr)),3)
```

<br>
<br>


**Ideas for DIY extensions**:

- Illustrate that the coefficients of OLS converge at $\sqrt{N}$ to zero?

- How does OLS perform if we add squared, cubic,... terms? (A minimal sketch follows below.)
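
A minimal sketch for the last bullet (it reuses the `x`, `y` and `cef` objects from the final replication of the loop above; the polynomial degree 5 is an arbitrary choice): OLS with polynomial terms gets much closer to the sine-shaped CEF than the purely linear specification.

```{r}
# Sketch: OLS with a degree-5 polynomial in x versus the linear fit,
# compared in terms of RMSE against the true CEF
fit_lin  = lm(y ~ x)
fit_poly = lm(y ~ poly(x, 5))
c(linear     = sqrt(mean((fitted(fit_lin)  - cef(x))^2)),
  polynomial = sqrt(mean((fitted(fit_poly) - cef(x))^2)))
```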