# Blog Posts

## Simple Linear Regression

In this lesson, you will:

- Identify regression applications
- Learn how regression works
- Apply regression to problems using Python

Machine learning is frequently split into supervised and unsupervised learning. Regression, which you will be learning about in this lesson (and its extensions in later lessons), is an example of supervised machine learning. In supervised machine learning, you are interested in predicting a label for your data. Commonly, you might want to predict fraud, customers that will buy a product, or home values in an area. In unsupervised machine learning, you are interested in clustering data together that isn't already labeled. This is covered in more detail in the Machine Learning Engineer Nanodegree. However, we will not be going into the details of these algorithms in this course.…
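Since regression will be applied in Python later in the lesson, here is a minimal sketch of the idea using NumPy's `polyfit`; the dataset and all numbers are made up for illustration:

```python
import numpy as np

# Toy supervised-learning example (numbers are made up):
# predict a home's price from its area
area = np.array([1000, 1500, 2000, 2500])   # square feet
price = np.array([200, 290, 410, 500])      # price in $1000s

# Fit price = slope * area + intercept by least squares
slope, intercept = np.polyfit(area, price, deg=1)

# Predict the price of an unseen 1800 sq ft home
predicted = slope * 1800 + intercept
```

The fitted line is the "label predictor" in the supervised-learning sense: it maps a feature (area) to a predicted label (price).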

## Case Study: A/B Tests

A/B tests are used to test changes on a web page by running an experiment where a control group sees the old version, while the experiment group sees the new version. A metric is then chosen to measure the level of engagement from users in each group. These results are then used to judge whether one version is more effective than the other. A/B testing is very much like hypothesis testing with the following hypotheses:

- Null Hypothesis: the new version is no better, or even worse, than the old version
- Alternative Hypothesis: the new version is better than the old version

If we fail to reject the null hypothesis, the results would suggest keeping the old version. If we reject the null hypothesis, the results would suggest launching…
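One common way to carry out such a test is by simulation; the sketch below uses a permutation-style approach, with made-up engagement rates and group sizes:

```python
import numpy as np

np.random.seed(0)
# Hypothetical click data: 1 = engaged, 0 = not (rates and sizes are made up)
control = np.random.binomial(1, 0.10, size=5000)     # old version
experiment = np.random.binomial(1, 0.12, size=5000)  # new version
obs_diff = experiment.mean() - control.mean()

# Simulate the null hypothesis: the versions are equally effective,
# so group labels shouldn't matter (shuffle and re-split)
pooled = np.concatenate([control, experiment])
null_diffs = []
for _ in range(5000):
    np.random.shuffle(pooled)
    null_diffs.append(pooled[:5000].mean() - pooled[5000:].mean())

# p-value: how often a null difference is at least as large as observed
p_value = (np.array(null_diffs) >= obs_diff).mean()
```

A small p-value would lead us to reject the null hypothesis and favor launching the new version.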

## Hypothesis Testing

Rules for setting up null and alternative hypotheses:

- $H_0$ is true before you collect any data.
- $H_0$ usually states there is no effect or that two groups are equal.
- $H_0$ and $H_1$ are competing, non-overlapping hypotheses.
- $H_1$ is what we would like to prove to be true.
- $H_0$ contains an equal sign of some kind: either $=$, $\leq$, or $\geq$.
- $H_1$ contains the opposite of the null: either $\neq$, $>$, or $<$.

You saw that the statement "innocent until proven guilty" suggests the following hypotheses:

- $H_0$: Innocent
- $H_1$: Guilty

We can relate this to the idea that "innocent" is true before we collect any data. Then the alternative must be a competing, non-overlapping hypothesis. Hence, the alternative hypothesis is that an individual is guilty. Because…
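A minimal sketch of how such hypotheses get tested, simulating the sampling distribution as if the null were true; all the numbers below are illustrative, not course data:

```python
import numpy as np

np.random.seed(42)
# Illustrative one-sided test: H0: mu <= 67.6  vs  H1: mu > 67.6
null_mean = 67.6     # value asserted by H0 (made-up number)
sample_mean = 67.9   # mean observed in a hypothetical sample
std_error = 0.3      # assumed standard error of the mean

# Simulate the sampling distribution under H0
null_vals = np.random.normal(null_mean, std_error, size=10000)

# p-value: probability of a sample mean this extreme if H0 were true
p_value = (null_vals > sample_mean).mean()
```

Note how the direction of the comparison (`>`) mirrors the sign in $H_1$, per the rules above.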

## Confidence Intervals – Udacity

```python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

np.random.seed(42)
full_data = pd.read_csv('../data/coffee_dataset.csv')
sample_data = full_data.sample(200)

diffs = []
for _ in range(10000):
    bootsamp = sample_data.sample(200, replace=True)
    coff_mean = bootsamp[bootsamp['drinks_coffee'] == True]['height'].mean()
    nocoff_mean = bootsamp[bootsamp['drinks_coffee'] == False]['height'].mean()
    diffs.append(coff_mean - nocoff_mean)
np.percentile(diffs, 0.5), np.percentile(diffs, 99.5)
# statistical evidence coffee drinkers are on average taller

diffs_age = []
for _ in range(10000):
    bootsamp = sample_data.sample(200, replace=True)
    under21_mean = bootsamp[bootsamp['age'] == '<21']['height'].mean()
    over21_mean = bootsamp[bootsamp['age'] != '<21']['height'].mean()
    diffs_age.append(over21_mean - under21_mean)
np.percentile(diffs_age, 0.5), np.percentile(diffs_age, 99.5)
# statistical evidence that over21 are on average taller

diffs_coff_under21 = []
for _ in range(10000):
    bootsamp = sample_data.sample(200, replace=True)
    under21_coff_mean = bootsamp.query("age == '<21'…
```

## Statistics – Udacity

**Descriptive Statistics**

Descriptive statistics is about describing our collected data using the measures discussed throughout this lesson: measures of center, measures of spread, the shape of our distribution, and outliers. We can also use plots of our data to gain a better understanding.

**Inferential Statistics**

Inferential statistics is about using our collected data to draw conclusions about a larger population. Performing inferential statistics well requires that we take a sample that accurately represents our population of interest. A common way to collect data is via a survey. However, surveys may be extremely biased depending on the types of questions that are asked, and the way the questions are asked. This is a topic you should think about when tackling the first project. We…
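The measures of center and spread mentioned above can be computed directly with NumPy; a quick sketch with a made-up sample:

```python
import numpy as np

# A small made-up sample containing one outlier (42)
data = np.array([2, 4, 4, 4, 5, 5, 7, 9, 42])

# measures of center
mean, median = data.mean(), np.median(data)

# measures of spread
std = data.std(ddof=1)                                   # sample standard deviation
iqr = np.percentile(data, 75) - np.percentile(data, 25)  # interquartile range

# the outlier pulls the mean well above the median,
# which is why the median is often preferred for skewed data
```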

## Probability – Udacity

Here you learned some fundamental rules of probability. Using notation, we could say that the outcome of a coin flip could either be T or H for the event that the coin flips tails or heads, respectively. Then the following rules are true:

- $P(H) = 0.5$
- $1 - P(H) = P(\text{not } H) = 0.5$, where $\text{not } H$ is the event of anything other than heads. Since there are only two possible outcomes, we have $P(\text{not } H) = P(T) = 0.5$. In later concepts, you will see this with the following notation: $\neg H$.

Across multiple coin flips, we have the probability of seeing $n$ heads as $P(H)^n$. This is because these events are independent. We can get two generic rules from this:

- The probability of any event must be between…
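These rules are easy to check by simulation; a quick NumPy sketch (the simulation itself is illustrative, not part of the course notes):

```python
import numpy as np

np.random.seed(42)
# Simulate 100,000 sets of 3 fair coin flips (1 = heads, 0 = tails)
flips = np.random.randint(2, size=(100000, 3))

p_h = (flips[:, 0] == 1).mean()          # close to P(H) = 0.5
p_not_h = 1 - p_h                        # P(not H) = P(T) = 1 - P(H)
p_hhh = (flips.sum(axis=1) == 3).mean()  # close to P(H)**3 = 0.125,
                                         # since the flips are independent
```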

## Project Notes – Udacity

Jupyter shortcuts (command mode):

- `b` to insert a new cell below
- `x` to cut (delete) a cell
- `m` to convert a cell to Markdown

```python
# axis=1 so that the column names are being used
# before dropping, you could chart the data to learn what is missing;
# make sure it's not different from the general dataframe
df.hist(figsize=(10, 8))
df[df.Age.isnull()].hist(figsize=(10, 8))
df.fillna(df.mean(), inplace=True)

# Was fare associated with survival?
# create masks for rows that survived and rows that died
survived = df.Survived == True
died = df.Survived == False
df.Fare[survived].mean()
df.Fare[died].mean()
df.Fare[survived].hist(alpha=0.5, bins=20, label='survived')
df.Fare[died].hist(alpha=0.5, bins=20, label='died');
# semicolon on the last line so that text doesn't pop up
# alpha makes the bars more transparent
# bins is the number of columns

df.groupby('Pclass').Survived.mean().plot(kind='bar');

df.Age[survived].hist(alpha=0.5, bins=20, label='survived')
df.Age[died].hist(alpha=0.5, bins=20, label='died');

df.groupby('Sex').Survived.mean().plot(kind='bar');
df.groupby('Sex')['Pclass'].value_counts()
df.query('Sex == "female"')['Fare'].median(), df.query('Sex == "male"')['Fare'].median()
df.groupby(['Pclass', 'Sex']).Survived.mean().plot(kind='bar');

df.SibSp[survived].value_counts().plot(kind='bar', alpha=0.5, color='blue', label='survived')
df.SibSp[died].value_counts().plot(kind='bar', alpha=0.5,…
```

## Data Analysis Process – Case Study 2 – Udacity

**Cleaning Column Labels**

1. Drop extraneous columns

   Drop features that aren't consistent (not present in both datasets) or aren't relevant to our questions. Use Pandas' `drop` function.

2. Rename columns

   Change the "Sales Area" column label in the 2008 dataset to "Cert Region" for consistency. Rename all column labels to replace spaces with underscores and convert everything to lowercase. (Underscores can be much easier to work with in Python than spaces. For example, having spaces wouldn't allow you to use `df.column_name` instead of `df['column_name']` to select columns, or to use `query()`. Being consistent with lowercase and underscores also helps make column names easy to remember.)

```python
# load datasets
import pandas as pd
df_08 = pd.read_csv('all_alpha_08.csv')
df_18 = pd.read_csv('all_alpha_18.csv')

# view 2008 dataset
df_08.head(1)

# view 2018 dataset
df_18.head(1)
```

Drop…
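The renaming step can be sketched with Pandas' `rename`; the toy one-row frame below stands in for the real `all_alpha_08.csv` (its column values are made up):

```python
import pandas as pd

# Toy frame standing in for the 2008 dataset
df_08 = pd.DataFrame({'Sales Area': ['CA'], 'Fuel Economy': [24]})

# 2008-only fix: align "Sales Area" with the 2018 "Cert Region" label
df_08.rename(columns={'Sales Area': 'Cert Region'}, inplace=True)

# replace spaces with underscores and lowercase all column labels
df_08.rename(columns=lambda c: c.strip().lower().replace(' ', '_'), inplace=True)
```

After this, columns can be selected with attribute access, e.g. `df_08.cert_region`.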