Self-Improvement, Books, Philosophy of Everyday Life

Categories: All Posts, Books

Turtles All the Way Down

by John Green. Read: 2017-05-31, Rating: 9/10. A gripping story about someone struggling with OCD. Bandaids, helicopter moms, Rich Boys, and explaining the universe. My Favourite Quotes: True terror isn’t being scared; it’s not having a choice in the matter. Whether it hurts is kind of irrelevant. You tell yourself I am not my thoughts …

Categories: All Posts, Books

Win Bigly

Persuasion in a World Where Facts Don’t Matter by Scott Adams. Read: 2017-04-12, Rating: 7/10. How to Fail at Almost Everything and Still Win Big is still one of my favourite books to recommend. I followed Scott Adams’s blog throughout the 2016 election race. This book repeats much of his blog content on the persuasion techniques …

Categories: All Posts, Udacity

git

Further Research: Git Internals – Plumbing and Porcelain (advanced – bookmark this and check it out later); Customizing Git – Git Hooks.

Git Init Recap: Use the git init command to create a new, empty repository in the current directory:

$ git init

Running this command creates a hidden .git directory. This .git directory is the brain/storage center for the repository. It …
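The recap above can be tried directly. This sketch uses a throwaway directory name (demo-repo is my own choice, not from the lesson) and lists what git init lays down:

```shell
# Create a new, empty repository in a fresh directory
mkdir demo-repo && cd demo-repo
git init

# The hidden .git directory is the repository's storage center
ls -a .git   # HEAD, config, objects/, refs/, hooks/, ...
```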

Categories: All Posts, Books

Oracle Bones

A Journey Through Time in China by Peter Hessler. Read: 2017-03-14, Rating: 8/10. As entertaining as River Town by the same author. An excellent way to see China from a westerner’s perspective. Book Notes: to follow certain individuals across this period, recording how their lives were shaped by a changing world. Polat means “steel” in the Uighur …

Categories: All Posts, Udacity

Multiple Linear Regression

In this lesson, you will be extending your knowledge of simple linear regression, where you were predicting a quantitative response variable using a quantitative explanatory variable. That is, you were using an equation that looked like this: \hat{y} = b_0 + b_1 x_1. In this lesson, you will learn about multiple linear regression. In these cases, …

Categories: All Posts, Udacity

Logistic Regression

Fitting Logistic Regression

import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv('./fraud_dataset.csv')
df.head()

1. As you can see, there are two columns that need to be changed to dummy variables. Replace each of the current columns with its dummy version. Use 1 for weekday and True, and 0 otherwise. Use the first …

Categories: All Posts, Udacity

Simple Linear Regression

In this lesson, you will:
- Identify Regression Applications
- Learn How Regression Works
- Apply Regression to Problems Using Python

Machine Learning is frequently split into supervised and unsupervised learning. Regression, which you will be learning about in this lesson (and its extensions in later lessons), is an example of supervised machine learning. In supervised machine learning, you are interested in predicting …

Categories: All Posts, Udacity

Case Study: A/B Tests

A/B tests are used to test changes on a web page by running an experiment where a control group sees the old version, while the experiment group sees the new version. A metric is then chosen to measure the level of engagement from users in each group. These results are then used to judge whether one version is more effective than …
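The comparison described above can be sketched as a permutation test on simulated click data. The 10% and 12% click-through rates and sample sizes are assumptions for illustration, not figures from the course:

```python
import numpy as np

# Simulated click data: control sees the old page, experiment the new one
rng = np.random.default_rng(1)
control = rng.binomial(1, 0.10, 1000)      # assumed 10% baseline rate
experiment = rng.binomial(1, 0.12, 1000)   # assumed 12% with the new page

obs_diff = experiment.mean() - control.mean()

# Simulate the null (no difference) by shuffling group labels
pooled = np.concatenate([control, experiment])
null_diffs = []
for _ in range(5000):
    rng.shuffle(pooled)
    null_diffs.append(pooled[1000:].mean() - pooled[:1000].mean())

# p-value: how often a label shuffle produces a difference this large
p_value = (np.array(null_diffs) >= obs_diff).mean()
print(obs_diff, p_value)
```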

Categories: All Posts, Udacity

Hypothesis Testing

Rules for setting up null and alternative hypotheses:
- H_0 is true before you collect any data.
- H_0 usually states there is no effect or that two groups are equal.
- H_0 and H_1 are competing, non-overlapping hypotheses.
- H_1 is what we would like to prove to be true.
- H_0 contains an equal sign of some kind – either =, ≤, or ≥.
- H_1 contains the opposition …
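These rules can be illustrated with a simulation-based test. Here H_0: μ ≥ 70 competes with H_1: μ < 70 (the null keeps the equal sign), and the sample numbers are made up for the sketch:

```python
import numpy as np

# Made-up sample whose true mean (68) is below the null value
rng = np.random.default_rng(0)
sample = rng.normal(68, 3, 40)

null_mean = 70
# Bootstrap the sampling distribution, then re-center it at the null value
boot_means = np.array([rng.choice(sample, 40).mean() for _ in range(10_000)])
null_dist = boot_means - sample.mean() + null_mean

# H_1 is "less than", so the p-value looks at the lower tail
p_value = (null_dist <= sample.mean()).mean()
print(p_value)
```

A small p-value here is evidence against H_0 and in favour of H_1.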

Categories: All Posts, Udacity

Confidence Intervals – Udacity

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

np.random.seed(42)
full_data = pd.read_csv('../data/coffee_dataset.csv')
sample_data = full_data.sample(200)

diffs = []
for _ in range(10000):
    bootsamp = sample_data.sample(200, replace=True)
    coff_mean = bootsamp[bootsamp['drinks_coffee'] == True]['height'].mean()
    nocoff_mean = bootsamp[bootsamp['drinks_coffee'] == False]['height'].mean()
    diffs.append(coff_mean - nocoff_mean)

np.percentile(diffs, 0.5), np.percentile(diffs, 99.5)  # statistical evidence …
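A self-contained variant of the same bootstrap can be run without coffee_dataset.csv. The heights below are synthetic, and the means and group sizes are assumptions made for the sketch:

```python
import numpy as np

# Synthetic heights standing in for the coffee dataset
rng = np.random.default_rng(42)
coff = rng.normal(68, 3, 120)     # assumed coffee drinkers' heights
nocoff = rng.normal(66, 3, 80)    # assumed non-drinkers' heights

# Bootstrap the difference in group means
diffs = []
for _ in range(10_000):
    bc = rng.choice(coff, coff.size)
    bn = rng.choice(nocoff, nocoff.size)
    diffs.append(bc.mean() - bn.mean())

# Middle 99% of the bootstrap distribution gives a 99% confidence interval
lower, upper = np.percentile(diffs, 0.5), np.percentile(diffs, 99.5)
print(lower, upper)
```

If the interval excludes zero, there is statistical evidence of a height difference between the two groups.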