Notes from the book “Hooked”

I re-read the book Hooked by Nir Eyal and these are my notes.

1/ The key question that the book answers is: how do you make habit-forming products? Its answer is a model that involves four stages: a) trigger; b) action; c) variable reward; d) investment.

2/ Why should products be habit-forming? It’s because only those products that become part of someone’s daily life go on to become valuable. So, whether a product has habit-formation potential is a leading indicator of whether the product will turn out to be valuable. ...  Read the entire post →

How to increase wealth for everyone

I’ve started reading The Capitalist Manifesto, a book full of data on how free markets generate prosperity.

The author is nuanced and uses data and logic to show how we should be rooting for capitalism (instead of bringing it down).

Income and wealth inequality created as a function of entrepreneurship is not a bad thing per se. Risk-takers and innovators earn a higher income only when they create much more value for others.

Nobody hands over a million dollars to someone else for free. Profit earned by someone in free markets is typically a surplus made available to them because they expanded the pie for everyone. (And not because they stole it from someone else. Capitalism is NOT a zero-sum game.) ...  Read the entire post →

Notes from the book “Decoding the Why”

Finished (well, skimmed) the book “Decoding the Why: How Behavioral Science is Driving the Next Generation of Product Design”. It’s a decent introductory book if you have not delved into behavioral science before, but if you have, most of the ideas in the book won’t come across as surprising or new.

Nevertheless, made some quick notes.

1/ Imagining the future as concretely as possible helps us align our actions towards the future self (otherwise, the future self remains an abstract, theoretical concept).

So, whenever you can, help people imagine the benefit of making a change in their life via your product. ...  Read the entire post →

Notes from the book “Magic Words”

Just finished reading Magic Words by Jonah Berger and my notes follow.

1/ Most of us spend a significant amount of time assembling the thoughts we want to convey, but take the words used to convey them as a given.

2/ This book – written by a professor of business – takes the view that the specific words we choose to communicate have an enormous impact on the listener.

3/ Since words paint an image in the listener’s head, the same idea presented differently can paint a very different image. ...  Read the entire post →

Games are problems people pay to solve

Good definitions are powerful. Lately, while reading The Art of Game Design, it became clear to me that the author’s definition of games makes a lot of sense: he defines games as problems that people pay to solve, with either their time or their money.

Unlike movies or books, games are not passive: they require active participation, and in that sense, they’re problems to be solved. And the fact that we willingly pay (with time or money) to solve those problems is fascinating.

Even though I’m not a gamer, I’m building a consumer startup in the behavior change domain, and that’s pushing me to study games. Specifically, I’m interested in exploring what it is about great games that makes people spend many hundreds of hours trying to master them, while most consumer experiences (including courses purchased or apps installed) lose ~90% of users on day 1 itself.  ...  Read the entire post →

Notes from the book “The Shallows”

Just finished the book that’s often called a deeper, more philosophical version of the popular book Deep Work by Cal Newport.

The Shallows by Nicholas Carr explores what the Internet is doing to our brains. Even though the book was written in 2010, it’s eerily accurate about what most of us have come to feel about the effects of the Internet and social media.

The most significant point the book drives home is that while the Internet makes us feel smarter, it is actually making us dumber. This apparent paradox is resolved when you get to the neuroscience of how our brain processes information. ...  Read the entire post →

Notes from the book “The Mind is Flat”

I finished reading the book “The Mind is Flat” by Nick Chater and here are my notes.

1/ The core idea of the book is that our thoughts are not reflections of a deeper self. Rather, they’re real-time fabrications of a mind trying to be consistent with its previous fabrications.

2/ In this sense, the book argues quite convincingly, there’s no “unconscious”. There are no deep motivations guiding your conscious self. The conscious, real-time thoughts are all you’ve got. ...  Read the entire post →

How to be a Bayesian

Just finished this book on the history of Bayes’ theorem and I highly recommend it.

In case you’re wondering what it is, keep reading.

(A thread on Bayes’ theorem)

1/ Statistics is all about calculating probabilities, and there are two camps that interpret probability differently.

Frequentists = probability is the long-run frequency of an event over many trials

Bayesians = probability is a subjective degree of belief in the outcome of an event

2/ This philosophical divide informs what these two camps usually bother with.

Frequentists = probability of data, given a model (of how data could have been generated)

Bayesians = probability of model, given the data

3/ Most often we care about the latter question.

E.g., given that a mammography test is positive, we want to know the probability of having breast cancer.

And not, given breast cancer, the probability of the test being positive.
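
A quick numeric sketch of this, applying Bayes’ theorem with assumed, illustrative numbers (a classic textbook setup, not figures from this book):

```python
# Assumed, illustrative numbers -- not from the book.
p_cancer = 0.01              # prior: 1% of women screened have breast cancer
p_pos_given_cancer = 0.80    # sensitivity: P(positive | cancer)
p_pos_given_healthy = 0.096  # false-positive rate: P(positive | no cancer)

# Total probability of a positive test (law of total probability)
p_pos = (p_pos_given_cancer * p_cancer
         + p_pos_given_healthy * (1 - p_cancer))

# Bayes' theorem: P(cancer | positive)
p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos

print(f"P(positive | cancer) = {p_pos_given_cancer:.2f}")  # 0.80
print(f"P(cancer | positive) = {p_cancer_given_pos:.3f}")  # ~0.078
```

Even with a test that catches 80% of cancers, a positive result here implies only a ~8% chance of cancer, because the few true positives are swamped by false positives from the much larger healthy population.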

4/ These two questions sound similar but have different answers.  ...  Read the entire post →

The Outer Limits of Reason: A Summary

In the last 2 years, I have picked up my reading speed to about 5 books per month. And since I mostly read non-fiction, the concepts and ideas keep swirling in my head; sometimes I get a dizzy sense of not having completely understood what I just read. I’m talking about that odd feeling when someone asks you what the book is about and you realize that while you understood its main ideas, you still struggle to explain the details. So, taking “if you can’t explain it, you haven’t understood it” to its logical extreme, I will start summarizing books that I like. In addition to cementing my understanding of the book, hopefully it will help others decide whether they want to read it or not.

The Outer Limits of Reason by Noson S. Yanofsky

This book is written by Yanofsky, a professor in the Department of Computer and Information Science at Brooklyn College of the City University of New York. Most popular science books, like A Brief History of Time or The Selfish Gene, talk about what we know about specific subjects. This book, as the title says, describes problems and concepts that are inherently out of bounds for us. This fantastic, eye-popping perspective challenges the intellectual bravado of those who believe that progress in science will ultimately solve all problems and enable us to understand everything. What’s intriguing about the entire exercise of knowing what cannot be known is that the glimpse of the unknown must be had through the known. It’s like taking a telescope, pointing it at the sky, and observing a sign saying ‘there’s nothing to see here’.

The book does a great job of explaining concepts simply, and I’m going to try simplifying them even more in this summary. Instead of going in a random order, I’ll follow the order in which ideas appear in the book’s chapters.

Language Paradoxes

The book begins with a simple statement many of us might have come across earlier. Consider the sentence: “This sentence is false.” Is that sentence true or false? If we assume it’s true, then as per the sentence itself, it must be false. If we assume it’s false, it means it must be true. So we have an inconsistency, and it’s there because language is a human-made construct, full of inconsistencies. We can make up sentences that make no sense, and that sentence was an example of such meaningless sentences. So far, so good.

Let’s talk about a famous paradox called Russell’s Paradox. In the late 19th century, mathematicians working on set theory (notably Georg Cantor) believed that all of mathematics could be derived from set theory. For example, zero could be represented by an empty set (the one with no elements). Addition could simply be combining two sets and counting the number of elements in the combined set. Taking set theory as it prevailed in the late 1800s, Russell proposed a paradox that I’ll discuss soon.

Sets are simply collections of items. A set captures the idea that if you have a bunch of items in a group, you can ask questions about it or do a series of operations on it. For example, if you have a set of 5 oranges, you can ask how many items remain in the set if we take 1 orange from it, or you can combine that set of 5 oranges with a set of 5 mangoes and now you have a set of 10 items. With physical objects, sets are intuitive and easy to comprehend, but the key point is that sets are an abstraction that can be applied to any collection of any type of items. So even though you will never see an infinite number of oranges, you can still define an infinite set (say, the set of all whole numbers 0, 1, 2, 3, …).

There are some sets that contain themselves. The idea may seem strange, but remember we’re talking mathematics, and the idea of some sets containing themselves makes perfect sense. For example, define S as the set of all sets that contain more than 1 element. There are an infinite number of sets with more than 1 element (for example, you can define sets of two consecutive numbers: {1,2}, {2,3}, {3,4}, etc.). So S itself fits the definition of S (i.e. sets with >1 element), and hence S contains itself. For a simpler example, consider a book containing the names of all published books. That book must contain its own name.

Back to Russell’s paradox. What Russell did was to define R as the set of all sets that do not contain themselves. The question we must ask now is whether R contains itself. Thinking about this for a minute, you’ll realize that it leads to an inconsistency. If R contains itself, it cannot contain itself (because it’s the set of all sets that do not contain themselves). If R does not contain itself, then it must contain itself. Voila, there we have the mathematical version of the “this sentence is false” statement. Notice that here we can’t dismiss the paradox as easily: it’s built on logic, which is the foundation of science.
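
In symbols (a standard formalization, not notation from the book):

```latex
% Russell's set: the collection of all sets that are not members of themselves
R = \{\, x \mid x \notin x \,\}

% Asking whether R is a member of itself yields a contradiction either way:
R \in R \iff R \notin R
```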

Russell’s paradox, announced in 1901, threw mathematics into a crisis. It was rescued by adding more axioms (truths that mathematicians take as self-evident), and that led to a stronger version of set theory called Zermelo–Fraenkel set theory. But as we will see later, even that wasn’t enough, because Gödel later demonstrated that in any consistent formal system powerful enough to express arithmetic, there are statements that can neither be proved nor disproved.

Multiple types of infinities

One of the most interesting topics in mathematics is the topic of infinity. Even more interesting is the fact that there are multiple types of infinities. Well, what does that mean? Hang on.

Let’s start by counting. This seems like a simple enough exercise: if you have a few bananas, to count the total number, you mentally put them into a one-to-one correspondence with numbers. So you go… “1 banana (pointing to the first banana), 2 bananas (pointing to the second one), 3 bananas (you know the drill)”, and that’s how you know there are 3 bananas. Similarly, if we define the concept of infinity as the count of all natural numbers (1, 2, 3, 4, 5, …), then by putting other sets into one-to-one correspondence with the natural numbers, we can see which other sets are infinite too.

Consider the set of all even numbers. Since even numbers are a subset of natural numbers, it’s very odd to think that there are as many even numbers as there are natural numbers, but that’s exactly how it turns out through one-to-one correspondence. For each natural number, you can multiply it by 2 and get a corresponding even number, and you can keep doing this without ever running out of numbers. This means that both sets are of equal size, which is infinite. To distinguish it from other sets, this particular type of infinity is called countable infinity and is represented by ℵ₀ (pronounced “aleph zero”). It may seem surprising, but the set of all rational numbers (those of the form m/n where both m and n are natural numbers) is also the same size as the natural numbers. So 1/2, 1/3, 1/4, …, 2/3, 2/4, 2/5, …, 3/4, 3/5, 3/6, … is equal in size to 1, 2, 3, 4, 5, … The proof is simple; you can check it out here.
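
To make the pairings concrete, here’s a small sketch of both correspondences (the helper names are mine, purely for illustration): naturals to evens via n ↦ 2n, and naturals to positive rationals via Cantor’s diagonal-by-diagonal walk over the grid of fractions:

```python
from fractions import Fraction
from itertools import count, islice

# Correspondence 1: natural numbers <-> even numbers (n pairs with 2n).
# Every natural number gets exactly one even partner, so the sets match up.
def evens():
    for n in count(1):
        yield n, 2 * n

# Correspondence 2: natural numbers <-> positive rationals, walking the
# grid of fractions m/n one diagonal (m + n = constant) at a time and
# skipping duplicates like 2/4 that equal an earlier fraction (1/2).
def rationals():
    seen = set()
    for diagonal in count(2):            # m + n = 2, 3, 4, ...
        for m in range(1, diagonal):
            q = Fraction(m, diagonal - m)
            if q not in seen:
                seen.add(q)
                yield q

print(list(islice(evens(), 5)))      # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
print(list(islice(rationals(), 7)))  # 1, 1/2, 2, 1/3, 3, 1/4, 2/3, ...
```

The walk never gets stuck on an infinite row or column, which is exactly why every rational eventually gets a natural-number partner.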

The amazing thing is that an infinity larger than this countable infinity exists. And what’s even more amazing is that it can be proven. In order to do so, let’s talk about real numbers.

For those who need a little refresher, real numbers include all numbers on the number line. For this discussion, we can restrict ourselves to numbers between 0 and 1, but the argument holds for all real numbers out there. To prove that there exists a larger infinity, we will assume there isn’t one, i.e. that all real numbers can be put into correspondence with the natural numbers, and derive a contradiction. This is called the diagonalization proof.

To prove that the size of the real numbers is bigger than the size of the natural numbers, it’s sufficient to show that there exists a real number that can’t be put into correspondence with the natural numbers. To do that, first assume that there’s some fancy correspondence between the natural numbers (1, 2, 3, …) and the real numbers between 0 and 1. It’s not important what this correspondence is; the point is to assume that a perfect correspondence between real numbers and natural numbers exists, and then find a real number that isn’t contained in it. See below:

[Figure: Diagonal proof]

The proof simply demands that, for any correspondence, you take the first digit of the first row and change it to something else, take the second digit of the second row and change it to something else, and so on. The changed digits, when combined, make up a real number that isn’t contained in the original one-to-one correspondence (because it’s different from the first number, the second number, and so on; in fact, it’s different from all the listed numbers). Voila! We have a real number that isn’t contained in the correspondence, and hence we have proven that the set of real numbers is larger than the set of natural numbers. And since by changing the diagonal digits in different ways you can generate infinitely many new numbers, the set of real numbers is not just larger but vastly larger than the countably infinite set of natural numbers. This larger infinity is called an uncountable infinity, and its size is 2^ℵ₀. (This is because the set of real numbers can be put into one-to-one correspondence with the power set of the natural numbers, and the size of any power set is 2^(size of the set); e.g. the power set of {1,2} is {{}, {1}, {2}, {1,2}}. For more info, see cardinality of the continuum.)
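
Here’s a minimal sketch of the diagonal trick on a finite prefix (the digit strings below are made up for illustration):

```python
# The claimed enumeration: row n is supposed to be the real number in (0, 1)
# paired with natural number n. These digit strings are invented for the demo.
claimed_list = [
    "14159265",
    "71828182",
    "41421356",
    "57721566",
    "69314718",
]

def diagonal_escape(rows):
    """Build a number that differs from row n at its n-th digit."""
    digits = []
    for i, row in enumerate(rows):
        d = int(row[i])
        # Any change works; the full proof avoids producing 0s and 9s
        # to dodge the 0.999... = 1.000... ambiguity.
        digits.append(str((d + 1) % 10))
    return "0." + "".join(digits)

# The result differs from every listed number in at least one digit
# position, so it can't appear anywhere in the claimed correspondence.
print(diagonal_escape(claimed_list))  # 0.22535
```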

Now here’s the kicker, and the first time the book introduces something that’s truly beyond reason: the continuum hypothesis (CH), which postulates that there exists no set whose size is strictly between the size of the natural numbers (ℵ₀) and the real numbers (2^ℵ₀). Cantor put out this hypothesis in 1878 and spent many years trying to prove it. Then in 1940, Gödel proved that CH cannot be disproved within the axioms of set theory (ZFC, the Zermelo–Fraenkel set theory introduced above). And then in the 1960s, Paul Cohen proved that the axioms of set theory cannot prove CH either. In short, CH is independent of set theory. There have been many attempts to extend set theory, but they all seem artificial, since the current version of ZFC is foundational to much of modern mathematics, and anything we add to or modify in ZFC specifically to accommodate CH seems arbitrary (and, needless to say, impossibly difficult to do without making the theory inconsistent).

So, returning to the question: are there any sets whose size is between that of the natural numbers and the real numbers? That’s a question reason cannot answer. It’s also a reflection on the question of whether mathematics is “discovered” or “invented”. The immediate thought is to call it discovered, since many people arrive at the same ideas independently (e.g. the numbers 1, 2, 3 exist in virtually all cultures), but then the question arises: discovered from where? If mathematics is discovered, there’s a definite answer to CH and we just don’t know it (or can’t know it). If it’s invented, then the question is meaningless. All we can say is that CH is independent of ZFC set theory, and a question demanding an answer about CH in the context of ZFC doesn’t make sense.


Practically impossible problems to solve: computational complexity theory aka P=NP? ...  Read the entire post →