How we do science determines what we discover

Ever heard of meta-science?

It’s the science of science. In this episode, I interview meta-scientist James A. Evans, who explains how science happens, why smaller teams produce big scientific breakthroughs, the similarities between startups and scientific endeavors, and what research shows about the path to success.

His research shows how science is not an automatic machine that keeps on generating truth, but is rather a system where social dynamics of how scientists interact with each other determine what they end up discovering.

About the guest

James Evans is the Director of Knowledge Lab, and faculty at the Sociology department at the University of Chicago. His research uses machine learning and large-scale data to understand how groups think and what they know. He studies collective efforts like science and Wikipedia to figure out how collective attention is distributed, how new ideas originate, and how shared habits of reasoning emerge.

What we talk about

0:04 – Introduction
1:49 – How does science happen?
5:47 – Why is science going through a replication crisis?
11:33 – Does science generate truths?
15:14 – How does truth get established in any field?
20:04 – Robustness vs. replicability
22:12 – Instead of open science, we should have more closed science
26:05 – Why we need diversity in scientific thought and process
28:40 – On risk in science, and why failure is actually success in scientific studies
29:34 – Funding in science studies vs. funding in start-ups
32:39 – How to push the system to increase the risk appetite in science
37:29 – On having a diverse portfolio of research directions to enable yourself to take more risks
43:40 – Is productivity in science declining?
47:28 – What prevents more breakthroughs in science?
50:05 – Increasing failure to increase successful scientific discoveries
55:34 – We need to treat new start-up theories charitably
58:20 – How is success achieved across various fields?
1:02:34 – Using data from failures to figure out success
1:04:35 – Differentiating what you do from what others do is the high ground of success
1:06:35 – Your research on the aesthetics of knowledge: what is it and why are you interested in it?

Dive into James Evans’s research

My notes and insights

1/ Meta-science, the science of science, aims to explore a simple but fundamentally important question: how do we run science better?

With big data and ML available today, studying how big breakthroughs happen in science has become a tractable problem.

2/ Meta-science is ultimately about creating leverage.

We can leave science to progress organically, or we can ask how to invest our money and time to accelerate scientific progress.

Ultimately, the knowledge gained through meta-science will help all fields (from cosmology to epidemiology), and that’s what makes it exciting and important.

3/ Science is not an automatic truth machine.

We can and should influence how science happens.

Like other fields, it is a human enterprise. And the output of this enterprise – scientific knowledge – depends on how you put humans together, their dynamics, their incentives.

4/ Why should we study the incentives of scientists?

Because we want scientific knowledge to be both ROBUST and REPLICABLE.

Replicable => a genuine truth, not a random result.
Robust => applies in a wide variety of contexts.

As you may know, there’s been a replication crisis in many scientific fields.

A high percentage of published findings in genetics, psychology, and other fields have failed to replicate.

See this paper: Why Most Published Research Findings Are False

5/ We can counter the replication crisis by making publishing criteria stricter, and that’s what has happened in many fields.

But such restrictions have led to boring, safe research.

When you cut off the variance, you cut it out from both ends.

6/ @profjamesevans says it is a “perceived” replication crisis, not an actual one, because replication has always been hard.

It’s not a new phenomenon.

Science is a messy process, and it doesn’t always produce nice, clean, generalizable truths about the world.

7/ “Science is a commitment to the unfolding of truth… so it’s necessarily dynamic.”

Science is an approximation of a mythical truth. We get closer and closer to it, but we’ll never know how far we are from it.

8/ If scientific theories aren’t truths, how come some theories (such as relativity) seem so firmly established that they’re indistinguishable from the truth?

One thing to keep in mind is that an individual scientist cannot know and verify everything.

9/ Foundational truths in a field can then become established:

– Either through the convergence of evidence
– Or through convergence of scientists’ opinion

Usually, it’s a mix of both.

10/ The latter – where “truth” gets established through a convergence of opinion in a clique – is a function of how interconnected a scientific community is.

The more scientists talk and collaborate, the faster they gravitate towards similar methods and foundational principles.
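The convergence dynamic above can be sketched with a toy opinion-averaging model (my own illustration, not from the episode, loosely in the style of DeGroot learning): each scientist repeatedly averages their “opinion” with the colleagues they talk to, and the denser the communication network, the faster everyone converges to the same view.

```python
# Toy model: scientists repeatedly average their opinion with the
# colleagues they listen to. Denser networks converge much faster.

def rounds_to_consensus(neighbor_fn, n=20, tol=0.01, max_rounds=100_000):
    """Count averaging rounds until all opinions are within `tol` of each other."""
    opinions = [i / (n - 1) for i in range(n)]  # initially diverse views in [0, 1]
    rounds = 0
    while max(opinions) - min(opinions) > tol and rounds < max_rounds:
        opinions = [
            sum(opinions[j] for j in neighbor_fn(i, n)) / len(neighbor_fn(i, n))
            for i in range(n)
        ]
        rounds += 1
    return rounds

# Sparse network: each scientist listens only to self and two ring neighbors.
ring = lambda i, n: [(i - 1) % n, i, (i + 1) % n]
# Dense network: everyone listens to everyone.
complete = lambda i, n: list(range(n))

print("ring network rounds to consensus:    ", rounds_to_consensus(ring))
print("complete network rounds to consensus:", rounds_to_consensus(complete))
```

The fully connected community reaches consensus in a single round, while the sparse ring takes orders of magnitude longer, which is the point of the note above: the more everyone talks to everyone, the faster (and less diversely) the community converges.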

11/ This overlap in thinking patterns, methods and language used by a scientific community is problematic because the community may have converged on a wrong (or non-robust) truth.

Read about the “Lyme Wars”:

Lyme disease: Resolving the “Lyme wars” – Harvard Health Blog

12/ In this sense, “open science” leads to faster convergence of opinion, leading to higher fragility and less diversity in the theories.

This is why @profjamesevans champions “closed science”: don’t let different scientific groups talk to each other too much, so that they explore a variety of approaches.

For all scientific problems, we need diversity in methods, approaches, data, theories to arrive at robust truths.

13/ What we want from science is ROBUSTNESS first and then REPLICABILITY.

(Interestingly, this is what differentiates applied science in industry from science in academia.

Industry science needs to scale, not just get published.)

14/ So to discover robust truths, we need a diversity of approaches. How do we get diversity?

We need to celebrate failure. Most approaches will fail, and if we incentivize scientists to publish only positive results, positive results are all we will get.

No surprise there!

15/ If we start all our scientific research projects with the assumption of publishing a positive result at the end, we will never discover anything new.

Research, by definition, should FAIL, as there are more wrong hypotheses out there than correct ones.

16/ How do we enable failure in science?

We should learn from the venture capital community. VCs build a portfolio of uncorrelated, high-risk bets, which enables startups to explore a wide variety of approaches.

17/ Right now, the scientific funding process expects every project proposal to succeed.

If VCs expected no company to fail, we would never get any breakthrough companies.

The VC model works because they expect most companies to fail.

We should do the same for science.

18/ Broadly, @profjamesevans says “we need to fail more in order to succeed”.

In fact, he studied how success happens across startups, science, and other fields, and this is what he found:

Quantifying dynamics of failure across science, startups, and security

19/ The gentleman-scientists of the Victorian era are a fantastic example of this.

Scientists like Charles Darwin were independently wealthy, so they didn’t face continual pressure to publish positive findings (as is the case today).

How do we do this today?

Institutions like @gatesfoundation explicitly fund high-risk scientific projects like toilets for the developing world.

By doing this, they normalize scientific risk-taking.

20/ Another way we should promote risk-taking is group evaluation, not project evaluation.

Bell Labs was a great example of this. It had a mix of projects with various risk levels, so people knew they wouldn’t be out of a job if their risky project failed.

21/ Ultimately, if people know they’ll lose their job if they take a risk and fail, they will simply not take any risk.

Today, @HHMINEWS does this by funding individual scientists for the long term (say, 5–15 years), and this job security leads HHMI fellows to do higher-impact science.

GOLD QUOTE by @profjamesevans ->

“Anything that enables individuals to have a risk portfolio of opportunities increases their likelihood of undertaking risk in any particular (research) investment”

This is modern portfolio theory applied to science 🙂
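A toy calculation (my own illustration, not from the episode) shows why a portfolio makes individual risk-taking rational: if each risky project independently succeeds with probability p, the chance that at least one of n projects pays off grows quickly with n.

```python
# Toy illustration: why a portfolio of risky projects makes risk-taking rational.
# Assume each project independently succeeds with probability p; a researcher
# with n such projects "succeeds" if at least one of them pays off.

def p_at_least_one_success(p: float, n: int) -> float:
    """Probability that at least one of n independent bets succeeds."""
    return 1 - (1 - p) ** n

for n in (1, 5, 10, 20):
    print(f"{n:2d} projects -> P(at least one success) = {p_at_least_one_success(0.1, n):.2f}")
```

With p = 0.1, a single project succeeds 10% of the time, but a portfolio of 20 such projects succeeds roughly 88% of the time, so a researcher with many uncorrelated bets can afford for most of them to fail.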

22/ In today’s environment:

Scientists’ incentives => citation.

How do you get citations? Publish stuff similar to what’s already been published.

So for an individual scientist, incremental research is a LOGICAL strategy to pursue.

23/ What we want though is new breakthroughs.

Our incentives => breakthroughs.

How do you get breakthroughs? By charting the unknown territories.

So for the world, innovative research is a LOGICAL strategy.

24/ So,

What makes sense for a scientist != what makes sense for science.

We need to break this misalignment.

25/ @profjamesevans’s research shows that original, off-the-beaten-path research gets cited less (on average) than incremental research.

But it also shows that the top-cited papers come from original, off-the-beaten-path approaches. This suggests that if we change incentives, we can accelerate scientific progress.

26/ There’s also a general lesson here: HAVE A PORTFOLIO OF PROJECTS. If you have a single project, you’ll hold on to it for dear life, and all your biases will push you to believe it is truer or more important than it actually is.

27/ The fun thing about startups and industry is that they’re incentivized to discover WHAT IS TRUE, not WHAT OTHERS THINK IS TRUE. Because academic scientists depend on grants, they’re incentivized to publish what grant-makers (often their peers) think is true.

28/ One interesting fact I learned from @profjamesevans is that over time, the average age of Nobel Prize winners has been increasing, while the average age of inventors has remained roughly the same. Why would that be?

29/ He thinks it’s probably because academia is more connected than before and because of review boards, there’s pressure to produce knowledge that’s consistent with everything else we currently know. This reduces the generative potential of science.

30/ Inventions and patents, on the other hand, do not depend on what others think, BUT on whether the invention works and whether it is unique. The feedback in academia comes from other people. The feedback in industry comes from the real world.

31/ It’s easy to see why selection pressure from other people can impede progress. Academic communities expect every new theory to EXPLAIN everything that the current best theories explain AND predict something new. This creates path dependence.

32/ A theory that explains something new but is INCONSISTENT with an “established” fact like the Big Bang will find it difficult to get published. But we forget that the Big Bang theory was also inconsistent with facts that were “established” when it was proposed.

33/ We have a very high bar for theories right now. Some approaches may simply need time to evolve, and killing them because their v0.1 is not better than the current leaders is asking too much.

34/ Imagine if we expected every startup to be better than big companies at EVERYTHING. No startup would ever get started that way. We expect startups to be better than big companies at narrow things and to evolve from there. Yet that is exactly what we expect of new scientific theories.

35/ We should encourage independent theories that are INCONSISTENT with each other and may evolve to become eventually consistent. For example, the standard model of particle physics is well established but predicts dark matter and dark energy, which nobody has observed.

36/ There are alternative theories that do not require the existence of dark matter or dark energy, but these are perceived as minority/crazy approaches because the mainstream physics community is highly committed to the standard model. This impedes progress.

37/ Because today’s (publishable) science has evolving standards that require new theories to absolutely crush existing theories which have taken hundreds of years to evolve, it would take a genius working for hundreds of years in a basement to surpass those high standards. That won’t happen.

38/ “Fixation on importance of consistency across theories is deeply problematic because it assumes a simple mechanistic universe.” “None of our theories are true, they’re useful and good but not true”.

39/ Equations and laws are not generating stuff in the universe. No, stuff is generated by reality itself, and we’re simply approximating the generative process via our theories. Since our theories are approximations, not truth, consistency between theories is overrated.

40/ We need to be careful not to mistake our scientific models for reality. If we do that, we cut off the possibility of failure and progress depends on failure. Let’s not forget that we’re simply approximating reality with our theories.

41/ Changing gears, what predicts success? @profjamesevans did research across domains (startups, science, terrorism) and found that ACCELERATING FAILURES is the best predictor of success. Here’s the paper: https://arxiv.org/abs/1903.07562

42/ Here’s the advice he gives for success:

– The first thing you do should be TOTALLY UNRELATED to what other people are doing

– You’ll likely fail with your first radical jump

– From there, iterate to success with smaller and smaller jumps

43/ So, success is not just about accelerating failure; it is about jumping to uncharted fields and then hill-climbing the local region to a higher plane of success.

44/ What you should NOT do: a) Do what everyone else is doing (more competition) b) Keep making radical jumps one after another (you never learn from failures).

“Few radical jumps and many fast iterations” is the key to success.
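The two strategies above can be contrasted with a toy simulation (my own sketch, not from the paper): on a bumpy “research landscape”, one radical jump followed by many small improving iterations beats making a radical jump every time, because the always-jumping strategy never learns from its failures.

```python
import math
import random

def payoff(x: float) -> float:
    """A bumpy 'research landscape' with many local peaks (a made-up stand-in)."""
    return math.sin(3 * x) + 0.5 * math.sin(13 * x)

def jump_then_iterate(rng, steps=50):
    """One radical jump into new territory, then many small local improvements."""
    x = rng.uniform(0, 10)                     # the radical jump
    for _ in range(steps):
        candidate = x + rng.gauss(0, 0.1)      # a small, fast iteration
        if payoff(candidate) > payoff(x):      # keep what works: learn from failure
            x = candidate
    return payoff(x)

def always_jump(rng, steps=50):
    """A radical jump every time: failures teach you nothing about the landscape."""
    x = rng.uniform(0, 10)
    for _ in range(steps):
        x = rng.uniform(0, 10)                 # jump again, discarding all learning
    return payoff(x)

rng = random.Random(42)
trials = 2000
a = sum(jump_then_iterate(rng) for _ in range(trials)) / trials
b = sum(always_jump(rng) for _ in range(trials)) / trials
print(f"jump-then-iterate average payoff: {a:.2f}")
print(f"always-jump average payoff:       {b:.2f}")
```

On average, the jump-then-iterate strategy climbs to a nearby peak, while repeated radical jumping ends wherever the last jump happened to land.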

45/ @profjamesevans discovered that novel scientific ideas are more likely to come from what he calls ALIENS: individuals or teams who have traveled from one domain to another. This radical jump lets them do something totally new in that domain.



Have an opinion on this essay? You can send your feedback on email to me.