by Jason Crawford · December 4, 2023 · 7 min read
This is the written version of a talk presented to the Santa Fe Institute at a working group on “Accelerating Science.”
We’re here to discuss “accelerating science.” I like to start on topics like this by taking the historical view: When (if ever) has science accelerated in the past? Is it still accelerating now? And what can we learn from that?
I’ll submit that science, and more generally human knowledge, has been accelerating for basically all of human history. I can’t prove this yet (and I’m only about 90% sure of it myself), but let me appeal to your intuition:
I’m leaving aside the question of whether science has slowed down since ~1950 or so, which I don’t have a strong opinion on. Even if it has, that’s mostly a minor, recent blip in the overall pattern of acceleration across the broad sweep of history. (Or, you know, the beginning of a historically unprecedented reversal and decline. One or the other.)
Part of the reason I’m pretty convinced of this accelerating pattern is that it’s not just science that is accelerating: pretty much all measures of human advancement show the same trend, including world GDP and world population.
What drives acceleration in science? Many factors, including:
Instruments. Better tools mean we can do more and better science. Galileo had a simple telescope; now we have JWST and LIGO.
Computation. More computing power means more and better ways to process data.
Communication. The faster and better ideas can spread, the more efficient and effective science can be. The scientific journal was invented only after the printing press; the Internet enabled preprint servers such as arXiv.
Method. Better methods make for better science, from Baconian empiricism to Koch’s postulates to the RCT (and really, all of statistics).
Institutions. Laboratories, universities, journals, funding agencies, etc. all make up an ecosystem that enables modern science.
Social status. The more science carries respect and prestige, the more people and money flow into it.
Now, if we want to ask whether science will continue to accelerate, we could think about which of these driving factors will continue to grow. I would suggest that:
In the long run, if world population levels off as it is projected to do, we may run out of people to keep growing the base of researchers. That is a potential concern, but not my focus today.
The biggest red flag is with our institutions of science. Institutions affect all the other factors, especially the management of money and talent. And today, many in the metascience community have concerns about our institutions. Common criticisms include:
Now, as a former tech founder, I can’t help but notice that most of these problems seem much alleviated in the world of for-profit VC funding. Raising VC money is relatively quick (typically a round comes together in a few months rather than a year or more). As a founder/CEO, I spent about 10–15% of my time fundraising, not 30–50%. VCs make bold bets, actively seek contrarian positions, and back young upstarts. They mostly give founders autonomy, perhaps taking a board seat for governance, and only firing the CEO for very bad performance. (The only concern listed above that startup founders might also complain about is patience: if your money runs out, you’d better have progress to show for it, or you’re going to have a bad time raising the next round.)
I don’t think the VC world does better on these points because VCs are smarter, wiser, or better people than science funders—they’re not. Rather, VCs:
In short, VCs are subject to evolutionary pressure. They can’t get stuck in obviously bad equilibria because if they do they will get out-competed and lose market power.
The proof of this is that VC has evolved over the decades—mostly in the direction of better treatment for founders. For instance, there has been a long-term trend towards higher valuations at earlier stages, which ultimately means lower dilution and a shift in power from VCs to founders: it used to be common for founders to give up half or more of their company in the first round of funding; last I checked that was more like 20% or less. VCs didn’t always fund young techies right out of college; there was a time when they tended to favor more experienced CEOs, perhaps with an MBA. They didn’t always support founder-led companies; once it was common for founders to get booted after the first few years and replaced with a professional CEO (when A16Z launched in 2009 they made a big deal out of how they were not going to do that).
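The dilution arithmetic behind that trend is simple: in a priced round, the fraction of the company sold equals the amount raised divided by the post-money valuation, so higher valuations at the same raise size mean less dilution. A minimal sketch, with hypothetical round sizes and valuations (the specific figures are illustrative, not from any dataset):

```python
# Why higher early-stage valuations mean lower founder dilution.
# All numbers below are hypothetical, chosen only to illustrate the formula.

def dilution(pre_money: float, amount_raised: float) -> float:
    """Fraction of the company sold in a priced round."""
    post_money = pre_money + amount_raised
    return amount_raised / post_money

# Older norm: raising $2M at a $2M pre-money valuation sells half the company.
print(f"{dilution(2e6, 2e6):.0%}")   # 50%

# More recent norm: the same $2M at an $8M pre-money sells only 20%.
print(f"{dilution(8e6, 2e6):.0%}")   # 20%
```

The same raise at a 4x higher valuation cuts the founders' give-up from half the company to a fifth, which is the shift in power the paragraph above describes.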
So I think if we want to see our scientific institutions improve, we need to think about how they can evolve.
How evolvable are our scientific institutions? Not very. Most scientific organizations today are departments of universities or governments. Much as I respect universities and government, I think anyone would have to admit that they are some of our more slow-moving institutions. (Universities in particular are extremely resilient and resistant to change: Oxford and Cambridge, for instance, date from the Middle Ages and have survived the rise and fall of empires to reach the present day fairly intact.)
The challenges to the evolvability of scientific funding institutions are the inverse of what makes VC evolvable:
How might we improve evolvability of science funding? We should think about how we can improve these factors. I don’t have great ideas, but I’ll throw some half-baked ones out there to start the conversation:
How might we increase competition in science funding? We could increase the role of philanthropy. In the US, we could shift federal funding to the state level, creating fifty funders instead of one. (State agricultural experiment stations are a successful example of this, and competition among these stations was key to hybrid corn research, one of the biggest successes of 20th-century agricultural science.) At the international level, we could support more open immigration for scientists.
How might we create better feedback loops? This is tough because we need some way to measure outcomes. One way to do that would be to shift funding away from prospective grants and more towards a wide variety of retrospective prizes, at all levels. If this “economy” were sufficiently large and robust, these outcomes could be financialized in order to create a dynamic, competitive funding ecosystem, with the right level of risk-taking and patience, the right balance of seasoned veterans vs. young mavericks, etc. (Certificates of impact, such as hypercerts, could be part of this solution.)
How might we solve long feedback cycles? I don’t know. If we can’t shorten the cycles, maybe we need to lengthen the careers of funders, so they can at least learn from a few cycles—a potential benefit of longevity technology. Or, maybe we need a science funder that can learn extremely fast, can consume large amounts of historical information on research programs and their eventual outcomes, never forgets its experience, and never retires or dies—of course, I’m thinking of AI. There’s been a lot of talk of AI to support, augment, or replace scientific researchers themselves, but maybe the biggest opportunity for AI in science is on the funding and management side.
I doubt that grant-making institutions will shift themselves very far in this direction: they would have to voluntarily subject themselves to competition, enforce accountability, and admit mistakes, which is rare. (Just look at the institutions now taking credit for Karikó’s Nobel win when they did so little to support her.) If it’s hard for institutions to evolve, it’s even harder for them to meta-evolve.
But maybe the funders behind the funders, those who supply the budgets to the grant-makers, could begin to split up their funds among multiple institutions, to require performance metrics, or simply to shift to the retrospective model indicated above. That could supply the needed evolutionary pressure.