'Why aren't we far more productive?' sounds not too different from 'Why don't we have infinite growth on a finite planet?'
- There is not enough meaningful democratizable work. Most work is either menial, or challenging. And I'm not even talking about the third kind of work [1].
- Menial work is getting more and more automated.
- Challenging work doesn't necessarily scale with human resources beyond a certain point (e.g., if 1,000 researchers are already working on cancer, increasing the number to 10,000 isn't necessarily going to help find a cure faster).
Why do you think a 10x increase in cancer researchers would not help find the cure faster? Do you think there is one supergenius destined to find the solution, and that he is already working on it?
10x researchers can try out 10x approaches, or concentrate on specific types of cancer.
From what I understand, most actual research is done by hungry grad students, and the higher your position gets, the more time you spend trying to keep or expand the funding.
I know that keeping the grad students overworked and hungry keeps many grifters out of science, but at what cost?
IMO Joel Spolsky kind of covers a possible answer to your question in his blog post "Hitting the High Notes"[0]. While I'm sure there are plenty of counterexamples, it's important to note the wide-ranging variety of discoveries and breakthroughs that have come from individuals and small teams. In many of these cases, that person or small team is uniquely breaking against an otherwise agreed-upon convention or state of the field.
1,000 people who agree on a paradigm which may be false are not helped by another 9,000 people who agree with that same paradigm. In the era of Einstein, how many physicists accepted the traditional view of space and time? Throwing more physicists at the same problem probably doesn't mean getting more Einsteins.
Potential example for today (though IANAD nor a biologist): how many scientists and researchers agree on removing plaques as a treatment for Alzheimer's? How long has that theory been the dominant narrative in spite of failed plaque-removal treatments? How many individuals wanted to try/research something different, but all the other grants went to people pursuing the held narrative because the grantors also held that same narrative?
There are countless companies where scaling up the # of people just adds noise and bureaucracy, while smaller companies with strong-minded individuals were able to break through.
Another way to look at it is that cargo cult thinking is the default mode for human thinking. This is something I've noticed recently and increasingly believe to be true. The vast majority of people do NOT take the time to build an opinion by weighing multiple sources of information, looking for counterarguments, etc. Cargo cult is the default mode for a civilization.
That has to be true, right? There's just way too much to understand and it's impossible to come up with an absolute ordering of importance. At some point you have to delegate huge chunks of your understanding.
And it works 9 out of 10 times. It's just that one time that's a problem, but it's still probably the optimal strategy for nearly everyone.
I have almost no understanding of anything. Everyday I use technology that just works. What happens in the background, I'll never know. I interact with complex social systems without knowing what makes them work or how my interactions affect me and any of these systems.
All of this works, because I build simple heuristics on everything I interact with. Mostly, these heuristics are called expectations. I expect something to happen, because I don't know with certainty that it will.
There's a difference between knowing you don't understand something and thinking that you do - that's the difference between cargo cult thinking vs actual thinking.
Cargo cults are antivax and climate-change denial: things that are obviously not true, but that many people want to believe, so they do. The issue is that this kind of thinking is actually the default, which I've grown to understand recently. I.e., many people who are in favor of good things don't really have a deep understanding of why that is - they've just adopted an opinion that sounds good to them. This is particularly pronounced in, e.g., economics and politics, where understanding is extremely shallow.
Resource allocation is another issue. But even if the other teams try to independently replicate results (imagine such luxury), that would be useful.
The more people there are in the field, the higher the probability that someone challenges the status quo.
Also, why scale up the headcount of existing companies? You can instead increase the number of startups trying something new.
I've worked in research most of my career and I tend to agree. The issue is that throwing more people at the same narrow approach most likely won't help if that approach is off to begin with. It has been promising and we've been getting, and still get, some returns out of it, but is it possible we're stuck in a local minimum of the solution space in terms of representing the real system we're trying to understand and manipulate? We of course can't know, and those who are captains of a discipline, together with the funding agencies that decide who gets funded and who doesn't, steer the field to focus on narrow incremental changes. Some of this has to do with business management seeping into basic scientific research, where, I believe, it doesn't belong, because it thinks in the wrong time horizons and has the wrong goals.
High-risk, novel approaches often aren't funded, or few opportunities exist for them. There is good reason for this, because it can be abused by those just looking for fun or easy work while labeling it novel. It may also encourage fringe sciences that border on pseudo-scientific work: they incorporate a bit of science but then go off in directions that are almost provably wrong. Distinguishing which novel approaches are genuinely novel, realistic, and non-abusive isn't always easy. In some cases they're just laughably wrong, because so many assumptions are baked in that are almost provably wrong or at least self-inconsistent. On the other hand, some aren't quite so easy, and it takes a significant amount of effort and insight to understand exactly what's being proposed. This is often where paradigm-shifting research really occurs.
It can take incredibly brilliant scientists with enough creativity to see the opportunity in a proposal and approve it, and those often aren't the people awarding proposals. On the flip side, most of these sorts of proposals, no matter how valid and novel they may be, simply aren't going to be correct, and the reviewer is right to take a more critical eye. Novel approaches are inherently high risk, and most will be wrong. I still think we need to fund them. I've been involved in proposals that seemed only slightly novel in direction relative to accepted paradigms, and reviewer responses came back (some agencies return anonymized responses) in a way that showed they clearly didn't understand enough about the domain to even make the assessment; their critiques were clearly invalid, they just didn't agree with the proposal's approach. Maybe they were right that the overall approach was off and created a lame excuse, maybe they were wrong, but this tendency to be risk averse even in the few funding opportunities that were explicitly budgeted to be high risk shows the culture we have in modern science.
The burden of responsibility often lies on those proposing these huge shifts, and we may be reaching points in some domains of human knowledge where the burden of proof is simply too high for an individual to provide for a given novel idea. Imagine if Peter Higgs alone had had to provide the evidence for his idea. It wasn't until so many other iterative approaches were exhausted that particle physics decided it had to start testing novel options. How much time, effort, even careers were wasted chasing other ideas? Should science be depth-first search, breadth-first search, a mixture of the two to hedge our bets (so that if we're headed in the wrong direction, we might find a better paradigm in parallel that we can jump to when we're stuck), or perhaps a different heuristic?
Then the really really difficult question, for me: if we support a mixture model of say BFS, DFS (and perhaps a handful of others) for the search space of acquirable knowledge, who should decide resourcing allocations for the search mixture model? What is the best set of approaches and resourcing? Science already incorporates complex search mixture approaches for knowledge at various levels but it seems at the highest level (how we resource stuff) it doesn't, it's incredibly iterative, risk averse, and focused on short time horizons for returns on investment.
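The mixture-model question above is essentially an explore/exploit trade-off, and one classic heuristic for it is epsilon-greedy allocation. Here is a toy sketch of that idea applied to grants; every number, the epsilon-greedy rule itself, and the function name are my own illustrative assumptions, not a description of how any funding agency actually works:

```python
import random

# Toy model: each research direction has an unknown probability of
# yielding a result. An epsilon-greedy funder mostly funds the
# direction with the best observed track record (DFS-like
# exploitation), but spends a fixed fraction of grants on randomly
# chosen directions (BFS-like exploration).
def allocate_grants(true_probs, n_grants=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    successes = [0] * len(true_probs)
    trials = [0] * len(true_probs)
    for _ in range(n_grants):
        if rng.random() < epsilon or sum(trials) == 0:
            # Explore: fund a random direction.
            i = rng.randrange(len(true_probs))
        else:
            # Exploit: fund the direction with the best observed rate.
            i = max(range(len(true_probs)),
                    key=lambda j: successes[j] / trials[j] if trials[j] else 0.0)
        trials[i] += 1
        if rng.random() < true_probs[i]:
            successes[i] += 1
    return trials

# The dominant paradigm (index 0) looks plausible, but a fringe
# direction (index 2) is actually better; exploration finds it.
print(allocate_grants([0.05, 0.01, 0.15]))
```

Even this crude rule ends up funneling most grants toward the genuinely better fringe direction, while a pure exploit policy (epsilon = 0) can stay locked onto whichever direction looked good first.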
Cancer really isn’t one disease. It’s thousands, if not millions or even billions, of diseases that share some similarities. The odds of a silver-bullet cure are effectively zero, but there’s plenty of opportunity to help sufferers die with it rather than of it. It’s a game of incremental improvement, so it’s likely that greater investment would produce reasonable returns.
If you have a team of programmers who always miss their deadlines, will increasing the headcount necessarily turn them into a team that always meets deadlines? No: besides the fact that more people mean more distractions and more need for communication, there might be other problems, such as technical debt, miscommunicated requirements, inexperience, or outright unreasonable demands.
The only variable that increasing the headcount necessarily improves is the headcount itself.
No, but increasing the number of teams, all now competing with each other, would probably let us see progress scale with the number of teams - not necessarily by putting everyone on the same team.
Scientific discoveries don't just happen because there are more people "competing" to solve a problem. The more likely outcome of what you're describing is that you'd have more teams competing, sure, but largely to redo each other's work.
They don't even compete to solve the problem; they compete for funding. People who are better at politics get more funding, so adding more people could even be a net negative, with them draining all the funding from those who do the actual research.
I hate to tell you, but there is not, and never will be, a cure for cancer. Every cancer is unique. The best we can hope for in a realistic time frame is a set of drugs that cover a significant portion of the lineage dependencies corresponding to a majority of the most common cancers. Ideally, we will be able to drug/ligand every human protein, but we're still a long way off.
Furthermore, science isn't necessarily a tractable problem that scales linearly with the amount of manpower thrown at it - many of our greatest discoveries have happened serendipitously.
I agree with you that there will never be a universal cure for cancer after the person is diagnosed. The problem is not that each cancer is unique (in fact, there are many common patterns and stereotyped responses to treatments); the problem is that cancer is difficult to eradicate once established. Trying to cure advanced cancer is most probably the wrong idea, despite the colossal resources deployed towards that aim. Even curing an early-stage cancer is a fraught process, and some apparently cured cancers can sit dormant for 20 or 30 years before returning, incurable.
However, I believe that there will eventually be a strategy which prevents almost all cancers. Cancer begins from one cell 100% of the time, and eliminating that cell or preventing the transformation in the first place will stop the cancer from ever developing. One way is to genetically modify the human organism to be cancer resistant. We know this is possible in theory, because there are organisms out there which seem quite resistant to cancers for their size (the naked mole rat, elephants, the bowhead whale). But rationally altering the genome of everyone on the planet is a long way off and possibly unpalatable to many. There are other more near term possibilities.
I should probably add that cancer research is my day job.
Can they, though, and is that helpful? Is there sufficient communication between everyone in the field to ensure no one is investigating the same things? Is it still 10x, given the communication overhead of ensuring that what you want to start doing isn't already being worked on by one of the 9,999 others? Are there 10x as many approaches that can be investigated in parallel at all times?
The lessons of The Mythical Man-Month still apply, even to research.
Those are all general thoughts and results would be different for different problems.
Some can be parallelized for millions of people, some can not be at all.
Parallelization is as much about throughput as latency. Nine women cannot gestate a child in one month, but one can apparently gestate eight in nine months.
The numbers 1000 and 10,000 are just to make a point. It could be 10,000 vs 100,000. I just don't know.
I've added "beyond a certain point" to 10x. So e.g., increasing from 10 to 100 helps. 100 to 1000 helps. 1000 to 10,000? maybe. 10,000 to 100,000? maybe not. And so on (like approaching diminishing returns). (again the numbers are for illustration, and would vary with the nature of work).
I think there are only so many possible solutions; the first researcher will choose to research the possible solution with the highest probability of working. As you add more researchers, they will research less probable solutions. It is the law of diminishing returns.
Scientific research is not a high school problem where a clear algorithm is known. A lot of approaches need to be tried before valid ones are found. And for cancer, a single approach is most likely not enough.
The grandparent talks about going from 1,000 to 10,000, not a single researcher. Are you implying the law of diminishing returns doesn't apply to research?
But how does the first researcher choose the approach "with the highest probability of working"? Since the research hasn't been conducted yet, they can only use their own previous experience, also known as bias. Judging by the results, the approaches to curing cancer that were pursued first haven't been very effective.
You cannot really pay a man to "focus" (I mean, work passionately) on something, which is exactly what is required in these kinds of things.
And I think this might be blocking our progress right now: people cannot work on things they are passionate about, because they are told to focus on some other thing that some other idiot thought they'd better work on.
Our processing speed is way up, but I believe our cost of energy has more or less not changed substantially.
In fact, with proper economic cost accounting like externalities as we try to deal with global warming, costs are going up.
Solar/wind are gradually starting to improve things in meaningful ways but they are a fundamentally different grid design.
Perhaps a modular LFTR reactor design can beat what solar/wind will settle at on an EROEI measure, once solar/wind are out of their main economies-of-scale and technological development curves.
But what we are entering for the next century is clearly an era of resource limitations, and of working much harder to use those resources more efficiently.
Even though it has a long history of open-source attempts, as Tim pointed out in his presentation, they are few and far between, and massively underwhelming compared to the thriving open-source software community.
However, if this initiative takes off, it'll be a big help in creating an open source EDA toolchain community.
> However, if this initiative takes off, it'll be a big help in creating an open source EDA toolchain community.
The open-source EDA toolchain community is already producing some good stuff. Symbiflow (https://symbiflow.github.io/) is a good example: it's an open-source FPGA flow targeting multiple devices. It uses Yosys (http://www.clifford.at/yosys/) as a synthesis tool, which is also used by the OpenROAD flow (https://github.com/The-OpenROAD-Project/OpenROAD-flow), which aims to give push-button RTL to GDS (i.e., to take you from Verilog, one of the main languages used in hardware design, to the thing you give to the foundry as a design for them to produce).
The Skywater PDK is a great development and a key part of a healthy open-source EDA ecosystem, though there are plenty of other great developments happening in parallel with it. You will note that some people are involved in several of these projects; they're not all being developed in isolation. The next set of talks on the Skywater PDK includes how OpenROAD can be used to target Skywater: https://fossi-foundation.org/dial-up/
I don't know about a source, but Conway had been compiling knot tables since high school, and as a knot with only 11 crossings, the Conway knot is certain to have been on his radar. It's just such a simple knot, and sliceness such a fundamental property that Conway is sure to have tried to figure it out.
You've caught me there. I'm having trouble even locating a source for why it's called the Conway knot! The Wolfram page has a few references of about the right age, but I don't have access to them, sadly. It being a long-standing question about a knot named after him, I'm fairly certain he'd have taken a crack at it. But alas, that's no proof.
[1] https://www.strike.coop/bullshit-jobs/