By building their own intellectual ecosystem, researchers worried about existential AI risk shed academia's baggage — and, perhaps, some of its strengths
Thanks for writing and highlighting the issues.
1. Do you have concrete recommendations?
Some counterpoints:
2. A big one: the UK AISI has made a big push to get traditional academics involved, e.g. with a conference in November, or a £15m grant call where they could only fund half the projects they wanted to fund.
3. The book "If Anyone Builds It, Everyone Dies" was aimed at the general public, with a huge effort to reach as many people as possible.
4. Most MATS scholars publish at a main conference. The ones who don't are usually doing research that isn't interesting to academics, e.g. the work done by Palisade.
5. MATS is a silver sponsor at NeurIPS.
6. There has been close collaboration between the AI 2027 people and the "AI as Normal Technology" people, the two most viral "big picture" perspectives on AI this year.
7. There is the Tarbell Fellowship to help people upskill in journalism.
8. I don't think your critiques apply to AI governance research? Or at least they apply to a much lesser degree.
I feel like I could keep going, actually... I'd be interested to know what you think! I think the article could have been more nuanced and balanced.
I have read plenty of critical pieces and blog posts on EA, AI safety, rationalism, and long-termism. I think this may be at the top for me, because you've done something that other critics (Gebru, Torres, Bender) often fail to do: distinguish individual motivations from movement-incentivized motivations and corporate-incentivized motivations.
As you say, from the perspective of someone who simply wants to do research that benefits humanity, there are only a few options.
If you don't have the money to self-fund your research, so that you could have true independence from a corporation, an academic institution, or funders' preferences, then you are unfortunately stuck with the grind of aligning your vision with one that will appeal to someone willing to fund your life. And sure, sometimes, even often, people happen to find a group of like-minded individuals doing similar things. But even then, if we look at a small org, behind it there is often a bigger fiscal sponsor or funder that is deeply embedded in a research agenda led by a larger organization, funding institution, or corporation.
So what are researchers to do, the talented people who don't want their judgment constrained by financial motivations?
So far, my experience has been that you must resist the temptation to lock yourself into one revenue stream, one source of income or funds for your life and your research. I know it takes more time and energy and cuts into your slack, but having multiple income streams may be the only way to go if you want to remain truly independent in thought and will. For example, you may choose to have paid work that has nothing to do with your research and also apply for grants, if you can find any grantor willing to give you a small amount of money. You may also want to have an ample network in academia, in the corporate world, and in less strongly aligned circles.
I think that having a network of people across all domains and knowing exactly which stakeholders you wish to inform with your research may be the cure against insularity.
And it's something that keeps you humble and honest: once you see whether your research actually helps in the ways you envisioned, you should be better calibrated to either keep going or focus your efforts elsewhere. If you are constantly fed the idea that your research matters even though most people still don't care, or told that it doesn't matter whether other people care as long as one specific law gets passed or one practice in a big lab changes, then that also endangers your own capacity to stay objective about whether your research is actually impactful.
Traditional theories of impact are also built, in my experience, to appeal to a specific mindset: for example, the logic of a specific funder that wants X number of people to convert to working full-time in AI safety, or X number of practices to change in specific groups at specific companies.
As a lawyer, I've always hated the law's tendency to regulate and let implementation be figured out later. I'm afraid people in the AI safety community may be doing the same when they focus years of their life on research and then hope that its policy, legislative, or governance aspects are simply picked up by other people. The attitude seems to be: this is my part of the problem to deal with; somebody else can think about the rest. But who is that somebody else to begin with? Can you at least aim to make your research available to the type of person you think needs to pick up where you left off?
Those are the additions I would make to this piece. As for the rest, congratulations to Celia for writing it in a way that is fair, not to a movement, not to an ideal, but to individuals who actually care.
Ford's diagnosis of AI safety's insularity is spot-on, but I think there's a related external problem that compounds it: even when the field *does* attempt public engagement, the communication is fundamentally ineffective.
The insularity creates an echo chamber where researchers speak primarily to each other. But the consequence is that when they try to reach beyond that bubble, they haven't developed the language or framing to make their work accessible. Technical papers get published in raw form. Appeals rest on abstract longtermist calculations about billions of future lives. There are few human stories, little translation for non-experts.
Compare this to climate science. Whatever failures exist in climate action, the scientific community at least cleared the first hurdle: making the research understandable. Al Gore's documentaries, school climate strikes, tangible stories about communities affected today. People *understand* climate change, even if political and economic forces prevent adequate response.
AI safety is trying to skip that intermediate step entirely by moving straight from technical research to expecting public action, without the translation layer that makes complex ideas resonate. The field talks about existential risk in 2100, not about the teenager who died by suicide after interactions with an AI chatbot today. One is abstract philosophy; the other is a story that lands.
Some might point to efforts like the Tarbell Fellowship or books aimed at general audiences as evidence the field is trying. But these initiatives still largely operate within the field's existing frameworks and assumptions. They're attempts to export the field's perspective rather than translate it into language and narratives that work for people outside the bubble.
The Center for Humane Technology offers a useful contrast. They manage to take complicated, contentious issues and ground them in human impact happening right now. Their podcast doesn't feel like it's recruiting you to a philosophical movement; it feels like journalism about technology's effects on real people.
Ford's piece explains why the communications problem exists: if you only talk to people who share your priors, you never build the skills to reach anyone else. The insularity is the cause; the communications failure is the effect. And until the field recognizes that technical excellence and philosophical sophistication aren't substitutes for effective public communication, the work will remain trapped in its bubble, no matter how many outreach programs get launched.
This is a well-written article, but most of the critique feels a few years out of date.
AI safety programs initially focused heavily on recruiting within the rationalist/EA communities (where the majority of interested folks were concentrated pre-ChatGPT), but this is no longer the case. Most AI safety field-building orgs are explicitly trying to bring in talent from outside; one example is BlueDot Impact spending money on LinkedIn advertising.
Also, contrary to the "insularity" claim, I see the AI safety community as being in quite a fortunate place because of the diversity of organisations conducting research these days: frontier labs, government AI safety institutes, startups, non-profits, academia, and even a few independent alignment researchers. Not to mention that the diversity of research agendas and fundamentally different approaches is truly astounding!
One question that might be worth considering: how does the 'insularity' of the AI safety community compare to academia's? Now, I'm sure there are ways the community could be better, but I would honestly be surprised if academia came out ahead in a fair comparison.
This feels less like “academia vs industry” and more like a jurisdictional shift.
Academia is optimized to produce legitimacy and truth under scrutiny.
Frontier AI work is optimized for control under uncertainty and speed.
Once systems became actors rather than objects of study, the bottleneck stopped being knowledge and became intervention.
Institutions built for peer review were never designed for that.
This is a sobering critique of the lack of independent review of research in the safety community. I appreciate your thoughtful writing.
Running an AI safety program myself, I agree that there is too much prioritising of fast "progress" on research that fits the existing paradigm. The safety community has a start-up mindset and does not spend enough time rigorously checking reasoning and assumptions.
Mechanistic interpretability has been pushed as a "possible" solution to help with long-term safety, but the leading mechinterp researchers at AI companies are not prepared to map out the limitations of this technique.
~~~
As I wrote about the researchers who left for Anthropic:
"By scaling unscoped models that hide all kinds of bad functionality, and can be misused at scale (e.g. to spread scams or propaganda), Dario’s circle made society less safe. By simultaneously implying they could or were making these inscrutable models safe, they were in effect safety-washing.
Chris Olah’s work on visualising circuits and mechanistic interpretability made for flashy articles promoted on OpenAI’s homepage. In 2021, I saw an upsurge of mechinterp teams joining AI Safety Camp, whom I supported, seeing it as cool research. It nerdsniped many, but progress in mechinterp has remained stuck around mapping the localised features of neurons and the localised functions of larger circuits, under artificially constrained input distributions. This is true even of later work at Anthropic, which Chris went on to found.
Some researchers now dispute that mapping mechanistic functionality is a tractable aim. The actual functioning of a deployed LLM is complex, since it not only depends on how shifting inputs received from the world are computed into outputs, but also how those outputs get used or propagated in the world.
- Internally, a foundational model carries hidden functionality that gets revealed only with certain input keys (this is what allows for undetectable backdoors).
- Externally, “the outputs…go through a huge, not-fully-known-to-us domain (the real world) before they have their real consequences” (to quote Eliezer Yudkowsky).
Traction is limited in terms of the subset of input-to-output mappings that get reliably interpreted, even in a static neural network. Even where computations of inputs to outputs are deterministically mapped, this misses how outputs end up corresponding to effects in the noisy physical world (and how effects feed back into model inputs/training).
Interpretability could be used for specific safety applications, or for AI ‘gain of function’ research. I’m not necessarily against Chris’ research. What's bad is how it got promoted.
Researchers in Chris’ circle promoted interpretability as a solution to an actual problem (inscrutable models) that they were making much worse (by scaling the models). They implied the safety work to be tractable in a way that would catch up with the capability work that they were doing. Liron Shapira has a nice term for this: tractability-washing.
Tractability-washing corrupts. It disables our community from acting with integrity to prevent reckless scaling. If instead of Dario’s team, accelerationists at Meta had taken over GPT training, we could at least know where we stand. Clearly then, it was reckless to scale data by 100x, parameters by 1000x, and compute by 10000x – over just three years."
(from this post: https://www.lesswrong.com/posts/PBd7xPAh22y66rbme/anthropic-s-leading-researchers-acted-as-moderate)
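To make the backdoor point above concrete, here is a tiny toy sketch in Python (my own hypothetical illustration, not from the quoted post): a model-like function can look benign on every input drawn from a "natural" distribution while carrying hidden behaviour keyed to one exact input, which is why interpreting input-to-output behaviour under a constrained distribution can miss functionality entirely.

```python
# Toy illustration only (hypothetical): a "model" with a hidden trigger.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))                   # ordinary, visible weights
trigger = np.array([1.0, -1.0, 1.0, -1.0])    # the hidden "input key"

def model(x: np.ndarray) -> float:
    # Hidden functionality: only fires on the exact trigger input.
    if np.allclose(x, trigger):
        return 1e6
    # Benign behaviour that any probe over ordinary inputs will observe.
    return float(np.tanh(W @ x).sum())

# Probing on a constrained "natural" distribution never reveals the backdoor.
samples = rng.normal(size=(10_000, 4))
print(max(model(x) for x in samples))   # stays small and tame
print(model(trigger))                   # wildly different on the exact key
```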
Excellent exploration of the tradeoffs involved in the field's shift away from traditional academic structures! Your point about how speed and efficiency in AI safety research come at the cost of external scrutiny really captures a critical tension. The comparison to the chimp language research cautionary tale is particularly compelling, especially when considering how confirmation bias can creep in when timelines feel urgent.
Chimp language research cautionary tale? Was this an earlier version of the draft I missed?
Thanks for writing about this important topic!
I think it's extremely important that safety research moves as fast as possible, and it seems likely things would go much faster if this research were more openly shared. It would be very sad if AI safety progress were unnecessarily restricted by the selective publishing and PR concerns of the labs that have the resources and model access to conduct it most effectively. Ideally, the "features" of no paywalls and quick progress you mentioned would be supplemented by incentives or regulations which encourage (or require) openness about the bad as well as the good. The point about the "inherited legitimacy of academia" being lost when research isn't peer-reviewed also resonated: the current situation, in which labs are the arbiters of safety and pre-deployment checks, is a lot harder to trust as an outsider.
I also liked the emphasis on the community's insularity as a whole, and not just on siloing within the community itself (e.g. labs with commercial incentives to keep safety research internal). A step in the right direction might be to encourage lower-effort means of "getting the research out there" than formal publication; for example, it was great to see OpenAI recently follow Anthropic's lead with an alignment blog (https://alignment.openai.com/). Formats like this could make research more accessible to those not already embedded within the AI safety community and encourage would-be researchers to engage with the field further.
I've written about very similar concerns here if you're interested: https://www.lesswrong.com/posts/2TA7HqBYdhLdJBcZz/on-closed-door-ai-safety-research
(I actually considered mentioning the tobacco industry in that post as well, but ended up settling on a couple of automotive cases instead!)
This is like complaining that athletes are more fit than regular people, and demanding they strap 200 pound weights to themselves at all times so they're more in line with the median 300lb person.
"Hey guys, let's form a committee to investigate the pros and cons of reviewing the recommendation to revise the color of the book that will have our eventual AI regulations in, which we project to kick off two years from now."
Academia is entirely captured and useless. You spend months to years rubbing away at one little facet of one tiny problem, then when you're 90% there put it in a grant and pretend you haven't even started, then eventually write a paper, wait for peer review, make a couple of dumb sacrificial changes you deliberately put in there so they wouldn't mess with anything substantive, then throw a paper over the wall that nobody reads. You're only doing the most incremental and fully-predicted work possible, because you're shaped by what grant committees will approve, and the entire process is a monument to waste and folly.
Yes, AI researchers are in an insular cultural bubble; that's what happens in all fields that move quickly, and it is an entirely necessary precondition for "excellence" or "actually getting things done." I would go so far as to assert that the instant you try to interfere with this process and insularity, you destroy the heart and soul of it and arrest all progress. Compare a Newton or Leibniz, or an Einstein or von Neumann, and the scale and pace of their contributions, to literally anyone in academia.
Maybe research should be a democracy? Let every idiot vote on what capabilities you study or push forward next? People don't even vote for their own Mayors or Governors (<20% turnout), and they can putatively understand what a governor does!
Maybe AI research should be like academia, fully captured, entirely shaped towards external ends, and mostly pointless?
How could you possibly change it in a way that doesn't fundamentally break it?
Because that's the core thesis here, that it's possible to do so. I think we have abundant evidence that it is NOT possible, and whenever you try, you let the magic smoke out.
Given the entire world economy is betting on AI at this point, I feel like letting the magic smoke out would do demonstrably more harm than good.
A field that replaces external legitimacy with internal coherence gains speed and confidence and loses error detection.
Every closed epistemic system eventually mistakes consensus for truth.
Super well researched, thank you for putting this together. You are very kind to consider AI safety research even remotely related to science; it often leaves me speechless how unscientific this all is.