Discussion about this post

Lovkush Agarwal

Thanks for writing and highlighting the issues.

1. Do you have concrete recommendations?

Some counterpoints:

2. A big one: UK AISI has made a big push to get traditional academics involved, e.g. with a conference in November, or a £15m grant round where they could only fund half the projects they wanted to.

3. Book "if anybody builds it everyone dies" was intended for public, with huge effort to reach many ppl.

4. Most MATS scholars publish in a main conference. The ones who don't are those whose research is not of interest to academics, e.g. the work done by Palisade.

5. MATS is a silver sponsor at NeurIPS.

6. There has been close collaboration between the AI 2027 people and the "AI as Normal Technology" people, the two most viral "big picture" perspectives on AI this year.

7. There is the Tarbell Fellowship to help upskill people in journalism.

8. I don't think your critiques apply to AI governance research, or at least they apply to a much lesser degree.

I feel like I could keep going, actually... I'd be interested to know what you think! I think the article could have been more nuanced and balanced.

Katalina Hernández

I have read plenty of critical pieces and blog posts on EA, AI safety, rationalism, and long-termism. I think this may be at the top for me, because you've done something that other critics (Gebru, Torres, Bender) often fail to do: distinguish individual motivations from movement-incentivized and corporate-incentivized motivations.

As you say, from the perspective of someone who simply wants to do research that benefits humanity, there are only a few options.

If you don't have the money to self-fund your research so that you can have true independence from a corporation, an academic institution, or funders' preferences, then you are unfortunately stuck aligning your vision with one that will appeal to someone willing to fund your life. And sure, sometimes, or often, people happen to find a group of like-minded individuals doing similar things. But even then, if we look at a small org, behind it there is often a bigger fiscal sponsor or funder deeply embedded in either a research agenda led by a larger organization or funding institution, or a corporation.

So what must researchers do, those talented people who don't want their judgments constrained by financial motivations?

So far, my experience has been that you must resist the temptation to lock yourself into one revenue stream or one source of income for your life and your research. I know it takes a greater toll, costs time, and cuts into your slack, but having multiple income streams may be the only way to go if you want to remain truly independent in thought and will. For example, you may choose paid work that has nothing to do with your research while also applying for grants, if you can find any grantor willing to give you a small amount of money. You may also want a broad network across academia, the corporate world, and less strongly aligned circles.

I think that having a network of people across all domains, and knowing exactly which stakeholders you wish to inform with your research, may be the cure for insularity.

And it's something that keeps you humble and honest, because once you see whether your research actually helps in the ways you envisioned, you are better calibrated to either keep going or focus your efforts elsewhere. If you are constantly fed the idea that your research matters while the rest of the world still doesn't care, or told that it doesn't matter whether other people care as long as one specific law gets passed or one practice in a big lab changes, that too endangers your capacity to stay objective about whether your research is actually impactful.

Traditional theories of impact are also built, in my experience, to appeal to a specific mindset: for example, the logic of a specific funder that wants X people to convert to working full-time in AI safety, or X practices to change in specific groups at specific companies.

As a lawyer, I've always hated the law's tendency to regulate first and let implementation be figured out later. I'm afraid that people in the AI safety community may be doing the same when they focus years of their life on research and then hope that the policy, legislative, or governance aspects are simply picked up by other people. The attitude is: this is my part of the problem to deal with; somebody else can think about the rest. But who is that somebody else to begin with? Can you at least aim to make your research available to the type of person you think needs to pick up where you left off?

Those are the additions I would make to this piece. As for the rest, congratulations to Celia for writing it in a way that is fair, not to a movement, not to an ideal, but to individuals who actually care.

