Arkose is a field-building nonprofit that supports mid-career ML professionals to enter the field of AI safety.
https://arkose.org/
Arkose is an early-stage, field-building nonprofit with the mission of improving the safety of advanced AI systems to reduce potential large-scale risks. To this end, Arkose focuses on supporting researchers, engineers, and other professionals interested in contributing to AI safety. We do this by:
- Hosting calls and other support programs that provide connections, resources, and tailored guidance to researchers, engineers, and others with relevant skill sets, facilitating their engagement in technical AI safety research. Machine learning professionals are invited to check out our list of Opportunities or Request a Call.
- Advancing understanding of AI safety research and opportunities through outreach and by curating educational resources. Our list of AI Safety Papers is periodically reviewed by an advisory panel of experts in the field.
Arkose
2 days ago
@CarmenCondor Just to clarify -- our initial outreach email is only very coarsely personalised (e.g. based on whether they are in academia vs industry). I'm describing the pitch I would give somebody on a 1:1 call.
Arkose
2 days ago
@CarmenCondor Unfortunately this varies a lot depending on who I'm speaking with, so it's hard to summarise.
I agree that the "career coaching" frame is not always appropriate, especially for academics. Often, I find it useful to emphasise both the potential for positive impact and the simple legitimacy of the work -- for many, it helps to highlight that this is a serious field of research which can be published in top conferences and which can attract funding. This usually involves some more technical discussion of where their existing research overlaps with AI safety; the field is now broad enough that there is often some area of overlap. With professors especially, I will often discuss any currently open funding opportunities that might be relevant to their work, and encourage them to check our opportunities page when seeking funding in the future.
Arkose
2 days ago
@CarmenCondor Unfortunately we don't track this information well. I was able to get location data for 56% of our calls (mostly the direct-outreach calls). Of these, 90% were in the US, UK, or Europe, which means we've had 17 calls with researchers and engineers outside those areas, including in China, Korea, and Singapore. These statistics may be somewhat inaccurate, since location isn't a key metric for us, but I do expect them to be broadly indicative.
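(Editor's note: for readers checking the arithmetic, here is a minimal back-of-the-envelope sketch of the call volume implied by the figures above. It assumes the 17 calls are exactly the remaining 10% of location-tracked calls and that the stated percentages are exact; the derived totals of roughly 170 and roughly 300 calls are illustrative inferences, not figures Arkose reported.)

```python
# Back-of-the-envelope check of the stated call statistics.
# Stated: location data for 56% of calls; of those, 90% in the
# US/UK/Europe; 17 calls outside those areas.

calls_outside = 17           # calls outside US/UK/Europe (stated)
share_outside = 1 - 0.90     # the remaining 10% of tracked calls (stated)
share_with_data = 0.56       # fraction of all calls with location data (stated)

# Derived (assumption: percentages are exact, not rounded):
calls_with_data = calls_outside / share_outside      # ~170 tracked calls
total_calls = calls_with_data / share_with_data      # ~304 calls overall

print(f"calls with location data: {calls_with_data:.0f}")
print(f"implied total calls:      {total_calls:.0f}")
```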
Arkose
2 days ago
@CarmenCondor Hi Carmen, great question!
We reach out directly to researchers who've submitted to top AI conferences, and we also speak with those who are already interested in AI safety (via referrals or direct applications through our website).
62% of our calls are sourced from direct outreach via email to researchers and engineers who've had work accepted to NeurIPS, ICML, or ICLR. As assessed by us after the call, 46% of the professionals on these calls had no prior knowledge of AI safety, and a further 25% were 'sympathetic' (e.g. had heard vague arguments before, but were largely unfamiliar). On these calls, we focus on introducing why we might be concerned about large-scale risks from AI, then discuss technical AI safety work relevant to their background and cover ways they could get involved.
The remainder of our calls come from a variety of sources, such as the MATS program or 80,000 Hours, and these professionals are generally more familiar with AI safety. We identify those in need of support primarily through self-selection, but also through referrals from these organisations. On these calls, our value-add is an in-depth understanding of the needs of AI safety organisations, which lets us recommend tailored, specific next steps for getting involved.