I don't have the energy right now to write a high-quality comment, but since I care about this, I figure it's better to write something than nothing:
I think this project sounds like a good idea. Most (all?) AI Safety training programs these days don't even seem to touch on what I'd consider the actual core problems of alignment. I think there can be good reasons for many people and programs to mostly focus on other things at the moment, but this really seems almost catastrophically underemphasised at this stage. I don't know every training program, of course, but talking to e.g. MATS graduates these days, I often get the sense that they haven't even really heard the basic case for why alignment might be hard. Looking at various AI Safety course curricula, I likewise see an almost complete lack of material engaging with what I'd consider the core problems of alignment. If this continues, I'm not sure the field will eventually even remember what it was supposed to be about, never mind try to work on it.
I know Mateusz a little. From our limited interactions, I got the impression that he probably knows at least a decent amount about the sort of old-school alignment thinking I wish today's alignment field were a lot more familiar with. Tsvi's endorsement also means quite a bit to me here. I think Mateusz could make a good technical lead for this project. I don't think I know Sofie or Attila. I do know Plex, but haven't really worked with him professionally; other people say he's good at what he does, though, and my guess is he'd be a good fit for this role.