This project has the potential to enable a copious amount of AI safety research that counterfactually wouldn't happen. The field's current lack of infrastructure in Germany leads some researchers to move abroad to pursue their work. Many others, however, simply "move on" to other work available where they already live, or are never informed that AI safety or alignment is an option at all. SAIGE could help address these issues.
In the short term, I'm optimistic that the incubator program will both introduce new people to AI alignment work and eventually enable them to continue it. I have applied to the program as a mentor for agent foundations projects related to my research at MATS.