Bart Jaworski

@Bart

My life's mission is to mitigate the risks that come with the creation of AGI.

https://www.linkedin.com/in/84rt

About Me

I am a self-taught computer scientist and AGI safety enthusiast, with a unique background that has led me to where I am today. My passion for CS/AI began in high school in Poland, but I quickly realized that traditional education was not the path for me. Instead, I chose to forge my own path, diving headfirst into the world of web development and value investing.

For a year, I ran a successful small business creating websites for clients while simultaneously investing in the markets. However, as the market became increasingly automated by companies like WIX.com and Squarespace, I decided to close my web development venture and focus on my true passion: Artificial General Intelligence (AGI).

My initial plan was to continue investing until I could fund AGI research independently. However, after moving to Oxford two years ago and becoming part of the Effective Altruism and Rationalist community, my perspective shifted. I realized that while AGI has the potential to bring prosperity, it also poses significant risks if not aligned with human values.

This realization led me to dedicate myself fully to the alignment problem. Over the past year, I have participated in two research labs conducting both technical and theoretical AI safety research. I have read countless papers and several books on the topic, gaining a deep understanding of the field.

However, it quickly became apparent that my greatest impact could be made not through research, but through operations. With my technical background and organizational skills, I believe I can fill a crucial gap in the AI safety community: many highly intelligent people simply need a place to work and an agenda, and providing that is where I can reach my full potential.

I am now looking to take on a management/operations role within the AI safety ecosystem. My goal is to organize what is missing and build the products that should exist. To achieve this, I am seeking funding from Manifund. I am highly confident that, once I am no longer funding-constrained, we can significantly reduce the existential risk from AGI.