Manifund foxManifund
Home
Login
About
People
Categories
Newsletter
HomeAboutPeopleCategoriesLoginCreate
1

AI Safety & Security workshop with Adversarial Simulation labs

Technical AI safetyAI governance
midfieldai avatar

Abhinav singh

Not fundedGrant
$0raised

Project summary


I propose the establishment of a formal training program inspired by the gamified Capture the Flag (CTF) format, with a specific focus on AI & LLM security. The training program will be tailored for experienced cybersecurity professionals, technology leaders and researchers to address the growing skills gap in safeguarding AI technologies. This program will focus on equipping participants with the knowledge and hands-on experience needed to adapt to the rapid adoption of AI in enterprises.

The training will be available in three formats with varying content size:

  • A one-day (8-hour) workshop at academic conferences.

  • A two-day (16-hour) live training at premier cybersecurity events.

  • A four-week virtual boot camp on the Maven platform.

Approximately 70% of the program will be dedicated to hands-on labs, simulating real-world adversarial attacks on AI agents, LLM-based applications and other scenarios. These labs will be hosted as an application that will be accessible to attendees even after the workshop for continued learning and development. Participants will explore attack tactics and implement robust defense measures to develop actionable AI security controls. The training will emphasize the importance of AI safety and  will highlight the role of security professionals in mitigating risks, a domain traditionally outside the scope of cybersecurity.

By seamlessly integrating attack and defense components, this training will offer comprehensive insights into adversarial techniques targeting AI systems and provide practical strategies for enhancing organizational defenses.

What are this project's goals? How will you achieve them?

  1. Build a comprehensive training course consisting of reading materials, slides, pointers and lab exercises.

  2. Build a complete online platform to simulate the labs through use-cases like prompt injections, AI agent manipulations, side-stepping attacks, instances of insecure code executions though LLMs, vulnerability identification, infrastructure takeover, training data poisoning, AI Red teaming practices and much more. 

  3. Extend the platform to be accessible even after the training and possibly to a wider audience for learning and development.

  4. Build a Slack community of like-minded individuals for continued knowledge sharing and fostering community knowledge sharing.

How will this funding be used?

Three major expenses where the funding will be used:

  • Cost for using APIs of public LLM applications like OpenAI, Anthropic.

  • GPU cost for fine-tuning base model to suite the requirements of the labs.

  • Cloud hosting cost for running the CTF application with adversarial simulation lab.

Who is on your team? What's your track record on similar projects?

I will be doing most of the work related with developing the content and online labs. External contracting help will be needed to complete the fornt end part of the application which will be less than 10% of the overall effort in building this workshop.

Track Record:

  1. Previous Experience running cloud security training as full 2-day and 3-day workshops at international Cybersecurity conferences like Blackhat, RSA, DEFCON, Hack in Paris, BruCon and many more. Ran the training in over 11 different countries training over 600 cybersecurity professionals.

  2. Experience speaking at Cybersecurity conferences on cutting edge research topics in the field of Cloud Security, AI Security, Data privacy and governance. 

  3. Experience running private, hands-on workshops as a result of my public training at Large Cybersecurity events. Focus areas would include Cloud security, malware and threat research.

  4. Experienced Cybersecurity professional with over 14 years of industry experience working as a researcher, consultant as a leader. 

  5. Author and co-author of 4 different books in the field of Cybersecurity, inventor and co-inventor of 3 patents, published multiple whitepapers and blogs on industry channels related with the field of Security research.

  6. Published interviews and thought leadership sessions in various forums.

Links to above mentioned workshops, professional profile, scholar profiles can be found in this Google Doc: https://docs.google.com/document/d/17ELEKYw0TBqnlyeU6p2f3NwWhEo3hmzFkhK8acKXqkA/edit?tab=t.0#heading=h.cn0py6a0ez0j

What are the most likely causes and outcomes if this project fails?

  • The potential causes of failure for this workshop may include insufficient participant engagement, lack of practical relevance, or logistical challenges in delivering the hands-on labs.

  • The success will depend on keeping the labs relevant and engaging that relates with the day to day challenges faced by cybersecurity professionals when securing AI Applications and services.

  • Cloud hosting and GPU usage gets too high.

How much money have you raised in the last 12 months, and from where?


NA

CommentsSimilar7
Dhruv712 avatar

Dhruv Sumathi

AI For Humans Workshop and Hackathon at Edge Esmeralda

Talks and a hackathon on AI safety, d/acc, and how to empower humans in a post-AGI world.

Science & technologyTechnical AI safetyAI governanceBiosecurityGlobal catastrophic risks
1
0
$0 raised
adityaraj avatar

AI Safety India

Fundamentals of Safe AI - Practical Track (Open Globally)

Bridging Theory to Practice: A 10-week program building AI safety skills through hands-on application

Science & technologyTechnical AI safetyAI governanceEA communityGlobal catastrophic risks
1
0
$0 raised
🍓

James Lucassen

More Detailed Cyber Kill Chain For AI Control Evaluation

Extending an AI control evaluation to include vulnerability discovery, weaponization, and payload creation

Technical AI safety
4
4
$0 raised
🐝

Sahil

[AI Safety Workshop @ EA Hotel] Autostructures

Scaling meaning without fixed structure (...dynamically generating it instead.)

3
7
$8.55K raised
AmritanshuPrasad avatar

Amritanshu Prasad

General support for research activities

Funding for my work across AI governance and policy research

AI governanceGlobal catastrophic risks
3
3
$0 / $15K
🐢

Fred Heiding

AI scam research targeting seniors

A microgrant to help us disseminate our work on AI scams targeting seniors and present it at BSides Las Vegas, DEF CON, and other conferences

Science & technologyAI governance
2
2
$3.5K raised
🐸

SaferAI

General support for SaferAI

Support for SaferAI’s technical and governance research and education programs to enable responsible and safe AI.

AI governance
3
1
$100K raised