Support the CSAI Agentic Fund
Help fund public-interest research and responsible AI education that build the guardrails needed for the agentic era.
Your Support Helps AI Scale With Trust
Artificial Intelligence is rapidly evolving from assistive tools into autonomous systems capable of orchestrating workflows, interacting with infrastructure, and making decisions across digital environments.
But AI is scaling faster than security. Governance, security frameworks, and oversight are still catching up as organizations integrate AI into core systems. The CSAI Agentic Fund supports CSAI's 2026-2027 mission of ensuring this transformation happens securely and responsibly.
Your contribution helps:
Fund independent AI security research
Expand access to responsible AI education for the next generation
Support initiatives that advance secure and trustworthy AI
Grow mentorship and professional networks
Choose Your Impact
Contribute to the CSAI Agentic Fund to help fund the research and education needed to ensure the agentic era of AI develops with trust at scale.
Where Your Support Goes
The CSAI Agentic Fund supports initiatives focused on Securing the Agentic Control Plane. These efforts help build guardrails that allow agentic AI to scale with trust:
Operational Visibility and Vulnerability Reporting
Expanding beyond model trust scoring into active scanning of publicly available models and MCP servers against consensus trust criteria.
Infrastructure Hardening for Agent Architectures
Developing secure-by-default templates, scanning guidance, and operational guardrails for organizations deploying MCP-based agent architectures.
Agentic Threat Modeling
Advancing MAESTRO, an open-source agentic threat modeling framework that enables organizations to analyze autonomy across multiple layers of interaction.
Autonomy Governance
Developing a common language that ties increasing autonomy to enforceable control requirements.
Assurance Alignment
Aligning certification and assurance mechanisms with the realities of probabilistic, adaptive systems rather than static control checklists.
Catastrophic Risk Research
Beginning structured research into high-impact failure modes associated with advanced, highly autonomous AI systems.