ProvablySafe.AI

ProvablySafe.AI is a collaborative landing page for the field and community at the intersection of AI safety and formal methods.

Currently, the go-to introduction to the research field is Safeguarded AI: constructing safety by design by David "davidad" Dalrymple.

Community

Community resources and channels directly maintained by the ProvablySafe.AI team:

Other community resources:

Publications

Programme thesis

Papers

Forum posts

Resources

Events

Upcoming events

Atlas Computing Initiative is organizing multiple events for the field in 2024:

For AI safety-related events more generally, consider the AI Safety Events Tracker.

Past events

Media

Videos

Podcasts

About provablysafe.ai

ProvablySafe.AI is a collaborative website for the field and community of Safeguarded AI / Provably Safe AI.

Objectives

  • Information hub: aggregating public information on the field and community (papers, orgs, collaboration opportunities, events, …)
  • Field introduction: providing onboarding pathway(s) for newcomers
  • Collaboration: fostering collaboration and progress in the field

Collaboration methodology and governance

Your contributions are very welcome! For updates, enhancements, bug fixes, feedback:

The core maintainers periodically update the website and process suggestions (issues and PRs) on GitHub.

The meta channel on Zulip hosts governance-level discussion of the project.

Caveat: over the long term, in line with the research direction, a significant part of the R&D will plausibly take place in private, secure environments, and would therefore not appear on a public website or forum.

Maintainers

Reach out to us on the community Zulip.