DECLARATION OF INTENT
At Safebelt.ai, we believe the world deserves artificial intelligence that’s as safe and trustworthy as it is powerful.
Inspired by how the sustainability movement changed entire industries through consumer demand, we aim to spark a similar shift for AI.
Our goal is simple: empower everyone to ask, “Is this AI safe?” and hold companies accountable to real standards of transparency, fairness, and responsibility.
By working together, we can ensure that AI serves humanity—never the other way around.
Jorge Barros, founder.

Three Key Pillars
Reliability & Security:
• Safe systems should operate consistently and resist misuse, adversarial attacks, or failure.
• Includes fallback mechanisms, human-in-the-loop controls, and robustness testing.
Fairness, Privacy & Transparency:
• Systems must avoid bias, protect personal data, and explain how decisions are made.
• Public documentation should clearly articulate methods, limitations, and data sources.
Accountability & Lifecycle Governance:
• Organizations must take responsibility for design, deployment, and post-launch monitoring.
• Audit trails, version control, third-party reviews, and red-teaming processes must be firmly in place.

How to get involved
1. Funding
Support our mission: Your funding will help us build a team, set global AI safety standards, and drive policy change.
2. Advisory
Share your expertise: If you or your organization can advise on technology, AI ethics, or policy, join us as an advisor.
3. Collaborations
Collaborate with us: We welcome partnerships, projects, and creative collaborations that move AI safety forward.
4. Make it viral!
Spread the word: Use our hashtags #isthisaisafe and #safebeltai to amplify the message across social media.

What some of the brightest minds on Earth are saying about this:
Geoffrey Hinton (the "Godfather of AI"), formerly of Google Brain:
“There is a very general subgoal… get more control. How do you prevent them from ever wanting to take control? And nobody knows the answer.”

Stephen Hawking, theoretical physicist and cosmologist:
"Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last… unless we learn how to avoid the risks."

Nick Bostrom, University of Oxford philosopher:
"We would want the solution to the safety problem before somebody figures out the solution to the AI problem. Our approach to existential risks cannot be one of trial-and-error."

Yoshua Bengio, the most-cited computer scientist globally:
“It’s like handing someone dynamite… someone finds an algorithmic improvement that yields a major intelligence jump… They better be people with high ethical standards… like handling nuclear bombs.”

Frequently Asked Questions
01
What is Safebelt.ai?
Safebelt.ai is a grassroots movement aiming to make AI safety as widely demanded as sustainability. We empower people to ask, “Is this AI safe?” and push companies to be transparent, accountable, and trustworthy.
02
What does “AI safety” mean here?
At Safebelt.ai, AI safety means systems that are reliable and secure, fair and transparent, and accountable throughout their lifecycle—ensuring they're governed responsibly from design to operation.
03
How can you support the initiative?
You can fund our work (to build the team, set standards, and influence policy), advise on technology or AI ethics, collaborate on projects or campaigns, or help amplify our voice using the hashtags #isthisaisafe and #safebeltai.
04
How is this different from government regulation like the EU AI Act?
While regulation is essential, Safebelt.ai is consumer-driven—we’re leveraging public demand rather than waiting for laws. Think of it as a complementary layer: market pressure prompting quicker, voluntary safety compliance.
05
Why is this important now?
AI is developing at breakneck speed with far-reaching impact—and risks. Without public pressure, many companies may overlook safety. Safebelt.ai aims to create that essential, informed demand—for the sake of our children and future.
