Responsible AI at Praxy.
How we build, what we will not do, and how we hold ourselves accountable.
Human-in-the-loop.
Praxy drafts; the recruiter decides. We surface candidates, write screening notes, and run first-round voice interviews — but every consequential action is taken by a human. There is no auto-reject, no auto-hire, no silent filter. The thread shows you what Praxy thought and why; the verdict is yours.
What we don't score on.
We do not score candidates on:
- accent
- appearance
- age
- gender identity
- disability
- national origin
- race
- religion
- sexual orientation
We also do not use pseudo-scientific traits like "personality from voice."
Your data, your workspace.
Candidate records and transcripts stay in your workspace. We do not train models on your customer data. Candidates may request deletion at any time and we honour the request within 30 days. Bias-audit details — including model lineage, evaluation methodology, and disparate-impact testing — are available on request to enterprise customers.
Compliance stance.
Praxy is designed with NYC Local Law 144 (automated employment decision tools), EEOC guidance on AI-assisted hiring, and GDPR Article 22 in mind. Because every consequential decision is made by a recruiter — not by Praxy alone — Praxy is never the sole automated decision-maker in any candidate outcome. We will publish independent bias-audit summaries on request for customers operating in jurisdictions that require them.
Questions? Reach us at hello@praxel.in.