Description
About Alvarez & Marsal
Alvarez & Marsal (A&M) is a global consulting firm with over 10,000 entrepreneurial, action- and results-oriented professionals in over 40 countries. We take a hands-on approach to solving our clients' problems and assisting them in reaching their potential. Our culture celebrates independent thinkers and doers who positively impact our clients and shape our industry. The collaborative environment and engaging work, guided by A&M's core values of Integrity, Quality, Objectivity, Fun, Personal Reward, and Inclusive Diversity, are why our people love working at A&M.
The Team
You will be joining a newly formed AI & Knowledge function led by a Chief AI & Knowledge Officer (CAIKO) who is moving fast. The team is global, still being staffed, and operating with urgency. The RAI Leader will be one of the first new hires into this team, alongside a Head of Technology (AI & KM) and change management leadership.

The Responsible AI Leader is a firm-wide role with global scope, responsible for orchestrating the firm's RAI activities. Based in India and reporting to the AI & Knowledge Office Operations Leader, this role will serve as the firm's primary activator on responsible AI. The role spans governance, ethics, risk, and regulatory compliance across all internal and client-facing AI activity, regardless of geography.

This is a builder role with an urgent mandate. The firm has an active and expanding portfolio of approved AI tools, published use guidelines, and a nascent Center of Excellence (COE), but the operational infrastructure for responsible AI does not yet exist. The person hired into this role will need to move quickly: orienting fast, identifying the highest-priority gaps, and delivering tangible governance outputs within their first quarter.
How you will contribute
1. AI Governance & Policy
- Working in partnership with the CAIKO, Risk, and GSO, and with the support of the AI Operating Committee (AIOC), help design and operationalize the firm's global Responsible AI framework, including policies, standards, intake processes, use case approvals, and risk escalation procedures.
- Serve as the primary staff lead for the AIOC's RAI sub-committee: prepare materials, manage the use case approval pipeline, track escalations, and ensure decisions are documented and actioned.
- Contribute to the AIOC's Vendor Review sub-committee by providing RAI assessments of proposed AI tools and partnerships, evaluating data residency, confidentiality, bias, and compliance risk.
- Own and maintain the firm's AI Responsible Use guidelines, updating them as the regulatory landscape evolves across key jurisdictions (EU AI Act, NIST AI RMF, ISO/IEC 42001, GDPR, India DPDP Act, and others).
- Define the RAI function's charter, roadmap, and KPIs in collaboration with the CAIKO; build the operating model for how RAI integrates with risk, delivery, legal, compliance, and IT security teams globally.
- Serve as a firm-wide subject matter expert on AI governance, policy, and responsible use strategy.
2. Ethics & Bias Review
- Design and lead a repeatable pre-deployment ethics review process for AI models and tools, including fairness audits, bias testing, impact assessments, documentation standards, escalation paths, and sign-off criteria.
- Build and strengthen client AI case review processes ensuring that AI used in client engagements meets appropriate standards for transparency, bias, accuracy, and data handling before delivery.
- Build playbooks and toolkits that embed responsible AI principles into project delivery lifecycles across all practice areas and geographies.
- Partner closely with the Change Management function to weave RAI messaging into the firm's broader AI adoption and culture change agenda, making responsible use a visible, positive part of the firm's AI transformation, not a compliance afterthought.
3. AI Risk & Audit
- Define and manage a global AI risk taxonomy and register; assess risk across the full AI lifecycle (data sourcing, model development, deployment, monitoring, and decommissioning).
- Establish audit and monitoring mechanisms for production AI systems, including tracking for model drift, performance degradation, data handling violations, and regulatory compliance gaps.
- Collaborate with Legal, Compliance, and IT Security to ensure all AI deployments meet applicable data privacy, security, and regulatory requirements across jurisdictions.
- Prepare and deliver executive-level risk reporting on the firm's AI risk posture for the AIOC and, where required, for escalation to the AI Oversight Committee or client audiences.
Qualifications
- 8–12 years of total experience, with at least 2 years focused on AI governance, responsible AI, technology risk, or a closely related discipline.
- Demonstrated experience building a governance framework, risk program, or compliance function. Prior experience standing up a new function or practice is strongly preferred.
- Working knowledge of AI/ML concepts, model development lifecycles, enterprise AI tools (including LLMs), and common bias and fairness methodologies.
- Solid command of key regulatory and standards frameworks relevant to a global consulting firm: EU AI Act, NIST AI RMF, ISO/IEC 42001, GDPR, and India's DPDP Act.
- Ability to operate effectively across geographies and time zones; comfortable collaborating with a primarily US-based leadership team from an India-based position.
- Strong written and verbal communication skills; able to translate complex risk and governance concepts for both technical practitioners and C-suite audiences.
- Proven ability to influence without authority in a matrixed, fast-paced professional services environment.
- Prior experience in a consulting or professional services firm, either in an internal RAI/governance function or advising clients on AI strategy, risk, or compliance.
- Experience supporting or preparing materials for executive governance bodies (risk committees, operating committees, or board-level forums); comfort operating at and above the C-suite level.
- Hands-on familiarity with enterprise AI platforms and LLM-based tools (e.g., Microsoft Copilot, ChatGPT, Claude, or comparable platforms), including their data handling, residency, and confidentiality considerations.
- Experience developing employee-facing AI policies and guidelines and driving adoption across a distributed workforce.
- Relevant certifications such as GRCP, CIPP, CRISC, or emerging Responsible AI credentials (e.g., IEEE CertifAIEd, Trustworthy AI).
Your journey at A&M
We recognize that our people are the driving force behind our success, which is why we prioritize an employee experience that fosters each person's unique professional and personal development. Our robust performance development process promotes continuous learning, rewards your contributions, and reinforces a culture of meritocracy. With top-notch training and on-the-job learning opportunities, you can acquire new skills and advance your career. We prioritize your well-being, providing benefits and resources to support you on your personal journey. Our people consistently highlight the growth opportunities, our unique entrepreneurial culture, and the fun we have together as their favorite aspects of working at A&M. The possibilities are endless for high-performing and passionate professionals.