📚 Book Tie-In: High-Value Leadership, Mastering a High-Value Company Culture, and Rise & Thrive
By Che’ Blackmon, DBA Candidate | Founder & CEO, Che’ Blackmon Consulting
🔍 Introduction: The Algorithm Is Not Neutral
Artificial intelligence is transforming human resources at a pace that few predicted even five years ago. From resume screening and candidate ranking to performance evaluation and attrition prediction, AI-powered people analytics tools promise to make HR faster, more efficient, and more objective. That final promise of objectivity is the one that should keep every HR leader up at night.
Because AI is not objective. It never has been.
AI systems learn from historical data. When that data reflects decades of biased hiring decisions, inequitable promotion patterns, and culturally narrow definitions of “high performance,” the algorithm does not correct those patterns. It automates them. It scales them. And it does so with a veneer of mathematical authority that makes the bias harder to detect and harder to challenge.
A landmark 2025 study published in PNAS Nexus tested five leading large language models on approximately 361,000 fictitious resumes where candidates’ qualifications were identical but names signaled different racial and gender identities. The results confirmed what many practitioners have long suspected: AI hiring tools exhibit systematic bias along racial and gender lines. At critical hiring thresholds, these biases could affect hundreds of thousands of workers. As researchers at the University of Washington demonstrated in a separate 2025 study, human decision makers who interact with biased AI systems tend to mirror and amplify those biases, creating a compounding effect that deepens inequity rather than resolving it.
For Black women in corporate spaces, this is not an abstract concern. It is a direct threat to career advancement, equitable compensation, and leadership access in organizations that may believe they have eliminated bias simply because they deployed a data tool.
As the founder and CEO of Che’ Blackmon Consulting, with over 24 years of progressive HR leadership experience across the manufacturing, automotive, healthcare, nonprofit, quick-service, and professional services industries, and as a DBA Candidate researching predictive analytics for organizational culture transformation, I sit at the intersection of people strategy and data ethics every day. In “Mastering a High-Value Company Culture,” I wrote that culture is the lifeblood of any organization. When we allow biased algorithms to make decisions about that lifeblood without scrutiny, we are not innovating. We are automating inequality.
💡 Understanding AI Bias: Where It Comes From and Why It Persists
AI bias in people analytics is not a glitch or an edge case. It is a predictable outcome of how these systems are designed, trained, and deployed. Understanding the sources of bias is the first step toward preventing its harmful effects.
🧩 The Three Sources of Algorithmic Bias
- Training Data Bias: AI systems learn patterns from historical data. If your organization’s past hiring, promotion, and performance data reflects systemic inequity, which virtually all organizations’ data does, the algorithm will treat those inequitable patterns as the definition of success. A system trained on a decade of promotion data where white men advanced disproportionately will learn to associate the characteristics of white men with high potential, regardless of whether those characteristics are actually predictive of performance. A minimal code sketch after this list shows how that propagation happens.
- Algorithm Design Bias: The engineers who build AI tools make choices about which variables to include, how to weight them, and what outcomes to optimize for. These choices are human decisions embedded in code, and they carry the unconscious biases of their creators. Research published in Humanities and Social Sciences Communications confirms that algorithmic bias stems not only from limited data sets but also from the perspectives and assumptions of algorithm designers themselves.
- Deployment Context Bias: Even a well-designed AI tool can produce biased outcomes when deployed in an organizational culture that does not critically evaluate its recommendations. When HR leaders treat AI output as authoritative rather than advisory, they surrender the human judgment that is essential for equitable decision making. The tool becomes an oracle rather than an input, and the bias it carries becomes invisible.
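To make the training data mechanism concrete, here is a minimal Python sketch, not any vendor’s actual model, that trains a simple classifier on simulated, historically biased promotion labels. The group labels, penalty size, and feature names are invented for illustration; the point is that two candidates with identical qualifications end up with different scores.

```python
# A minimal sketch (simulated data, not any vendor's model) showing how a
# classifier trained on historically biased promotion labels reproduces
# that bias for candidates with identical qualifications.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Simulated history: qualification is identically distributed across two
# groups, but past promotion decisions penalized group 1.
group = rng.integers(0, 2, size=n)              # 0 = majority, 1 = minority
skill = rng.normal(0, 1, size=n)                # identical across groups
bias_penalty = 1.5 * group                      # historical disadvantage
promoted = (skill - bias_penalty + rng.normal(0, 0.5, n) > 0).astype(int)

# Train on features that include group membership (or any proxy for it).
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, promoted)

# Score two candidates with the SAME qualification, different groups.
same_skill = 0.5
for g in (0, 1):
    p = model.predict_proba([[same_skill, g]])[0, 1]
    print(f"group={g}, skill={same_skill}: P(promote) = {p:.2f}")
# The model learned the historical penalty: identical skill, unequal scores.
```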
In “High-Value Leadership: Transforming Organizations Through Purposeful Culture,” I discuss how Emotional Intelligence, the third pillar of the High-Value Leadership™ framework, requires leaders to remain attuned to the impact of their decisions on all people, not just the majority. This principle applies directly to AI deployment: emotionally intelligent leaders do not accept algorithmic recommendations without asking who those recommendations might disadvantage.
🏢 Case Studies: When AI Gets It Wrong
📋 Case Study 1: The Resume Screener That Learned Discrimination
One of the most widely reported examples of AI bias in HR comes from a major technology company that developed an internal AI recruiting tool trained on ten years of historical hiring data. The system learned to downgrade resumes that contained the word “women’s,” as in “women’s chess club captain” or “women’s college,” and penalized graduates of all-women’s universities. The reason was straightforward: the historical data reflected a workforce that was predominantly male, so the algorithm learned to treat indicators of femaleness as negative signals. The company eventually scrapped the tool entirely, but the lesson remains: an AI system trained on biased data does not eliminate bias. It codifies it.
📋 Case Study 2: The Class Action That Changed the Conversation
In one of the most significant AI employment cases to date, a federal court conditionally certified a class action on behalf of applicants over 40 who were rejected by a major HR software vendor’s automated screening system. The lawsuit alleges that the AI tool systematically filtered out qualified candidates based on age, and by June 2025 the certified class could number in the millions. The legal precedent being established is clear: employers cannot outsource accountability to algorithms. As one labor attorney put it, there is no defense in claiming that AI made the decision. If AI made the decision, the employer made the decision.
📋 Case Study 3: The People Analytics Dashboard That Missed the Whole Picture
A healthcare organization deployed a people analytics platform to predict which employees were at highest risk of voluntary turnover. The tool identified several variables correlated with attrition, including commute distance, salary band, and tenure. What it did not capture was the emotional tax: the invisible burden of navigating microaggressions, code switching, and representational labor that disproportionately affects Black women and other traditionally overlooked employees.
As a result, the system predicted turnover risk based on factors that were proxies for demographic patterns rather than actual drivers of disengagement. Employees who lived farther from headquarters, were in lower salary bands, and had shorter tenure were flagged as high risk. These demographic proxies disproportionately affected women of color, who were then subjected to retention interventions designed for a problem they did not actually have, while the real drivers of their potential departure, which were cultural and relational, went entirely unaddressed.
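One way analytics teams can probe for this failure mode is a simple proxy check: before trusting a feature as a “neutral” predictor, compare how it is distributed across demographic groups. The sketch below is a hedged illustration with hypothetical column names (commute_miles, salary_band, race_gender), not the platform from this case study.

```python
# A hedged sketch of a proxy check: flag model inputs whose values differ
# sharply across demographic groups, since a model can use them as
# stand-ins for identity even when identity itself is excluded.
# Column names are illustrative, not from any specific platform.
import pandas as pd

def proxy_report(df: pd.DataFrame, features: list[str], group_col: str) -> pd.DataFrame:
    """Compare each feature's mean by demographic group; large gaps suggest a proxy."""
    grouped = df.groupby(group_col)[features].mean()
    # Gap between highest- and lowest-mean groups, in units of the feature's std.
    spread = (grouped.max() - grouped.min()) / df[features].std()
    return spread.sort_values(ascending=False).rename("std_gap").to_frame()

# Hypothetical attrition-model inputs:
# print(proxy_report(hr_df, ["commute_miles", "salary_band", "tenure_years"], "race_gender"))
```

A feature with a large standardized gap is not automatically off-limits, but it demands exactly the qualitative scrutiny this case study shows was missing.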
In “Rise & Thrive: A Black Woman’s Blueprint for Leadership Excellence,” I describe the hypervisibility and invisibility paradox: the reality that Black women are scrutinized when they deviate from norms yet invisible when they need support. AI tools that rely on quantitative proxies without qualitative context risk reinforcing this paradox at scale.
📊 The Disproportionate Impact on Black Women and Traditionally Overlooked Talent
AI bias does not affect all employees equally. Its impact concentrates most heavily on those who are already underrepresented in the data that trains the system.
Black women occupy a unique intersection where racial bias and gender bias compound. Research from the PNAS Nexus study confirms that AI models exhibit complex intersectional biases that cannot be predicted from examining race or gender alone. A system might appear equitable when analyzed by race in isolation and by gender in isolation, yet produce significantly biased outcomes at the intersection of Black and female. This means that standard bias audits, which typically evaluate one demographic dimension at a time, can miss the very disparities that matter most.
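In practice, an intersectional audit can start with something as simple as computing selection rates per race-and-gender intersection and applying the EEOC’s four-fifths rule of thumb. The sketch below assumes a hypothetical screening dataset with race, gender, and advanced columns; it illustrates the technique and is not a complete compliance tool.

```python
# A minimal sketch of an intersectional adverse-impact check using the
# EEOC "four-fifths" rule of thumb. The key move: selection rates are
# computed for each race x gender intersection, not each dimension alone.
# Column names are illustrative assumptions.
import pandas as pd

def adverse_impact(df: pd.DataFrame, group_cols: list[str], selected_col: str) -> pd.DataFrame:
    """Selection rate per group, expressed as a ratio of the highest-rate group."""
    rates = df.groupby(group_cols)[selected_col].mean().rename("selection_rate").to_frame()
    rates["impact_ratio"] = rates["selection_rate"] / rates["selection_rate"].max()
    rates["flag_4_5ths"] = rates["impact_ratio"] < 0.8   # below 80% warrants review
    return rates.sort_values("impact_ratio")

# Single-dimension audits can pass while the intersection fails:
# adverse_impact(screening_df, ["race"], "advanced")             # may look fine
# adverse_impact(screening_df, ["gender"], "advanced")           # may look fine
# adverse_impact(screening_df, ["race", "gender"], "advanced")   # can reveal the gap
```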
The practical consequences are significant. When AI tools screen resumes, they may penalize naming conventions, educational institutions, or extracurricular activities that correlate with Black identity. When they evaluate performance, they may weight behavioral indicators that reflect dominant cultural norms of “leadership presence” or “executive communication,” which, as I discuss extensively in my previous articles on code switching and the invisible tax, are often proxies for cultural conformity rather than actual capability.
When AI tools predict promotion readiness, they may rely on variables like “visibility to senior leaders” or “high profile project assignments,” both of which Catalyst’s research confirms are systematically less accessible to Black women. The algorithm does not create the inequity. It inherits, accelerates, and legitimizes it.
The Women in the Workplace 2025 report found that for every 100 men promoted to manager, only 60 Black women received the same advancement. If AI tools are being used to inform those promotion decisions and those tools are trained on data that reflects this gap, the system will not close the gap. It will treat the gap as the norm and optimize around it.
⚖️ The Emerging Legal and Regulatory Landscape
The regulatory environment around AI in employment is evolving rapidly, and HR leaders who are not tracking these developments are exposing their organizations to significant legal risk.
New York City’s Local Law 144 already requires annual bias audits for automated employment decision tools and public reporting of results. The Colorado AI Act, effective June 2026, will require developers and users of AI hiring tools to exercise reasonable care to prevent algorithmic discrimination, including annual impact assessments and risk documentation. In California, new regulations finalized in October 2025 clarify how existing anti-discrimination laws apply to AI tools used in hiring.
The European Union’s AI Act, which took effect in August 2024, classifies HR tools as “high risk” and imposes strict compliance requirements including transparency obligations, human oversight mandates, and fines of up to 35 million euros or 7% of global turnover. The Act’s extraterritorial reach means that U.S. companies using AI tools on EU candidates are subject to its provisions. Emotion recognition technology in job interviews became illegal in the EU as of February 2025.
The legal principle emerging from active litigation is unambiguous: employers cannot disclaim responsibility for discriminatory outcomes by attributing decisions to algorithms. If the tool discriminates, the employer discriminates. This principle applies whether the tool is built internally or purchased from a vendor, and whether the employer understands how the algorithm works or not.
✨ The High-Value Leadership™ Framework: Governing AI With Purpose
Deploying AI in people analytics is not simply a technology decision. It is a culture decision. The High-Value Leadership™ framework provides a structured approach for ensuring that AI tools serve the organization’s highest values rather than undermining them.
🎯 Pillar 1: Purpose-Driven Vision
Before deploying any AI tool, organizations must ask: What is this tool’s purpose, and does that purpose align with our stated commitment to equity and inclusion? If the answer is efficiency without equity, the tool should not be deployed. Purpose-driven vision means that AI serves the mission, not the other way around. Every data tool should be evaluated against a clear articulation of what the organization values most, and if the tool cannot demonstrably advance those values, it is the wrong tool.
🌍 Pillar 2: Stewardship of Culture
Culture stewardship requires HR leaders to understand that AI tools do not exist outside of culture. They absorb it, reflect it, and amplify it. Stewards of culture insist on knowing what data the tool was trained on, what outcomes it optimizes for, and who was included in the design process. They audit outputs regularly, disaggregated by race, gender, and intersecting identities. In “Mastering a High-Value Company Culture,” I emphasize that culture requires relentless commitment. In the age of AI, that commitment must extend to the algorithms that increasingly shape employee experience.
💜 Pillar 3: Emotional Intelligence
Emotionally intelligent leaders recognize that data is not neutral and that efficiency is not the same as equity. They maintain human judgment as the final authority in people decisions, treating AI output as one input among many rather than as an unquestionable directive. They ask: Who does this recommendation benefit, who does it disadvantage, and what context is the algorithm unable to see?
⚖️ Pillar 4: Balanced Responsibility
Balanced responsibility means distributing accountability for AI outcomes across the organization, from the CHRO to the vendor to the data team to the line managers who act on AI recommendations. Nobody gets to say, “The algorithm decided.” Everyone who touched the process shares responsibility for the outcome.
🤝 Pillar 5: Authentic Connection
AI can process data at scale, but it cannot build relationships. Authentic connection reminds leaders that the people behind the data points have stories, contexts, and experiences that no algorithm can fully capture. The most critical people decisions (promotions, terminations, development investments, and succession planning) should always include direct human engagement with the individuals affected. Technology should inform these decisions, never replace the human connection at their core.
📋 Actionable Takeaways: Deploying AI Responsibly
🏠 For CHROs and Senior HR Leaders
- Establish an AI Ethics Review Board within your HR function that includes diverse representation, particularly from the demographic groups most likely to be affected by algorithmic bias. This board should review every people analytics tool before deployment and conduct annual audits of tools already in use.
- Require vendors to provide transparent documentation of training data, algorithm design, bias testing methodology, and demographic impact analysis. If a vendor cannot or will not provide this information, that is a disqualifying factor, not a negotiable one.
- Mandate intersectional bias audits. Standard audits that examine race and gender separately will miss the compounding effects at intersections such as Black women, Latina women, women with disabilities, and other multiply marginalized groups. Demand audits that examine outcomes at these intersections specifically.
- Create a Human-in-the-Loop policy that requires a human decision maker to review and approve any AI-generated recommendation related to hiring, promotion, termination, or compensation before it is acted upon. The algorithm advises. The human decides. A minimal sketch of such a review gate follows this list.
- Track and report AI impact on traditionally overlooked talent with the same rigor you apply to financial reporting. If your AI tools are producing disparate outcomes, you need to know immediately, not at the end of a quarterly review cycle.
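As a hedged illustration of the Human-in-the-Loop policy above, the sketch below models a review gate in which no AI recommendation on a high-stakes decision proceeds without a named reviewer’s documented sign-off, and every decision is captured for audit. The field names and decision types are assumptions for illustration, not any specific product’s schema.

```python
# A minimal Human-in-the-Loop sketch: the algorithm's output is a
# recommendation object, and action requires a logged human review.
# Field names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    candidate_id: str
    decision_type: str      # e.g., "hire", "promote", "terminate", "compensation"
    ai_score: float
    ai_rationale: str       # explanation required from the vendor or tool

@dataclass
class ReviewRecord:
    recommendation: Recommendation
    reviewer: str
    approved: bool
    reviewer_notes: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def review_gate(rec: Recommendation, reviewer: str, approved: bool, notes: str) -> ReviewRecord:
    """The algorithm advises; this record proves a human decided."""
    if not notes.strip():
        raise ValueError("Reviewer must document the reasoning, not just click approve.")
    return ReviewRecord(rec, reviewer, approved, notes)
```

The design choice worth copying is not the code itself but the constraint it encodes: approval without documented reasoning is rejected, so “the algorithm decided” can never appear in the audit trail.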
👥 For HR Practitioners and People Analytics Teams
- Learn to ask critical questions about the tools you use. What data was the model trained on? What outcomes does it optimize for? Was it tested for bias across intersecting demographic categories? If you cannot answer these questions, you are deploying a tool you do not understand.
- Build qualitative context into your analytics practice. Data tells you what is happening. It does not tell you why. Pair quantitative dashboards with employee listening sessions, focus groups, and one-on-one conversations that capture the cultural dynamics no algorithm can measure.
- Advocate for transparency in your organization’s AI governance. Push for clear documentation of which people decisions are AI informed, how the AI influences those decisions, and who is accountable for outcomes.
- Stay current on the evolving legal landscape. New York, Colorado, California, Illinois, and the EU have all enacted or are enacting regulations that affect AI in employment. Ignorance of these regulations is not a defense.
- Partner with external experts who specialize in equitable AI deployment and culture transformation to bring independent perspective to your organization’s AI governance practice.
💪 For Black Women and Traditionally Overlooked Professionals
- Know your rights. If you suspect an AI tool played a role in a hiring decision, performance evaluation, or promotion outcome that disadvantaged you, document everything. Ask directly whether automated tools were used in the process. Regulatory frameworks are increasingly requiring employers to disclose this information.
- Advocate for transparency in your organization’s use of AI. Ask your HR leadership: What AI tools are being used in people decisions? Have they been audited for bias? Are the audit results available to employees?
- Build your board of advocates. Sponsorship and human relationships remain the most powerful counterbalance to algorithmic bias. A sponsor who speaks your name in decision making rooms provides something no algorithm can: context, advocacy, and the recognition that you are more than a data point.
- Use the SHIELD Resilience Strategy from “Rise & Thrive” to protect your energy as you navigate environments where technology and bias intersect. Self-awareness, healthy coping, internal resources, external support, learning orientation, and daily practices provide the foundation for sustained advocacy.
- Connect with professional networks and communities that are actively engaged in AI equity conversations. The more informed you are about how these tools work, the more effectively you can advocate for yourself and others.
📈 Current Trends and Best Practices
The field of AI in people analytics is evolving rapidly, and several trends are shaping how responsible organizations approach deployment.
First, intersectional auditing is becoming the gold standard. Leading organizations are moving beyond single-dimension bias checks to evaluate AI outcomes at the intersections of race, gender, age, disability, and other identity dimensions. This approach aligns with the research from PNAS Nexus demonstrating that AI biases are often most acute at demographic intersections.
Second, the concept of “algorithmic accountability” is gaining traction in corporate governance. Forward-thinking boards are beginning to treat AI risk with the same seriousness they apply to financial and cybersecurity risk, incorporating AI ethics into enterprise risk management frameworks and requiring regular reporting to the board on AI-related outcomes and incidents.
Third, the demand for explainable AI is increasing. Organizations are insisting that vendors provide not just predictions but explanations: why did the system recommend this candidate, flag this employee, or score this performance review the way it did? Black-box models that cannot explain their reasoning are increasingly unacceptable to regulators, litigators, and employees alike.
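What does an acceptable explanation look like in practice? At minimum, a per-feature breakdown of a single score. The sketch below uses a linear model, which is transparent by construction; the feature names and weights are invented for illustration, and for genuinely black-box models you would ask the vendor for an equivalent attribution method (such as SHAP values).

```python
# A minimal sketch of the kind of explanation worth demanding: per-feature
# contributions to one candidate's score. Feature names and weights are
# illustrative, not from any real screening tool.
import numpy as np

feature_names = ["years_experience", "skills_match", "referral", "employment_gap"]
coef = np.array([0.40, 0.90, 0.30, -0.60])    # weights from a fitted linear model
candidate = np.array([5.0, 0.8, 1.0, 1.0])    # one candidate's feature values

# For a linear model, contribution = weight * feature value.
contributions = coef * candidate
for name, c in sorted(zip(feature_names, contributions), key=lambda x: -abs(x[1])):
    print(f"{name:>18}: {c:+.2f}")
# If "employment_gap" dominates the score, ask who that penalizes in practice.
```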
Fourth, human-centered AI governance is emerging as a best practice. Rather than treating AI as a replacement for human judgment, leading organizations are designing workflows where AI augments human decision making while preserving human authority over high-stakes people decisions. This approach aligns directly with the Balanced Responsibility pillar of the High-Value Leadership™ framework.
Finally, there is growing recognition that AI equity requires cultural transformation, not just technical fixes. Debiasing an algorithm without debiasing the culture that surrounds it produces marginal results at best. The most effective organizations are pairing AI governance with the kind of deep cultural work I describe in “Mastering a High-Value Company Culture”: examining values, behaviors, and systems holistically rather than treating technology as an isolated variable.
❓ Discussion Questions for Reflection and Team Dialogue
Whether you are a CHRO evaluating your AI strategy, an HR practitioner working with analytics tools daily, or a professional navigating an organization that uses AI in people decisions, these questions are designed to spark meaningful conversation and purposeful action.
- What AI tools does your organization currently use in hiring, performance evaluation, promotion, or workforce planning? Have those tools been audited for bias, and are the audit results disaggregated by intersecting demographics?
- Who in your organization is accountable for the outcomes produced by AI people analytics tools? Is accountability clearly defined, or does it default to “the algorithm decided”?
- How does your organization balance the efficiency benefits of AI with the equity imperative of ensuring that traditionally overlooked talent is not systematically disadvantaged by algorithmic recommendations?
- If your AI hiring tool was trained on your organization’s historical data, what biases might that data contain? How would those biases manifest in the tool’s recommendations?
- Does your organization have a Human-in-the-Loop policy that requires human review of AI-generated recommendations before they are acted upon for high-stakes people decisions?
- How are you ensuring that the qualitative dimensions of employee experience, such as the emotional tax, code switching pressure, and the hypervisibility and invisibility paradox, are captured alongside the quantitative data that AI tools process?
- What would it look like for your organization to treat AI ethics with the same governance rigor it applies to financial reporting and cybersecurity risk?
🚀 Next Steps: From Awareness to Accountable Action
AI in people analytics is not going away. The question is not whether your organization will use these tools. It is whether you will use them responsibly, equitably, and with the kind of human-centered governance that protects every employee, especially those the system has historically overlooked.
- Share this article with your CHRO, your people analytics team, and your AI vendor. The conversation about responsible deployment must include every stakeholder in the chain.
- Pick up a copy of “High-Value Leadership: Transforming Organizations Through Purposeful Culture” or “Rise & Thrive: A Black Woman’s Blueprint for Leadership Excellence” to explore the leadership frameworks and cultural strategies discussed here in greater depth. All titles are available at https://books.by/blackmons-bookshelf.
- Connect with Che’ Blackmon Consulting for a consultation on building equitable AI governance, conducting culture audits that pair data analytics with human insight, and developing leadership strategies that ensure technology serves your people rather than sorting them. Whether you need fractional HR leadership, culture transformation advisory, or strategic guidance on responsible AI deployment, we meet you where you are.
✨ Ready to Deploy AI That Serves Your People, Not Sorts Them? ✨ Che’ Blackmon Consulting specializes in fractional HR leadership and culture transformation for organizations navigating the intersection of technology, equity, and human-centered leadership. 📧 admin@cheblackmon.com 📞 888.369.7243 🌐 cheblackmon.com 📚 Explore Che’s Books: books.by/blackmons-bookshelf 📥 Download the Free SHIELD Resilience Strategy Guide: Get It Here
📖 About the Author
Che’ Blackmon is a DBA Candidate in Organizational Leadership and the Founder and CEO of Che’ Blackmon Consulting, a fractional HR and culture transformation consultancy. With over 24 years of progressive HR leadership experience across the manufacturing, automotive, healthcare, nonprofit, quick-service, and professional services industries, Che’ is the author of three books: “Mastering a High-Value Company Culture,” “High-Value Leadership: Transforming Organizations Through Purposeful Culture,” and “Rise & Thrive: A Black Woman’s Blueprint for Leadership Excellence.” She is the creator of the High-Value Leadership™ framework and host of the “Unlock, Empower, Transform” podcast and “Rise & Thrive” YouTube series. Her doctoral research focuses on predictive analytics for organizational culture transformation, and her work centers on building purposeful cultures where traditionally overlooked talent can lead, grow, and thrive.
#AIBias #PeopleAnalytics #HighValueLeadership #AlgorithmicBias #HRTech #ResponsibleAI #WorkplaceEquity #AIinHR #CheBlackmonConsulting #BlackWomenInLeadership #HumanInTheLoop #AIEthics #HiringBias #CultureTransformation #DEI #IntersectionalEquity #HRLeadership #DataDrivenHR #AIGovernance #PurposeDrivenLeadership