I checked which of these 11 startups are still alive:

https://www.ai.nl/artificial-intelligence/11-dutch-startups-scaling-and-redefining-hr-and-recruitment-using-ai/ It seems all but one are, which is a nice score! I then dug into using AI for a while and read the following from HBR, by Jessica Kim-Schmid and Roshni Raveendhran, October 13, 2022.

Where AI Tools Can Go Wrong — and How to Mitigate This Risk

AI-driven tools are not one-size-fits-all solutions, however. Indeed, AI can be designed to optimize for different metrics and is only as good as the objective it is optimized for. Therefore, to leverage AI’s full potential for talent management, leaders need to consider what AI adoption and implementation challenges they may run into. Below, we describe key challenges as well as research-based mitigation strategies for each.

Low Trust in AI-Driven Decisions

People may not trust and accept AI-driven decisions — a phenomenon known as algorithm aversion. Research shows that people often mistrust AI because they don’t understand how AI works, it takes decision control out of their hands, and they perceive algorithmic decisions as impersonal and reductionistic. Indeed, one study showed that even though algorithms can remove bias in decision-making, employees perceived algorithm-based HR decisions as less fair compared to human decisions.

Mitigation strategies include:

Fostering algorithmic literacy: One way to reduce algorithm aversion is to help users learn how to interact with AI tools. Talent management leaders who use AI tools for making decisions should receive statistical training, for instance, that can enable them to feel confident about interpreting algorithmic recommendations.

Offering opportunities for decision control: Research suggests that when people have some control over the ultimate decision, even if minimal, they are less averse to algorithmic decisions. Moreover, people are more willing to trust AI-driven decisions in more objective domains. Therefore, carefully deciding which types of talent management decisions should be informed by AI, as well as determining how HR professionals can co-create solutions by working with AI-driven recommendations, will be critical for enhancing trust in AI.

AI Bias and Ethical Implications

While AI can reduce bias in decision-making, AI is not entirely bias-free. AI systems are typically trained using existing datasets, which may reflect historical biases. In addition to the infamous Amazon AI tool that disadvantaged women applicants, other examples of bias in AI include sourcing algorithms that pointedly targeted an audience of 85% women for supermarket cashier positions and an audience that was 75% Black for jobs at taxi companies. Given AI’s vulnerability to bias, applications of AI in talent management could produce outcomes that violate organizational ethical codes and values, ultimately hurting employee engagement, morale, and productivity.

Mitigation strategies include:

Creating internal processes for identifying and addressing bias in AI: To systematically mitigate bias in AI technologies, it is important to create internal processes based on how one’s organization defines fairness in algorithmic outcomes, as well as setting standards for how transparent and explainable AI decisions within the organization need to be. Leaders should also be cautious about setting fairness criteria that do not account for equity, particularly for vulnerable populations. To address this, leaders can consider including variables such as gender and race in algorithms and proactively set different criteria for different groups to address pre-existing biases.

Building diverse teams to design AI systems: Research indicates that more diverse engineering teams create less biased AI. By fostering diversity throughout AI design and implementation processes within their talent management function, organizations could draw on diverse perspectives to minimize AI bias.
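As an illustration of what the internal bias-identification process described above might start with, here is a minimal, hypothetical sketch of a group-disparity check using the common "four-fifths rule" as a first-pass screen. The function names, threshold, and data below are illustrative assumptions, not something taken from the article.

```python
# Hypothetical sketch: testing an AI screening tool's outcomes for
# group-level disparity using the "four-fifths rule" as a first-pass flag.
# Groups, counts, and the 0.8 threshold are illustrative assumptions.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 is a common signal that outcomes deserve
    closer review; it is not by itself proof of bias.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Example: screening outcomes per group as (selected, total screened)
outcomes = {"group_a": (40, 100), "group_b": (24, 100)}
ratio = disparate_impact_ratio(outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # prints 0.60 -> flag for review
```

A real process would go further than this single ratio, for example tracking the metric across hiring stages and over time, in line with how the organization defines fairness.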

Erosion of Employee Privacy

Organizations have deployed AI technologies to track employees in real time. If implemented poorly, these tools can severely erode employee privacy and lead to increased employee stress, faster burnout, deteriorated mental health, and a decreased sense of agency. Reports show that the Covid-19 pandemic drove a huge uptick in employer adoption of these tracking technologies, with more than 50% of large employers currently using AI tools for tracking.

Mitigation strategies include:

Being transparent about the purpose and use of tracking technology: Gartner Research reveals that the percentage of employees who are comfortable with certain forms of employer tracking has increased over the past decade. The increase in acceptance is much higher when employers explain the reasoning for tracking, growing from 30% to 50% when organizational leaders transparently discussed why these tools were being used.

Making tracking informational, not evaluative: Perhaps counter to intuition, recent research has discovered that employees are more accepting of tracking when it is conducted solely by AI without any human involvement. This work shows that technological tracking allows employees to get informational feedback about their own behavior without fear of negative evaluation. When tracking tools are deployed primarily for monitoring rather than to offer information to employees about their behaviors, they erode privacy and reduce intrinsic motivation. Therefore, the key consideration for leaders should be whether tracking can enhance informational outcomes for employees without causing evaluation concerns.

Potential for Legal Risk

According to the American Bar Association, employers could be held liable even for unintentional employment discrimination enacted by AI-driven systems. Additionally, the state, national, and international laws governing employers’ and employees’ AI-related rights and responsibilities are constantly evolving.

Mitigation strategies include:

Understanding current legal frameworks regulating AI use: While the current approach to AI regulation in the U.S. is still in its early stages, the primary focus is on enabling accountability, transparency, and fairness of AI. The National AI Initiative Act (now law) and the Algorithmic Accountability Act of 2022 (pending) are two national-level frameworks that have been initiated to regulate AI use in organizations. But states are currently at the forefront of enacting AI regulations, so it will be important for leaders to stay abreast of changing regulations, especially when operating businesses in multiple locations.

Establishing a proactive risk management program: The wider policy landscape governing the use of AI for sensitive personnel decisions is still evolving. But organizations that hope to adopt AI tools to drive value in talent management should actively monitor pending legislation and create proactive risk management practices, such as designing AI systems with appropriate controls at various stages of the model development process.

. . .

Given the role that excellent talent management plays in maintaining competitiveness, especially in light of the Great Resignation, leaders should proactively consider how AI tools that target talent management pain points can drive impact. There are significant implementation challenges to overcome before these tools deliver their full value, so leaders should evaluate them judiciously. AI can make managing talent easier and fairer, but it is not as simple as plug and play, and leaders who want to get the most out of these tools need to remember that.