Adopting a balanced human-AI strategy opens up numerous opportunities, but businesses need to identify risks and any AI-specific gaps to ensure best practices in its deployment and a sound governance framework for its ongoing use, authors Tess Lumsdaine and Jonathan Isaacs suggest.
• Tess Lumsdaine, Head of Baker McKenzie’s Hong Kong Employment & Compensation Practice (pictured below, left)
• Jonathan Isaacs, Head of Baker McKenzie’s Asia Pacific Employment & Compensation Practice (pictured below, right)
Artificial intelligence has had a transformative impact on various industries in 2023. While the public is fascinated by AI's applications in content creation, such as in generating academic essays or hyper-realistic images, its integration into daily operations within businesses takes place in the background.
In the field of recruitment and HR management, AI is currently used as a tool to enhance existing processes and functions: another step up in the move to digitalise. Adopting a balanced human-AI strategy opens up numerous opportunities, but businesses need to identify risks and any AI-specific gaps to ensure best practices in its deployment and a sound governance framework for its ongoing use.
The boundless potential for AI in HR
The integration of AI into HR can boost productivity and free up resources from functions that are still largely manual. This can reshape how organisations manage their most valuable asset – their people. And the benefits do not stop there.
HR professionals have long striven to make data-supported and well-informed decisions. AI-driven algorithms have the capability to analyse vast amounts of data, providing valuable insights on patterns and trends. For example, AI tools may assist in performance reviews by collating data from qualitative email or messenger feedback as well as quantitative metrics, freeing up time for HR and managers to focus on coaching and career planning conversations with employees. They may also assist in identifying issues related to high employee turnover, or in synthesising and analysing feedback from employee engagement or exit surveys.
AI offers time efficiency, particularly in recruitment-related activities. From drafting job descriptions to screening CVs or video applications and shortlisting candidates, many steps can be automated, thanks to AI's ability to analyse large volumes of information swiftly. AI tools may also be deployed to streamline the onboarding process or employee self-guided training by supporting employees to navigate systems and networks or to draft basic HR content, such as employee guides.
With sufficient historical data to establish robust and objective criteria, AI tools can effectively match candidates or existing employees with open roles, optimising the hiring or placement process and enhancing employee performance and business productivity. In recruitment, AI tools may already be useful for recruiting for well-defined roles with a large pool of candidates, such as internship programmes or junior administrative roles.
By relying on objective criteria and 'clean' data when building algorithms, AI has the potential to remove human bias from recruitment, promotion and performance processes, promoting diversity. This can result in a more inclusive and representative workforce.
What HR leaders should watch out for
Despite these benefits, it is important for HR leaders to consider a number of potential risks when deciding to incorporate AI into HR processes.
Data protection considerations
Collecting and processing personal data through AI, or for use by AI, requires consideration of data protection regulations and requirements. Many jurisdictions, such as Hong Kong and Mainland China, impose notification or consent obligations at the point of collection covering how the data will be used, to whom it will be transferred, and whether there will be cross-border transfers. If the use of data by AI is a new purpose, the business may need to revisit its data collection notifications or consents.
Information may also be fed through third-party systems which may not necessarily have the same safeguards on how such data is collected, used, managed and disclosed. It is essential for organisations to do due diligence on providers and put in place service agreements that include sufficient controls and protections.
AI-specific regulations
Deployment of AI in business operations is still at a nascent stage, and governments are still grappling with how to regulate it. Businesses will need to stay on top of AI-specific regulation as it comes into force.
Algorithmic bias or unlawful discrimination
While AI can reduce human-based biases, hiring decisions reached using AI may be open to challenge as unlawful discrimination if the AI models are trained on small or skewed data pools, unless the employer can comprehensively demonstrate that a protected attribute, such as gender, played no part in the decision. Employers who have relied on AI in decision-making without a comprehensive understanding of how it works may find this difficult to demonstrate.
Even in the absence of unlawful discrimination, AI tools may perpetuate existing preferences or affinity bias. For example, if AI models are trained only on historical company data, the tools will likely exhibit preferences for individuals whose characteristics mirror those of the organisation's existing employees.
Leakage of confidential and proprietary information
If employees use AI and scrape business data without the employer's knowledge, there may be risks to the business's intellectual property, or unapproved disclosure of its confidential information or trade secrets. If they have not already, HR should ensure that employees (including HR teams themselves) have been issued clear directions on permitted uses of AI tools.
Lack of trust and risks of activism
A key challenge will be striking a balance between deploying AI to enhance productivity and avoiding overreach that may erode trust with the workforce. A good example is the use of productivity trackers. These have long been used in certain sectors such as manufacturing; however, the rise of 'bossware' has seen office and remote workers subjected to more sophisticated monitoring, including having their keystrokes, clicks and idle time recorded.
Deployment of any technology should be proportionate to the specific issues the business is seeking to manage, and should be accompanied by clear communications with employees on how the technology works and what the data will be used for. Absent this, the result may be reduced worker engagement, collective action or strikes, or even legal action.
HR leaders should not be discouraged. The risks outlined aren't new or unique, but already exist in many of the existing functions that HR teams, or businesses more broadly, perform.
Do:
- Clearly communicate to candidates and employees if and how AI is to be used. Transparency will be important to foster trust.
- Conduct proper due diligence on any service providers or platforms being used and ensure any service terms in place are compliant with laws and provide adequate protection to the business.
- Ensure that AI models are trained on representative, diverse, and unbiased data. Regular monitoring and auditing of AI systems will be critical in mitigating bias and unlawful discrimination.
- Maintain channels for communication with candidates and employees so that queries or concerns can be raised and addressed at an early stage. This will assist with the smooth adoption and effective use of AI tools and decrease the likelihood of individual claims or collective action.
Don't:
- Neglect updating or implementing proper governance that guides the organisation on how AI tools can and should be used.
- Ignore data protection obligations, including in relation to notices / consents, before deploying AI tools.
- Over-rely on AI tools or automation, especially in complex or sensitive matters. Careful worker engagement and communications will be even more critical with the use of new technologies.
- Overlook communications and training for the workforce on the opportunities that AI brings for them. Such communication will also help allay concerns regarding job security which may otherwise destabilise employee groups.
Lead image / 123RF
Authors' photos / Provided