As AI reshapes human capital management, James Saxton, VP Global Product Ambassador at Dayforce, shares insights on how organisations can adopt AI thoughtfully – ensuring ethical use, regulatory alignment, and meaningful impact for both business and workforce.
This article is brought to you by Dayforce.
AI has evolved from an intriguing HR add-on to a transformative force. While its potential is clear, so are the risks. As such, HR leaders face a paradox: they must drive innovation and efficiency through AI, while also ensuring ethical use, fairness, and compliance in a tightening regulatory climate.
In such a charged environment, where boards are demanding shareholder value, employees expect agility, and regulators want accountability, responsible AI adoption isn’t optional. CHROs must move beyond experimentation and hype to build practical, people-first AI frameworks that can deliver value and earn trust.
6 challenges HR faces in adopting AI
While enthusiasm for AI is rising, so are the implementation headaches. Here are six of the most common roadblocks HR teams face when adopting AI:
- Compliance: Regional frameworks are constantly evolving, and AI solutions must be agile enough to adapt to these changes, helping organisations manage compliance today and tomorrow.
- Bias: Employers, legislators, and employees alike are asking: how fairly and accurately can AI make judgments, especially when the goalposts for addressing bias are constantly shifting?
- Ethics: Beyond compliance, organisations must ensure their use of AI does not cause harm to individuals or society, or compromise on fairness, transparency, and respect.
- Transparency and quality: The difficulty of understanding how AI arrives at its conclusions is a real concern, especially when AI drives decisions affecting people. Additionally, AI models can deteriorate over time without robust monitoring.
- Privacy: Given that AI solutions in HR operate on employee data, it's essential that privacy is embedded directly into the solution.
- Tech adoption: Increasing AI adoption means taking concrete steps to build the literacy, comfort, and creative thinking required for people to use the technology effectively.
To navigate these challenges successfully, HR needs a clear vision and a well-defined strategy for adopting AI thoughtfully and ethically.
As James Saxton, VP Global Product Ambassador at Dayforce, highlights, employees are increasingly seeking less friction in their work lives – whether it's greater flexibility in their careers or seamless, consumer-grade digital experiences like mobile self-service. All this while organisations face mounting challenges such as economic uncertainty and complex regulations.
How can we find balance between these business needs and employee expectations? Well, that’s where HR can truly shine. By harnessing AI strategically, HR leaders can simplify processes, personalise employee experiences, and build adaptive systems that prioritise both organisational resilience and workforce wellbeing.
Given his experience engaging key customers, partners, and industry influencers to shape actionable strategies, we probe Saxton a little further on the five non-negotiable principles for driving responsible, effective, and sustainable implementation of AI in human capital management (HCM). Let's dive into the insights.
Non-negotiable 1: Prepare your people
Saxton shares that, to drive AI adoption, organisations must take concrete steps to build the literacy, comfort, and creative thinking required for people to use the technology effectively.
"This is where employee readiness and upskilling come in. Before investing in any AI, it’s essential to involve your team, as they are the primary users of the technology,” he adds, advising organisations to start by creating tailored learning programmes for everyone who will interact with AI.
"This training should cover AI ethics and correct usage, tailored to individuals' roles and the way they are expected to utilise the system. It would help to offer learning options that fit employees’ learning styles and monitor their progress to help ensure understanding."
However, he notes, such targeted training can be hard to implement, so using a learning management system (LMS) embedded in the HCM solution can help empower employees and manage timelines and completion criteria, allowing teams to track learners’ progress and provide feedback.
But it's not just about theory. "Once training is set, building an AI 'sandbox' can help users test what they've learned in a controlled environment and use the software as they would in the real world. This will help them build practical skills, get comfortable with the tools, and learn to think creatively as they use them, so they're ready once the real-world solution is deployed," Saxton says.
These play zones can turn hesitant learners into confident AI champions.
Non-negotiable 2: Get real about compliance
Compliance in AI isn’t static — it’s a moving target that requires ongoing attention. From strict frameworks like the EU’s GDPR to new local regulations, organisations need AI solutions that not only meet today’s requirements but are designed to evolve with global regulations.
Saxton queries: “Is the AI solution agile enough to manage compliance today and tomorrow? This is critical, especially since legislation moves fast, but AI moves faster, and something that is legal today may become illegal in the future.”
True compliance means choosing AI tools that include processes for monitoring regulatory updates, adjusting policies as needed, and providing clear, auditable evidence of compliance across different regions. Relying on static certifications or promises isn’t enough.
Non-negotiable 3: Use AI only where and when you require it
When it comes to AI in HCM, a one-size-fits-all approach is a huge risk. Here are some ways you can stay ahead of the curve:
- Challenge the toggle trap: Avoid platforms with only on/off AI controls. Choose tools that allow you to tailor AI by role, function, or region.
- Respect individual agency: Not everyone needs AI all the time. Enable opt-ins, opt-outs, or contextual functionality to build trust and boost adoption.
- Governance without gridlock: Regulations change quickly. Use solutions with agile governance to adapt AI use without disrupting operations.
Some important questions Saxton urges leaders to ponder are:
- How specific can you be about where and when AI is used?
- Can employees opt in or out of having access to it?
- Can you turn the tool on and off depending on which regulatory jurisdiction it's being used in?
In short, when it comes to AI in HCM, flexibility isn’t just a feature — it’s a must-have strategy to unlock human potential while maintaining robust control.
Non-negotiable 4: Shatter the black box… where possible
AI must be explainable. With a lack of transparency ranking among the top challenges to AI deployment, particularly for third-party solutions, organisations should demand clarity on how models make decisions, both at a general level and in specific scenarios.
"It's important to ask: how does AI arrive at its conclusions? How does the model influence decisions, both specific and general? How does the system get consent? And how can we ensure quality over time, especially once the AI model is influenced by real-world data?" Saxton asks.
The best outcomes happen when AI and human insight work hand-in-hand. Here’s how that could happen:
- Keep your people involved, especially for sensitive or strategic decisions. AI should inform and assist human insight – not replace it.
- Ask the critical questions:
- How does the model reach its conclusions?
- Are users clearly informed when AI is in play?
- Is consent sought and recorded where customers are involved?
- Prioritise quality and guard against bias by partnering with providers who actively monitor and update AI models to ensure decisions stay accurate, fair, and defensible over time.
Non-negotiable 5: Protect everyone
Security, privacy, and transparency must be integral from the get-go. As Saxton says, “Your AI solution must uphold the highest standards of privacy as it will be dealing with the most sensitive part of your organisation, your people.
"It’s essential for AI solutions to follow ‘Privacy by Design’, a methodology that helps responsible AI solution providers create technology that can help to anticipate and prevent invasive events before they happen."
In addition to following the 'Privacy by Design' principle, AI solutions should be engineered to prevent breaches before they happen – from data collection to processing and storage.
HR also has a crucial role in fostering open dialogue about AI’s ethical use, its impact on people, and its limitations. Leaders should ask and verify the tough questions:
- Where exactly are AI models hosted?
- Is employee or customer data shared with third parties?
- Can individuals opt out of having their data used for training?
- Does the AI rely on public or proprietary data sets?
Before integrating AI tools into the workplace, organisations should not settle for vague assurances; instead, they should scrutinise how privacy and security safeguards are implemented. Protecting everyone means leaving nothing to chance.
The future: Toward a balanced, people-first AI strategy
Skills-based hiring and predictive talent models are becoming the new norm, while human-AI co-piloting is shifting from novelty to necessity. But success won't hinge on technology alone; it depends on how well organisations centre employee experience in their AI journey. That means empowering employees to make smarter decisions, automate the mundane, and focus on more strategic, creative, and value-driven work.
The new employee experience includes learning how to partner with AI, not compete with it.
"Much about the conversation around AI has centred around whether this technology can replace humans or if humans can delegate tasks to machines, but a lot of technology still needs the human touch. AI’s main role is still to understand and elevate work, not replace it," Saxton affirms.
To truly benefit from AI, organisations must help employees learn the steps of this "new tango" — the delicate balance between human and machine collaboration.
Conclusion
AI can unlock new levels of productivity, efficiency, and agility – when used the right way. Saxton drives home the importance of a true AI partner: one that allows HR professionals to work on the business and for employees in a strategically connected and fulfilling role.
"No more Job Descriptions, no more training material content, corporate policy documents, reporting metrics, and so on – all this being done automatically for you, so you can focus on what truly matters and do the work that you’re meant to do."
That's where Dayforce comes in, with its comprehensive, AI-powered people platform designed to help organisations unlock their full potential and work with confidence.
To find out more about how Dayforce can transform your workforce management, please visit here.
Do the work you’re meant to do with Dayforce. Learn more at dayforce.com/asia.