For this project, our client—a major global tech company—asked us to explore how AI could be embedded into their HR processes in meaningful, forward-thinking ways. Working in a team of six, I was responsible for generating multiple use cases for AI in HR—all of which were ultimately presented to the client.
In addition to ideating solutions, I was asked to develop a framework to “risk score” each use case. We had over 60 potential ideas but no system for evaluating them, so I built a new scoring model based on eight key criteria:
Ethical Concerns
Policy Alignment
Data Type
Costs/Financial
Human Input Required
Internal Acceptance
External Acceptance
Legal Considerations
Each use case was scored on a 1–5 scale, with 1 representing minimal to no risk and 5 signaling high risk. The model helped the client quickly identify which ideas were ready for action and which needed more review: use cases that scored a 5 were flagged for deeper assessment, while 1s were often treated as “no-brainer” improvements.
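For anyone curious what that looks like in practice, here is a rough sketch of how a model like this could be represented in code. The criteria and the 1–5 scale come straight from the list above; the roll-up rule (treating a use case as risky as its riskiest dimension) and the example scores are purely illustrative assumptions, not the actual numbers from the project.

```python
from dataclasses import dataclass, field

# The eight criteria from the scoring model above.
CRITERIA = [
    "Ethical Concerns",
    "Policy Alignment",
    "Data Type",
    "Costs/Financial",
    "Human Input Required",
    "Internal Acceptance",
    "External Acceptance",
    "Legal Considerations",
]

@dataclass
class UseCase:
    name: str
    scores: dict = field(default_factory=dict)  # criterion -> score from 1 (low risk) to 5 (high risk)

    def overall_risk(self) -> int:
        """Illustrative roll-up: a use case is only as safe as its riskiest dimension."""
        missing = [c for c in CRITERIA if c not in self.scores]
        if missing:
            raise ValueError(f"Missing scores for: {missing}")
        return max(self.scores[c] for c in CRITERIA)

    def triage(self) -> str:
        """Map the overall score to the kind of recommendation described above."""
        risk = self.overall_risk()
        if risk >= 5:
            return "Flag for deeper assessment"
        if risk == 1:
            return "No-brainer improvement"
        return "Needs further review"

# Hypothetical example: these scores are made up for illustration only.
interview_sentiment = UseCase(
    name="AI sentiment monitoring in interviews",
    scores={
        "Ethical Concerns": 5,
        "Policy Alignment": 3,
        "Data Type": 4,
        "Costs/Financial": 3,
        "Human Input Required": 2,
        "Internal Acceptance": 3,
        "External Acceptance": 4,
        "Legal Considerations": 5,
    },
)

print(interview_sentiment.overall_risk())  # 5
print(interview_sentiment.triage())        # Flag for deeper assessment
```

A max-based roll-up like this is deliberately conservative, since one high-risk dimension is usually enough to warrant a pause; a weighted average across the criteria would be an equally reasonable design choice.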
Client feedback?
"This is some of the best work we’ve ever seen from Accenture."
This was one of the most thought-provoking projects I’ve worked on, not just because of its technical complexity but also because of the ethical questions it raised.
One use case proposed using AI for sentiment monitoring during interviews, essentially trying to read a candidate’s emotions from facial expressions or tone of voice. From the start, I voiced strong concerns. Studies have shown that AI algorithms consistently struggle to interpret the emotions of people of color because of biases baked into their training data (links below). As a person of color, this wasn’t just theoretical; it was personal. Implementing a system like this would risk deepening the very disparities DEI policies were created to address.
While many companies are scaling back Diversity, Equity, and Inclusion initiatives, we must remain vigilant. The rollback of DEI doesn't erase the problems that led to its necessity in the first place. We can’t afford to build tools that quietly recreate old injustices under the guise of innovation.
Another use case suggested an AI tool to monitor employees in real time, identify mistakes, and generate personalized training modules (like our internal Tech Quotient, or TQ). While I see value in a system like this for new employees or training contexts, running it continuously in the background raises significant concerns: it would likely be expensive and intrusive, and it could erode trust. When people feel they’re being constantly watched, performance often declines rather than improves.
What struck me most throughout the project was how differently executives and everyday employees reacted when we discussed these ideas. There’s often a disconnect between top-down innovation and on-the-ground experience. As I led the risk-scoring effort, those conversations helped me form a more nuanced view of responsible AI.
I strongly believe that we need a standardized AI scorecard that clearly outlines risks for any product or service involving artificial intelligence. The two examples above represent just a fraction of what companies are exploring. Without transparency, oversight, and stronger regulation, we risk building systems that do more harm than good.