
As AI tools like ChatGPT grow in popularity for personal therapy and emotional support, the risks—particularly for young users—have been widely reported. Less discussed is the use of generative AI by employers to monitor employees’ mental health and offer emotional support at work.
Since the pandemic-driven shift to remote work, sectors from healthcare to HR and customer service have increasingly adopted AI systems that assess employees’ emotions, flag those in distress, and offer support.
This represents a significant leap beyond general chat tools or personal therapy apps. As researchers examining AI’s impact on workplace emotions and relationships, we are concerned with key questions: What are the implications of employers accessing your emotional data? Can AI truly deliver the emotional support employees need? And if the AI fails, who is accountable?
How the Workplace Differs
Many companies have begun offering automated counseling programs similar to personal therapy apps, with some documented benefits. Early studies show that in virtual, doctor-patient-style conversations, AI responses can make people feel more heard than human ones. One study even found AI chatbots to be “as empathic as, and sometimes more than, human therapists.”
This isn’t entirely surprising: AI provides constant attention and consistently supportive replies. It doesn’t interrupt, judge, or grow frustrated when concerns are repeated. For some employees—especially those facing stigmatized issues like mental health struggles or workplace conflicts—this predictability can feel safer than interacting with humans.
For other employees, however, these tools raise new concerns. A 2023 study found that many workers hesitated to join company mental health programs over confidentiality fears and stigma, worried that sharing personal information could harm their careers.
Other AI systems dig even deeper, monitoring employee communications in real time—through emails, Slack messages, and Zoom calls. This generates detailed profiles of emotional states, stress levels, and psychological vulnerabilities. Corporate systems store all this sensitive data, often with vague privacy safeguards that favor the employer’s interests.
AI Emotional Monitoring: Support or Surveillance?
Workplace Options, a global employee assistance provider, has teamed up with Wellbeing.ai to implement a platform that uses facial analytics to monitor 62 different emotional states. It produces well-being scores that companies can use to identify stress or morale issues, effectively integrating AI into highly sensitive emotional areas of work and blurring the line between support and surveillance.
Here, the same AI that helps employees feel heard also gives organizations unprecedented insight into workforce emotions. Companies can track burnout trends in specific departments, pinpoint employees at risk of leaving, and monitor emotional reactions to organizational changes.
In practice, tools like these turn emotional data into management intelligence, creating a real dilemma for many companies. Some progressive organizations enforce strict data governance, limiting access to anonymized trends rather than individual conversations, while others succumb to the temptation to use emotional insights in performance reviews and personnel decisions.
Continuous monitoring can help ensure no employee in distress is overlooked, but it can also make workers self-censor to avoid drawing attention. Research on workplace AI surveillance shows that employees experience higher stress and change their behavior when they know management can review their interactions. This monitoring undermines the sense of safety needed to seek help. Another study found that such systems heightened employee distress, due to privacy loss and fear of repercussions if the AI flagged them as stressed or burned out.
When Simulated Empathy Carries Real-World Impact
These findings are significant because the risks may be even greater in the workplace than in personal settings. AI lacks the nuanced judgment to differentiate between accepting someone as a person and endorsing harmful behavior. In a professional context, this means AI could unintentionally validate unethical practices or miss situations where human intervention is crucial.
AI systems can also make other errors. One study found that emotion-tracking tools disproportionately affected employees of color, trans and nonbinary individuals, and those with mental health conditions. Participants voiced serious concerns that AI might misinterpret moods, tone, or verbal cues because of biases around ethnicity, gender, and other characteristics built into these systems.
There’s also an issue of authenticity. Studies show that when people know they’re interacting with AI, they perceive the same empathetic responses as less genuine than if they came from a human. Yet some employees favor AI precisely because it isn’t human, appreciating the anonymity and freedom from social consequences it seems to offer, even if that anonymity is more apparent than real.
The technology also raises questions about human managers. If employees increasingly turn to AI for emotional support, what does that say about leadership? Some companies are using AI insights to train managers in emotional intelligence, effectively using the technology as a mirror to highlight gaps in human skills.
Next Steps
The discussion around AI-driven emotional support at work isn’t just about technology—it’s about the kind of workplaces people want. As these systems become more widespread, key questions arise: Should employers value genuine human connection over constant availability? How can personal privacy coexist with organizational insights? Is it possible to leverage AI’s empathetic abilities while maintaining the trust essential for strong workplace relationships?
The most effective approaches treat AI as a complement, not a replacement, for human empathy. By handling routine emotional tasks—late-night anxieties, pre-meeting jitters, or processing difficult feedback—AI frees managers to focus on deeper, more meaningful connections with their teams.
However, this demands thoughtful execution. Organizations that set clear ethical limits, enforce robust privacy safeguards, and define exactly how emotional data is used are better positioned to avoid the risks of these systems—especially when they acknowledge the moments where human judgment and genuine presence are essential.
Read the original article on: Tech Xplore
