
Building the Software Layer for Workplace Mental Health

2026/02/11 01:24
9 min read

Adam Miribyan has spent nearly a decade building software for EAP and wellness providers. He currently leads a development team of 20 and oversees AI integration into a platform serving tens of thousands of members. We spoke with him about deploying AI when the stakes involve someone’s wellbeing, how he thinks about crisis detection systems, and what acquirers miss when evaluating mental health technology platforms. 

Adam, for readers unfamiliar with this space, what are Employee Assistance Programs, and why has software become so central to how they deliver mental health support at scale? 

An Employee Assistance Program (EAP) is a voluntary, confidential, employer-sponsored benefit designed to help employees and their family members address personal or work-related issues that might impact performance or well-being. EAPs often include counseling for mental health, financial and legal guidance, or work-life balance resources.

Software has become the backbone of modern EAPs. It delivers content and resources created by licensed clinicians to an unlimited number of people facing similar struggles. Machine learning, guided by clinical experts, makes the member experience more personalized. Members and counselors can communicate asynchronously in secure, private spaces. It also simplifies real-time and in-person appointment scheduling and removes access barriers. Cloud computing and enterprise infrastructure scale these services to millions. Above all, encryption and strict compliance protocols safeguard personal health data.

You’ve spent nearly a decade building technology for EAP and wellness providers. How did you end up specializing in this particular intersection of software and mental health? 

I’ve always been fascinated by medicine and technology, and I loved studying economics in high school. I taught myself programming when I was 13 and later started freelancing as a web developer. During my second year in medical school, I dropped out to pursue a degree in economics and technology instead. 

Back in 2013, one of my clients was a company called CipherHealth, where I was building software for healthcare providers. That intersection of healthcare and software felt like where I could make the biggest difference. 

At the time I was living in Prague, in my early twenties, trying to figure out life one problem at a time — which included battling depression. That experience deepened my empathy for others facing mental health challenges, and drew me toward creating software that could provide real support. 

Your team at Curalinc Healthcare has started integrating AI into a platform that serves tens of thousands of members. What does AI actually do in an EAP context, and how do you approach deployment when the stakes involve someone’s mental health? 

Given the sensitivity of mental health, we take a careful approach to rollout. Every AI feature goes through clinical review, we monitor closely after launch, and we default to caution. 

We’re not trying to build AI that “replaces” the humans who provide mental health support. The clinician remains in the driver’s seat, while AI enhances the process. Whether a member wants to use AI is entirely up to them — we’ve put in considerable technical effort to ensure it’s easy for someone to opt out of AI completely in their EAP platform. 
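
As a rough illustration of what that gating can look like, here is a minimal sketch in which every AI-powered path checks a single server-side opt-out flag. The class and function names are hypothetical, not Curalinc's actual schema.

```python
from dataclasses import dataclass

# Hypothetical member preference record; names are illustrative only.
@dataclass
class MemberPreferences:
    member_id: str
    ai_opt_out: bool = False  # member can switch off AI features entirely

def ai_features_enabled(prefs: MemberPreferences) -> bool:
    """Single server-side check that every AI-powered code path goes through."""
    return not prefs.ai_opt_out

def route_to_standard_support(question: str) -> str:
    # Non-AI experience: clinician-curated resources and human routing.
    return f"Here are clinician-curated resources related to: {question}"

def answer_with_ai(question: str) -> str:
    # Placeholder for the AI-assisted answer path.
    return f"[AI-assisted answer for: {question}]"

def handle_member_question(prefs: MemberPreferences, question: str) -> str:
    if not ai_features_enabled(prefs):
        return route_to_standard_support(question)
    return answer_with_ai(question)
```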

For example, AI can answer a member’s question, but that answer will always be grounded in data produced or curated by our team of licensed clinicians. AI can analyze trends in member behavior to aid the clinician, but it will never prescribe care itself. 
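
One common way to implement that kind of grounding is to retrieve from a clinician-curated corpus and answer only from what was retrieved. The sketch below shows the general pattern under that assumption, with a naive keyword retriever standing in for a real semantic one; the documents and function names are placeholders.

```python
# Clinician-curated corpus; entries here are placeholders.
CLINICIAN_CURATED_DOCS = [
    {"id": "sleep-101", "text": "sleep hygiene guidance written and reviewed by licensed clinicians"},
    {"id": "work-stress", "text": "evidence based coping strategies for workplace stress and burnout"},
]

def retrieve(query: str, docs=CLINICIAN_CURATED_DOCS, k: int = 2) -> list:
    """Naive keyword-overlap retrieval; a real system would use semantic search."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(d["text"].lower().split())), d) for d in docs]
    return [d for score, d in sorted(scored, key=lambda pair: -pair[0]) if score > 0][:k]

def grounded_answer(query: str) -> str:
    sources = retrieve(query)
    if not sources:
        # Nothing curated to ground on: defer to a human instead of free-generating.
        return "Let me connect you with a counselor who can help with this."
    context = "\n".join(f'{d["id"]}: {d["text"]}' for d in sources)
    # In production the curated context would constrain the model's answer;
    # here we simply return the material with its source ids.
    return "Based on our clinicians' guidance:\n" + context

print(grounded_answer("how do I deal with stress at work"))
```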

Can you walk us through a specific technical challenge your team solved that improved how members access care or how clinicians deliver it? 

A recent challenge I’m proud of is building real-time crisis detection in our member portal. When someone is struggling with suicidal ideation, they often don’t call — instead they search. They might type “I don’t want to be here anymore” into our search bar, and the challenge was detecting the intent at that moment and intervening appropriately. 

We built a semantic risk detection layer that runs parallel to every search query. It was important that it wasn’t just keyword matching. The language around suicide is full of euphemisms. Phrases like “I just want the pain to stop” don’t contain obvious crisis words, but they indicate serious distress. We decided to favor over-detection, because a false positive means someone gets offered support they didn’t need, while a false negative means we miss someone in crisis. 
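
A minimal sketch of that kind of semantic screening uses embedding similarity against exemplar phrases rather than keyword matching. The model choice, exemplars, and threshold below are illustrative assumptions, not the production classifier.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

# Exemplars deliberately include euphemistic phrasing, not just explicit keywords.
CRISIS_EXEMPLARS = [
    "I don't want to be here anymore",
    "I just want the pain to stop",
    "everyone would be better off without me",
]
exemplar_vectors = model.encode(CRISIS_EXEMPLARS, convert_to_tensor=True)

# A deliberately low threshold: over-detection is the safer failure mode here.
RISK_THRESHOLD = 0.45

def is_high_risk(query: str) -> bool:
    """Runs alongside every search query and flags semantically similar language."""
    query_vector = model.encode(query, convert_to_tensor=True)
    similarity = util.cos_sim(query_vector, exemplar_vectors).max().item()
    return similarity >= RISK_THRESHOLD
```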

When high-risk language is detected, we suppress normal search results and immediately display a supportive safety message with pathways to human support. Beyond the engineering work, the clinical language also had to pass legal and ethical review. 
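
Building on the risk check sketched above, the search handler might look roughly like this: high-risk queries skip normal results and return a safety response with routes to a human. The response fields and message copy are placeholders; as noted, the real language goes through clinical, legal, and ethical review.

```python
def handle_search(query: str) -> dict:
    # is_high_risk() is the semantic check sketched above.
    if is_high_risk(query):
        # Suppress normal results and surface immediate pathways to a human.
        return {
            "type": "safety_intervention",
            "message": (
                "It sounds like you're going through a really hard time. "
                "You don't have to face this alone."
            ),
            "actions": [
                {"label": "Talk to a counselor now", "action": "start_crisis_chat"},
                {"label": "Call or text 988 (Suicide & Crisis Lifeline)", "action": "tel:988"},
            ],
        }
    return {"type": "search_results", "results": run_normal_search(query)}

def run_normal_search(query: str) -> list:
    return []  # placeholder for the regular search backend
```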

It functions as a digital safety net. Members who might never pick up the phone are being identified and guided to human care in real time. 

Curalinc was recently acquired by private equity. You’ve also done technical due diligence on other healthcare platform acquisitions. What do acquirers typically overlook when evaluating a mental health technology platform, and what should they be asking? 

Buyers are usually fine on the standard diligence areas: revenue, contracts, HIPAA, tech debt. But they sometimes miss the things that are specific to mental health.

First is clinical workflow. How does data actually flow between the platform and the clinicians? Can a therapist see how the member interacted with the app before the session or their case history from digital interactions? A lot of platforms look integrated, but the clinical team is often working blind. 

Second is configuration complexity. Mental health platforms serve hundreds of employers, each with its own eligibility rules, branding, platform preferences. That adds up fast. Buyers can end up inheriting thousands of edge cases that nobody’s documented. 
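
To make that concrete, here is an illustrative per-employer configuration record; the fields mirror the kinds of variation named above, not any actual vendor schema.

```python
from dataclasses import dataclass, field

@dataclass
class EmployerConfig:
    employer_id: str
    eligibility_rules: dict = field(default_factory=dict)     # e.g. dependents covered, tenure requirements
    branding: dict = field(default_factory=dict)              # logos, colors, co-branded copy
    platform_preferences: dict = field(default_factory=dict)  # enabled modules, SSO provider, locales

# Multiply a few hundred of these by years of one-off exceptions and the
# undocumented edge cases a buyer inherits become easy to picture.
configs = {
    "employer-a": EmployerConfig(
        employer_id="employer-a",
        eligibility_rules={"dependents_covered": True, "min_tenure_days": 0},
        branding={"logo": "employer-a.svg"},
        platform_preferences={"sso": "okta", "modules": ["counseling", "legal", "financial"]},
    ),
}
```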

Third is outcomes. Can you actually prove the platform improves clinical outcomes? Beyond logins or page visits — real PHQ-9 score changes, crisis interventions, return-to-work. If you can’t measure that, it limits growth. 
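
For instance, a minimal outcomes calculation over PHQ-9 assessments might look like the sketch below. The data shape is hypothetical, and the five-point threshold reflects a commonly used bar for clinically meaningful change.

```python
def phq9_outcomes(assessments: list) -> dict:
    """assessments: [{"member_id": str, "baseline": int, "followup": int}, ...]
    PHQ-9 scores range from 0 to 27; a drop of 5 or more points is commonly
    treated as clinically meaningful improvement."""
    improved = [a for a in assessments if a["baseline"] - a["followup"] >= 5]
    return {
        "members_assessed": len(assessments),
        "clinically_improved": len(improved),
        "improvement_rate": len(improved) / len(assessments) if assessments else 0.0,
    }

print(phq9_outcomes([
    {"member_id": "m1", "baseline": 16, "followup": 8},   # meaningful improvement
    {"member_id": "m2", "baseline": 12, "followup": 11},  # minimal change
]))
```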

Honestly, the question I’d ask in diligence is simple: Show me what happens when a member in crisis uses your platform at midnight. That one question tells you about safety architecture, clinical integration, and operational maturity all at once. 

You lead a development team of about 20 people while also serving as part-time CTO at a hospitality tech company. How do you context-switch between healthcare and hospitality, and does that cross-pollination inform how you build software? 

I work best when I have long stretches of uninterrupted time, so I try to avoid frequent context-switching. I block half the day each Monday and Wednesday to work on the hospitality tech company, which leaves me ample time for my work with Curalinc. 

The cross-pollination is real. Healthcare software demands high compliance and enterprise-grade architecture. That discipline directly informs how we build the hospitality platform. We get more scalable, secure foundations while maintaining a startup-like delivery pace. On the flip side, my experience with the hospitality startup helped me better understand team incentive structures and sharpen my ability to spot talent when hiring. It’s one of the reasons my team at Curalinc maintains a strong delivery pace despite the rapid growth. 

Many healthcare organizations struggle to move AI projects from pilot to production. What separates the implementations that actually reach patients from the ones that stall out? 

AI proof-of-concepts are so compelling that they’re easy to green-light. The reality is that if you have AI in your project, you get to benefit from the tailwind of easier leadership buy-in, market pressure and FOMO. But past a certain point you still do have to tackle all of the complexity of a healthcare software project. Using AI adds its own challenges with increased scrutiny around data governance and compliance, security requirements, infrastructure, and vendor agreements. So, in a way, it makes it really easy to start but harder to finish. 

Overall, that makes releasing AI projects harder, especially in healthcare. So the projects that do get released are released because of better product vision and deeper understanding of the value AI is creating. Teams that use AI to amplify their product are more successful in that sense. Conversely, by hyper-focusing on AI as your main value proposition, you lose track of what really has the potential to differentiate your product — especially if your AI feature only does what ChatGPT, Claude, or Gemini will offer natively in a month. OpenAI’s ChatGPT Health and Claude’s Apple Health integration only reinforce that this is where things are headed. 

Mental health data is particularly sensitive. How do you balance the potential of AI-driven insights against the privacy and trust concerns that come with this kind of information? 

When someone reaches out for mental health support, they’re trusting us with something real. That shapes how we build. 

We only collect what’s really necessary to help — not everything we could analyze, but what we should. 

The underlying technical foundation is non-negotiable: end-to-end encryption, de-identification protocols, strict role-based access controls, and comprehensive audit trails are table stakes. 
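
As a sketch of two of those pieces, role-based access control paired with an audit trail can be as simple as the following; the roles, permissions, and in-memory log are illustrative assumptions, not the actual policy or storage.

```python
from datetime import datetime, timezone

# Illustrative role-to-permission mapping.
ROLE_PERMISSIONS = {
    "clinician": {"read_case_notes", "write_case_notes"},
    "care_coordinator": {"read_case_summary"},
    "engineer": set(),  # no default access to member clinical data
}

audit_log = []  # in production this would be an append-only, tamper-evident store

def access(user_id: str, role: str, action: str, resource: str) -> bool:
    """Check role-based permission and record every attempt, allowed or not."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "role": role,
        "action": action,
        "resource": resource,
        "allowed": allowed,
    })
    return allowed
```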

But protection that people can’t see doesn’t build trust. So we’re designing our experience around felt control — giving members clear choices about what’s shared, what’s analyzed, and what stays private, as well as the ability to access their data, correct it, or delete it entirely. It’s front and center and we don’t bury it in settings. 
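
In code, that member-facing control surface might look something like the sketch below; the field and function names are hypothetical, chosen to mirror the choices described above (sharing, analysis, export, deletion).

```python
from dataclasses import dataclass

@dataclass
class PrivacyControls:
    member_id: str
    share_activity_with_counselor: bool = True  # share app activity with the assigned counselor
    allow_ai_analysis: bool = True               # member can switch this off entirely
    allow_deidentified_reporting: bool = True    # aggregate, de-identified reporting only

def export_member_data(member_id: str) -> dict:
    """Return everything stored about a member in a portable form (sketch)."""
    return {"member_id": member_id, "records": []}  # placeholder for real data stores

def delete_member_data(member_id: str) -> None:
    """Hard-delete the member's records across stores, then log the deletion event."""
    # Purge profile, messages, assessments, and analytics keyed to this member.
    pass
```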

The potential of AI in mental health only exists because of trust. Privacy is the foundation, not a constraint on innovation.  

What changes do you expect in how technology supports workplace mental health over the next few years, and what role will AI play in that shift? 

Workplace mental health support is headed toward more proactive and preventative care. Over the next few years, I expect tools to become more integrated into workplace systems and more personalized, while also putting greater emphasis on privacy and trust. 

AI will play a central role by detecting early risk patterns and helping clinicians respond sooner with targeted resources and interventions. Used well, AI can become part of the infrastructure that empowers human connection, creativity, and performance. Companies that treat mental health as a checkbox will fall behind, and those that treat it as core infrastructure will lead.  
