Working with Generative AI in enterprise environments is one thing.
Building and shipping it for a real family business, where real people rely on the output, is something else entirely.
At MedicLabs, our medical analysis platform, I wasn’t just managing delivery.
I was also responsible for designing, developing, and integrating the GenAI feature myself.
That changed how I think about GenAI more than any corporate project ever did.
The Problem We Were Trying to Solve
Patients receive blood test results every day.
Most of them:
- Don't understand the medical terms
- Misinterpret values
- Panic unnecessarily
- Or, worse, ignore important signals
Doctors are busy.
Lab reports are technical.
And patients are left somewhere in between.
The goal was not to replace doctors.
It was to translate medical data into understandable, responsible guidance — safely.
Designing for Responsibility First
From the beginning, I treated this as a medical-support system, not a generic AI chatbot.
Key principles guided the design:
- No diagnosis claims
- No absolute medical decisions
- Clear explanations of what values mean
- Suggestions framed as guidance, not conclusions
- Strong emphasis on consulting a physician when needed
Trust mattered more than intelligence.
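To give a sense of how principles like these can be enforced rather than just documented, here is a minimal sketch of baking them into a fixed system prompt that every request inherits. The prompt wording and the function names are hypothetical illustrations, not the production code:

```python
# Hypothetical sketch: encoding the responsibility principles as a
# fixed system prompt attached to every request. The wording is
# illustrative, not MedicLabs' actual prompt.

SAFETY_SYSTEM_PROMPT = """\
You explain blood test results to patients in plain language.
Rules you must always follow:
- Never state or imply a diagnosis.
- Never present a medical decision as absolute or final.
- Explain what each value measures and what its range means.
- Frame every suggestion as general guidance, not a conclusion.
- Whenever a value is outside its reference range, advise the
  patient to discuss it with a physician.
"""

def build_messages(report_text: str) -> list[dict]:
    """Attach the non-negotiable safety prompt to every request."""
    return [
        {"role": "system", "content": SAFETY_SYSTEM_PROMPT},
        {"role": "user", "content": report_text},
    ]
```

Putting the rules in a fixed system message, rather than per-request prompts, means no code path can quietly skip them.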
Doing the Development Work Myself
I personally handled:
- Data structure design for lab results (sketched below)
- Prompt engineering for medical context
- Multilingual output handling (Arabic, English, French)
- Safety constraints and phrasing control
- Integration into the existing web platform
- UI flow to avoid overwhelming the patient
This was not a plug-and-play solution.
It required iteration, testing, and constant refinement.
Every small wording change mattered.
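To make the data-structure work concrete, here is a minimal sketch of what a structured lab-result record could look like. The field names, the `Language` enum, and the range logic are assumptions for illustration, not the actual MedicLabs schema:

```python
# Hypothetical sketch of a structured lab-result record; field names
# are illustrative assumptions, not the production schema.
from dataclasses import dataclass
from enum import Enum

class Language(Enum):
    ARABIC = "ar"
    ENGLISH = "en"
    FRENCH = "fr"

@dataclass
class LabValue:
    analyte: str      # e.g. "Hemoglobin"
    value: float      # measured result
    unit: str         # e.g. "g/dL"
    ref_low: float    # lower bound of the reference range
    ref_high: float   # upper bound of the reference range

    @property
    def in_range(self) -> bool:
        return self.ref_low <= self.value <= self.ref_high

@dataclass
class LabReport:
    patient_language: Language   # drives Arabic/English/French output
    values: list[LabValue]
```

Keeping the reference-range check in the data model rather than in the prompt makes "within range" a fact the model receives, not a judgment it makes.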
What the System Actually Does
When a patient uploads or views blood analysis results, the system:
- Explains each value in simple language
- Highlights what is within normal range and what isn't
- Provides general lifestyle and health advice
- Suggests potential follow-up tests based on patterns
- Clearly states limitations and when to consult a doctor
No fear-based messaging.
No false certainty.
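As a rough sketch of that flow (again with hypothetical names; these are not the production functions), the deterministic steps, range classification and the mandatory limitations notice, sit in ordinary code so generation can never skip or soften them:

```python
# Hypothetical sketch of the explanation flow. The deterministic steps
# (range classification, mandatory limitations notice) live in code;
# only the plain-language explanation itself would come from the model.

LIMITATIONS_NOTICE = (
    "This explanation is general information, not a diagnosis. "
    "Please discuss any out-of-range value with your doctor."
)

def classify(value: float, ref_low: float, ref_high: float) -> str:
    """Deterministic range check, never delegated to the model."""
    if value < ref_low:
        return "below range"
    if value > ref_high:
        return "above range"
    return "within range"

def assemble_patient_view(rows: list[tuple[str, float, float, float]]) -> str:
    """rows: (analyte, value, ref_low, ref_high) tuples."""
    lines = []
    for analyte, value, low, high in rows:
        status = classify(value, low, high)
        # The model explains each line *after* the status is fixed,
        # so it cannot overstate certainty about the numbers.
        lines.append(f"{analyte}: {value} ({status})")
    lines.append(LIMITATIONS_NOTICE)
    return "\n".join(lines)

print(assemble_patient_view([("Hemoglobin", 11.2, 13.0, 17.0)]))
```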
How This Helped Patients
The most immediate impact was clarity.
Patients:
- Understood their results without panic
- Felt more confident discussing results with doctors
- Became more proactive about their health
- Returned to the platform to track changes over time
Instead of raw numbers, they received context.
How This Helped the Business
From a business perspective, the impact was equally clear.
The AI-driven insights:
- Increased engagement time on the platform
- Reduced repetitive inquiries to lab staff
- Encouraged patients to perform recommended follow-up tests
- Improved trust in the MedicLabs brand
- Turned a static report into a living health tool
This was not aggressive upselling.
It was relevant, medically justified guidance.
When done responsibly, value creation benefits both sides.
The Hardest Part Was Not the Technology
The hardest part was balancing:
- Helpfulness vs safety
- Clarity vs overconfidence
- Automation vs responsibility
Every GenAI decision had ethical weight.
This is where being both the developer and the Technical Project Manager mattered.
I could control not just what was built, but how and why.
What This Project Changed for Me
This project reinforced something I now strongly believe:
GenAI is not impressive because it can generate text.
It’s impressive when it:
- Reduces confusion
- Improves decision quality
- Respects domain boundaries
- And creates sustainable value
Especially in healthcare, responsibility is not optional.
Closing Thought
Building GenAI features for real users — especially in sensitive domains — forces a different level of discipline.
This wasn’t a demo.
It wasn’t a slide deck.
It was a system people actually use.
And owning both the technical implementation and the delivery made one thing very clear to me:
The future of GenAI belongs to teams who can build responsibly, not just quickly.