Building Trustworthy AI: Harrison Enofe Obamwonyi Shares Insights on Data Science, Deployment, and Real-World Impact

by Adetunji Tobi
November 11, 2024
in IndustryINFLUENCERS
Meet Harrison Enofe Obamwonyi

In today’s rapidly evolving digital economy, data scientists are increasingly being called upon to balance technical excellence with real-world impact, stakeholder trust, and long-term business value.

In an exclusive interview, Harrison Enofe Obamwonyi, Senior Data Scientist, opens up about his approach to tackling ambiguous project requirements, integrating models into production, and ensuring sustainable value beyond accuracy metrics.

From navigating stakeholder disagreements with empathy to deploying real-time fraud detection systems and championing best practices in data science teams, Harrison highlights not just the technical rigor but also the collaborative mindset required to make data science solutions truly impactful. His reflections shed light on how trust, explainability, and adaptability remain at the heart of delivering data-driven transformation.

How do you approach scoping a data science project when requirements are ambiguous or evolving?

Harrison Obamwonyi: When scoping a data science project with ambiguous or evolving requirements, I start by clarifying the underlying business objective through targeted questions, even if it’s initially vague. I explore multiple solution paths, conduct a quick data audit to validate feasibility, and break the project into phases with clear feedback loops. I document assumptions and risks early and maintain continuous communication with stakeholders to ensure alignment. This iterative, collaborative approach helps me deliver value incrementally while adapting as requirements evolve.

Describe a case where your model performed well in development but failed in production. What did you learn?

Harrison Obamwonyi: In one project, I developed a churn prediction model that performed well during development, with strong metrics such as a 0.82 AUC and good cross-validation results. However, once deployed, business teams noticed the predictions weren’t aligning with actual churn behaviour.

On investigation, we realized the model had learned patterns tied to data features that were stale or delayed in production, such as support ticket resolution times, which weren’t updated in real-time.

The lesson was twofold: first, always ensure feature parity between training and production environments, and second, involve engineering teams early to validate pipeline readiness. Since then, I’ve made it standard practice to test models in a staging environment using live data before full deployment.
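
To make the feature-parity point concrete, here is a minimal sketch of the kind of pre-deployment check this implies; the DataFrames, columns, and threshold are hypothetical, not taken from the project Harrison describes:

```python
import pandas as pd

def check_feature_parity(train_df: pd.DataFrame, staging_df: pd.DataFrame,
                         max_null_rate: float = 0.05) -> list[str]:
    """Flag features that are missing or unexpectedly sparse in staging."""
    issues = []
    for col in train_df.columns:
        if col not in staging_df.columns:
            issues.append(f"{col}: missing from the production pipeline")
            continue
        null_rate = staging_df[col].isna().mean()
        if null_rate > max_null_rate:
            issues.append(f"{col}: {null_rate:.1%} nulls in staging "
                          "(possible stale or delayed feed)")
    return issues
```

Running a check like this against live staging data would have surfaced the delayed support-ticket features before the churn model went out.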

How do you balance model performance with interpretability when working on high-stakes or regulated domains?

Harrison Obamwonyi: When working in high-stakes or regulated domains, I prioritize trust and accountability as much as performance. I start by aligning with stakeholders on the acceptable trade-offs and often opt for interpretable models like logistic regression or decision trees if the use case requires auditability or explanation to non-technical users. If a more complex model like XGBoost or a neural net significantly outperforms simpler ones, I use explainability tools like SHAP or LIME to surface feature importance and local explanations.

I also implement model monitoring and documentation frameworks to ensure ongoing transparency. The key is stakeholder trust: a slightly less accurate but explainable model is often more useful and usable in sensitive environments.
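
As an illustration of the SHAP workflow he mentions, here is a minimal sketch; the dataset is simulated stand-in data, not from any regulated project:

```python
import shap
import xgboost as xgb
from sklearn.datasets import make_classification

# Simulated stand-in for a regulated use case (e.g. credit scoring)
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = xgb.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y)

explainer = shap.TreeExplainer(model)   # exact explanations for tree ensembles
shap_values = explainer.shap_values(X)

shap.summary_plot(shap_values, X)       # global: which features drive predictions
# Local: why the model scored one specific case the way it did
shap.force_plot(explainer.expected_value, shap_values[0], X[0])
```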

How do you measure the long-term impact of your models or analytical solutions beyond just accuracy or precision?

Harrison Obamwonyi: Beyond traditional metrics like accuracy or precision, I measure long-term impact by focusing on business outcomes, model adoption, and sustained performance over time. This includes tracking KPIs such as revenue lift, cost reduction, or improved operational efficiency tied directly to model usage. I also monitor user engagement: whether stakeholders are acting on model insights, and whether the solution is embedded into decision-making workflows. In addition, I set up post-deployment monitoring to track model drift, data quality, and performance degradation over time. Finally, I often run A/B tests or backtesting to validate whether the model’s recommendations drive real-world improvements. The goal is to ensure the solution remains relevant, trusted, and valuable in the long run.
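
One common way to operationalize the drift tracking he describes is the Population Stability Index (PSI); a minimal sketch, where `baseline` and `current` stand in for one feature’s values at training time versus in production:

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index; > 0.2 is a common rule of thumb for drift."""
    edges = np.percentile(baseline, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf                  # cover out-of-range values
    b = np.histogram(baseline, edges)[0] / len(baseline)
    c = np.histogram(current, edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))
```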

What’s your experience integrating data science models into end-to-end systems or products?

Harrison Obamwonyi: I’ve worked closely with engineering teams to integrate data science models into end-to-end systems, from building APIs for real-time predictions to embedding models into internal tools or customer-facing products.

In one project, I developed a lead scoring model for a sales team and collaborated with backend engineers to deploy it via a REST API, ensuring the pipeline pulled fresh data daily and returned scores in milliseconds. I also handled versioning, monitoring, and retraining workflows using MLflow and Airflow.

A key part of my role was ensuring feature consistency between training and production, managing edge cases, and working with DevOps to maintain scalability and uptime. I see integration not as a hand-off, but as a joint effort to translate models into reliable, usable solutions that drive business value.
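
A minimal sketch of that kind of scoring endpoint, here using FastAPI (the article does not name the web framework, and the model file and lead features are hypothetical):

```python
import pickle

import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
with open("model.pkl", "rb") as f:       # hypothetical serialized lead-scoring model
    model = pickle.load(f)

class Lead(BaseModel):
    recency_days: int                    # hypothetical lead features
    page_views: int
    email_opens: int

@app.post("/score")
def score(lead: Lead) -> dict:
    X = pd.DataFrame([lead.dict()])      # one-row frame in training column order
    return {"score": float(model.predict_proba(X)[0, 1])}
```

In practice, the daily data pulls, versioning, and retraining Harrison mentions would sit around an endpoint like this in MLflow and Airflow jobs.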

What role did you play in deployment and maintenance?

Harrison Obamwonyi: As a Senior Data Scientist, I have played an active role in both deployment and ongoing maintenance of models. For deployment, I have worked alongside MLOps and engineering teams to containerize models using Docker, expose them via REST APIs, and ensure seamless integration into production systems. I also contributed to designing CI/CD pipelines for automated testing and model version control.

On the maintenance side, I set up monitoring dashboards to track model performance, input data drift, and latency in real-time.

I defined retraining strategies, whether time-based or performance-triggered, and ensured logging and alerting were in place for issues like prediction anomalies or missing features. Ultimately, I treat deployment as the beginning of a model’s lifecycle, not the end: ongoing reliability, trust, and adaptability are critical for long-term success.
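
A minimal sketch of a performance-triggered retraining rule of the kind described; the threshold and the stubbed helpers are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
AUC_FLOOR = 0.75  # hypothetical threshold for acceptable live performance

def evaluate_live_auc() -> float:
    """Stub: in practice, score the last week's labelled production data."""
    return 0.71

def trigger_retraining() -> None:
    """Stub: in practice, kick off an Airflow DAG or CI/CD retraining job."""
    logging.info("Retraining pipeline triggered")

def check_model_health() -> None:
    auc = evaluate_live_auc()
    if auc < AUC_FLOOR:
        logging.warning("Live AUC %.3f below floor %.2f", auc, AUC_FLOOR)
        trigger_retraining()
```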

Tell me about a time you disagreed with the direction of a project or a stakeholder request. How did you handle it?

Harrison Obamwonyi: In one project, a stakeholder requested a complex deep learning model to forecast sales, believing it would impress leadership. However, the data was sparse, and a simpler time series approach was more appropriate. I disagreed with the direction, but instead of outright rejecting the idea, I scheduled a working session to walk through the data constraints, modelling options, and potential risks of overengineering the solution. I then built quick prototypes of both models, the deep learning model and a simpler SARIMA model, and compared their performance and interpretability.

The SARIMA model performed comparably and was far easier to explain and deploy.

By backing my position with data and transparency, the stakeholder agreed to go with the simpler approach, which saved time and delivered value faster.

This experience reinforced the importance of collaboration, evidence-based discussion, and empathy in navigating disagreements.
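
For reference, the simpler baseline he describes can be fit in a few lines with statsmodels; the series and the (1,1,1)×(1,1,1,12) order here are illustrative, not from his project:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Simulated monthly sales with trend and yearly seasonality (stand-in data)
idx = pd.date_range("2020-01-31", periods=48, freq="M")
y = pd.Series(100 + np.arange(48) + 10 * np.sin(np.arange(48) * 2 * np.pi / 12),
              index=idx)

model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.summary())            # coefficients stakeholders can actually inspect
print(fit.forecast(steps=6))    # six months ahead
```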

What frameworks or tools do you use for experiment design and A/B testing, and how do you validate causal impact?

Harrison Obamwonyi: For experiment design and A/B testing, I typically use a combination of Python (pandas, statsmodels, scipy) and platform-specific tools like Optimizely, Google Optimize, or internal experimentation platforms. For designing the test, I focus on randomization, sample size calculation, and power analysis to ensure statistical validity from the start.

To validate causal impact, I use classical hypothesis testing (t-tests, chi-squared) for clean A/B setups; pre-post analysis with control groups to detect changes over time; difference-in-differences (DiD) when working with observational or rollout data; and causal inference tools like DoWhy or EconML for more complex scenarios.

I also check for randomization balance, control for confounders where needed, and monitor metrics beyond the primary KPI to avoid unintended consequences. Ultimately, I aim to ensure not just statistical significance, but practical significance and robustness in real-world settings.
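
A minimal sketch of the sample-size and significance steps, using statsmodels and scipy; the effect size and the simulated group data are illustrative:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

# Power analysis: sample size per arm to detect a small effect (d = 0.1)
n_per_arm = TTestIndPower().solve_power(effect_size=0.1, alpha=0.05, power=0.8)
print(f"Need ~{n_per_arm:.0f} users per arm")

# After the experiment: Welch's t-test on the two groups' metric values
rng = np.random.default_rng(42)
control = rng.normal(0.10, 0.05, 2000)     # simulated per-user conversion rates
treatment = rng.normal(0.11, 0.05, 2000)
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```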

Have you contributed to or led the development of data science best practices in a team setting? What impact did it have?

Harrison Obamwonyi: Yes, I’ve both contributed to and led the development of data science best practices within teams. In a previous role, I initiated the creation of a team-wide model development framework that standardized processes for data cleaning, feature engineering, validation, versioning, and documentation. We implemented code templates using Jupyter notebooks and modular Python scripts, along with Git for version control and MLflow for experiment tracking.

I also introduced peer review checklists, model cards for explainability, and a shared knowledge base for reusable components. This significantly improved collaboration, reproducibility, and onboarding time for new hires, while reducing bugs and inconsistencies in production. Over time, these practices became part of our team’s culture, and stakeholders gained more confidence in our models due to clearer documentation and consistent communication.
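
To give a flavour of the experiment-tracking convention mentioned, a minimal MLflow sketch; the experiment name, parameters, and metric values are illustrative:

```python
import mlflow

mlflow.set_experiment("churn-model")
with mlflow.start_run(run_name="xgb-baseline"):
    mlflow.log_params({"max_depth": 6, "learning_rate": 0.1})
    mlflow.log_metric("val_auc", 0.82)
    mlflow.log_artifact("model_card.md")  # hypothetical model card kept with the run
```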

How do you stay current with new methods, tools, or research in data science? Can you share an example where you applied something cutting-edge?

Harrison Obamwonyi: I stay current by combining structured learning with practical application. I regularly follow top conferences like NeurIPS, ICML, and KDD, and stay active on platforms like arXiv, Medium, and Twitter/X where researchers and practitioners share insights. I also take online courses or certifications when diving deeper into emerging topics. For example, I recently explored causal machine learning techniques using Microsoft’s EconML library.

I applied it in a marketing uplift modelling project to estimate the true incremental effect of a campaign, rather than just predicting conversion likelihood. Traditional models were overestimating impact due to selection bias, but using causal forests improved targeting and led to a 15% lift in ROI. This experience reinforced the value of staying updated and translating cutting-edge methods into real business value.
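
A minimal sketch of uplift estimation with EconML’s causal forest; the data here is simulated, not from the campaign he describes:

```python
import numpy as np
from econml.dml import CausalForestDML

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 5))                  # customer features
t = rng.binomial(1, 0.5, size=5000)             # campaign exposure (treatment)
y = 0.2 * t * X[:, 0] + rng.normal(size=5000)   # outcome with heterogeneous effect

est = CausalForestDML(discrete_treatment=True, random_state=0)
est.fit(y, t, X=X)
uplift = est.effect(X)                          # per-customer incremental effect
target = uplift > 0                             # target only positive-uplift customers
print(f"Would target {target.mean():.1%} of customers")
```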

What’s the most technically complex solution you’ve implemented, and what made it successful or not?

Harrison Obamwonyi: One of the most technically complex solutions I implemented was a real-time fraud detection system for a financial services platform.

The challenge involved building a streaming model that could detect suspicious transactions within milliseconds, using a combination of historical behaviour profiling, anomaly detection, and supervised learning.

We used a hybrid architecture with Apache Kafka for data streaming, Spark Structured Streaming for feature engineering in near real-time, and an ensemble of models (random forest + isolation forest) deployed via a RESTful API.

To optimize latency, I worked closely with engineers to prune features, implement batch scoring where possible, and move some logic to in-memory databases like Redis.

What made it successful was tight cross-functional collaboration, rigorous offline vs. online validation, and setting up real-time monitoring to catch false positives early.

One key learning, however, was the need to simplify over time: initially, the complexity created maintainability issues, so we later refactored parts of the system for better scalability and transparency. It taught me that technical brilliance needs to be balanced with operational simplicity.
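
A minimal sketch of the supervised-plus-anomaly ensemble idea, on simulated transaction data; the features and blending weights are illustrative, not the production system:

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(10_000, 8))        # simulated transaction features
y_train = rng.binomial(1, 0.02, size=10_000)  # known fraud labels (rare class)

rf = RandomForestClassifier(n_estimators=200, class_weight="balanced")
rf.fit(X_train, y_train)
iso = IsolationForest(contamination=0.02, random_state=1).fit(X_train)

def fraud_score(X: np.ndarray) -> np.ndarray:
    """Blend the supervised probability with a normalized anomaly score."""
    supervised = rf.predict_proba(X)[:, 1]
    anomaly = -iso.score_samples(X)            # higher = more anomalous
    anomaly = (anomaly - anomaly.min()) / (np.ptp(anomaly) + 1e-9)
    return 0.7 * supervised + 0.3 * anomaly    # illustrative weights
```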

Tags: Harrison Enofe Obamwonyi, Trustworthy AI
Adetunji Tobi

Tobi Adetunji is a Business Reporter with Techeconomy. Contact: adetunji.tobi@techeconomy.ng
