Hajar Hakkoum | Machine learning | Best Scholar Award

👨‍🏫 Profile Summary

In my current role as a Postdoctoral Researcher at INRAE in Versailles, France, I specialize in plant cell cycle dynamics through image analysis and compressed fluorescence acquisition. Previously, as a Software Engineer at IS Technologies in Courbevoie, France, I built robust server-side components, integrated AI APIs, and enhanced search engine recommendation systems. My Ph.D. in Machine Learning Interpretability from ENSIAS, UM5, Rabat, involved groundbreaking research in medical AI, including mentoring fellow researchers and publishing in peer-reviewed journals. Proficient in Python, data mining, and deep learning, I also bring strong scientific writing and presentation skills. Fluent in Arabic, English, French, Spanish, and German, I hold a C1 IELTS certification and received the Best Poster Award at the 15th International Conference on Health Informatics in 2022. Beyond my professional endeavors, I nurture a keen interest in languages, reading, and running. My educational background includes a Software Engineering Degree (Web & Mobile) from ENSIAS, UM5, Rabat, and Engineering Preparatory Classes at CPGE Salmane Al Faressi in Salé, Morocco.

🌍 Professional Profiles

📚 Education:

Software Engineering Degree (Web & Mobile): ENSIAS, UM5, Rabat, MAR (2016–2019)
Engineering Preparatory Classes: CPGE Salmane Al Faressi, Salé, MAR (2014–2016)

🔍 Professional Experience:

Postdoctoral Researcher: INRAE, Versailles, FR (March 2024 – Present)
  • Conducting image analysis of plant cell cycle dynamics.
  • Implementing compressed fluorescence acquisition techniques.

Software Engineer: IS Technologies, Courbevoie, FR (October 2022 – February 2024)
  • Developing robust server-side components using ASP.NET and PostgreSQL.
  • Analyzing and integrating AI APIs for text translation, camera filters, and ChatGPT.
  • Enhancing search engine recommendation systems and assessing employee/user satisfaction using Python.

PhD in Machine Learning Interpretability in Medicine: ENSIAS, UM5, Rabat, MAR (January 2020 – April 2023)
  • Conducting a systematic literature review on interpretability techniques in medicine.
  • Quantitatively evaluating the interpretability of ML black-box models in medicine.
  • Assessing the impact of categorical feature encoding on ML interpretability techniques in medicine.
  • Guiding two new PhD students in research projects on ML interpretability in biodiversity and cybersecurity.
  • Publishing research papers in reputable peer-reviewed journals and conferences emphasizing the significance of interpretability in medical AI.
  • Contributing to the peer-review process in the medical AI domain as a reviewer for the Scientific African journal (Q2, IF: 2.9).

PhD Internship (ERASMUS): Facultad de Informática, Murcia, ES (January – June 2022)
  • Investigating the impact of categorical data on interpretability techniques.
  • Participating in interdisciplinary discussions to bridge the gap between ML and domain experts' needs.

Projects and Internships:
  • Final Degree Project (Research Initiation): Interpretability of ANNs for breast cancer diagnosis (Python).
  • Third Year Project: Handwritten digits and French numbers image recognition using CNNs and collected images (Python).
  • Second Year Internship: ChatBot development for banks' Q/A (.NET).
  • First Year Internship: Checks amounts validation for banks (C#, Azure APIs).

šŸ† Certificates:

IELTS: C1 (8.5 Reading & Listening, 7.5 Speaking, 6.5 Writing)
Best Poster Award: 15th International Conference on Health Informatics (2022)

🎯 Interests:

Languages: “A different language is a different vision of life.”
Reading: “A reader lives a thousand lives before he dies.”
Running: “Exercise is a tribute to the heart.”

 

📚 Top Noted Publications

  1. “Interpretability in the medical field: A systematic mapping and review study” (2022, Applied Soft Computing):
    • A comprehensive systematic mapping and review of interpretability research in the medical field, summarizing existing work and identifying trends and gaps in the literature.

  2. “Assessing and comparing interpretability techniques for artificial neural networks breast cancer classification” (2021, Computer Methods in Biomechanics and Biomedical Engineering):
    • Assesses and compares interpretability techniques applied to artificial neural networks for breast cancer classification, examining the methods used to make these complex models interpretable.

  3. “Ensemble blood glucose prediction in diabetes mellitus: A review” (2022, Computers in Biology and Medicine):
    • A review of ensemble methods for predicting blood glucose levels in diabetes mellitus, exploring the techniques employed to aggregate predictions for improved accuracy.

  4. “Artificial neural networks interpretation using LIME for breast cancer diagnosis” (2020, Trends and Innovations in Information Systems and Technologies):
    • Focuses on interpreting artificial neural networks for breast cancer diagnosis using the Local Interpretable Model-agnostic Explanations (LIME) technique.

  5. “A Systematic Map of Interpretability in Medicine” (2022, HEALTHINF):
    • Provides a systematic map of interpretability in medicine, outlining the landscape of interpretability techniques, their applications, and open challenges in the medical field.

  6. “Global and local interpretability techniques of supervised machine learning black box models for numerical medical data” (2024, Engineering Applications of Artificial Intelligence):
    • Explores both global and local interpretability techniques applied to supervised machine learning models on numerical medical data, with an emphasis on making black-box models more understandable and transparent.

  7. “Evaluating Interpretability of Multilayer Perceptron and Support Vector Machines for Breast Cancer Classification” (2022, IEEE/ACS 19th International Conference on Computer Systems and…):
    • Evaluates the interpretability of two machine learning models, the Multilayer Perceptron and Support Vector Machines, in the context of breast cancer classification.