Aymen Sekhri
Ph.D. candidate in machine learning (cotutelle) at the University of Poitiers (France) and NTNU (Norway).
My research sits at the intersection of computer vision, augmented reality, and medical imaging. I develop learning-based methods that connect human visual perception with objective image quality assessment, with a focus on immersive AR experiences and clinically relevant imaging workflows.
Current PhD research
- AR visual quality assessment: building blind/no-reference quality metrics for augmented reality content.
- Perceptual modeling: designing lightweight transformer models guided by human ranking feedback.
- Reliable AI for imaging: improving interpretability and robustness for medical imaging tasks, including knee osteoarthritis severity grading.
Academic affiliations
- Université de Poitiers, XLIM Laboratory, France.
- Norwegian University of Science and Technology (NTNU), Colourlab / Department of Computer Science, Gjøvik, Norway.
Selected publications
ARaBIQA: A Novel Blind Image Quality Assessment Model for Augmented Reality
Introduces ARaBIQA, a blind image quality assessment method tailored to augmented reality content, with emphasis on distortions specific to immersive visual pipelines.
Lightweight Image Quality Prediction Guided by Perceptual Ranking Feedback
A ranking-guided training strategy for lightweight image quality prediction that integrates perceptual ordering feedback to improve model calibration and pairwise consistency.
Enhancing Content Representation for AR Image Quality Assessment Using Knowledge Distillation
This work studies knowledge distillation for AR image quality assessment, improving compact model representations and preserving perceptual sensitivity in no-reference quality prediction.
TransformAR: A Light-Weight Transformer-Based Metric for Augmented Reality Quality Assessment
A lightweight transformer-based approach for perceptual quality assessment in augmented reality, designed to reduce computational cost while preserving prediction performance on subjective AR quality datasets.
Shifting Focus: From Global Semantics to Local Prominent Features in Swin Transformers for Knee Osteoarthritis Severity Assessment
Conventional imaging diagnostics frequently encounter bottlenecks due to manual inspection, which can lead to delays and inconsistencies. Deep learning offers a pathway to automation and enhanced accuracy…
Do Digital Images Tell the Truth?
Since the advent of digital cameras, image editing tools have made it straightforward to manipulate content. Copy-move forgeries, where a region is duplicated and pasted within the same frame, are particularly challenging…
Research themes and contributions
Immersive media and AR quality
- Proposed AR-focused blind IQA approaches (including ARaBIQA and transformer-based lightweight metrics).
- Worked on model distillation strategies to improve efficiency while preserving perceptual performance.
Medical imaging AI
- Developed Swin Transformer-based systems for automated knee osteoarthritis assessment.
- Explored domain adaptation and localization-aware modeling in clinically oriented computer vision pipelines.
Collaboration
I am co-advised by Prof. Mohamed-Chaker Larabi and Prof. Seyed Ali Amirshahi. I welcome collaborations in AR/VR quality assessment, perceptual modeling, and applied machine learning for medical imaging.
Contact: aymen.sekhri@univ-poitiers.fr
