news

Apr. 2026 I’ll be presenting our working paper “Text as the Richest Preference Signal” at the ICLR 2026 Workshop on AI for Mechanism Design :speech_balloon:
Jan. 2026 Our paper on how verifier imperfection affects test-time scaling was accepted to ICLR 2026! :confetti_ball:
May 2025 I’ll be speaking at the AAPOR idea group on “Using Multiple Data Sources for AI Alignment: Bridging Survey Research and Machine Learning” in St. Louis, Missouri.
Jan. 2025 I’m thrilled to be spending the next 6 months as a Visiting Scholar at Harvard University, hosted by Rediet Abebe and Sham Kakade.
Sep. 2024 Our paper on “Evaluating language models as risk scores” was accepted to NeurIPS 2024 :bookmark:
Jan. 2024 Our paper “Unprocessing Seven Years of Algorithmic Fairness” was accepted as an oral at ICLR 2024 (notable top 5%) :tada:
Jan. 2023 Our paper “FairGBM: Gradient Boosting Models with Fairness Constraints” was accepted at ICLR 2023!
Oct. 2022 I started my PhD at the Max Planck Institute for Intelligent Systems, in Tübingen, Germany.
Dec. 2021 Our paper “Promoting Fairness through Hyperparameter Optimization” was presented at IEEE ICDM 2021! :balance_scale:
Aug. 2021 Our paper “TimeSHAP: Explaining Recurrent Models through Sequence Perturbations” was presented at ACM KDD 2021! :stopwatch:
Aug. 2020 I joined Feedzai as a Research Scientist in the FATE (Fairness, Accountability, Transparency, and Ethics in AI) department.
Jul. 2020 I finished my MSc in Computer Science at the University of Porto with a top 1% GPA :mortar_board: