👨🏻‍💻 About Me

Hi, I’m Thomas Ferraz. I’m a Research Scientist at NAVER LABS Europe on the LLM Agents team, and a PhD candidate at Université Grenoble Alpes, advised by Vassilina Nikoulina, Maxime Peyrard, and Eric Gaussier. My research explores how large language models can reason and plan more effectively by combining efficient neural computation with interpretable latent structures.
My main research contributions center on Efficient Learning for Language Models. Before working on LLM agents and reasoning, I contributed to Multilingual and Low-Resource NLP, including work on multilingual speech recognition, distillation, and cross-lingual modeling.
I previously completed a Master’s in Applied Mathematics & AI (MVA) at ENS Paris-Saclay and Institut Polytechnique de Paris, and an engineering degree at the University of São Paulo, where I graduated top of my class. I have also gained industry and research experience through internships at Meta, Amazon, Apple, and NAVER LABS Europe.

🔬 Research Interests

  • LLM Agents: Memory-augmented planning and reasoning for autonomous, multi-step agentic tasks.
  • Efficient LLMs: Sparse, modular, and adaptive architectures enabling scalable and continual learning.
  • Efficient Reasoning: Latent-space reasoning and cognitively inspired mechanisms for faster, cheaper, and more robust deliberation.
  • Interpretability: Neuro-symbolic methods and mechanistic analysis to expose and steer internal model computations.

📰 News

📝 Selected Publications

A selection of papers that reflect my main research focus and contributions.