Maryam Rezaee
AI Researcher

Selected Highlights

Preprint Manuscript

A Concept-Level Energy-Based Framework for Interpreting Black-Box LLM Responses

Tackles the critical interpretability challenge posed by closed-API LLMs. We propose a model-agnostic framework that uses an energy model to score prompt-response consistency; this energy signal then supervises a lightweight interpreter, distilling it into an efficient, standalone tool that explains LLM outputs by quantifying prompt influence, all without further API calls. A rough sketch of the energy-scoring idea is given below.

LLMs · Interpretability · XAI · EBMs · Black-Box
View Proof-of-Concept
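
A minimal sketch of the energy-scoring component, written in PyTorch. The `ConsistencyEnergy` module, its pooled-embedding inputs, and the margin-based contrastive loss are illustrative assumptions for this page, not the architecture or objective from the manuscript.

```python
import torch
import torch.nn as nn

class ConsistencyEnergy(nn.Module):
    """Scores a (prompt, response) pair with a scalar energy; lower energy
    means higher prompt-response consistency. The pooled-embedding inputs and
    the two-layer MLP head are illustrative choices only."""

    def __init__(self, embed_dim: int = 768, hidden_dim: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, prompt_emb: torch.Tensor, response_emb: torch.Tensor) -> torch.Tensor:
        # Concatenate pooled prompt and response embeddings and map them to a scalar energy.
        return self.head(torch.cat([prompt_emb, response_emb], dim=-1)).squeeze(-1)


def margin_energy_loss(energy_matched: torch.Tensor,
                       energy_mismatched: torch.Tensor,
                       margin: float = 1.0) -> torch.Tensor:
    # Push observed (prompt, response) pairs toward low energy and
    # shuffled / mismatched pairs toward high energy.
    return torch.clamp(margin + energy_matched - energy_mismatched, min=0.0).mean()
```

Once trained, a scorer of this kind can rank perturbed prompts by how much they change the response energy, which is one simple way to quantify prompt influence without further API calls.
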
Active Research Project

Concept-Based Interpretability for RAG Systems

As a Research Mentor in the Trustworthy and Generative ML Lab, I am investigating the use of Concept Bottleneck Models (CBMs) to explain the internal generation process of RAG systems. The work aims to improve the transparency and traceability of RAG outputs by extracting and tracing concepts with Concept Activation Vectors (CAVs) and ACE; a minimal CAV sketch is shown below.

LLMs · RAG · XAI · CBMs
See Details (Coming Soon)
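
As a rough illustration of the CAV step: a linear probe separates activations collected on concept examples from activations on random examples, and its normalized weight vector serves as the Concept Activation Vector. The function names, the scikit-learn probe, and the way activations are gathered are assumptions of this sketch, not the lab's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Fits a linear probe separating concept activations from random ones;
    the CAV is the (normalized) weight vector of that probe.

    Both inputs are (n_examples, hidden_dim) arrays of activations taken from
    some layer of the generator; how they are collected is left out here."""
    X = np.concatenate([concept_acts, random_acts], axis=0)
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    cav = probe.coef_.ravel()
    return cav / np.linalg.norm(cav)

def concept_sensitivity(gradient: np.ndarray, cav: np.ndarray) -> float:
    # Directional derivative of an output of interest along the CAV
    # (a TCAV-style sensitivity score).
    return float(np.dot(gradient, cav))
```
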
Theoretical Research Paper

A Neuro-Symbolic Architecture for Translating Literary Formulas into Coherent Narrative Generation

Proposes a “Director-Actor” architecture to address the “wandering plot” problem in LLM narrative generation. We operationalize literary theory into computational algorithms, modeling dramatic arcs via Signal Processing, Gated FSMs, and A* Search, and framing narrative as an Active Inference optimization task. A toy gated-FSM sketch follows below.

Neurosymbolic AI · Cognitive Modeling · Algorithms · Computational Narratology · Active Inference
View Paper
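
A toy gated finite-state machine over dramatic-arc stages, hinting at how a “Director” might hold or advance the plot based on a measured tension signal. The stage names, the single scalar tension feature, and the threshold gates are stand-ins for the paper's richer signal-processing formulation, not its actual algorithm.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class GatedArcFSM:
    """Gated FSM over dramatic-arc stages: the plot only advances when the
    current stage's gate fires on the measured tension signal."""
    stages: List[str] = field(default_factory=lambda: [
        "exposition", "rising_action", "climax", "falling_action", "resolution"])
    gates: Dict[str, Callable[[float], bool]] = field(default_factory=lambda: {
        "exposition": lambda t: t > 0.2,      # enough tension introduced
        "rising_action": lambda t: t > 0.7,   # near-peak tension reached
        "climax": lambda t: t < 0.6,          # tension starts to release
        "falling_action": lambda t: t < 0.3,  # tension largely resolved
    })
    index: int = 0

    @property
    def stage(self) -> str:
        return self.stages[self.index]

    def step(self, tension: float) -> str:
        # Advance only when the gate fires; otherwise the Director keeps the
        # Actor generating within the current stage.
        gate = self.gates.get(self.stage)
        if gate is not None and gate(tension) and self.index < len(self.stages) - 1:
            self.index += 1
        return self.stage
```
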
Technical Implementation

Neurosymbolic VQA Program Generator

An exploration of neurosymbolic VQA (Johnson et al., 2017) on the CLEVR dataset. The project implements and compares three distinct strategies for translating natural-language questions into executable symbolic programs: Supervised Learning (LSTM/Transformer), Reinforcement Learning (REINFORCE), and In-Context Learning (LLM). A toy program executor is sketched below.

Neurosymbolic AI · VQA · Program Synthesis · Reinforcement Learning · In-Context Learning · LLMs
View on GitHub
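
To make the target representation concrete, here is a toy executor for linearised CLEVR-style programs; each generator above (LSTM/Transformer, REINFORCE, or LLM) is trained or prompted to emit such program sequences. The tiny scene, the handful of modules, and the dict-based program format are simplifications of the actual CLEVR DSL (Johnson et al., 2017), not the project's implementation.

```python
from typing import Any, Dict, List

# Toy CLEVR-style scene: each object is a dict of attributes.
SCENE: List[Dict[str, Any]] = [
    {"color": "red", "shape": "cube", "size": "large"},
    {"color": "blue", "shape": "sphere", "size": "small"},
    {"color": "red", "shape": "sphere", "size": "small"},
]

# A few primitive modules; the real CLEVR DSL has many more.
MODULES = {
    "scene": lambda scene, _arg, objs: list(scene),
    "filter_color": lambda scene, arg, objs: [o for o in objs if o["color"] == arg],
    "filter_shape": lambda scene, arg, objs: [o for o in objs if o["shape"] == arg],
    "count": lambda scene, _arg, objs: len(objs),
}

def execute(program: List[Dict[str, Any]], scene: List[Dict[str, Any]]) -> Any:
    # Runs a linearised symbolic program (the generators' target output)
    # against a scene, threading the intermediate object set through each step.
    state: Any = None
    for step in program:
        fn = MODULES[step["function"]]
        state = fn(scene, step.get("arg"), state)
    return state

# "How many red objects are there?" -> a generated program might look like:
program = [
    {"function": "scene"},
    {"function": "filter_color", "arg": "red"},
    {"function": "count"},
]
print(execute(program, SCENE))  # -> 2
```
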

Core Technologies

Python
PyTorch
TensorFlow
NumPy
Scikit-learn
Hugging Face
Pandas
MATLAB
R
PostgreSQL
Git
Bash
VSCode
HTML5
CSS3
JavaScript
RISC-V
LaTeX
Figma
Illustrator
Photoshop
InDesign
Premiere Pro
After Effects
Audition

Current Focus

M.Sc. in Computer Science

Sharif University of Technology

Master’s Thesis

Currently writing my Master’s thesis, “Interpretability in Generative Models: Investigating the Mechanisms Behind Output Generation in Large Language Models.”

Research Exploration

Conducting independent research into Neurosymbolic Architectures and LLM Reasoning, with a focus on integrating principles from Cognitive Science to build systems that reason in more human-like ways.

System Status: Expanding Archives

Detailed pages for About, Research, Experience, Projects, and Talks are currently under active development. For now, please refer to the Curriculum Vitae for an overview.