AI/ML · Sep 2025 – Nov 2025

Differential Privacy in LLMs

Privacy-Preserving Clinical AI with Differential Privacy

Python · GPT-2 · Opacus · LoRA · TensorFlow · Healthcare Data

Project Overview

Fine-tuned GPT-2 models for clinical note generation under both differentially private (DP) and non-DP training regimes. Used LoRA for parameter-efficient fine-tuning, combined with a multi-agent pipeline for privacy-aware clinical documentation.
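The core idea behind LoRA is to freeze the pretrained weight W and train only a low-rank update scaled by alpha/r, so y = Wx + (alpha/r)·B(Ax). A minimal pure-Python sketch of that forward pass (matrix shapes and function names here are illustrative, not the project's code):

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha=16, r=2):
    """LoRA forward pass: frozen base weight W (d_out x d_in) plus a
    trainable low-rank update B (d_out x r) @ A (r x d_in), scaled by alpha/r."""
    base = matvec(W, x)                 # frozen pretrained path
    delta = matvec(B, matvec(A, x))    # low-rank adapter path
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]
```

With B initialized to zero (the standard LoRA init), the adapter path contributes nothing at the start of training, so the model begins exactly at the pretrained weights.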

Technical Implementation

Implemented differential privacy with the Opacus library, which trains via DP-SGD to provide mathematical privacy guarantees. Used LoRA adapters for efficient fine-tuning while preserving model utility, and built a multi-agent inference pipeline for automated clinical documentation.
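Opacus automates the DP-SGD mechanism inside the optimizer: clip each per-example gradient to a fixed norm, sum, then add calibrated Gaussian noise. The mechanism itself can be sketched in plain Python (illustrative only; parameter names are generic, not Opacus's API):

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD aggregation step: clip each per-example gradient to
    clip_norm, sum, add Gaussian noise with sigma = noise_multiplier *
    clip_norm, and average over the batch."""
    rng = rng or random.Random(0)
    n = len(per_example_grads)
    summed = [0.0] * len(per_example_grads[0])
    for g in per_example_grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0  # clip, never amplify
        for i, v in enumerate(g):
            summed[i] += v * scale
    sigma = noise_multiplier * clip_norm
    noisy = [s + rng.gauss(0.0, sigma) for s in summed]
    return [v / n for v in noisy]
```

Clipping bounds any single example's influence on the update, and the noise masks what remains; Opacus additionally tracks the cumulative privacy budget (epsilon, delta) across steps.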

Key Features

  • Differentially private fine-tuning with Opacus for formal privacy guarantees
  • Non-DP baseline for utility comparison
  • LoRA adapters for parameter-efficient tuning
  • Multi-agent inference: automated agents for intake, SOAP note generation, medical coding, and safety auditing
  • PHI redaction: scrubbing of personal health information
  • Privacy evaluation: membership inference attacks (MIA) to measure leakage
  • Comprehensive metrics logging and visualization
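The PHI redaction step can be illustrated with a small regex-based scrubber that replaces matched identifiers with typed placeholder tokens. The patterns and tokens below are illustrative assumptions, not the project's actual redaction rules:

```python
import re

# Illustrative patterns for a few common identifier types.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"), "[DATE]"),
]

def scrub_phi(text):
    """Replace matched identifiers with placeholder tokens."""
    for pattern, token in PHI_PATTERNS:
        text = pattern.sub(token, text)
    return text
```

A real clinical scrubber would also handle names, addresses, and record numbers, typically with a trained NER model rather than regexes alone; this sketch only shows the substitution mechanics.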

Impact & Results

Demonstrated that model utility can be maintained while protecting sensitive healthcare data under differential privacy guarantees, by comparing DP and non-DP models on both generation quality and privacy leakage.
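A common MIA baseline for this kind of privacy evaluation is a loss-threshold attack: examples on which the model's loss is unusually low are guessed to be training members, and attack accuracy near 0.5 (chance) indicates little leakage, which is what DP training aims for. A minimal sketch with synthetic loss values (the attack variant and numbers are assumptions, not the project's reported results):

```python
def mia_predict(losses, threshold):
    """Loss-threshold membership inference: predict 'member' (True)
    whenever an example's loss falls below the threshold."""
    return [loss < threshold for loss in losses]

def attack_accuracy(member_losses, nonmember_losses, threshold):
    """Fraction of correct membership guesses over both groups.
    1.0 = total leakage; ~0.5 = attacker does no better than chance."""
    hits = sum(mia_predict(member_losses, threshold))
    hits += sum(not p for p in mia_predict(nonmember_losses, threshold))
    return hits / (len(member_losses) + len(nonmember_losses))
```

When member and non-member loss distributions overlap heavily, as DP training encourages, no threshold separates them well and accuracy collapses toward chance.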

Contributors & Team


Syed Ibad Ali

Lead Developer

NED University

LinkedIn

Want to discuss this project or work together?

Get in Touch