Work and Case Studies

Analysis

My research focuses on privacy-preserving machine learning, specifically property inference threats within federated networks. I'm interested in identifying vulnerabilities in model updates and data exchange, with the goal of helping build more secure global standards.

Profession

My background includes roles at leading tech firms, where I specialized in systems analysis and database management. Whether overseeing network updates or leading student organizations, I've always enjoyed the challenge of managing complex systems and people at the same time.

Context and Obstacles

The whole point of Federated Learning is privacy, yet the way nodes exchange gradients can actually open the door to new risks. My work centers on uncovering these specific threats and developing strategies to ensure that 'collaborative' doesn't have to mean 'vulnerable'.

I focused on property inference, specifically how information leaks during the weight-sharing process in distributed systems. I was able to show that private data can be extracted directly from model updates. This highlights a critical flaw in how we currently handle communication: our frameworks are more transparent than we think, and our encryption standards haven't yet caught up to these sophisticated threats.
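As a toy illustration of the principle (not the study's actual method), consider a single linear layer trained on one private example: the weight gradient is the outer product of the output error and the input, so anyone observing the shared update can recover the input up to a scale factor. A minimal sketch in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)             # a client's private input vector
w = rng.normal(size=(3, 8))        # linear model weights
y_true = np.array([1.0, 0.0, 0.0]) # target for this example

y_pred = w @ x                     # forward pass
err = y_pred - y_true              # output error (squared-error loss)
grad_w = np.outer(err, x)          # the gradient the client would share

# Any nonzero row of grad_w is a scaled copy of the private input,
# so an observer can reconstruct x without ever seeing the raw data.
x_recovered = grad_w[0] / err[0]
print(np.allclose(x_recovered, x))  # the input leaks through the update
```

Real attacks on deeper models are far more involved, but the core observation is the same: gradients are functions of the private data, and sharing them shares that data.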

Approach and Design

Protocol Security Review

Reviewing global training frameworks to find structural weaknesses where gradient updates could inadvertently reveal private client information through patterns in metadata and weights.

Flow Quality Assurance

A rigorous audit of how information moves from local devices to central servers, pinpointing where sensitive computational metadata could be exposed in transit.

Threat Monitoring

Using analytical models to quantify the magnitude of data leakage and to test how effectively inference attacks penetrate different protected and distributed dataset structures.
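One common way to quantify such leakage is to measure the accuracy of a simulated property-inference attack. The hypothetical sketch below (all names and thresholds are illustrative, not from the study) simulates model updates from clients whose local data either has or lacks a binary property, modeled here as a mean shift, and checks how well a simple threshold attack infers the property from the shared update alone:

```python
import numpy as np

rng = np.random.default_rng(1)

def client_update(has_property, n=100, dim=4):
    """Simulate one client's shared update (a stand-in for a
    gradient/weight delta): the mean of its local data, which the
    hidden property shifts and therefore leaks into."""
    shift = 0.5 if has_property else 0.0
    data = rng.normal(loc=shift, size=(n, dim))
    return data.mean(axis=0)

# Simulate 200 clients, half-ish with the property, and collect updates.
labels = rng.integers(0, 2, size=200).astype(bool)
updates = np.array([client_update(p) for p in labels])

# Threshold attack: predict "has property" when the update's mean is high.
preds = updates.mean(axis=1) > 0.25
accuracy = (preds == labels).mean()
print(f"property-inference attack accuracy: {accuracy:.2f}")
```

Attack accuracy well above the 50% chance baseline is direct evidence of leakage; repeating the measurement under different defenses (noise, clipping, secure aggregation) shows how much each one actually closes the channel.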

Discovering core points of data exposure across all training and distribution networks.

This project identified critical vulnerabilities in Federated Learning systems, establishing a framework for assessing the risks of inference exploits. The study examined data leaks at different stages of training, offering a basis for developing more secure collaborative models, and directly informed the design of stronger security layers that protect private user information from advanced modern threats.

Core Expertise

Privacy Computing

Risk Evaluation Lab

Cybersecurity Defense

Secure ML Engineering

Network Architecture

Database Management