Jaydeep Borkar (जयदीप बोरकर)

jaijborkar at gmail dot com

I'm a Computer Science PhD student at Northeastern University where I'm fortunate to be advised by David A. Smith. Previously, I was an external research student at MIT-IBM Watson AI Lab advised by Pin-Yu Chen where I worked on adversarial machine learning. I have a Bachelor's degree in Computer Engineering from the University of Pune.

I'm also one of the founding organizers of the Trustworthy ML Initiative, which aims to lower the entry barriers into trustworthy machine learning. I especially love disseminating the work of students and junior researchers through @trustworthy_ml.

Twitter  /  Semantic Scholar  /  Google Scholar  /  LinkedIn  /  GitHub  /  Listening!

Research

I study privacy and security in language models. My current work focuses on problems such as training data extraction (memorization) and its potential privacy implications.


Please get in touch with me if you would like to collaborate on research or go mountain biking.

Papers

Recite, Reconstruct, Recollect: Memorization in LMs as a Multifaceted Phenomenon (preprint)
USVSN Sai Prashanth, Alvin Deng, Kyle O’Brien, Jyothir S V, Mohammad Aflah Khan, Jaydeep Borkar, Christopher A. Choquette-Choo, Jacob Ray Fuehne, Stella Biderman, Tracy Ke, Katherine Lee, and Naomi Saphra.

Mind the gap: Analyzing lacunae with transformer-based transcription
Jaydeep Borkar and David A. Smith
ICDAR 2024 Workshop on Computational Paleography

What can we learn from Data Leakage and Unlearning for Law?
Jaydeep Borkar
ICML 2023 Workshop on Generative AI + Law (GenLaw)
[poster] [preprint]

Simple Transparent Adversarial Examples
Jaydeep Borkar and Pin-Yu Chen
ICLR 2021 Workshop on Security and Safety in Machine Learning Systems
[poster] [preprint]
News

April 2024 Giving a guest lecture on privacy and security in LLMs for the CS 5100 Foundations of AI class at Northeastern.

June 2023 Stoked to present my work on memorization + law in LLMs at the first Generative AI + Law (GenLaw) workshop at ICML in Honolulu, Hawai'i.

March 2021 Simple Transparent Adversarial Examples paper (co-authored with Pin-Yu Chen) accepted to the ICLR 2021 Workshop on Security and Safety in Machine Learning Systems.

September 2020 Excited to be part of the team launching the Trustworthy ML Initiative.

For past news, please check this page.

Service

Organizing

The Trustworthy ML Initiative (together with Hima Lakkaraju, Sara Hooker, Sarah Tan, Subho Majumdar, Chhavi Yadav, Chirag Agarwal, Haohan Wang, and Marta Lemanczyk).

Program Committee

Workshop on Privacy in Natural Language Processing, ACL 2024.
Some fun stuff!

I enjoy (mountain) biking in Boston/Cambridge (Fells is my favorite), playing tennis, hanging out at bookstores and libraries, and Bollywood dancing in my free time.

Listening

I believe in the power of kind and empathetic listening, and I think that everyone deserves a good listener in their life. However, there are so many of us who don't have good listeners. If you want someone to listen to you, I'm just an email away. I'll try my best to listen to you.

Thanks to Jon Barron for the template!