Do Membership Inference Attacks Work on Large Language Models?
Paper: arXiv 2402.07841
These datasets form a benchmark for evaluating membership inference attack (MIA) methods, specifically for detecting pretraining data of large language models.
The datasets can be applied to any model trained on The Pile.
To load the dataset:
from datasets import load_dataset
dataset = load_dataset("iamgroot42/mimir", "pile_cc", split="ngram_7_0.2")
Sources: arxiv, dm_mathematics, github, hackernews, pile_cc, pubmed_central, wikipedia_(en), full_pile, c4, temporal_arxiv, temporal_wiki
Splits: ngram_7_0.2, ngram_13_0.2, ngram_13_0.8 (for most sources); none (for other sources)
Columns: member (str), nonmember (str), member_neighbors (List[str]), nonmember_neighbors (List[str])
For evaluating MIA methods on our datasets, visit our GitHub repository.
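Once a split is loaded, each record pairs a member text with a nonmember text, plus neighbor perturbations used by neighborhood-based attacks. A minimal sketch of consuming these records, using a hypothetical toy record in place of real data (the field names follow the column schema above):

```python
# Toy stand-in for one MIMIR record; real records come from load_dataset.
record = {
    "member": "text drawn from The Pile's training split",
    "nonmember": "text not seen during pretraining",
    "member_neighbors": ["perturbed member 1", "perturbed member 2"],
    "nonmember_neighbors": ["perturbed nonmember 1"],
}

def collect_pairs(records):
    """Flatten records into (text, label) pairs; label 1 = member, 0 = nonmember."""
    pairs = []
    for r in records:
        pairs.append((r["member"], 1))
        pairs.append((r["nonmember"], 0))
    return pairs

pairs = collect_pairs([record])
```

An MIA evaluation would then score each text with the attack statistic (e.g., model loss) and compare scores against the labels.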
If you find our codebase and datasets useful, please cite our work:
@inproceedings{duan2024membership,
title={Do Membership Inference Attacks Work on Large Language Models?},
author={Michael Duan and Anshuman Suri and Niloofar Mireshghallah and Sewon Min and Weijia Shi and Luke Zettlemoyer and Yulia Tsvetkov and Yejin Choi and David Evans and Hannaneh Hajishirzi},
year={2024},
booktitle={Conference on Language Modeling (COLM)},
}