This is the most probable match. Published at ESORICS (the European Symposium on Research in Computer Security), this paper introduces a security layer designed to protect machine learning models from being "stolen" or extracted by adversaries. It uses differential privacy to obfuscate responses to queries that fall near a model's decision boundary. You can find the full text through the official Springer link or IEEE Xplore.

2. "Black-box Discrete Prompt Learning" (BDPL)

A more recent 2023 paper, published in TMLR (Transactions on Machine Learning Research), uses the same acronym for Black-box Discrete Prompt Learning. This research focuses on optimizing discrete prompts for large language models (LLMs) without needing access to the model's internal weights or gradients. The archive would typically contain the Python scripts (such as pmi_ngram.py) and training datasets mentioned in the authors' official GitHub repository.
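To make the black-box setting concrete, here is a minimal sketch of the general idea behind gradient-free discrete prompt search: mutate prompt tokens and keep changes that improve a score obtained purely by querying the model. The `black_box_score` function and the hill-climbing loop are illustrative assumptions for this sketch, not the paper's actual algorithm.

```python
import random

# Hypothetical stand-in for a black-box LLM: we can only query it for a
# scalar score (e.g. validation accuracy), never its weights or gradients.
def black_box_score(prompt_tokens):
    # Toy objective: reward prompts containing certain "useful" tokens.
    good = {"review:", "sentiment", "answer:"}
    return sum(tok in good for tok in prompt_tokens)

def search_prompt(vocab, prompt_len=3, iters=200, seed=0):
    """Hill-climb over discrete prompt tokens using only black-box scores."""
    rng = random.Random(seed)
    prompt = [rng.choice(vocab) for _ in range(prompt_len)]
    best = black_box_score(prompt)
    for _ in range(iters):
        cand = prompt[:]
        cand[rng.randrange(prompt_len)] = rng.choice(vocab)  # mutate one slot
        score = black_box_score(cand)
        if score >= best:  # keep non-worsening mutations; no gradients needed
            prompt, best = cand, score
    return prompt, best

vocab = ["review:", "sentiment", "answer:", "the", "foo", "bar"]
prompt, score = search_prompt(vocab)
print(prompt, score)
```

The only interface to the model here is `black_box_score`, which mirrors the constraint in the paper: the optimizer sees query results, not internals.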