International Journal on Uncertainty, Fuzziness and Knowledge-based Systems
Vol. 10, Issue 5, Pages 571-588
2002
Abstract
Often a data holder, such as a hospital or bank, needs to share person-specific records in such a way that the identities of the individuals who are the subjects of the data cannot be determined. One way to achieve this is to have the released records adhere to k-anonymity, which means each released record has at least (k-1) other records in the release whose values are indistinct over those fields that appear in external data. So, k-anonymity provides privacy protection by guaranteeing that each released record will relate to at least k individuals even if the records are directly linked to external information. This paper provides a formal presentation of combining generalization and suppression to achieve k-anonymity. Generalization involves replacing (or recoding) a value with a less specific but semantically consistent value. Suppression involves not releasing a value at all. The Preferred Minimal Generalization Algorithm (MinGen), which is a theoretical algorithm presented herein, combines these techniques to provide k-anonymity protection with minimal distortion. The real-world algorithms Datafly and µ-Argus are compared to MinGen. Both Datafly and µ-Argus use heuristics to make approximations, and so, they do not always yield optimal results. It is shown that Datafly can over-distort data and µ-Argus can additionally fail to provide adequate protection.
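To make the abstract's definitions concrete, the following is a minimal sketch (not the paper's MinGen, Datafly, or µ-Argus algorithms) that checks whether a release satisfies k-anonymity over a chosen set of quasi-identifier fields and illustrates generalization (recoding a value to a less specific one) and suppression (not releasing a value). The field names, toy records, and ZIP-truncation hierarchy are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k records of the release."""
    counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(c >= k for c in counts.values())

def generalize_zip(zip_code, level):
    """Generalize a ZIP code by replacing its last `level` digits with '*'
    (a less specific but semantically consistent value)."""
    if level <= 0:
        return zip_code
    return zip_code[:-level] + "*" * level

SUPPRESSED = "*"  # suppression: the value is withheld entirely

# Toy release with two quasi-identifiers: ZIP code and birth year.
records = [
    {"zip": "02139", "birth_year": "1965", "diagnosis": "flu"},
    {"zip": "02138", "birth_year": "1965", "diagnosis": "cold"},
    {"zip": "02141", "birth_year": "1964", "diagnosis": "flu"},
    {"zip": "02142", "birth_year": "1964", "diagnosis": "asthma"},
]
qi = ["zip", "birth_year"]

print(is_k_anonymous(records, qi, k=2))  # False: every (zip, year) pair is unique

# Generalize the last ZIP digit ("0213*", "0214*"): each quasi-identifier
# combination now covers two records, so the release is 2-anonymous.
generalized = [dict(r, zip=generalize_zip(r["zip"], 1)) for r in records]
print(is_k_anonymous(generalized, qi, k=2))  # True
```

MinGen's aim, as the abstract states, is to choose among such recodings (and, where needed, suppressions) so that k-anonymity holds with minimal distortion of the released data.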
Citation
Sweeney, Latanya. "Achieving k-Anonymity Privacy Protection Using Generalization and Suppression." International Journal on Uncertainty, Fuzziness and Knowledge-based Systems 10.5 (2002): 571-588.