
ANJANA is an open source framework for applying different anonymity techniques.


ANJANA


Anonymity as a key safeguard of personal data privacy:

ANJANA is a Python library for anonymizing sensitive data.

The following anonymity techniques are implemented, based on the Python library pyCANON:

  • k-anonymity.
  • (α,k)-anonymity.
  • ℓ-diversity.
  • Entropy ℓ-diversity.
  • Recursive (c,ℓ)-diversity.
  • t-closeness.
  • Basic β-likeness.
  • Enhanced β-likeness.
  • δ-disclosure privacy.

:bulb: Installation

First, we strongly recommend using a virtual environment. On Linux:

virtualenv .venv -p python3
source .venv/bin/activate

Using pip:

Install anjana (Linux and Windows):

pip install anjana

Using git:

Install the latest development version of anjana (Linux and Windows):

pip install git+https://github.com/IFCA-Advanced-Computing/anjana.git

:rocket: Getting started

To anonymize your data you need to provide:

  • The pandas dataframe with the data to be anonymized. Each column may correspond to an identifier, a quasi-identifier or a sensitive attribute.
  • The list with the names of the identifiers in the dataframe, in order to suppress them.
  • The list with the names of the quasi-identifiers in the dataframe.
  • The sensitive attribute (only one), required when applying techniques other than k-anonymity.
  • The level of anonymity to be applied, e.g. k (for k-anonymity), ℓ (for ℓ-diversity), t (for t-closeness), β (for basic or enhanced β-likeness), etc.
  • The maximum level of record suppression allowed, as a percentage (from 0 to 100).
  • A dictionary containing one dictionary per quasi-identifier with its hierarchy levels.

Example: apply k-anonymity, ℓ-diversity and t-closeness to the adult dataset with some predefined hierarchies:

import pandas as pd
import anjana
from anjana.anonymity import k_anonymity, l_diversity, t_closeness

# Read and process the data
data = pd.read_csv("adult.csv") 
data.columns = data.columns.str.strip()
cols = [
    "workclass",
    "education",
    "marital-status",
    "occupation",
    "sex",
    "native-country",
]
for col in cols:
    data[col] = data[col].str.strip()

# Define the identifiers, quasi-identifiers and the sensitive attribute
quasi_ident = [
    "age",
    "education",
    "marital-status",
    "occupation",
    "sex",
    "native-country",
]
ident = ["race"]
sens_att = "salary-class"

# Select the desired level of k, l and t
k = 10
l_div = 2
t = 0.5

# Select the suppression limit allowed
supp_level = 50

# Import the hierarchies for each quasi-identifier. Define a dictionary containing them
hierarchies = {
    "age": dict(pd.read_csv("hierarchies/age.csv", header=None)),
    "education": dict(pd.read_csv("hierarchies/education.csv", header=None)),
    "marital-status": dict(pd.read_csv("hierarchies/marital.csv", header=None)),
    "occupation": dict(pd.read_csv("hierarchies/occupation.csv", header=None)),
    "sex": dict(pd.read_csv("hierarchies/sex.csv", header=None)),
    "native-country": dict(pd.read_csv("hierarchies/country.csv", header=None)),
}

# Apply the three functions: k-anonymity, l-diversity and t-closeness
data_anon = k_anonymity(data, ident, quasi_ident, k, supp_level, hierarchies)
data_anon = l_diversity(
    data_anon, ident, quasi_ident, sens_att, k, l_div, supp_level, hierarchies
)
data_anon = t_closeness(
    data_anon, ident, quasi_ident, sens_att, k, t, supp_level, hierarchies
)

The code above runs in under 4 seconds on the more than 30,000 records of the original dataset.
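To check that the output actually satisfies the requested level, you can verify the anonymity metrics with pyCANON, or compute k directly: k is the size of the smallest group of records sharing the same quasi-identifier values. A minimal sketch (`compute_k` and the toy dataframe are illustrative, not part of ANJANA's API):

```python
import pandas as pd

def compute_k(df, quasi_ident):
    """k is the size of the smallest equivalence class over the quasi-identifiers."""
    return int(df.groupby(quasi_ident).size().min())

# Toy generalized table: two equivalence classes of size 2 each -> k = 2
toy = pd.DataFrame({
    "age": ["[20, 30)", "[20, 30)", "[10, 20)", "[10, 20)"],
    "sex": ["*", "*", "*", "*"],
})
print(compute_k(toy, ["age", "sex"]))  # → 2
```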

Define your own hierarchies

All the anonymity functions available in ANJANA receive a dictionary with the hierarchies to be applied to the quasi-identifiers. Its keys are the names of the quasi-identifier columns to be generalized (quasi-identifiers you do not want to generalize are simply left out of the dictionary). The value for each key (QI) is itself a dictionary of hierarchy levels: key 0 maps to the raw column (as it is in the original dataset), key 1 to the first level of generalization applied to those values, and so on, with as many keys as levels of hierarchy have been established.

For a better understanding, let's look at the following example. Suppose we have the following simulated dataset (extracted from the hospital_extended.csv dataset used for testing), with age, gender and city as quasi-identifiers, name as identifier and disease as sensitive attribute. For the QIs we want to apply the following hierarchies: intervals of 5 years (first level) and 10 years (second level) for age, and suppression as the first level for both gender and city.

| name      | age | gender | city       | disease        |
|-----------|-----|--------|------------|----------------|
| Ramsha    | 29  | Female | Tamil Nadu | Cancer         |
| Yadu      | 24  | Female | Kerala     | Viralinfection |
| Salima    | 28  | Female | Tamil Nadu | TB             |
| Sunny     | 27  | Male   | Karnataka  | No illness     |
| Joan      | 24  | Female | Kerala     | Heart-related  |
| Bahuksana | 23  | Male   | Karnataka  | TB             |
| Rambha    | 19  | Male   | Kerala     | Cancer         |
| Kishor    | 29  | Male   | Karnataka  | Heart-related  |
| Johnson   | 17  | Male   | Kerala     | Heart-related  |
| John      | 19  | Male   | Kerala     | Viralinfection |

Then, to create the hierarchies, we can define the following dictionary:

import numpy as np
import pandas as pd

data = pd.read_csv("hospital_extended.csv")

age = data['age'].values
# Values: [29 24 28 27 24 23 19 29 17 19] (note that the following can be automated)
age_5years = ['[25, 30)', '[20, 25)', '[25, 30)',
              '[25, 30)', '[20, 25)', '[20, 25)',
              '[15, 20)', '[25, 30)', '[15, 20)', '[15, 20)']

age_10years = ['[20, 30)', '[20, 30)', '[20, 30)',
               '[20, 30)', '[20, 30)', '[20, 30)',
               '[10, 20)', '[20, 30)', '[10, 20)', '[10, 20)']

hierarchies = {
    "age": {0: age,
            1: age_5years,
            2: age_10years},
    "gender": {
        0: data["gender"].values,
        1: np.array(["*"] * len(data["gender"].values)) # Suppression
    },
    "city": {0: data["city"].values,
             1: np.array(["*"] * len(data["city"].values))} # Suppression
}
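As noted in the comment above, building these interval labels can be automated instead of written by hand. A minimal sketch (`interval_hierarchy` is a hypothetical helper, not part of ANJANA's API):

```python
import numpy as np

def interval_hierarchy(values, width):
    """Generalize numeric values into '[low, high)' interval labels of the given width."""
    lows = (np.asarray(values) // width) * width
    return np.array([f"[{low}, {low + width})" for low in lows])

ages = [29, 24, 28, 27, 24, 23, 19, 29, 17, 19]
print(interval_hierarchy(ages, 5))   # starts with '[25, 30)' '[20, 25)' '[25, 30)'
print(interval_hierarchy(ages, 10))  # starts with '[20, 30)' '[20, 30)' '[20, 30)'
```

Each hierarchy level is then just a call with a different width, so adding a coarser level costs one line.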

:scroll: License

This project is licensed under the Apache 2.0 license.

:warning: Project status

This project is under active development.

Funding and acknowledgments

This work is funded by the European Union through the SIESTA project (Horizon Europe) under Grant number 101131957.


Note: Anjana and the mythology of Cantabria

"La Anjana" is a character from the mythology of Cantabria. Known as the good fairy of Cantabria, generous and protective of all people, she helps the poor, the suffering and those who stray in the forest.

- Partially extracted from: Cotera, Gustavo. Mitología de Cantabria. Ed. Tantin, Santander, 1998.

