edyvision / pii-codex
A research Python package for detecting, categorizing, and assessing the severity of personal identifiable information (PII)

License: BSD 3-Clause "New" or "Revised" License


pii-codex's Introduction


PII Detection, Categorization, and Severity Assessment



Author: Eidan Rosado - @EdyVision
Affiliation: Nova Southeastern University, College of Computing and Engineering

Project Background

The PII Codex project was built as a core part of an ongoing research effort in Personal Identifiable Information (PII) detection and risk assessment (to be publicly released later in 2023). There was a need not only to detect PII in text, but also to identify its severity, map it to the categorizations used in cybersecurity research and policy documentation, and provide a way for others in similar research efforts to reproduce or extend the work. PII Codex is a combination of systematic research, conceptual frameworks, third-party open source software, and cloud service provider integrations. The categorizations are directly influenced by the research of Milne et al. (2016), while the ranking follows the severity scale provided by Schwartz and Solove (2012): Non-Identifiable, Semi-Identifiable, and Identifiable.

The outputs of the primary PII Codex analysis and adapter functions are AnalysisResult or AnalysisResultSet objects that provide a listing of detections, severities, mean risk scores for each string processed, and summary statistics for the analysis. The final outputs do not contain the original texts; instead, they indicate where the detections were found should the end user need this information in their analysis.

Statement of Need

The general knowledge base of identifiable data, the usage restrictions of this data, and the associated policies surrounding it have shifted drastically over the years. The tech industry has had to adjust to many policy changes regarding the tracking of individuals, the usage of data from online profiles and platforms, and the right to be forgotten entirely from a service or platform (GDPR). While the shift has provided data protections around the globe, the majority of technology users continue to have little to no control over their personal information with third-party data consumers (Trepte, 2020).

Understanding whether identifiable data types exist in a data set can prevent accidental sharing of such data by allowing its detection in the first place and, in the case of this software package, by presenting sanitized strings and the reasons why a token was considered PII, while permitting the results to be published.

Potential Usages

Potential usages include sanitizing dataset strings (e.g. a collection of social media posts), presenting results to users of software that examines their interactions (e.g. UX research on user awareness in cybersecurity applications), etc.


Running Locally with Poetry

This project uses Poetry. To run this project, install Poetry and follow the instructions under /docs/LOCAL_SETUP.md.

Note: This project has only been tested on Ubuntu and macOS, with Python versions 3.9 and 3.10. You may need to upgrade pip ahead of installation.

Installing with PIP

A video capture of the install is provided in the LOCAL_SETUP.md file. Make sure you set up a virtual environment with either Python 3.9 or 3.10 and upgrade pip with:

pip install --upgrade pip
pip install -U pip setuptools wheel # only needed if you haven't already done so 

Before adding pii-codex to your project, download the spaCy en_core_web_lg model:

pip install -U spacy
python3 -m spacy download en_core_web_lg

For more details on spaCy installation and usage, refer to their docs.
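
To confirm the model downloaded correctly before installing pii-codex, a quick check with the standard spaCy API (not part of the pii-codex setup steps themselves) is:

import spacy

# Load the large English model downloaded above; raises OSError if it is missing.
nlp = spacy.load("en_core_web_lg")
print(nlp.pipe_names)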

The repository releases are hosted on PyPI and can be installed with:

pip install pii-codex
pip install "pii-codex[detections]"

Note: The extras installed with pii-codex[detections] are the spaCy, Microsoft Presidio Analyzer, and Microsoft Presidio Anonymizer packages.

Using Poetry:

poetry update
poetry add pii-codex
poetry add "pii-codex[detections]" # to include the detection extras

For those using Google Colab, check out the example notebook:

Open In Colab

Usage

A video capture of usage is provided in LOCAL_SETUP.md.

Sample Input / Output

The built-in analyzer uses Microsoft Presidio. Feed in a collection of strings with analyze_collection() or just a single string with analyze_item(). Those analyzing a collection of strings will also be provided with statistics calculated on the risk scores for detected items.

from pii_codex.services.analysis_service import PIIAnalysisService
PIIAnalysisService().analyze_collection(
    texts=["your collection of strings"],
    language_code="en",
    collection_name="Data Set Label", # Optional Labeling
    collection_type="SAMPLE" # Defaults to POPULATION, used stats calculations
)
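
For a single string, a minimal sketch, assuming analyze_item() accepts text and language_code parameters analogous to analyze_collection() above:

from pii_codex.services.analysis_service import PIIAnalysisService

# Parameter names here mirror analyze_collection() above and are assumptions.
result = PIIAnalysisService().analyze_item(
    text="Hi! My name is Jane Doe",
    language_code="en",
)
print(result)  # presumably a single AnalysisResult (see Project Background)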

For those analyzing social media posts, you can also pass in a data param (a dataframe with a text column and a metadata column to be analyzed) instead of a simple text array. The metadata currently supported are URL, LOCATION, and SCREEN_NAME.
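
A sketch of the dataframe route; the data keyword and the text/metadata column names follow the description above, while the per-row metadata format (a dict flagging which metadata accompany each post) is an assumption:

import pandas as pd
from pii_codex.services.analysis_service import PIIAnalysisService

# Hypothetical posts with a text column and a metadata column (format assumed).
posts = pd.DataFrame(
    {
        "text": ["Check out my profile!", "Greetings from Fort Lauderdale."],
        "metadata": [{"url": True, "screen_name": True}, {"location": True}],
    }
)

PIIAnalysisService().analyze_collection(
    data=posts,  # dataframe input as described above; keyword name assumed
    language_code="en",
)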

Sample output (results object converted to dict from notebook):

{
    "collection_name": "PII Collection 1",
    "collection_type": "POPULATION",
    "analyses": [
        {
            "analysis": [
                {
                    "pii_type_detected": "PERSON",
                    "risk_level": 3,
                    "risk_level_definition": "Identifiable",
                    "cluster_membership_type": "Financial Information",
                    "hipaa_category": "Protected Health Information",
                    "dhs_category": "Linkable",
                    "nist_category": "Directly PII",
                    "entity_type": "PERSON",
                    "score": 0.85,
                    "start": 21,
                    "end": 24,
                }
            ],
            "index": 0,
            "risk_score_mean": 3,
            "sanitized_text: "Hi! My name is <REDACTED>",
        },
        ...
    ],
    "detection_count": 5,
    "risk_scores": [3, 2.6666666666666665, 1, 2, 1],
    "risk_score_mean": 1.9333333333333333,
    "risk_score_mode": 1,
    "risk_score_median": 2,
    "risk_score_standard_deviation": 0.8273115763993905,
    "risk_score_variance": 0.6844444444444444,
    "detected_pii_types": {
        "LOCATION",
        "EMAIL_ADDRESS",
        "URL",
        "PHONE_NUMBER",
        "PERSON",
    },
    "detected_pii_type_frequencies": {
        "PERSON": 1,
        "EMAIL_ADDRESS": 1,
        "PHONE_NUMBER": 1,
        "URL": 1,
        "LOCATION": 1,
    },
}
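
Because the converted output is a plain dict, downstream filtering needs nothing beyond standard Python. A small illustration using only the keys shown above and an arbitrary risk threshold:

# Minimal stand-in for the converted results dict shown above.
results = {
    "analyses": [
        {"index": 0, "risk_score_mean": 3, "sanitized_text": "Hi! My name is <REDACTED>"},
        {"index": 1, "risk_score_mean": 1, "sanitized_text": "Nothing detected here"},
    ],
    "risk_score_mean": 2,
}

RISK_THRESHOLD = 2.5  # arbitrary cutoff for this illustration

# Collect the index and sanitized text of every analyzed string above the cutoff.
high_risk = [
    (entry["index"], entry["sanitized_text"])
    for entry in results["analyses"]
    if entry["risk_score_mean"] >= RISK_THRESHOLD
]
print(high_risk)  # [(0, 'Hi! My name is <REDACTED>')]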

Docs

For more information on usage, check out the respective documentation for guidance on using PII-Codex.

Topic | Document | Description
PII Type Mappings | PII Mappings | Overview of how to perform mappings between PII types and how to review stored PII types.
PII Detections and Analysis | PII Detection and Analysis | Overview of how to detect and analyze strings.
Local Repo Setup | Local Repo Setup | Instructions for local repository setup.
Example Analysis | Example Analysis Notebook | Notebook with example analysis using MSFT Presidio.
PII-Codex Docs | docs/pii_codex/index.html | Autogenerated docs on classes, services, and models.

Attributions

This project benefited greatly from a number of PII research works, such as Milne et al. (2016) for the definition of the types and categories, Schwartz and Solove (2012) for the severity levels of Non-Identifiable, Semi-Identifiable, and Identifiable, and the documentation by NIST, DHS (2012), and HIPAA (a full list of foundational publications is provided below). A special thanks to all the open source projects and frameworks that made the setup and structuring of this project much easier, such as Poetry, Microsoft Presidio, spaCy (2017), Jupyter, and several others.

Foundational Publications

The following publications inspired and provided a foundation for this repository:

Concept | Document | Description
PII Type Mappings | Milne et al. (2016) | PII token categories and NIST and DHS categorizations.
Risk Continuum | Schwartz & Solove (2011) | Risk continuum concept and definition (what led to the ranking in PII-Codex).
Privacy and Affordances | Trepte (2020) | Background on third-party data consumption and user control (or lack thereof).
Social Media and Privacy | Beigi & Liu (2010) | Privacy issues with social media and third-party data consumption.
Privacy Settings and Data Access | Moura & Serrão (2016) | Privacy settings, data access, and unauthorized usage.
Information Privacy Review | Bélanger & Crossler (2011) | Concept of aggregation of data to identify individuals.
Big Data and Third Party Data Consumption | Tene & Polonetsky (2013) | Third-party data usage, user control, and privacy.
PII and Confidentiality | McCallister et al. (2010) | NIST guidance on PII confidentiality protections for federal agencies.
Data Capitalism and Privacy | West (2017) | Data capitalism, surveillance, and privacy.

The remaining resources, such as Python library citations, cloud service provider docs, and cybersecurity guidelines, are included in the paper.bib file.

Community Guidelines

For community guidelines and contribution instructions, please view the CONTRIBUTING.md file.


pii-codex's Issues

Error: ASCII codec can't decode byte

I'm facing a problem on Linux in config > PII_MAPPER() > open_pii_type_mapping_csv(). There, pd.read_csv(file) gives UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 2598: ordinal not in range(128).
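
The symptom looks locale-related; a quick check of the interpreter's default encoding (standard library only, just to narrow things down) is:

import locale

# If this prints an ASCII locale (e.g. 'ANSI_X3.4-1968'), that would explain why the
# mapping CSV ends up decoded with the ascii codec when no explicit encoding is set.
print(locale.getpreferredencoding())

If it reports ASCII, switching the environment to a UTF-8 locale before launching Python may avoid the error.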

If anyone knows about this, please respond to this issue.

Thank you

Thank you

Just wanted to write a note of thanks for this work. It has been incredibly useful for us in designing a tool that sniffs out PII from data storage.

Error while detecting US SSN and US Bank account number.

It seems that when US_SSN is detected in a sentence, it always fails with the error message below:

"Exception: An error occurred while processing the detected entity US_SSN"

Traceback:

  File "C:\Python_Local\Cerebro\cerebro-flask-api\venv\lib\site-packages\pii_codex\services\analysis_service.py", line 75, in analyze_item
    analysis, sanitized_text = self._perform_text_analysis(
  File "C:\Python_Local\Cerebro\cerebro-flask-api\venv\lib\site-packages\pii_codex\services\analysis_service.py", line 280, in _perform_text_analysis
Exception: An error occurred while processing the detected entity US_SSN

After looking closer at the code, it seems that this entity type is missing from the CSV in the data folder.

file: pii_mapping_util.py

    def __init__(self):
        self._pii_mapping_data_frame = open_pii_type_mapping_csv("v1")

file: file_util.py

    file_path = get_relative_path(
        f"../data/{mapping_file_version}/{mapping_file_name}.csv"
    )

The file contains PII_Type = "US_SOCIAL_SECURITY_NUMBER" instead of "US_SSN"

Same exception happens for bank number as well:
Exception: An error occurred while processing the detected entity US_BANK_NUMBER

  File "C:\Python_Local\Cerebro\cerebro-flask-api\venv\lib\site-packages\pii_codex\services\assessment_service.py", line 21, in assess_pii_type
    return PII_MAPPER.map_pii_type(detected_pii_type)
  File "C:\Python_Local\Cerebro\cerebro-flask-api\venv\lib\site-packages\pii_codex\utils\pii_mapping_util.py", line 45, in map_pii_type
    raise Exception(
Exception: An error occurred while processing the detected entity US_BANK_NUMBER

The file contains PII_Type = "US_BANK_ACCOUNT_NUMBER" instead of "US_BANK_NUMBER".

Hope this helps.

Thank you

Cloud examples

hi,

Do you guys have examples for docs/UC1_Converting_Existing_Detections_With_Adapters.png?

Thanks
GP

Recommend Projects

  • React photo React

    A declarative, efficient, and flexible JavaScript library for building user interfaces.

  • Vue.js photo Vue.js

    🖖 Vue.js is a progressive, incrementally-adoptable JavaScript framework for building UI on the web.

  • Typescript photo Typescript

    TypeScript is a superset of JavaScript that compiles to clean JavaScript output.

  • TensorFlow photo TensorFlow

    An Open Source Machine Learning Framework for Everyone

  • Django photo Django

    The Web framework for perfectionists with deadlines.

  • D3 photo D3

    Bring data to life with SVG, Canvas and HTML. 📊📈🎉

Recommend Topics

  • javascript

    JavaScript (JS) is a lightweight interpreted programming language with first-class functions.

  • web

    Some thing interesting about web. New door for the world.

  • server

    A server is a program made to process requests and deliver data to clients.

  • Machine learning

    Machine learning is a way of modeling and interpreting data that allows a piece of software to respond intelligently.

  • Game

    Some thing interesting about game, make everyone happy.

Recommend Org

  • Facebook photo Facebook

    We are working to build community through open source technology. NB: members must have two-factor auth.

  • Microsoft photo Microsoft

    Open source projects and samples from Microsoft.

  • Google photo Google

    Google ❤️ Open Source for everyone.

  • D3 photo D3

    Data-Driven Documents codes.