
doi: 10.3233/aise220038
Using Natural Language Processing (NLP), a discipline of machine learning, organizations can structure their data so that it better represents their internal knowledge. To make NLP models easier to re-use in other contexts, they must be protected accordingly, which raises the question of which privacy-enhancing technology (PET) is appropriate. To address this question, this paper conducts a literature review on ensuring privacy in NLP, following the PRISMA framework. After the identification process, 22 relevant studies were selected. Some of these studies show promising results; however, the field of privacy preservation in NLP remains uncategorized, and the different approaches are difficult to compare.
