A well-known Swiss financial services provider, which also issues credit cards, needs to test its software regularly. Its application is a complex in-house development that has to comply with stringent laws and regulations such as the Payment Card Industry Data Security Standard (PCI DSS). The company therefore needs to ensure that its test data comprises 16-digit credit card numbers linked to the corresponding security codes, encoding and tax information. It then has to convert the original data into anonymous test data so that nobody can infer anything about the real data.
However, the company’s biggest challenge was that the solution had to be compatible with its systems and also support web applications, files and databases such as DB2 and Oracle. By using the eperi pseudonymization solution, the financial services provider is able to adhere to all legal and compliance-related requirements. A flexible tokenization process ensures that no information can be inferred from the original data, while still allowing the data to be tested under real-life conditions. The company is also able to configure the test data to suit its exact requirements and modify individual parameters as necessary. Another advantage is that the solution integrates seamlessly with the firm’s existing systems – no changes were required. The eperi test data generation solution is used as a transparent proxy that is installed in front of the test systems, such as databases, applications and file systems. This keeps integration work to a minimum and lets the customer generate test data in real time that remains consistent across all systems.
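To make the ideas of a “flexible tokenization process” and “consistent test data” concrete, here is a minimal, purely illustrative sketch in Python. It is not eperi’s actual implementation; the function and table names are hypothetical. It replaces a 16-digit card number with a random but format-preserving, Luhn-valid token, and a mapping table keeps the replacement consistent across repeated lookups:

```python
import random

# Hypothetical sketch of format-preserving tokenization for a 16-digit
# card number (PAN). The token is a random Luhn-valid 16-digit number;
# a mapping table keeps tokens consistent across systems and runs.
# In a real deployment this table would be stored and protected separately.

_token_table: dict[str, str] = {}  # original PAN -> token


def luhn_check_digit(digits15: str) -> str:
    """Compute the Luhn check digit for a 15-digit payload."""
    total = 0
    # Walk right to left; the rightmost payload digit is doubled once
    # the check digit is appended to the number.
    for i, ch in enumerate(reversed(digits15)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)


def tokenize_pan(pan: str) -> str:
    """Return a consistent, Luhn-valid 16-digit token for a real PAN."""
    if pan not in _token_table:
        prefix = "".join(random.choice("0123456789") for _ in range(15))
        _token_table[pan] = prefix + luhn_check_digit(prefix)
    return _token_table[pan]
```

Because the token passes the same format and checksum validation as a real card number, test systems can process it exactly as they would production data, yet the token itself reveals nothing about the original number.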
Let’s look more closely at what pseudonymization and tokenization actually mean – and where anonymization fits into the picture.
The General Data Protection Regulation (GDPR) stipulates that personal data must be pseudonymized or anonymized, and this is just as crucial in test environments as elsewhere. But how do these techniques differ?
The GDPR defines pseudonymization as a way of processing personal data “in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information” (Art. 4, Para. 5, GDPR). This means that the original value is replaced by another value and the correspondences are stored in a specific table. That enables the original to be restored whenever required. The table of correspondences can be stored separately from systems, applications and databases. Tokenization – the generation of replacement values for certain original values – is a part of pseudonymization. In the eperi solutions, the original values are encrypted once they have been tokenized and only then are they stored in the solution’s own token database.
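The core mechanism described above – replacement values plus a separately stored table of correspondences – can be sketched in a few lines of Python. This is an illustrative toy, not eperi’s product: the function names are invented, and where eperi additionally encrypts the stored originals, this sketch uses a plain in-memory dictionary:

```python
import secrets

# Minimal sketch of pseudonymization via tokenization: each original
# value is replaced by a random token, and the table of correspondences
# is kept in a separate store so the original can be restored on demand.
# (A real solution would encrypt this table and hold it apart from the
# systems, applications and databases that see only the tokens.)

_correspondences: dict[str, str] = {}  # token -> original value


def pseudonymize(value: str) -> str:
    """Replace a personal value with a random, meaningless token."""
    token = "tok_" + secrets.token_hex(8)
    _correspondences[token] = value
    return token


def depseudonymize(token: str) -> str:
    """Restore the original - possible only with access to the table."""
    return _correspondences[token]
```

The key property is the one the GDPR definition names: without the additional information in `_correspondences`, the token alone cannot be attributed to the data subject.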
Anonymization is stricter and goes a step further than pseudonymization. Personal data is rendered anonymous “in such a manner that the data subject is not or no longer identifiable” (Recital 26, GDPR). In contrast to pseudonymization, this precludes storing the original data and the table of correspondences. The stringency of this approach makes anonymization ideal for test data generation, as it is no longer possible to infer anything at all about the original data.
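The contrast with pseudonymization can be made explicit in code. In this hypothetical sketch (again illustrative, not a real product API), the replacement value is generated independently of the input and no mapping is kept, so the operation cannot be reversed by anyone:

```python
import secrets

# Minimal sketch of anonymization for test data: the original value is
# replaced by a freshly generated random value of the same format, and
# no table of correspondences is kept, so the replacement is irreversible.

def anonymize_card_number(_original: str) -> str:
    # The original is deliberately ignored beyond its format: the output
    # carries no information about the input, and nothing is stored.
    return "".join(secrets.choice("0123456789") for _ in range(16))
```

Note the trade-off: because nothing is stored, the same input produces a different output on every call, so consistency across systems must be achieved differently than with a token table – which is exactly why the two techniques suit different use cases.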