Secure and Transform Your Organization’s Data Through Data Masking


As businesses realize what data can do to deliver a unique product or service experience, they are collecting data from every available source. The collected data is huge in volume and is shared with many stakeholders to derive meaningful insights and to serve customers.

This data sharing results in regular data breaches that affect companies of all sizes and in every industry, exposing the sensitive data of millions of people every year and costing businesses millions of dollars. According to an IBM report, the average cost of a data breach rose to $4.35 million in 2022, up from $4.24 million in 2021. It is therefore imperative to secure access to the sensitive data that flows across an organization, enabling development, service, and production at scale without compromising privacy.

Data masking anonymizes and conceals sensitive data

Data masking anonymizes or conceals this sensitive data while allowing it to be leveraged for various purposes or within different environments.

Create an alternate version in the same format as the original data

The data masking technique protects data by creating an alternate version in the same format as the original. The alternate version is functional but cannot be decoded or reverse-engineered, and it remains consistent across multiple databases. It is used to protect different types of data.
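As a rough illustration (a hypothetical Python sketch, not code from any product mentioned in this article), a format-preserving mask can replace each character with a random one of the same class. Seeding the generator from a hash of the value keeps the output deterministic, which is one simple way to get the cross-database consistency described above:

```python
import hashlib
import random
import string

def mask_same_format(value: str) -> str:
    """Replace digits with digits and letters with letters, keeping
    separators and length, so the mask has the same format as the input."""
    # Seed from a hash of the value: the same input always masks to the
    # same output, so the masked value stays consistent across databases.
    seed = int.from_bytes(hashlib.sha256(value.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(rng.choice(string.digits))
        elif ch.isalpha():
            pool = string.ascii_uppercase if ch.isupper() else string.ascii_lowercase
            out.append(rng.choice(pool))
        else:
            out.append(ch)  # keep separators such as '-' or ' '
    return "".join(out)

masked = mask_same_format("4111-1111-1111-1234")
```

Note that this sketch is not reversible and not cryptographically strong; it only demonstrates the format-preserving idea.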

Common types of sensitive data for data masking

  • PII: Personally Identifiable Information
  • PHI: Protected Health Information
  • PCI-DSS: Payment Card Industry Data Security Standard
  • ITAR: International Traffic in Arms Regulations (defense-related technical data)

According to a study by Mordor Intelligence, the data masking market was valued at USD 483.90 million in 2020 and is expected to reach USD 1,044.93 million by 2026, at a CAGR of 13.69% over the forecast period 2021–2026.

In this information age, cyber security is paramount. Data masking helps secure sensitive data by providing a masked version of real data while preserving its business value. It also addresses threats such as data loss, data exfiltration, insider threats, and account breaches.

Many data masking techniques are used to create a non-identifiable or indecipherable version of sensitive data to prevent data leaks. Masking maintains data confidentiality and helps businesses comply with data security standards such as the General Data Protection Regulation (GDPR) and the Payment Card Industry Data Security Standard (PCI DSS).

Common Methods of Data Masking

1.  Static Data Masking

This method is commonly used to create a masked copy of production data. The masked data retains the original structure without revealing the actual information; it is altered to look accurate and close to its original characteristics so that it can be leveraged in development, testing, or training environments.
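A minimal sketch of the static approach, assuming hypothetical record and field names: the production rows are copied once, the sensitive fields in the copy are overwritten with realistic-looking placeholders, and the copy is what development and test environments receive.

```python
import copy

PRODUCTION = [
    {"id": 1, "name": "Alice Smith", "email": "alice@example.com"},
    {"id": 2, "name": "Bob Jones", "email": "bob@example.com"},
]

def static_mask(rows, sensitive=("name", "email")):
    """Return a masked copy for dev/test; the production rows are untouched."""
    masked = copy.deepcopy(rows)
    for i, row in enumerate(masked, start=1):
        for field in sensitive:
            row[field] = f"{field}_{i:04d}"  # realistic-looking placeholder
    return masked

TEST_COPY = static_mask(PRODUCTION)
```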

2.  Dynamic data masking

This method differs from static masking in that active, live data is masked without altering the original data. The data is masked only at a particular database layer, preventing unauthorized access to the information across different environments.

With this method, organizations can conceal data dynamically while handling data requests from third-party vendors or internal stakeholders. It is used to process customer inquiries around payments or to handle medical records within applications or websites.
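The key idea, stripped down to a hypothetical Python sketch (the role names and fields are invented for illustration), is that masking happens at read time based on who is asking, while the stored record is never modified:

```python
def dynamic_mask(row: dict, role: str) -> dict:
    """Mask sensitive fields at query time; the stored row is never changed."""
    if role == "support":  # unprivileged role sees only the last four digits
        masked = dict(row)
        masked["card"] = "*" * 12 + row["card"][-4:]
        return masked
    return row  # privileged roles see the original value

record = {"customer": "Alice", "card": "4111111111111234"}
```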

Informatica offers PowerCenter with PowerExchange for Extract Transform Load (ETL) and ILM for data masking. These products embody best practices for handling large datasets across multiple technologies and sources.

Informatica Dynamic Data Masking anonymizes data and manages unauthorized access to sensitive information in production environments, such as customer service, billing, order management, and customer engagement. Informatica PowerCenter Data Masking Option transforms production data into real-looking anonymized data.

3.  On-the-fly data masking

The on-the-fly data masking method is considered ideal for organizations that integrate data continuously. Data is masked as it is transferred from production to another environment, such as development or test. Only the required portion or subset of the data is masked, eliminating the need to maintain a continuous copy of masked data in a staging environment.
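One way to picture this (a hypothetical sketch; the field names are made up) is a generator that masks records as they stream from source to target, so no intermediate masked copy is ever staged:

```python
def mask_in_transit(rows, fields_to_mask):
    """Yield rows with sensitive fields masked as they stream from
    production to the target environment; nothing is staged on disk."""
    for row in rows:
        out = dict(row)
        for field in fields_to_mask:
            if field in out:
                out[field] = "***"
        yield out

production_stream = iter([{"name": "Alice", "ssn": "123-45-6789"}])
masked_rows = list(mask_in_transit(production_stream, {"ssn"}))
```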

Different platforms use one or a combination of these methods to implement data masking. For example, K2view offers data masking through its data product platform, which simplifies masking all of the data related to a specific business entity, such as a customer, order, or credit card number.

The K2view platform manages the integration and delivery of each business entity's sensitive data, masked within its encrypted Micro-Database. It uses dynamic data masking for operational services such as customer data management (customer 360) and test data management.

Another example of combining static and dynamic data masking is Baffle Data Protection Services (DPS). It helps mitigate the risk of leakage of data such as PII and test data across a variety of sources. With Baffle, businesses can build their own data protection layer to store personal data at the source and manage strong access controls at that source with Adaptive Data Security.

Popular Data Masking Techniques

Data encryption is the most common and reliable data-securing technique. It is used when the masked data must be restorable to its original value. The encryption method conceals the data, which is later decrypted with an encryption key. Production data or data in motion can be secured with encryption, since access can be limited to authorized individuals and the data restored as required.
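The reversibility is the point, as this deliberately simplified, stdlib-only sketch shows: XOR against a hash-derived keystream conceals the value, and applying the same operation with the same key restores it. This toy keystream has no nonce and is not secure; a real deployment should use a vetted library and an authenticated cipher such as AES-GCM.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream built by repeated hashing: illustration only,
    # not a real cipher (no nonce, no authentication).
    out, block = b"", key
    while len(out) < n:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:n]

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR the plaintext with a key-derived stream."""
    return bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))

# XOR with the same keystream restores the original value.
decrypt = encrypt

cipher = encrypt(b"alice@example.com", b"secret-key")
```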

The data scrambling technique secures certain types of data by rearranging the original characters or numbers in random order. Once the data is scrambled, the original cannot be restored. It is a relatively simple technique, but it applies only to particular types of data and offers less security. Scrambled data also appears differently (with randomized characters or numbers) in each environment.
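A minimal sketch of scrambling: shuffle the characters of the value in place. The characters are all preserved, only their order changes, and without the permutation the original cannot be recovered.

```python
import random

def scramble(value: str, seed=None) -> str:
    """Randomly reorder the characters of a value; the original order
    cannot be recovered without knowing the permutation."""
    chars = list(value)
    random.Random(seed).shuffle(chars)
    return "".join(chars)

scrambled = scramble("1234-5678", seed=7)
```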

The nulling-out technique assigns a null value to sensitive data to anonymize it and protect it from unauthorized use. Replacing the original information with nulls, however, changes the characteristics of the data and takes away much of its usefulness, making it unfit for test or development environments. Data integration also becomes a challenge when fields are replaced with empty or null values.
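Nulling out is the simplest technique to sketch, and the sketch also shows the drawback: the masked record no longer has realistic values in the nulled fields.

```python
def null_out(row: dict, fields: set) -> dict:
    """Replace sensitive fields with None. Simple, but the masked data
    loses its shape and is usually unfit for test environments."""
    return {k: (None if k in fields else v) for k, v in row.items()}

masked_row = null_out({"name": "Alice", "age": 30}, {"name"})
```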

The shuffling technique makes the masked data look authentic by randomly reordering the values within a column. For instance, it is often used to shuffle an employee-name column across salary records, or to shuffle patient-name columns across multiple patient records.

The shuffled data appear accurate but do not give away any sensitive information. The technique is popular for large datasets.
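Sketched in Python (with invented field names): one column's values are shuffled across the records, so every row still carries a realistic value, just not its own.

```python
import random

def shuffle_column(rows, column, seed=None):
    """Shuffle one column's values across records: each row keeps a
    realistic value for the column, but not its own."""
    values = [row[column] for row in rows]
    random.Random(seed).shuffle(values)
    return [dict(row, **{column: v}) for row, v in zip(rows, values)]

people = [
    {"name": "Alice", "salary": 50},
    {"name": "Bob", "salary": 60},
    {"name": "Cara", "salary": 70},
]
shuffled = shuffle_column(people, "salary", seed=0)
```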

The Data Redaction technique, also known as blacklining, does not retain the attributes of the original data and masks data with generic values. This technique is similar to nulling out and is used when sensitive data in its complete and original state is not required for development or testing purposes.

For instance, replacing most of a credit card number with x's (xxxx xxxx xxxx 1234) on payment pages helps prevent data leaks, while still letting developers understand what the data looks like in production.
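The credit card example above can be sketched directly: all but the last four digits are replaced with `x`, and the separators are kept so the layout of the original survives.

```python
import re

def redact_card(number: str) -> str:
    """Replace all but the last four digits with 'x', keeping separators."""
    digits = re.sub(r"\D", "", number)
    masked_digits = "x" * (len(digits) - 4) + digits[-4:]
    # Re-insert the masked digits into the original layout.
    it = iter(masked_digits)
    return "".join(next(it) if ch.isdigit() else ch for ch in number)

masked_card = redact_card("4111 1111 1111 1234")
```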

The substitution technique is considered the most effective for preserving the data's original structure, and it can be used with a variety of data types. The data is masked by substituting it with another realistic value.

For example, substituting the first name 'X' with 'Y' in customer records retains the structure of the data and makes it appear to be a valid entry, yet protects against accidental disclosure of the actual values.
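A minimal substitution sketch, assuming a hypothetical pool of replacement names: hashing the input to pick the substitute makes the mapping deterministic, so the same original name is always replaced by the same realistic-looking value.

```python
import hashlib

# Hypothetical replacement pool; a real deployment would use a larger list.
FIRST_NAMES = ["Avery", "Blake", "Casey", "Drew", "Ellis"]

def substitute_name(name: str) -> str:
    """Deterministically substitute a name from a pool, so the same
    input always maps to the same replacement."""
    idx = int.from_bytes(hashlib.sha256(name.encode()).digest()[:4], "big")
    return FIRST_NAMES[idx % len(FIRST_NAMES)]

replacement = substitute_name("Alice")
```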

Conclusion

Data masking has emerged as a necessary step for moving realistic data into non-production environments while maintaining the security and privacy of sensitive data.

Masking is crucial when managing large volumes of data, and it lets organizations control who can access which data.

Featured Image Credit: Provided by the Author; Pexels; Thank you!

Yash Mehta

Yash is an entrepreneur and early-stage investor in emerging tech markets. He has been actively sharing his opinion on cutting-edge technologies like Semantic AI, IoT, Blockchain, and Data Fabric since 2015.
Yash’s work appears in various authoritative publications and research platforms globally. Yash Mehta’s work was awarded “one of the most influential works in the connected technology industry,” by Fortune 500.
Currently, Yash heads a market intelligence, research and advisory software platform called Expersight. He is co-founder at Esthan and Intellectus SaaS platform.

Fintech Kennek raises $12.5M seed round to digitize lending


London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.

Featured Image Credit: Photo from Kennek.io; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.


Fortune 500’s race for generative AI breakthroughs

Deanna Ritchie


As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on their software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.


UK seizes web3 opportunity simplifying crypto regulations


Deanna Ritchie


As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.

Copyright © 2021 Seminole Press.