
Ethical Considerations in IoT Data Collection



Last year, a court determined Richard Dabate — who police had found with one arm and one leg zip-tied to a folding chair in his home — was guilty of his wife’s murder. His elaborate story of a home invasion might have held water had it not been for Connie Dabate’s Fitbit, which showed her moving around for an hour after the alleged intruder took her life.

Few would argue this was a case of unethical data collection, but ethics and privacy have a complicated, at times sordid history. Shaped by episodes like the Henrietta Lacks case, in which a researcher cultured a patient's cancer cells without her knowledge or consent, a new era of privacy ethics is taking shape, and it has people questioning right from wrong.

What Is IoT?

The Internet of Things (IoT) is shorthand for the vast, interconnected network of smart devices that collect and store information online. Projected to be worth over $1 trillion by 2030, it includes appliances people use at home — like TVs, voice assistants, and security cameras — as well as infrastructure like smart streetlights and electric meters. Many businesses use IoT to analyze customer data and improve their operations.

Unethical Data Collection and Use

There’s no question that IoT data is helpful. People use it for everything from remotely turning off the AC to drafting blueprints for city streets, and it has enabled significant improvements in many industries. However, it can also lead to unethical data collection and applications.

For example, using a person’s demographic information without their consent or for purposes beyond marketing and product development can feel like a breach of trust. Data misuse includes the following violations.

1. Mishandling Data

Collecting and storing vast amounts of data brings ethics and privacy into question. Some 28% of companies have experienced a cyberattack due to their use of IoT infrastructure, and these breaches often expose people’s sensitive or confidential information.

The average data breach cost in 2022 was $4.35 million — and a loss of consumer trust. For example, hospital network hacks can reveal patients’ medical history, credit card numbers, and home addresses, leaving already-struggling people even more vulnerable to financial woes. The loss of privacy can make people wary about using a service again.

Mishandling data isn't unique to IoT devices, of course: 40% of salespeople still use informal methods like email and spreadsheets to store customer information, and those files are also targets for hackers. But IoT devices often collect data beyond what you'd find on a spreadsheet.

2. Collecting Highly Personal Info

Home IoT devices are privy to uniquely private data. Although 55% of consumers feel unseen by the brands they interact with, many people would be shocked at how much businesses actually know about them.

Some smartwatches use body temperature sensors to determine when a user is ovulating, estimate their fertility, or predict their next period. Data from smart toothbrushes can lower dental insurance rates for people who brush regularly and for the recommended two-minute interval.

In many cases, smart devices collect as much information as a doctor would, but without being bound by pesky HIPAA privacy laws. As long as users consent, companies are free to use the data for research and marketing purposes.

It’s an easy way to find out what customers really want. Like hidden trail cameras capturing snapshots of elusive animals, smart devices let businesses into the heart of the home without resorting to customer surveys or guesswork.

3. Not Following Consent and Privacy Ethics

It’s one thing to allow your Alexa speaker to record you when you say its name; most users know about this feature. However, few realize Amazon itself holds onto the recordings and uses them to train its algorithms. There have also been cases where an Amazon Echo secretly recorded a conversation and sent it to a random person on the user’s contact list, provoking questions about unethical data collection and privacy ethics.

Getting explicit consent is crucial when collecting, analyzing, and profiting off of user data. Many companies bury their data use policies deep in a terms-and-conditions list they know users won’t read. Some use fine print many people struggle to make out.

Then, there’s the question of willing consent. If users have to sign up for a specific email service or social media account for work, do they really have a choice of whether to participate in data collection? Some of the most infamous cases of violating privacy ethics dealt with forced participation.

For example, during World War II, U.S. prisoners volunteered for studies intended to help the war effort, testing everything from malaria drugs to topical skin treatments. Some participated in exchange for cigarette money or the chance of a shorter sentence, but because they were physically confined in jail, they could not give fully free consent.

Even when users technically give explicit consent, most people now consider it unethical to collect data, medical or otherwise, by coercing someone into handing it over. Collecting data from people who are unaware they’re giving away sensitive information is also an ethics and privacy violation.

Characteristics of Ethical Data Use

How can data scientists, marketers, and IoT manufacturers keep users’ best interests in mind when collecting their data?

1. Ask for Permission

It’s crucial to always ask before using someone’s data — and ensure they heard you. IoT devices should come with detailed information about how the device will collect data, how often it will do so, how it will use the information, and why it needs it in the first place. These details should be printed in a clear, legible, large font and not be buried deep in a manual heavy enough to use as a paperweight.

2. Gather Just Enough

Before collecting information, decide if you really need it. How will it help advance your company’s objectives? What will you and your customers gain from it? Only gather data relevant to the problem at hand, and avoid collecting potentially sensitive information unless absolutely necessary.

For example, smart beds can track users’ heart rates, snoring, and movement patterns, but they can also collect data about a person’s race or gender. How many of these metrics are necessary for marketing and product development purposes?

3. Protect Privacy

After gathering data, keep it secure. Strong cybersecurity measures like encryption and multi-factor authentication can shield sensitive data from prying eyes.
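
As a rough illustration, here is a minimal sketch of encrypting a sensitive field at rest using the third-party Python cryptography package (an assumption; any vetted encryption library would do). The record layout and field names are hypothetical, and a real deployment would keep the key in a secrets manager rather than in code.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Generate a key; in production this would come from a secrets manager.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical IoT reading containing a sensitive measurement.
reading = {"device_id": "thermo-42", "body_temp_c": "37.1"}

# Encrypt before writing to storage...
stored_value = cipher.encrypt(reading["body_temp_c"].encode())

# ...and decrypt only when an authorized service actually needs the value.
original_value = cipher.decrypt(stored_value).decode()
assert original_value == reading["body_temp_c"]
```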

Another way to protect consumer privacy is to de-identify a data set. Removing all personally identifiable information and leaving just the measurements behind means that even if someone leaks the data, it is far harder to connect it to real people.
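
A minimal Python sketch of that idea follows. The field names are hypothetical, and hashing alone is not a silver bullet: real de-identification also has to account for quasi-identifiers such as location or birth date.

```python
import hashlib

# Hypothetical smart-bed records; field names are illustrative only.
records = [
    {"name": "Jane Doe", "email": "jane@example.com",
     "device_id": "bed-001", "avg_heart_rate": 62, "hours_slept": 7.4},
]

DIRECT_IDENTIFIERS = {"name", "email"}

def pseudonymize(value: str, salt: str = "rotate-this-salt") -> str:
    """One-way hash so rows can still be linked over time without naming anyone."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def de_identify(record: dict) -> dict:
    # Drop direct identifiers outright.
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Replace the device ID with a pseudonymous key for longitudinal analysis.
    clean["subject_key"] = pseudonymize(clean.pop("device_id"))
    return clean

print([de_identify(r) for r in records])
```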

4. Examine Outcomes

How might your data be used — intentionally or not — for other purposes? It’s important to consider who your data could benefit or harm if it leaves the confines of your business.

For example, if the data becomes part of an AI training set, what overall messages does it send? Does it contain any inherent biases against certain groups of people or reinforce negative stereotypes? Long after you gather data, you must continually track where it goes and its effects on the world at large.

Prioritizing Ethics and Privacy

Unethical data collection has a long history, and IoT plays a huge role in the continued debate about privacy ethics. IoT devices that occupy the most intimate of spaces — the smart coffee maker that knows you’re not a morning person, the quietly humming, ever-vigilant baby monitor — give the most pause when it comes to data collection, making people wonder if it’s all worth it.

Manufacturers of smart devices are responsible for protecting their customers’ privacy, but they also have strong incentives to collect as much useful data as possible, so IoT users should proceed with caution. It’s still a wild west for digital ethics and privacy laws. At the end of the day, only you can decide whether to unwind with a smart TV that might be watching you back — after all, to marketing companies, you are the most interesting content.


Zac Amos

Zac is the Features Editor at ReHack, where he covers tech trends ranging from cybersecurity to IoT and anything in between.


Fintech Kennek raises $12.5M seed round to digitize lending



London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.

Featured Image Credit: Photo from Kennek.io; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.


Fortune 500’s race for generative AI breakthroughs


Deanna Ritchie


As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one- to two-year timeframe before implementing their initial solutions.

This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company is focusing on experimentation and setting high standards before adopting the technology. Argenti acknowledged the appetite for results in areas like developer and operational efficiency, but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on their software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.


UK seizes web3 opportunity simplifying crypto regulations


Deanna Ritchie


As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.
