
Politics

Judge Restricts Biden Officials from Contacting Tech Companies



The Biden administration’s efforts to combat misinformation and regulate content on social media platforms were dealt a significant blow when a federal judge in Louisiana issued a ruling restricting government officials from communicating or meeting with tech giants such as Facebook, YouTube, and Twitter.

Judge Terry A. Doughty of the United States District Court for the Western District of Louisiana ruled that federal agencies such as the FBI and the Department of Health and Human Services may not report content and accounts to social media platforms. The judge did, however, carve out exceptions for content involving illegal activity or threats to national security.

The decision stems from a legal challenge brought by Republican attorneys general in Louisiana and Missouri. Their lawsuit alleged that government officials conspired with social media platforms to suppress conservative voices and viewpoints under the guise of preventing the spread of false information, and that posts on a wide range of subjects, including the COVID-19 pandemic and Hunter Biden’s laptop, were unfairly targeted for removal.

In their court filings, the attorneys general went as far as to assert that the actions in question represented “the most egregious violations of the First Amendment in the history of the United States of America.” In his injunction, the judge found that the evidence presented by Louisiana and Missouri demonstrated a “massive effort” by federal agencies, extending up to the White House, to suppress speech based on its content.

Administration officials have defended their actions, saying they sought to reduce COVID-related deaths by combating harmful misinformation and to alert social media companies to illegal activities such as human trafficking and terrorism. They argued that the platforms themselves have a responsibility to consider the effect they have on Americans and to make their own decisions about the information they present to users.

According to a White House representative, the Justice Department is reviewing the injunction to determine its options. Considerable uncertainty remains about what the ruling means for the government’s efforts to combat the spread of false information and to regulate social media platforms.

Judge Doughty’s ruling prohibits government officials from contacting technology companies about content moderation, but it is not a final decision on the matter. The injunction, first brought to public attention by The Washington Post, highlights the ongoing legal battle between conservative plaintiffs and the Biden administration over the regulation of social media platforms.

Under the injunction, government agencies such as the FBI and the Department of Health and Human Services are barred from reporting potentially harmful posts and accounts to social media companies, with exceptions for illegal activity and threats to national security.

The decision could have significant repercussions for social media platforms such as Facebook and Twitter and for their parent companies. Meta, the parent company of Facebook and Instagram, declined to comment on the judge’s decision. Twitter’s press office returned only its automated poop-emoji reply, and Google, also named in the lawsuit, could not be reached for comment.

It remains to be seen how social media platforms will navigate the legal landscape as they continue to face pressure from both government entities and users with differing views on how content moderation should be handled.

The decision has pushed the long-running debate over how content should be moderated on social media platforms back to the forefront. Although there is broad consensus that harmful content, false information, and illegal activity need to be combatted, opinions vary widely on how that should be accomplished and who should be responsible for it.

The Biden administration, like administrations before it, holds that social media platforms have a responsibility to take preventative measures to protect the public’s health, safety, and security. Others argue that such efforts can violate individuals’ rights to free speech and unfairly target specific voices and points of view.

As the legal battle plays out and additional decisions are handed down by the courts, the rules around content moderation will likely continue to evolve. Balancing the need to combat harmful content against the protection of free speech and the maintenance of a diverse range of voices on social media platforms remains a difficult and contentious challenge.

First reported by USA Today

John Boitnott

John Boitnott is a news anchor at ReadWrite. Boitnott has worked as a TV news anchor and at print, radio, and Internet companies for 25 years. He’s an advisor at StartupGrind and has written for Business Insider, Fortune, NBC, Fast Company, Inc., Entrepreneur, and VentureBeat. You can see his latest work on his blog, John Boitnott.

Politics

Fintech Kennek raises $12.5M seed round to digitize lending



London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.

Featured Image Credit: Photo from Kennek.io; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.


Politics

Fortune 500’s race for generative AI breakthroughs


Deanna Ritchie


As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one- to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on their software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.


Politics

UK seizes web3 opportunity simplifying crypto regulations


Deanna Ritchie


As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the conservative think tank Policy Exchange argues that the United Kingdom should simplify its cryptocurrency regulations to attract these businesses. Its recent report details ten suggestions for improving Web3 regulation in the country, among them reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

The report contends that by streamlining cryptocurrency regulations and adopting forward-looking approaches, the government can make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the growth of the UK’s digital economy. A clear and straightforward regulatory landscape would create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.

