The Rise of Photonics in Computing: Next Big Thing for DeepTech VCs
What if there were no lag in anything you clicked on your PC, laptop, phone, or smartwatch? Latency so low that the human brain would not even notice it? That is the promise of an optical computer built on photonic technology: information flowing and being processed at the speed of light, right at your fingertips.

The system you are using right now relies on electrical signals to perform calculations; an optical computer uses light instead, which makes it faster, more efficient, and more compact. Could it be the next big thing for DeepTech VCs? 8X Ventures conducted research to trace the evolution of computing hardware and where it is heading. Let's find out through the prism of computing history.

The first electronic computer

The Electronic Numerical Integrator And Computer (ENIAC), built from vacuum tubes during World War II in the 1940s, was the first electronic computer. It might sound like a real estate hazard if I told you how big it was: roughly the size of a room, measuring 8 ft high, 3 ft deep, and 100 ft long. Don't listen to people complaining about the ergonomics of a smartwatch on their wrist. What we have achieved in the 80 years since is nothing short of magic.

Though it was our first electronic computer, and its first task was calculations for the design of a hydrogen bomb, it was still highly inefficient, slow, and enormous. In the decades that followed, we needed technology that could overcome those limitations, and that is when electronic computers based on transistor technology came into being. Let's talk about them. But first, if you are a history buff, a tech buff, or both: a portion of that giant machine is on exhibit at the Smithsonian Institution in Washington, D.C.

The age of binary computation

Transistors, typically made of silicon, enable the logic of present-day computing systems. Millions of microscopic transistors on a chip switch electrical current on and off, implementing the binary system of 0s and 1s, the language our computers understand. ENIAC was definitely an odd bird: it relied on a 10-digit decimal system. It was the invention of the transistor that changed the game.
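
To see how simple on/off switches add up to computation, here is a minimal Python sketch (purely illustrative; real chips are not programmed this way) that builds a one-bit adder out of nothing but NAND logic, the kind of gate a handful of transistors implements:

```python
# Illustrative sketch: transistors act as on/off switches; a few of them
# form a NAND gate, and NAND gates alone suffice to build any binary logic.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def full_adder(a: int, b: int, carry_in: int) -> tuple:
    """Add three bits, returning (sum, carry_out), using only NAND gates."""
    s1 = xor(a, b)
    total = xor(s1, carry_in)
    carry = nand(nand(a, b), nand(s1, carry_in))  # (a AND b) OR (s1 AND cin)
    return total, carry

print(full_adder(1, 1, 0))  # (0, 1): 1 + 1 = binary 10
```

Every binary operation a computer performs can, in principle, be composed this way; the chip simply does it with physical switches, billions of times per second.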

Compared to ENIAC, transistors are extremely compact, fast, and efficient. They hold up the backbone of our digital economy: the data centers. Yet a report by the International Energy Agency (IEA) suggests that data centers and data transmission networks each account for 1-1.5% of global electricity use. The whole internet infrastructure? Closer to 10%.

Most data centers use almost as much power for non-computing functions (chiefly cooling and power delivery) as they do to run their servers. Even though transistors are exponentially better than vacuum-tube processors and are continually improving, the growth curve of this technology is about to saturate. Moore's law is not dead, but it is decaying.

At some point, the transistors on a chip will approach the size of individual atoms, and it will no longer be possible to keep shrinking them and increasing their density. Scaling would become almost impossible, power consumption would rise, and chips would no longer get cheaper. We simply won't be able to pack more silicon transistors onto a chip the way we have for the last few decades.

What do we do then? We look to emerging technologies such as quantum computing, neuromorphic chips, photonics, and new materials. The progress in optical computing makes it the leading contender to revolutionize hardware computing once again. Let's find out how.

Optical computing: The power of God

A typical computer is the result of three things it does really well: compute + communicate + store. In an electronic computer, these functions are performed by manipulating current with transistors, capacitors, resistors, and other components. In a photonic computer, light is manipulated using photodetectors, phase modulators, waveguides, and more. These are the building blocks of electronic and optical computing, respectively.

Unlike electronic computing, which works by manipulating electrons, photonic computing relies on the properties of photons. The idea is that light can perform many of the same functions as an electrical current in a computer: performing calculations, storing and retrieving data, and communicating with other devices.
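
To make that concrete, here is a simplified sketch (a toy under stated assumptions, not a model of any specific device): several photonic processors encode a vector of numbers in the amplitudes of light signals and let interference in a mesh of beam splitters and phase shifters apply a matrix to that vector as the light passes through. Since a lossless optical mesh implements a unitary matrix, we can mimic the computation in NumPy:

```python
# Simplified sketch (toy assumptions, not a real device model): encode a
# vector in optical amplitudes and let a lossless interferometer mesh apply
# a unitary matrix to it as the light propagates through.
import numpy as np

rng = np.random.default_rng(seed=0)

# A lossless mesh of beam splitters and phase shifters implements some
# unitary matrix U; here a random unitary built via QR stands in for it.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

x = np.array([1.0, 0.5, 0.0, 0.25], dtype=complex)  # input light amplitudes

y = U @ x  # the "computation" the mesh performs passively

# Photodetectors at the outputs can only measure intensity = |amplitude|^2.
print(np.abs(y) ** 2)
```

The matrix multiplication happens as the light propagates; only the final intensity measurement costs significant energy, which is one reason photonic accelerators target linear-algebra-heavy workloads such as neural network inference.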

So, how is it better if it performs the same functions and speaks the same binary language?

Higher bandwidth:

The wave properties of light allow parallelism: multiple wavelengths can carry independent data streams through the same waveguide at once, so optical systems can pack more information into a channel and achieve higher bandwidth. This also makes optical computers more compact and lets them process far more complex data.
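
A toy simulation of one such parallelism trick, wavelength-division multiplexing, where independent bit streams ride on different carrier frequencies through the same channel (all frequencies and rates below are made-up, scaled-down values):

```python
# Toy sketch of wavelength-division multiplexing: three independent bits
# ride on three carrier frequencies through one shared "waveguide" signal
# and are separated again by frequency filtering. Values are illustrative.
import numpy as np

fs = 10_000                      # samples per second (toy scale)
t = np.arange(0, 1.0, 1 / fs)
carriers = [500, 1500, 2500]     # three "wavelength" channels (Hz, toy)
bits = [1, 0, 1]                 # one bit per channel (on/off keying)

# Multiplex: each lit channel adds its carrier to the shared signal.
signal = sum(b * np.sin(2 * np.pi * f * t) for f, b in zip(carriers, bits))

# Demultiplex: check the power near each carrier frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in carriers:
    power = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"channel {f} Hz -> bit {int(power > len(t) / 4)}")
```

All three channels travel simultaneously through one medium; the same idea, at optical frequencies, is what lets a single fiber or waveguide carry many data streams in parallel.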

Highly efficient:

Light is also less prone to transmission losses than electrical current, so it does not generate the same level of heat as electronic computing, which makes optical computing highly energy efficient. Optical signals are immune to electromagnetic interference as well, and there are no electrical short circuits to worry about.

Faster processing:

Even under perfect conditions, an electrical signal travels at only 50-95% of the speed of light. That gives optical computers a speed edge over the prevalent ones.
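
A back-of-the-envelope comparison of propagation delay alone (the 0.6c figure below is an assumed, typical value for electrical signaling in copper; in practice, optical computing's advantages come as much from bandwidth and energy as from raw signal speed):

```python
# Back-of-the-envelope latency comparison with illustrative, assumed numbers:
# signal propagation time across a 30 cm path, electrical vs. optical.
C = 299_792_458            # speed of light in vacuum, m/s

distance = 0.30            # meters
electrical_fraction = 0.6  # electrical signals often propagate at ~0.5-0.7c

t_optical = distance / C
t_electrical = distance / (electrical_fraction * C)

print(f"optical:    {t_optical * 1e12:.1f} ps")   # ~1000 ps
print(f"electrical: {t_electrical * 1e12:.1f} ps") # ~1667 ps
print(f"speedup:    {t_electrical / t_optical:.2f}x")
```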

How far are we?

The image at the top of this article depicts the final destination of an optical computer: a crystal slab with no screen, using holographic projection in the air for input and output. It will take decades to get there; for now, that is a moonshot.

But applications that are realizable today can be found in near-edge computing and data centers. With near-edge computing, a 5G-enabled IoT device at a retail store could compute and store a portion of the data it generates on the spot, instead of transferring all of the raw data to a faraway data center. The result: lower latency and lower transmission losses.
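
A hypothetical sketch of that pattern (the function and field names are illustrative, not any vendor's API): the device reduces a batch of raw sensor samples to a compact summary and ships only the summary upstream:

```python
# Hypothetical near-edge pattern: summarize raw readings locally and send
# only the small summary to the data center. Names are illustrative.
from statistics import mean

def summarize_readings(raw_readings: list) -> dict:
    """Reduce a batch of raw sensor samples to a small summary record."""
    return {
        "count": len(raw_readings),
        "mean": mean(raw_readings),
        "min": min(raw_readings),
        "max": max(raw_readings),
    }

raw = [21.4, 21.6, 21.5, 23.9, 21.4]   # e.g., one minute of sensor data
summary = summarize_readings(raw)
print(summary)                          # only this leaves the edge device
```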

Lightspeed Photonics, a deep-tech startup in Singapore, is building next-generation optical interconnects that beam data directly into chips with lasers (no cables!) and integrate computing chips for high-bandwidth data processing at low power. This is a far more attainable example of optical computing than the technology's full potential.

Conclusion

Optical computing will not completely replace electronic computing in the next few decades, but integrating the two is already heavily augmenting it, removing the roadblocks we face today. The progress of this integration gives DeepTech investors an early opportunity to back the startups working to transform the computing industry.

Featured Image Credit: Created with DALL·E 2 – OpenAI; Provided by the Author; Thank you!

Chirag Gupta

Managing Partner, 8X Ventures

Chirag is the Managing Partner at 8X Ventures, a deep-tech-focused venture capital firm. He invests in and enables startups that will lead disruptive innovation and create sustainable value. His focus regions are India, Singapore, and the UK.


Fintech Kennek raises $12.5M seed round to digitize lending



London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.

Featured Image Credit: Photo from Kennek.io; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.



Fortune 500’s race for generative AI breakthroughs



As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on their software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.



UK seizes Web3 opportunity by simplifying crypto regulations



As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.

