Multi-Cloud Cost Optimization


The popularity of, and confidence in, cloud computing platforms continue to grow unabated. More and more businesses are moving mission-critical workloads to public clouds. Forbes recently projected that by 2021, 32% of IT budgets will be spent on public cloud platforms, and points out that cloud spending has grown 59% on average since 2018.

The recent trend toward multi-cloud optimization will continue, elevating the importance of a multi-cloud strategy.

The elasticity of cloud platforms offers great potential from an engineering perspective but poses great challenges from a cost-containment perspective. Engineering teams accustomed to traditional on-premises infrastructure are not used to considering cost in a pay-as-you-go environment. When migrating from limited on-premises hardware to the comparatively infinite expanse and variety of the cloud, cost containment, tracking, and optimization must all be considered.

Cost discipline, by necessity, becomes part of engineering awareness and vigilance — a requirement for businesses looking to exploit the new paradigm.

The Multi-Cloud Way

Many businesses already have a presence on multiple cloud platforms, whether by strategy or, more likely, through organic growth. The benefits of cloud technology include freedom from reliance on a single provider, agility, scalability, high availability, SaaS services, and PaaS platforms. These higher-quality services, along with the pay-as-you-go billing model, are very attractive.

Controlling the associated costs requires a well thought out multi-cloud strategy.

A multi-cloud cost strategy considers workload placement according to several factors:

  • Workload/platform optimization. Does the application use enough platform-specific features to justify placement there? Conversely, does the availability zone provide the features the workload needs? How can inter-region bandwidth charges be balanced against fixed availability zone costs in a distributed deployment?
  • Performance. Can the workload be placed on a platform, region, or server class with lower overall performance without impact? Workloads that tolerate lower average performance can benefit from right-sizing the compute environment. The same applies to storage: can the workload tolerate lower-performance tiers, or even object storage, to lower costs?
  • Availability. Are some workloads tolerant of low (or at least not high) availability? Can they be placed on excess cloud capacity when it is available? Most cloud platforms offer far cheaper preemptible instances for workloads that can tolerate interruption (e.g., ETL/batch jobs that can snapshot progress); see the sketch after this list.
  • Serverless. Does the workload require a dedicated server? As with shopping for excess capacity, serverless offerings can cut costs by eliminating an always-running server and billing only for resources actually consumed, at a highly granular level.
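To make the preemptible-capacity option concrete, here is a minimal sketch, assuming AWS and boto3, that launches a one-time spot instance for an interruption-tolerant batch job. The AMI ID, instance type, region, and tag values are hypothetical placeholders, not recommendations:

```python
# Minimal sketch: requesting interruptible (spot) capacity on AWS with boto3,
# assuming credentials and a suitable AMI/subnet already exist.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch on the spot market; placeholders stand in for a batch/ETL workload
# that checkpoints progress and tolerates termination.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
    # Tag at launch so the cost baseline (discussed below) can attribute spend.
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "team", "Value": "etl"},
                 {"Key": "project", "Value": "nightly-batch"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```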

Hybrid cloud strategies can also have an important impact on cost. Hybrid cloud, which combines on-premises capacity with public cloud resources, should be considered when excess on-premises capacity exists, or where public cloud offerings aren't cost-competitive.

For many businesses, compliance requirements will make a hybrid approach necessary. For others, hybrid cloud deployments are simply the result of a phased migration of workloads to the cloud, which may take many months or years.

The basic promise of the public cloud, the efficient on-demand consumption of resources as an operational expense rather than a large capital expense plus operational expense, isn't guaranteed to make sense under all circumstances.

Cloud Cost Assessment

If some workloads are already running on the public cloud, the first step is quantifying the costs of existing workloads and services over time as a baseline. Quantifying the cost baseline is key to getting a detailed profile of consumption and waste, beyond a simple aggregation of spending. Once this baseline is established, it can serve as a starting point for identifying problem areas and building an understanding of how cost relates to system usage.

For cost control, it is critical to correlate current costs to internal teams or projects, both to enable accountability and to identify the low-hanging fruit. This correlation can be very difficult without a general policy requiring teams that deploy cloud workloads to assign tags/labels to their cloud instances.

One of the benefits of a high level of cloud automation is the ability to tag workloads transparently, so that cost traceability is achieved consistently. The benefits of cloud workload orchestration in the context of day-to-day operations (CI/CD processes) are discussed later.

Cloud providers offer tools that can assist with cost analysis. For example, AWS has its Cost Explorer and its Cost and Usage Report. These are particularly useful in combination with AWS cost allocation tagging.
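As a minimal sketch of what programmatic baselining can look like, assuming boto3 credentials and an activated cost allocation tag named "team" (a hypothetical tag key), the Cost Explorer API can group spend by that tag:

```python
# Minimal sketch: pull one month of spend from the AWS Cost Explorer API,
# grouped by a hypothetical "team" cost allocation tag.
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

# Print per-team spend; untagged resources show up under an empty tag value,
# which is itself useful for auditing tag-policy compliance.
for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        cost = group["Metrics"]["UnblendedCost"]
        print(group["Keys"][0], cost["Amount"], cost["Unit"])
```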

Azure offers Cost Management from the Azure console, which can provide detailed reports. Azure likewise uses resource tagging to associate cloud resources with accounts (and other indicators, such as projects).

Google Cloud has a similar service. In addition to the native tools, cloud management platform vendors such as Flexera, Cloudbolt, CloudApp, and others provide cost analysis tools that work across multiple cloud platforms.

Cloud Cost Control

It is critical to make teams that use cloud resources aware of the cost behavior of their workloads, so the impact of design and operational decisions can be understood in context. Teams may be consuming large compute instances, retaining unneeded logs or other data in cloud storage, or failing to tear down idle resources.

Even with all the benefits of a multi-cloud strategy, the tracking and forecasting associated with the operation of workloads hosted on multiple cloud platforms is a challenge. Add to that the unpredictability of workload scale, one of the major benefits of cloud architectures, and the complexity can become overwhelming.

A strategy for dealing with cost control is needed, potentially along with controls that can overlap with modern DevOps practices.

A casual survey of cloud billing models may leave the impression that they are all the same, but actual costs can be highly workload-dependent. Use the baseline measurement to identify cost hot spots, then compare public cloud billing models to identify significant savings.
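A toy illustration of why the comparison must be workload-dependent, using entirely made-up rates: for an interruption-tolerant job, a deep preemptible discount can absorb substantial re-run overhead and still win decisively.

```python
# Hypothetical arithmetic only; all rates and percentages are placeholders.
on_demand_rate = 0.096   # $/hour, placeholder on-demand price
spot_rate = 0.029        # $/hour, placeholder preemptible price (~70% off)
rerun_overhead = 0.15    # 15% extra runtime lost to interruptions, placeholder

hours = 1_000
on_demand_cost = on_demand_rate * hours
spot_cost = spot_rate * hours * (1 + rerun_overhead)

print(f"on-demand: ${on_demand_cost:.2f}")  # on-demand: $96.00
print(f"spot:      ${spot_cost:.2f}")       # spot:      $33.35
```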

The complexity and effort required to migrate and maintain services on multiple cloud platforms is significant and demands a correspondingly significant benefit. The costs and benefits are highly workload-dependent. Because of this dependency, any multi-cloud strategy will benefit from a multi-cloud orchestration layer.

The orchestration layer provides a degree of portability and makes it easier to exploit new cloud providers and shifting cost advantages. In addition, discounts offered by cloud providers can yield significant savings for organizations.

Flexera reports that fewer than half of customers, and in some cases far fewer, exploit cloud discounts such as AWS spot instances, Azure low-priority instances, and Google's ad hoc negotiated discounts.

Besides operational automation, the adoption of a multi-cloud orchestrator that integrates with modern DevOps practices can provide cost containment benefits.

An orchestrator with a declarative “infrastructure as code” approach makes templates a reviewable part of the release process. Cost containment policies can be applied to the template during review to effectively deny the deployment of problematic workloads. Labels or tags are then applied automatically for cost tracking.

For example, the attempted use of inappropriate instance types can be denied far in advance of any damage being done. Furthermore, a competent orchestrator will be capable of applying user/group-specific or even time-specific barriers to workload deployment.
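Here is a minimal sketch of such a review-time gate, assuming Terraform's JSON plan output (produced by `terraform show -json`). The allowed instance types and required tag keys are hypothetical policy choices; a real orchestrator would more likely delegate this to a policy engine such as Open Policy Agent:

```python
# Illustrative pre-deployment policy check over a Terraform JSON plan.
import json
import sys

ALLOWED_INSTANCE_TYPES = {"t3.micro", "t3.small", "m5.large"}  # hypothetical policy
REQUIRED_TAGS = {"team", "project"}                            # for cost tracking

def check_plan(plan: dict) -> list[str]:
    """Return a list of policy violations found in the plan."""
    violations = []
    for res in plan.get("resource_changes", []):
        after = (res.get("change") or {}).get("after") or {}
        itype = after.get("instance_type")
        if itype and itype not in ALLOWED_INSTANCE_TYPES:
            violations.append(f"{res['address']}: instance type {itype} not allowed")
        missing = REQUIRED_TAGS - (after.get("tags") or {}).keys()
        if missing:
            violations.append(f"{res['address']}: missing cost tags {sorted(missing)}")
    return violations

if __name__ == "__main__":
    with open(sys.argv[1]) as f:               # e.g., `terraform show -json plan.out`
        problems = check_plan(json.load(f))
    for p in problems:
        print("DENY:", p)
    sys.exit(1 if problems else 0)              # non-zero exit blocks the pipeline
```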

In addition, an orchestrator can limit scaling behavior and ensure that complex deployments are completely torn down when retired. Complete cleanup is critical to avoiding zombie cost sources such as abandoned, unattached storage.
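On AWS, for instance, a minimal boto3 sketch can surface unattached EBS volumes, one common zombie cost source; the "team" tag lookup assumes the hypothetical tagging convention from earlier:

```python
# Minimal sketch, assuming AWS and boto3: list unattached EBS volumes.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Volumes in the "available" state are provisioned (and billed) but not
# attached to any instance -- typical leftovers of incomplete teardowns.
paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
):
    for vol in page["Volumes"]:
        tags = {t["Key"]: t["Value"] for t in vol.get("Tags", [])}
        print(vol["VolumeId"], vol["Size"], "GiB", tags.get("team", "untagged"))
```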

Summary

The journey to an optimal, cost-efficient multi/hybrid cloud strategy is a complex one. It starts with understanding current costs, including those of on-premises workloads; that understanding is the foundation for advancement and growth, and it reveals which platforms provide the tools you require.

Automation will play a key role in standardizing and controlling the approved interactions and workload placement on various platforms and provide a degree of workload portability.

Portability is key because the world of cloud providers never stands still — and cloud billing models vary over time — requiring adaptability.

Finally, besides ongoing cost auditing, a practice of manual and automated review of orchestration templates must be in place to avoid unpleasant billing surprises.

Fintech Kennek raises $12.5M seed round to digitize lending


London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.

Featured Image Credit: Photo from Kennek.io; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.

Fortune 500’s race for generative AI breakthroughs


As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on their software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has over 20 years of experience in content management and content development.

UK seizes web3 opportunity simplifying crypto regulations


As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has over 20 years of experience in content management and content development.
