How Companies Can Achieve Cloud Cost Optimization and Get the Most out of It



Today, companies are actively using cloud technologies because they help cut costs and increase profits. However, like any other resource, the cloud has nuances you need to understand to get the most out of it for your business. We will explain how to achieve cloud cost optimization and what factors should be taken into account to maximize the benefits of working with the cloud.

How cloud technologies save companies money

Companies that run their own data centers spend about 75% of their IT budgets on purchasing and upgrading equipment, updating licenses, maintenance and support, and similar procedures. For fast-growing businesses, buying new equipment can be too bulky, expensive, and inconvenient.

Let us say there are one hundred employees in your office who need access to a certain application, so you have to buy one hundred named user licenses. You will also need to acquire and deploy the hardware infrastructure for those hundred users and train your IT staff to install, maintain, and troubleshoot the application.

Cloud saves on hardware and software

When using a cloud application, there is no need to buy any hardware or software. If the staff is expected to grow, purchasing a subscription for additional users is enough. Thus, the costs of cloud computing are fully aligned with the level of its use. Beyond the monthly fee, costs arise only for architecture changes or for setting up the cloud infrastructure when moving to the cloud. The costs of repairing or replacing equipment are shifted onto the vendors.

Transition to cloud storage frees up space and reduces energy and equipment repair costs

Along with that, large data centers take up a considerable portion of office space and generate a lot of heat. The transition to cloud storage frees up space and reduces energy costs. In addition, you don’t need to keep a large team to maintain the cloud – any DevOps specialist can handle the job. If servers or other equipment need to be repaired, the cloud provider solves the issue. Thus, the transition to the cloud reduces the costs of equipment repair and additional personnel.

Cloud computing also has an impact on business profitability. Deployment of cloud-based software is much faster than a traditional installation. For example, the German corporation Daimler AG has transferred its business system to the Azure cloud.

Thanks to this, the company was able to launch a large-scale project within 12 weeks instead of the 12 months that would’ve been required if working on the old model. As a result, hardware costs were reduced by 40%, and NPS (Net Promoter Score) management costs were reduced by 50%.

Another example is General Electric, which has transferred its infrastructure to the cloud. This has made it possible to reduce the number of data processing centers from 34 to 2, optimize data transfer up to 500,000 records per second, and save millions of dollars.

Most cloud storage services are accessible over the Internet from anywhere, so employees can work from the office or from home.

By transferring workloads from on-premises systems to the cloud, enterprises free their staff from labor-intensive and time-consuming operations. Instead, engineers can focus on more valuable activities: developing applications, fixing defects, exploring innovations, and so on. Thus, moving to the cloud increases companies’ agility and accelerates technology adoption.

How to optimize ownership of cloud infrastructures

Here are ten steps you can take to reduce the costs associated with your cloud infrastructure and get the most out of your transition to the cloud.

Step 1. Optimize the volume of cloud infrastructure from the start.

Before searching for a cloud provider, it is recommended to decide on the minimum performance criteria. The most common mistake occurs at this stage: a company copies the performance parameters of its on-premises infrastructure and applies them to the cloud.

Before migrating, you need to assess the functional capabilities of your business and accurately size workloads to match their actual performance requirements. For example, the right storage size, chosen according to your data type and usage, can reduce associated costs by up to 50%.

For example, AWS offers over 300 different types of instances, each suitable for certain workloads.

Choosing the best instance is challenging even for experienced cloud architects. Picking the wrong instance family or size results in oversized instances. As a result, developers deploy computing resources in the cloud, forget about them, and leave them idle.
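
As a rough illustration of how over-provisioning can be spotted programmatically, here is a minimal Python sketch using the boto3 SDK and AWS Compute Optimizer (assuming the service has been enabled on the account); the script only reports findings rather than resizing anything, and the printed fields are illustrative.

```python
# Sketch: list EC2 right-sizing recommendations from AWS Compute Optimizer.
# Assumes boto3 is installed, credentials are configured, and Compute
# Optimizer has been opted into for the account.
import boto3

optimizer = boto3.client("compute-optimizer")

resp = optimizer.get_ec2_instance_recommendations()
for rec in resp["instanceRecommendations"]:
    current = rec["currentInstanceType"]
    finding = rec["finding"]                      # e.g. OVER_PROVISIONED
    options = rec.get("recommendationOptions", [])
    suggested = options[0]["instanceType"] if options else "n/a"
    print(f"{rec['instanceArn']}: {current} is {finding}, consider {suggested}")
```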

Step 2. Choose a provider and a suitable offer.

After deciding on the production capacity, it is necessary to choose an offer whose price and requirements work for you. For example, AWS has several pricing options, including spot instances that allow you to request spare computing capacity at up to 90% off the on-demand price.

If you plan to expand and move to PaaS or SaaS in the future, you can start with Microsoft Azure IaaS. According to Flexera, Amazon Web Services and Microsoft Azure were the most popular providers among corporate enterprises in 2020.
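
For illustration, here is a minimal boto3 sketch of requesting spare EC2 capacity as a Spot Instance; the AMI ID, instance type, key pair, and maximum price are placeholders, not recommended values.

```python
# Sketch: request spare EC2 capacity as a one-time Spot Instance via boto3.
# Substitute an AMI, instance type, and max price that match your account.
import boto3

ec2 = boto3.client("ec2")

response = ec2.request_spot_instances(
    InstanceCount=1,
    SpotPrice="0.05",                        # max price in USD/hour (placeholder)
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
        "InstanceType": "m5.large",
        "KeyName": "my-key-pair",            # placeholder key pair
    },
)
for req in response["SpotInstanceRequests"]:
    print("Spot request submitted:", req["SpotInstanceRequestId"], req["State"])
```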

Step 3. Use long-term subscription.

Public cloud providers – whether Amazon, Azure, Google Cloud, or others – have built-in mechanisms that reduce cloud costs in exchange for long-term use of a resource.

For example, if you subscribe for a year, you can get a discount of up to 40%. If your subscription term is three or more years, you can reserve capacity with a discount of up to 75%. With the right workloads and auto-scaling, AWS customers have been able to save up to 36%.
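
To make the trade-off concrete, here is a back-of-the-envelope Python calculation based on the discount levels quoted above; the hourly rate and instance count are hypothetical inputs, not real AWS prices.

```python
# Sketch: rough annual savings from committing to reserved capacity,
# using the discount levels quoted above. The on-demand rate is a
# made-up example, not a real AWS price.
HOURS_PER_YEAR = 24 * 365
on_demand_rate = 0.10            # USD per instance-hour (hypothetical)
instances = 20

on_demand_cost = on_demand_rate * HOURS_PER_YEAR * instances
one_year_cost = on_demand_cost * (1 - 0.40)    # up to ~40% off for a 1-year term
three_year_cost = on_demand_cost * (1 - 0.75)  # up to ~75% off for a 3-year term

print(f"On-demand:       ${on_demand_cost:,.0f}/year")
print(f"1-year reserved: ${one_year_cost:,.0f}/year (saves ${on_demand_cost - one_year_cost:,.0f})")
print(f"3-year reserved: ${three_year_cost:,.0f}/year (saves ${on_demand_cost - three_year_cost:,.0f})")
```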

Step 4. Change infrastructure design.

It is also recommended to optimize the infrastructure by replacing an unsuitable stack – for example, by moving to a serverless architecture in which individual application functions run on demand.

When a user needs to log into a page with a Google account, a function is triggered on the back end only at that moment. When the next person enters the application, the process repeats.

A recent survey by O’Reilly shows that 40% of organizations have adopted serverless architecture. This has allowed them to reduce costs, increase scalability and developer productivity, and improve other metrics.
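
To show what such an on-demand function might look like, here is a minimal AWS Lambda handler sketch in Python for the sign-in example above; the event shape and the verify_google_token helper are illustrative assumptions, not a production implementation.

```python
# Sketch: a minimal Lambda handler for the sign-in example above.
# It runs (and is billed) only when a login event arrives; the event
# shape and verify_google_token helper are illustrative assumptions.
import json

def verify_google_token(token):
    # Placeholder for real token verification against Google's OAuth endpoint.
    return {"email": "user@example.com"} if token else None

def lambda_handler(event, context):
    token = json.loads(event.get("body") or "{}").get("id_token")
    user = verify_google_token(token)
    if user is None:
        return {"statusCode": 401, "body": json.dumps({"error": "invalid token"})}
    return {"statusCode": 200, "body": json.dumps({"message": f"welcome {user['email']}"})}
```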

Step 5. Use monitoring tools.

Customers often configure computing or storage instances the wrong way – without auto-scaling or other monitoring tools. This is usually the case with development and test environments because of their temporary nature. Therefore, it makes sense to reach out to a DevOps development company or hire a cloud architect.

Optimization helps to maintain control over a constantly increasing volume of data coming from different sources. It is necessary to efficiently distribute workloads between spinning disks (HDDs) and flash storage (SSDs) to improve information storage and control.
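
As one example of such tooling, here is a hedged boto3 sketch that attaches a target-tracking policy to an existing Auto Scaling group so capacity follows CPU load; the group name and target value are placeholders.

```python
# Sketch: attach a target-tracking policy to an existing Auto Scaling group
# so capacity follows CPU load instead of sitting over-provisioned.
# The group name and target value are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="dev-web-asg",      # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,                 # keep average CPU around 50%
    },
)
```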

Step 6. Stay informed about new optimized cloud offers.

Cloud platforms like AWS constantly update service packages, offering technologies to optimize performance and reduce cloud costs. For example, to save money, a customer can replace their monitoring services with Amazon CloudWatch or their traditional computing instances with a serverless implementation.
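
As a small example of leaning on CloudWatch, here is a sketch of an alarm that flags an EC2 instance whose CPU stays very low for a day, a common sign it can be downsized or stopped; the instance ID, SNS topic, and thresholds are placeholders.

```python
# Sketch: a CloudWatch alarm that fires when an EC2 instance's average CPU
# stays below 5% for 24 hours. Instance ID, SNS topic, and thresholds are
# placeholders -- adapt them to your environment.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="idle-instance-i-0123456789abcdef0",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=3600,                 # one-hour datapoints
    EvaluationPeriods=24,        # evaluated over 24 hours
    Threshold=5.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:cost-alerts"],  # placeholder topic
)
```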

When considering new technologies, it is a good idea to focus on network solutions that provide maximum flexibility. In addition, it is important to identify which current investments will help minimize future costs and which will hinder the development and transformation of the business.

Step 7. Establish cost transparency.

An organization pays for the public cloud as and when it is used. For example, engineers regularly run virtual machines and containers, and data flows into and out of the cloud. Therefore, the monthly bill may either exceed the planned amount or not reach it.

A company shouldn’t have to pay for unused software, so it is worth determining the lower and upper limits of the budget allocated for cloud expenses and monitoring them in real time.
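
One way to enforce such limits is with AWS Budgets; below is a hedged boto3 sketch that creates a monthly cost budget and alerts when actual spending reaches 80% of the limit. The account ID, amount, and notification address are placeholders.

```python
# Sketch: create a monthly cost budget with an alert at 80% of the limit.
# The account ID, budget amount, and notification email are placeholders.
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",                    # placeholder account ID
    Budget={
        "BudgetName": "monthly-cloud-spend",
        "BudgetLimit": {"Amount": "10000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,               # percent of the budget limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops@example.com"},
            ],
        },
    ],
)
```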

In addition, you can cancel pay-as-you-go software at any time if it doesn’t work for you. Cloud solutions provide outstanding flexibility for companies that need top-tier products but don’t have large budgets.

Step 8. Optimize software licensing costs.

Whether on-premises or in the cloud, software license fees take up a significant portion of operating costs. Since they are difficult to manage in the cloud, organizations can end up paying for unused licenses. That’s why it is reasonable to use cloud services to estimate software costs, reveal unnecessary licenses, and remove them from your expenses.

Step 9. Suspend unused services.

AWS offers a tremendous range of cloud capabilities, including computing and storage resources. However, a company pays for them even when the services sit idle. Therefore, it is worth suspending services that are not being used to minimize the costs they generate.
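
For example, a scheduled script can stop instances that are tagged as temporary outside working hours; the sketch below assumes an Environment tag with dev/test values, which is only an illustrative convention.

```python
# Sketch: stop running EC2 instances tagged as temporary (e.g. dev/test
# environments) so they do not accrue compute charges overnight.
# The tag key and values are assumptions -- use your own tagging scheme.
import boto3

ec2 = boto3.client("ec2")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["dev", "test"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"] for res in reservations for inst in res["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopped:", ", ".join(instance_ids))
else:
    print("No matching running instances found.")
```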

Step 10. Automate Amazon EBS snapshot management with Data Lifecycle Manager.

Some clouds, like AWS, provide the ability to automatically or manually take point-in-time snapshots of EBS volumes. The snapshots can be stored in Amazon S3 and restored to a new EBS volume in any region.

One of the cloud cost management tools is Amazon Data Lifecycle Manager (Amazon DLM), which automates the creation, retention, and deletion of Amazon EBS (Elastic Block Store) snapshots. This approach eliminates the need for complex custom scripts to manage EBS snapshots, saving time and money. In addition, the use of Amazon DLM is free in all AWS regions.
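
Below is a hedged boto3 sketch of such a policy: it snapshots volumes tagged Backup=true every 24 hours and keeps only the last seven snapshots. The IAM role ARN and the tag are placeholders.

```python
# Sketch: a Data Lifecycle Manager policy that snapshots EBS volumes
# tagged Backup=true every 24 hours and keeps the last 7 snapshots.
# The IAM role ARN and tag are placeholders.
import boto3

dlm = boto3.client("dlm")

dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots, 7-day retention",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "true"}],
        "Schedules": [
            {
                "Name": "daily-snapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 7},
                "CopyTags": True,
            }
        ],
    },
)
```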

How to make the right cloud optimization decision

Not all companies make good use of computing and storage capabilities. When an organization moves its workloads and applications to the cloud, it needs experts specializing in access control, storage, networking, and monitoring. Before committing to a provider and an application, evaluate the following criteria:

  • Cloud reliability and accessibility – find out what service-level agreements (SLAs) the cloud provider offers and how they correlate with your internal SLAs;
  • Data center and quality of operations – compare the quality of operations in the cloud and in your on-premises data center; assess how the team is staffed and when operators are available;
  • Compatibility – consider how easily the software integrates with other applications;
  • Scalability – evaluate whether it is convenient to customize the application for the needs of your organization, taking into account future expansion;
  • Security and privacy – find out what security and privacy policies the cloud provider offers and how they correlate with your company’s policies;
  • Capacity – ensure that the cloud application makes it easier to manage processes, performance, and infrastructure.

To calculate cloud costs and determine the level of optimization, you can use a TCO (total cost of ownership) calculator. Similar services are available from AWS, Azure, and Google.

To make the necessary calculations, you only need to enter the project specifications: types of servers, configurations, number of virtual machines, and so on. In this way, you can get a quick comparison of cloud and on-premises systems.
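
To show the kind of comparison such a calculator automates, here is a minimal back-of-the-envelope sketch in Python; all figures are hypothetical inputs, not vendor prices.

```python
# Sketch: a back-of-the-envelope TCO comparison of the kind a cloud
# TCO calculator automates. All figures are hypothetical inputs.
servers = 10
years = 3

# Hypothetical on-premises costs
hardware_per_server = 6000            # purchase, one-off
power_cooling_per_server_year = 900
admin_per_server_year = 1200

on_prem_tco = servers * (
    hardware_per_server
    + years * (power_cooling_per_server_year + admin_per_server_year)
)

# Hypothetical cloud costs (pay-as-you-go equivalent instances)
instance_hour_rate = 0.12
hours_per_year = 24 * 365
cloud_tco = servers * instance_hour_rate * hours_per_year * years

print(f"On-premises TCO over {years} years: ${on_prem_tco:,.0f}")
print(f"Cloud TCO over {years} years:       ${cloud_tco:,.0f}")
```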

To make the right decision on increasing or decreasing computing resources, it is useful to apply stress testing.

When a company operates from a data center, an increase in computing resources requires additional investments, while a decrease leads to sunk costs. Cloud infrastructure, in turn, is based on such an operating model where an organization pays only for what it uses.

Conclusion

Competition is growing across industries, so a company needs to understand how efficiently it uses its resources.

Budget estimation is a process that requires attention, as incorrect calculations reduce productivity and become an obstacle to improving business performance. Cloud cost optimization is not supposed to be complicated, but it does require a special approach.

If your company doesn’t have these skills in-house, it is reasonable to use dedicated tools – for example, cloud cost management software.

Image Credit: aleksandar pasaric; pexels; thank you!

Artsiom Balabanau

My name is Artsiom Balabanau, and I have ten years of experience in the IT industry building a path from a Business Innovation Consultant to a Senior Manager. Currently, I work as a CIO at Andersen. Being a part of the IT family for years, I aim at transforming IT processes in support of business transformation.


Fintech Kennek raises $12.5M seed round to digitize lending



London-based fintech startup Kennek has raised $12.5 million in seed funding to expand its lending operating system.

According to an Oct. 10 tech.eu report, the round was led by HV Capital and included participation from Dutch Founders Fund, AlbionVC, FFVC, Plug & Play Ventures, and Syndicate One. Kennek offers software-as-a-service tools to help non-bank lenders streamline their operations using open banking, open finance, and payments.

The platform aims to automate time-consuming manual tasks and consolidate fragmented data to simplify lending. Xavier De Pauw, founder of Kennek, said:

“Until kennek, lenders had to devote countless hours to menial operational tasks and deal with jumbled and hard-coded data – which makes every other part of lending a headache. As former lenders ourselves, we lived and breathed these frustrations, and built kennek to make them a thing of the past.”

The company said the latest funding round was oversubscribed and closed quickly despite the challenging fundraising environment. The new capital will be used to expand Kennek’s engineering team and strengthen its market position in the UK while exploring expansion into other European markets. Barbod Namini, Partner at lead investor HV Capital, commented on the investment:

“Kennek has developed an ambitious and genuinely unique proposition which we think can be the foundation of the entire alternative lending space. […] It is a complicated market and a solution that brings together all information and stakeholders onto a single platform is highly compelling for both lenders & the ecosystem as a whole.”

The fintech lending space has grown rapidly in recent years, but many lenders still rely on legacy systems and manual processes that limit efficiency and scalability. Kennek aims to leverage open banking and data integration to provide lenders with a more streamlined, automated lending experience.

The seed funding will allow the London-based startup to continue developing its platform and expanding its team to meet demand from non-bank lenders looking to digitize operations. Kennek’s focus on the UK and Europe also comes amid rising adoption of open banking and open finance in the regions.

Featured Image Credit: Photo from Kennek.io; Thank you!

Radek Zielinski

Radek Zielinski is an experienced technology and financial journalist with a passion for cybersecurity and futurology.


Fortune 500’s race for generative AI breakthroughs


Deanna Ritchie


As excitement around generative AI grows, Fortune 500 companies, including Goldman Sachs, are carefully examining the possible applications of this technology. A recent survey of U.S. executives indicated that 60% believe generative AI will substantially impact their businesses in the long term. However, they anticipate a one to two-year timeframe before implementing their initial solutions. This optimism stems from the potential of generative AI to revolutionize various aspects of businesses, from enhancing customer experiences to optimizing internal processes. In the short term, companies will likely focus on pilot projects and experimentation, gradually integrating generative AI into their operations as they witness its positive influence on efficiency and profitability.

Goldman Sachs’ Cautious Approach to Implementing Generative AI

In a recent interview, Goldman Sachs CIO Marco Argenti revealed that the firm has not yet implemented any generative AI use cases. Instead, the company focuses on experimentation and setting high standards before adopting the technology. Argenti recognized the desire for outcomes in areas like developer and operational efficiency but emphasized ensuring precision before putting experimental AI use cases into production.

According to Argenti, striking the right balance between driving innovation and maintaining accuracy is crucial for successfully integrating generative AI within the firm. Goldman Sachs intends to continue exploring this emerging technology’s potential benefits and applications while diligently assessing risks to ensure it meets the company’s stringent quality standards.

One possible application for Goldman Sachs is in software development, where the company has observed a 20-40% productivity increase during its trials. The goal is for 1,000 developers to utilize generative AI tools by year’s end. However, Argenti emphasized that a well-defined expectation of return on investment is necessary before fully integrating generative AI into production.

To achieve this, the company plans to implement a systematic and strategic approach to adopting generative AI, ensuring that it complements and enhances the skills of its developers. Additionally, Goldman Sachs intends to evaluate the long-term impact of generative AI on their software development processes and the overall quality of the applications being developed.

Goldman Sachs’ approach to AI implementation goes beyond merely executing models. The firm has created a platform encompassing technical, legal, and compliance assessments to filter out improper content and keep track of all interactions. This comprehensive system ensures seamless integration of artificial intelligence in operations while adhering to regulatory standards and maintaining client confidentiality. Moreover, the platform continuously improves and adapts its algorithms, allowing Goldman Sachs to stay at the forefront of technology and offer its clients the most efficient and secure services.

Featured Image Credit: Photo by Google DeepMind; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.


UK seizes web3 opportunity simplifying crypto regulations


Deanna Ritchie


As Web3 companies increasingly consider leaving the United States due to regulatory ambiguity, the United Kingdom must simplify its cryptocurrency regulations to attract these businesses. The conservative think tank Policy Exchange recently released a report detailing ten suggestions for improving Web3 regulation in the country. Among the recommendations are reducing liability for token holders in decentralized autonomous organizations (DAOs) and encouraging the Financial Conduct Authority (FCA) to adopt alternative Know Your Customer (KYC) methodologies, such as digital identities and blockchain analytics tools. These suggestions aim to position the UK as a hub for Web3 innovation and attract blockchain-based businesses looking for a more conducive regulatory environment.

Streamlining Cryptocurrency Regulations for Innovation

To make it easier for emerging Web3 companies to navigate existing legal frameworks and contribute to the UK’s digital economy growth, the government must streamline cryptocurrency regulations and adopt forward-looking approaches. By making the regulatory landscape clear and straightforward, the UK can create an environment that fosters innovation, growth, and competitiveness in the global fintech industry.

The Policy Exchange report also recommends not weakening self-hosted wallets or treating proof-of-stake (PoS) services as financial services. This approach aims to protect the fundamental principles of decentralization and user autonomy while strongly emphasizing security and regulatory compliance. By doing so, the UK can nurture an environment that encourages innovation and the continued growth of blockchain technology.

Despite recent strict measures by UK authorities, such as His Majesty’s Treasury and the FCA, toward the digital assets sector, the proposed changes in the Policy Exchange report strive to make the UK a more attractive location for Web3 enterprises. By adopting these suggestions, the UK can demonstrate its commitment to fostering innovation in the rapidly evolving blockchain and cryptocurrency industries while ensuring a robust and transparent regulatory environment.

The ongoing uncertainty surrounding cryptocurrency regulations in various countries has prompted Web3 companies to explore alternative jurisdictions with more precise legal frameworks. As the United States grapples with regulatory ambiguity, the United Kingdom can position itself as a hub for Web3 innovation by simplifying and streamlining its cryptocurrency regulations.

Featured Image Credit: Photo by Jonathan Borba; Pexels; Thank you!

Deanna Ritchie

Managing Editor at ReadWrite

Deanna is the Managing Editor at ReadWrite. Previously she worked as the Editor in Chief for Startup Grind and has more than 20 years of experience in content management and content development.
