Syllabus
Cloud computing challenges: Economics of the cloud, cloud interoperability and standards, scalability and fault tolerance, energy efficiency in clouds, federated clouds, cloud computing security, fundamentals of computer security, cloud security architecture, cloud shared responsibility model, security in cloud deployment models.
Topic-wise textbooks/references
Cloud computing challenges:
4.1. Open Challenges - TB1-4.5
4.2. Economics of the Cloud - TB1-4.4
4.3. Cloud Interoperability and Standards - TB1-4.5.2
4.4. Scalability and Fault Tolerance - TB1-4.5.3
4.5. Energy Efficiency in Clouds - TB1-14.1
4.6. Federated Clouds - TB1-14.3
Cloud computing security:
4.7. Fundamentals of Computer Security - TB1-12.2
4.8. Cloud Security Architecture - TB1-12.3.4
4.9. Cloud Shared Responsibility Model - TB1-12.4
4.10. Security in Cloud Deployment Models - TB1-12.6
Abbreviations used in the Unit:
NIST - National Institute of Standards and Technology
UCSB - University of California, Santa Barbara
CRM - Customer Relationship Management
ERP - Enterprise Resource Planning = software that integrates business processes (finance, HR, manufacturing, supply chain, inventory) into a centralized system
OCCI - Open Cloud Computing Interface
CCIF- Cloud Computing Interoperability Forum
DMTF - Distributed Management Task Force = develops open standards that enable interoperability, manageability, and portability across hybrid and multi-cloud environments
Cloud Definition:
o NIST definition: on-demand self-service delivered through the service models SaaS, PaaS, and IaaS, with deployment models such as public cloud, private cloud, community cloud, and hybrid cloud.
o UCSB's ontology: Cloud has five layers: applications, software environments, software infrastructure, software kernel, and hardware.
o The definition and formalization of Cloud concepts (interoperability, security, scalability, fault tolerance, and organizational aspects) are still in their infancy.
o Cloud Definition remains an open challenge because the characterization used today is a 'working definition' that continuously changes over time as the phenomenon evolves.
Security, Trust and Privacy:
o Security, trust, and privacy are major obstacles for massive Cloud adoption; the massive use of virtualization creates new threats where applications hosted in the Cloud can process sensitive information that may be accessed without required permissions.
o A lack of control over data and processes poses problems for trust and privacy; when services are delivered through a complex stack involving third parties, identifying liability for violations becomes difficult.
Organizational aspects:
o Cloud computing introduces a new billing model requiring cultural and organizational process maturity, raising questions about the new role of IT departments, compliance, and legal implications for organizations.
o From an organizational point of view, when IT infrastructure moves to the Cloud, existing IT staff lose their reference point for IT troubleshooting and must develop different competencies, since fewer in-house technical skills are required.
Scalability and fault tolerance:
o These both are among the most important open challenges, as the Cloud middleware must be designed along dimensions of performance, size, and load to handle huge numbers of resources and users.
Cloud Interoperability and Standards:
o The interoperation between different Clouds and the creation of open standards remain unsolved, as the current state of Cloud standards resembles the early Internet era where each organization operated its own independent network.
The main drivers of Cloud computing are economy of scale and simplicity of software delivery and operation; the biggest financial benefit is the pay-as-you-go model offered by Cloud providers. Cloud computing allows enterprises to: (i) reduce capital costs associated with IT infrastructure, (ii) eliminate depreciation or lifetime costs of IT capital assets, (iii) replace software licensing with subscriptions, and (iv) cut maintenance and administrative costs of IT resources.
A capital cost is a one-time expense paid upfront that contributes over the long term to generate profit; IT resources - hardware and software - constitute capital costs for any enterprise and are subject to depreciation over time, which reduces profit.
o Before Cloud computing: enterprises maintained in-house datacenters, incurring electricity, cooling, and IT support costs.
o With Cloud computing: capital costs are shifted to operational costs by renting infrastructure and paying subscriptions for software.
For small startups with no existing IT assets, Cloud computing can completely eliminate capital costs by covering IT infrastructure, software development, and CRM/ERP through leased services.
For enterprises with considerable IT assets, IaaS-based Cloud solutions help manage unplanned capital costs by turning them into operational costs that last only as long as the need exists, such as handling peak loads without permanent capital expenditure.
Three pricing models are adopted by Cloud providers:
(a) Tiered Pricing - services offered in tiers with fixed specs and SLA at specific price per unit time (e.g., Amazon EC2);
(b) Per-unit Pricing - charged by units of specific services like data transfer or RAM/hour (e.g., GoGrid);
(c) Subscription-based Pricing - periodic fee for software usage (e.g., SaaS providers).
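The three pricing models above can be contrasted with a small sketch; all rates below are hypothetical, not actual provider prices:

```python
# Illustrative comparison of the three Cloud pricing models.
# All rates are hypothetical examples, not real provider tariffs.

def tiered_cost(hours, tier_rate_per_hour):
    """Tiered pricing: a fixed rate per unit time for a tier with fixed specs and SLA."""
    return hours * tier_rate_per_hour

def per_unit_cost(gb_transferred, rate_per_gb, ram_gb_hours, rate_per_ram_gb_hour):
    """Per-unit pricing: charged by units of specific services (data transfer, RAM/hour)."""
    return gb_transferred * rate_per_gb + ram_gb_hours * rate_per_ram_gb_hour

def subscription_cost(months, monthly_fee):
    """Subscription-based pricing: a periodic fee for software usage."""
    return months * monthly_fee

# One month (720 hours) of a small instance at a hypothetical $0.10/hour:
print(tiered_cost(720, 0.10))                # approx. 72.0
print(per_unit_cost(100, 0.09, 720, 0.05))   # approx. 45.0
print(subscription_cost(12, 20.0))           # 240.0
```

In all three cases the customer pays only for what is consumed, which is exactly how capital costs become operational costs.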
Cloud computing also eliminates indirect costs such as software licensing fees, since these are borne by the service provider.
All Cloud pricing models are based on the pay-as-you-go model, which converts IT capital costs into operational costs.
Cloud computing is a service-based model for delivering IT infrastructure and applications as utilities; introducing standards and allowing interoperability between solutions from different vendors is of fundamental importance to fully realize this vision.
Vendor lock-in is one of the major strategic barriers against the seamless adoption of Cloud computing; it can prevent a customer from switching to another competitor's solution, or make switching possible only at considerable cost and time.
The current state of Cloud standards resembles the early Internet era where there was no common agreement on protocols and technologies; organizations such as CCIF, Open Cloud Consortium, and DMTF Cloud Standards Incubator are leading standardization efforts.
In the IaaS market, the use of a proprietary virtual machine format is a major reason for vendor lock-in; the Open Virtualization Format (OVF) is an attempt to provide a common format for storing VM information and metadata, enabling platform-independent packaging and distribution.
OVF supports three levels of portability:
Level 1 - runs on a specific product/CPU
Level 2 - runs on a specific family of virtual hardware
Level 3 - runs on multiple families of virtual hardware; the level expected for seamless Cloud Federation collaboration.
Another direction for standards involves devising a general reference architecture for Cloud computing systems and providing a standard interface for interaction; at present, compatibility between different Cloud solutions is restricted and a common set of APIs is lacking.
In the IaaS market, Amazon Web Services plays a leading role and other IaaS solutions mostly provide AWS-compatible APIs, constituting themselves as valid alternatives; however, there is no consistent trend in devising truly common APIs for IaaS.
The Open Cloud Computing Interface (OCCI) is a community-driven open organization providing specifications for protocols and APIs covering IaaS, PaaS, and SaaS; it defines the OCCI core model, OCCI infrastructure extensions for IaaS, and the OCCI HTTP rendering model.
The Cloud Data Management Interface (CDMI) defines a functional interface for creating, retrieving, updating, and deleting data elements from the Cloud; proposed by SNIA, it introduces Data Storage-as-a-Service (DaaS) and organizes an object model with Data Objects, Container Objects, Domain Objects, Queue Objects, and Capability Objects.
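Since CDMI is an HTTP-based interface, its CRUD operations map onto standard HTTP verbs. The sketch below only builds (does not send) a hypothetical container-creation request with Python's stdlib; the endpoint is a placeholder, and the header values follow the SNIA CDMI specification as the author understands it:

```python
import json
import urllib.request

# Build (but do not send) a CDMI container-creation request.
# "cloud.example.com" is a placeholder endpoint, not a real service.
req = urllib.request.Request(
    url="https://cloud.example.com/cdmi/my_container/",
    method="PUT",                        # CDMI create/update uses HTTP PUT
    data=json.dumps({"metadata": {"project": "demo"}}).encode(),
    headers={
        "Content-Type": "application/cdmi-container",
        "Accept": "application/cdmi-container",
        "X-CDMI-Specification-Version": "1.0.2",
    },
)
print(req.get_method())                       # PUT
print(req.get_header("Content-type"))         # application/cdmi-container
```

Retrieval, update, and deletion of Data Objects follow the same pattern with GET, PUT, and DELETE against the object's URI.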
The Open Cloud Manifesto, drafted in 2009 with more than 400 Cloud service providers, is a declaration of intent for an interoperable and open Cloud computing platform emphasizing four goals: Choice (select best provider with open technology), Flexibility (easier switching between providers), Speed and Agility (scale on demand), and Skills (common knowledge across providers).
The ability to scale on demand is one of the most attractive features of Cloud computing; Clouds allow scaling beyond the limits of existing in-house IT resources whether they are infrastructure (compute and storage) or application services.
To implement scalability, the Cloud middleware must be designed with the principle of scalability along different dimensions - performance, size, and load - so that it can manage a huge number of resources and users.
The Cloud middleware manages resources and users who rely on the Cloud to obtain horsepower they cannot obtain within the premises without bearing considerable administrative and maintenance costs; these costs are a reality only for those who develop, manage, and maintain the Cloud middleware.
Within a scalable Cloud scenario, the ability to tolerate failure becomes fundamental, sometimes even more important than providing an extremely efficient and optimized system; hence the challenge is designing highly scalable and fault-tolerant systems that are easy to manage and provide competitive performance.
Recent developments in virtualization enable dynamic migration of VMs across physical nodes according to QoS requirements; unused VMs can be logically resized and consolidated on a minimal number of physical nodes, while idle nodes can be turned off or hibernated to save energy.
Through consolidation of VMs, a large number of users can share a single physical server, which increases utilization and reduces the total number of servers required; VM consolidation can also be applied dynamically by capturing workload variability.
Two crucial issues must be addressed to explore both performance and energy efficiency in Cloud data centers: (i) turning off resources in a dynamic environment puts QoS at risk as aggressive consolidation may cause insufficient resources for load spikes; (ii) agreed SLAs bring challenges to application performance management in virtualized environments.
Current resource allocation in a Cloud data center aims at providing high performance while meeting SLAs, with limited consideration for energy consumption during VM allocations; effective consolidation policies must minimize energy use without compromising user QoS requirements.
Novel analytical models and QoS-based resource allocation algorithms that optimize VM placements with the objective of minimizing energy consumption under performance constraints are needed for achieving both scalability and fault tolerance.
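As a minimal sketch of such an allocation algorithm (not the specific method any provider uses), a first-fit-decreasing heuristic packs VMs onto as few hosts as possible under a CPU-capacity constraint, so the remaining idle hosts can be powered down:

```python
# First-fit decreasing VM placement: consolidate VMs onto a minimal number of
# hosts under a simple capacity (performance) constraint, so idle hosts can be
# turned off to save energy. A toy sketch, not a production allocator.

def place_vms(vm_demands, host_capacity):
    """Return a list of hosts, each a list of VM demands placed on it."""
    hosts = []  # each entry: [remaining_capacity, [vm demands]]
    for demand in sorted(vm_demands, reverse=True):   # largest VMs first
        for host in hosts:
            if host[0] >= demand:                     # fits within capacity
                host[0] -= demand
                host[1].append(demand)
                break
        else:                                         # no host fits: power one on
            hosts.append([host_capacity - demand, [demand]])
    return [h[1] for h in hosts]

# Eight VMs consolidated onto hosts of capacity 100:
placement = place_vms([60, 40, 30, 30, 20, 10, 5, 5], host_capacity=100)
print(len(placement))   # 2 active hosts instead of 8
```

A real energy-aware allocator would also model migration costs and headroom for load spikes, per the QoS risks noted above.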
Cloud providers have been deploying data centers in multiple locations (e.g., Amazon EC2 in US, Europe, Singapore), leading to the emergence of 'InterCloud' that supports scalable delivery of application services by harnessing multiple data centers, adding an additional dimension to scalability and fault tolerance management.
Modern data centers host applications ranging from short-running web requests to long-running simulations; managing multiple applications in response to time-varying workloads with statically allocated resources has made energy consumption a critical challenge alongside performance.
According to McKinsey's report on 'Revolutionizing Data Center Energy Efficiency', a typical data center consumes as much energy as 25,000 households; the total energy bill for data centers in 2010 was over $11 billion, and energy costs in a typical data center double every five years.
Data centers are not only expensive but also environmentally unfriendly; carbon emissions from worldwide data centers now exceed those of both Argentina and the Netherlands, and Cloud service providers must adopt measures to ensure high energy costs do not dramatically reduce their profit margin.
According to Amazon.com's estimate, energy-related costs amount to 42% of the total budget including direct power consumption and cooling infrastructure amortized over 15 years; companies like Google, Microsoft, and Yahoo are therefore building data centers near the Columbia River to exploit cheap hydroelectric power.
Green Cloud computing is envisioned to achieve not only efficient processing and utilization of computing infrastructure but also to minimize energy consumption; this is essential to ensure the future growth of Cloud computing is sustainable (Below Fig.4.5.1- Green Cloud-Computing Scenario).
Fig-4.5.1 Green Cloud-Computing Scenario
The Green Cloud architecture (Fig.4.5.2 - High-level System Architectural Framework for Green Cloud computing) consists of four main components: (a) Consumers/Brokers who submit service requests; (b) Green Resource Allocator acting as interface between Cloud infrastructure and consumers; (c) Virtual Machines (VMs) that are dynamically started/stopped; and (d) Physical Machines providing underlying hardware.
Fig.4.5.2 High-level system architectural framework for green cloud computing.
The Green Resource Allocator includes sub-components: Green Negotiator (finalizes SLAs with prices and penalties), Service Analyzer (interprets and evaluates service requests), Consumer Profiler (grants special privileges to important consumers), Pricing module, Energy Monitor (observes which machines to power on/off), Service Scheduler, VM Manager, and Accounting module.
Energy-aware dynamic resource allocation uses VM consolidation: by dynamically migrating VMs across physical machines, workloads can be consolidated and unused resources can be put on a low-power state or configured using DVFS (Dynamic Voltage and Frequency Scaling) to operate at low-performance levels to save energy.
The concept of InterClouds supports energy efficiency by routing requests to data centers where renewable or cheaper energy is available at a given time; since local electricity demand varies by time of day and weather, and each site has different energy sources (coal, hydro-electric, wind), load balancing across sites can reduce energy costs.
Sending load to remote data centers incurs both delay costs and increased data transfer energy costs; improvements in energy-efficient transport technology should lead to significant reductions in power consumption of Cloud software services, making InterCloud a promising approach for sustainable computing.
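The routing trade-off just described can be sketched as a cost minimization: each request goes to the site with the lowest energy price plus a transfer penalty for leaving the local data center. The site names, prices, and penalties below are hypothetical:

```python
# Sketch of InterCloud load routing: pick the data center minimizing
# energy price plus a transfer penalty for sending load off-site.
# Site names and all cost figures are hypothetical.

def cheapest_site(sites, local_site, transfer_penalty):
    """sites: dict of site name -> energy price per unit of work."""
    def total_cost(name):
        return sites[name] + (0 if name == local_site else transfer_penalty)
    return min(sites, key=total_cost)

sites = {"us-coal": 0.12, "eu-hydro": 0.06, "sg-mixed": 0.10}

# With a small transfer penalty, remote hydro power still wins:
print(cheapest_site(sites, local_site="us-coal", transfer_penalty=0.02))  # eu-hydro
# With a large penalty (delay + transfer energy), the request stays local:
print(cheapest_site(sites, local_site="us-coal", transfer_penalty=0.10))  # us-coal
```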
Cloud Federation (also called InterCloud) refers to an aggregation of Cloud computing providers which have separate administrative domains; within a Cloud computing context, 'federation' implies agreements between different Cloud providers allowing them to leverage each other's services in a privileged manner.
Cloud Federation is defined by Reuven Cohen (CTO, Enomaly Inc.) as: a system that 'manages consistency and access controls when two or more independent geographically distinct Clouds share authentication, files, computing resources, command and control or access to storage resources.'
InterCloud - introduced by Cisco - represents a 'Cloud of Clouds' based on future standards and open interfaces; whereas Cloud Federation is more general and also includes ad hoc aggregations between Cloud providers on the basis of private agreements and proprietary interfaces.
Fig:4.6.1 Cloud Federation Reference Stack
Fig:4.6.2 RESERVOIR Cloud Deployment
The Cloud Federation Reference Stack (Fig.4.6.1) consists of three levels: (i) Conceptual Level addressing motivations, advantages, opportunities, and obligations; (ii) Logical and Operational Level defining the Federation Model, Service Level Agreements, and pricing models; and (iii) Infrastructural Level handling protocols, interfaces, standards, and federation platforms like RESERVOIR and InterCloud.
From a provider perspective, being part of a federation is favorable if it helps increase revenue, provides new business opportunities, or sustains quality of service at peak loads; functional motivations include supplying low-latency access, handling bursts in demand, scaling existing applications, and making revenue from unused capacity.
Non-functional motivations for joining a Cloud Federation include: meeting compulsory regulations about the location of data (geo-location compliance), containing transient operational cost spikes during natural disasters or sudden electricity price changes, and providing disaster recovery when a provider's data center goes offline.
The Logical and Operational Level identifies how and when to lease a service to or leverage a service from another provider; this is where market-oriented Cloud computing is implemented, and SLAs (which originated in telecommunications in the 1980s) define purpose, restrictions, validity period, scope, parties, SLOs, penalties, optional services, and administration.
RESERVOIR (a European research project) implements Cloud Federation at the IaaS layer; it is based on dynamic federation where each infrastructure provider is an autonomous business that deploys RESERVOIR middleware to orchestrate leasing of internal resources to other providers within the context of a negotiated SLA (Fig.4.6.2 - RESERVOIR Cloud Deployment; Fig.4.6.3 - RESERVOIR Architecture).
Fig:4.6.3 RESERVOIR architecture
Fig:4.6.4 InterCloud architecture.
The InterCloud architectural framework (Fig.4.6.4 - InterCloud Architecture) is a service-oriented framework composed of two main elements: (a) CloudExchange - the market-making component that allows providers to find each other, publish resources, run auctions, and trade Cloud assets; and (b) CloudCoordinator - present on each federation member to manage domain-specific issues, negotiate with remote coordinators, and trigger federation actions when local resources are insufficient or underutilized.
Cloud Federation security introduces additional challenges: a baseline security must be guaranteed across all Cloud vendors in the federation; federated identity management (using standards like Liberty Alliance, OASIS SAML, and WS-Federation) provides standards-based authentication, SSO, and role-based access control, enabling users to authenticate once to access services across the entire federated network.
Computer security encompasses a range of concepts, tools, and technologies designed to establish measures and controls that guarantee the confidentiality, integrity, and availability of data and information processed and stored by computers; it is categorized into four main domains shown in (Fig.4.7.1 - Categories of Computer Security): Application Security, Network Security, Information Security, and Endpoint Security.
Fig.4.7.1 - Categories of Computer Security
Application Security (AppSec) involves implementing security software, hardware techniques, and best practices to protect applications from unauthorized access; its sub-domains include web application security, API security, and Cloud-Native application security; OWASP has identified the top 10 cloud-native security risks including insecure configurations, injection flaws, and inadequate authentication.
Network Security (also called cybersecurity) safeguards data, systems, and services from unauthorized access across networks; its sub-domains cover physical, technical, and administrative network security; common network threats include Phishing, Denial of Service (DoS) attacks, Malware, and Ransomware.
Information Security mitigates risks from unauthorized access, use, disclosure, destruction, modification, and disruption of information; it follows the CIA triad - Confidentiality, Integrity, and Availability - along with Nonrepudiation (data integrity in transit), Authenticity (trusted sources), and Accountability (user access control).
Endpoint Security refers to the collection of measures implemented to safeguard end-user devices such as mobile devices, laptops, and IoT devices from hackers; it utilizes antivirus software, establishes comprehensive security policies, and Cloud vendors often incorporate it as an integral part of their cloud infrastructure solutions.
Vulnerability is a flaw, bug, misconfiguration, or weakness in applications, databases, networks, or infrastructure that exposes data to threats; vulnerabilities are classified into two types: Technical vulnerabilities (bugs in code and software errors) and Human vulnerabilities (caused by employees falling for phishing or social engineering attacks).
A Threat is a malicious activity or action that exploits a vulnerability, affecting confidentiality, integrity, and availability; threats are categorized as Intentional (malware, ransomware, phishing, deliberately carried out by attackers), Unintentional (human errors such as forgetting to update a firewall), and Natural (earthquakes and floods, which are unpredictable but damaging).
Risk is the probability of a harmful event occurring due to threats exploiting vulnerabilities; it is expressed as Risk = Threat × Vulnerability; risk management involves regular security assessments, risk tolerance levels, and knowledge of vulnerabilities, and cyber risks are classified as Internal risks (insider threats, human errors) and External risks (DDoS attacks from outside organizations).
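The Risk = Threat x Vulnerability relationship is often applied with ordinal scores; the sketch below uses a hypothetical 1-5 scale for both factors:

```python
# Risk = Threat x Vulnerability, with hypothetical 1-5 ordinal scores.

def risk_score(threat, vulnerability):
    """Higher score means higher priority in a risk assessment."""
    return threat * vulnerability

# A likely phishing threat (4) against untrained staff (5) far outranks
# a rare natural disaster (2) against a hardened facility (1):
print(risk_score(4, 5))  # 20
print(risk_score(2, 1))  # 2
```

Regular security assessments recompute these scores as vulnerabilities are patched or new threats appear.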
Cryptography protects information using mathematical algorithms to convert messages (Plaintext) into unreadable form (Ciphertext) via Encryption, and back via Decryption as shown in (Fig. 4.7.2 - Basic Operations of Cryptography); its four primary features are: Confidentiality (accessible only to intended users), Integrity (data not altered in transit), Nonrepudiation (cannot deny having sent a message), and Authentication (sender and receiver are confirmed).
Fig. 4.7.2 - Basic Operations of Cryptography
Fig. 4.7.3 - Concept of AAA
The AAA (Authentication, Authorization, and Accounting) security framework, illustrated in (Fig.4.7.3 - Concept of AAA), controls access to computer resources: Authentication proves user identity (via passwords, PKI, biometrics); Authorization grants granular access permissions for specific tasks; Accounting monitors and logs user activity for auditing and billing; together they provide high accountability and are implemented in cloud using IAM policies, SSH, SSL, PKI, and digital certificates.
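The AAA flow can be sketched as three checks in sequence; the users, permissions, and audit log below are in-memory stand-ins for real IAM services, not any provider's API:

```python
# Toy AAA sketch: authenticate, authorize, then account (log) each action.
# All stores are in-memory stand-ins for real IAM/auditing services.

USERS = {"alice": "s3cret"}                   # authentication store
PERMISSIONS = {"alice": {"storage:read"}}     # authorization store
AUDIT_LOG = []                                # accounting store

def access(user, password, action):
    if USERS.get(user) != password:           # Authentication: prove identity
        return "denied: authentication failed"
    if action not in PERMISSIONS.get(user, set()):  # Authorization: granular permission
        AUDIT_LOG.append((user, action, "denied"))
        return "denied: not authorized"
    AUDIT_LOG.append((user, action, "allowed"))     # Accounting: log for audit/billing
    return "allowed"

print(access("alice", "s3cret", "storage:read"))   # allowed
print(access("alice", "s3cret", "storage:write"))  # denied: not authorized
print(len(AUDIT_LOG))                              # 2 entries for auditing
```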
The cloud security architecture, illustrated in (Fig.4.8.1 - Cloud Security Architecture), is built upon the cloud computing reference architecture and consists of fundamental service layers: IaaS, PaaS, and SaaS; each service layer has distinct security measures and components, with a Shared Responsibility Model (SRM) that provides a clear understanding of security responsibilities between CSPs and users; this model has been adopted by Google, AWS, and Azure.
Fig.4.8.1 - Cloud Security Architecture
Fig.4.8.2 - Working of Digital Signatures
IaaS Security covers the underlying system infrastructure (physical servers, storage, network), with the CSP responsible for physical security, network-level security (virtual routers, switches, software-defined networks), and hypervisor-level security for VMs; the User is responsible for OS-level security, identity management, access management rules, security groups for network flows, code-level security, and application data security.
At the core middleware level of IaaS, the CSP incorporates hypervisor technologies to manage virtual pools of compute, storage, and network resources; VM security is ensured through isolation, machine instruction security via ISA, and safeguarding VM data; network security is enforced through inbound/outbound traffic rules (security groups), Virtual Private Cloud (VPC), and Network Address Translation (NAT).
For IaaS object storage security, Identity and Access Management (IAM) controls public access to stored objects; block storage is protected with disk encryption and authentication; the core middleware also provides secure and up-to-date OS images free from vulnerabilities; at the organizational level, AAA is implemented using IAM policies to establish access permissions across user accounts.
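The security-group mechanism mentioned above is essentially a rule match on each packet's protocol, port, and source prefix. A minimal sketch with hypothetical rules:

```python
# Sketch of inbound security-group evaluation at the IaaS network level:
# a flow is allowed only if some rule matches its protocol, port, and
# source prefix. The rules below are hypothetical examples.
import ipaddress

RULES = [
    {"proto": "tcp", "port": 443, "source": "0.0.0.0/0"},   # HTTPS from anywhere
    {"proto": "tcp", "port": 22,  "source": "10.0.0.0/8"},  # SSH only from inside the VPC
]

def inbound_allowed(proto, port, src_ip):
    src = ipaddress.ip_address(src_ip)
    return any(
        r["proto"] == proto and r["port"] == port
        and src in ipaddress.ip_network(r["source"])
        for r in RULES
    )

print(inbound_allowed("tcp", 443, "203.0.113.9"))  # True: HTTPS is open
print(inbound_allowed("tcp", 22, "203.0.113.9"))   # False: SSH blocked from the internet
print(inbound_allowed("tcp", 22, "10.1.2.3"))      # True: SSH allowed from the VPC
```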
PaaS Security primarily focuses on safeguarding application code, including code repositories and container images; the CSP oversees networks, servers, OS, and storage, while PaaS users retain control over their code, workflows, configurations, and application hosting; PaaS CSPs offer data encryption for data at rest and in transit, but users should exercise caution when transmitting data through REST APIs over HTTPS.
SaaS Security is focused on safeguarding private and enterprise data and applications delivered through subscription-based cloud platforms; CSPs play a pivotal role in establishing security measures to prevent unauthorized access and data breaches, implementing DDoS attack protocols, and deploying Web Application Firewalls (WAFs); end-users bear responsibility for securely managing their login credentials and avoiding storing sensitive information in browser cookies.
Symmetric Key Cryptography allows both sender and receiver to use the same key for encryption and decryption; it is simpler and faster but requires secure key exchange beforehand; popular algorithms are DES (Data Encryption Standard) and AES (Advanced Encryption Standard); in contrast, Asymmetric Key Cryptography uses two distinct keys - the Public Key (openly accessible) for encryption and the Private Key (exclusive to the receiver) for decryption, with RSA being the widely used algorithm.
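The asymmetric scheme can be illustrated with textbook RSA on deliberately tiny primes; this is a teaching toy, not a secure implementation (real systems use AES for symmetric encryption and RSA/ECC with proper padding for asymmetric):

```python
# Toy textbook RSA with tiny primes: demonstrates that the public key (e, n)
# encrypts while only the private key (d, n) decrypts. Deliberately insecure.

p, q = 61, 53                # small primes (never this small in practice)
n = p * q                    # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m):              # anyone holding the public key can encrypt
    return pow(m, e, n)

def decrypt(c):              # only the private-key holder can decrypt
    return pow(c, d, n)

m = 42
c = encrypt(m)
print(c != m)          # True: ciphertext differs from plaintext
print(decrypt(c))      # 42: the original message is recovered
```

Contrast with the symmetric case: there a single shared key would both encrypt and decrypt, which is faster but requires exchanging that key securely beforehand.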
Public Key Infrastructure (PKI) is the governing body responsible for managing digital certificates within a Public Key Cryptography scheme; its components include Public and Private Keys, Public Key Certificates (issued by Certificate Authority - CA), Certificate Authority (CA) that builds mutual trust, Registration Authority (RA) that verifies legitimacy, and Digital Signatures that validate authenticity and integrity of documents as shown in (Fig.4.8.2 - Working of Digital Signatures).
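The digital-signature flow of Fig.4.8.2 reverses the key roles: signing uses the private key over a message digest, and anyone with the public key can verify. Again a toy textbook-RSA sketch with insecure parameters; real signatures use padded RSA or ECDSA:

```python
# Toy digital signature: sign a SHA-256 digest with the private key,
# verify with the public key. Insecure toy parameters for illustration only.
import hashlib

p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)

def digest(msg):                     # reduce the message to a small integer digest
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg):                       # private key (d, n): only the owner can sign
    return pow(digest(msg), d, n)

def verify(msg, sig):                # public key (e, n): anyone can verify
    return pow(sig, e, n) == digest(msg)

sig = sign(b"pay Bob $10")
print(verify(b"pay Bob $10", sig))   # True: authenticity and integrity hold
print(verify(b"pay Bob $99", sig))   # almost surely False: tampering changes the digest
```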
Fig.4.8.3 - Working of SSH
Fig.4.8.4 - Example of Public Key Cryptography
Secure Socket Layer (SSL) secures, authenticates, and encrypts communications between clients and servers over the internet using standard port 443 with HTTPS; Secure Shell (SSH) is a secured network protocol using TCP port 22 and client-server architecture that provides two authentication methods - Password-based (least secure) and Key-based using public/private keys (most reliable) - to create an encrypted channel as shown in (Fig.4.8.3 - Working of SSH).
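Python's stdlib `ssl` module shows what the secure defaults for a TLS client look like: the default context both validates the server certificate and checks the hostname, which is what makes HTTPS on port 443 trustworthy (a minimal sketch; the connection code in the comment uses the placeholder host `example.com`):

```python
# Client-side SSL/TLS defaults via Python's stdlib ssl module.
import ssl

ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: the server certificate must validate
print(ctx.check_hostname)                    # True: the certificate must match the hostname

# A real connection would then wrap a TCP socket to port 443, e.g.:
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           ...  # encrypted channel established
```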
Public Key Cryptography (also called asymmetric key cryptography) is widely used in internet-based applications; a pair of keys is used for encryption and decryption, where each public key matches exactly one private key; an example is shown in (Fig.4.8.4 - Example of Public Key Cryptography) where anyone with Bob's public key can encrypt information, but only Bob can decrypt it using his private key; digital certificates build the cryptographic trust link between key ownership and the entity, and PKI manages these certificates at cloud infrastructure level.
The Shared Responsibility Model (SRM) is a cloud security framework that outlines the security responsibilities of both the cloud provider (CSP) and the user; it encompasses infrastructure, hardware, data identities, workloads, network settings, and more, assigning specific accountabilities to each party; it has become crucial as organizations increasingly migrate from on-premise to public cloud environments.
The shared responsibility model defines the division of security responsibilities across the three cloud service models as shown in (Fig.4.9.1 - Cloud Service Model Management Complexity): in IaaS, the CSP manages infrastructure up to the virtualization layer while the user manages OS, API, middleware, runtime, and application layers; in PaaS, the security divide is at the platform level; in SaaS, the CSP manages almost all security elements and the user's role primarily involves data access management.
Fig.4.9.1 - Cloud Service Model Management Complexity
Fig.4.9.2 - Microsoft Azure Shared Responsibility Model
Table.4.9.1 Shared Responsibility Model of Cloud Security
Table.4.9.1 (Shared Responsibility Model of Cloud Security) describes the security elements at all layers of IaaS, PaaS, and SaaS: Physical security is always CSP's responsibility; in IaaS - Host infrastructure and Network flow controls are shared (Both); in PaaS - Identity/access controls, Application data storage, APIs/Middleware are shared; in SaaS - User/Endpoint Security is shared while all other elements are CSP's responsibility.
TABLE-LEGEND:
User - Responsibility of the User/Customer
CSP - Responsibility of the Cloud Service Provider
Both - Shared responsibility between User and CSP
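The split described in Table 4.9.1 can be expressed as a lookup using the same legend (User / CSP / Both); the entries below follow the table as summarized in the text and should be treated as illustrative, since real providers publish their own detailed matrices:

```python
# Table 4.9.1 as a lookup: who secures which element in each service model.
# Entries follow the text's summary of the table; defaults are simplifications.

RESPONSIBILITY = {
    ("IaaS", "physical security"): "CSP",
    ("IaaS", "host infrastructure"): "Both",
    ("IaaS", "network flow controls"): "Both",
    ("PaaS", "identity/access controls"): "Both",
    ("PaaS", "application data storage"): "Both",
    ("PaaS", "APIs/middleware"): "Both",
    ("SaaS", "user/endpoint security"): "Both",
}

def who_secures(model, element):
    # Physical security is always the CSP's job; in SaaS, unlisted elements
    # default to the CSP, while in IaaS/PaaS they default to the user.
    if element == "physical security" or model == "SaaS":
        return RESPONSIBILITY.get((model, element), "CSP")
    return RESPONSIBILITY.get((model, element), "User")

print(who_secures("IaaS", "physical security"))      # CSP
print(who_secures("PaaS", "APIs/middleware"))        # Both
print(who_secures("SaaS", "user/endpoint security")) # Both
```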
Microsoft Azure's Shared Responsibility Model (Fig.4.9.2 - Microsoft Azure Shared Responsibility Model) provides a clear breakdown of responsibilities: Azure takes full responsibility for the data center or infrastructure level (physical servers, network infrastructure, data center); users have specific responsibilities for their own data, identity management, and devices; areas of shared responsibility include applications, identity and access management, network controls, and operating systems.
The AWS Shared Responsibility Model (Fig.4.9.3 - AWS Shared Responsibility Model) draws a clear line between user and AWS security responsibilities: AWS's primary focus is protecting the infrastructure where all services run, encompassing hardware, software, networking, and facilities; the user's responsibility depends on the service chosen - for EC2, users must use PKI with secure keys, design security groups with firewall rules, and manage the guest OS; AWS also defines inherited control (user fully inherits AWS controls) and shared control (applies to both infrastructure and customer layers).
Fig.4.9.3 - AWS Shared Responsibility Model
Fig.4.9.4 - Google Cloud Shared Responsibility Model
Google Cloud's Shared Responsibility Model (Fig.4.9.4 - Google Cloud Shared Responsibility Model), also known as Shared fate, defines responsibilities based on workload type, industry regulatory framework (PCI DSS, GDPR, HIPAA), and location of data centers; in IaaS, most security responsibilities belong to users and Google Cloud handles underlying infrastructure and physical security; in PaaS, Google Cloud takes more responsibilities including network controls, storage, encryption, and identity management; in SaaS, Google Cloud owns major security responsibilities while users are only responsible for access control and data.
In IaaS under the shared responsibility model, the CSP provides security at physical environment, network-level (virtual routers, switches, software-defined networks), and hypervisor-level for the virtualization stack; the user manages most layers including OS, identity management, access management rules, security groups for network flows, code security, endpoint security at API and middleware level, and application data security.
In PaaS, the CSP provides a hosted runtime environment and is responsible for security at platform level, middleware, network, and servers (with OS-level security); the user is responsible for application code security, data security, and APIs; shared responsibility exists for identity and access control management; PaaS users must maintain strict control over development workflows, ensure comprehensive monitoring, logging, and auditability throughout the entire development lifecycle.
The cloud security frameworks that support the shared responsibility model and provide guidelines include: NIST's cybersecurity framework (for core functionalities in public cloud adopting five functions: Identify, Protect, Detect, Respond, Recover), FedRAMP (for government cloud adoption), CSA STAR (Cloud Security Alliance Security, Trust and Assurance Registry for demonstrating ability to defend against threats), and ISO/IEC 27017:2015 (for cloud-specific security controls).
Six best practices for cloud security based on the shared responsibility model include:
(1) Understand the shared responsibility model and identify your obligations;
(2) Enable multi-factor authentication to protect accounts;
(3) Implement proper IAM policies with least privilege;
(4) Encrypt data at rest and in transit;
(5) Pay attention to credible security warnings as CSPs offer tools that notify users of potential risks;
(6) Security is your responsibility - using cloud services does not mean outsourcing security, and users must adhere to standard practices while using cloud services and data.
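Practice (3), least-privilege IAM, amounts to a default-deny decision: an action is permitted only if a policy explicitly grants it. A minimal sketch in Python (the roles, actions, and policy table below are illustrative assumptions, not any CSP's real IAM schema):

```python
# Toy least-privilege check: hypothetical roles and actions,
# not a real cloud provider's IAM model.
POLICY = {
    "auditor":   {"storage:read", "logs:read"},
    "developer": {"storage:read", "storage:write", "compute:deploy"},
}

def is_allowed(role: str, action: str) -> bool:
    """Default-deny: permit an action only if the role's policy
    explicitly grants it (least privilege)."""
    return action in POLICY.get(role, set())

if __name__ == "__main__":
    print(is_allowed("auditor", "logs:read"))      # True
    print(is_allowed("auditor", "storage:write"))  # False
    print(is_allowed("intern", "storage:read"))    # False: unknown roles get nothing
```

The key design choice is that the absence of a grant means denial; adding a role or action never widens access for anyone else.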
Cloud deployments are classified into three main categories - Private, Public, and Hybrid - each carrying distinct security implications; the challenges, architectures, advantages, and disadvantages of the cloud security model vary significantly with the chosen deployment model; private clouds operate within the organization's firewall rules and network configuration, ensuring high security; public clouds share infrastructure among multiple users and therefore require strong identity management; hybrid clouds combine both and introduce their own cross-cloud security challenges.
(a) Private Cloud Security: Private clouds are specialized environments dedicated to a single tenant or organization; the primary advantage is keeping sensitive business data under the direct control and security of the organization; in private cloud setups, the shared responsibility model places the onus on the organization (both CSP and user) to protect all aspects including compute resources, storage, network infrastructure, applications, and compliance; overall security must be implemented at Infrastructure Level, Storage Level, Platform Level, and through Identity and Access Management.
Private Cloud Security Risks include: Physical Security challenges (installing surveillance cameras, fire protection, robust access control), Insider Threats (privileged users or employees may misuse access), Data Loss or Leakage (due to hardware failures, natural disasters, human error, or inadequate backup), Inadequate Access Controls (weak mechanisms causing unauthorized access), Malware and External Attacks (DDoS, targeted exploits), Lack of Patch Management (failure to apply timely security patches), Compliance and Regulatory Issues (failure to adhere to industry-specific regulations), Data Segregation and Multi-Tenancy risks (improper data separation for multiple business units), and Lack of Visibility and Monitoring (insufficient logging leading to delayed incident response).
Securing Private Clouds requires four key best practices:
(1) Choose the Right Platform and Provider - consider track record, reputation, security services at each layer, and certifications in data security (providers include VMware, OpenStack, Red Hat, Azure Stack, AWS Outposts);
(2) Implement a Patch Management Strategy - regularly update VM images, firmware, and hardware components;
(3) Educate the Staff - disable default passwords, use PKIs, train users on managing and troubleshooting private clouds;
(4) Regular Audits and Update Security Policies - undergo CSA STAR evaluations and keep security policies updated as threats evolve.
Fig.4.10.1 - Overview of Azure Cloud Security Services
Fig.4.10.2 - Overview of Google Cloud Security Services
Fig.4.10.3 - Overview of AWS Cloud Security Services
(b) Public Cloud Security: Security considerations in public cloud environments primarily focus on adopting the NIST cybersecurity framework; since the cloud infrastructure is shared among multiple users, security services provided by public cloud vendors are categorized into: Infrastructure Security (VM security, keys, network security groups, firewall rules, data encryption), Network Security (VPNs, DDoS protection, firewall, monitoring), Application Security (Web Application Firewalls, vulnerability scanning, runtime self-protection), Endpoint Security (antivirus, anti-malware, firewall protection), Identity and Access Management (IAM with multi-factor authentication and federation), Storage Security (access control and encryption for data at rest and in transit), and Risks and Compliance (tools for PCI DSS or HIPAA compliance).
(c) Hybrid Cloud Security: The hybrid cloud presents an opportunity to keep sensitive information on existing on-premise IT infrastructure while scaling out to public cloud resources as needed; the Cloud Security Alliance (CSA) Hybrid Cloud Security Working Group describes four cross-cloud security capabilities: Perimeter security (defining physical and logical boundaries between on-premises and public cloud), Transmission security (security controls for migrating VMs, containers, applications, and data), Storage security (security controls for data storage, backup, and restoration), and Management security (operations management, permission management, identity, and authentication, with unified management across multiple cloud environments).
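Transmission security in practice usually means TLS on every channel crossing the on-premises/public-cloud boundary. A minimal client-side sketch using Python's standard ssl module (no connection is made here; the host name in the comment is a placeholder):

```python
import ssl

# Build a client-side TLS context with secure defaults:
# certificate verification and hostname checking are enabled.
context = ssl.create_default_context()

# Tighten it further: refuse anything older than TLS 1.2.
context.minimum_version = ssl.TLSVersion.TLSv1_2

print(context.verify_mode == ssl.CERT_REQUIRED)  # True
print(context.check_hostname)                    # True
# Real use: context.wrap_socket(sock, server_hostname="cloud.example.com")
```

Using `create_default_context()` rather than a bare `SSLContext` matters: it is the variant that turns on certificate validation and hostname checking by default.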
Hybrid Cloud Security Risks and Best Practices: Key risks include DDoS Attacks (disrupting both external and internal communications), Data Breach (higher risk of leakage due to misconfigurations, unauthorized access, and man-in-the-middle attacks), Compliance difficulties (moving data between on-premise and cloud complicates meeting government regulations), SLA misalignment (each CSP has its own APIs, tools, and SLAs, making consistency complex), Cloud skills gaps (different security configurations across AWS, Azure, and Google Cloud lead to misconfigurations), and Risk Assessment challenges (a fragmented approach for each cloud provider); best practices to secure hybrid clouds include:
(i) Create a Unified Access Management Strategy using unified IAM with multi-factor authentication,
(ii) Automate Configuration and Validation Across All Clouds using cloud security posture management frameworks,
(iii) Adopt New Security Standard Approaches like DevSecOps to integrate security in development pipeline,
(iv) Wider Scope to Enhance Data Security using hardware security modules and strong IAM policies,
(v) Use Zero Trust Principles - adopt architectures that treat every network as untrusted, applying context-aware authentication and access control to each request.
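Practice (v) can be illustrated with a toy context-aware decision function: every request is evaluated against identity and context signals, and network location alone grants nothing. The signals and rules below are illustrative assumptions, not any vendor's zero-trust product:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    user_authenticated: bool   # primary credential verified
    mfa_passed: bool           # second factor completed
    device_compliant: bool     # endpoint meets the security baseline
    network: str               # "corporate", "public", ...; never trusted by itself

def allow_access(ctx: RequestContext) -> bool:
    """Zero-trust check: grant access only when identity, MFA, and
    device posture all verify; being on the corporate network is
    deliberately ignored (no implicit trust)."""
    return ctx.user_authenticated and ctx.mfa_passed and ctx.device_compliant

if __name__ == "__main__":
    print(allow_access(RequestContext(True, True, True, "public")))      # True
    print(allow_access(RequestContext(True, False, True, "corporate")))  # False
```

Note how the second call is denied even though it originates from the "corporate" network: that is the defining difference from perimeter-based security.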
Important Questions (2Marks):
1. Explain the following terms involved in operating a cloud
(a) Security, Trust and Privacy
(b) Organizational aspects
(c) Scalability and fault tolerance
(d) Cloud Interoperability and Standards
2. Define the "Pay-as-you-go" model in Cloud computing and state its primary financial advantage for an enterprise.
3. What is "Vendor Lock-in" in the context of Cloud computing, and why is it considered a strategic barrier?
4. In the context of Cloud Scalability, what is the significance of "VM Consolidation" and how does it impact resource utilization?
5. Why is energy consumption considered a "critical challenge" for modern Cloud data centers, and what is the "Green Cloud" vision?
6. Define "Cloud Federation" and distinguish it from the "InterCloud" concept as introduced by Cisco.
7. Define the relationship between Risk, Threat, and Vulnerability using the standard security formula, and provide one example of a "Human Vulnerability."
8. Explain the "Shared Responsibility Model" (SRM) in Cloud security and name two major providers that adopt this model.
9. Compare the user's security responsibility in an IaaS model versus a SaaS model.
10. State the primary advantage of Private Cloud security and list two specific risks associated with this deployment model.
Important Questions (Essay): (1, 2, 8, 9)
1. Explain various open challenges involved in maintaining a cloud.
2. “Cloud computing shifts the financial burden from capital costs to operational costs." Elaborate on this statement by discussing the following:
(a) The difference between IT costs before and after Cloud adoption.
(b) How the Cloud benefits small startups versus large enterprises with existing assets.
(c) The three main pricing models adopted by Cloud providers
3. Discuss the importance of Interoperability and Standards in Cloud computing. In your answer, explain the role of OVF, OCCI, and the Open Cloud Manifesto in overcoming vendor lock-in.
4. In a scalable Cloud scenario, the ability to tolerate failure is as fundamental as providing an efficient system. Discuss about:
(a) different dimensions of scalability that Cloud middleware must manage
(b) trade-off between energy efficiency (consolidation) and Quality of Service (QoS)
(c) role of "InterCloud" in enhancing scalability and fault tolerance
5. Discuss the architectural framework and strategies for achieving Green Cloud computing. In your answer, include the following:
(a) The components of the High-level System Architectural Framework for Green Cloud.
(b) The role of the Green Resource Allocator and its sub-components.
(c) Techniques such as VM Consolidation, DVFS, and the use of InterClouds for energy efficiency
6. Explain the architectural framework and motivations behind Cloud Federation, covering the three levels of the Cloud Federation Reference Stack and the functional and non-functional motivations for providers to join a federation.
7. Elaborate on the core frameworks and technologies used to guarantee data security, covering the four main security domains, the CIA triad, the basic operations of cryptography, and the AAA security framework.
8. Discuss the layered security measures in Cloud Computing and the cryptographic protocols used to secure them, covering security responsibilities, symmetric and asymmetric key cryptography, Public Key Infrastructure (PKI), and the role of SSL/SSH in secure communication.
9. Explain the six best practices for maintaining a robust security posture under the SRM.
10. Compare IaaS, SaaS and PaaS in relation to Shared Responsibility Model (SRM).
11. Compare and contrast the security implications of Private, Public, and Hybrid cloud deployment models.
TB1: Mastering Cloud Computing - Powering AI, Big Data and IoT Applications, 2nd ed., Rajkumar Buyya, Christian Vecchiola, Thamarai Selvi, Sivananda Poojara, Satish N. Srirama.