




























































































Last Updated: March 14, 2017
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit
http://www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2017 Cisco Systems, Inc. All rights reserved.
Executive Summary
Modern data centers increasingly rely on a variety of architectures for storage. Whereas in the past organizations focused on traditional storage only, today organizations are focusing on Software Defined Storage for several reasons:
Software Defined Storage offers unlimited scalability and simple management.
Because of the low cost per gigabyte, Software Defined Storage is well suited for large-capacity needs, and therefore for use cases such as archive, backup, and cloud operations.
Software Defined Storage allows the use of commodity hardware.
Enterprise storage systems are designed to address business-critical requirements in the data center. But these solutions are not always optimal for use cases such as backup and archive workloads and other unstructured data, for which OLTP-style low latency is not essential.
Red Hat Ceph Storage is an example of a massively scalable, open source, software-defined storage system that provides unified storage for cloud environments. Built on an object storage architecture, it delivers enterprise-class reliability, scale-out capacity, and lower costs on industry-standard server hardware.
The Cisco UCS S3260 Storage Server, originally designed for the data center, together with Red Hat Ceph Storage is optimized for Software Defined Storage solutions, making it an excellent fit for unstructured data workloads such as backup, archive, and cloud data. The S3260 delivers a complete hardware platform with exceptional scalability for computing and storage resources together with 40 Gigabit Ethernet networking. The S3260 is the platform of choice for Software Defined Storage solutions because it provides more than comparable platforms do:
Proven server architecture that allows you to upgrade individual components without the need for migration.
High-bandwidth networking that meets the needs of large-scale object storage solutions like Red Hat Ceph Storage.
Unified, embedded management for an easy-to-scale infrastructure.
API access for cloud-scale applications.
Cisco and Red Hat are collaborating to offer customers a scalable Software Defined Storage solution for unstructured data that is integrated with Red Hat Ceph Storage. With the power of the Cisco UCS management framework, the solution is cost effective to deploy and manage and will enable next-generation cloud deployments that drive business agility, lower operational costs, and avoid vendor lock-in.
Solution Overview
Traditional storage systems are limited in their ability to easily and cost-effectively scale to support massive amounts of unstructured data. With about 80 percent of data being unstructured, new approaches using x86 servers are proving to be more cost effective, providing storage that can be expanded as easily as your data grows. Software Defined Storage is a scalable and cost-effective approach for handling massive amounts of data.
Red Hat Ceph Storage is a massively scalable, open source, software-defined storage system that supports unified storage for a cloud environment. With object and block storage in one platform, Red Hat Ceph Storage efficiently and automatically manages the petabytes of data needed to run businesses facing massive data growth. It is proven at web scale and has many deployments in production environments as an object store for large, global corporations. Red Hat Ceph Storage was designed from the ground up for web-scale block and object storage and cloud infrastructures.
Scale-out storage uses x86 architecture storage-optimized servers to increase performance while reducing costs. The Cisco UCS S3260 Storage Server is well suited for scale-out storage solutions. It provides a platform that is cost effective to deploy and manage using the power of the Cisco Unified Computing System (Cisco UCS) management: capabilities that traditional unmanaged and agent-based management systems can't offer. You can design S3260 solutions for a computing-intensive, capacity-intensive, or throughput-intensive workload.
Together, Red Hat Ceph Storage and the Cisco UCS S3260 Storage Server deliver a simple, fast, and scalable architecture for enterprise scale-out storage.
The current Cisco Validated Design (CVD) is a simple and linearly scalable architecture that provides Software Defined Storage for block and object data based on Red Hat Ceph Storage 2.1 and the Cisco UCS S3260 Storage Server. The solution includes the following features:
Infrastructure for large scale-out storage.
Design of a Red Hat Ceph Storage solution together with Cisco UCS S3260 Storage Server.
Simplified infrastructure management with Cisco UCS Manager.
Architectural scalability – linear scaling based on network, storage, and compute requirements.
Operational guide to extend a working Red Hat Ceph cluster with Ceph RADOS Gateway (RGW) and Ceph OSD nodes.
This document describes the architecture, design, and deployment procedures of a Red Hat Ceph Storage solution using six Cisco UCS S3260 Storage Servers, each with two C3x60 M4 server nodes acting as OSD nodes; three Cisco UCS C220 M4S rack servers as Monitor nodes; three Cisco UCS C220 M4S rack servers as RGW nodes; one Cisco UCS C220 M4S rack server as Admin node; and two Cisco UCS 6332 Fabric Interconnects managed by Cisco UCS Manager. The intended audience for this document includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner engineering, and customers who want to deploy Red Hat Ceph Storage on the Cisco Unified Computing System (UCS) using Cisco UCS S3260 Storage Servers.
Technology Overview
The Cisco Unified Computing System (Cisco UCS) is a state-of-the-art data center platform that unites computing, network, storage access, and virtualization into a single cohesive system.
The main components of Cisco Unified Computing System are:
Computing - The system is based on an entirely new class of computing system that incorporates rack-mount and blade servers based on the Intel Xeon processor E5 and E7 product families. The Cisco UCS servers offer the patented Cisco Extended Memory Technology to support applications with large datasets and allow more virtual machines (VMs) per server.
Network - The system is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.
Virtualization - The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.
Storage access - The system provides consolidated access to both SAN storage and Network Attached Storage (NAS) over the unified fabric. By unifying storage access, the Cisco Unified Computing System can access storage over Ethernet (NFS or iSCSI), Fibre Channel, and Fibre Channel over Ethernet (FCoE). This provides customers with a choice of storage access and investment protection. In addition, server administrators can pre-assign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management for increased productivity.
The Cisco Unified Computing System is designed to deliver:
A reduced Total Cost of Ownership (TCO) and increased business agility.
Increased IT staff productivity through just-in-time provisioning and mobility support.
A cohesive, integrated system which unifies the technology in the data center.
Industry standards supported by a partner ecosystem of industry leaders.
The Cisco UCS® S3260 Storage Server (Figure 1) is a modular, high-density, high-availability dual-node rack server well suited for service providers, enterprises, and industry-specific environments. It addresses the need for dense, cost-effective storage for ever-growing data needs. Designed for a new class of cloud-scale applications, it is simple to deploy and excellent for big data applications, software-defined storage environments such as Ceph and other unstructured data repositories, media streaming, and content distribution.
Figure 1 Cisco UCS S3260 Storage Server
Extending the capability of the Cisco UCS C3000 portfolio, the Cisco UCS S3260 helps you achieve the highest levels of data availability. With dual-node capability based on the Intel® Xeon® processor E5-2600 v4 series, it features up to 600 TB of local storage in a compact 4-rack-unit (4RU) form factor. All hard-disk drives can be asymmetrically split between the dual nodes and are individually hot-swappable. The drives can be configured with enterprise-class Redundant Array of Independent Disks (RAID) redundancy or in pass-through mode.
This high-density rack server comfortably fits in a standard 32-inch depth rack, such as the Cisco® R Rack.
The Cisco UCS S3260 is deployed as a standalone server in both bare-metal or virtualized environments. Its modular architecture reduces total cost of ownership (TCO) by allowing you to upgrade individual components over time and as use cases evolve, without having to replace the entire system.
The Cisco UCS S3260 uses a modular server architecture that, using Cisco's blade technology expertise, allows you to upgrade the computing or network nodes in the system without the need to migrate data from one system to another. It delivers:
Dual server nodes
Up to 36 computing cores per server node
Up to 60 large-form-factor (LFF) drives, with up to 28 solid-state disk (SSD) drives, plus 2 SATA SSD boot drives per server node
Up to 512 GB of memory per server node (1 terabyte [TB] total)
Support for 12-Gbps serial-attached SCSI (SAS) drives
A system I/O controller with a Cisco VIC 1300 Series embedded chip supporting dual-port 40-Gbps connectivity
The Cisco UCS VIC 1387 provides the following features and benefits:
Stateless and agile platform: The personality of the card is determined dynamically at boot time using the service profile associated with the server. The number, type (NIC or HBA), identity (MAC address and World Wide Name [WWN]), failover policy, bandwidth, and quality-of-service (QoS) policies of the PCIe interfaces are all determined using the service profile. The capability to define, create, and use interfaces on demand provides a stateless and agile server infrastructure
Network interface virtualization: Each PCIe interface created on the VIC is associated with an interface on the Cisco UCS fabric interconnect, providing complete network separation for each virtual cable between a PCIe device on the VIC and the interface on the fabric interconnect
The Cisco UCS 6300 Series Fabric Interconnects are a core part of Cisco UCS, providing both network connectivity and management capabilities for the system (Figure 4). The Cisco UCS 6300 Series offers line-rate, low-latency, lossless 10 and 40 Gigabit Ethernet, Fibre Channel over Ethernet (FCoE), and Fibre Channel functions.
Figure 4 Cisco UCS 6300 Series Fabric Interconnect
The Cisco UCS 6300 Series provides the management and communication backbone for the Cisco UCS B-Series Blade Servers, 5100 Series Blade Server Chassis, and C-Series Rack Servers managed by Cisco UCS. All servers attached to the fabric interconnects become part of a single, highly available management domain. In addition, by supporting unified fabric, the Cisco UCS 6300 Series provides both LAN and SAN connectivity for all servers within its domain.
From a networking perspective, the Cisco UCS 6300 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 and 40 Gigabit Ethernet ports, switching capacity of 2.56 terabits per second (Tbps), and 320 Gbps of bandwidth per chassis, independent of packet size and enabled services. The product family supports Cisco® low-latency, lossless 10 and 40 Gigabit Ethernet unified network fabric capabilities, which increase the reliability, efficiency, and scalability of Ethernet networks. The fabric interconnect supports multiple traffic classes over a lossless Ethernet fabric from the server through the fabric interconnect. Significant TCO savings can be achieved with an FCoE optimized server design in which network interface cards (NICs), host bus adapters (HBAs), cables, and switches can be consolidated.
The Cisco UCS 6332 32-Port Fabric Interconnect is a 1-rack-unit (1RU) 40 Gigabit Ethernet and FCoE switch offering up to 2.56 Tbps throughput and up to 32 ports. The switch has 32 fixed 40-Gbps Ethernet and FCoE ports.
Both the Cisco UCS 6332 32-Port Fabric Interconnect and the Cisco UCS 6332-16UP 40-Port Fabric Interconnect have ports that can be configured for the breakout feature, which supports connectivity between 40 Gigabit Ethernet ports and 10 Gigabit Ethernet ports. This feature provides backward compatibility to existing hardware that supports 10 Gigabit Ethernet. A 40 Gigabit Ethernet port can be used as four 10 Gigabit Ethernet ports. Using a 40 Gigabit Ethernet SFP, these ports on a Cisco UCS 6300 Series Fabric Interconnect can connect to another fabric interconnect that has four 10 Gigabit Ethernet SFPs. The breakout feature can be configured on ports 1 to 12 and ports 15 to 26 on the Cisco UCS 6332 fabric interconnect. Ports 17 to 34 on the Cisco UCS 6332-16UP fabric interconnect support the breakout feature.
The Cisco Nexus® 9000 Series Switches (Figure 5) include both modular and fixed-port switches that are designed to address data center challenges with a flexible, agile, low-cost, application-centric infrastructure.
Figure 5 Cisco Nexus 9332PQ Switch
The Cisco Nexus 9300 platform consists of fixed-port switches designed for top-of-rack (ToR) and middle-of- row (MoR) deployment in data centers that support enterprise applications, service provider hosting, and cloud computing environments. They are Layer 2 and 3 nonblocking 10 and 40 Gigabit Ethernet switches with up to 2.56 terabits per second (Tbps) of internal bandwidth.
The Cisco Nexus 9332PQ Switch is a 1-rack-unit (1RU) switch that supports 2.56 Tbps of bandwidth and over 720 million packets per second (mpps) across thirty-two 40-Gbps Enhanced QSFP+ ports.
All the Cisco Nexus 9300 platform switches use dual-core 2.5-GHz x86 CPUs with 64-GB solid-state disk (SSD) drives and 16 GB of memory for enhanced network performance.
With the Cisco Nexus 9000 Series, organizations can quickly and easily upgrade existing data centers to carry 40 Gigabit Ethernet to the aggregation layer or to the spine (in a leaf-and-spine configuration) through advanced and cost-effective optics that enable the use of existing 10 Gigabit Ethernet fiber (a pair of multimode fiber strands).
Cisco provides two modes of operation for the Cisco Nexus 9000 Series. Organizations can use Cisco® NX-OS Software to deploy the Cisco Nexus 9000 Series in standard Cisco Nexus switch environments. Organizations also can use a hardware infrastructure that is ready to support Cisco Application Centric Infrastructure (Cisco ACI™) to take full advantage of an automated, policy-based, systems management approach.
Cisco UCS® Manager (Figure 6) provides unified, embedded management of all software and hardware components of the Cisco Unified Computing System™ (Cisco UCS) across multiple chassis, rack servers, and thousands of virtual machines. It supports all Cisco UCS product models, including Cisco UCS B-Series Blade Servers, C-Series Rack Servers, and M-Series composable infrastructure and Cisco UCS Mini, as well as the associated storage resources and networks. Cisco UCS Manager is embedded on a pair of Cisco UCS 6300 or 6200 Series Fabric Interconnects using a clustered, active-standby configuration for high availability. The manager participates in server provisioning, device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection.
Figure 6 Cisco UCS Manager
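Cisco also publishes a Python SDK for the Cisco UCS Manager XML API (ucsmsdk), which can be used to automate inventory and provisioning tasks of the kind described above. The following is a minimal sketch only; the fabric interconnect cluster address and credentials are placeholders, and the exact attributes available vary by model and firmware.

    from ucsmsdk.ucshandle import UcsHandle

    # Placeholder virtual IP and credentials for the fabric interconnect cluster.
    handle = UcsHandle("192.0.2.10", "admin", "password")
    handle.login()

    # Inventory all managed rack units (for example, S3260 server nodes and C220 M4S servers).
    for server in handle.query_classid("ComputeRackUnit"):
        print(server.dn, server.model, server.serial, server.oper_state)

    # List service profiles, which define the identity and policies applied to each server.
    for sp in handle.query_classid("LsServer"):
        print(sp.dn, sp.assoc_state)

    handle.logout()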
Designed to help organizations make a seamless transition to emerging datacenter models that include virtualization and cloud computing, Red Hat Enterprise Linux includes support for major hardware architectures, hypervisors, and cloud providers, making deployments across physical and different virtual environments predictable and secure. Enhanced tools and new capabilities in this release enable administrators to tailor the application environment to efficiently monitor and manage compute resources and security.
Red Hat® Ceph Storage is an open, cost-effective, software-defined storage solution that enables massively scalable cloud and object storage workloads. By unifying object, block, and file storage in one platform, Red Hat Ceph Storage efficiently and automatically manages the petabytes of data needed to run businesses facing massive data growth. Ceph is a self-healing, self-managing platform with no single point of failure. Ceph enables a scale-out cloud infrastructure built on industry-standard servers that significantly lowers the cost of storing enterprise data and helps enterprises manage their exponential data growth in an automated fashion.
For OpenStack environments, Red Hat Ceph Storage is tightly integrated with OpenStack services, including Nova, Cinder, Manila, Glance, Keystone, and Swift, and it offers user-driven storage life-cycle management. Voted the No. 1 storage option by OpenStack users, the product’s highly tunable, extensible, and configurable architecture offers mature interfaces for enterprise block and object storage, making it well suited for archival, rich media, and cloud infrastructure environments.
Red Hat Ceph Storage is also ideal for object storage workloads outside of OpenStack because it is proven at web scale, flexible for demanding applications, and offers the data protection, reliability, and availability enterprises demand. It was designed from the ground up for web-scale object storage. Industry-standard APIs allow seamless migration of, and integration with, an enterprise’s applications. A Ceph object storage cluster is accessible via S3, Swift, or native API protocols.
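As a simple illustration of the S3-compatible interface, the sketch below uses the AWS boto3 SDK against a Ceph RADOS Gateway endpoint. The endpoint URL, access keys, and bucket name are hypothetical placeholders and not values from this design; RGW users and their keys are normally created with the radosgw-admin tool.

    import boto3

    # Placeholder RGW endpoint and credentials.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",
        aws_access_key_id="RGW_ACCESS_KEY",
        aws_secret_access_key="RGW_SECRET_KEY",
    )

    s3.create_bucket(Bucket="archive-demo")                          # bucket backed by the RGW data pool
    s3.put_object(Bucket="archive-demo", Key="backups/demo.txt",     # upload a small object
                  Body=b"hello ceph")
    for obj in s3.list_objects_v2(Bucket="archive-demo").get("Contents", []):
        print(obj["Key"], obj["Size"])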
Ceph has a lively and active open source community contributing to its innovation. At Ceph's core is RADOS, a distributed object store that stores data by spreading it out across multiple industry-standard servers. Ceph uses CRUSH (Controlled Replication Under Scalable Hashing), a uniquely differentiated data placement algorithm that intelligently distributes data pseudo-randomly across the cluster for better performance and data protection. Ceph supports both replication and erasure coding to protect data and also provides multi-site disaster recovery options.
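To make the replication and erasure-coding options concrete, the sketch below drives the standard ceph command-line tool from Python to create one replicated pool and one erasure-coded pool. The pool names, placement-group counts, and the 4+2 erasure-code profile are illustrative values only, not the specific settings used later in this design, and the commands assume an admin keyring on the node where they run.

    import subprocess

    def ceph(*args):
        # Run a ceph CLI command and return its decoded output.
        return subprocess.check_output(("ceph",) + args).decode()

    # Replicated pool, for example for RBD block storage, kept at three copies.
    ceph("osd", "pool", "create", "rbd-demo", "2048", "2048", "replicated")
    ceph("osd", "pool", "set", "rbd-demo", "size", "3")

    # Erasure-code profile (4 data + 2 coding chunks) and a pool for RGW object data.
    ceph("osd", "erasure-code-profile", "set", "ec-4-2", "k=4", "m=2")
    ceph("osd", "pool", "create", "rgw-demo", "2048", "2048", "erasure", "ec-4-2")

    print(ceph("osd", "pool", "ls", "detail"))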
Red Hat collaborates with the global open source Ceph community to develop new Ceph features, then packages these changes into a predictable, stable, enterprise-quality SDS product: Red Hat Ceph Storage. This development model combines the advantages of a large development community with Red Hat's industry-leading support services to offer new storage capabilities and benefits to enterprises.
Solution Design
The current solution based on Cisco UCS and Red Hat Ceph Storage is divided into multiple sections and covers three main aspects:
— Integration and configuration of the Cisco UCS hardware into Cisco UCS Manager
— Base installation of Red Hat Enterprise Linux
— Deployment of Red Hat Ceph Storage
Figure 7 Deployment Parts for the Cisco Validated Design
In addition, the design provides operational guidance for two expansion tasks:
— Expansion of the current cluster by adding one more Cisco UCS S3260 Storage Server with two C3x60 M4 server nodes working as OSD nodes
— Expansion of the current cluster by adding three more Cisco UCS C220 M4S Rack Servers working as RADOS gateways for object storage
A general design of a Red Hat Ceph Storage solution should consider the principles shown in Figure 8.
The main workload categories and their minimum recommendations (Figure 8) are:
IOPS-optimized: Min. 10 OSD nodes; 10G - 40G network; 4 - 10 cores per OSD / 16 GB + 2 GB per OSD memory; SSD:NVMe or all NVMe with co-located journals; Ceph RBD (Block) on Replicated Pools.
Throughput-optimized: Min. 10 OSD nodes; 10G network (40G when > 12 HDDs/node); 1 core per 2 HDDs / 16 GB + 2 GB per OSD memory; HDD:NVMe, or 4 - 5:1 HDD:SSD; Ceph RBD (Block) and Ceph RGW (Object) on Replicated Pools.
Capacity-Archive: Min. 7 OSD nodes; 10G network (or 40G for latency-sensitive requirements); 1 core per 2 HDDs / 16 GB + 2 GB per OSD memory; all HDD with co-located journals; Ceph RGW (Object) on Erasure-Coded Pools.
The solution for the current Cisco Validated Design follows a mixed workload setup of Throughput- and Capacity-intensive configurations and is classified as follows [2]:
Cluster Size: Starting with 10 OSD nodes and adding two more OSD nodes.
Network: All Ceph nodes connected with 40G.
CPU / Memory: All nodes come with 128 GB memory and more than 40 Core-GHz.
OSD Disk: The solution is configured for a 6:1 HDD:SSD ratio.
Data Protection: Ceph RBD with 3 x Replication and Ceph RGW with Erasure Coding.
Ceph Admin, Monitor, and RADOS gateway nodes are deployed on Cisco UCS C220 M4S rack servers.
Ceph OSD nodes are deployed on Cisco UCS S3260 Storage Servers.
Deploying the solution involves three steps: first, integrating the Cisco UCS S3260 Storage Server and Cisco UCS C220 M4S servers into Cisco UCS Manager, connected to the Cisco UCS 6332 Fabric Interconnects and then to the Cisco Nexus 9332PQ switches; second, installing Red Hat Enterprise Linux and preparing the nodes; and third, installing, configuring, and deploying Red Hat Ceph Storage. Figure 9 illustrates the deployment steps.
Figure 9 Deployment Parts for Red Hat Ceph Storage on Cisco UCS
[2] A detailed Bill of Materials list can be found in the Bill of Materials section.
In addition to the design and deployment parts of the Red Hat Ceph Storage solution on Cisco UCS, this Cisco Validated Design provides operational guidance on how to add more capacity and another access layer to the starting configuration.
The first part of the installation and configuration of the solution comprises one Admin node, three Ceph Monitor nodes, and 10 Ceph OSD nodes, as shown in Figure 10. This matches the minimum size of 10 OSD nodes for a Throughput-intensive Ceph cluster.
Figure 10 Base Installation of Red Hat Ceph Storage on Cisco UCS
In the second step, the environment is expanded by adding one more Cisco UCS S3260 Storage Server enclosure with two C3x60 M4 server nodes inside. All steps are described, showing the simplicity of adding further capacity in less than 30 minutes. Figure 11 shows the additional integration of a Cisco UCS S3260 Storage Server.
Figure 11 Expansion of Red Hat Ceph Storage Cluster with Ceph OSD Nodes
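After an expansion of this kind, cluster capacity and membership can be spot-checked with the Ceph Python bindings (python-rados). The following is a minimal, read-only sketch that assumes the default /etc/ceph/ceph.conf path and a client.admin keyring on the node where it runs.

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # uses client.admin by default
    cluster.connect()

    # Raw capacity before and after adding the new S3260 server nodes.
    stats = cluster.get_cluster_stats()
    print("raw KB:", stats["kb"], "used KB:", stats["kb_used"], "avail KB:", stats["kb_avail"])

    # Confirm the new OSD hosts appear in the CRUSH map.
    ret, out, errs = cluster.mon_command(json.dumps({"prefix": "osd tree", "format": "json"}), b"")
    tree = json.loads(out.decode("utf-8"))
    hosts = [node["name"] for node in tree["nodes"] if node["type"] == "host"]
    print("OSD hosts:", hosts)

    cluster.shutdown()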