SUSE Enterprise Storage 4
Publication Date: 02/28/
SUSE LLC, 10 Canal Park Drive, Suite 200, Cambridge MA 02141, USA. https://www.suse.com/documentation
Copyright © 2017 SUSE LLC
Copyright © 2010-2014, Inktank Storage, Inc. and contributors.
The text of and illustrations in this document are licensed by Inktank Storage under a Creative Commons Attribution-ShareAlike 4.0 International license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/4.0/legalcode. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
This document is an adaptation of original works found at http://ceph.com/docs/master/ (2015-01-30).
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, JBoss, MetaMatrix, Fedora, the Infinity Logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the United States and other countries. Java® is a registered trademark of Oracle and/or its affiliates. XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries. MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries. All other trademarks are the property of their respective owners.
Contents
17.4 Managing RADOS Gateway Access
Managing S3 Access • Managing Swift Access
17.5 Multi-site Object Storage Gateways
Terminology • Example Cluster Setup • System Keys • Naming Conventions • Default Pools • Creating a Realm • Deleting the Default Zonegroup • Creating a Master Zonegroup • Creating a Master Zone • Creating a Secondary Zone • Adding RADOS Gateway to the Second Cluster
18 Ceph iSCSI Gateway
18.1 iSCSI Block Storage
The Linux Kernel iSCSI Target • iSCSI Initiators
18.2 General Information about lrbd
18.3 Deployment Considerations
18.4 Installation and Configuration
Install SUSE Enterprise Storage and Deploy a Ceph Cluster • Installing the ceph_iscsi Pattern • Create RBD Images • Export RBD Images via iSCSI • Optional Settings • Advanced Settings
18.5 Connecting to lrbd-managed Targets
Linux (open-iscsi) • Microsoft Windows (Microsoft iSCSI initiator) • VMware
18.6 Conclusion
19 Clustered File System
19.1 Ceph Metadata Server
Adding a Metadata Server • Configuring a Metadata Server
19.2 CephFS
Creating CephFS • Mounting CephFS • Unmounting CephFS • CephFS in /etc/fstab
19.3 Managing Failover
Configuring Standby Daemons • Examples
About This Guide
2 Feedback
Several feedback channels are available:
User Comments We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.
Mail For feedback on the documentation of this product, you can also send a mail to doc-team@suse.de. Make sure to include the document title, the product version, and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).
3 Documentation Conventions
The following typographical conventions are used in this manual:
/etc/passwd: directory names and file names
placeholder: replace placeholder with the actual value
PATH: the environment variable PATH
ls, --help: commands, options, and parameters
user: users or groups
Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard
File, File Save As: menu items, buttons
Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.
4 About the Making of This Manual
This book is written in Geekodoc, a subset of DocBook (see http://www.docbook.org). The XML source files were validated by xmllint, processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF can be formatted through FOP from Apache or through XEP from RenderX. The authoring and publishing tools used to produce this manual are available in the package daps. The DocBook Authoring and Publishing Suite (DAPS) is developed as open source software. For more information, see http://daps.sf.net/.
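As a rough sketch of such a validate-and-convert pipeline, assuming hypothetical file and stylesheet names (the exact DAPS invocation differs), the individual steps might look like this:

# validate the DocBook/Geekodoc source (file name is hypothetical)
xmllint --noout --valid book.xml

# convert the XML to XSL-FO with a DocBook FO stylesheet (stylesheet path is hypothetical)
xsltproc -o book.fo docbook-fo-stylesheet.xsl book.xml

# render the XSL-FO to PDF with Apache FOP
fop book.fo book.pdf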
1 About SUSE Enterprise Storage
1.1 Introduction
SUSE Enterprise Storage is a distributed storage system designed for scalability, reliability, and performance, based on the Ceph technology. As opposed to conventional systems, which use allocation tables to store and fetch data, Ceph uses a pseudo-random data distribution function to place data, which reduces the number of look-ups required in storage. Data is stored on intelligent object storage devices (OSDs) by daemons that automate data management tasks such as data distribution, data replication, failure detection, and recovery. Ceph is both self-healing and self-managing, which reduces administrative and budget overhead.
The Ceph storage cluster uses two mandatory types of nodes—monitors and OSD daemons:
Monitor Monitoring nodes maintain information about the cluster health state, a map of the other monitoring nodes, and a CRUSH map. Monitor nodes also keep a history of changes performed to the cluster.
OSD Daemon An OSD daemon stores data and manages the data replication and rebalancing processes. Each OSD daemon handles one or more OSDs, which can be physical disks/partitions or logical volumes. OSD daemons also communicate with monitor nodes and provide them with the state of the other OSD daemons.
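As a quick, hedged illustration of how these two node types report into a running cluster (output details vary by release), the standard ceph command-line client can be used:

# overall cluster status, including the monitor quorum and the number of OSDs
ceph -s

# list the monitors currently known to the cluster
ceph mon stat

# show all OSD daemons arranged in the CRUSH hierarchy of their hosts
ceph osd tree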
The Ceph storage cluster can use the following optional node types:
Metadata Server (MDS) The metadata servers store metadata for the Ceph file system. By using MDS you can execute basic file system commands such as ls without overloading the cluster.
RADOS Gateway RADOS Gateway is an HTTP REST gateway for the RADOS object store. You can also use this node type when using the Ceph file system.
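As a minimal sketch of how these optional node types are consumed (the monitor address, mount point, secret file, and user name below are placeholders, not values taken from this guide):

# mount CephFS with the kernel client; at least one MDS must be running
mount -t ceph 192.168.100.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

# create an S3-capable user on the RADOS Gateway
radosgw-admin user create --uid=example-user --display-name="Example User"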
We strongly recommend installing only one node type on a single server.
The Ceph environment has the following features:
Controlled, Scalable, Decentralized Placement of Replicated Data Using CRUSH The Ceph system uses a unique map called CRUSH (Controlled Replication Under Scalable Hashing) to assign data to OSDs in an efficient manner. Data placement is computed rather than looked up in tables. This does away with the disk look-ups that conventional allocation-table-based systems require, reducing the communication between the storage and the client. A client armed with the CRUSH map and metadata such as the object name and byte offset knows where to find the data or on which OSD to place it.

CRUSH maintains a hierarchy of devices and the replica placement policy. As new devices are added, data from existing nodes is moved to the new device to improve the distribution with regard to workload and resilience. As part of the replica placement policy, weights can be assigned to devices so that some devices are favored over others. For example, higher weights can be given to Solid State Drives (SSDs) and lower weights to conventional rotational hard disks to improve overall performance.

CRUSH is designed to distribute data optimally and make efficient use of the available devices. CRUSH supports different ways of data distribution, such as the following:
n-way replication (mirroring)
RAID parity schemes
Erasure Coding
Hybrid approaches such as RAID-
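As a hedged sketch of how these distribution schemes and device weights appear in practice (pool names, placement group counts, and the OSD ID are placeholders): replication or erasure coding is chosen when a pool is created, and the weight of an individual OSD can be adjusted in the CRUSH map:

# create a pool that stores n-way replicas
ceph osd pool create replicated-pool 128 128 replicated

# create a pool that uses the default erasure-code profile
ceph osd pool create ec-pool 128 128 erasure

# favor a faster device (for example an SSD) by raising its CRUSH weight
ceph osd crush reweight osd.3 2.0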
Reliable Autonomic Distributed Object Storage (RADOS) The intelligence in the OSD daemons allows tasks such as data replication and migration to be handled automatically for self-management and self-healing. By default, data written to Ceph storage is replicated within the OSDs. The level and type of replication are configurable. In case of failures, the CRUSH map is updated and data is written to new (replicated) OSDs. The intelligence of the OSD daemons enables them to handle data replication, data migration, failure detection, and recovery; these tasks are managed automatically and autonomously. This also allows the creation of various pools for different sorts of I/O.
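Because the replication level is configurable per pool, a minimal sketch of adjusting it (the pool name and values are placeholders) could look like this:

# keep three replicas of every object in the pool
ceph osd pool set replicated-pool size 3

# continue serving I/O as long as at least two replicas are available
ceph osd pool set replicated-pool min_size 2

# verify the current replication level
ceph osd pool get replicated-pool size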