In this post, we look at how to deploy a Ceph cluster (v16+) and then use it with Apache CloudStack and KVM on Ubuntu 20.04.

Refer to Ceph docs as necessary. If you’re new to Ceph, you can start here, or deep dive into the architecture.

This references the ShapeBlue three-part Ceph and CloudStack blog series:

https://www.shapeblue.com/ceph-and-cloudstack-part-1/

https://www.shapeblue.com/ceph-and-cloudstack-part-2/

https://www.shapeblue.com/ceph-and-cloudstack-part-3/

Host Configuration

In this Ceph cluster, we have three hosts/nodes that serve as both mon and osd nodes, and one admin node that serves as the mgr and runs the Ceph dashboard.

192.168.1.10 cloudpi   # Admin/mgr and dashboard
192.168.1.11 pikvm1    # mon and osd
192.168.1.12 pikvm2    # mon and osd
192.168.1.13 pikvm3    # mon and osd
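
These hostnames should resolve consistently on every node. If they are not in DNS, one simple option is to append them to /etc/hosts on each node (a minimal sketch using the addresses above):

sudo tee -a /etc/hosts <<EOF
192.168.1.10 cloudpi
192.168.1.11 pikvm1
192.168.1.12 pikvm2
192.168.1.13 pikvm3
EOF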

Configure the SSH config on the admin node:

tee -a ~/.ssh/config <<EOF
Host *
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
    IdentitiesOnly yes
    ConnectTimeout 0
    ServerAliveInterval 300
EOF

Copy the Ceph admin SSH public key (/etc/ceph/ceph.pub) to the other nodes (say, from your laptop or the admin node):

ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.1.11
ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.1.12
ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.1.13

Install cephadm

Newer Ceph versions recommend using cephadm to install and manage a Ceph cluster using containers and systemd. Cephadm's requirements include python3, systemd, podman or docker, time synchronization (ntp here) and lvm2. Let's install them on all nodes:

sudo apt-get install -y python3 ntp lvm2

Install podman:

source /etc/os-release
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
wget -nv https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/xUbuntu_${VERSION_ID}/Release.key -O- | sudo apt-key add -
sudo apt-get update
sudo apt-get -y install podman

Finally, configure the ceph repository (pacific/v16 in this example) and install cephadm and ceph-common on all the nodes:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb https://download.ceph.com/debian-pacific/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update
sudo apt-get install -y cephadm
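
After installation, you can check that a node meets the requirements listed above (container runtime, systemd, time synchronization, lvm2) with:

sudo cephadm check-host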

If you installed cephadm as a standalone binary (for example, downloaded with wget), you can add a specific release repo and install ceph-common using:

cephadm add-repo --release pacific
cephadm install ceph-common

Bootstrap Cluster

Bootstrap the Ceph cluster by running the following only on the admin node (192.168.1.10 in this example):

cephadm bootstrap --mon-ip 192.168.1.10 \
                  --initial-dashboard-user admin \
                  --initial-dashboard-password Passw0rdHere

On a successful run, the above command bootstraps a Ceph cluster, writing the Ceph config to /etc/ceph/ceph.conf and the SSH public key to /etc/ceph/ceph.pub, using container images orchestrated by podman.

The dashboard will be available on the mon IP at https://192.168.1.10:8443/, which you can log in to using the user admin and the password provided in the command (Passw0rdHere in the example above).
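
At this point, a quick sanity check from the admin node should show the newly bootstrapped single-mon cluster (cephadm shell runs the ceph CLI inside a container, handy if ceph-common isn't installed locally):

sudo cephadm shell -- ceph -s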

Add hosts

Add hosts after disabling automatic mon deployment:

ceph orch apply mon --unmanaged
ceph orch host add pikvm1 192.168.1.11
ceph orch host add pikvm2 192.168.1.12
ceph orch host add pikvm3 192.168.1.13
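
You can list the hosts known to the orchestrator with:

ceph orch host ls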

Add Monitors

Read more about monitors here.

Specify the public network/CIDR for monitor traffic:

ceph config set mon public_network 192.168.1.0/24

Add mons:

ceph orch daemon add mon pikvm1:192.168.1.11
ceph orch daemon add mon pikvm2:192.168.1.12
ceph orch daemon add mon pikvm3:192.168.1.13

Now, enable automatic placement of daemons:

ceph orch apply mon --placement="pikvm1,pikvm2,pikvm3" --dry-run
ceph orch apply mon --placement="pikvm1,pikvm2,pikvm3"

Add OSDs

Read more about OSD here.

List available physical disks of the added hosts:

ceph orch device ls
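
A device only shows as available if it is empty (no partitions, LVM state or filesystem). If a disk was previously used, you can wipe it first; note this destroys all data on that device (host and device names below follow the example above):

ceph orch device zap pikvm1 /dev/sda --force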

Then, use the following syntax to specify the host and device you want to add as an OSD:

 ceph orch daemon add osd pikvm1:/dev/sda
 ceph orch daemon add osd pikvm2:/dev/sda
 ceph orch daemon add osd pikvm3:/dev/sda

Finally, you can check your OSDs across hosts with:

ceph osd tree

Optional: Additional admin host

Hosts with the _admin label will have ceph.conf and the client.admin keyring copied to /etc/ceph, which allows those hosts to use the ceph CLI. For example, add the label with:

ceph orch host label add pikvm1 _admin

Optional: Disable SSL on Dashboard

To disable SSL on the Ceph Dashboard, for example when it is used only inside an internal network:

ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_addr 192.168.1.10
ceph config set mgr mgr/dashboard/server_port 8000
ceph dashboard set-grafana-api-ssl-verify False
ceph mgr module disable dashboard
ceph mgr module enable dashboard

Now, the dashboard is accessible over http://192.168.1.10:8000/
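
You can confirm the address and port the mgr is serving the dashboard on with:

ceph mgr services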

Optional: Tuning

Refer to https://docs.ceph.com/en/latest/start/hardware-recommendations/#memory

For example, configure the per-OSD memory target and the MDS cache memory limit to 2GB each (or as required):

ceph config set osd osd_memory_target 2G
ceph config set mds mds_cache_memory_limit 2G

You can confirm the values using the ceph config get <who> <option> command, for example:

ceph config get osd osd_memory_target
ceph config get mds mds_cache_memory_limit
ceph config get mon public_network

Add Ceph Storage to CloudStack

Check Ceph status using the following command (or using the Ceph dashboard):

ceph -s

Once you've ensured your Ceph cluster is up and healthy, let's create a new Ceph pool and add it to CloudStack:

ceph osd pool create meowceph 64 replicated
ceph osd pool set meowceph size 3
rbd pool init meowceph
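
You can verify the pool and its replication settings with:

ceph osd pool ls detail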

Next, create a dedicated auth key for this pool:

ceph auth get-or-create client.cloudstack mon 'profile rbd' osd 'profile rbd pool=meowceph'
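
The command prints the generated cephx key for client.cloudstack; you can retrieve it again later with:

ceph auth get-key client.cloudstack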

Finally, you can add this pool as a zone-wide Ceph (RBD) primary storage in CloudStack, using the above key as the RADOS secret for the user cloudstack, and specifying the monitor domain or IP along with a storage tag.
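
For example, a rough sketch of the equivalent API call using the cmk (CloudMonkey) CLI is shown below; the zone UUID, storage tag and RADOS secret are placeholders for your own values, and the same fields are available in the UI under Infrastructure > Primary Storage:

cmk create storagepool scope=zone zoneid=<zone-uuid> hypervisor=KVM \
    name=meowceph tags=ceph \
    url="rbd://cloudstack:<RADOS-secret>@192.168.1.11/meowceph"

If the secret contains special characters such as '+' or '/', they may need to be URL-encoded in the url parameter.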


Next, you can create compute and disk offerings with the same storage tag so that VM deployments use your newly added Ceph storage pool.

Additional: Fun with CephFS

CephFS is a POSIX-compliant file system on top of RADOS. I've started using CephFS as a distributed, shared directory for storing documents and photos, which, along with rsync, allows me to keep all my home computers in sync with each other.

To create a CephFS (say with the name cephfs), simply run:

 ceph fs volume create cephfs

That’s it!
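
You can check that the filesystem and its MDS daemons are up with:

ceph fs status cephfs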

Now, to mount and use CephFS on your client/computer, you can either use Ceph FUSE or, on a Linux environment, simply use the kernel-based module. For that, install ceph-common:

sudo apt-get install -y ceph-common

Now, you can mount your CephFS named cephfs using:

sudo mount -t ceph 192.168.1.11:6789:/ ceph -o name=<username>,secret=<secret-key>

Note: you should create a dedicated client (cephx user) for use with your CephFS.
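
For example, a dedicated CephFS client (named client.rohit here, to match the fstab entry below) can be created and its secret key retrieved with:

ceph fs authorize cephfs client.rohit / rw
ceph auth get-key client.rohit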

To make this mountable using mount -a or at boot, you can put this in your /etc/fstab, where you can specify multiple mon host IPs (port 6789 is assumed by default when not explicitly defined):

192.168.1.11,192.168.1.12,192.168.1.13:/ /home/rohit/ceph  ceph  name=rohit,secret=<SecretHere>,defaults 0 2

I then use rsync to sync a local folder to CephFS and have CephFS mounted on all of my home computers to keep data in sync:

rsync -avzP --delete-after documents/ ceph/documents/