# Enabling the Ceph Rados Gateway

Setting up the Ceph Rados components to actually function.

### Setup

#### Secrets

##### radosgw Keyring

```
ceph-authtool --create-keyring /etc/pve/priv/ceph.client.radosgw.keyring
```

###### On each node

```
ln -s /etc/pve/priv/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
ln -s /etc/pve/priv/ceph.client.radosgw.keyring /etc/ceph/ceph.client.radosgw.keyring
```

##### Node keys

```
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-40 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-41 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-42 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-43 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-44 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-45 --gen-key
```

##### Privileges

###### create the privilege tokens

```
ceph-authtool -n client.radosgw.px-m-40 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.px-m-41 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.px-m-42 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.px-m-43 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.px-m-44 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.px-m-45 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
```
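The per-node key generation and capability assignment above differ only in the hostname, so the same steps can also be driven by a small shell loop. This is just a sketch of the commands shown above; the node list and keyring path are the ones already used here:

```
#!/usr/bin/env bash
# Sketch: generate a key and assign caps for each node's radosgw client.
# Assumes the keyring was already created with --create-keyring as shown above.
KEYRING=/etc/pve/priv/ceph.client.radosgw.keyring

for node in px-m-40 px-m-41 px-m-42 px-m-43 px-m-44 px-m-45; do
  ceph-authtool "${KEYRING}" -n "client.radosgw.${node}" --gen-key
  ceph-authtool -n "client.radosgw.${node}" \
    --cap osd 'allow rwx' --cap mon 'allow rwx' "${KEYRING}"
done
```

Either form ends up with the same six `client.radosgw.*` entries in the shared keyring.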
###### Add the newly minted auth tokens to the cluster

Using the admin keyring, add the newly minted tokens to the cluster.

```
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-40 -i /etc/pve/priv/ceph.client.radosgw.keyring
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-41 -i /etc/pve/priv/ceph.client.radosgw.keyring
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-42 -i /etc/pve/priv/ceph.client.radosgw.keyring
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-43 -i /etc/pve/priv/ceph.client.radosgw.keyring
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-44 -i /etc/pve/priv/ceph.client.radosgw.keyring
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-45 -i /etc/pve/priv/ceph.client.radosgw.keyring
```

```
added key for client.radosgw.px-m-40
added key for client.radosgw.px-m-41
added key for client.radosgw.px-m-42
added key for client.radosgw.px-m-43
added key for client.radosgw.px-m-44
added key for client.radosgw.px-m-45
```

# Config

### Host Config

#### Update /etc/services

```
radosgw         7480/tcp        # Ceph Rados gw
```

#### Increase Systemic Limits

### Ceph Config

#### Adjust ceph config file

```
[client.radosgw.px-m-40]
host = px-m-40
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-41]
host = px-m-41
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-42]
host = px-m-42
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-43]
host = px-m-43
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-44]
host = px-m-44
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-45]
host = px-m-45
keyring = /etc/pve/priv/ceph.client.radosgw.keyring
log file = /var/log/ceph/client.radosgw.$host.log
rgw_dns_name = dog.wolfspyre.io
rgw_zone =
```

Other rgw-related options to be aware of (some with values, the rest listed by name only):

```
rgw_dns_name = dog.wolfspyre.io
rgw_log_nonexistent_bucket = true
rgw_enable_ops_log = true
rgw_enable_usage_log = true
osd_map_message_max = 10
objecter_inflight_ops = 24576
rgw_thread_pool_size = 512
rgw_admin_entry
rgw_zone
rgw_zone_id
rgw_zone_root_pool
rgw_default_zone_info_oid
rgw_region
rgw_region_root_pool
rgw_default_region_info_oid
rgw_zonegroup
rgw_zonegroup_id
rgw_zonegroup_root_pool
rgw_default_zonegroup_info_oid
rgw_realm
rgw_realm_id
rgw_realm_id_oid
```

https://docs.ceph.com/en/latest/radosgw/config-ref/#confval-rgw_relaxed_s3_bucket_names

### Installation and Service Enablement

#### Package Installation

`apt-get install radosgw librados2-perl python3-rados librados2 librgw2`

#### Service enablement

`systemctl enable radosgw`

`service radosgw start`
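Before putting a load balancer in front of the gateways, it's worth a quick sanity check that each node's radosgw instance actually answers. A rough check, assuming the service came up on the default port 7480 (the one registered in `/etc/services` above); an anonymous request to the root of an RGW endpoint normally returns a small S3-style XML body:

```
# Sketch: confirm the local radosgw instance is listening and answering.
ss -tlnp | grep 7480                            # something should be bound to :7480
curl -s http://localhost:7480/ | head -c 300 ; echo
# Expect a ListAllMyBucketsResult XML document rather than a refused connection.
```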
# Actually connecting

Moving parts related to having a functional front door.

## Setting up the external gateway

### DNS Records

#### wildcard and A records

I chose `dog.wolfspyre.io` as the root subdomain. Since I want my offsite hosts to be able to reach this as well, I need both external and internal resolution of the endpoints.

##### external records

```
# wolfspyre.io
dog    IN  A  108.221.46.29
*.dog  IN  A  108.221.46.29
```

##### internal records

```
# wolfspyre.io
dog  IN  NS  ns01.wolfspyre.io.
dog  IN  NS  ns02.wolfspyre.io.
dog  IN  NS  ns03.wolfspyre.io.
```

```
# dog.wolfspyre.io
@             IN  A  198.18.1.33
*             IN  A  198.18.1.33
skwirreltrap  IN  A  198.18.198.1
atticus       IN  A  198.18.198.2
evey          IN  A  198.18.198.3
px-m-40       IN  A  198.18.198.40
px-m-41       IN  A  198.18.198.41
px-m-42       IN  A  198.18.198.42
px-m-43       IN  A  198.18.198.43
px-m-44       IN  A  198.18.198.44
px-m-45       IN  A  198.18.198.45
```

### firewall adjustment

- I needed to permit traffic from internal hosts to the VIP on `tcp:443`
- I needed to permit traffic from the firewalls to the proxmox nodes on `tcp:7480`

### OPNsense Haproxy config

#### Real Servers (Backends)

###### Main info

- Name or Prefix : px-m-40-7080
- Description : px-m-40-rados
- Type : static

###### Static Server

- FQDN or IP : px-m-40.dog.wolfspyre.io
- Port : 7480
- Mode : active [default]
- Multiplexer Protocol : auto-selection [recommended]
- Prefer IP Family : prefer IPv4

###### Common Options

- SSL : [ ]
- SSL SNI : px-m-40.dog.wolfspyre.io
- Verify SSL Certificate : [ ]
- SSL Verify CA : Nothing Selected
- SSL Verify CRL : None
- SSL Client Certificate : None
- Max Connections : N/A
- Weight : N/A
- Check Interval : N/A
- Down Interval : N/A
- Port to check : N/A
- Source address : 198.18.198.1
- Option pass-through : N/A

#### Backend Pools

- advanced mode : [ x ]
- Enabled : [ x ]
- Name : PXMCeph-S3-Pool
- Description : Proxmox Ceph S3 Backend Pool
- Mode : HTTP (Layer 7) [default]
- Balancing Algorithm : Source-IP Hash [default]
- Random Draws : 2
- Proxy Protocol : none
- Servers :
  - pxm-40-8006
  - pxm-41-8006
  - pxm-42-8006
  - pxm-43-8006
  - pxm-44-8006
  - pxm-45-8006
- FastCGI Application : none
- Resolver : none
- Resolver Options : none
- Prefer IP Family : prefer IPv4
- Source address : 198.19.198.1
- Enable Health Checking : [x]

###### Health Checking

- Health Monitor : PXM UI Port 8006 Check
- Log Status Changes : a
- Check Interval : a
- Down Interval : a
- Unhealthy Threshold : a
- Healthy Threshold : a
- E-Mail Alert : none

###### HTTP(S) settings

- Enable HTTP/2 : [ ]
- HTTP/2 without TLS : [ ]
- Advertise Protocols (ALPN) :
  - HTTP/1.1
  - HTTP/1.0

###### Persistence

- Persistence type : Stick-table persistence [default]

##### Stick-table persistence

- Table type : none
- Stored data types : Connection count
- Expiration time : 30m
- Size : 50k
- Cookie name : none
- Cookie length : none
- Connection rate period : 60s
- Session rate period : 60s
- HTTP request rate period : 60s
- HTTP error rate period : 60s
- Bytes in rate period : 60s
- Bytes out rate period : 60s

###### Basic Authentication

- Enable : [ ]
- Allowed Users : Nothing selected
- Allowed Groups : Nothing selected

###### Tuning Options

- Connection Timeout : 20s
- Check Timeout : 10s
- Server Timeout : 20s
- Retries : 1
- Option pass-through : none
- Default for server : none
- Use Frontend port : [ ]
- HTTP reuse : Always
- Enable Caching : [ X ]

###### Rules

- Select Rules : noneyet

###### Error Messages

- Select Error Messages : Nothing selected

### Condition

- `COND:HostEndsWith-dog_wolfspyre_io`
- `COND:HostMatches-dog_wolfspyre_io`

### Rules

- `RUL-AllowHTTPReq-EndsWith-dog_wolfspyre_io`

### Health Check

- readiness check on `tcp:7480`

### Backend Pool

- `Ceph-S3-VIP-Pool`

### Frontend Pools

#### Internal pool

#### External pool
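With the wildcard DNS records, the firewall rules, and the HAProxy backend/frontend pieces above in place, a rough end-to-end smoke test from a client ties the whole front door together. This is only a sketch: `somebucket` is an arbitrary label chosen to exercise the wildcard record and the `rgw_dns_name` virtual-hosted bucket addressing, and it assumes the frontend terminates TLS on `tcp:443` at the VIP as implied by the firewall rules above.

```
# Sketch: exercise DNS, the VIP, HAProxy, and radosgw end to end.
dig +short dog.wolfspyre.io
dig +short somebucket.dog.wolfspyre.io    # wildcard record: any label should resolve

# Anonymous requests through the load balancer should come back as S3-style XML,
# not a connection error or an HAProxy 503.
curl -skv https://dog.wolfspyre.io/            2>&1 | tail -n 20
curl -skv https://somebucket.dog.wolfspyre.io/ 2>&1 | tail -n 20
```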
# Testing

# Maintenance