
Robustifying the Proxmox UI

Why? #

We’re going to be playing with HAProxy and Proxmox for our S3 endpoint in a little bit… so putting the Proxmox web UI behind HAProxy is a great starting point.

Resources #

Prerequisites #

HAProxy setup #

It kinda goes without saying that having a functional HAProxy facade is a necessary prerequisite.

I imagine this general strategy would work with plain HAProxy too, but I’m specifically going to be talking about the HAProxy implementation within OPNsense.

A functional PVE cluster #

It kinda goes without saying: if you want to use Proxmox’s Ceph setup to enable this, you prolly need a Proxmox cluster.

Monitoring-privileged token #

To programmatically identify whether a specific node is healthy, you’ll need an API token.

Sure, we could just start with simple port liveness…

Let’s do that first.
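As a sketch of what a liveness check looks like in raw HAProxy terms (the OPNsense GUI generates equivalent config; the backend name and server line here are illustrative):

```
# Illustrative plain-HAProxy backend with a liveness check against the
# Proxmox UI port. An HTTP check on / verifies pveproxy actually answers,
# not merely that the port is open.
backend pve_ui
    mode http
    option httpchk GET /
    server px-m-40 px-m-40.dog.wolfspyre.io:8006 check check-ssl verify none
```

`check-ssl` is needed because pveproxy only speaks TLS on 8006; `verify none` sidesteps the self-signed cluster cert for health-check purposes.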

Frontend #

You’re going to need something to control and distribute connections to the endpoint you’re creating.

ssl cert #

You’ll want a wildcard ssl cert.

Enabling the RGW endpoint in Proxmox’s Ceph storage #

Setup #

secrets #

radosgw Keyring #

ceph-authtool --create-keyring /etc/pve/priv/ceph.client.radosgw.keyring

On each node #

ln -s /etc/pve/priv/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
ln -s /etc/pve/priv/ceph.client.radosgw.keyring /etc/ceph/ceph.client.radosgw.keyring

Node keys #

ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-40 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-41 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-42 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-43 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-44 --gen-key
ceph-authtool /etc/pve/priv/ceph.client.radosgw.keyring -n client.radosgw.px-m-45 --gen-key

Privileges #
create the privilege tokens #

ceph-authtool -n client.radosgw.px-m-40 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.px-m-41 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.px-m-42 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.px-m-43 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.px-m-44 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring
ceph-authtool -n client.radosgw.px-m-45 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/pve/priv/ceph.client.radosgw.keyring

Add the newly minted auth tokens to the cluster #

Using the admin keyring, add the newly minted tokens to the cluster.

ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-40 -i /etc/pve/priv/ceph.client.radosgw.keyring
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-41 -i /etc/pve/priv/ceph.client.radosgw.keyring
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-42 -i /etc/pve/priv/ceph.client.radosgw.keyring
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-43 -i /etc/pve/priv/ceph.client.radosgw.keyring
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-44 -i /etc/pve/priv/ceph.client.radosgw.keyring
ceph -k /etc/pve/priv/ceph.client.admin.keyring auth add client.radosgw.px-m-45 -i /etc/pve/priv/ceph.client.radosgw.keyring

added key for client.radosgw.px-m-40
added key for client.radosgw.px-m-41
added key for client.radosgw.px-m-42
added key for client.radosgw.px-m-43
added key for client.radosgw.px-m-44
added key for client.radosgw.px-m-45
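Since the same three commands repeat for every node, they can be consolidated. A hypothetical loop form (it echoes each command for review; pipe the output to `sh` on a PVE node to actually run them):

```shell
# Hypothetical consolidation of the per-node keyring steps into one loop.
# Echoes each command instead of running it, so the list can be reviewed first.
NODES="px-m-40 px-m-41 px-m-42 px-m-43 px-m-44 px-m-45"
KEYRING=/etc/pve/priv/ceph.client.radosgw.keyring
ADMIN=/etc/pve/priv/ceph.client.admin.keyring

gen_rgw_auth_cmds() {
  for node in $NODES; do
    echo "ceph-authtool $KEYRING -n client.radosgw.$node --gen-key"
    echo "ceph-authtool -n client.radosgw.$node --cap osd 'allow rwx' --cap mon 'allow rwx' $KEYRING"
    echo "ceph -k $ADMIN auth add client.radosgw.$node -i $KEYRING"
  done
}

gen_rgw_auth_cmds
```

Review the output, then `gen_rgw_auth_cmds | sh` to execute.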

Adjust ceph config file #

[client.radosgw.px-m-40]
        host = px-m-40
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-41]
        host = px-m-41
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-42]
        host = px-m-42
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-43]
        host = px-m-43
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-44]
        host = px-m-44
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

[client.radosgw.px-m-45]
        host = px-m-45
        keyring = /etc/pve/priv/ceph.client.radosgw.keyring
        log file = /var/log/ceph/client.radosgw.$host.log
        rgw_dns_name = dog.wolfspyre.io

rgw_log_nonexistent_bucket = true
rgw_enable_ops_log = true
rgw_enable_usage_log = true

Package Installation #

apt-get install radosgw librados2-perl python3-rados librados2 librgw2

Update /etc/services #

radosgw         7480/tcp                        # Ceph Rados gw

Service enablement #

systemctl enable radosgw
service radosgw start

Setting up the external gateway #

DNS Records #

wildcard and A records #

I chose dog.wolfspyre.io as the root subdomain. Since I want my offsite hosts to be able to access this as well, I need to enable both external and internal resolution of the endpoints.

external records #
# wolfspyre.io
dog IN A 108.221.46.29
*.dog IN A 108.221.46.29
internal records #
# wolfspyre.io
dog IN NS ns01.wolfspyre.io.
dog IN NS ns02.wolfspyre.io.
dog IN NS ns03.wolfspyre.io.
# dog.wolfspyre.io
@            IN A 198.18.1.33
*            IN A 198.18.1.33

skwirreltrap IN A 198.18.198.1
atticus      IN A 198.18.198.2
evey         IN A 198.18.198.3

px-m-40      IN A 198.18.198.40
px-m-41      IN A 198.18.198.41
px-m-42      IN A 198.18.198.42
px-m-43      IN A 198.18.198.43
px-m-44      IN A 198.18.198.44
px-m-45      IN A 198.18.198.45

firewall adjustment #

I needed to permit traffic from internal hosts to the VIP on tcp:443

OPNsense Haproxy config #


Real Servers (Backends) #


Main info #
  • Name or Prefix
    px-m-40-7480
  • Description
    px-m-40-rados
  • Type
    static
Static Server #
  • FQDN or IP
    px-m-40.dog.wolfspyre.io
  • Port
    7480
  • Mode
    active [default]
  • Multiplexer Protocol
    auto-selection [recommended]
  • Prefer IP Family
    prefer IPv4
Common Options #
  • SSL
    [ ]
  • SSL SNI
    px-m-40.dog.wolfspyre.io
  • Verify SSL Certificate
    [ ]
  • SSL Verify CA
    Nothing Selected
  • SSL Verify CRL
    None
  • SSL Client Certificate
    None
  • Max Connections
    N/A
  • Weight
    N/A
  • Check Interval
    N/A
  • Down Interval
    N/A
  • Port to check
    N/A
  • Source address
    198.18.198.1
  • Option pass-through
    N/A
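Roughly, that GUI entry maps to a raw HAProxy server line along these lines (my approximation; OPNsense renders the actual config):

```
# Approximate raw-HAProxy equivalent of the Real Server entry above.
server px-m-40-7480 px-m-40.dog.wolfspyre.io:7480 check source 198.18.198.1
```

SSL is left off the server line because the RGW backends speak plain HTTP on 7480; TLS terminates at the HAProxy frontend.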

Backend Pools #


  • advanced mode
    [ x ]
  • Enabled
    [ x ]
  • Name
    PXMCeph-S3-Pool
  • Description
    Proxmox Ceph S3 Backend Pool
  • Mode
    HTTP (Layer 7) [default]
  • Balancing Algorithm
    Source-IP Hash [default]
  • Random Draws
    2
  • Proxy Protocol
    none
  • Servers
    • pxm-40-8006
    • pxm-41-8006
    • pxm-42-8006
    • pxm-43-8006
    • pxm-44-8006
    • pxm-45-8006
  • FastCGI Application
    none
  • Resolver
    none
  • Resolver Options
    none
  • Prefer IP Family
    prefer IPv4
  • Source address
    198.19.198.1
  • Enable Health Checking
    [x]
Health Checking #
  • Health Monitor
    PXM UI Port 8006 Check
  • Log Status Changes
    a
  • Check Interval
    a
  • Down Interval
    a
  • Unhealthy Threshold
    a
  • Healthy Threshold
    a
  • E-Mail Alert
    none
HTTP(S) settings #
  • Enable HTTP/2
    [ ]
  • HTTP/2 without TLS
    [ ]
  • Advertise Protocols (ALPN)
    • HTTP/1.1
    • HTTP/1.0
Persistence #
  • Persistence type
    Stick-table persistence [default]
Stick-table persistence #
  • Table type
    none
  • Stored data types
    Connection count
  • Expiration time
    30m
  • Size
    50k
  • Cookie name
    none
  • Cookie length
    none
  • Connection rate period
    60s
  • Session rate period
    60s
  • HTTP request rate period
    60s
  • HTTP error rate period
    60s
  • Bytes in rate period
    60s
  • Bytes out rate period
    60s
Basic Authentication #
  • Enable
    [ ]
  • Allowed Users
    Nothing selected
  • Allowed Groups
    Nothing selected
Tuning Options #
  • Connection Timeout
    20s
  • Check Timeout
    10s
  • Server Timeout
    20s
  • Retries
    1
  • Option pass-through
    none
  • Default for server
    none
  • Use Frontend port
    [ ]
  • HTTP reuse
    Always
  • Enable Caching
    [ X ]
Rules #
  • Select Rules
    none yet
Error Messages #
  • Select Error Messages
    Nothing selected
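Pulled together, the pool settings above render to something like this raw HAProxy stanza (approximate; OPNsense generates the real one, and the server line is just one of the six):

```
# Approximate raw-HAProxy rendering of the backend pool settings above.
backend PXMCeph-S3-Pool
    mode http
    balance source
    http-reuse always
    retries 1
    timeout connect 20s
    timeout server 20s
    timeout check 10s
    server px-m-40-7480 px-m-40.dog.wolfspyre.io:7480 check
```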

Testing #
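A first sanity check falls out of the DNS setup above: with `rgw_dns_name` set and the wildcard record in place, buckets are addressable virtual-hosted style as `<bucket>.dog.wolfspyre.io`. A small sketch (the bucket name is made up):

```shell
# Build a virtual-hosted-style bucket URL from the wildcard DNS zone above.
# The bucket name passed in is hypothetical.
bucket_url() { echo "https://${1}.dog.wolfspyre.io/"; }

bucket_url testbucket
# Once the gateway is up, something like:
#   curl -sk -o /dev/null -w '%{http_code}\n' "$(bucket_url testbucket)"
# should return an HTTP status code rather than a connection error.
```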

Maintenance #


https://www.symmcom.com/docs/how-tos/storages/how-to-configure-s3-compatible-object-storage-on-ceph
https://docs.ceph.com/en/latest/radosgw/s3/
https://docs.ceph.com/en/latest/man/8/ceph-authtool/
https://docs.ceph.com/en/latest/radosgw/config-ref/
https://docs.ceph.com/en/latest/radosgw/admin/
https://docs.ceph.com/en/latest/architecture/#data-striping
https://docs.ceph.com/en/latest/man/8/radosgw/
https://docs.ceph.com/en/latest/man/8/radosgw-admin/
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/1.3/html/object_gateway_guide_for_red_hat_enterprise_linux/object_gateway_configuration_reference
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/load_balancer_administration/ceph_example