
Underkube

Using sushy-tools in a container to simulate RedFish BMC

I wanted to simulate a RedFish BMC to be able to power libvirt virtual machines on and off and attach ISOs, as I do for baremetal hosts. Enter sushy-tools: sushy-tools includes a RedFish BMC emulator, sushy-emulator (see the code in the official repo). Basically, it can connect to the libvirt socket to perform the required actions, exposing a RedFish API. metal3-io/sushy-tools container image: to easily consume it, the metal3 folks already have a container image ready for consumption at quay.
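
As a rough sketch of how that image can be run (the entrypoint arguments, port and socket mount below are assumptions, not taken from the post), something like this points the emulator at the local libvirt daemon and lets you query the RedFish API:

$ podman run -d --rm --name sushy-emulator --net host \
    -v /var/run/libvirt:/var/run/libvirt:Z \
    quay.io/metal3-io/sushy-tools \
    sushy-emulator --port 8000 --libvirt-uri "qemu:///system"
$ curl http://localhost:8000/redfish/v1/Systems/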

Using systemd-path to keep specific folder permissions

I wanted to have specific permissions on the /var/lib/libvirt/images folder to be able to write to it as my user. To do that, you can just use setfacl:

$ sudo setfacl -m u:edu:rwx /var/lib/libvirt/images

The issue is that sometimes those permissions were reset to the default ones… but why? And, most importantly… who? auditd: to find the culprit I used auditd to monitor changes in that particular folder:

$ sudo auditctl -w /var/lib/libvirt/images -p a -k libvirt-images

Then I performed a system update just in case… and after a while…
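
Once the watch rule is in place, the audit log can be searched by the rule key to see which process and user touched the folder. A minimal example (the key matches the rule above; actual output will vary):

$ sudo ausearch -k libvirt-images -i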

How to configure a CentOS 8 Stream host as a network router and provide DHCP and DNS services

I wanted to configure a VM to act as a router between two networks, providing DHCP and DNS services as well.

[diagram: vm01 and vm02 sit on the private network; the dhcprouter VM connects the private network to the public network]

public network is the regular libvirt network created by default (192.
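
The excerpt does not show the actual configuration, but as a hedged sketch, the router VM can be set up with IP forwarding, firewalld masquerading and dnsmasq providing DHCP/DNS on the private interface (the interface name and address range below are made up for illustration):

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo firewall-cmd --permanent --zone=public --add-masquerade
$ sudo firewall-cmd --reload
$ sudo dnf install -y dnsmasq
$ sudo tee /etc/dnsmasq.d/private.conf <<'EOF'
interface=enp2s0
dhcp-range=192.168.100.10,192.168.100.100,12h
EOF
$ sudo systemctl enable --now dnsmasq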

Quick and dirty way to compile a golang binary using a container

I wanted to compile the hypershift binary, but it requires golang 1.17, which is not included in Fedora 35, so I ended up doing this:

mkdir ./tmp/ && \
podman run -it -v ${PWD}/tmp:/var/tmp/hypershift-bin/:Z --rm docker.io/golang:1.17 sh -c \
  'git clone --depth 1 https://github.com/openshift/hypershift.git /var/tmp/hypershift/ && \
   cd /var/tmp/hypershift && \
   make hypershift && \
   cp bin/hypershift /var/tmp/hypershift-bin/' && \
cp ${PWD}/tmp/hypershift ~/bin/

HTH

Running hpasmcli commands on a container using podman

To be able to monitor hardware health, status and information on HP servers running RHEL, it is required to install HP's Service Pack for ProLiant (SPP) packages. It seems the Management Component Pack is the same agent software but for community distros; for enterprise distros, use the SPP. There is more info about those HP tools on the HP site. Basically, you just need to add a yum/dnf repository, install the packages and start a service (actually the service is started as part of the RPM post-install, which is not a good practice…)
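
As a hedged sketch of the approach in the title (the image name is hypothetical, it is assumed to have the hp-health package installed, and the exact privileges and devices hpasmcli needs will depend on the hardware), running a one-off hpasmcli query from a container could look like:

$ podman run -it --rm --privileged -v /dev:/dev \
    registry.example.com/hp-tools:latest \
    hpasmcli -s "show server; show dimm"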

Customizing OpenShift 4 baremetal IPI network at installation time

When deploying OpenShift IPI on baremetal, there is only so much you can tweak at installation time in terms of networking. Of course, you can make changes after the installation, such as applying bonding configurations or VLAN settings via machine configs… but what if you need those changes at installation time? In my case, I have an OpenShift environment composed of physical servers, each of which has 4 NICs: 1 unplugged NIC, 1 NIC connected to the provisioning network and 2 NICs connected to the same switch and to the same baremetal subnet.
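
The excerpt stops before showing the mechanism, and it may not be what the post actually uses, but one way to apply this kind of configuration at installation time in recent OpenShift releases is the per-host networkConfig section (NMState syntax) in install-config.yaml. A hedged fragment, with made-up names and MAC address, bonding the two baremetal NICs:

platform:
  baremetal:
    hosts:
    - name: worker-0
      role: worker
      bootMACAddress: "52:54:00:aa:bb:cc"
      networkConfig:
        interfaces:
        - name: bond0
          type: bond
          state: up
          link-aggregation:
            mode: active-backup
            port:
            - eno1
            - eno2
          ipv4:
            enabled: true
            dhcp: true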

Using an external registry with OpenShift 4

In this blog post I'm trying to perform the integration of an external registry with an OpenShift environment. The external registry can be any container registry, but in this case I've configured Harbor to use (self-generated) certificates, made the 'library' repository in the Harbor registry private (i.e. requiring user/pass) and created an 'edu' user account with permissions on that 'library' repository. Harbor installation: pretty straightforward if following the docs, but for RHEL7:
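
On the OpenShift side, such an integration typically means trusting the registry's self-generated CA (and, for private repositories, adding credentials to the pull secret). A hedged sketch of the CA part, with made-up hostname and paths:

$ oc create configmap harbor-ca \
    --from-file=harbor.example.com=/path/to/harbor-ca.crt \
    -n openshift-config
$ oc patch image.config.openshift.io/cluster --type merge \
    -p '{"spec":{"additionalTrustedCA":{"name":"harbor-ca"}}}'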

Nextcloud with podman rootless containers and user systemd services. Part I - Introduction

Introduction: I've been using Nextcloud for a few years as my personal 'file storage cloud'. There are official container images and docker-compose files to be able to run it easily. For quite a while, I've been using the nginx+redis+mariadb+cron docker-compose file, as it has all the components needed to run an 'enterprise ready' Nextcloud, even if I'm only using it for personal use :) In this blog post I'm going to try to explain how I moved from that docker-compose setup to a podman rootless and systemd one.

Nextcloud with podman rootless containers and user systemd services. Part II - Nextcloud pod

Running a rootless Nextcloud pod: instead of running Nextcloud as independent containers, I've decided to leverage one of podman's many features: running multiple containers as a pod (like a Kubernetes pod!). The main benefit to me of doing so is that they use a single network namespace, meaning all the containers running in the same pod can reach each other using localhost and you only need to expose the web interface.
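
A minimal sketch of that pattern (pod name, published port and images are illustrative, not the post's actual setup): create the pod once with the published port, then start each container inside it so they can all talk over localhost:

$ podman pod create --name nextcloud --publish 8080:80
$ podman run -d --pod nextcloud --name nextcloud-db \
    -e MYSQL_ROOT_PASSWORD=changeme docker.io/library/mariadb:10.6
$ podman run -d --pod nextcloud --name nextcloud-redis docker.io/library/redis:alpine
$ podman run -d --pod nextcloud --name nextcloud-app docker.io/library/nextcloud:apache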

Nextcloud with podman rootless containers and user systemd services. Part III - NFS gotchas

Nextcloud in container user IDs: the Nextcloud process running in the container runs as the www-data user, which in fact is user id 82:

$ podman exec -it nextcloud-app /bin/sh
/var/www/html # ps auxww | grep php-fpm
    1 root      0:10 php-fpm: master process (/usr/local/etc/php-fpm.conf)
   74 www-data  0:16 php-fpm: pool www
   75 www-data  0:15 php-fpm: pool www
   76 www-data  0:07 php-fpm: pool www
   84 root      0:00 grep php-fpm
/var/www/html # grep www-data /etc/passwd
www-data:x:82:82:Linux User,,,:/home/www-data:/sbin/nologin

NFS and user IDs: NFS exports can be configured to have a forced uid/gid using the anonuid, anongid and all_squash parameters.
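
Putting that together, an export line in /etc/exports that squashes every client user to the container's www-data uid/gid could look like this (the export path and client subnet are made up; with rootless podman the uid seen by the NFS server may additionally be shifted by the user namespace mapping):

/srv/nextcloud-data 192.168.1.0/24(rw,sync,all_squash,anonuid=82,anongid=82)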