Guacamole cannot connect via SSH – FIX

If you get “SSH handshake failed” when trying to use Guacamole to connect to Ubuntu via SSH, a workaround is adding “HostKeyAlgorithms +ssh-rsa” to the end of /etc/ssh/sshd_config on the Ubuntu machine and restarting sshd. Note: this re-enables the deprecated ssh-rsa host key algorithm, and I don’t fully understand the security implications, so use at your own risk.
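
In practice that looks something like this (a sketch; on some Ubuntu releases the service unit is named ssh rather than sshd):

echo "HostKeyAlgorithms +ssh-rsa" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart sshd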

The solution was posted here:
https://www.reddit.com/r/linuxquestions/comments/ued2vq/comment/i736why/

Ubuntu 20.04/18.04/16.04 Multi User Remote Desktop Server

https://hub.docker.com/r/danielguerra/ubuntu-xrdp/

Add Docker Capabilities to TrueNAS CORE

https://getmethegeek.com/blog/2021-01-07-add-docker-capabilities-to-truenas-core/

How To Use Rsync to Sync Local and Remote Directories

https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories

Introduction

Rsync, which stands for “remote sync”, is a remote and local file synchronization tool. It uses an algorithm that minimizes the amount of data copied by only moving the portions of files that have changed.

In this guide, we will cover the basic usage of this powerful utility.

What Is Rsync?

Rsync is a very flexible network-enabled syncing tool. Due to its ubiquity on Linux and Unix-like systems and its popularity as a tool for system scripts, it is included on most Linux distributions by default.

Basic Syntax

The basic syntax of rsync is very straightforward, and operates in a way that is similar to ssh, scp, and cp.

We will create two test directories and some test files with the following commands:

cd ~
mkdir dir1
mkdir dir2
touch dir1/file{1..100}

We now have a directory called dir1 with 100 empty files in it.

ls dir1

Output
file1    file18  file27  file36  file45  file54  file63  file72  file81  file90
file10   file19  file28  file37  file46  file55  file64  file73  file82  file91
file100  file2   file29  file38  file47  file56  file65  file74  file83  file92
file11   file20  file3   file39  file48  file57  file66  file75  file84  file93
file12   file21  file30  file4   file49  file58  file67  file76  file85  file94
file13   file22  file31  file40  file5   file59  file68  file77  file86  file95
file14   file23  file32  file41  file50  file6   file69  file78  file87  file96
file15   file24  file33  file42  file51  file60  file7   file79  file88  file97
file16   file25  file34  file43  file52  file61  file70  file8   file89  file98
file17   file26  file35  file44  file53  file62  file71  file80  file9   file99

We also have an empty directory called dir2.

To sync the contents of dir1 to dir2 on the same system, type:

rsync -r dir1/ dir2

The -r option means recursive, which is necessary for directory syncing.

We could also use the -a flag instead:

rsync -a dir1/ dir2

The -a option is a combination flag. It stands for “archive” and syncs recursively and preserves symbolic links, special and device files, modification times, group, owner, and permissions. It is more commonly used than -r and is usually what you want to use.

An Important Note

You may have noticed that there is a trailing slash (/) at the end of the first argument in the above commands:

rsync -a dir1/ dir2

This is necessary to mean “the contents of dir1”. The alternative, without the trailing slash, would place dir1, including the directory, within dir2. This would create a hierarchy that looks like:

~/dir2/dir1/[files]

Always double-check your arguments before executing an rsync command. Rsync provides a method for doing this by passing the -n or --dry-run options. The -v flag (for verbose) is also necessary to get the appropriate output:

rsync -anv dir1/ dir2

Output
sending incremental file list
./
file1
file10
file100
file11
file12
file13
file14
file15
file16
file17
file18
. . .

Compare this output to the output we get when we remove the trailing slash:

rsync -anv dir1 dir2

Output
sending incremental file list
dir1/
dir1/file1
dir1/file10
dir1/file100
dir1/file11
dir1/file12
dir1/file13
dir1/file14
dir1/file15
dir1/file16
dir1/file17
dir1/file18
. . .

You can see here that the directory itself is transferred.

How To Use Rsync to Sync with a Remote System

Syncing to a remote system is trivial if you have SSH access to the remote machine and rsync installed on both sides. Once you have SSH access verified between the two machines, you can sync the dir1 folder from earlier to a remote computer by using this syntax (note that we want to transfer the actual directory in this case, so we omit the trailing slash):

rsync -a ~/dir1 username@remote_host:destination_directory

This is called a “push” operation because it pushes a directory from the local system to a remote system. The opposite operation is “pull”. It is used to sync a remote directory to the local system. If the dir1 were on the remote system instead of our local system, the syntax would be:

rsync -a username@remote_host:/home/username/dir1 place_to_sync_on_local_machine

Like cp and similar tools, the source is always the first argument, and the destination is always the second.
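
If the remote machine’s SSH daemon listens on a non-standard port (a hypothetical port 2222 here), you can tell rsync which remote shell command to use with the -e option:

rsync -a -e "ssh -p 2222" ~/dir1 username@remote_host:destination_directory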

Useful Options for Rsync

Rsync provides many options for altering the default behavior of the utility. We have already discussed some of the more necessary flags.

If you are transferring files that have not already been compressed, like text files, you can reduce the network transfer by adding compression with the -z option:

rsync -az source destination

The -P flag is very helpful. It combines the flags --progress and --partial. The first of these gives you a progress bar for the transfers and the second allows you to resume interrupted transfers:

rsync -azP source destination

Output
sending incremental file list
./
file1
           0 100%    0.00kB/s    0:00:00 (xfer#1, to-check=99/101)
file10
           0 100%    0.00kB/s    0:00:00 (xfer#2, to-check=98/101)
file100
           0 100%    0.00kB/s    0:00:00 (xfer#3, to-check=97/101)
file11
           0 100%    0.00kB/s    0:00:00 (xfer#4, to-check=96/101)
. . .

If we run the command again, we will get a shorter output, because no changes have been made. This illustrates rsync’s ability to use modification times to determine if changes have been made.

rsync -azP source destination

Output
sending incremental file list

sent 818 bytes  received 12 bytes  1,660.00 bytes/sec
total size is 0  speedup is 0.00

We can update the modification time on some of the files and see that rsync intelligently re-copies only the changed files:

touch dir1/file{1..10}
rsync -azP source destination

Output
sending incremental file list
file1
            0 100%    0.00kB/s    0:00:00 (xfer#1, to-check=99/101)
file10
            0 100%    0.00kB/s    0:00:00 (xfer#2, to-check=98/101)
file2
            0 100%    0.00kB/s    0:00:00 (xfer#3, to-check=87/101)
file3
            0 100%    0.00kB/s    0:00:00 (xfer#4, to-check=76/101)
. . .

In order to keep two directories truly in sync, it is necessary to delete files from the destination directory if they are removed from the source. By default, rsync does not delete anything from the destination directory.

We can change this behavior with the --delete option. Before using this option, use the --dry-run option and do testing to prevent data loss:

rsync -a --delete source destination

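To preview exactly what --delete would remove, you can combine it with the dry-run flags from earlier (reusing the dir1/dir2 test directories):

rsync -anv --delete dir1/ dir2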

If you wish to exclude certain files or directories located inside a directory you are syncing, you can do so by specifying them with the --exclude= option (pass the option multiple times to exclude more than one pattern):

rsync -a --exclude=pattern_to_exclude source destination

If we have specified a pattern to exclude, we can override that exclusion for files that match a different pattern by using the --include= option. Note that rsync acts on the first matching rule, so the include must come before the exclude it overrides:

rsync -a --include=pattern_to_include --exclude=pattern_to_exclude source destination

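For instance, to keep one specific log file while skipping all other logs (hypothetical file names), the include rule goes first:

rsync -a --include='important.log' --exclude='*.log' dir1/ dir2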

Finally, rsync’s --backup option can be used to store backups of important files. It is used in conjunction with the --backup-dir option, which specifies the directory where the backup files should be stored.

rsync -a --delete --backup --backup-dir=/path/to/backups /path/to/source destination

Conclusion

Rsync can simplify file transfers over networked connections and add robustness to local directory syncing. The flexibility of rsync makes it a good option for many different file-level operations.

A mastery of rsync allows you to design complex backup operations and obtain fine-grained control over what is transferred and how.

Install Plex Media Server on Ubuntu 20.04

https://www.howtoforge.com/install-plex-media-server-on-ubuntu-2004/

Plex Media Server UFW rule
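
Plex listens on TCP port 32400 by default, so a minimal rule (assuming a default Plex configuration) is:

sudo ufw allow 32400/tcp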

Setup minikube on VirtualBox

https://vovaprivalov.medium.com/setup-minikube-on-virtualbox-7cba363ca3bc

WordPress on Kubernetes in Ubuntu

based on:

https://github.com/bitnami/charts/tree/master/bitnami/wordpress/#installing-the-chart
https://vitux.com/install-and-deploy-kubernetes-on-ubuntu/

1) Install snapd

sudo apt update 
sudo apt install snapd

2) Install helm

sudo snap install helm --classic

3) Install and enable Docker

sudo apt install docker.io
sudo systemctl enable docker

4) Add the Kubernetes signing key on both nodes

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

5) Add Xenial Kubernetes Repository

sudo apt-get install software-properties-common
sudo apt-get update
sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

6) Install Kubeadm

sudo apt update
sudo apt install kubeadm

7) Disable swap memory – Kubernetes does not perform properly on a system that is using swap memory

sudo swapoff -a
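
Note that swapoff -a only lasts until the next reboot; to disable swap permanently, also comment out the swap entry in /etc/fstab (a sketch that assumes the entry contains “ swap ”):

sudo sed -i '/ swap / s/^/#/' /etc/fstab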

8) Set hostname

sudo hostnamectl set-hostname master-node
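
On the worker machine, set a different name (a hypothetical example):

sudo hostnamectl set-hostname worker-node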

9) Initialize Kubernetes on the master node

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

10) To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

11) Then you can join any number of worker nodes by running the join command that kubeadm init printed, as root on each node (your token and hash will differ):

kubeadm join 192.168.185.98:6443 --token 3fblch.ja2qp2uymppvd92n --discovery-token-ca-cert-hash sha256:77bef2579a7c22a3b8a55f94f70595f35112b406ac12a04f67e7a73e1a50e62b

12) Deploy a Pod Network through the master node

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

13) View the status of the network

kubectl get pods --all-namespaces
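
Once the flannel pods are Running, the nodes should report Ready:

kubectl get nodes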

14) Install Bitnami WordPress (a Helm chart)
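
If the Bitnami chart repository has not been added to helm yet, register it first (as described in the Bitnami README linked above):

helm repo add bitnami https://charts.bitnami.com/bitnami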

helm install my-blog bitnami/wordpress

15) Result

NAME: my-blog
LAST DEPLOYED: Mon Apr 27 18:35:23 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
** Please be patient while the chart is being deployed **

To access your WordPress site from outside the cluster follow the steps below:

  1. Get the WordPress URL by running these commands (NOTE: it may take a few minutes for the LoadBalancer IP to be available; watch the status with "kubectl get svc --namespace default -w my-blog-wordpress"):

     export SERVICE_IP=$(kubectl get svc --namespace default my-blog-wordpress --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
     echo "WordPress URL: http://$SERVICE_IP/"
     echo "WordPress Admin URL: http://$SERVICE_IP/admin"

  2. Open a browser and access WordPress using the obtained URL.

  3. Login with the following credentials to see your blog:

     echo Username: user
     echo Password: $(kubectl get secret --namespace default my-blog-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)

16) Uninstall

helm delete my-blog

Letsencrypt + Certbot

Let’s Encrypt provides free SSL certificates.

Certbot automates renewal and installation of the certificates.

Install Certbot:

$ sudo add-apt-repository ppa:certbot/certbot
$ sudo apt-get update
$ sudo apt-get install python-certbot-apache

Generate and install the certificate

$ sudo certbot --apache

Generate the certificate only

$ sudo certbot certonly --apache

Generate a wildcard certificate (the --manual DNS challenge will prompt you to create a DNS TXT record for _acme-challenge.yourdomain.com):

$ sudo certbot certonly --manual --preferred-challenges=dns --email yourname@yourdomain.com --server https://acme-v02.api.letsencrypt.org/directory --agree-tos -d '*.yourdomain.com'

Renewal

The Certbot packages on your system come with a cron job that will renew your certificates automatically before they expire. Since Let’s Encrypt certificates last for 90 days, it’s highly advisable to take advantage of this feature.

$ sudo certbot renew --apache
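
To test the renewal process without using up Let’s Encrypt rate limits, Certbot supports a dry run:

$ sudo certbot renew --dry-run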