
About Proxmox VE

Proxmox offers the server virtualization management platform Proxmox VE, and the Proxmox Mail Gateway, an antispam and antivirus solution for mail server protection.

Set up Proxmox

Partition Description

Warning

Proxmox uses a specific partitioning scheme, and the space allocation should be planned according to your needs, so take note of the options below.

Proxmox has a particular partitioning scheme, as described below.

  • hdsize=nGB

This sets the total amount of hard disk to use for the Proxmox installation. This should be smaller than your disk size.

  • maxroot=nGB

Sets the maximum size of the root partition. Since this is a maximum, the partition may end up smaller if the disk is too small.

  • swapsize=nGB

Sets the swap partition size in gigabytes.

  • maxvz=nGB

Sets the maximum size, in gigabytes, of the data partition. As with maxroot, the final partition size may be smaller.

  • minfree=nGB

Sets the amount of free space to remain on the disk after the Proxmox installation.
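
As an illustration only (the numbers below are assumptions for a hypothetical 500 GB disk; adjust them to your own hardware), the advanced options of the installer could be filled in like this:

hdsize=450
maxroot=40
swapsize=8
maxvz=350
minfree=16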

Activate HTTPS for the panel

To get an SSL certificate for the panel, we use the ACME functionality included in Proxmox.

Select your node and go to Certificates => ACME.

Click on "Edit domains" and add your domain, for example testing.ovh or sub.testing.ovh. When it's done, click "Order certificate". After that, reload the web GUI and it works!
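
If you prefer the command line, the same result can be obtained with the pvenode tool. This is a minimal sketch, assuming an ACME account named default registered with a placeholder email address and the testing.ovh domain from the example above:

pvenode acme account register default mail@testing.ovh   # one-time ACME account registration
pvenode config set --acme domains=testing.ovh            # declare the domain for this node
pvenode acme cert order                                  # request and install the certificate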

Warning

You may not see the valid certificate right after refreshing. Consider clearing your browser cache or opening the panel in a private/incognito window.

Swap Management

By default, Linux starts using swap once more than 40% of the total memory is in use. We will modify this behavior, as it slows the server down sharply by swapping frequently and unnecessarily.

To make the swappiness setting persistent, edit /etc/sysctl.conf with an editor of your choice and add the following line at the end of the file:

vm.swappiness = 10

We then verify that the value is applied correctly.

cat /proc/sys/vm/swappiness # prints the currently applied swappiness value
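
The line in /etc/sysctl.conf is only read at boot, so to apply the new value right away you can reload it with sysctl:

sysctl -p                # reloads /etc/sysctl.conf
sysctl vm.swappiness     # should now print vm.swappiness = 10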

Scheduled Backup

In Proxmox VE we can create backups directly from the web GUI.

When a backup is done, it is stored under /var/lib/vz/dump with a .vma.gz extension.

We then need to transfer these backups to our backup server.

Several tools can handle a scheduled transfer; in our case we use crontab for scheduling and rsync over SSH for the file transfer.

Edit your crontab with crontab -e and add the following line:

0 4 * * * rsync -azvhP -e 'ssh -p 65022' --stats /var/lib/vz/dump/ username@YOUR_SERVER_IP:/data/backups/ares_srv/vm/

In my case, I want to back up our VMs at 04:00 AM every day, all year long.
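
For reference, the five time fields at the start of that line use standard cron syntax and break down like this:

0 4 * * *
│ │ │ │ └─ day of week (* = every day of the week)
│ │ │ └─── month (* = every month)
│ │ └───── day of month (* = every day)
│ └─────── hour (4 = 04:00)
└───────── minute (0)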

Warning

To avoid password prompts, you can copy your id_rsa.pub to the backup server; with key-based authentication you don't need to enter your password.
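
A minimal sketch, assuming you use OpenSSH and the same port 65022 as in the crontab line above (username and YOUR_SERVER_IP are the same placeholders):

ssh-keygen -t rsa                              # only needed if you have no key pair yet
ssh-copy-id -p 65022 username@YOUR_SERVER_IP   # installs id_rsa.pub into authorized_keys on the backup server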

Email with transfert status

At this point our backup is scheduled and automatic.

The next goal is to receive an email with a summary of what rsync did.

In Proxmox, we must modify /etc/pve/user.cfg to set an email address for root, so that status mails from root's crontab are delivered.
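
As a hedged alternative to editing the file by hand, the same email address can also be set with the pveum command-line tool (admin@piperzel.be is the address used in the excerpt below):

pveum user modify root@pam --email admin@piperzel.be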

Before the change, /etc/pve/user.cfg looks like this:

user:nicolas@pve:1:0:Nicolas:facio:nicolas.facio@piperzel.be:::
user:michael@pve:1:0:Michael:ambozzi:michael.ambozzi@piperzel.be:::
user:root@pam:1:0::::::

group:Admin:michael@pve:Administrator:
group:Tech:nicolas@pve:Technician:

role:PVESysAudit:Sys.Audit:

acl:1:/:michael@pve:Administrator:
acl:1:/:@Admin:PVEAdmin:
acl:1:/:@Tech:PVEDatastoreUser,PVESysAudit,PVEVMAdmin:

After the change, with an email address added on the root@pam line:

user:nicolas@pve:1:0:Nicolas:facio:nicolas.facio@piperzel.be:::
user:michael@pve:1:0:Michael:ambozzi:michael.ambozzi@piperzel.be:::
user:root@pam:1:0:::admin@piperzel.be:::

group:Admin:michael@pve:Administrator:
group:Tech:nicolas@pve:Technician:

role:PVESysAudit:Sys.Audit:

acl:1:/:michael@pve:Administrator:
acl:1:/:@Admin:PVEAdmin:
acl:1:/:@Tech:PVEDatastoreUser,PVESysAudit,PVEVMAdmin:
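
To check that mail for root actually reaches that address, you can send yourself a test message; this assumes a mail command (from bsd-mailx or mailutils, for example) is installed, which is not necessarily the case on a fresh Proxmox install:

echo "cron mail test" | mail -s "cron mail test" root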

Tweaks

Improving graphical performance of RDP

If you want to get decent graphical performance for a virtual machine with a GUI, you can increase the memory of its QXL display device.

Edit the configuration file of your VM ID, located under /etc/pve/qemu-server, and add this line.

args: -global qxl-vga.ram_size_mb=256 -global qxl-vga.vram_size_mb=256 -global qxl-vga.vgamem_mb=64

Example for a Windows virtual machine:

args: -global qxl-vga.ram_size_mb=256 -global qxl-vga.vram_size_mb=256 -global qxl-vga.vgamem_mb=64
bootdisk: virtio0
cores: 4
cpu: host
ide2: local:iso/Win10_1809Oct_EnglishInternational_x64.iso,media=cdrom
memory: 4096
name: test
net0: e1000=xx:xx:xx:xx:xx:xx,bridge=vmbr0
numa: 0
ostype: win10
sata0: local:iso/virtio-win.iso,media=cdrom,size=315276K
scsihw: virtio-scsi-pci
smbios1: uuid=0a4328d9-68da-4620-b631-d300ad52ef9e
sockets: 1
vga: qxl
virtio0: local:114/vm-114-disk-0.qcow2,cache=writethrough,size=60G
vmgenid: 9b8b3876-fc24-4758-9e5e-08096b84a62c
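
Changes to the args line only take effect once the QEMU process is restarted, so stop and start the VM (114 is the VMID of the example above):

qm stop 114
qm start 114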

Note

The values of qxl-vga.ram_size_mb, qxl-vga.vram_size_mb and qxl-vga.vgamem_mb are expressed in megabytes.

USB passthrough

Check the device ID with lsusb; in this case it is 0403:6001.

root@pve:~# lsusb
Bus 002 Device 005: ID 0403:6001 Future Technology Devices International, Ltd FT232 USB-Serial (UART) IC
Bus 002 Device 003: ID 8087:07da Intel Corp.
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Attach the device to the VM using the VMID (100 in this case).

qm set 100 -usb0 host=0403:6001

And restart the VM.

qm stop 100
qm start 100
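
If you want to confirm that the passthrough entry was written to the VM configuration, you can check it with qm config (VMID 100 as above):

qm config 100 | grep usb0   # should show usb0: host=0403:6001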