Proxmox

 

This document describes the Abiquo integration with Proxmox. From Abiquo 6.2.3, the platform includes an initial version of this integration, which will be improved in the following releases. If you are an Abiquo customer, please use our support portal to request improvements or provide feedback.

 


Abiquo and Proxmox

Abiquo can manage Proxmox Virtual Environment in your multi-cloud platform.

  • Follow the Proxmox documentation for a standard installation


Supported versions

Abiquo has been tested with the following versions.

Product | Version | Build number | Notes and known issues

Proxmox Virtual Environment | 8.2.4 | - | -


Proxmox Features

This section describes the Proxmox Virtual Environment features that you can manage with Abiquo.

Feature | Description

Kernel-based virtual machines (KVM) | The Abiquo integration can deploy virtual machines on Proxmox.

Command line interface and REST API | The Abiquo integration uses both interfaces. In our experience, the CLI is currently richer than the REST API.

Live/online migration | Abiquo detects Proxmox live/online migration.

Container-based virtualization | Abiquo does not support fault tolerance because it requires two VMs with the same name to be present in a cluster at the same time.

Proxmox VE HA | Abiquo detects Proxmox HA.

Linux networking stack | The Abiquo integration works with the standard bridged mode supported by Proxmox.

Software-defined network (SDN) | Not supported in the initial version, to avoid increasing the environment requirements. If you have specific SDN needs, please contact the Abiquo team.

Templates | QCOW2 (default format); VMDK sparse, flat, and stream-optimized.

DVD | The ISO feature supports DVDs, and you can import CD-ROM configurations on VMs with IDE and SATA controllers.

Remote access | Uses standard Abiquo remote access with Guacamole technology.

Network drivers | E1000, PCNet32, VMXNET3.

CPU hot-add and RAM hot-add | With a supported guest operating system. The user can mark supported templates and perform hot-add.

Hot-reconfigure of NICs and disks | With a supported guest operating system. The user can mark supported templates and perform hot-reconfigure.

Snapshots | Abiquo lets users obtain a snapshot of their VM through the UI. See Snapshoting.

Guest OS | Supported in the VM template.

Metrics | Customize the built-in metrics to obtain from Proxmox.

Proxmox guest tools | Supported; allows configuring the VM password and other basic settings.

Proxmox Backup | Not supported. We are also monitoring Veeam to see when they enable support for Proxmox.

Cloud-init | Supported.
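
As noted in the table, the integration uses both the Proxmox CLI and the REST API. The following minimal Python sketch (not Abiquo's actual code; the host and token values are hypothetical) lists the cluster nodes through the REST API; the equivalent command on a Proxmox node is pvesh get /nodes.

    import requests

    # Hypothetical values: replace with your Proxmox host and API token.
    PROXMOX = "https://proxmox.example.com:8006"
    TOKEN = "root@pam!abiquo=12345678-aaaa-bbbb-cccc-123456789abc"

    # Equivalent CLI command on the node itself: pvesh get /nodes
    response = requests.get(
        f"{PROXMOX}/api2/json/nodes",
        headers={"Authorization": f"PVEAPIToken={TOKEN}"},
        verify=False,  # Proxmox commonly uses a self-signed certificate
    )
    response.raise_for_status()
    for node in response.json()["data"]:
        print(node["node"], node["status"])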

 


Storage features

Feature | Description

Storage disks | Abiquo supports standard Proxmox storage configurations. Please share any special configuration that Abiquo should consider.

VM disk controller types | IDE (default, for compatibility reasons), SCSI, SATA, VIRTIO.

VM SCSI controllers | Defined at the VM level.

VM system disks | By default, for VMs deployed on the Proxmox hypervisor: disks deployed from QCOW2 templates are deployed in the same format; unsupported disk formats are converted to the default QCOW2 format.

Disk resize | System disk resize and hard disk resize. See Configure VM disks for VMware.
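
As an illustration of the disk resize feature, this hedged sketch calls the Proxmox resize endpoint directly (the host, node name, VM ID, and token are hypothetical; this is not Abiquo's implementation):

    import requests

    PROXMOX = "https://proxmox.example.com:8006"  # hypothetical host
    TOKEN = "root@pam!abiquo=12345678-aaaa-bbbb-cccc-123456789abc"  # hypothetical token

    # Grow disk scsi0 of VM 100 on node pve1 by 10 GiB.
    # Equivalent CLI command on the node: qm resize 100 scsi0 +10G
    response = requests.put(
        f"{PROXMOX}/api2/json/nodes/pve1/qemu/100/resize",
        headers={"Authorization": f"PVEAPIToken={TOKEN}"},
        data={"disk": "scsi0", "size": "+10G"},
        verify=False,
    )
    response.raise_for_status()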


Node Discovery

This section describes the required parameters to add Proxmox nodes to Abiquo.

Please refer to Create a physical machine to learn how to create a physical machine.
The steps, and the attributes for each step, are described below.

Add Proxmox as a Physical Machine

image-20240926-125321.png
Physical Machine Attributes for Proxmox VE

 

  • Hypervisor type: Proxmox VE

  • Manager IP: Address where Proxmox is deployed

  • Agent user: User from Proxmox

  • Manager user: User name from the API Tokens section in Proxmox

  • Agent password: Password of the Agent user

  • Manager password: API token of the Manager user. It is structured as TokenName=Token

  • Agent port: Port for access with the Agent user (not required)

  • Manager port: Port for access with the Manager user (not required)
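
To illustrate how the Manager user and Manager password map onto Proxmox API token authentication, the following minimal connectivity check assumes the standard Proxmox PVEAPIToken header scheme; all values are hypothetical:

    import requests

    manager_ip = "10.0.0.10"        # hypothetical Manager IP
    manager_port = 8006             # hypothetical Manager port (Proxmox API default)
    manager_user = "abiquo@pam"     # hypothetical user from the Proxmox API Tokens section
    manager_password = "abiquo-token=12345678-aaaa-bbbb-cccc-123456789abc"  # TokenName=Token

    # Proxmox API token authentication header:
    # Authorization: PVEAPIToken=<user>@<realm>!<TokenName>=<Token>
    headers = {"Authorization": f"PVEAPIToken={manager_user}!{manager_password}"}

    response = requests.get(
        f"https://{manager_ip}:{manager_port}/api2/json/version",
        headers=headers,
        verify=False,  # self-signed certificates are common on Proxmox
    )
    response.raise_for_status()
    print(response.json()["data"]["version"])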

Select Hosts

 

image-20240926-125941.png
Proxmox nodes selected and pending editing

 

You can add any one of the nodes, or multiple nodes. Please refer to Create a physical machine | Register a physical machine in Abiquo to enable the hosts and create the physical machine.


Host configuration

This section describes the configuration of Proxmox hosts.

Proxmox nodes as hosts

You can add each node of a Proxmox cluster as a host in Abiquo. Abiquo will add these hosts, and the administrator will be able to see the nodes aggregated by cluster.

You can then add multiple clusters within the same Abiquo datacenter, in the same rack or in different datacenters and racks, and benefit from Abiquo's flexibility to offer a different level of service for each cluster.

The main advantages of using Proxmox clusters and discovering all their hosts in Abiquo are:

  • You can make more efficient use of your infrastructure, because when users deploy VMs, Abiquo will allocate them to the best node in the cluster

  • More control and granularity within Abiquo

  • The capacity to offer multiple SLAs to your customers

All of the VMs in the cluster will be directly listed under the cluster and not on individual hosts.

 

 

Even in a test system, do not add the same Proxmox servers to more than one datacenter.


Host datastores

Abiquo will deploy VMs to datastores that you register in the platform.

For Proxmox clusters, you should use shared datastores, so that when VMs move, they will always be accessible to all hosts.

For Proxmox hosts, when you use a shared datastore, Abiquo creates a different datastore on each physical machine that is using the datastore. This means that a shared datastore can be enabled on one host and disabled on another, either as a result of user configuration or an issue (e.g. an NFS communication error on one host).

If you need to work with local datastores, create a datastore tier for the host, and add local datastores with shared datastores. This will ensure that the platform can always create or access VMs on a valid datastore. Do not create a tier containing local datastores from different hosts.


Datastore discovery

Abiquo should discover and list all of the datastores that you will use for deploying VMs. When you add your Proxmox node, you will select datastores from this list to use in Abiquo.

Abiquo expects to be able to identify each datastore with a UUID, so it will initially try to create a UUID folder on each datastore.

By default, Abiquo retrieves a datastore if it is accessible and not in maintenance mode on all the hosts in the cluster that mount it.

So by default, if there is a host that has all the datastores mounted, but Abiquo cannot access them (e.g. because the host is down), Abiquo will not return any datastores. You can configure Abiquo to return a datastore mounted on at least one host.

You can also configure Abiquo to ignore datastores by name.

Abiquo will automatically ignore datastores that are not accessible.

See Configure datastore discovery for VMware
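
The following sketch illustrates the discovery rule described above; it uses hypothetical data structures and is not Abiquo's actual implementation:

    # Hypothetical per-host datastore states, for illustration only.
    status = {
        "shared-nfs": [
            {"host": "pve1", "accessible": True, "maintenance": False},
            {"host": "pve2", "accessible": False, "maintenance": False},  # e.g. host down
        ],
    }

    def discovered(datastore, require_all_hosts=True):
        """Return True if the datastore should be listed for use."""
        states = status[datastore]
        ok = [s["accessible"] and not s["maintenance"] for s in states]
        # Default rule: usable on ALL hosts that mount it.
        # Relaxed rule: usable on at least one host.
        return all(ok) if require_all_hosts else any(ok)

    print(discovered("shared-nfs"))                           # False under the default rule
    print(discovered("shared-nfs", require_all_hosts=False))  # True under the relaxed rule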


Datastore checks

As part of the infrastructure check, Abiquo will check datastores to ensure that it can deploy VMs.

Abiquo checks for the following conditions:

  • datastore is accessible

  • datastore is mounted as read/write

  • datastore is not in maintenance mode

If you activated a datastore, but it fails the datastore check, then Abiquo will automatically deactivate it. If it passes a check in the future, then Abiquo will automatically activate it again. If a deactivated datastore fails the datastore check, then Abiquo will ignore it.
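
As a rough sketch of this behavior (hypothetical field names; not the actual Abiquo code), the three conditions combine into a single predicate, and the activation state only toggles automatically for datastores that were activated:

    def passes_check(ds):
        """The three conditions from the infrastructure check."""
        return ds["accessible"] and ds["read_write"] and not ds["maintenance"]

    def update_activation(ds):
        if ds["activated_by_admin"]:
            # Auto-deactivate on failure; auto-activate again when the check passes.
            ds["active"] = passes_check(ds)
        # A datastore that was never activated and fails the check is simply ignored.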


Configuration of VM disks

Abiquo supports thin provisioning, and IDE, SCSI, and SATA disk controllers, and you can configure defaults for the platform.

See Configure VM disks for VMware. For more options, such as ISO disks, see Configure SATA for VMware hypervisors.
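
For reference, this sketch reads a VM's configuration through the Proxmox API and prints the disk entries, which are keyed by controller type; the host, node name, VM ID, and token are hypothetical:

    import re
    import requests

    PROXMOX = "https://proxmox.example.com:8006"  # hypothetical host
    TOKEN = "root@pam!abiquo=12345678-aaaa-bbbb-cccc-123456789abc"  # hypothetical token

    config = requests.get(
        f"{PROXMOX}/api2/json/nodes/pve1/qemu/100/config",  # hypothetical node and VM ID
        headers={"Authorization": f"PVEAPIToken={TOKEN}"},
        verify=False,
    ).json()["data"]

    # Disk entries are keyed by controller type and index, e.g. ide0, scsi0, sata1, virtio0.
    for key, value in config.items():
        if re.fullmatch(r"(ide|scsi|sata|virtio)\d+", key):
            print(key, value)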

 


VM templates

You can upload OVA and disk files to the Catalogue so that users can create VMs from VM templates by self-service.

See Template compatibility table, Add VM templates to the catalogue, and Importing templates into the catalogue


VM snapshots

Abiquo enables users to manage VM snapshots in the user interface. For a full feature description and configuration details, see Abiquo VM snapshoting.

 


Capture VMs

To prepare and onboard existing VMs into the platform, see About import and capture virtual machines and Import and capture virtual machines.


Remote access to VMs

The platform supports remote access to Proxmox VMs with Guacamole.

See Troubleshooting access with Guacamole

 

Limitations

  • The datastore UUID is null and is not used

  • To build provider IDs, we need to use the cluster name. Note that the cluster name must be unique, but it is set at installation time and is only required to be unique if two Proxmox clusters share a network. This means that a customer's cluster name can be duplicated, and it is not an editable property.

  • To onboard multiple Proxmox clusters, we need to ensure that all node names are unique, to avoid provider ID collisions in Abiquo

  • Unable to get com.abiquo.hypervisor.model.DiskDescription.capacityInBytes (it could perhaps be obtained with qemu-img, which requires an agent; see the sketch after this list)

  • The platform skips the physical CD-ROM device (“cdrom,media=cdrom”)

  • The snapshot size is not available

  • It is not possible to hot-reconfigure disk controllers

  • It is not possible to configure QEMU guest actions per VM, such as freezefsonbackup or fstrimcloneddisks (possible problems with Windows templates)
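
As noted in the list above, the disk capacity could perhaps be read with qemu-img if an agent runs on the node. A minimal sketch, assuming qemu-img is installed and the disk path is known (the path below is hypothetical):

    import json
    import subprocess

    def capacity_in_bytes(disk_path):
        """Read the virtual disk size with qemu-img on the node."""
        out = subprocess.run(
            ["qemu-img", "info", "--output=json", disk_path],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["virtual-size"]

    # Hypothetical disk path on a Proxmox node.
    print(capacity_in_bytes("/var/lib/vz/images/100/vm-100-disk-0.qcow2"))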

 

Proxmox plugin properties

Copyright © 2006-2024, Abiquo Holdings SL. All rights reserved