This document describes how to use the migration tool to migrate from vCenter hosts to vCenter Cluster as hosts. Please contact Abiquo Support to obtain this tool.

Procedure

Prepare your environment

  1. The vCenter Cluster plugin requires a configured dvSwitch on each cluster, so make sure that the distributed virtual switch is available on each cluster beforehand.
  2. Double-check your environment's disk usage to make sure that no partitions are full or close to full (see the example command after this list).
  3. Check the licenses installed in your environment:
    1. Your license must have the 'vCenter Cluster' plugin enabled.
    2. Your license must allow at least as many cores as your clusters currently have.
  4. Check that there are no database inconsistencies. From the CLI for your database, run the following queries; they should return no results:
    1. Check for VMs that are not allocated but have a datastore assigned:
      1. SELECT vm.name, dm.idManagement, dm.idDatastore, vm.idHypervisor FROM kinton.disk_management dm, kinton.rasd_management rm, kinton.virtualmachine vm WHERE dm.idManagement = rm.idManagement AND rm.idVM = vm.idVM AND vm.state = 'NOT_ALLOCATED' AND dm.idDatastore IS NOT NULL;
    2. Check for VMs that are not allocated but have a hypervisor assigned:
      1. SELECT vm.name, dm.idManagement, dm.idDatastore, vm.idHypervisor FROM kinton.disk_management dm, kinton.rasd_management rm, kinton.virtualmachine vm WHERE dm.idManagement = rm.idManagement AND rm.idVM = vm.idVM AND vm.state = 'NOT_ALLOCATED' AND vm.idHypervisor IS NOT NULL;
    3. Check for disks stored in datastores that reference non-existent virtual machines:
      1. SELECT dm.idManagement, dm.idDatastore, dm.type, dm.label, SUBSTRING(pm.name,1,7) as pmName FROM kinton.disk_management AS dm INNER JOIN kinton.rasd_management AS rm INNER JOIN kinton.scheduled_resources AS sr INNER JOIN kinton.physicalmachine AS pm WHERE dm.idDatastore = sr.idDatastore AND sr.idPhysicalMachine = pm.idPhysicalMachine AND dm.idManagement = rm.idManagement AND rm.idVM IS NULL;
    4. Check for virtual machines whose disks are stored in datastores that will be deleted along with their associated physical machines:
      1. SELECT vm.name, h.idPhysicalMachine, dm.idDatastore FROM kinton.disk_management AS dm, kinton.rasd_management AS rm, kinton.virtualmachine AS vm, kinton.hypervisor AS h WHERE rm.idManagement = dm.idManagement AND rm.idVM = vm.idVM AND vm.idHypervisor = h.id AND dm.idDatastore IN (SELECT sr.idDatastore FROM kinton.physicalmachine AS pm, kinton.scheduled_resources AS sr WHERE pm.idPhysicalMachine != h.idPhysicalMachine AND sr.idPhysicalMachine = pm.idPhysicalMachine);
  5. Check that there are no additional database inconsistencies. From the CLI for your database, run the following query; there should be no NULL values in the idVM column:
    1. Check for disk management entries that reference datastores but are not assigned to a virtual machine:
      1. SELECT dm.idManagement, dm.idDatastore, rm.idVM FROM kinton.disk_management AS dm INNER JOIN kinton.rasd_management AS rm WHERE dm.idManagement = rm.idManagement AND idVM IS NULL AND idDatastore IS NOT NULL;
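
For the disk usage review in step 2 above, a quick command-line sketch follows; the 90% threshold is only an illustrative example, not an Abiquo requirement:

# Show disk usage on each Abiquo server and flag filesystems above ~90% use
df -hP
df -hP | awk 'NR>1 && $5+0 >= 90 {print $6" is at "$5}'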

Step by step migration

To upgrade your currently added vCenter hypervisors to vCenter cluster, the process is as follows:

  1. Familiarize yourself with the migration tool. Please execute it without parameters to see the available options:
    1. # java -jar vcenter-cluster-upgrade-0.3.jar
       -apipass (--api-password) VAL                             : abiquo API login password
       -apiuser (--api-user) VAL                                 : abiquo API login user
       -dcid (--datacenter-id) VAL                               : id of the datacenter to migrate
       -oauthAccessToken (--oauth-access-token) VAL              : abiquo API oauth access token
       -oauthAccessTokenSecret (--oauth-access-token-secret) VAL : abiquo API oauth access token secret
       -oauthConsumerKey (--oauth-consumer-key) VAL              : abiquo API oauth consumer key
       -oauthConsumerSecret (--oauth-consumer-secret) VAL        : abiquo API oauth consumer secret
       -op (--option) VAL                                        : operation to perform [check, apply]
       -passfile (--password-file) VAL                           : file with passwords to vcenter
       -url (--api-url) VAL                                      : abiquo API base url
  2. Get the following information from your environment:
    1. API_ENDPOINT: FQDN (and optionally the port) of the Abiquo API.
      • For example: https://dev.myabiquo.com/api
    2. API_USER: an enabled administrator account from the Abiquo environment where the migration will take place.
      • For example: admin
    3. API_PASS: the password for the previous user account.
    4. DC_ID: unique database ID of the Abiquo datacenter you are about to migrate. One way to obtain this is to query the Abiquo database directly and get the appropriate integer from the idDataCenter column:

      SELECT idDataCenter, situation FROM kinton.datacenter;
  3. Create a temporary file with the appropriate vCenter credentials; this is the passfile parameter for the migration tool. The file must follow these guidelines:
    1. It must have 3 columns, separated by a single space.
    2. The first column must be the vCenter IP address.
    3. The second and third columns must contain the vCenter credentials for the cloud admin.
  4. An example of this file is as follows:
    1. VCENTER_IP1 VCENTER_USER1 VCENTER_PASS1
      VCENTER_IP2 VCENTER_USER2 VCENTER_PASS2
  5. Check that the tool has network access to both the Abiquo API and the appropriate vCenter (see the example checks after this procedure).
  6. Execute the tool in dry-run mode (op check) and confirm that there are no errors, replacing API_ENDPOINT, API_USER, API_PASS, DC_ID and PASS_FILE with the appropriate values from the previous steps:
    1. # java -jar vcenter-cluster-upgrade-0.3.jar  -op check -url 'API_ENDPOINT' -apiuser 'API_USER' -apipass 'API_PASS' -dcid 'DC_ID' -passfile 'PASS_FILE'
      
      Using datacenter dc
      machine 10.95.9.23 [10.95.9.242] - 10.95.9.23 cluster[my-cluster-70]
      machine 10.95.9.27 [10.95.9.242] - host1.test.example.org cluster[Functionals-Cluster]
      machine 10.95.9.56 [10.95.9.242] - host2.test.example.org cluster[Functionals-Cluster]
      machine 10.95.9.29 [10.95.9.242] - 10.95.9.29 cluster[my-Cluster-65]
      cluster my-cluster-70 in rack test
      cluster my-Cluster-65 in rack test
      cluster Functionals-Cluster in rack test
      netIface Functionals DVS to NST http://localhost:8009/api/admin/datacenters/1/networkservicetypes/1
      netIface vSwitch0 to NST null
      netIface DC1dvSwitchQA to NST http://localhost:8009/api/admin/datacenters/1/networkservicetypes/1
      datastore 50dd038e-939b-4bec-a8c2-d32cffe92fdd to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 71a9fa3c-9edb-4bd1-8a69-323b0cb6f4f9 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 221a1948-aa5f-49f4-9977-7a517edc07b8 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 5b153bce-89ad-4845-b28f-18afc3db69fb to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 1bac4142-c2e2-468e-bebd-3a25e7141c26 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 5be3393d-d221-4e8a-90a4-e3b221bbdd3d to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore aa22a1ed-3bd9-4fe5-8170-5cb7d28d4e1a to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore acbc296b-a8d3-4986-b115-9d2aa4b47cd4 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore c6afd0eb-1850-4cd2-b3dc-cde8dfc05d5f to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 5cae7c27-37e7-40f1-ab92-9018f360e4b8 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 06f0ee15-ac23-4622-8454-4626a68ae876 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      ----------------- WARNING, this info will not be recreated -----------------
      done. review and run with '' -op=apply '' 
  7. Execute the tool again, changing the op parameter from check to apply, and replacing API_ENDPOINT, API_USER, API_PASS, DC_ID and PASS_FILE with the appropriate values from the previous steps:
    1. # java -jar vcenter-cluster-upgrade-0.3.jar  -op apply -url 'API_ENDPOINT' -apiuser 'API_USER' -apipass 'API_PASS' -dcid 'DC_ID' -passfile 'PASS_FILE'
      
      Using datacenter dc
      machine 10.95.9.23 [10.95.9.242] - 10.95.9.23 cluster[my-cluster-70]
      machine 10.95.9.27 [10.95.9.242] - host1.test.example.org cluster[Functionals-Cluster]
      machine 10.95.9.56 [10.95.9.242] - host2.test.example.org cluster[Functionals-Cluster]
      machine 10.95.9.29 [10.95.9.242] - 10.95.9.29 cluster[my-Cluster-65]
      cluster my-cluster-70 in rack test
      cluster my-Cluster-65 in rack test
      cluster Functionals-Cluster in rack test
      netIface Functionals DVS to NST http://localhost:8009/api/admin/datacenters/1/networkservicetypes/1
      netIface vSwitch0 to NST null
      netIface DC1dvSwitchQA to NST http://localhost:8009/api/admin/datacenters/1/networkservicetypes/1
      datastore 50dd038e-939b-4bec-a8c2-d32cffe92fdd to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 71a9fa3c-9edb-4bd1-8a69-323b0cb6f4f9 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 221a1948-aa5f-49f4-9977-7a517edc07b8 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 5b153bce-89ad-4845-b28f-18afc3db69fb to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 1bac4142-c2e2-468e-bebd-3a25e7141c26 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 5be3393d-d221-4e8a-90a4-e3b221bbdd3d to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore aa22a1ed-3bd9-4fe5-8170-5cb7d28d4e1a to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore acbc296b-a8d3-4986-b115-9d2aa4b47cd4 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore c6afd0eb-1850-4cd2-b3dc-cde8dfc05d5f to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 5cae7c27-37e7-40f1-ab92-9018f360e4b8 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      datastore 06f0ee15-ac23-4622-8454-4626a68ae876 to tier http://localhost:8009/api/admin/datacenters/1/datastoretiers/1
      ----------------- WARNING, this info will not be recreated -----------------
      going to delete hypervisors
      Waiting confirmation of all the monitors released
      added cluster 10.95.9.242-domain-c241
      added cluster 10.95.9.242-domain-c305
      added cluster 10.95.9.242-domain-c7
      Getting remote virtualmachines from 10.95.9.242-domain-c241
      Getting remote virtualmachines from 10.95.9.242-domain-c305
      Getting remote virtualmachines from 10.95.9.242-domain-c7
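
For the connectivity check in step 5 above, a minimal sketch is shown here. API_ENDPOINT, API_USER, API_PASS and VCENTER_IP1 are the same placeholders used above; the /admin/datacenters path matches the API links in the tool output, and /sdk is the standard vSphere Web Services endpoint:

# Confirm that the Abiquo API answers (expect an HTTP 200)
curl -k -u 'API_USER:API_PASS' -o /dev/null -w '%{http_code}\n' 'API_ENDPOINT/admin/datacenters'
# Confirm that each vCenter listed in the passfile is reachable over HTTPS
curl -k -o /dev/null -w '%{http_code}\n' 'https://VCENTER_IP1/sdk'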

Known errors

  • Error 409. MACHINE-21 - At least one network interface must have a network service type assigned: your cluster must have a distributed virtual switch configured in Abiquo before starting the migration, as stated above.
  • Error 500 Internal Server Error - Unexpected exception: check the api.log file for details (see the example below).
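
To review the API log while reproducing an error, you can tail it on the Abiquo API server; the path below is the usual default location, so adjust it if your installation differs:

# Follow the Abiquo API log in real time
tail -f /opt/abiquo/tomcat/logs/api.log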

Important considerations

  • Resources in infrastructure will be changed from VMX_04 to VCENTER_CLUSTER.
  • The VDC hypervisor type will change from VMX_04 to VCENTER_CLUSTER (registered on the Events tab).
  • The migration process will automatically retrieve all VMs from all hosts.
  • From Abiquo 4.5.1 onwards, you do not need to recapture VMs. After you complete the migration, check your VMs.
  • In Abiquo 4.7.6 and from 5.1 onwards, the platform can proceed with the migration even when there are protected VMs.

If you are using the Networker plugin...

After you run the migration tool, if you are using Networker, edit the networker.properties file and change the hosts parameters to use the name of the vCenter cluster instead of the IP addresses of the hosts. For example, if you previously had the following lists of ESXi hosts defined:

abiquo.networker.siteA.hosts=esxi1.example.com,esxi2.example.com
abiquo.networker.siteB.hosts=esxi3.example.com,esxi4.example.com

You would now change the lists of hosts to the names of the vCenter clusters.

abiquo.networker.siteA.hosts=vCenter-Cluster1.example.com-domain-c1
abiquo.networker.siteB.hosts=vCenter-Cluster2.example.com-domain-c2

After you change these properties, restart the abiquo-tomcat process (unless you are using the reloadable property functionality).
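
For reference, a restart sketch assuming the default abiquo-tomcat service name; use whichever service manager your system provides:

# Restart the Abiquo Tomcat service
service abiquo-tomcat restart
# or, on systemd-based systems
systemctl restart abiquo-tomcat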

If you had previously captured VMs, after you complete the migration, check the VMs.
