The PIM migrator performs the provider ID migration in the compute, backup, and networking providers via the Remote Services servers.
It updates the VMs, backups, firewalls, and load balancers in the providers and in Redis.
It also tests the migration in dry run mode.
What does it output?
An SQL file to run on the kinton database
A log in standard output (you should redirect it to a file)
PIM migrator requirements
The PIM migrator requires the following.
The migration plan from the PIM planner
Access to:
the Redis instance of the Remote Services server (a connectivity check is sketched after this list)
the hypervisors in the datacenter
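For example, a minimal sketch of a connectivity check from the Remote Services server to its Redis instance, assuming a local Redis on the default port (adjust the host and port for your environment):
Code Block redis-cli -h localhost -p 6379 ping
A reply of PONG confirms that Redis is reachable.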
Before you run the PIM migrator
Do these steps.
Start the Abiquo upgrade to 5.3 as usual, including:
Stop the Abiquo platform
Create backups including snapshots, database dump, and Redis dump (see the sketch after this list)
Upgrade the database
STOP the upgrade
Run the PIM planner and obtain the migration plan
Run the PIM migrator
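As a minimal sketch of the backup step, assuming the default kinton database name, database credentials configured for the root user, and Redis writing its dump to /var/lib/redis/dump.rdb (follow your standard Abiquo backup procedure in practice):
Code Block
# on the Abiquo database server: dump the kinton database
mysqldump kinton > kinton_backup.sql
# on each Remote Services server: force a Redis snapshot and copy the dump file
redis-cli save
cp /var/lib/redis/dump.rdb /root/redis_dump_backup.rdb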
Do these steps to run the PIM migrator.
Warning: Before you run the PIM migrator on your production servers:
Log in to ALL remote services servers and do these steps on EACH server.
Install the PIM tools
Code Block yum install abiquo-pim-tools
The default install folder is /opt/abiquo/pim-tools
Obtain the datacenter-id of the Remote Services server from the value of the abiquo.datacenter.id property in the abiquo.properties file.
Code Block abiquo.datacenter.id=abq_dc1
In this case, the value of the datacenter-id parameter will be "abq_dc1"
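For example, assuming the standard properties file location of /opt/abiquo/config/abiquo.properties (adjust the path if your installation differs):
Code Block grep abiquo.datacenter.id /opt/abiquo/config/abiquo.properties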
Run the PIM migrator in dry run mode, which is the default mode that doesn't make any changes.
The "-noseed" parameter is required. The default value is false, which means the migrator will use the platform's default seed file. You can specify a seed file with the "-seed" parameter.
We recommend that you save the standard output log to a file and give the output files names that identify the Remote Services server.
For example
Code Block java -jar /opt/abiquo/pim-tools/pimmigrator.jar -dc=abq_dc1 -redishost=localhost -plan=migration-plan.data -noseed -output=update_DC1.sql | tee pimmigrator_dry_run_DC1.log
Check the output file. If there are any errors or warnings, resolve them.
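For example, to quickly scan the dry run log for problems, assuming the log file name used above:
Code Block grep -iE "error|warn" pimmigrator_dry_run_DC1.log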
If necessary, contact Abiquo Support.
Run the PIM migrator in update mode by setting the "no dry run" option to true.
Code Block java -jar /opt/abiquo/pim-tools/pimmigrator.jar -nodry -dc=abq_dc1 -redishost=localhost -plan=migration-plan.data -noseed -output=update_DC1.sql | tee pimmigrator_DC1.log
Copy the SQL file from the remote services server to the Abiquo database server.
For example
Code Block scp update_DC1.sql root@my.database.server:~/
After you run the migrator on ALL Remote Services servers, run the SQL upgrades on the database server as described in the next step.
Update the Abiquo database server
Update the Abiquo database with all of the update.sql files. For example
Code Block
mysql kinton < update_DC1.sql
mysql kinton < update_DC2.sql
...
Now continue with the Abiquo upgrade
PIM migrator options
Option | Alias | Required | Description |
---|---|---|---|
-dc | --datacenter-id | yes | ID of the current datacenter to migrate |
-nodry | --no-dry-run | no | Set to 'true' in order to perform the changes (default: false) |
-noseed | --no-seed | no | If true, don't use a seed file and ignore the seed property value (default: false) |
-output | --output-file | no | SQL output file (by default 'update.sql') |
-plan | --migration-plan-file | yes | Migration plan data file |
-redishost | --redis-host | yes | Redis server host for this datacenter |
-redisport | --redis-port | no | Redis server port for this datacenter (by default 6379) |
-seed | --seed-file | no | Seed file (by default '/etc/abiquo/.store') |
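For illustration only, with hypothetical values, a dry run that uses a non-default Redis port and an explicit seed file instead of the "-noseed" option might look like this:
Code Block java -jar /opt/abiquo/pim-tools/pimmigrator.jar -dc=abq_dc1 -redishost=localhost -redisport=6380 -plan=migration-plan.data -seed=/etc/abiquo/.store -output=update_DC1.sql | tee pimmigrator_dry_run_DC1.log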
PIM migrator notes
The migrator does not process the following providers.
Hyper-V
Networker
Google Cloud Platform (see note below)
Firewalls and load balancers that are not in NSX
For Google Cloud Platform, if you have VMs deployed before the upgrade, after you apply the PIM tools, do these steps:
Delete the VMs from the platform only
In the API, delete them with the "logicaldelete" parameter set to true (see the sketch after these steps). See Delete a VM API doc
Delete the virtual datacenters
Onboard the resources again
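As a sketch of the logical delete request, where the API host, credentials, and resource IDs are placeholders and the endpoint path is an assumption (check the Delete a VM API doc for the exact request in your version):
Code Block curl -X DELETE -u admin:xabiquo \
  "https://myapiserver/api/cloud/virtualdatacenters/1/virtualappliances/1/virtualmachines/1?logicaldelete=true"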