Introduction
In some cases, you may want to provide Abiquo access to different departments or organisations, each with its own DNS subdomain and a customised Abiquo UI theme. There are several ways to achieve this setup depending on your needs and available resources, but this document covers a simple path that requires minimal resources.
Sample Configuration
Scenario
This setup is described assuming the scenario below. Note that you will need to adjust the commands in this document to suit your environment (mostly host names and IP addresses).
| Machine | Host name | IP Address |
|---|---|---|
| Abiquo Monolithic & Apache2 SSL front-end | api.example.com | 192.168.1.100 |
| Datacenter2 Remote Services | dc2rs.example.com | 192.168.1.150 |
Setup disclaimer
- Users must be able to reach api.example.com, theme1.example.com and theme2.example.com (all DNS names must be resolvable); a sample hosts file for testing is shown after this list.
- The Abiquo API and Remote Services must be able to reach each other by all of these DNS names.
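If DNS records are not yet in place, a quick way to test name resolution is with temporary /etc/hosts entries on the machines involved. The addresses below come from the sample scenario and should be adapted to your environment; in production you should use real DNS records instead.

# /etc/hosts entries for testing only; replace with real DNS records in production
192.168.1.100   api.example.com theme1.example.com theme2.example.com
192.168.1.150   dc2rs.example.com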
With the environment described, the following sections cover the changes needed to achieve Abiquo UI theming per subdomain.
Abiquo UI configuration
In order to provide a different theme per subdomain, we need to make some changes to the default UI folder structure, which by default looks like this:
# tree /var/www/html/ui/
.
├── config
│   └── client-config.json
...
├── theme
│   └── abicloudDefault
...
We need to create a configuration file and a theme folder for each subdomain. In the sample scenario the structure will look like this:
# tree /var/www/html/ui/
.
├── config
│   ├── theme1.json
│   └── theme2.json
...
├── theme1
│   └── abicloudDefault
├── theme2
│   └── abicloudDefault
...
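One way to create this layout is to copy the default configuration file and theme folder once per subdomain and then customise the copies. The commands below are only a sketch of that idea, using the paths from the sample scenario:

# Copy the default UI config once per subdomain
cd /var/www/html/ui
cp config/client-config.json config/theme1.json
cp config/client-config.json config/theme2.json
# Copy the default theme folder once per subdomain and customise each copy
cp -r theme theme1
cp -r theme theme2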
You can have a different property configuration in each config JSON file, but the most important point is that the config.endpoint property points to the API URL of its related subdomain:
grep endpoint /var/www/html/ui/config/*
/var/www/html/ui/config/theme1.json:    "config.endpoint": "https://theme1.example.com/api",
/var/www/html/ui/config/theme2.json:    "config.endpoint": "https://theme2.example.com/api",
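For illustration, a stripped-down theme1.json could contain just the endpoint override shown below. In practice you would keep whatever other properties your client-config.json already defines; this minimal content is an assumption for the example, not the full default file.

{
    "config.endpoint": "https://theme1.example.com/api"
}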
Now place your customised themes in the Abiquo UI root folder with suitable names and ensure that the structure matches the one described above.
Apache SSL front-end
Apache will provide the SSL layer for the Abiquo components and serve the Abiquo UI. We need to tell Apache to use the NameVirtualHost directive, as we want to serve different content depending on the hostname being accessed.
Ensure that the NameVirtualHost directive is enabled for both the HTTP and HTTPS ports:
# grep "NameVirtualHost" /etc/httpd/conf/httpd.conf NameVirtualHost *:80 NameVirtualHost *:443
We need a VirtualHost configuration file for each subdomain, plus one for the API and the Appliance Managers:
# tree /etc/httpd/conf.d/
/etc/httpd/conf.d/
├── api.conf
├── theme1.conf
└── theme2.conf
First of all, let's have a look at api.conf. This VirtualHost configuration file groups SSL access to the API endpoint and all Abiquo Appliance Manager webapps (api.conf):
<VirtualHost *:80>
    RewriteEngine On
    RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
</VirtualHost>

<VirtualHost *:443>
    ServerName api.example.com
    RewriteEngine On
    ProxyRequests Off
    ProxyPreserveHost On

    # Avoid CORS when uploading a template from different domains
    <IfModule mod_headers.c>
        SetEnvIfNoCase Origin "https?://(api\.example\.com|theme1\.example\.com|theme2\.example\.com|dc2rs\.example\.com)(:\d+)?$" AccessControlAllowOrigin=$0
        Header set Access-Control-Allow-Origin %{AccessControlAllowOrigin}e env=AccessControlAllowOrigin
    </IfModule>

    # Subdomain1 download RewriteRule
    RewriteCond %{HTTP_REFERER} ^https://theme1\.example\.com/ui/ [NC]
    RewriteCond %{REQUEST_URI} ^/(.*)/files/.*$ [NC]
    RewriteRule /(.*)/files/(.*) https://theme1.example.com/$1/files/$2 [R,L]

    # Subdomain2 download RewriteRule
    RewriteCond %{HTTP_REFERER} ^https://theme2\.example\.com/ui/ [NC]
    RewriteCond %{REQUEST_URI} ^/(.*)/files/.*$ [NC]
    RewriteRule /(.*)/files/(.*) https://theme2.example.com/$1/files/$2 [R,L]

    <Location /api>
        ProxyPass ajp://localhost:8010/api retry=0
        ProxyPassReverse ajp://localhost:8010/api
    </Location>

    # All Abiquo Appliance Managers managed in each datacenter
    # Datacenter1 Appliance Manager
    <Location /am>
        ProxyPass ajp://192.168.1.100:8010/am retry=0 timeout=1800
        ProxyPassReverse ajp://192.168.1.100:8010/am
    </Location>

    # Datacenter2 Appliance Manager
    <Location /am-barcelona>
        ProxyPass ajp://192.168.1.150:8010/am retry=0 keepalive=On timeout=1800
        ProxyPassReverse ajp://192.168.1.150:8010/am
    </Location>

    SSLEngine on
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
    SSLCertificateFile /etc/httpd/ssl/example.com.pem
    SSLCertificateKeyFile /etc/httpd/ssl/example.com.key

    CustomLog /var/log/httpd/api-access.log combined
    ErrorLog /var/log/httpd/api-error.log
</VirtualHost>
There are a few things we need to highlight about api.conf:
<IfModule mod_headers.c> ... </IfModule> section: This section deals with the browser's CORS protection when different subdomains interact for the template upload functionality. If new datacenters or subdomains are added to the environment, this section must be modified to allow the new subdomains, as in the sketch below.
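For instance, to allow a hypothetical theme3.example.com subdomain (this subdomain is only an illustration and is not part of the sample scenario), the Origin expression would be extended like this:

# Hypothetical example: theme3.example.com added to the allowed CORS origins
SetEnvIfNoCase Origin "https?://(api\.example\.com|theme1\.example\.com|theme2\.example\.com|theme3\.example\.com|dc2rs\.example\.com)(:\d+)?$" AccessControlAllowOrigin=$0
Header set Access-Control-Allow-Origin %{AccessControlAllowOrigin}e env=AccessControlAllowOrigin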
Subdomain X download RewriteRule section: This section deals with the browser's CORS protection when different subdomains interact for the template download functionality. If new subdomains are added to the environment, you will need to create the corresponding RewriteCond and RewriteRule directives to allow the download, following the sketch below.
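Following the same pattern, the download rules for the hypothetical theme3.example.com subdomain mentioned above would look like this:

# Hypothetical Subdomain3 download RewriteRule
RewriteCond %{HTTP_REFERER} ^https://theme3\.example\.com/ui/ [NC]
RewriteCond %{REQUEST_URI} ^/(.*)/files/.*$ [NC]
RewriteRule /(.*)/files/(.*) https://theme3.example.com/$1/files/$2 [R,L]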
Datacenter X Appliance Manager section: This section provides the SSL layer for the Appliance Manager webapp in each datacenter. If the Abiquo UI is served over SSL, the Appliance Manager endpoint must also be accessed over SSL to avoid the browser's mixed-content protection. For this reason, it is good practice to proxy all Appliance Manager requests through the Apache2 front-end, which provides the SSL layer. If new datacenters are added to the environment, you will need to create a corresponding Location section for each new datacenter's Appliance Manager, as in the sketch below.
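For example, a hypothetical third datacenter whose Remote Services host were 192.168.1.200 (an assumed address, not part of the sample scenario) could be proxied with a block like this:

# Hypothetical Datacenter3 Appliance Manager (the IP address and path are assumptions)
<Location /am-datacenter3>
    ProxyPass ajp://192.168.1.200:8010/am retry=0 keepalive=On timeout=1800
    ProxyPassReverse ajp://192.168.1.200:8010/am
</Location>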
There are other sections and parameters, such as the certificate configuration, the Apache log files, and extra ProxyPass options such as retry, keepalive and timeout, that can be adjusted depending on your environment. Refer to the Apache documentation for further information.
Now we should create a VirtualHost configuration file for each subdomain. All the files will look almost the same:
theme1.conf
<VirtualHost *:80>
    RewriteEngine On
    RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
</VirtualHost>

<VirtualHost *:443>
    ServerName theme1.example.com
    RewriteEngine On
    ProxyRequests Off
    ProxyPreserveHost On

    <Directory "/var/www/html/ui">
        Options MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    RewriteRule ^/$ /ui/ [R]

    # Theme and config AliasMatch
    AliasMatch ^/ui/theme/(.*)$ /var/www/html/ui/theme1/$1
    AliasMatch ^/ui/config/client-config.json /var/www/html/ui/config/theme1.json

    <Location /api>
        ProxyPass ajp://192.168.1.100:8010/api retry=0
        ProxyPassReverse ajp://192.168.1.100:8010/api
    </Location>

    <Location /m>
        ProxyPass ajp://192.168.1.100:8010/m retry=0
        ProxyPassReverse ajp://192.168.1.100:8010/m
    </Location>

    <Location /am>
        ProxyPass ajp://192.168.1.100:8010/am retry=0 timeout=1800
        ProxyPassReverse ajp://192.168.1.100:8010/am
    </Location>

    <Location /am-barcelona>
        ProxyPass ajp://192.168.1.150:8010/am retry=0 keepalive=On timeout=1800
        ProxyPassReverse ajp://192.168.1.150:8010/am
    </Location>

    <Location /legal>
        ProxyPass ajp://192.168.1.100:8010/legal retry=0
        ProxyPassReverse ajp://192.168.1.100:8010/legal
    </Location>

    SSLEngine on
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
    SSLCertificateFile /etc/httpd/ssl/example.com.pem
    SSLCertificateKeyFile /etc/httpd/ssl/example.com.key

    CustomLog /var/log/httpd/theme1-access.log combined
    ErrorLog /var/log/httpd/theme1-error.log
</VirtualHost>
theme2.conf
<VirtualHost *:80>
    RewriteEngine On
    RewriteRule .* https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
</VirtualHost>

<VirtualHost *:443>
    ServerName theme2.example.com
    RewriteEngine On
    ProxyRequests Off
    ProxyPreserveHost On

    <Directory "/var/www/html/ui">
        Options MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    RewriteRule ^/$ /ui/ [R]

    # Theme and config AliasMatch
    AliasMatch ^/ui/theme/(.*)$ /var/www/html/ui/theme2/$1
    AliasMatch ^/ui/config/client-config.json /var/www/html/ui/config/theme2.json

    <Location /api>
        ProxyPass ajp://192.168.1.100:8010/api retry=0
        ProxyPassReverse ajp://192.168.1.100:8010/api
    </Location>

    <Location /m>
        ProxyPass ajp://192.168.1.100:8010/m retry=0
        ProxyPassReverse ajp://192.168.1.100:8010/m
    </Location>

    <Location /am>
        ProxyPass ajp://192.168.1.100:8010/am retry=0 timeout=1800
        ProxyPassReverse ajp://192.168.1.100:8010/am
    </Location>

    <Location /am-barcelona>
        ProxyPass ajp://192.168.1.150:8010/am retry=0 keepalive=On timeout=1800
        ProxyPassReverse ajp://192.168.1.150:8010/am
    </Location>

    <Location /legal>
        ProxyPass ajp://192.168.1.100:8010/legal retry=0
        ProxyPassReverse ajp://192.168.1.100:8010/legal
    </Location>

    SSLEngine on
    SSLProtocol all -SSLv2
    SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
    SSLCertificateFile /etc/httpd/ssl/example.com.pem
    SSLCertificateKeyFile /etc/httpd/ssl/example.com.key

    CustomLog /var/log/httpd/theme2-access.log combined
    ErrorLog /var/log/httpd/theme2-error.log
</VirtualHost>
The only differences are the ServerName directive and the AliasMatch rules required to apply the desired theme per subdomain:
theme1.example.com
ServerName theme1.example.com
AliasMatch ^/ui/theme/(.*)$ /var/www/html/ui/theme1/$1
AliasMatch ^/ui/config/client-config.json /var/www/html/ui/config/theme1.json
theme2.example.com
ServerName theme2.example.com
AliasMatch ^/ui/theme/(.*)$ /var/www/html/ui/theme2/$1
AliasMatch ^/ui/config/client-config.json /var/www/html/ui/config/theme2.json
After adding the files, the Apache configuration directory should look like this (ssl.conf is the default configuration file shipped with mod_ssl):

# tree /etc/httpd/conf.d/
/etc/httpd/conf.d/
├── api.conf
├── ssl.conf
├── theme1.conf
└── theme2.conf
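Once all the files are in place, it is good practice to check the Apache configuration syntax and restart the service. The commands below are the standard httpd commands on the CentOS/RHEL 6 style system assumed in this scenario; the first checks the syntax and the second restarts Apache to load the new VirtualHosts:

# apachectl configtest
# service httpd restart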
In the sample scenario, the Apache2 SSL front-end is also running the API and the Remote Services for Datacenter 1. As indicated previously, this is only a sample, but you can easily extrapolate the required configuration to your own environment.
Remote Services high availability
Remote services 1 and 2 will be the nodes forming our cluster, serving the remote services webapps on the cluster-wide IP address. This IP will switch from one server to the other in the case of a failure. Redis needs to be on a separate host, as its data must be shared by both remote services machines. Describing a highly available setup for Redis is out of the scope of this document; if you want to run Redis in such a setup, please refer to the Redis documentation.
It is assumed in this guide that all the machines can reach each other using either the IP or the short name (via hosts file or DNS). Also, it is assumed that both rsha1 and rsha2 have Abiquo remote services installed on them. Follow the remote services installation guide to install them. You also need to keep the abiquo.properties configuration file synced on both RS nodes, which in our case is:
[remote-services]
abiquo.appliancemanager.localRepositoryPath = /opt/vm_repository
abiquo.appliancemanager.repositoryLocation = 10.60.1.72:/opt/vm_repository
abiquo.datacenter.id = rsha
abiquo.rabbitmq.host = 10.60.13.28
abiquo.rabbitmq.password = guest
abiquo.rabbitmq.port = 5672
abiquo.rabbitmq.username = guest
abiquo.redis.host = 10.60.13.59
abiquo.redis.port = 6379
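To keep abiquo.properties identical on both nodes, you can simply copy it from the node where it was edited to the other node whenever it changes; a plain scp (or any configuration management tool) is enough. The path below is the standard Abiquo properties location on an RS node and rsha2 is the peer node from this scenario:

# Copy the properties file from the node where it was edited to the other node
scp /opt/abiquo/config/abiquo.properties root@rsha2:/opt/abiquo/config/abiquo.properties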
Installation of the cluster stack
Abiquo recommends that you use Clusterlabs' Pacemaker as the cluster resource manager. As described on the Clusterlabs site, on Red Hat-based distributions the Pacemaker stack uses CMAN for cluster communication. The steps below are extracted from Pacemaker's quick start guide for Red Hat systems. We will use the same conventions as Pacemaker's guide, that is, [ALL] # denotes a command that needs to be run on all cluster machines, and [ONE] # indicates a command that only needs to be run on one cluster host.
So, start by installing pacemaker and all the needed tools:
[ALL] # yum install pacemaker cman pcs ccs resource-agents
Next, create a CMAN cluster and populate it with your nodes:
[ONE] # ccs -f /etc/cluster/cluster.conf --createcluster pacemaker1
[ONE] # ccs -f /etc/cluster/cluster.conf --addnode rsha1
[ONE] # ccs -f /etc/cluster/cluster.conf --addnode rsha2
Then you need to configure cluster fencing, even if you don't use it:
[ONE] # ccs -f /etc/cluster/cluster.conf --addfencedev pcmk agent=fence_pcmk
[ONE] # ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect rsha1
[ONE] # ccs -f /etc/cluster/cluster.conf --addmethod pcmk-redirect rsha2
[ONE] # ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk rsha1 pcmk-redirect port=rsha1
[ONE] # ccs -f /etc/cluster/cluster.conf --addfenceinst pcmk rsha2 pcmk-redirect port=rsha2
CMAN was originally written for rgmanager and assumes the cluster should not start until the node has quorum, so before trying to start the cluster, disable this behavior:
[ALL] # echo "CMAN_QUORUM_TIMEOUT=0" >> /etc/sysconfig/cman
Now you are ready to start up your cluster:
[ALL] # service cman start
[ALL] # service pacemaker start
[ALL] # chkconfig cman on
[ALL] # chkconfig pacemaker on
Setting basic cluster options
With so many devices and possible topologies, it is nearly impossible to include Fencing in a document like this. For now, disable it.
[ONE] # pcs property set stonith-enabled=false
As we are using a 2-node setup, the concept of quorum does not make sense, as you can't have more than half of the nodes available in case of a failure, so disable it too:
[ONE] # pcs property set no-quorum-policy=ignore
Also, we will set a resource stickiness value, which prevents the resources from being moved back to the original host when the cluster recovers from a failure:
[ONE] # pcs resource defaults resource-stickiness=100
Adding resources
Up to this point you have a functional cluster, but it is not yet managing any resources. We will add a resource to the cluster for every component needed to run the Abiquo services, but as the cluster will be in charge of starting the abiquo-tomcat service, first stop it and disable it at boot time:
[ALL] # service abiquo-tomcat stop
[ALL] # chkconfig abiquo-tomcat off
Now, start adding resources to the cluster:
[ONE] # pcs resource create vm_repository ocf:heartbeat:Filesystem device=10.60.1.72:/opt/vm_repository directory=/opt/vm_repository fstype=nfs options=defaults
[ONE] # pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=10.60.13.58 cidr_netmask=24
[ONE] # pcs resource create tomcat-service lsb:abiquo-tomcat
We've added 3 resources here:
- A filesystem mount, responsible for mounting the NFS template repository on the active node.
- An IP address (cluster IP) that will switch to the remaining node in case of failure.
- Finally, the abiquo-tomcat init script that needs to be run to start or stop the Abiquo tomcat service.
Then set up a group of resources so that these resources always run on the same machine:
[ONE] # pcs resource group create abiquo_rs vm_repository ClusterIP tomcat-service
Note that the order of the names in the command also determines the startup and shutdown order of the group. In this example, the cluster will first mount the NFS share, then bring up the cluster IP, and then start the Tomcat service. The shutdown order is the reverse.
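At this point you can confirm that the group exists and that its resources are listed in the expected start order. The standard pcs status command shows the resource groups; look for a Resource Group: abiquo_rs entry with vm_repository, ClusterIP and tomcat-service in that order:

[ONE] # pcs status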
This alone will suffice for the cluster to switch the resource group from node to node in case of a crash. In the case of a network failure, though, services might end up running on both machines, because the nodes have no way to contact each other to determine the status, causing a "split brain" situation. To avoid this, add an extra resource that pings your gateway IP address and shuts down services in case of a network failure:
[ONE] # pcs resource create ping ocf:pacemaker:ping host_list=10.60.13.1 timeout=5 attempts=3
[ONE] # pcs resource clone ping connection
And lastly, add a colocation constraint so the tomcat service, cluster IP and NFS mount are located on the node with a successful ping:
[ONE] # pcs constraint colocation add abiquo_rs with ping score=INFINITY
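You can list the configured constraints to confirm that the colocation rule was added; running pcs constraint with no arguments prints them:

[ONE] # pcs constraint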
And that's it! You can check the current status of the cluster with the crm_mon command:
[root@rsha1 ~]# crm_mon -1
Last updated: Thu Sep 18 01:35:00 2014
Last change: Wed Sep 17 04:53:07 2014 via cibadmin on rsha1
Stack: cman
Current DC: rsha2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured
5 Resources configured

Online: [ rsha1 rsha2 ]

 Resource Group: abiquo_rs
     vm_repository      (ocf::heartbeat:Filesystem):    Started rsha1
     ClusterIP          (ocf::heartbeat:IPaddr2):       Started rsha1
     tomcat-service     (lsb:abiquo-tomcat):            Started rsha1
 Clone Set: ping-clone [ping]
     Started: [ rsha1 rsha2 ]
[root@rsha1 ~]#
Running crm_mon -1 prints the info and quits. Running crm_mon with no arguments will enter a "watch" mode that will periodically refresh the info.
You can now add the RS cluster to Abiquo, remembering to use the cluster IP to ensure the failover is available.