WordPress MainWP BackWPUp job creation

For WordPress administrators, creating a BackWPUp job through MainWP should be a simple process.

MainWP is a very handy console for those of us managing lots of WordPress sites.

BackWPUp is a very popular plugin for backing up those sites.

My experience has been not so supportive of either notion.

Creating a BackWPUp job via MainWP proved to be a lengthy process of diagnosing multiple bugs and ‘features’. They were eventually worked out, but with a lot of time wasted.

This was using MainWP with the MainWP Child plugin, and the BackWPUp plugin installed on both the MainWP central dashboard and the child site.

MainWP and Child Utility

I use MainWP as a centralised management tool, and it offers a connection or interface to manage BackWPUp from the central dashboard. In the main it works, but there are a number of gotchas.

Modifications are not Sync’d

The first: if you modify a backup job on the child site (the client site) and then sync from the child site to the MainWP dashboard, the changed BackWPUp job configuration is NOT updated in the MainWP dashboard; it keeps what was previously configured. (At least this is the case for the FTP transfer fields and the job name field; I did not test every setting.)

So you must make changes only from the MainWP dashboard: a ‘sending’ sync transfers the details from MainWP to the child site, but my testing shows it will not ‘get’ or receive setting changes from the child site. Management is one-way, for BackWPUp at least. I trawled the MainWP documentation but could not find any precise description of what the Sync Data process is meant to do.

Allowed Tags Do not Transfer

Secondly, the instructions in MainWP for BackWPUp are really misleading. I wasted hours on this task alone because what seemed clear and concise turned out to be wrong.

The MainWP BackWPUp page settings for the backup job name and archive name state that you can use variable tags:

Allowed tags: %sitename%, %url%, %date%, %time%

and this would appear to be straightforward. You should be able to put a variable date / time in the archive name in order to retain multiple days of backups where needed.

So I confidently applied an archive name of daily-%sitename%-%date%-%time% with the expectation that MainWP would pass this to the child site in the archive name field for the BackWPUp job.

Gotcha! MainWP converts the variables at the time you save the job in MainWP and transfers the resulting static archive filename to the child site for the BackWPUp job!

So on the child site we now have an archive filename that does NOT get a changing date / time stamp, and the archive is overwritten on a daily basis. Nasty if you think you have 7 days of backups.

Allowed Tags are Inconsistent

Finally, I tried copying and pasting the daily-%sitename%-%date%-%time% value directly into the archive file name field of the child site job. I was perplexed to get a crazy file name that included % signs and fragments of the tag names. Another gotcha!

Yep, the MainWP “Allowed Tags” only apply within MainWP, which would be fine if they worked there, but they don’t. In BackWPUp, the %sitename% parameter is interpreted as %s (seconds) followed by the literal text “itename%-”, so the seconds value is inserted into the archive name along with that fragment.

The rest of the archive filename consists of similarly part-interpreted fragments, because BackWPUp provides a completely different set of placeholders that is inconsistent with the MainWP ‘allowed tags’.
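For reference, BackWPUp’s own placeholders are PHP date()-style letters. Based on the working job below, the relevant ones are (treat this as indicative; the full list is shown in the hint under BackWPUp’s archive name field):

%Y = 4-digit year, %m = month, %d = day, %H = hour, %i = minutes, %s = seconds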

So as a last resort, I configured the child site job for BackWPUp with something like daily-thowden.com.au-%Y%m%d-%H%i so that I get a changing date / time within the archive file name.
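For a run at, say, 14:30 on 6 July 2015, that template produces an archive name like this (the extension depends on the archive format configured in the job; .tar.gz here is just an example):

daily-thowden.com.au-20150706-1430.tar.gz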

I then manually copied that from the child site into the MainWP dashboard (yes, manually; remember the sync process is one-way!). From that point on I have had a working backup cycle.

BackWPUp

Ok, so I now have a configured BackWPUp job ready to test and schedule on my child site.

BackWPUp Date Time Stamp Issue

Now that the backup would run, I checked the BackWPUp logs and found that they are incorrectly date / time stamped: the site’s configured timezone is ignored and UTC / GMT is used instead. Painful when trying to confirm the local time of events. The date / time issue also affects scheduling, which must be entered as UTC to get the right timing.
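As a worked example (assuming a site on AEST, UTC+10): a backup you want at 02:00 local time must be scheduled for 16:00 UTC on the previous day, i.e. 02:00 local less the 10 hour offset.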

When an Error is not!

Once past the date / time stamp issue, the log summary reports 2 errors when in fact the job completed successfully!

If an FTP transfer attempt fails, an initial error is flagged with ERROR: and then a second (and a third) attempt is made for the FTP transfer.

If the second transfer attempt is successful, the job completes, but because the first upload raised an ERROR: condition, the whole job is flagged as ERROR: Failed.

The crazy thing is that the final ERROR: Failed line is also counted, so 2 errors are reported in the summary of a successful completion!

Arghhh… this of course wastes a heap of time checking what are in fact successful backups.

It works! But it would be a whole lot easier and better if all the above had worked as expected.

Issue Summary

1. MainWP needs to correct their sync process so that it is bi-directional.
2. MainWP needs to correct the description of the allowed tags so that they are consistent with what BackWPUp actually uses.
3. MainWP must pass the ‘tagged’ version of the archive file name to the job rather than converting it at save time.
4. BackWPUp should fix the date / time issue so that logs and schedules are accurate and consistent with the WordPress site configuration.
5. BackWPUp should fix the ERROR reporting so that backups that recover from a failed FTP transfer attempt and complete successfully are not flagged as ERROR (twice).

Xenserver install without Local Storage

This was another Xenserver install that finished without Local Storage configured on the drive. I installed new drives in both an HP 1RU server and an HP blade server. All the drives were 1TB SATA and should have been formatted identically.

The first server created c0d0 and c0d1, where c0d1 was the remaining space of the 1TB drive installed in that HP server. The /dev/cciss/c0d1 device was created with a Xenserver UUID, and connecting to it was a simple(ish) process. I wrote up the process I used in another post titled Xenserver has no Local Storage.

So I expected that the other server would be the same issue, but it was different.

This server is a blade server, and while the 1TB drive is the same, the installer for some reason created the remaining space as an additional partition, /dev/cciss/c0d0p3, but with no UUID created.

Using vgs, confirm that the only volume group is the 4GB Dom(0) one:

# vgs
VG                                                 #PV #LV #SN Attr   VSize VFree
VG_XenStorage-50423669-52dc-b116-0aae-6cc1545a3013   1   1   0 wz--n- 3.99G 3.98G

Step 1. Create another volume group.

# vgcreate VG-LocalDisk /dev/cciss/c0d0p3
No physical volume label read from /dev/cciss/c0d0p3
Writing physical volume data to disk "/dev/cciss/c0d0p3"
Physical volume "/dev/cciss/c0d0p3" successfully created
Volume group "VG-LocalDisk" successfully created
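To confirm the new volume group before creating the SR, vgs can be pointed straight at it:

# vgs VG-LocalDisk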

Step 2. Create the storage repository (SR)

# xe sr-create content-type="Local SR" host-uuid=bd73aed4-5583-4d3e-94b3-a271c2446d12 type=ext device-config:device=/dev/cciss/c0d0p3 shared=false name-label="Local Storage"

Note: I liked the shortcut of typing host-uuid= and pressing the Tab key to complete the current host uuid without a lookup / copy / paste!

Step 3. Done.

The drive should now be present in the XenCenter details for the server.
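If you prefer to double-check from the console as well, xe sr-list should show the new SR (the params list here is just a convenient subset):

# xe sr-list name-label="Local Storage" params=uuid,name-label,physical-size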

The only thing I noted was an apparent loss of around 14GB: the 931GB formatted drive, less roughly 8GB for the Xen Dom(0) partitions, should have netted me around 923GB, but it reports only 909GB. I looked at it and decided it was not worth pursuing.

I am also guessing that this may not be an efficient method of using / allocating the disk space, but as with most things I do, it was expedient.

References:

http://xmodulo.com/how-to-change-xenservers-local-storage.html

http://thinkvirt.com/?q=node/283


Xenserver has no Local Storage

I added some new disks to a couple of HP servers, a 1RU stand-alone and a blade server, but in the end the fresh Xenserver had no Local Storage. Installing Xenserver 6.5 seemed to complete as expected, except that when I looked at the new server there was no Local Storage.

A DVD drive and a USB Removable device were recognised correctly.

I am still not clear on why it did not complete the process of preparing the Local Storage, but the following process resolved it for me.

Postscript: I then went to the second blade server and found a different configuration partly completed during the install, resulting in the same issue but needing a different method to address it. See my other post Xenserver install without Local Storage.

In a nutshell, Xenserver Dom(0) was installed but did not configure the rest of the disk as a usable device.

Step 1. From the console identify the host uuid like this for the test server DS3001:

# xe host-list
uuid ( RO)                : 4c2e3091-502f-47b8-8f64-64e8feba806b
          name-label ( RW): DS3001
    name-description ( RW): Default install of XenServer

Step 2. Find the partition/drive configuration:

# cat /proc/partitions
major minor  #blocks  name

7        0      57216 loop0
104        0  292935982 cciss/c0d0
104        1    4193297 cciss/c0d0p1
104        2    4193297 cciss/c0d0p2
104        3  284546333 cciss/c0d0p3
104       16  976729816 cciss/c0d1         <<--- this is the one we want, the largest device (the new 1TB drive)
11        0    1048575 sr0
11        1     589594 sr1
8        0 1953514583 sda
8        1 1953513559 sda1
8       16 1953514582 sdb
8       17 1953512534 sdb1

and then get the stable device ID:

# ls -lah /dev/disk/by-id
total 0
drwxr-xr-x 2 root root 180 Jul  6 12:29 .
drwxr-xr-x 7 root root 140 Jul  6 12:29 ..
lrwxrwxrwx 1 root root  16 Jul  6 12:29 cciss-3600508b100103231372020202020000f -> ../../cciss/c0d0
lrwxrwxrwx 1 root root  16 Jul  6 12:29 cciss-3600508b1001032313720202020200010 -> ../../cciss/c0d1

Step 3. Fill in the host uuid and the device ID in the following command:

# xe sr-create content-type=user device-config:device=/dev/disk/by-id/<cciss-xxxxxxxxxxxxxxxxxxxxxxxxx> host-uuid=<host-uuid> name-label="Local Storage 2" shared=false type=lvm

Using the values above, the command to create the SR is:

# xe sr-create content-type=user device-config:device=/dev/disk/by-id/cciss-3600508b1001032313720202020200010 host-uuid=4c2e3091-502f-47b8-8f64-64e8feba806b name-label="Local Storage" shared=false type=lvm

Run that command on Dom(0) console and the new Local Storage will appear in XenCenter.
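To confirm from the console before switching to XenCenter, something like this should return the new SR (again, the params list is just a suggestion):

# xe sr-list name-label="Local Storage" params=uuid,physical-size,type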

References:

http://support.citrix.com/article/CTX121313


At least one other site is using the same HTTPS binding

“At least one other site is using the same HTTPS binding…” is a prompt that every Windows Server IIS administrator has come across at some point. It arises when trying to change or update an SSL certificate on an IIS server hosting multiple websites and potentially multiple certificates.

Multiple Sites Using Same IP and SSL

Multiple sites sharing an IP address rely on host-header recognition to route the in-bound connection. On port 80 (http) there is no issue.

However, on port 443 the IP address and port combination is also bound to a certificate, so changing one site’s certificate affects all the other sites on the same IP address and port. Hence the following alert (error) message is displayed.

[Image: IIS SSL Multiple Sites Alert]

Whether to accept or reject really depends on your server and which sites and certificates are actually in use. In my past experience, accepting can leave the other sites in an unstable state: missing a binding, missing a certificate, or bound to the wrong certificate.

Change SSL Certificate for Multiple Sites

Use the following steps to make the change manually at the command line, avoiding the above error message and addressing all sites that share the same IP address : port and certificate at the same time.

All the detailed information has been sanitised to dummy data; you will need to substitute the relevant values for your certificates and server.

First, examine the certificates in use by opening a command prompt. This is all read-only activity, so Run as Administrator is not required yet.

certutil -store My

This will display a list of certificates like the following. I selected the two I was looking for:

The old certificate, identified by its NotBefore date. You need the Cert Hash(sha1) value from each certificate:

================ Certificate 7 ================
Serial Number: 1234567890abcdef1234567890abcdef1234
Issuer: CN=AlphaSSL CA - G2, O=AlphaSSL
NotBefore: 01/01/2014 11:27 AM
NotAfter: 31/12/2016 11:27 AM
Subject: CN=*.yourdomain.tld, OU=Domain Control Validated
Non-root Certificate
Template:
Cert Hash(sha1): 12 34 56 78 90 ab cd ef 12 34 56 78 90 ab cd ef 12 34 56 78
Key Container = 12345a8277cd156abcd09d20dcba5c31_g3239vv5-8181-1234-b6ba-bbbb
78ccd34
Provider = Microsoft RSA SChannel Cryptographic Provider
Encryption test FAILED
CertUtil: -store command completed successfully.

and the new certificate, again identified by its NotBefore date:

================ Certificate 4 ================
Serial Number: 67890abcdef12341234567890abcdef12345
Issuer: CN=AlphaSSL CA - SHA256 - G2, O=GlobalSign nv-sa, C=BE
NotBefore: 01/01/2015 9:02 AM
NotAfter: 31/12/2016 11:27 AM
Subject: CN=*.yourdomain.tld, OU=Domain Control Validated
Non-root Certificate
Template:
Cert Hash(sha1): 78 90 ab cd ef 12 34 56 78 90 ab cd ef 12 34 56 78 12 34 56
Key Container = 1234abcd54d7161def4863d4d6b96633_f3239aa5-8080-1234-b6ba-abcd
78ccd34
Provider = Microsoft RSA SChannel Cryptographic Provider
Encryption test FAILED
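Tip: with a busy certificate store, the listing can be trimmed to just the lines that matter (the findstr strings here are only a suggestion):

certutil -store My | findstr /C:"Subject:" /C:"NotBefore:" /C:"Cert Hash"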

Next, identify the IP address in use and the port, assuming standard https on 443. You can check within IIS first to see which shared IP address is being used.

netsh http show sslcert

which will show all the SSL certificate bindings; or, if you know the IP address, be selective:

netsh http show sslcert ipport=223.27.11.71:443

This will show results like:

SSL Certificate bindings:
-------------------------

IP:port                 : 223.27.11.71:443
Certificate Hash        : 1234567890abcdef1234567890abcdef12345678
Application ID          : {34567812-3456-7890-abcd-ef123456789d}
Certificate Store Name  : MY
Verify Client Certificate Revocation    : Enabled
Verify Revocation Using Cached Client Certificate Only    : Disabled
Usage Check    : Enabled
Revocation Freshness Time : 0
URL Retrieval Timeout   : 0
Ctl Identifier          : (null)
Ctl Store Name          : (null)
DS Mapper Usage    : Disabled
Negotiate Client Certificate    : Disabled

The Application ID is what is needed from the above, but check that the certificate hash shown is the old one, confirming this is the binding to replace.

Now collect all the relevant information from the results:

Old certificate hash (with spaces removed)

1234567890abcdef1234567890abcdef12345678

New certificate hash (with spaces removed)

7890abcdef1234567890abcdef12345678123456

and the AppID

{34567812-3456-7890-abcd-ef123456789d}

The following two steps need a new elevated command window, opened with ‘Run as Administrator’.

Delete the old binding:

netsh http delete sslcert ipport=223.27.11.71:443

Then add the new binding using the new hash and the appid:

netsh http add sslcert ipport=223.27.11.71:443 certhash=7890abcdef1234567890abcdef12345678123456 appid={34567812-3456-7890-abcd-ef123456789d}

which should result in

SSL Certificate successfully added

And finally, to check that the new certificate has been applied:

netsh http show sslcert | findstr /R "7890abcdef1234567890abcdef12345678123456"

or, to check that the old certificate hash is not still in use on another ipaddress:port binding, run the same command with the old certificate hash.
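If you do this regularly, the whole swap can be put in a small batch file run from the elevated prompt. This is a sketch using the dummy values above; substitute your own ipport, hash and appid:

rem Swap the SSL binding for all sites sharing this ip:port (sketch, dummy values)
set IPPORT=223.27.11.71:443
set NEWHASH=7890abcdef1234567890abcdef12345678123456
set APPID={34567812-3456-7890-abcd-ef123456789d}
rem Remove the old binding, then bind the new certificate with the same app id
netsh http delete sslcert ipport=%IPPORT%
netsh http add sslcert ipport=%IPPORT% certhash=%NEWHASH% appid=%APPID%
rem Confirm the new binding
netsh http show sslcert ipport=%IPPORT%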


Reference: http://serverfault.com/questions/610841/replace-wildcard-certificate-on-multiple-sites-at-once-using-command-line-on-i