It was a simple process. Back up three virtual servers, reload the host operating system, and restore the virtual servers. No problem, right? Yeah. For two of the virtual servers it was no problem at all. The third one was running a MySQL database, and MySQL doesn’t play well with VSS technology. Even so, it shouldn’t have been a problem, because the MySQL database had also been backed up from within MySQL itself.
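For context, that second backup is a logical dump rather than a VSS snapshot. The post doesn’t show the exact command used, but a minimal sketch looks like this (the user name and output file name are hypothetical):

```shell
# Logical backup of all databases to a plain-SQL dump file.
# --single-transaction takes a consistent snapshot of InnoDB tables
# without locking them for the duration of the dump.
mysqldump --user=backupuser --password \
          --all-databases --single-transaction \
          --routines --events > all-databases.sql
```

Because the dump is plain SQL, it can be replayed onto a fresh MySQL install, which is exactly the safety net being relied on here.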
It was a problem.
The virtual server was restored, and MySQL would not start. The next step was to clean up the database (in other words, remove it) so that we could at least get MySQL to start. Done. The next step was to restore the MySQL database from the dumps. Done. Now the application/service should start up and everything should be back to normal. Right?
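Restoring from the dumps amounts to replaying the SQL against the rebuilt server. A sketch, again with hypothetical file and user names:

```shell
# Replay the logical dump into the freshly started MySQL instance.
mysql --user=root --password < all-databases.sql

# If the dump included the mysql system database, reload the
# grant tables so restored accounts and privileges take effect.
mysqladmin --user=root --password flush-privileges
```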
It’s never easy.
Error after error, and it was finally time to call it a night at 4:00 am. The next day was a new day and time to start completely over. Sure, I could have gotten in touch with my vendor for support on Monday, and maybe, just maybe, by the next Monday they might call me back, only for me to miss the call, or for them to suggest something unrelated, and then have to wait another week after that. After all, there has been an ongoing problem with it for over three months now that has yet to be resolved.
Instead, I decided to start from scratch again. Hopefully this time around the unresolved issue doesn’t pop up. If I responded to my clients’ issues the way my vendors respond to mine, I’d be somewhere protesting for $15 an hour (and even at today’s minimum wage, that would still be a raise).
At this point, I have no idea how long this is going to take. I imagine several hours just installing all the updates on a seven-year-old operating system. I suspect some, if not all, of my issues may have been related to using Server 2012 R2, so this time around I’m sticking with Server 2008 R2, which is what the vendor uses in its demonstrations.