Friday, September 12, 2014

When your VM gets stuck in suspended mode in Red Hat KVM...

I needed to redo my network interfaces in KVM - add more bridged Ethernet connections for the VMs to use.  I totally forgot that I had a guest VM running as I began my reboot.  "Suspending SSB2" it says, as I panic, suddenly realizing what I had done.  It's OK - I've done this before, I quickly remember.  It does a great job of automatically suspending and resuming UNLESS YOU JUST HAPPEN TO CHOOSE THAT REBOOT TO SCREW UP THE NETWORK CONFIGURATION.

So, it reboots and networking is goofed.  I quickly see what I did, fix it, and reboot again.  This time it's OK, but the SSB2 VM does not start.  I try to start it manually and get the message:

libvirtError: error creating macvtap type of interface: Device or resource busy

Whatever that means.  (It actually means libvirt tried to resume the VM but couldn't reach the iSCSI network, so it left the VM in limbo.)  I google and google and find a lot of folks in the same boat.  They came up with some very iffy and convoluted solutions involving editing multiple XML files, etc.  I follow the threads to the bottom of each.  None of them seem quite right.

New to KVM, I try cloning the non-startable SSB2 and succeed.  It boots and works (it requires a lot of network changes, but oh well).

Still not satisfied, I see mention of "/var/lib/libvirt/qemu/save/" (that's the location in my case).  Again, some very complicated procedures involving editing multiple XML files, etc., with no guaranteed results from the author.

Since I have a working clone, I figure, "What the heck!  I'll just delete "/var/lib/libvirt/qemu/save/" and try starting."  It worked!  - Except for one snag - I already had an exact doppelganger running.  It locked up my virt-manager and the server hard.  After rebooting, all was well.
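In hindsight, virsh appears to have a supported way to discard a stale saved-state image rather than deleting files by hand.  A sketch of what I'd try next time, assuming the domain is named SSB2:

```shell
# Check the domain's state and whether a managed save image exists
virsh list --all
virsh dominfo SSB2

# Discard the saved state so the domain cold-boots instead of resuming
virsh managedsave-remove SSB2

# Start the domain fresh
virsh start SSB2
```

Same caveat as above: tested on nothing, use at your own risk.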

This worked for me.  On a TEST system.  Use at your own risk.

Thursday, June 5, 2014

Fun with Windows Server Core 2012 R2

As a Unix/Linux admin, I find administering Windows servers boring and tedious.  Until now.  With Server Core 2012, you get PowerShell pre-installed.  Some of the windowish apps still launch little windows (e.g. Xen Tools), but for the most part I find I can deploy servers much more quickly simply by using SCONFIG.  To me, this is much quicker than hovering a mouse all over trying to find where they've hidden the thing you need now (it seems to change in every version).

This isn't Windows Server Core, but you get the idea.  The menus are pretty self-explanatory.  I find configuring a system with this much quicker!  All I needed was a command-line disk utility (which has been around a long time, I know).  Brushing up on the commands, I was able to quickly deploy a new disk.  I will post the command-line version of iSCSI when that comes up (soon, I'm sure).

Again, all pretty straightforward.  This looks a little more complicated than the GUI, but I'll bet if I raced someone, they would not be able to find the utility, right-click, etc., as quickly.

Although this isn't showing it off much, I am really loving the power of PowerShell.  I like how all of my Unix commands like ls and cat are already aliased (I used to have to make an ls.bat and cat.bat, etc.).  With Windows Server Core 2012 R2's command-line utilities and powerful shell, I might actually start to enjoy this OS again.  It's becoming more Unix-like every day - and that's a good thing (for me anyway).

Friday, May 16, 2014

NetBackup Ports

Note to self: After installing the NetBackup client, don't forget to open ports 1556 and 13724 in the firewall.
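On RHEL/CentOS 6 that means iptables rules, something along these lines (insert position relative to any existing REJECT rule is up to your chain layout):

```shell
# Allow NetBackup PBX (1556) and the legacy bpcd/vnetd port (13724) inbound
iptables -I INPUT -p tcp --dport 1556 -j ACCEPT
iptables -I INPUT -p tcp --dport 13724 -j ACCEPT

# Persist the rules across reboots (RHEL/CentOS 6 style)
service iptables save
```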

Simple Things

Having a Red Hat 6 install with no GUI, and little else, made installing NetBackup clients from CD-ROM rather problematic.  There was no entry for the CD-ROM drive in /etc/fstab, so "mount /dev/cdrom" wasn't going to work.  Running "fdisk -l" did not show a CD-ROM, and scanning and grepping dmesg revealed no clues.  Then I found this command:

# wodim --devices

That worked (after running "yum install wodim").  Handy command!
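With the device name in hand, mounting is simple.  A sketch, assuming the drive wodim reports is /dev/sr0 (substitute whatever it shows on your box):

```shell
# Mount the optical drive directly by device node
mkdir -p /mnt/cdrom
mount -t iso9660 -o ro /dev/sr0 /mnt/cdrom

# ...install the client from /mnt/cdrom, then clean up
umount /mnt/cdrom
```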

Thursday, April 10, 2014

SELinux and CentOS 6 with Special Guest: BackupPC

I was trying to tighten things back up on the BackupPC server after getting it running.  SELinux is a pain - but I like to have it running on all systems.  I had two BackupPC installs - one on a CentOS 5 server and one on a CentOS 6 server.  You would think the latter would be the easier - but not so!

For the most part, I just used the blog article BackupPC on CentOS 5 (selinux fix), but I had a few issues between the two servers, so I'm documenting them.


CentOS 5 didn't have the semodule command.  So...

# yum install selinux*

And then create a source policy module...

# grep httpd /var/log/audit/audit.log | audit2allow -m backuppc > backuppc.te

And then build the policy module...

# grep httpd /var/log/audit/audit.log | audit2allow -M backuppc

And finally, install the module...

# semodule -i backuppc.pp

After that, I turned on enforcing mode at the command line and edited /etc/selinux/config to default to enforcing.

# setenforce 1
# sestatus 
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   enforcing
Mode from config file:          enforcing
Policy version:                 21
Policy from config file:        targeted
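For reference, the persistent setting lives in /etc/selinux/config (setenforce only changes the running mode); the relevant lines look roughly like this:

```shell
# /etc/selinux/config - read at boot; setenforce does not touch this file
SELINUX=enforcing        # enforcing | permissive | disabled
SELINUXTYPE=targeted     # targeted | mls
```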

CentOS 6

CentOS 6 also needed to have all of the SELinux tools installed (I think).  However, when I tried the exact same steps as above, the semodule command gave an error:

     Tried to link in a non-MLS module with an MLS base

After some searching, I found that I needed to run system-config-selinux, which is a GUI (no system-config-selinux-tui that I could find).

# system-config-selinux

Here, I was expecting to see MLS instead of targeted.  Not sure why, but it was already toggled to the correct setting.  (So why did it think it was MLS?)  So, I checked the box to "Relabel on next reboot" and rebooted.  I was a little afraid of this because it said it could take a long time on a large filesystem, and this one had already used about 23% of 3TB.  It was probably done in well under 20 minutes (by the time I tried it again), and it worked!
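If you'd rather skip the GUI, the relabel-on-reboot flag can (as far as I know) be set from the command line too:

```shell
# Queue a full filesystem relabel for the next boot
touch /.autorelabel
reboot

# Or, equivalently, via policycoreutils:
fixfiles onboot
```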


Tuesday, February 11, 2014

iSCSI LVM Red Hat/CentOS/Oracle Linux

Today I learned a couple of important things about LVM on iSCSI.  First, a silly one - run pvcreate against a slice (or partition - sorry, my Sun Solaris is showing) - NOT against a disk.  That is, /dev/sda1, NOT /dev/sda.  I coulda sworn you could use the whole disk and LVM would take care of the details.  Again, probably thinking of ZFS and Sun Solaris.
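So the working sequence looks something like this (device and volume names here are just examples - substitute your own iSCSI disk):

```shell
# Partition the iSCSI disk first, then build LVM on the partition
fdisk /dev/sda                           # create /dev/sda1, type 8e (Linux LVM)
pvcreate /dev/sda1                       # against the partition, NOT /dev/sda
vgcreate vg_oracle /dev/sda1
lvcreate -n lv_u01 -l 100%FREE vg_oracle
mkfs.ext4 /dev/vg_oracle/lv_u01
```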

The other really important thing is that you MUST use _netdev in place of defaults in /etc/fstab.  For example:

   /dev/mapper/vg_oracle-lv_u01 /u01 ext4 _netdev  0 0

This is a serious gotcha!  If you don't do this, the device disappears from /dev/mapper.  Pretty unnerving!  

The other cool thing I picked up was that, if you lose the /dev/mapper device (say, after tinkering with iscsi restarts a bit), you can get it back simply by issuing the command "vgchange -ay".  That was a neat trick and prompted this blog post.
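The recovery sequence, roughly (assuming the iSCSI session itself is healthy or can be re-established):

```shell
# Re-login to the iSCSI targets if the session dropped
iscsiadm -m node --loginall=all

# Re-activate all logical volumes; the /dev/mapper entries reappear
vgchange -ay

# Remount
mount /u01
```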

That is all.

Friday, April 26, 2013

Slackware 14 under Xen

I have tried (halfway) to get Slackware to run under Xen (which I run on 32-bit CentOS 5.x).  It never seems to have a working network.  This time, I took a (very) little amount of time to fix it.  Googling turned up only articles on running Xen on Slackware; I couldn't find anything on this problem.

I love to load each new distro that comes out, but I really enjoy just using Slackware (SLS was my first distro).  So, when Slackware would repeatedly come up without a network interface, I was disappointed and a bit surprised.

If you're having this issue, here's one possible fix: use a virtual Ethernet device and do NOT use the default hypervisor network interface.  Instead, use "ne2k_pci".  I intend to try to get the shared physical interface to work with a real outside address, and even test the other options under the virtual Ethernet.  But this solved my problem.  If anyone else tries these, I'd love to hear the results.
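In the domU config file, that means specifying the NIC model on the vif line.  A sketch (the file name and bridge name here are assumptions - use whatever your setup calls them):

```shell
# /etc/xen/slackware14.cfg - fragment (hypothetical file name)
vif = [ 'type=ioemu, model=ne2k_pci, bridge=xenbr0' ]
```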