Saturday, August 16, 2008
Modifying NIS+ entries
First, know the format of the tables you're dealing with:
# niscat -o ethers.org_dir
Object Name : "ethers"
Directory : "org_dir.example.com."
Owner : "server1.example.com."
Group : "admin.example.com."
Access Rights : r---rmcdrmcdr---
Time to Live : 12:0:0
Creation Time : Mon Sep 9 18:09:55 1996
Mod. Time : Fri Sep 10 17:16:44 1999
Object Type : TABLE
Table Type : ethers_tbl
Number of Columns : 3
Character Separator :
Search Path :
Columns :
[0] Name : addr
Attributes : (SEARCHABLE, TEXTUAL DATA, CASE INSENSITIVE)
Access Rights : ----------------
[1] Name : name
Attributes : (SEARCHABLE, TEXTUAL DATA, CASE INSENSITIVE)
Access Rights : ----------------
[2] Name : comment
Attributes : (TEXTUAL DATA)
Access Rights : ----------------
#
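The column names listed above (addr, name and comment) are what you use to build search criteria and indexed names in the commands that follow. For instance, either of these should pull back the same entry (a hedged sketch; quoting may vary with your shell):
# nismatch addr=8:0:20:a5:e:f ethers.org_dir
# nismatch name=client2 ethers.org_dir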
Find the entry you wish to modify:
# niscat ethers.org_dir | grep client2
8:0:20:a5:e:f client2
#
or
# nisgrep name=client2 ethers.org_dir
8:0:20:a5:e:f client2
#
or
# nismatch name=client2 ethers.org_dir
8:0:20:a5:e:f client2
#
Now modify the table entry:
# nistbladm -m name=client1 '[addr=8:0:20:a5:e:f]',ethers.org_dir
#
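nistbladm can also add and remove entries in the same table. A hedged sketch, using made-up values for a hypothetical client3 (check the nistbladm man page on your release for the exact options):
# nistbladm -a addr=8:0:20:11:22:33 name=client3 comment="new build" ethers.org_dir
# nistbladm -r '[name=client3]',ethers.org_dir
The -a option adds a new entry built from the column=value pairs, and -r removes the entries matching the indexed name.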
Force an update:
# /usr/lib/nis/nisping org_dir
Pinging replicas serving "directory org_dir.example.com." :
Master server is "server1.example.com."
Last update occurred at Tue Aug 12 10:51:30 2008
Replica server is "server2.example.com."
Last Update seen was Tue Aug 12 10:47:40 2008
Pinging ... "server2.example.com."
#
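The ping above nudges the replicas to pull the change; if you also want the accumulated changes checkpointed from the transaction log into the table files on the servers, nisping -C should do it (a hedged aside; probably best done at a quiet time on busy servers):
# /usr/lib/nis/nisping -C org_dir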
Check that the update worked:
# niscat ethers.org_dir | grep client1
8:0:20:a5:e:f client1
#
Thursday, August 14, 2008
Software Patents
I've recently come across a software patent applied for and granted to a former work-colleague. Not from my current company, I hasten to add. No names. No pack drill!
I read the patent application. And I've thought about it for a while.
Was it obvious? Completely!
Are there oodles of prior art? I'm sure there are!
Is it enforceable? Given the previous two answers, almost certainly not!
Are software patents a complete waste of everyone's time? I think they probably ought to be. Unfortunately, I suspect they will still be around for some time.
Error adding an ESX host to Virtual Center Server
This was our first VMware server in the DMZ. I was expecting some problems because of the requirement to punch some holes in the firewall. The Server Configuration Guide is an excellent source of information on how to manage an ESX server through a firewall. It details the ports that need to be opened, the protocols those ports use, and the reasons why they need to be opened.
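If memory serves, the key ports for adding a host to VirtualCenter are TCP 443 and TCP/UDP 902, but treat that as my assumption and confirm it against the guide. On the ESX host itself, the service console firewall can be inspected with esxcfg-firewall (a hedged sketch on ESX 3.5):
# esxcfg-firewall -q
which dumps the current firewall rules and the named services that are enabled.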
In the VI client attached to the VirtualCenter Server, I selected the DataCenter and selected Add Host from the menu. After resolving some firewall problems, of which more in a later blog, I entered the server name, admin ID and password, checked the returned information and clicked through the next three pages.
Only to receive the pop-up error message "Failed to install the VirtualCenter Agent Service"!
I googled online for other people's experiences. Most people encounter this sort of problem after an upgrade. In those instances, the problems occur when the VirtualCenter agent service on the ESX host hasn't been upgraded for some reason.
There were some suggestions that this could occur if /tmp/vmware-root doesn't exist. It did on my server.
Others suggest that a simple restart of the mgmt-vmware service would resolve the problem. It didn't on my server.
Others again suggested restarting the vmware-vpxa service. My server had no such service. Aha!
rpm -qa | grep -i vpxa returned nothing.
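For anyone following the same trail, the checks and restarts mentioned above boil down to something like this on the ESX service console (a hedged sketch):
# service mgmt-vmware restart
# service vmware-vpxa restart
# rpm -qa | grep -i vpxa
The first restarts the host agent, the second restarts the VirtualCenter agent (assuming it exists), and the third shows whether the vpxa package is installed at all. In my case the last two had nothing to work with.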
A chap called Rene has a blog where he describes a manual upgrade process for the vpxa-vmware agent. Unfortunately, it is for an earlier version of VMware. The path referenced is slightly different on my VirtualCenter server. The version number of the file is distinctly different. The real scoop on manually upgrading/installing the vpxa-vmware service can be found in this thread from the VMware Communities website.
N.B. The correct path on my VirtualCenter server is C:\program files\vmware\infrastructure\virtualcenter server\upgrade. Check the bundleversion.xml file for the correct file to copy across to the server. For my VirtualCenter server v2.5 and ESX server v3.5 it was vpx-upgrade-esx-7-linux-64192.
So I sftp-ed the file to the server and ran the shell script. The service still wasn't installed! The rpm was still not installed! So I tried to install the rpm from the command line with rpm -ivh.
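For reference, the manual attempt amounted to roughly this on the ESX host (a hedged sketch; the bundle name is the one from bundleversion.xml above, I'm assuming it is run with sh, and the exact rpm it unpacks will vary, shown here as a hypothetical glob):
# sh ./vpx-upgrade-esx-7-linux-64192
# rpm -qa | grep -i vpxa
# rpm -ivh VMware-vpxa-*.rpm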
The /opt partition did not have sufficient space left! Which was odd, as it should have been 1 GB in size. I checked. It was 24 MB. And already half used! Oh dear!
I created a /optn directory - there was plenty of space left on the / partition; tar-ed the contents of /opt to /optn; umount-ed /opt; renamed /optn to /opt; and commented out the /opt entry from the /etc/fstab file. I ran the install script again. Success! Huzzah!
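For completeness, the juggling act was roughly this (a hedged reconstruction; double-check your own fstab and that the old mount point is empty before removing it):
# df -h /opt
# mkdir /optn
# (cd /opt && tar cf - .) | (cd /optn && tar xf -)
# umount /opt
# rmdir /opt
# mv /optn /opt
# vi /etc/fstab
The last step is just to comment out the /opt line so the undersized partition is not mounted back over the top at the next boot.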
Well! That's that!
Tuesday, August 5, 2008
Strange ESX Console Behaviour
A colleague built a new ESX server out in our DMZ and we were having some problems accessing it. The firewall was being just a tad too restrictive. Strike that. It was being completely restrictive.
Anyhow, I was using the Web interface, but the ESX login seemed to be frozen. Alt-F1 & Alt-F11 worked, but nothing else.
I walked into the server room and used the console directly attached to the KVM system in there. Same problem!
I attached a keyboard and monitor directly to the server!! A complete PITA. Same Problem!
My colleague was beginning to get worried that he'd built the server incorrectly.
I added a USB keyboard to the system. Same problem!
Whilst pondering what to do, I randomly turned the Scroll, Num and Function Lock keys off and hit Enter. Eureka!! It worked!!
After some experimenting, it turns out that the Scroll Lock was the problem. Which is a slight issue when double-
Back at my desk I googled for ESX and Scroll Lock and there were two interesting posts. They both came from VMware's own community pages, with the more interesting being spot on! Although the other perhaps provides some explanation?!
Well! That's that!
Monday, August 4, 2008
Ehwotay!
After a refreshing few days off on holiday, it's back to the grindstone.
Just about the first thing that happened after I had sat down at my desk was a phone call from a vendor we had spoken to just over a year ago. Tideway have an application dependency mapping product that looks like it might be quite useful in those situations where your development team has largely left and you're sitting looking at your infrastructure wondering what the heck all that smoke and mirrors is doing!
Tideway came in a year ago and spoke to us about their software then, but the requirement we thought we had wasn't really there on further inspection. Apparently their software has come on leaps and bounds and is even better and whizzier!
From the conversation, I think the biggest problem will be that it seems to overlap a great deal on systems that we already have: CMDB; performance monitoring (ish); etc. I’m not sure that it makes sense to consider linking them all, so it seems to me that if we were to use Tideway’s software correctly it means we’d have to consider giving up a number of pre-existing internal systems. And not using it properly would be a waste.
I guess a good question is: what is a CMDB? It seems to be one of those useful marketing terms used to imply a good thing, but which is nebulous, yet a desirable tool. Such a wide diversity of tools claim to be or to have CMDB capability that the question is very valid. You can always have a look at the Wikipedia page on CMDB, although other than placing the blame for the term fairly and squarely at the feet of the UK's OGC, the most remarkable aspect of that page is the reference to the excellent, cynical but completely realistic IT Skeptic site. I agree with a lot that this site outlines. The writer obviously bears a number of scars from having implemented CMDBs and other aspects of ITIL.
Having attained ITIL Foundation certification a couple of years ago, I have queried in private how practical some aspects are. It is so reassuring to discover that others are doing so too.
So, having said all that, I claimed my company had a CMDB.
What do we actually have? In a lot of ways it is very similar to a number of the products out there. Essentially it is a home-brew system built as a Lotus Notes DB, in which every server should be listed, including:
General Info:
- Name
- Region
- Country
- Site
- Manufacturer/Model
- Server Serial
- OS
- Database software (if appropriate)
- Application
- PO Number
- PO Cost
- Purchase Date
- Asset Management record Number
- Supplier Details
- Asset Number
- OS Licence
- Application Licence
- Manufacturer Type
- System Number
- BootProm rev
- IP Address
- RAM
- Server Type
- Type
- Make
- Model
- Model No
- S/N
- Disks, i.e. number & size
- Total Disk size
- Disk Partitions
- Disk Partition size
- Config i.e. RAID 0, 1, 0+1, 1+0, 4, 5, 5E, 5EE, etc
- Total Storage (GB) after Config
- Drive No
- Model
- Drive Capacity
- Drive Location
- Drive Mount Point
- Make
- Model
- S/N
- Make
- Type
- Patches
- Identifier
- Make
- Model
- MAC Address
- IP Address
- IPX Addresses
- Make
- Name
- Version
- Type
- Provider
- Start
- End
- Agreement
Any Additional Information
- a free text area
- space for attachments
- Primary Contact Name
- Escalation Contact
- Support Queue
- Name
- Usage
- Password
Author
Date
Status: Active/Retired
- if retired, when
Of course, the majority of these fields are rarely documented!
What do we not have:
- sufficient application configuration information
- cross server relationship information
- cross application relationship information
- auto-discovery
- ...
So it goes!
Friday, August 1, 2008
Integrating Wikis
I was quite nervous about this, about integrating the Wikis.
Unnecessarily nervous, though.
I was able to use Special Pages:Export Pages to export all the pages listed on the Special Pages:All Pages page, although you have to type them all in.
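As an aside, if you have shell access to the wiki host, I believe MediaWiki's maintenance scripts can do the whole export in one pass without typing the page names in; a hedged sketch, assuming a standard install under a hypothetical /var/www/testwiki:
# php /var/www/testwiki/maintenance/dumpBackup.php --current > allpages.xml
The --current option should dump just the latest revision of every page; --full dumps the whole history.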
However, some of those pages reference uploaded files. Jpegs, actually. There doesn't appear to be a Special Page to export all uploaded files. I had to log on to the server and copy them onto another server, from whence I was able to ftp them to my desktop. From my desktop, I logged into the Company Wiki and uploaded the files one by one. Not fun, but the file upload mechanism is surprisingly well written. It remembers your previous directory - at least whilst you are logged in! If you mistakenly try to upload a file twice, an error message is displayed with the text "A file with this name exists already, please check Image:Example24.jpg if you are not sure if you want to change it." And you have the choice of replacing it or choosing a new file. Some thought has gone into this! Much kudos is deserved by the developers. Regrettably, you do not often see that much thought.
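For the record, getting the image files off the old server was nothing clever, and if you have shell access to the destination wiki as well, I believe maintenance/importImages.php can bulk-load a directory of files instead of uploading them one by one. A hedged sketch, with all paths being assumptions about a standard MediaWiki layout (and note that older versions of importImages.php may not recurse into the hashed images/ subdirectories):
# tar cf /tmp/wiki-images.tar -C /var/www/testwiki/images .
# mkdir /tmp/wiki-images && tar xf /tmp/wiki-images.tar -C /tmp/wiki-images
# php /var/www/companywiki/maintenance/importImages.php /tmp/wiki-images
The first command runs on the old wiki server; the other two run on the box hosting the Company Wiki, after ftp-ing the tarball across.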
To import the pages I used the Special Pages:Import Pages page and uploaded the exported xml file from the "test" Wiki. It just loads the pages.
I checked out the uploaded pages. There were a couple of instances where uploaded images had slightly different names as the references are case sensitive. It was trivial to change those and then everything looked perfect.
Despite the ease with which this went, it could have been easier still:
- the export pages page could list all the pages with tick-box selection boxes
- the export pages function could also export referenced images
- the import pages function could also upload the images with the correct case