[ngw] GroupWise, VMWare ESXi, ZCM, etc...
Barry Eriksen
barry_eriksen at ncsu.edu
Mon Apr 19 12:43:54 UTC 2010
Thanks for the soapbox...always good to hear contrarian viewpoints against the (new) conventional wisdom.
Hope your recovery is speedy,
- Barry
--
Barry Eriksen
Office of Information Technology (OIT)
NC State University
516 BrickHaven, Suite 100
Campus Box 7208
Raleigh, NC 27695
Ph: 919-515-0126
>>>
From: Randy Grein <randygrein at comcast.net>
To: NGWList <ngw at ngwlist.com>
Date: 4/19/2010 1:51 AM
Subject: Re: [ngw] GroupWise, VMWare ESXi, ZCM, etc...
Well, there is a potential downside - complexity. Yes, I realize this
is heresy, but I speak heresy fluently. (grin) I'm also a bit fed up,
and with a few days off on medical leave I feel like spouting off a
bit. Apologies in advance if I offend sensibilities.
I'm not saying that virtualization is all bad - far from it. It does,
however, add another layer we need to understand before we can
engineer a solution, something we should all aspire to do. Once you
know the
potential pitfalls you can avoid them and leverage the power
virtualization offers. My suggestions are as follows:
1. It's just another technology, so treat it as such. Too often we get
into a 'gee whiz' or 'everyone else is doing it...' mentality. Those
are poor reasons to do anything, worthy only of pointy-haired bosses.
Have a damned good reason you need to do it before giving up physical
control.
2. Understand the dependencies on underlying architectures. This is a
big one that hardly anyone addresses. Hey, guess what - ESX has
already demonstrated security vulnerabilities, so no matter what you
put it on you MUST manage patches. One more layer to watch. Are you
monitoring network traffic on the back end for problems and overload?
No? Do you even HAVE a separate network for backend traffic? Or are
you foolishly relying on a collapsed backbone, so that even a
momentary loss of your core router results in massive corruption of
databases and VMs?
3. iSCSI is the most misused technology of the decade; see point #2
above. If you're going to use it, make SURE you know what the
dependencies are, that VMs will connect to their resources every
time, and that if they don't connect on boot they keep trying (a
rough sketch of that kind of boot-time check appears after this
list). "Gee, I guess the iSCSI volume didn't mount, sorry!" is the
lamest, most common excuse I hear from overconfident virtualization
proponents.
4. Test, test, test. Test failure modes. Test recovery. Test your
assumptions, document recovery, and make DAMNED sure someone else can
do it too. It requires some thinking, mapping out EVERY possible
failure mode - then have someone else go through it, because it's
guaranteed you will miss something, no matter how brilliant you are.
Then, when it's all up and running, test it regularly to make sure
you don't forget how.
5. Fault tolerant isn't. Every fault tolerant technology I've seen
used I have also seen fail. Mirrored disks, mirrored memory and
processors, clusters, virtualization - none of them are perfect.
6. Instrumentation becomes even more important. Many of our systems
are more sluggish than they ever were as physical systems - not
unusably so, just slower. Why? We don't really know, but I suspect
it's the leverage - the constant context switching needed to handle
so many hosted systems. It's not a showstopper, just annoying when
managing servers: it takes longer for admins to do anything. A rough
sketch of the kind of measurement I have in mind follows this list.
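To make point #6 a little more concrete, here is a minimal sketch of
the sort of measurement I mean - time a batch of small synced writes
and report latency percentiles, so a guest can be compared against
its old physical baseline. It assumes Python is on the box, and the
sample count, block size, and output format are made-up values for
illustration only.

#!/usr/bin/env python3
# Hypothetical micro-benchmark: time small synced writes and report
# latency percentiles. Sample count and block size are illustrative.
import os
import statistics
import tempfile
import time

SAMPLES = 200          # number of timed write+fsync operations
PAYLOAD = b"x" * 4096  # one 4 KiB block per operation

def timed_synced_writes(samples=SAMPLES):
    """Return per-operation latencies in milliseconds."""
    latencies = []
    with tempfile.NamedTemporaryFile(dir=".") as f:
        for _ in range(samples):
            start = time.perf_counter()
            f.write(PAYLOAD)
            f.flush()
            os.fsync(f.fileno())
            latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

if __name__ == "__main__":
    lat = sorted(timed_synced_writes())
    print("samples=%d  median=%.2f ms  p95=%.2f ms  max=%.2f ms"
          % (len(lat), statistics.median(lat),
             lat[int(len(lat) * 0.95)], lat[-1]))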
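And for point #3, a minimal sketch of the "keep trying" idea,
assuming a placeholder portal address and arbitrary retry limits -
the point is simply to keep retrying and to fail loudly instead of
booting as if the volume were there:

#!/usr/bin/env python3
# Hypothetical boot-time check: don't start services that depend on
# an iSCSI volume until the portal answers; give up loudly after
# MAX_ATTEMPTS. The address and limits below are placeholders.
import socket
import sys
import time

ISCSI_PORTAL = ("192.0.2.10", 3260)  # placeholder address, standard iSCSI port
RETRY_DELAY = 10                     # seconds between attempts
MAX_ATTEMPTS = 30                    # roughly five minutes before giving up

def portal_reachable(addr, timeout=5):
    """Return True if a TCP connection to the iSCSI portal succeeds."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def wait_for_portal():
    for attempt in range(1, MAX_ATTEMPTS + 1):
        if portal_reachable(ISCSI_PORTAL):
            print("portal %s:%d reachable after %d attempt(s)"
                  % (ISCSI_PORTAL[0], ISCSI_PORTAL[1], attempt))
            return 0
        print("attempt %d/%d: portal not reachable, retrying in %ds"
              % (attempt, MAX_ATTEMPTS, RETRY_DELAY), file=sys.stderr)
        time.sleep(RETRY_DELAY)
    print("giving up: portal never answered - do NOT pretend the"
          " volume mounted", file=sys.stderr)
    return 1

if __name__ == "__main__":
    sys.exit(wait_for_portal())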
We have had far more problems due to virtualization complexity in the
past 2 years than we ever would have had with a physical system, but
that was because we violated every one of the six rules above. We
virtualized the network, storage, and servers all at the same time.
But I don't expect any rational, intelligent systems expert to do
anything this stupid. Plan and implement your backend communications
first and separately, do peer review of the design and
implementation, test at each stage, and you'll have relatively few
problems. Above all, start with a cost analysis to see whether the
benefits outweigh the costs compared with a purely physical
implementation.
Randy Grein
On Apr 16, 2010, at 12:34 PM, Kevin Parris wrote:
> There is at least one advantage to putting *everything* in a VM
> environment - mobility. If you've got it hosted on a 1.0GHz box and
> it can't keep up, you don't have to re-install or migrate or
> whatever; you just get a host ready on a 2GHz box, copy the guest
> over, tweak the host config, and start it up. Whammo, you've got
> new horsepower deployed.
>
> I can't think of any downsides to virtualizing everything these
> days. Some will say they have to get every last ounce of
> performance from something, but I call those the rare exception
> cases. Or maybe some specialty applications claim they are
> unsupported in a VM environment, but that just means the developers
> haven't caught up to the modern world (or the app has some really
> exotic requirements). With suitable equipment now, the VMware
> overhead isn't really very much to worry about.
>
> And recovery, just thought of that - if you have a system running on
> bare metal, and that box breaks, getting that system back into
> operation without fixing that particular box can be tricky. But if
> the bare metal host for your VM breaks, you may be able to get the
> guests back in operation on anything you have handy that can run a
> VM host environment.
>
> No, I don't work for VMware or receive any consideration from them.
> I just love the virtualization way of doing things.
>
>>>> "Kenneth Etter" <kle at msktd.com> 04/16/10 3:11 PM >>>
> We are a small business - 65 employees. I currently have one
> NetWare 6.5 server running GW8 (MTA, POA, and GWIA) and ZfD 7. I'm
> getting ready to purchase some new server hardware. My original
> thought was to get three new servers. Move the GW8 stuff onto the
> first, running SLES or OES. Set up ZCM 10 on the second (I assume
> SLES). And set up ESXi on a third to host some VMs so I could
> replace a few auxiliary PCs doing individual tasks in my server
> room. Due to lack of hardware, I haven't had a chance to play with
> ESXi yet. I'm also fairly new to Linux, although I do have GMS and
> Reload running on SLES. Does this sound reasonable? Or does anyone
> have a better suggestion? Thanks!
>
> Regards,
> Ken Etter
>
_______________________________________________
ngw mailing list
ngw at ngwlist.com
http://ngwlist.com/mailman/listinfo/ngw