[ngw] (OT) GroupWise, VMWare ESXi, ZCM, etc...

Kenneth Etter kle at msktd.com
Fri Apr 16 20:10:06 UTC 2010


I wondered if that was the idea.

>>> "Patrick Farrell" <pfarrell at packereng.com> 4/16/2010 4:05 PM >>>
That's where you do a SAN :)

>>> "Kenneth Etter" <kle at msktd.com> 4/16/2010 2:42 PM >>>
Kevin,

Thanks for the reply.  If the bare metal host breaks...and everything is on the server hard drives...how do I get things back in operation?  Or are you assuming that the VMs are not on the bare metal host?

Regards,
Ken


>>> "Kevin Parris" <KPARRIS at ed.sc.gov> 4/16/2010 3:34 PM >>>
There is at least one advantage to putting *everything* in a VM environment - mobility.  If you've got it hosted on a 1.0GHz box and it can't keep up, you don't have to re-install or migrate or whatever, you just get a host ready on a 2GHz box, copy the guest over, tweak the host config and start it up.  Whammo, you've got new horsepower deployed.
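As a rough sketch of that "copy the guest over" step (assuming two standalone ESXi hosts with SSH enabled; the host name, datastore path, VM name "gw1", and VM IDs here are all hypothetical):

```shell
# Cold-migrate a powered-off guest between two standalone ESXi hosts.
# Names, paths, and IDs below are placeholders - substitute your own.

# 1. On the old host: list VMs to find the guest's ID, then power it off.
vim-cmd vmsvc/getallvms                 # prints VM IDs and .vmx paths
vim-cmd vmsvc/power.off 5               # 5 = the VM ID from the list above

# 2. Copy the VM's whole directory to the new host.
scp -r /vmfs/volumes/datastore1/gw1 \
    root@new-host:/vmfs/volumes/datastore1/

# 3. On the new host: register the copied guest, then start it.
vim-cmd solo/registervm /vmfs/volumes/datastore1/gw1/gw1.vmx
vim-cmd vmsvc/power.on 6                # use the ID registervm returned
```

On first power-on ESXi will typically ask whether the guest was moved or copied; answering "moved" keeps the same MAC/UUID, which is usually what you want for this scenario.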

I can't think of any downsides to virtualizing everything these days.  Some will say they have to get every last ounce of performance from something, but I call those the rare exception cases.  Or maybe some specialty applications claim they are unsupported in a VM environment, but that just means the developers haven't caught up to the modern world (or the app has some really exotic requirements).  With suitable equipment now, the VMware overhead isn't really very much to worry about.

And recovery, just thought of that - if you have a system running on bare metal, and that box breaks, getting that system back into operation without fixing that particular box can be tricky.  But if the bare metal host for your VM breaks, you may be able to get the guests back in operation on anything you have handy that can run a VM host environment.

No, I don't work for VMware or receive any consideration from them.  I just love the virtualization way of doing things.

>>> "Kenneth Etter" <kle at msktd.com> 04/16/10 3:11 PM >>>
We are a small business - 65 employees.  I currently have one NetWare 6.5 server running GW8 (MTA, POA, and GWIA) and ZfD 7.  I'm getting ready to purchase some new server hardware.  My original thought was to get three new servers.  Move the GW8 stuff onto the first running SLES or OES.  Set up ZCM 10 on the second (I assume SLES).  And set up ESXi on a third to host some VMs so I could replace a few auxiliary PCs doing individual tasks in my server room.  Due to lack of hardware, I haven't had a chance to play with ESXi yet.  I'm also fairly new to Linux, although I do have GMS and Reload running on SLES.  Does this sound reasonable?  Or does anyone have a better suggestion?  Thanks!

Regards,
Ken Etter

_______________________________________________
ngw mailing list
ngw at ngwlist.com 
http://ngwlist.com/mailman/listinfo/ngw

Visit our website: http://www.packereng.com 
