[ngw] GMS + SLES + HyperV

Marvin Huffaker mhuffaker at redjuju.com
Sun Apr 7 05:23:17 UTC 2019


It’s not just the Btrfs factor: when you choose the default
partitioning of a SLES 12 install, you end up with at least 50% of your
drive assigned to the home partition. Since the DataSync data is stored
under /var/opt, all of that home-partition space goes to waste. So
generally I would say it’s best practice to partition manually and
control the entire configuration yourself. I usually do something like
this (a rough sketch of the parted commands follows the list):

/boot - 500 MB
/root - 2-4 GB
swap - 4-8 GB
/ - the rest, which means most of the space is available for DataSync.
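
Something like this with parted, as a rough sketch (assuming a single
100 GB disk at /dev/sda and an MBR label; adjust the sizes and device
to taste):

    parted -s /dev/sda mklabel msdos
    parted -s /dev/sda mkpart primary ext2 1MiB 501MiB            # /boot, ~500 MB
    parted -s /dev/sda mkpart primary xfs 501MiB 4501MiB          # /root, ~4 GB
    parted -s /dev/sda mkpart primary linux-swap 4501MiB 12501MiB # swap, ~8 GB
    parted -s /dev/sda mkpart primary xfs 12501MiB 100%           # /, the rest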

If you wanted to get more granular, you could create another partition
for /var/opt that would be used almost entirely by DataSync (on an MBR
disk this would require an extended partition, since a layout like the
one above already uses all four primaries). The value of that
configuration is that the OS is confined to one partition while the
DataSync data is isolated on another, so an out-of-space condition on
one partition doesn’t affect the other. If the DataSync or database
partition runs out of space, it can corrupt the DataSync database.
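
If you went that route, only the tail end of the sketch above changes:
the fourth primary becomes an extended partition holding / and /var/opt
as logicals (the device names in the fstab lines are just illustrative):

    parted -s /dev/sda mkpart extended 12501MiB 100%
    parted -s /dev/sda mkpart logical xfs 12502MiB 40001MiB   # /, OS only
    parted -s /dev/sda mkpart logical xfs 40002MiB 100%       # /var/opt, DataSync

    # matching /etc/fstab entries
    /dev/sda5   /          xfs   defaults   0 1
    /dev/sda6   /var/opt   xfs   defaults   0 2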

Also, someone at Novell a couple of years ago said XFS was their
preferred file system. I did some testing and have adopted it as my
default file system for everything. I don’t like ext4, mainly because
in theory it isn’t that much better than ext3, which is horrible for a
large GroupWise system. XFS overcomes most of the limitations of ext3,
and I’ve been very happy with it even on extremely large systems.
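
Formatting and mounting are one-liners, for what it's worth (a sketch
assuming the /var/opt partition above lands at /dev/sda6; mkfs.xfs is
in the xfsprogs package):

    mkfs.xfs -L datasync /dev/sda6    # label it so fstab could use LABEL=datasync
    mount -t xfs /dev/sda6 /var/opt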

Full disclosure: I’ve never set up either GMS or GroupWise on ext4, so
I don’t have any real-world production experience with how it performs.

Marvin 

Sent from my iPhone

> On Apr 4, 2019, at 12:57 PM, Jeanette Berg <jberg at digitalcsi.com> wrote:
> 
> The GMS 18 documentation does mention that EXT4 should be used over
> the default BTRFS.  See:
> 
> https://www.novell.com/documentation/groupwise18/gwmob18_guide_install/data/inst_req_server.html
> 
> On 4/4/2019 at 2:06 AM, "David Krotil" <David.Krotil at hilo.cz> wrote:
> BTRFS is an option for pure OS partitions, but why use it on a GMS
> data partition?  It makes no sense at all: there are so many file
> creates and deletes that you would need unlimited space for it.
> D.
> 
> 
> ------------------------------------------------------------------
> 
> The content of this e-mail and any attached files are confidential and
> may be legally privileged.  It is intended solely for the addressee.
> Access to this e-mail by anyone else is unauthorized.  If you are not
> the intended recipient, any disclosure, copying, distribution or any
> action taken or omitted to be taken in reliance on it is prohibited and
> may be unlawful.  In this case, please be so kind as to delete this
> e-mail and inform us.
> 
>>>> "Marvin Huffaker" <mhuffaker at redjuju.com> 04.04.2019 8:36 >>>
> The first time I accidentally used BTRFS, I kept getting out-of-space
> errors trying to run GMS even though there was plenty of space.  Micro
> Focus support scolded me, told me to rebuild it, and never look at
> BTRFS again. lol
> 
> Marvin
>>>> "Scott Malugin" <smalugin at dcbe.org> 4/3/2019 8:05 PM >>>
> Why is BTRFS not supported it is the default file system for SLES? I
> miss that on one of my GMS systems, so far it is running fine. What if
> any problem could I see out of it running on BTRFS?
> 
> Scott E. Malugin
> Dickson County Schools
> Network & Systems Admin
> Voice # 615-740-5902
> E-Mail : smalugin at dcbe.org 
> 
> 
> LEGAL CONFIDENTIAL:  The information in this e-mail and in any
> attachment may contain information which is legally privileged and is
> the property of the Dickson County Schools.  It is intended only for
> the attention and use of the named recipient.  If you are not the
> intended recipient, you are not authorized to retain, disclose, copy
> or distribute the message and/or any of its attachments.  If you
> received this e-mail in error, please notify the sender and delete
> this message.
> 
>>>> Pam Robello <pkrobello at gmail.com> 4/3/2019 7:00 PM >>>
> One last little tidbit from me: GMS is not supported running on a
> BTRFS file system.  We recommend EXT4 for SLES 12 and 15.
> 
> Pam
> 
> On Wed, Apr 3, 2019 at 4:33 PM Matt Weisberg <matt at weisberg.net> wrote:
> 
>> 
>> I'll just chime in here too and mention that I have another site that
>> was running multiple GMS servers under Hyper-V (both SLES 11 and 12)
>> for many years with no issues at all (I say was because they migrated
>> to Office 365 this past December).  This site had well over 50 SLES
>> VMs (several GW POs) running in 2 different large Hyper-V clusters, so
>> I can attest that SLES 11, 12 and 15 all run fine under Hyper-V as
>> well.
>> 
>> I'll echo Joe's comment to make sure the Hyper-V integration tools
>> are loaded (they're included in the SLES distro).  I also typically
>> use the kernel option elevator=noop for these VMs.
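>> 
>> A minimal sketch of making that option stick, assuming GRUB2 as on
>> SLES 12 (the paths below are the stock SUSE locations):
>> 
>>     # in /etc/default/grub, append elevator=noop to the existing
>>     # GRUB_CMDLINE_LINUX_DEFAULT options, e.g.:
>>     GRUB_CMDLINE_LINUX_DEFAULT="showopts elevator=noop"
>> 
>>     # then rebuild the config so it takes effect on the next boot
>>     grub2-mkconfig -o /boot/grub2/grub.cfg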
>> 
>> As for file systems, I've run ext2, ext3, ext4, btrfs, xfs, and NSS
>> all under Hyper-V without issue.
>> 
>> The SLES support folks may be of more help on this one though, so
>> that may be a route to take.
>> 
>> All that said, I still prefer VMware!  Hyper-V is a confusing mess to
>> me!
>> 
>> Matt
>> 
>> 
>> --
>> Matt Weisberg
>> Weisberg Consulting, Inc.
>> matt at weisberg.net 
>> www.weisberg.net 
>> ofc. 248.685.1970
>> cell 248.705.1950
>> fax 248.769.5963
>> 
>> 
>> On 4/3/19, 11:56 AM, "ngw-bounces+matt=weisberg.net at ngwlist.com on behalf of Joseph Marton" <ngw-bounces+matt=weisberg.net at ngwlist.com on behalf of jmmarton at gmail.com> wrote:
>> 
>>     Also make sure the Hyper-V integration tools are enabled for the
>>     SLES guest.
>> 
>>     https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/supported-suse-virtual-machines-on-hyper-v
>> 
>>     Joe
>> 
>>     On Wed, Apr 3, 2019 at 10:36 AM Bruce Perrin <Bruce.Perrin at lbb.texas.gov> wrote:
>> 
>>     > Marvin,
>>     >
>>     > Are they doing anything 'special' with the VM (replicating,
>>     > doing backups, etc.)?  I mean, we have had 3 GMS hosts up on
>>     > Hyper-V since GMS first came out with no issues that weren't
>>     > caused by me.
>>     >
>>     > What version of Windows Server are they running?  What version
>>     > of SLES?
>>     >
>>     > -bruce
>>     > >>> <pkrobello at gmail.com> 4/3/2019 10:21 AM >>>
>>     >
>>     > Hi Marvin,
>>     >
>>     > This does not appear to be a GMS issue; however, this is a
>>     > configuration that our test teams are not familiar with.  You
>>     > will likely need to work with the SLES team if this happens
>>     > again so they can determine what the problem is.
>>     >
>>     > Thanks,
>>     >
>>     > Pam
>>     >
>>     > >>> Marvin Huffaker <mhuffaker at redjuju.com> 4/2/2019 6:53 PM >>>
>>     > I have a customer that is a Windows shop running on Hyper-V.
>>     > Twice in the last 6 months, their GMS system has been corrupted
>>     > beyond repair, meaning on reboot it couldn't mount the file
>>     > system, and a manual fsck resulted in hundreds of thousands of
>>     > errors.  Needless to say, even after the repairs the system was
>>     > trashed and I had to rebuild it from scratch.
>>     >
>>     > Is there something about SLES on Hyper-V that makes it more
>>     > susceptible to corruption?  I have never experienced this kind
>>     > of issue running on VMware.
>>     >
>>     > Thanks.
>>     >
>>     > Marvin
>>     > _______________________________________________
>>     > ngw mailing list
>>     > ngw at ngwlist.com 
>>     > http://ngwlist.com/mailman/listinfo/ngw 
>>     >
>>     _______________________________________________
>>     ngw mailing list
>>     ngw at ngwlist.com 
>>     http://ngwlist.com/mailman/listinfo/ngw 
>> 
>> 
>> _______________________________________________
>> ngw mailing list
>> ngw at ngwlist.com 
>> http://ngwlist.com/mailman/listinfo/ngw 
>> 
> _______________________________________________
> ngw mailing list
> ngw at ngwlist.com 
> http://ngwlist.com/mailman/listinfo/ngw 



More information about the ngw mailing list