Re: [vserver] Poll: High (ish) availability - how are you doing it?

From: Gordan Bobic <gordan_at_bobich.net>
Date: Sun 01 Aug 2010 - 09:13:09 BST
Message-ID: <4C552C95.3080009@bobich.net>

Edward Capriolo wrote:
> On Sat, Jul 31, 2010 at 5:59 PM, Gordan Bobic <gordan@bobich.net> wrote:
>> Edward Capriolo wrote:
>>> On Fri, Jul 30, 2010 at 5:19 AM, Jeff Jansen <jeff.jansen@kkoncepts.net>
>>> wrote:
>>>> Eugen Leitl <eugen@leitl.org> wrote on 2010-Jul-28:
>>>>> Please do; I would be quite interested as well.
>>>> OK, my first pass at "HA Vserver with DRBD and Heartbeat" docs are up at:
>>>>
>>>> http://www.kkoncepts.net/HA
>>>>
>>>> Comments are enabled, so you can comment on the page if you've got
>>>> suggestions,
>>>> corrections, clarifications, etc.
>>>>
>>>> Jeff Jansen
>>>>
>>>>> It does not scale as well as some other solutions, but it may have
>>>>> other advantages that you want (maybe better locking, maybe better
>>>>> failover support...).
>>> A little off topic, but there is an important distinction between
>>> scaling and failover. You really have to think hard about what
>>> you're looking for.
>>>
>>> DRBD gives you disk replication, Active/Passive or Active/Active,
>>> across two nodes. Active/Passive does not scale, and Active/Active
>>> "scales" to two nodes, which really is not scaling: in the best case,
>>> if you scaled a web server this way, you can now handle twice the
>>> traffic. What happens when you get three or ten times the traffic?
>>> That solution no longer holds up.
>> I could be wrong, but I seem to remember that DRBD supports up to 3 nodes.
>>
>
> I misspoke. My point was that it is not scalable beyond a certain
> number of nodes. That does not even take performance into account.

Scaling and performance are pretty fundamentally linked. What is the
point of scaling if not performance?
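
Going back to the three-node point: what I had in mind is DRBD's stacked
resources - a normal two-node resource with a second resource stacked on
top of it that replicates to a third box. From memory it looks roughly
like this in drbd.conf (hostnames, devices and addresses below are made
up, so check the DRBD users guide for the exact syntax):

  resource r0 {
    protocol  C;
    device    /dev/drbd0;
    disk      /dev/sda7;
    meta-disk internal;

    on node1 {
      address 10.0.0.1:7788;
    }
    on node2 {
      address 10.0.0.2:7788;
    }
  }

  resource r0-U {
    protocol A;                      # async replication to the third node

    stacked-on-top-of r0 {
      device  /dev/drbd10;
      address 192.168.42.1:7789;     # floating IP that follows the r0 primary
    }

    on node3 {
      device    /dev/drbd10;
      disk      /dev/sda7;
      address   192.168.42.2:7789;
      meta-disk internal;
    }
  }

The stacked device (/dev/drbd10) is the one you actually put a file
system on, and only on whichever node is currently primary for r0, so it
buys you an extra replica, not any extra concurrency.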

>>> At this point you have to look into file systems that allow multiple
>>> readers/writers, such as NFS or OCFS2. NFS does have locking, but the
>>> general experience seems to be that no one has any luck with it in
>>> high-contention situations.
>>> http://cyrusimap.web.cmu.edu/imapd/faq.html
>> _ALL_ file systems that support concurrent access have performance problems
>> in high contention situations.
>>
>
> There is a difference between performance problems and failing to work
> correctly. If the locks really do not work, they are not really locks.

Locks work fine on cluster file systems like GFS or OCFS2, but they slow
things down. As for NFS, it was always intended to be a little loose
about locking for performance reasons.
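
For example, an NFS client caches attributes by default (that is the
"loose" part); if you want tighter coherence you have to mount with
something like this (server, path and option names here are just
illustrative - see nfs(5)):

  mount -t nfs -o noac server:/export /mnt/shared

and noac hurts badly under load. Even then, NFS locking goes through the
separate lockd/statd machinery rather than the file system itself, which
is why it has such a mixed reputation.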

>>> OCFS2 is a multi-attach file system and it supports much stronger
>>> locking semantics. Great! Now that we have good locks and multi-mount,
>>> the question becomes: what software is designed to work with this type
>>> of file system? Can we have 10 nodes running MySQL and working with
>>> the same MYD tables? It may work in theory, but in practice I do not
>>> know of anyone doing it.
>> You can do this with GFS/GFS2 or OCFS2. You have to set MySQL's locking to
>> external. But the performance suffers as in any high-contention case.
>> Performance of such a solution is in most cases going to be worse than a
>> single node with a non-concurrent-access file system.
>>
>>> http://forums.mysql.com/read.php?144,205829,205829
>> That seems to be a weird isolated incident. There are plenty of accounts of
>> it working fine. Possibly a bug in the particular version of MySQL that the
>> poster was using.
>>
>
> Ok. Point was that it does not work well and performs worse than a
> single instance. That is performance scaling, but people usually mean
> to scale performance UP with more nodes, not down. Does anyone want a
> more complex, worse-performing system?
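
To be concrete about "setting MySQL's locking to external": it means
something like this in my.cnf on every node, with the data directory
sitting on the shared GFS/OCFS2 mount (the path below is made up, and as
far as I know this only even applies to MyISAM - InnoDB will not share
its files at all):

  [mysqld]
  datadir          = /mnt/cluster/mysql  # shared GFS2/OCFS2 mount
  external-locking                       # fcntl() locks on the table files
  delay-key-write  = OFF                 # no locally cached index writes
  query-cache-size = 0                   # one node can't invalidate another's cache

Every table access then has to take a lock on the shared file system and
nothing can safely be cached per node, which is exactly where the
performance goes.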

Clustering isn't about performance - clustering is about HA. For
performance scaling you have to design your application from the ground
up with parallelization in mind, and that means shared-nothing (or
shared-nearly-nothing).

Gordan
Received on Sun Aug 1 09:13:28 2010
