Re: [vserver] Avoiding kernel internal routing among vserver clients

From: Thomas Weber <l_vserver_at_mail2news.4t2.com>
Date: Wed 08 Aug 2007 - 14:57:10 BST
Message-Id: <1186581430.25125.51.camel@localhost>

On Wednesday, 08.08.2007 at 11:06 +0200, Herbert Poetzl wrote:
> On Wed, Aug 08, 2007 at 03:25:29AM +0200, Thomas Weber wrote:
> > On Wednesday, 08.08.2007 at 01:48 +0200, Herbert Poetzl wrote:
> >
> > > > That was like the first thing I've tried. Routing to anything that's
> > > > not locally hosted works just fine. But once you try to reach another
> > > > vserver on another subnet that happens to be hosted on the same host
> > > > it will route internally and not hit the wire at all - which is bad
> > >
> > > which is actually quite good, as it avoids flooding
> > > the net (even a local network) with unnecessary
> > > packets ...
> >
> > don't try to sell me this as a feature :-)
>
> that's not a feature, that is how Linux networking
> works, and most folks see it as an advantage to have
> lightning-fast networking between different guests
> on the same host ...
>
> > If I could opt in/out I'd agree.
> > From inside the vserver you just see your one interface and wouldn't
> > expect certain packets to be routed completely differently from the
> > rest.
>
> but you do expect that local lan traffic does not
> go over the gateway on a typical setup :)

I am _not_ talking about traffic between vserver1 on eth0 and vserver2 on
eth0; that's perfectly fine to handle internally and it's what I'd
expect.
I have vserver1 on eth0 and vserver2 on eth1 and want that traffic to go
through the gw. Seen from inside the vserver, with just eth0 configured,
this isn't local traffic.
Didn't I make this clear in my initial mail?
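To illustrate (addresses made up: vserver1 = 192.168.1.10 on eth0,
vserver2 = 10.0.0.10 on eth1): both guest IPs sit in the host's "local"
routing table, which the kernel consults before the main table, so the
traffic gets delivered internally no matter what routes you add.
Something like this on the host should show it (output abbreviated,
untested here):

  # every guest address shows up as a 'local' route, scope host
  ip route show table local
  #   local 192.168.1.10 dev eth0  proto kernel  scope host  src 192.168.1.10
  #   local 10.0.0.10    dev eth1  proto kernel  scope host  src 10.0.0.10

  # ask the kernel how it would route vserver1 -> vserver2
  ip route get 10.0.0.10 from 192.168.1.10
  #   local 10.0.0.10 from 192.168.1.10 dev lo    <- never touches eth0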

As a side note, I could very well imagine setups where people put their
private/internal vservers on an interface of their own without realizing
that their public vservers can reach the private ones without any
restrictions.
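If you really want to stop that on the host itself, you end up with
exactly the "ethX _and_ lo" rules Herbert mentions below, because the
guest-to-guest traffic only ever appears on lo. Roughly (same made-up
addresses, public guest 192.168.1.10 on eth0, private guest 10.0.0.10
on eth1, untested):

  # traffic for the private guest coming in from the wire
  iptables -A INPUT -i eth0 -d 10.0.0.10 -j DROP
  # the host-internal short cut between the two guests never leaves lo,
  # so only a rule on lo catches it
  iptables -A INPUT -i lo -s 192.168.1.10 -d 10.0.0.10 -j DROP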

> > > > and actually makes vservers unusable if you want to move vservers
> > > > among different hosts.
> > >
> > > why do you think so? at least exactly this setup
> > > works perfectly fine here ...
> > >
> > > > Firewalling between the vserver clients for example is not
> > > > manageable.
>
> > > you just make the firewall rules for ethX _and_ lo
> > > and you are perfectly fine, wherever the guest is
> >
> > 3 hosts, 2 production, one for development/testing, later maybe more.
> > I'd have to manage firewalling rules on the GW and on 3 hosts. The
> > one responsible for the GW is not the one responsible for the vserver
> > hosts. Managing 3 different systems (GW, production, development) with
> > their own firewalling semantics for the same rules on 4+ boxes is
> > asking for trouble.
>
> > Don't you think that'd be bad design?
>
> if you go for a completely virtualized network stack
> (mainline is working on that already) and do not mind
> the larger overhead in resources and the drastically
> increased traffic on your DMZ network, as well as the
> lower network performance (virtualization here has
> quite noticeable overhead too) instead of lightweight
> IP isolation (that is what Linux-VServer is doing),
> then you can get your setup where all traffic (even
> 'naturally host local' traffic) is routed to the
> gateway and back again ...

Where can I find more information about this? Once upon a time there was
this networking-ng stuff, which was supposed to do something like that,
but everything I can find about it seems rather outdated.

> you can also do some tricky NAT-ing and make the
> outgoing IPs become non-local (as I showed in a quite
> old ML posting), but I would not suggest doing so ...

As described in my initial mail, I tried this already. I can even put
the packets from vserver1 to vserver2 on the wire. They look OK on the
gw, but once vserver2 tries to answer them, the kernel seems to mess
things up somehow (or I have some brain loop in my setup). See the
tcpdump in my posting.
I've tried a lot of iptables magic, but I couldn't get it working.
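To pin down where vserver2's answers actually go, watching lo and eth1
on the host in parallel (same made-up addresses as above, untested)
should show whether the replies ever reach the wire or are still being
delivered inside the kernel:

  # if the answers to vserver1 only ever show up here, the kernel is
  # still short-circuiting the return path instead of sending it to the gw
  tcpdump -ni lo host 192.168.1.10 and host 10.0.0.10

  # and on the physical interface, to see what actually goes out
  tcpdump -ni eth1 host 10.0.0.10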

> > > > IDS would be another issue.
> > >
> > > assuming that IDS stands for Intrusion-Detection System
> > > what problem do you see with that?
> >
> > IDS setup on the GW won't see all vserver-vserver traffic.
> > Same with accounting etc.
>
> > In case of an incident, when one of the production machines goes down
> > and the other one hosts all the vservers, accounting would show less
> > traffic and the IDS wouldn't see anything at all.
>
> yeah, maybe Xen or even QEMU is a better approach
> for your specific requirements ...

I've been thinking about that already, e.g. putting the vservers for
ethX into a virtual machine of their own. But that seems rather
cumbersome and makes things more difficult for the people who are
supposed to manage the setup in the end, not to mention the few vservers
that need to serve both LANs.
OpenVZ might be another approach, since they seem to have a better
virtualized network stack; I haven't tested it yet.
But I'd rather stay with vservers.

  Tom
Received on Wed Aug 8 14:57:46 2007
