Re: [vserver] Avoiding kernel internal routing among vserver clients

From: Herbert Poetzl <herbert_at_13thfloor.at>
Date: Wed 08 Aug 2007 - 16:19:03 BST
Message-ID: <20070808151903.GC31493@MAIL.13thfloor.at>

On Wed, Aug 08, 2007 at 03:57:10PM +0200, Thomas Weber wrote:
> > On Wednesday, 08.08.2007, at 11:06 +0200, Herbert Poetzl wrote:
> > On Wed, Aug 08, 2007 at 03:25:29AM +0200, Thomas Weber wrote:
> >
> > > On Wednesday, 08.08.2007, at 01:48 +0200, Herbert Poetzl wrote:
> > >
> > > > > That was about the first thing I tried. Routing to anything
> > > > > that's not locally hosted works just fine. But once you try to
> > > > > reach another vserver on another subnet that happens to be
> > > > > hosted on the same host, it will route internally and not hit
> > > > > the wire at all - which is bad
> > > >
> > > > which is actually quite good, as it avoids flooding the net
> > > > (even a local network) with unnecessary packets ...
> > >
> > > don't try to sell me this as a feature :-)
> >
> > that's not a feature, that is how Linux networking works, and most
> > folks see it as an advantage to have lightning-fast networking
> > between different guests on the same host ...
> >
> > > If I could opt in/out I'd agree.
> > >
> > > > From inside the vserver you just see your one interface and
> > > > wouldn't expect certain packets to be routed completely
> > > > differently from the rest.
> >
> > but you do expect that local lan traffic does not go over the
> > gateway on a typical setup :)
>
> I am _not_ talking about traffic between vserver1 on eth0 and vserver2
> on eth0; that's perfectly fine to be done internally and it's what I'd
> expect. I have vserver1 on eth0 and vserver2 on eth1 and want this
> traffic to go through the gw. From the inside-the-vserver view, with
> just eth0 configured, this ain't local traffic. Didn't I make this
> clear in my initial mail?

the kernel implementation does not differentiate between
IPs on eth0 and IPs on eth1; they are configured on the
host, so they are _local_, and, not very surprisingly, the
traffic will use neither eth0 nor eth1 but lo instead
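
you can see this on the host; with made-up addresses, say
vserver2 has 10.0.2.10 on eth1, asking the kernel how it
would route there from the host gives something like:

    # 10.0.2.10 is just an assumed example address for vserver2
    $ ip route get 10.0.2.10
    local 10.0.2.10 dev lo  src 10.0.2.10
        cache <local>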

> As a side note, I could very well imagine that there are setups where
> people put their private/internal vservers on an interface of their
> own without realizing that their public vservers can reach the private
> ones without any restrictions.

if there are no restrictions (think iptables), then local
IPs can communicate (this is no different from what you
get on every Linux box)
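
e.g. if you want to keep the guests apart, you add the rules
on the host and match lo as well, since that is where the
host-local traffic shows up (addresses below are made up,
just a sketch):

    # sketch with assumed addresses: vserver1 = 10.0.1.10 (eth0),
    # vserver2 = 10.0.2.10 (eth1); traffic between them stays on
    # the host and travels over lo, so filter it there
    iptables -A OUTPUT -o lo -s 10.0.2.10 -d 10.0.1.10 -j REJECT
    iptables -A INPUT  -i lo -s 10.0.2.10 -d 10.0.1.10 -j REJECT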

> > > > > and actually makes vservers unusable if you want to move vservers
> > > > > among different hosts.
> > > >
> > > > why do you think so? at least exactly this setup
> > > > works perfectly fine here ...
> > > >
> > > > > Firewalling between the vserver clients for example is not
> > > > > manageable.
> >
> > > > you just make the firewall rules for ethX _and_ lo
> > > > and you are perfectly fine, wherever the guest is
> > >
> > > 3 hosts: 2 production, one for development/testing, later maybe more.
> > > I'd have to manage firewalling rules on the GW and on 3 hosts. The
> > > one responsible for the GW is not the one responsible for the vserver
> > > hosts. Managing 3 different systems (GW, production, development) with
> > > their own firewalling semantics for the same rules on 4+ boxes is
> > > asking for trouble.
> >
> > > Don't you think that'd be bad design?
> >
> > if you go for a completely virtualized network stack
> > (mainline is working on that already) and do not mind
> > the larger overhead in resources and the drastically
> > increased traffic on your DMZ network, as well as the
> > lower network performance (virtualization here has
> > quite noticeable overhead too), instead of lightweight
> > IP isolation (which is what Linux-VServer is doing),
> > then you can get your setup where all traffic (even
> > naturally host-local traffic) is routed to the
> > gateway and back again ...
>
> Where can I find more information about this?
> Once upon a time there was this networking-ng stuff, which was
> supposed to do something like that.

yep, we made a working prototype and figured that
the overhead it introduced is not really acceptable
for a generic solution. shortly after that, mainline
discovered virtualization, and since then they have
been working on a virtualized stack ...

> But all I can find about it seems rather outdated.
>
> > you can also do some tricky NAT-ing and make the
> > outgoing IPs become non-local (as I showed in a quite
> > old ML posting), but I would not suggest doing so ...
>
> As described in my initial mail, I tried this already. I can even put
> the packets from vserver1 to vserver2 on the wire. They look OK on the
> gw, but once vserver2 tries to answer them the kernel seems to mess up
> somehow (or I have some brainloop in my setup). See the tcpdump in my
> posting.

> I've tried a lot with iptables magic, but I didn't get it working.

I would say a setup which does S/DNAT on output and
input from/to local IPs should do the trick ...

when I find some time, I might try it myself ...
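
something along these lines maybe; completely untested, and
all addresses (including the non-local 'shadow' IP) are made
up for the sketch:

    # assumed addresses: vserver1 = 10.0.1.10 (eth0), vserver2 =
    # 10.0.2.10 (eth1); 10.0.2.110 is a hypothetical non-local
    # 'shadow' address for vserver2 which the gateway routes back
    # to this host via eth1

    # rewrite the destination on output so it is no longer a local
    # IP and the packet heads for the gateway instead of using lo
    iptables -t nat -A OUTPUT -s 10.0.1.10 -d 10.0.2.10 \
        -j DNAT --to-destination 10.0.2.110

    # probably also rewrite the source, so that vserver2's replies
    # are not addressed to a local IP either and hit the wire too
    iptables -t nat -A POSTROUTING -o eth0 -s 10.0.1.10 -d 10.0.2.110 \
        -j SNAT --to-source 10.0.1.110

    # when the packet comes back in, map the shadow destination to
    # the real guest IP again
    iptables -t nat -A PREROUTING -i eth1 -d 10.0.2.110 \
        -j DNAT --to-destination 10.0.2.10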

> > > > > IDS would be another issue.
> > > >
> > > > assuming that IDS stands for Intrusion-Detection System
> > > > what problem do you see with that?
> > >
> > > An IDS setup on the GW won't see all vserver-to-vserver traffic.
> > > Same with accounting etc.
> >
> > > In case of an incident where one of the production machines goes down
> > > and the other one hosts all the vservers, accounting would show less
> > > traffic and the IDS wouldn't see anything at all.
> >
> > yeah, maybe Xen or even QEMU is a better approach
> > for your specific requirements ...
>
> I've been thinking about that already. Like putting the vservers for
> ethX in their own virtual machine. But that seems rather cumbersome
> and makes it more difficult for the people who are eventually supposed
> to manage that setup. Leaving aside the few vservers that need to serve
> both LANs.
> OpenVZ might be another approach since they seem to have a better
> virtualized network stack.

once again, Linux-VServer is doing IP isolation, so
naturally the network stack is _not_ virtualized, and
any stack virtualization (bad or good) will be 'better
virtualized' in your terminology :)

best,
Herbert

> Haven't tested yet. But I'd rather stay with vservers.
>
> Tom
>
