Re: [Vserver] support for multicast?

From: Drew Lippolt <dlippolt_at_moverotech.com>
Date: Sat 26 Nov 2005 - 23:21:53 GMT
Message-Id: <72522943-B598-4775-AA81-920E7FA82B6C@moverotech.com>

interesting. i tried exactly this (minus 1/2 the capabilities), with
both the old and new configs.

can you comment on your kernel and vs versions, and whether you are using
distro-supplied binaries? (e.g. the vs utils out of debian sarge)

thanks,

<drew>

On Nov 26, 2005, at 3:52 PM, James B. MacLean wrote:

> Sorry I am joining this late. Also sorry that I am top posting :(.
> But we do the tomcat multicast clustering here and it was just as
> you had attempted. We run legacy mode and have:
>
> IPROOT="eth0:192.168.129.234 228.0.0.4"
> S_CAPS="CAP_KILL CAP_SETGID CAP_SETUID CAP_SETPCAP
> CAP_NET_BROADCAST CAP_SYS_NICE CAP_NET_ADMIN"
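>
> (That's the legacy-mode syntax; for the new-style util-vserver config
> the rough equivalent -- untested here, and the exact file layout
> depends on your util-vserver version -- would be something like:
>
>   /etc/vservers/<name>/interfaces/0/dev    eth0
>   /etc/vservers/<name>/interfaces/0/ip     192.168.129.234
>   /etc/vservers/<name>/interfaces/1/ip     228.0.0.4
>   /etc/vservers/<name>/bcapabilities       NET_BROADCAST, NET_ADMIN, ...
>                                            (one capability per line)
> )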
>
> Probably added too many capabilities, but security is not the
> issue, just being able to run in a vserver :).
>
> Yes, on start the vserver spews out stuff, but the multicast address
> was needed in the conf file.
>
> Hope this helps,
> JES
>
>
> Drew Lippolt wrote:
>
>>
>> i've upgraded to:
>>
>> kernel 2.6.12.5
>> vs 2.0
>>
>> i'm still getting the same problem as before.
>>
>> and i believe i have narrowed down the multicast problem.
>>
>> strangely, i can SEND/BROADCAST no problem. i say strange since
>> the creation of the multicast group is more complicated than
>> simply consuming the broadcast.
>>
>> running inside the vserver, the multicast client never sees traffic.
>>
>> i can run the broadcaster inside the vserver and talk to adjacent
>> receivers on other physical boxes.
>>
>>
>> but when i run multicast test code for RECEIVING multicast inside
>> a vserver, strace shows:
>>
>>
>> mcaster@v237:~/try2$ strace ./multirec 224.0.0.9 9210
>> execve("./multirec", ["./multirec", "224.0.0.9", "9210"], [/* 13
>> vars */]) = 0
>> uname({sys="Linux", node="v237", ...}) = 0
>> brk(0) = 0x804a000
>> old_mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|
>> MAP_ANONYMOUS, -1, 0) = 0xb7fb7000
>> access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file
>> or directory)
>> open("/etc/ld.so.preload", O_RDONLY) = -1 ENOENT (No such file
>> or directory)
>> open("/etc/ld.so.cache", O_RDONLY) = 3
>> fstat64(3, {st_mode=S_IFREG|0644, st_size=8305, ...}) = 0
>> old_mmap(NULL, 8305, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7fb4000
>> close(3) = 0
>> access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file
>> or directory)
>> open("/lib/tls/libc.so.6", O_RDONLY) = 3
>> read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0`Z\1
>> \000"..., 512) = 512
>> fstat64(3, {st_mode=S_IFREG|0755, st_size=1254468, ...}) = 0
>> old_mmap(NULL, 1264780, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) =
>> 0xb7e7f000
>> old_mmap(0xb7fa9000, 36864, PROT_READ|PROT_WRITE, MAP_PRIVATE|
>> MAP_FIXED, 3, 0x129000) = 0xb7fa9000
>> old_mmap(0xb7fb2000, 7308, PROT_READ|PROT_WRITE, MAP_PRIVATE|
>> MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7fb2000
>> close(3) = 0
>> old_mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|
>> MAP_ANONYMOUS, -1, 0) = 0xb7e7e000
>> set_thread_area({entry_number:-1 -> 6, base_addr:0xb7e7e460,
>> limit: 1048575, seg_32bit:1, contents:0, read_exec_only:0,
>> limit_in_pages:1, seg_not_present:0, useable:1}) = 0
>> munmap(0xb7fb4000, 8305) = 0
>> socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 3
>> bind(3, {sa_family=AF_INET, sin_port=htons(9210),
>> sin_addr=inet_addr ("0.0.0.0")}, 16) = 0
>> setsockopt(3, SOL_IP, IP_ADD_MEMBERSHIP, "\340\0\0\t\0\0\0\0", 8) = 0
>> recvfrom(3, <unfinished ...>
>>
>>
>> when running the receiver code OUTSIDE the vserver (on the base host):
>>
>> app2:/vservers/v237/home/mcaster/try2# strace ./multirec 224.0.0.9
>> 9210
>> execve("./multirec", ["./multirec", "224.0.0.9", "9210"], [/* 17
>> vars */]) = 0
>> uname({sys="Linux", node="app2.moverotech.com", ...}) = 0
>> brk(0) = 0x804a000
>> old_mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|
>> MAP_ANONYMOUS, -1, 0) = 0xb7ef2000
>> access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file
>> or directory)
>> open("/etc/ld.so.preload", O_RDONLY) = -1 ENOENT (No such file
>> or directory)
>> open("/etc/ld.so.cache", O_RDONLY) = 3
>> fstat64(3, {st_mode=S_IFREG|0644, st_size=10985, ...}) = 0
>> old_mmap(NULL, 10985, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7eef000
>> close(3) = 0
>> access("/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file
>> or directory)
>> open("/lib/tls/libc.so.6", O_RDONLY) = 3
>> read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0`Z\1
>> \000"..., 512) = 512
>> fstat64(3, {st_mode=S_IFREG|0755, st_size=1254468, ...}) = 0
>> old_mmap(NULL, 1264780, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) =
>> 0xb7dba000
>> old_mmap(0xb7ee4000, 36864, PROT_READ|PROT_WRITE, MAP_PRIVATE|
>> MAP_FIXED, 3, 0x129000) = 0xb7ee4000
>> old_mmap(0xb7eed000, 7308, PROT_READ|PROT_WRITE, MAP_PRIVATE|
>> MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7eed000
>> close(3) = 0
>> old_mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|
>> MAP_ANONYMOUS, -1, 0) = 0xb7db9000
>> set_thread_area({entry_number:-1 -> 6, base_addr:0xb7db9460,
>> limit: 1048575, seg_32bit:1, contents:0, read_exec_only:0,
>> limit_in_pages:1, seg_not_present:0, useable:1}) = 0
>> munmap(0xb7eef000, 10985) = 0
>> socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 3
>> bind(3, {sa_family=AF_INET, sin_port=htons(9210),
>> sin_addr=inet_addr ("0.0.0.0")}, 16) = 0
>> setsockopt(3, SOL_IP, IP_ADD_MEMBERSHIP, "\340\0\0\t\0\0\0\0", 8) = 0
>> recvfrom(3, "testtest", 255, 0, NULL, NULL) = 8
>> time(NULL) = 1132896911
>> brk(0) = 0x804a000
>> brk(0x806b000) = 0x806b000
>> brk(0) = 0x806b000
>> open("/etc/localtime", O_RDONLY) = 4
>> fstat64(4, {st_mode=S_IFREG|0644, st_size=1279, ...}) = 0
>> mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|
>> MAP_ANONYMOUS, -1, 0) = 0xb7ef1000
>> read(4, "TZif\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\5\0\0\0\5
>> \0"..., 4096) = 1279
>> close(4) = 0
>> munmap(0xb7ef1000, 4096) = 0
>> fstat64(1, {st_mode=S_IFCHR|0600, st_rdev=makedev(136, 3), ...}) = 0
>> mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|
>> MAP_ANONYMOUS, -1, 0) = 0xb7ef1000
>> write(1, "Time Received: Thu Nov 24 23:35:"..., 51Time Received:
>> Thu Nov 24 23:35:11 2005 : testtest
>> ) = 51
>> recvfrom(3, "testtest", 255, 0, NULL, NULL) = 8
>> time(NULL) = 1132896914
>> write(1, "Time Received: Thu Nov 24 23:35:"..., 51Time Received:
>> Thu Nov 24 23:35:14 2005 : testtest
>> ) = 51
>>
>>
>> the two traces look identical up to the point where the one inside the
>> vserver never actually gets a recvfrom(3, "testtest", 255, 0, NULL, NULL) = 8
>>
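>> for reference, the receive path of multirec is essentially the usual
>> join-then-recvfrom pattern. a minimal sketch (not the exact source --
>> argv handling and error checks stripped) of what both traces show:
>>
>>   #include <stdio.h>
>>   #include <string.h>
>>   #include <sys/types.h>
>>   #include <sys/socket.h>
>>   #include <netinet/in.h>
>>   #include <arpa/inet.h>
>>
>>   int main(void)
>>   {
>>       int s;
>>       struct sockaddr_in addr;
>>       struct ip_mreq mreq;
>>       char buf[256];
>>       ssize_t n;
>>
>>       s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
>>
>>       /* bind to the multicast port on all interfaces (0.0.0.0:9210) */
>>       memset(&addr, 0, sizeof(addr));
>>       addr.sin_family = AF_INET;
>>       addr.sin_port = htons(9210);
>>       addr.sin_addr.s_addr = htonl(INADDR_ANY);
>>       bind(s, (struct sockaddr *) &addr, sizeof(addr));
>>
>>       /* join 224.0.0.9 on the default interface -- this is the
>>        * setsockopt(3, SOL_IP, IP_ADD_MEMBERSHIP, "\340\0\0\t...", 8)
>>        * that succeeds in both traces */
>>       mreq.imr_multiaddr.s_addr = inet_addr("224.0.0.9");
>>       mreq.imr_interface.s_addr = htonl(INADDR_ANY);
>>       setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
>>
>>       /* on the host this returns the 8-byte "testtest" datagrams;
>>        * inside the vserver it blocks forever */
>>       for (;;) {
>>           n = recvfrom(s, buf, 255, 0, NULL, NULL);
>>           if (n > 0)
>>               printf("got %d bytes: %.*s\n", (int) n, (int) n, buf);
>>       }
>>       return 0;
>>   }
>>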
>> thoughts?
>>
>> <drew>
>>
>>
>>
>> On Nov 16, 2005, at 5:55 AM, Drew Lippolt wrote:
>>
>>>
>>> On Nov 15, 2005, at 7:11 PM, Herbert Poetzl wrote:
>>>
>>>> On Mon, Nov 14, 2005 at 03:09:32AM -0600, Drew Lippolt wrote:
>>>>
>>>>>
>>>>> QUESTION: what is the current story with multicast support for
>>>>> both
>>>>> sending and receiving multicast traffic?
>>>>>
>>>>> BACKGROUND:
>>>>>
>>>>> trying to get tomcat clustering working in vserver.
>>>>>
>>>>> http://tomcat.apache.org/tomcat-5.5-doc/cluster-howto.html
>>>>>
>>>>> DETAILS:
>>>>>
>>>>> [root@app2 opt]# cat /proc/version
>>>>> Linux version 2.4.30-vs1.2.10 (root@app1.moverotech.com) (gcc
>>>>> version
>>>>> 3.2.3 20030502 (Red Hat Linux 3.2.3-52)) #1 Wed Aug 10
>>>>> 01:27:44 CDT 2005
>>>>>
>>>>> [root@app2 opt]# grep MULTICAST /boot/config-2.4.30-vs1.2.10
>>>>> CONFIG_IP_MULTICAST=y
>>>>>
>>>>> tomcat 5.5.12
>>>>>
>>>>> i'm running debian sarge vservers on redhat enterprise linux 3.0
>>>>> boxes at rackspace, using a stock kernel.org kernel with the required
>>>>> tweaks for my hardware. this is a production environment which has
>>>>> been supporting many apps beautifully for 3 months.
>>>>>
>>>>> the basic idea is that i have multiple app-layer boxes, across which
>>>>> i want to distribute tomcat clusters: a single instance of tomcat,
>>>>> running in a single vserver, per real host, per application. so if i
>>>>> have 3 apps on 3 real servers, i'd have 9 total vservers across 3
>>>>> clusters. i'm not even getting that far. my test setup is 2 real
>>>>> hosts, each with one vserver with an 'out of the box' tomcat cluster
>>>>> config. the tomcat instances aren't finding each other.
>>>>>
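>>>>> (the 'out of the box' cluster <Membership> element in server.xml --
>>>>> quoting from memory, so double-check it against your copy -- uses
>>>>> multicast group 228.0.0.4 on port 45564:
>>>>>
>>>>>   <Membership
>>>>>       className="org.apache.catalina.cluster.mcast.McastService"
>>>>>       mcastAddr="228.0.0.4"
>>>>>       mcastPort="45564"
>>>>>       mcastFrequency="500"
>>>>>       mcastDropTime="3000"/>
>>>>> )
>>>>>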
>>>>> the stacktraces i'm getting on tomcat --shutdown-- are as follows.
>>>>> they don't look all that interesting: it's code waiting for
>>>>> incoming connections on a tcp port that never arrive, since the
>>>>> mcast conversation never happens (ReplicationListener.java:130):
>>>>>
>>>>> SEVERE: Unable to process request in ReplicationListener
>>>>> java.nio.channels.ClosedSelectorException
>>>>> at sun.nio.ch.SelectorImpl.lockAndDoSelect
>>>>> (SelectorImpl.java:
>>>>> 55)
>>>>> at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:70)
>>>>> at
>>>>> org.apache.catalina.cluster.tcp.ReplicationListener.listen
>>>>> (ReplicationListener.java:130)
>>>>> at org.apache.catalina.cluster.tcp.ClusterReceiverBase.run
>>>>> (ClusterReceiverBase.java:394)
>>>>> at java.lang.Thread.run(Thread.java:534)
>>>>>
>>>>>
>>>>> THINGS I'VE TRIED:
>>>>>
>>>>> * add multicast ip to IPROOT. this just causes barf messages at
>>>>> vserver startup
>>>>>
>>>>> Starting the virtual server v208
>>>>> Server v208 is not running
>>>>> SIOCSIFADDR: Invalid argument
>>>>> SIOCSIFFLAGS: Cannot assign requested address
>>>>> SIOCSIFNETMASK: Cannot assign requested address
>>>>> SIOCGIFADDR: Cannot assign requested address
>>>>> SIOCSIFBROADCAST: Cannot assign requested address
>>>>> SIOCSIFBRDADDR: Cannot assign requested address
>>>>> SIOCSIFFLAGS: Cannot assign requested address
>>>>> ipv4root is now 192.168.1.208 228.0.0.4
>>>>
>>>>
>>>> how did you add it?
>>>
>>>
>>> tried a few different ways.
>>>
>>> IPROOT="192.168.1.237 228.0.0.4"
>>> IPROOT="192.168.1.237 228.0.0.4/224.0.0.0"
>>>
>>>
>>>>
>>>>> * enabled NET_ADMIN and NET_BROADCAST. this makes no difference
>>>>
>>>>
>>>> well, NET_ADMIN is what you probably need for
>>>> multicasting, NET_BROADCAST should suffice for
>>>> multicast reception ...
>>>
>>>
>>>
>>> the following are true, with NET_ADMIN, NET_BROADCAST, NET_RAW,
>>> SYS_ADMIN all set
>>>
>>> * clustering tool's test case suggests i'm SENDING multicast
>>> traffic, but not RECEIVING
>>>
>>> * tcpdump suggests i'm SENDING but not RECEIVING
>>>
>>> * i can ping the multicast address from the shell while the app
>>> server is running, but not when it's not running
>>>
>>> * adding routes doesn't seem to affect it at all (a couple of quick
>>> checks are below)
>>>
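>>> (the quick checks mentioned above -- illustrative only, and the output
>>> formats vary a bit by kernel/distro:
>>>
>>>   # does the group join show up at all, host side vs guest side?
>>>   cat /proc/net/igmp     # joined groups (in hex) per interface
>>>   netstat -gn            # same information via net-tools
>>>
>>>   # are the capabilities actually effective inside the guest?
>>>   grep Cap /proc/self/status   # CapEff is a hex bitmask; NET_BROADCAST
>>>                                # is bit 11, NET_ADMIN is bit 12
>>> )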
>>>
>>>
>>>>
>>>> if you are interested in 'improving' multicast
>>>> capabilities in a safe way, and willing to do
>>>> some testing, please contact me on the IRC
>>>> channel ...
>>>
>>>
>>>
>>> i'm down for whatever. i have a good testbed. i just missed
>>> you on irc tonight. will try again tomorrow.
>>>
>>>
>>>>
>>>> best,
>>>> Herbert
>>>>
>>>> PS: will require switching to 2.6 kernel and
>>>> recent devel version (2.1.x)
>>>
>>>
>>> i'm planning on moving to 2.6 anyway. we can chat about the
>>> 2.1.x stuff.
>>>
>>> <snip>
>>>
>>>
>>>>
>>

_______________________________________________
Vserver mailing list
Vserver@list.linux-vserver.org
http://list.linux-vserver.org/mailman/listinfo/vserver