Commit graph

835 commits

Author SHA1 Message Date
Herbert Xu
e5ed639913 [IPV4]: Replace __in_dev_get with __in_dev_get_rcu/rtnl
The following patch renames __in_dev_get() to __in_dev_get_rtnl() and
introduces __in_dev_get_rcu() to cover the second case.

1) RCU with refcnt should use in_dev_get().
2) RCU without refcnt should use __in_dev_get_rcu().
3) All others must hold RTNL and use __in_dev_get_rtnl().
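
An illustrative usage sketch of the three rules above (not part of the patch itself):

	struct in_device *idev;

	/* 1) RCU with refcnt: takes a reference, caller may sleep afterwards. */
	idev = in_dev_get(dev);
	if (idev) {
		/* ... use idev ... */
		in_dev_put(idev);
	}

	/* 2) RCU without refcnt: only valid inside the read-side section. */
	rcu_read_lock();
	idev = __in_dev_get_rcu(dev);
	if (idev) {
		/* ... use idev; no sleeping, no reference held ... */
	}
	rcu_read_unlock();

	/* 3) Everything else must hold the RTNL. */
	ASSERT_RTNL();
	idev = __in_dev_get_rtnl(dev);
	/* ... use idev while the RTNL is held ... */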

There is one exception in net/ipv4/route.c which is in fact a pre-existing
race condition.  I've marked it as such so that we remember to fix it.

This patch is based on suggestions and prior work by Suzanne Wood and
Paul McKenney.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03 14:35:55 -07:00
David S. Miller
a5e7c210fe [IPV6]: Fix leak added by udp connect dst caching fix.
Based upon a patch from Mitsuru KANDA <mk@linux-ipv6.org>

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03 14:21:58 -07:00
Yan Zheng
f36d6ab182 [IPV6]: Fix ipv6 fragment ID selection at slow path
Signed-Off-By: Yan Zheng <yanzheng@21cn.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03 14:19:15 -07:00
Herbert Xu
444fc8fc3a [IPV4]: Fix "Proxy ARP seems broken"
Meelis Roos <mroos@linux.ee> wrote:
> RK> My firewall setup relies on proxyarp working.  However, with 2.6.14-rc3,
> RK> it appears to be completely broken.  The firewall is 212.18.232.186,
> 
> Same here with some kernel between 14-rc2 and 14-rc3 - no response to
> ARP on a proxyarp gateway. Sorry, no exact revision and no more debugging
> yet since it's a production gateway.

The breakage is caused by the change to use the CB area for flagging
whether a packet has been queued due to proxy_delay.  This area gets
cleared every time arp_rcv gets called.  Unfortunately packets delayed
due to proxy_delay also go through arp_rcv when they are reprocessed.

In fact, I can't think of a reason why delayed proxy packets should go
through netfilter again at all.  So the easiest solution is to bypass
that and go straight to arp_process.

This is essentially what would've happened before netfilter support
was added to ARP.
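
An illustrative sketch of that idea (not the literal diff):

/* When the proxy_delay timer fires for a queued ARP packet, hand it
 * straight to arp_process() instead of re-injecting it through
 * arp_rcv()/NF_HOOK(), so the flag stored in the CB area on the first
 * pass is not wiped by a second arp_rcv() call.
 */
static void parp_redo(struct sk_buff *skb)
{
	arp_process(skb);
}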

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> 
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03 14:18:10 -07:00
Russell King
496a22b08f [NET]: Fix "sysctl_net.c:36: error: 'core_table' undeclared here"
During the build for ARM machine type "fortunet", this error occurred:

  CC      net/sysctl_net.o
net/sysctl_net.c:36: error: 'core_table' undeclared here (not in a function)

It appears that the following configuration settings cause this error
due to a missing include:
CONFIG_SYSCTL=y
CONFIG_NET=y
# CONFIG_INET is not set

core_table appears to be declared in net/sock.h.  If CONFIG_INET were
defined, net/sock.h would have been included via:
  sysctl_net.c -> net/ip.h -> linux/ip.h -> net/sock.h

so include it directly.
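
The fix then amounts to something like this near the top of net/sysctl_net.c:

#include <net/sock.h>	/* declares core_table even when CONFIG_INET is not set */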

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03 14:16:34 -07:00
Eric Dumazet
81c3d5470e [INET]: speedup inet (tcp/dccp) lookups
Arnaldo and I agreed it could be applied now, because I have other
pending patches depending on this one (Thank you Arnaldo)

(The other important patch moves skc_refcnt in a separate cache line,
so that the SMP/NUMA performance doesn't suffer from cache line ping pongs)

1) First some performance data :
--------------------------------

tcp_v4_rcv() wastes a *lot* of time in __inet_lookup_established()

The most time critical code is :

sk_for_each(sk, node, &head->chain) {
     if (INET_MATCH(sk, acookie, saddr, daddr, ports, dif))
         goto hit; /* You sunk my battleship! */
}

The sk_for_each() does use prefetch() hints, but only the beginning of
"struct sock" is prefetched.

As INET_MATCH's first comparison uses inet_sk(__sk)->daddr, which is far
away from the beginning of "struct sock", it has to bring a cold cache
line into the CPU cache. Each iteration has to touch at least 2 cache
lines.

This can be problematic if some chains are very long.

2) The goal
-----------

The idea I had is to change things so that INET_MATCH() can return
FALSE in 99% of cases using only data that is already in the CPU cache,
i.e. one cache line per iteration.

3) Description of the patch
---------------------------

Adds a new 'unsigned int skc_hash' field in 'struct sock_common',
filling a 32-bit hole on 64-bit platforms.

struct sock_common {
	unsigned short		skc_family;
	volatile unsigned char	skc_state;
	unsigned char		skc_reuse;
	int			skc_bound_dev_if;
	struct hlist_node	skc_node;
	struct hlist_node	skc_bind_node;
	atomic_t		skc_refcnt;
+	unsigned int		skc_hash;
	struct proto		*skc_prot;
};

Store the full hash, not masked by (ehash_size - 1), in this 32-bit
field. Using this full hash as the first comparison done in INET_MATCH
permits us to immediately skip the element, without touching a second
cache line, in case of a miss.

Remove the sk_hashent/tw_hashent fields, since skc_hash (aliased to
sk_hash and tw_hash) already contains the slot number if we mask it
with (ehash_size - 1).

File include/net/inet_hashtables.h

64-bit platforms:
#define INET_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
     (((__sk)->sk_hash == (__hash))                         &&  \
     ((*((__u64 *)&(inet_sk(__sk)->daddr))) == (__cookie))  &&  \
     ((*((__u32 *)&(inet_sk(__sk)->dport))) == (__ports))   &&  \
     (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))

32-bit platforms:
#define TCP_IPV4_MATCH(__sk, __hash, __cookie, __saddr, __daddr, __ports, __dif)\
     (((__sk)->sk_hash == (__hash))                 &&  \
     (inet_sk(__sk)->daddr          == (__saddr))   &&  \
     (inet_sk(__sk)->rcv_saddr      == (__daddr))   &&  \
     (!((__sk)->sk_bound_dev_if) || ((__sk)->sk_bound_dev_if == (__dif))))


- Adds a prefetch(head->chain.first) in
__inet_lookup_established()/__tcp_v4_check_established(),
__inet6_lookup_established()/__tcp_v6_check_established() and
__dccp_v4_check_established() to bring the first element of the
list into the cache, before the {read|write}_lock(&head->lock);
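
An illustrative sketch of the resulting fast path (names as in the description above, not the literal code):

	prefetch(head->chain.first);	/* warm the first chain element */
	read_lock(&head->lock);
	sk_for_each(sk, node, &head->chain) {
		/* The first INET_MATCH term compares the cached full hash,
		 * which lives in sock_common on an already-hot cache line,
		 * so most non-matching entries are skipped without touching
		 * inet_sk(sk)->daddr on a cold line.
		 */
		if (INET_MATCH(sk, hash, acookie, saddr, daddr, ports, dif))
			goto hit; /* You sunk my battleship! */
	}
	read_unlock(&head->lock);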

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Acked-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03 14:13:38 -07:00
Herbert Xu
325ed82393 [NET]: Fix packet timestamping.
I've found the problem in general.  It affects any 64-bit
architecture.  The problem occurs when you change the system time.

Suppose that when you boot, your system clock is ahead by a day.
This gets recorded down in skb_tv_base.  You then wind the clock back
by a day.  From that point onwards the offset will be negative, which
essentially overflows the 32-bit variables they're stored in.

In fact, why don't we just store the real time stamp in those 32-bit
variables? After all, we're not going to overflow for quite a while
yet.

When we do overflow, we'll need a better solution of course.
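
For illustration only (plain userspace C, not kernel code), this is the wrap-around a negative offset causes in an unsigned 32-bit field:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t offset = -86400;		/* clock wound back by one day */
	uint32_t stored = (uint32_t)offset;	/* what a 32-bit field keeps */

	printf("%u\n", stored);			/* 4294880896: a nonsense value */
	return 0;
}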

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-10-03 13:57:23 -07:00
Scott Talbert
75b895c15b [ATM]: [lec] reset retry counter when new arp issued
From: Scott Talbert <scott.talbert@lmco.com>
Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-29 17:31:30 -07:00
Scott Talbert
4a7097fcc4 [ATM]: [lec] attempt to support cisco failover
From: Scott Talbert <scott.talbert@lmco.com>
Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-29 17:30:54 -07:00
Alexey Kuznetsov
09e9ec8711 [TCP]: Don't over-clamp window in tcp_clamp_window()
From: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>

Handle better the case where the sender sends full sized
frames initially, then moves to a mode where it trickles
out small amounts of data at a time.

This known problem is even mentioned in the comments
above tcp_grow_window() in tcp_input.c, specifically:

...
 * The scheme does not work when sender sends good segments opening
 * window and then starts to feed us spagetti. But it should work
 * in common situations. Otherwise, we have to rely on queue collapsing.
...

When the sender gives full sized frames, the "struct sk_buff" overhead
from each packet is small.  So we'll advertise a larger window.
If the sender moves to a mode where small segments are sent, this
ratio becomes tilted to the other extreme and we start overrunning
the socket buffer space.

tcp_clamp_window() tries to address this, but its clamping of
tp->window_clamp is a wee bit too aggressive for this particular case.

Fix confirmed by Ion Badulescu.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-29 17:17:15 -07:00
David S. Miller
01ff367e62 [TCP]: Revert 6b251858d3
But retain the comment fix.

Alexey Kuznetsov has explained the situation as follows:

--------------------

I think the fix is incorrect. Look, the RFC function init_cwnd(mss) is
not continuous: e.g. for mss=1095 it needs initial window 1095*4, but
for mss=1096 it is 1096*3. We do not know exactly what mss the sender
used for its calculations. If we advertised 1096 (and calculated an
initial window of 3*1096), the sender could limit its mss to some value
< 1096 and then it would need a window of his_mss*4 > 3*1096 to send
the initial burst.

See?

So, the honest function for the initial rcv_wnd derived from
tcp_init_cwnd() is:

	init_rcv_wnd(mss)=
	  min { init_cwnd(mss1)*mss1 for mss1 <= mss }

It is something sort of:

	if (mss < 1096)
		return mss*4;
	if (mss < 1096*2)
		return 1096*4;
	return mss*2;

(I just scribbled a graph on a piece of paper; it is difficult to see
or to explain without it.)

I selected it differently, giving more window than is strictly
required.  The initial receive window must be large enough to allow a
sender following the RFC (or just setting the initial cwnd to 2) to
send the initial burst.  But beyond that it is arbitrary, so I decided
to give slack space of one segment.

Actually, the logic was:

If mss is low/normal (<= ethernet), set the window to receive more
than the initial burst allowed by the RFC under the worst conditions,
i.e. mss*4.  This gives slack space of 1 segment for ethernet frames.

For msses slightly larger than an ethernet frame, take 3.  Try to give
slack space of 1 frame again.

If mss is huge, force 2*mss.  No slack space.
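
An illustrative sketch of that selection logic (the thresholds here are illustrative, not the exact kernel constants):

static unsigned int init_rcv_wnd(unsigned int mss)
{
	if (mss <= 1460)		/* low/normal, up to an ethernet frame */
		return 4 * mss;		/* RFC burst plus one segment of slack */
	if (mss <= 2 * 1460)		/* slightly more than an ethernet frame */
		return 3 * mss;
	return 2 * mss;			/* huge mss: no slack space */
}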

The value 1460*3 is really confusing. The minimal one is 1096*2, but
beyond that it is an arbitrary value. It was meant to be ~4096. 1460*3
is just the magic number from the RFC, 1460*3 = 1095*4 is the magic
:-), so I guess the hands typed this by themselves.

--------------------

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-29 17:07:20 -07:00
Linus Torvalds
eb693d2994 Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 2005-09-29 08:56:47 -07:00
Al Viro
666002218d [PATCH] proc_mkdir() should be used to create procfs directories
A bunch of create_proc_dir_entry() calls creating directories had crept
in since the last sweep; converted to proc_mkdir().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-29 08:46:26 -07:00
David S. Miller
01d40f28b1 [NET]: Fix reversed logic in eth_type_trans().
I got the second compare_eth_addr() test reversed, oops.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-28 22:37:53 -07:00
Martin Whitaker
735631a919 [ATM]: fix bug in atm address list handling
From: Martin Whitaker <atm@martin-whitaker.co.uk>
Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
2005-09-28 16:35:22 -07:00
Chas Williams
9301e320e9 [ATM]: track and close listen sockets when sigd exits
Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
2005-09-28 16:35:01 -07:00
Roman Kagan
e2c4b72158 [ATM]: net/atm/ioctl.c: autoload pppoatm and br2684
Signed-off-by: Roman Kagan <rkagan@mail.ru>
Signed-off-by: Chas Williams <chas@cmf.nrl.navy.mil>
2005-09-28 16:34:24 -07:00
David S. Miller
6b251858d3 [TCP]: Fix init_cwnd calculations in tcp_select_initial_window()
Match it up to what RFC2414 really specifies.
Noticed by Rick Jones.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-28 16:31:48 -07:00
Oliver Dawid
64233bffbb [APPLETALK]: Fix broadcast bug.
From: Oliver Dawid <oliver@helios.de>

We found a bug in net/appletalk/ddp.c concerning broadcast packets. In
kernel 2.4 it was working fine. The bug first occurred 4 years ago when
switching to the new SNAP layer handling. This bug can be split up into
a sending (1) and a reception (2) problem:

Sending (1)
In kernel 2.4 broadcast packets were sent to a matching ethernet device
and atalk_rcv() was called to receive them as "loopback" (so loopback
packets were short-circuited and handled in the DDP layer).

When switching to the new SNAP structure, this shortcut was removed and
the loopback packet was sent to the SNAP layer. The author forgot to
replace the remote device pointer with the loopback device pointer
before sending the packet to the SNAP layer (by calling
ddp_dl->request()), therefore the packet was not sent back by the
underlying layers to DDP's atalk_rcv().

Reception (2)
In atalk_rcv() a packet received by this loopback mechanism now
contains the (right) loopback device pointer (in kernel 2.4 it was the
(wrong) remote ethernet device pointer) and therefore no matching
socket will be found to deliver this packet to. Because a broadcast
packet should be sent to the first matching socket (as is done in many
other protocols (?)), we removed the network comparison in the
broadcast case.

Below you will find a patch to correct this bug. It is diffed against
kernel 2.6.14-rc1.

Signed-off-by: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27 16:11:29 -07:00
David S. Miller
ba645c1602 [NET]: Slightly optimize ethernet address comparison.
We know the thing is at least 2-byte aligned, so take
advantage of that instead of invoking memcmp(), which
results in truly horrifically inefficient code because
it cannot assume anything about alignment.
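
An illustrative sketch of the technique (not necessarily the exact helper this commit adds):

/* Ethernet addresses are 6 bytes and at least 2-byte aligned, so compare
 * them as three 16-bit words; a non-zero result means "different".
 */
static inline unsigned int compare_eth_addr(const unsigned char *a,
					    const unsigned char *b)
{
	const u16 *pa = (const u16 *)a;
	const u16 *pb = (const u16 *)b;

	return (pa[0] ^ pb[0]) | (pa[1] ^ pb[1]) | (pa[2] ^ pb[2]);
}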

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27 16:03:05 -07:00
Alexey Dobriyan
520d1b830a [ROSE]: fix typo (regeistration)
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27 15:45:15 -07:00
Alexey Dobriyan
a83cd2cc90 [ROSE]: check rose_ndevs earlier
* Don't bother with proto registering if rose_ndevs is bad.
* Make escape structure more coherent.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27 15:44:36 -07:00
Alexey Dobriyan
70ff3b66d7 [ROSE]: return sane -E* from rose_proto_init()
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27 15:43:46 -07:00
Alexey Dobriyan
c3c4ed652e [ROSE]: do proto_unregister() on exit paths
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27 15:42:58 -07:00
Frank Filz
a79af59efd [NET]: Fix module reference counts for loadable protocol modules
I have been experimenting with loadable protocol modules, and ran into
several issues with module reference counting.

The first issue was that __module_get failed at the BUG_ON check at
the top of the routine (checking that my module reference count was
not zero) when I created the first socket. When sk_alloc() is called,
my module reference count was still 0. When I looked at why sctp
didn't have this problem, I discovered that sctp creates a control
socket during module init (when the module ref count is not 0), which
keeps the reference count non-zero. This section has been updated to
address the point Stephen raised about checking the return value of
try_module_get().

The next problem arose when my socket init routine returned an error.
This resulted in my module reference count being decremented below 0.
My socket ops->release routine was also being called. The issue here
is that sock_release() calls the ops->release routine and decrements
the ref count if sock->ops is not NULL. Since the socket probably
didn't get correctly initialized, this should not be done, so we will
set sock->ops to NULL because we will not call try_module_get().
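
An illustrative sketch of the resulting create-path pattern (variable and label names here are illustrative, not the patch's):

	/* Take the protocol module reference before attaching ops; if it
	 * fails, leave sock->ops NULL so that sock_release() neither calls
	 * ops->release() nor drops a reference that was never taken.
	 */
	if (!try_module_get(ops->owner))
		goto out_fail;		/* sock->ops remains NULL */
	sock->ops = ops;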

While searching for another bug, I also noticed that sys_accept() has
a possibility of doing a module_put() when it did not do an
__module_get(), so I re-ordered the call to security_socket_accept().

Signed-off-by: Frank Filz <ffilzlnx@us.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27 15:23:38 -07:00
Eric Dumazet
2d7ceece08 [NET]: Prefetch dev->qdisc_lock in dev_queue_xmit()
We know the lock is going to be taken.

Signed-off-by: Eric Dumazet <dada1@cosmosbay.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27 15:22:58 -07:00
Daniel Phillips
bc8dfcb939 [NET]: Use non-recursive algorithm in skb_copy_datagram_iovec()
Use iteration instead of recursion.  Fraglists within fraglists
should never occur, so we BUG check this.

Signed-off-by: Daniel Phillips <phillips@istop.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27 15:22:35 -07:00
David S. Miller
667347f1ca [NEIGH]: Add debugging check when adding timers.
If we double-add a neighbour entry timer, which should be
impossible but has been reported, dump the current state of
the entry so that we can debug this.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-27 12:07:44 -07:00
David S. Miller
56e9b26324 Merge master.kernel.org:/pub/scm/linux/kernel/git/acme/llc-2.6 2005-09-26 15:29:31 -07:00
Harald Welte
188bab3ae0 [NETFILTER]: Fix invalid module autoloading by splitting iptable_nat
When you've enabled conntrack and NAT as a module (standard case in all
distributions), and you've also enabled the new conntrack netlink
interface, loading ip_conntrack_netlink.ko will auto-load iptable_nat.ko.
This causes a huge performance penalty, since for every packet you iterate
the nat code, even if you don't want it.

This patch splits iptable_nat.ko into the NAT core (ip_nat.ko) and the
iptables frontend (iptable_nat.ko).  Therefore, ip_conntrack_netlink.ko will
only pull in ip_nat.ko, but not the frontend.  ip_nat.ko will "only" allocate
some resources, but not affect runtime performance.

This separation is also a nice step in anticipation of new packet filters
(nf-hipac, ipset, pkttables) being able to use the NAT core.

Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26 15:25:11 -07:00
David S. Miller
b85daee0e4 [AF_PACKET]: Remove bogus checks added to packet_sendmsg().
These broke existing apps, and the checks are superfluous
as the values being verified aren't even used.

Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26 15:23:58 -07:00
Herbert Xu
c62dba9011 [IPV6]: Fix [Bug 5306] Oops on IPv6 route lookup
> Steps to reproduce:
> 1. Boot Linux, do NOT setup any IPv6 routes
> 2. ip route get 2001::1 (or any unroutable address)

Well caught.  We never set rt6i_idev on ip6_null_entry.
This patch should make the problem go away.
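
An illustrative sketch of the kind of fix implied (not the literal patch; the initialization site is omitted):

	/* Give the null route a valid inet6 device so lookups that fall
	 * back to ip6_null_entry do not dereference a NULL rt6i_idev.
	 */
	ip6_null_entry.u.dst.dev = &loopback_dev;
	ip6_null_entry.rt6i_idev = in6_dev_get(&loopback_dev);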

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26 15:10:16 -07:00
Alex Williamson
b9d717a7b4 [NET]: Make sure ctl buffer is aligned properly in sys_sendmsg().
It's on the stack and declared as "unsigned char[]", but pointers
and similar can be stored in it, thus we need to give it an explicit
alignment attribute.
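
An illustrative sketch of the change (attribute placement illustrative, not the literal diff):

	unsigned char ctl[sizeof(struct cmsghdr) + 20]
		__attribute__ ((aligned(sizeof(__kernel_size_t))));
	/* The alignment attribute makes any pointers copied into the
	 * control buffer properly aligned as well. */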

Signed-off-by: Alex Williamson <alex.williamson@hp.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-26 14:28:02 -07:00
Harald Welte
8ddec7460d [NETFILTER] ip_conntrack: Update event cache when status changes
The GRE, SCTP and TCP protocol helpers did not call
ip_conntrack_event_cache() when updating ct->status.  This patch adds
the respective calls.

Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-24 16:56:08 -07:00
Alexey Dobriyan
8689c07e47 [IRDA]: *irttp cleanup
* Remove useless comment.
* Remove useless assertions.
* Remove useless comparison.

Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-24 16:55:17 -07:00
Alexey Dobriyan
15166fadb0 [IRDA]: Fix memory leak in irttp_init()
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-24 16:54:50 -07:00
Amos Waterland
45fc3b11f1 [NET]: Protect neigh_stat_seq_fops by CONFIG_PROC_FS
From: Amos Waterland <apw@us.ibm.com>

If CONFIG_PROC_FS is not selected, the compiler emits this warning:

 net/core/neighbour.c:64: warning: `neigh_stat_seq_fops' defined but not used

This is correct, because neigh_stat_seq_fops is in fact only
initialized and used by code that is protected by CONFIG_PROC_FS.  So
this patch fixes that up.
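
The shape of the fix is simply to confine the definition to procfs builds, e.g.:

#ifdef CONFIG_PROC_FS
static struct file_operations neigh_stat_seq_fops = {
	/* ... existing seq_file operations ... */
};
#endif /* CONFIG_PROC_FS */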

Signed-off-by: Amos Waterland <apw@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-24 16:53:16 -07:00
Harald Welte
d67b24c40f [NETFILTER]: Fix ip[6]t_NFQUEUE Kconfig dependency
We have to introduce a separate Kconfig menu entry for the NFQUEUE targets.
They cannot "just" depend on nfnetlink_queue, since nfnetlink_queue could
be linked into the kernel, whereas iptables can be a module.

Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-24 16:52:03 -07:00
Sridhar Samudrala
eb0e007687 [SCTP]: Fix SCTP_SHUTDOWN notifications.
Fix to allow SCTP_SHUTDOWN notifications to be received on 1-1 style
SCTP SOCK_STREAM sockets.

Add SCTP_SHUTDOWN notification to the receive queue before updating
the state of the association.

Signed-off-by: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 23:48:38 -07:00
Harald Welte
1dfbab5949 [NETFILTER] Fix conntrack event cache deadlock/oops
This patch fixes a number of bugs.  It cannot reasonably be split up
into multiple fixes, since all bugs interact with each other and affect
the same function:

Bug #1:
The event cache code cannot be called while a lock is held.  Therefore, the
call to ip_conntrack_event_cache() within ip_ct_refresh_acct() needs to be
moved outside of the locked section.  This fixes a number of 2.6.14-rcX
oops and deadlock reports.

Bug #2:
We used to call ct_add_counters() for unconfirmed connections without
holding a lock.  Since the add operations are not atomic, we could race
with another CPU.

Bug #3:
ip_ct_refresh_acct() lost REFRESH events in some cases where a refresh
(and the corresponding event) is desired, but no accounting shall be
performed.  Both events and accounting implicitly depended on the skb
parameter being non-NULL.  We now re-introduce a non-accounting
"ip_ct_refresh()" variant to explicitly state the desired behaviour.

Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 23:46:57 -07:00
Alexey Dobriyan
67497205b1 [NETFILTER] Fix sparse endian warnings in pptp helper
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 23:45:24 -07:00
Harald Welte
0ae5d253ad [NETFILTER] fix DEBUG statement in PPTP helper
As noted by Alexey Dobriyan, the DEBUGP statement prints the wrong
callID.

Signed-off-by: Harald Welte <laforge@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 23:44:58 -07:00
Vlad Drukker
2a7bc3c94c [BRIDGE]: TSO fix in br_dev_queue_push_xmit
Signed-off-by: Vlad Drukker <vlad@storewiz.com>
Acked-by: Stephen Hemminger <shemminger@osdl.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 23:35:34 -07:00
Herbert Xu
83ca28befc [TCP]: Adjust Reno SACK estimate in tcp_fragment
Since the introduction of TSO pcount a year ago, it has been possible
for tcp_fragment() to cause packets_out to decrease.  Prior to that,
tcp_retrans_try_collapse() was the only way for that to happen on the
retransmission path.

When this happens with Reno, it is possible for sacked_out to become
invalid because it is only an estimate and not tied to any particular
packet on the retransmission queue.

Therefore we need to adjust sacked_out as well as left_out in the Reno
case.  The following patch does exactly that.
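
An illustrative sketch of the adjustment (variable and helper names are illustrative, based on 2.6.14-era TCP):

	/* 'diff' stands for the drop in packet count caused by the split.
	 * On a Reno (SACK-less) connection sacked_out is only an estimate,
	 * so shrink it along with left_out to keep the bookkeeping sane.
	 */
	if (diff > 0 && !tp->rx_opt.sack_ok && tp->sacked_out) {
		if (tp->sacked_out >= diff)
			tp->sacked_out -= diff;
		else
			tp->sacked_out = 0;
		tcp_sync_left_out(tp);
	}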

This bug is pretty difficult to trigger in practice though since you
need a SACKless peer with a retransmission that occurs just as the
cached MTU value expires.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
2005-09-22 23:32:56 -07:00
Arnaldo Carvalho de Melo
8420e1b541 [LLC]: fix llc_ui_recvmsg, making it behave like tcp_recvmsg
In fact it is an exact copy of the parts that make sense to LLC :-)

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
2005-09-22 08:29:08 -03:00
Arnaldo Carvalho de Melo
d389424e00 [LLC]: Fix the accept path
Borrowing the structure of TCP/IP for this. On receipt of new connections I
was bh_lock_sock()ing the _new_ sock, not the listening one, duh; now it
survives the ssh connection storm I've been using to test this specific bug.

Also fixes send side skb sock accounting.

Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
2005-09-22 07:57:21 -03:00
Arnaldo Carvalho de Melo
2928c19e10 [LLC]: Fix sparse warnings
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
2005-09-22 05:14:33 -03:00
Jochen Friedrich
0519d8fbab [TR]: Set correct frame type for SNAP packets
Signed-off-by: Jochen Friedrich <jochen@scram.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
2005-09-22 04:51:56 -03:00
Jochen Friedrich
096f0eb1df [LLC]: Fix llc_fixup_skb() bug
llc_fixup_skb() had a bug dropping 3-byte packets (like UA frames). Token ring
doesn't pad these frames.

Signed-off-by: Jochen Friedrich <jochen@scram.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
2005-09-22 04:48:46 -03:00
Jochen Friedrich
5564af21ae [LLC]: Fix for Bugzilla ticket #5157
Signed-off-by: Jochen Friedrich <jochen@scram.de>
Signed-off-by: Arnaldo Carvalho de Melo <acme@mandriva.com>
2005-09-22 04:46:44 -03:00