- 06 Jan, 2009 7 commits
-
-
Mark McLoughlin authored
Replace s390_root_dev_register() with root_device_register() etc. [Includes fix from Cornelia Huck] Signed-off-by:
Mark McLoughlin <markmc@redhat.com> Cc: Cornelia Huck <cornelia.huck@de.ibm.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@suse.de>
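For illustration, a minimal sketch of the substitution (the "netiucv" device name and the error handling shown are examples, not the exact patch):

#include <linux/device.h>
#include <linux/err.h>
#include <linux/init.h>

static struct device *netiucv_root;

static int __init netiucv_root_init(void)
{
        /* was: netiucv_root = s390_root_dev_register("netiucv"); */
        netiucv_root = root_device_register("netiucv");
        return IS_ERR(netiucv_root) ? PTR_ERR(netiucv_root) : 0;
}

static void __exit netiucv_root_exit(void)
{
        /* was: s390_root_dev_unregister(netiucv_root); */
        root_device_unregister(netiucv_root);
}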
-
Jarek Poplawski authored
New nodes are inserted in u32_change() under rtnl_lock() with wmb(), but without the tcf_tree_lock() used in other classifiers (e.g. cls_fw). This isn't enough without an rmb() on the read side, but since adding such barriers wouldn't give any savings, the lock is added instead. Reported-by:
m0sia <m0sia@plotinka.ru> Signed-off-by:
Jarek Poplawski <jarkao2@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
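A minimal sketch of the locking change, assuming the internal tc_u_knode type from cls_u32.c and a simplified insert path:

#include <net/sch_generic.h>

/* Publish a fully initialised node under tcf_tree_lock() so the lockless
 * classify path never observes a half-built entry. */
static void u32_link_node(struct tcf_proto *tp, struct tc_u_knode **ins,
                          struct tc_u_knode *n)
{
        tcf_tree_lock(tp);
        n->next = *ins;
        *ins = n;
        tcf_tree_unlock(tp);
}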
-
Heiko Carstens authored
If the iucv module is compiled in/loaded but no user is registered, cpu hot remove doesn't work. The reason is that the iucv cpu hotplug notifier on CPU_DOWN_PREPARE checks whether the iucv_buffer_cpumask would be empty after the corresponding bit is cleared. However, the bit was never set since iucv wasn't enabled, so all cpu hot unplug operations fail in this scenario. To fix this, use iucv_path_table as an indicator of whether iucv is enabled or not. Signed-off-by:
Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by:
Ursula Braun <ursula.braun@de.ibm.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
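A hedged sketch of the notifier check (structure simplified; iucv_path_table refers to the module-private table that is only allocated once IUCV is enabled):

#include <linux/cpu.h>
#include <linux/notifier.h>

static int iucv_cpu_notify(struct notifier_block *self,
                           unsigned long action, void *hcpu)
{
        switch (action) {
        case CPU_DOWN_PREPARE:
                if (!iucv_path_table)   /* IUCV never enabled: let the cpu go */
                        break;
                /* ... existing "would iucv_buffer_cpumask become empty?" check ... */
                break;
        }
        return NOTIFY_OK;
}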
-
Hendrik Brueckner authored
Free iucv path after iucv_path_sever() calls in iucv_callback_connreq() (path_pending() iucv callback). If iucv_path_accept() fails, free path and free/kill newly created socket. Signed-off-by:
Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by:
Ursula Braun <ursula.braun@de.ibm.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Ursula Braun authored
For certain types of AF_IUCV socket connect failures, IUCV connections are left over. Add some cleanup statements to avoid leftover IUCV connections. Signed-off-by:
Ursula Braun <ursula.braun@de.ibm.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Hendrik Brueckner authored
If the iucv_path_connect() call fails, return an error code that corresponds to the iucv_path_connect() failure condition instead of returning -ECONNREFUSED for any failure. This helps to improve error handling for user space applications (e.g. inform the user that the z/VM guest is not authorized to connect to other guest virtual machines). The error return codes are based on those described in connect(2). Signed-off-by:
Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Signed-off-by:
Ursula Braun <ursula.braun@de.ibm.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
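A sketch of the mapping idea; the numeric IUCV return codes and the errno values chosen below are illustrative, not the exact table from the patch:

#include <linux/errno.h>

static int iucv_connect_err_to_errno(int iucv_err)
{
        switch (iucv_err) {
        case 0:
                return 0;
        case 11:                        /* e.g. resource shortage on the path */
                return -EAGAIN;
        case 13:                        /* e.g. guest not authorized to connect */
                return -EACCES;
        default:                        /* previous catch-all behaviour */
                return -ECONNREFUSED;
        }
}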
-
David S. Miller authored
This reverts commit 22604c86. We can't fix this issue in this way, because we can now try to take the dev_base_lock rwlock as a writer in software interrupt context, and that is not allowed without major surgery elsewhere. This initial link state problem needs to be solved in some other way. Signed-off-by:
David S. Miller <davem@davemloft.net>
-
- 05 Jan, 2009 14 commits
-
-
Al Viro authored
... and don't bother in callers. Don't bother with zeroing i_blocks, while we are at it - it's already been zeroed. i_mode is not worth the effort; it has no common default value. Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
David S. Miller authored
In splice TCP receive, the SPLICE_F_NONBLOCK flag is used to compute the "timeo" value, so checking it again inside the main receive loop to trigger -EAGAIN processing is entirely unnecessary. Noticed by Jarek P. and Lennert Buytenhek. Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Lennert Buytenhek authored
Currently, setting SPLICE_F_NONBLOCK on splice from a TCP socket results in masking of EOF (RDHUP) and error conditions on the socket by an -EAGAIN return. Move the NONBLOCK check in tcp_splice_read() to be after the EOF and error checks to fix this. Signed-off-by:
Lennert Buytenhek <buytenh@marvell.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
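A structurally simplified sketch of the reordering; copy_some_data() is a hypothetical stand-in for the splice copy step, and the error reporting is condensed:

#include <net/sock.h>
#include <net/tcp.h>

static int splice_read_loop(struct sock *sk, long timeo, int copied)
{
        for (;;) {
                int ret = copy_some_data(sk);   /* hypothetical helper */

                if (ret > 0) {
                        copied += ret;
                        continue;
                }
                /* EOF and error conditions are now checked first ... */
                if (sk->sk_err || sk->sk_state == TCP_CLOSE ||
                    (sk->sk_shutdown & RCV_SHUTDOWN))
                        break;
                /* ... and only then the non-blocking bail-out, so RDHUP and
                 * socket errors are no longer masked by -EAGAIN */
                if (!timeo) {
                        if (!copied)
                                copied = -EAGAIN;
                        break;
                }
        }
        return copied;
}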
-
Gerrit Renker authored
This patch integrates the TFRC library, which is a dependency of CCID-3 (and CCID-4), with the new use of CCIDs in the DCCP module. Signed-off-by:
Gerrit Renker <gerrit@erg.abdn.ac.uk> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Gerrit Renker authored
This patch cleans up after integrating the CCID modules and, in addition, * moves the if/else cases from ccid_delete() into ccid_hc_{tx,rx}_delete(); * removes the 'gfp' argument to ccid_new() - since it is always gfp_any(). Signed-off-by:
Gerrit Renker <gerrit@erg.abdn.ac.uk> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Gerrit Renker authored
Based on Arnaldo's earlier patch, this patch integrates the standardised CCID congestion control plugins (CCID-2 and CCID-3) of DCCP with dccp.ko: * enables a faster connection path by eliminating the need to always go through the CCID registration lock; * updates the implementation to use only a single array whose size equals the number of configured CCIDs instead of the maximum (256); * since the CCIDs are now fixed array elements, synchronization is no longer needed, simplifying use and implementation. CCID-2 is suggested as minimum for a basic DCCP implementation (RFC 4340, 10); CCID-3 is a standards-track CCID supported by RFC 4342 and RFC 5348. Signed-off-by:
Gerrit Renker <gerrit@erg.abdn.ac.uk> Signed-off-by:
David S. Miller <davem@davemloft.net>
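A hedged sketch of the array-based lookup; the ccid2_ops/ccid3_ops symbols and the ccid_id field follow the general shape of the CCID code but should be read as illustrative:

#include <linux/kernel.h>

static struct ccid_operations *ccids[] = {
        &ccid2_ops,                     /* TCP-like CCID, RFC 4341 */
#ifdef CONFIG_IP_DCCP_CCID3
        &ccid3_ops,                     /* TFRC-based CCID, RFC 4342 */
#endif
};

/* With the builtin CCIDs held in a fixed array there is nothing to register
 * or unregister at run time, so this lookup needs no lock. */
static struct ccid_operations *ccid_by_number(const u8 id)
{
        int i;

        for (i = 0; i < ARRAY_SIZE(ccids); i++)
                if (ccids[i]->ccid_id == id)
                        return ccids[i];
        return NULL;
}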
-
Oliver Hartkopp authored
Since commit ca109491 ("hrtimer: removing all ur callback modes") the hrtimer callbacks are processed only in hardirq context. This patch moves some functionality into tasklets to run in softirq context. Additionally some duplicated code was removed in bcm_rx_thr_flush() and an avoidable memcpy was removed from bcm_rx_handler(). Signed-off-by:
Oliver Hartkopp <oliver@hartkopp.net> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Roel Kluin authored
Use kfree_skb instead of kfree for struct sk_buff pointers. Signed-off-by:
Roel Kluin <roel.kluin@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
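The general shape of the fix, as a sketch rather than a specific call site:

#include <linux/skbuff.h>

static void drop_frame(struct sk_buff *skb)
{
        /* was: kfree(skb); -- frees only the header allocation and skips the
         * reference counting and data release that kfree_skb() performs */
        kfree_skb(skb);
}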
-
Ilpo Järvinen authored
Signed-off-by:
Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Reported-by:
Eric Sesterhenn <snakebyte@gmx.de> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Michael Marineau authored
From: Michael Marineau <mike@marineau.org> Commit b4730016 "Do not fire linkwatch events until the device is registered." was made as a workaround for drivers that call netif_carrier_off before registering the device. Unfortunately this causes these drivers to incorrectly report their link status as IF_OPER_UNKNOWN, which can falsely set the IFF_RUNNING flag when the interface is first brought up. This issue was previously pointed out[1] but was dismissed saying that IFF_RUNNING is not related to the link status. From my digging, IFF_RUNNING, as reported to userspace, is based on the link state. It is set based on __LINK_STATE_START and IF_OPER_UP or IF_OPER_UNKNOWN. See [2], [3], and [4]. (Whether or not the kernel has IFF_RUNNING set in flags is not reported to user space, so it may well be independent of the link; I don't know if and when it may get set.) The end result differs slightly depending on the driver. The two I tested were e1000e and b44. With e1000e, if the system is booted without a network cable attached, the interface will falsely report RUNNING when it is brought up, causing NetworkManager to attempt to start it and eventually time out. With b44, when the system is booted with a network cable attached and brought up with dhcpcd, it will time out the first time. The attached patch will still set the operstate variable correctly to IF_OPER_UP/DOWN/etc when linkwatch_fire_event is called, but then return rather than skipping the linkwatch_fire_event call entirely as the previous fix did. (Sorry it isn't inline, I don't have a patch-friendly email client at the moment.) Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Simon Holm Thøgersen authored
commit 4dec9b80 ("rfkill: strip pointless notifier chain") removed the only user of rfkill_led_trigger() that was not guarded by #ifdef CONFIG_RFKILL_LEDS. Therefore, move rfkill_led_trigger() completely inside #ifdef CONFIG_RFKILL_LEDS and avoid the compile time warning: net/rfkill/rfkill.c:59: warning: 'rfkill_led_trigger' defined but not used Signed-off-by:
Simon Holm Thøgersen <odie@cs.aau.dk> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
This patch allows GRO to merge page frags (skb_shinfo(skb)->frags) in one skb, rather than using the less efficient frag_list. It also adds a new interface, napi_gro_frags to allow drivers to inject page frags directly into the stack without allocating an skb. This is intended to be the GRO equivalent for LRO's lro_receive_frags interface. The existing GSO interface can already handle page frags with or without an appended frag_list so nothing needs to be changed there. The merging itself is rather simple. We store any new frag entries after the last existing entry, without checking whether the first new entry can be merged with the last existing entry. Making this check would actually be easy but since no existing driver can produce contiguous frags anyway it would just be mental masturbation. If the total number of entries would exceed the capacity of a single skb, we simply resort to using frag_list as we do now. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
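A simplified sketch of the merge step (page-reference ownership and length bookkeeping are omitted; the frag_list fallback is left to the caller):

#include <linux/skbuff.h>
#include <linux/string.h>

static bool gro_merge_frags(struct sk_buff *held, struct sk_buff *skb)
{
        struct skb_shared_info *to = skb_shinfo(held);
        struct skb_shared_info *from = skb_shinfo(skb);

        if (to->nr_frags + from->nr_frags > MAX_SKB_FRAGS)
                return false;           /* capacity exceeded: caller uses frag_list */

        /* Store the new entries after the last existing one; no attempt is
         * made to coalesce the boundary pair, as described above. */
        memcpy(to->frags + to->nr_frags, from->frags,
               from->nr_frags * sizeof(skb_frag_t));
        to->nr_frags += from->nr_frags;
        from->nr_frags = 0;             /* pages are now owned by 'held' */
        return true;
}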
-
Herbert Xu authored
In order to allow GRO packets without frag_list at all, we need to store the MSS in the packet itself. The obvious place is gso_size. The only thing to watch out for is if the packet ends up not being GRO then we need to clear gso_size before pushing the packet into the stack. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
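A hedged sketch of the flush-side half of that bookkeeping; 'was_merged' is a hypothetical stand-in for however the real code decides that more than one segment ended up in this skb:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void gro_flush_one(struct sk_buff *skb, bool was_merged)
{
        if (!was_merged)
                skb_shinfo(skb)->gso_size = 0;  /* plain skb: the stack expects 0 */
        /* otherwise gso_size still carries the MSS recorded while merging */
        netif_receive_skb(skb);
}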
-
David S. Miller authored
Thanks to excellent diagnosis by Eduard Guzovsky. The core problem is that on a network with lots of active multicast traffic, the neighbour cache can fill up. If we try to allocate a new route and thus neighbour cache entry, the bog-standard GC attempt the neighbour layer does is ineffective, because route entries hold a reference to the existing neighbour entries and GC can only liberate entries with no references. IPv4 already has a way to handle this, by doing a route cache GC in such situations (when neigh attach returns -ENOBUFS). So simply mimic this on the IPv6 side. Tested-by:
Eduard Guzovsky <eguzovsky@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
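A hedged sketch of the retry logic; attach_neighbour() is a hypothetical stand-in for the neighbour-binding step, and using fib6_run_gc() as the forced-GC trigger is an assumption about how the IPv4-style fallback maps onto IPv6:

#include <net/ip6_fib.h>
#include <net/ip6_route.h>

static int bind_neigh_or_gc(struct net *net, struct rt6_info *rt)
{
        int err = attach_neighbour(rt);         /* hypothetical helper */

        if (err == -ENOBUFS) {
                /* The neighbour table is full and plain neighbour GC cannot
                 * help, because live routes pin their neighbour entries.
                 * Collect unused routes first, then retry once. */
                fib6_run_gc(0, net);            /* assumed GC entry point */
                err = attach_neighbour(rt);
        }
        return err;
}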
-
- 04 Jan, 2009 2 commits
-
-
Al Viro authored
* no allocations * return void Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
Al Viro authored
* don't bother with allocations * now that it can't fail, make it return void Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
- 02 Jan, 2009 1 commit
-
-
Alan Cox authored
Roel Kluin noted that line is unsigned, so one test is unnecessary. Also add a warning for another flaw I noticed while making this change. Signed-off-by:
Alan Cox <alan@redhat.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org>
-
- 31 Dec, 2008 2 commits
-
-
Kentaro Takeda authored
Add new LSM hooks for path-based checks. Call them on directory-modifying operations at the points where we still know the vfsmount involved. Signed-off-by:
Kentaro Takeda <takedakn@nttdata.co.jp> Signed-off-by:
Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Signed-off-by:
Toshiharu Harada <haradats@nttdata.co.jp> Signed-off-by:
Al Viro <viro@zeniv.linux.org.uk>
-
Paul Moore authored
Update the NetLabel kernel API to expose the new features added in kernel releases 2.6.25 and 2.6.28: the static/fallback label functionality and network address based selectors. Signed-off-by:
Paul Moore <paul.moore@hp.com>
-
- 30 Dec, 2008 8 commits
-
-
Herbert Xu authored
When we converted the protocol atomic counters, such as the orphan count and the total socket count, to percpu counters, deadlocks were introduced due to the mismatch in BH status of the spots that used the percpu counter operations. Based on the diagnosis and patch by Peter Zijlstra, this patch fixes these issues by disabling BH where we may be in process context. Reported-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Tested-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Rusty Russell authored
In future all cpumask ops will only be valid (in general) for bit numbers < nr_cpu_ids. So use that instead of NR_CPUS in iterators and other comparisons. This is always safe: no cpu number can be >= nr_cpu_ids, and nr_cpu_ids is initialized to NR_CPUS at boot. Signed-off-by:
Rusty Russell <rusty@rustcorp.com.au> Signed-off-by:
Mike Travis <travis@sgi.com> Acked-by:
Ingo Molnar <mingo@elte.hu> Signed-off-by:
David S. Miller <davem@davemloft.net>
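A small sketch of the pattern; the loop itself is a made-up example:

#include <linux/cpumask.h>

static int count_cpus_in(const struct cpumask *mask)
{
        int cpu, n = 0;

        /* was: for (cpu = 0; cpu < NR_CPUS; cpu++) */
        for (cpu = 0; cpu < nr_cpu_ids; cpu++)
                if (cpumask_test_cpu(cpu, mask))
                        n++;
        return n;
}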
-
Li Zefan authored
cls_cgroup can't be compiled as a module, since it's not supported by cgroup. Signed-off-by:
Li Zefan <lizf@cn.fujitsu.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Li Zefan authored
- It's better to use container_of() instead of casting cgroup_subsys_state * to cgroup_cls_state *. - Add helper function task_cls_state(). - Rename net_cls_state() to cgrp_cls_state(). Signed-off-by:
Li Zefan <lizf@cn.fujitsu.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Li Zefan authored
When removing a cgroup, an oops was triggered immediately. The cause is a wrong kfree() in cgrp_destroy(). Signed-off-by:
Li Zefan <lizf@cn.fujitsu.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Simon Horman authored
Acked-by:
Graeme Fowler <graeme@graemef.net> Signed-off-by:
Simon Horman <horms@verge.net.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Eric W. Biederman authored
During network namespace teardown we either move or delete all of the network devices associated with a network namespace. In the case of veth devices, deleting one will also delete its pair device. If both devices are in the same network namespace then for_each_netdev_safe is insufficient, as next may point to the second veth device we have deleted. To avoid problems I do what we do in __rtnl_kill_links and restart the scan of the device list after we have deleted a device. Currently dev_change_netnamespace does not appear to suffer from this problem, but wireless devices are also paired and likely should be moved between network namespaces together. So I have erred on the side of caution and restart the scan of the network devices in that case as well. Signed-off-by:
Eric W. Biederman <ebiederm@aristanetworks.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
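A simplified sketch of restarting the walk after each unregistration (error handling and the loopback/default-device special cases are omitted):

#include <linux/netdevice.h>
#include <net/rtnetlink.h>

static void cleanup_net_devices(struct net *net)
{
        struct net_device *dev;

restart:
        for_each_netdev(net, dev) {
                if (dev->rtnl_link_ops && dev->rtnl_link_ops->dellink) {
                        dev->rtnl_link_ops->dellink(dev);
                        /* dellink may also have removed the peer device, so a
                         * saved "next" pointer could be stale; rescan from the
                         * head of the list instead. */
                        goto restart;
                }
        }
}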
-
Rusty Russell authored
No reason to roll our own here. Signed-off-by:
Rusty Russell <rusty@rustcorp.com.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
- 26 Dec, 2008 6 commits
-
-
Herbert Xu authored
The initial skb may have been freed after napi_gro_complete in napi_gro_receive if it was merged into an existing packet. Thus we cannot check same_flow (which indicates whether it was merged) after calling napi_gro_complete. This patch fixes this by saving the same_flow status before the call to napi_gro_complete. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
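A structurally simplified sketch of the ordering fix; try_to_merge() is a hypothetical stand-in for the GRO match/merge step, while NAPI_GRO_CB and napi_gro_complete exist in the real code:

#include <linux/netdevice.h>

static int gro_receive_sketch(struct sk_buff *skb, struct sk_buff *held)
{
        int same_flow;

        try_to_merge(held, skb);                /* hypothetical: may attach skb to 'held' */

        /* Save the flag first: completing 'held' hands it (and any merged skb)
         * to the stack, after which skb must not be dereferenced. */
        same_flow = NAPI_GRO_CB(skb)->same_flow;

        napi_gro_complete(held);

        if (same_flow)
                return NET_RX_SUCCESS;          /* skb was consumed by the merge */
        return netif_receive_skb(skb);
}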
-
Peter P Waskiewicz Jr authored
The recent GRO patches introduced the NAPI removal of devices in free_netdev. For drivers that can change the number of queues during driver operation, the NAPI infrastructure doesn't allow the freeing and re-addition of NAPI entities without reloading the driver. This change reinitializes the dev_list in each NAPI struct on delete, instead of just deleting it (and assigning the list pointers to POISON). Drivers that wish to remove/re-add NAPI will need to re-initialize the netdev napi_list after removing all NAPI instances, before re-adding NAPI devices again. Signed-off-by:
Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com> Signed-off-by:
Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
This patch removes a useless ret variable from the IPv4 ESP/UDP decapsulation code. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Julia Lawall authored
atif is tested for being NULL twice, with the same effect in each case. I have kept the second test, as it seems to fit well with the comment above it. A simplified version of the semantic patch that makes this change is as follows: (http://www.emn.fr/x-info/coccinelle/ ) // <smpl> @r exists@ local idexpression x; expression E; position p1,p2; @@ if (x@p1 == NULL || ...) { ... when forall return ...; } ... when != \(x=E\|x--\|x++\|--x\|++x\|x-=E\|x+=E\|x|=E\|x&=E\|&x\) ( x@p2 == NULL | x@p2 != NULL ) // another path to the test that is not through p1? @s exists@ local idexpression r.x; position r.p1,r.p2; @@ ... when != x@p1 ( x@p2 == NULL | x@p2 != NULL ) @fix depends on !s@ position r.p1,r.p2; expression x,E; statement S1,S2; @@ ( - if ((x@p2 != NULL) || ...) S1 | - if ((x@p2 == NULL) && ...) S1 | - BUG_ON(x@p2 == NULL); ) // </smpl> Signed-off-by:
Julia Lawall <julia@diku.dk> Signed-off-by:
David S. Miller <davem@davemloft.net>
-
Herbert Xu authored
Our TCP stack does not set the urgent flag if the urgent pointer does not fit in 16 bits, i.e., if it is more than 64K from the sequence number of a packet. This behaviour is different from the BSDs, and clearly contradicts the purpose of urgent mode, which is to send the notification (though not necessarily the associated data) as soon as possible. Our current behaviour may in fact delay the urgent notification indefinitely if the receiver window does not open up. Simply matching BSD however may break legacy applications which incorrectly rely on the out-of-band delivery of urgent data, and conversely the in-band delivery of non-urgent data. Alexey Kuznetsov suggested a safe solution of following BSD only if the urgent pointer itself has not yet been transmitted. This way we guarantee that when the remote end sees the packet with non-urgent data marked as urgent due to wrap-around we would have advanced the urgent pointer beyond, either to the actual urgent data or to an as-yet untransmitted packet. The only potential downside is that applications on the remote end may see multiple SIGURG notifications. However, this would occur anyway with other TCP stacks. More importantly, the outcome of such a duplicate notification is likely to be harmless since the signal itself does not carry any information other than the fact that we're in urgent mode. Signed-off-by:
Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by:
David S. Miller <davem@davemloft.net>
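A self-contained sketch of the rule in plain C; the sequence-number helpers mirror the kernel's before()/after(), and the exact kernel code differs in structure:

#include <stdbool.h>
#include <stdint.h>

static bool seq_before(uint32_t a, uint32_t b) { return (int32_t)(a - b) < 0; }
static bool seq_after(uint32_t a, uint32_t b)  { return seq_before(b, a); }

/* Decide whether a segment starting at 'seq' carries the URG flag, given the
 * urgent pointer snd_up and the highest transmitted sequence snd_nxt. */
static bool urg_flag(uint32_t seq, uint32_t snd_up, uint32_t snd_nxt,
                     uint16_t *urg_ptr)
{
        if (!seq_before(seq, snd_up))
                return false;                   /* urgent point already behind this segment */

        if (seq_before(snd_up, seq + 0x10000)) {
                *urg_ptr = (uint16_t)(snd_up - seq);    /* fits the 16-bit field */
                return true;
        }

        /* BSD-style behaviour, but only while the clamped pointer still lands
         * beyond everything transmitted so far, so the receiver never sees
         * already-sent non-urgent data spuriously marked urgent. */
        if (seq_after(seq + 0xFFFF, snd_nxt)) {
                *urg_ptr = 0xFFFF;
                return true;
        }
        return false;
}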
-
Wei Yongjun authored
The latest IETF socket extensions API draft says: 8.1.21. Set or Get the SCTP Partial Delivery Point: Note also that the call will fail if the user attempts to set this value larger than the socket receive buffer size. This patch adds this validity check for the SCTP_PARTIAL_DELIVERY_POINT socket option. Signed-off-by:
Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by:
Vlad Yasevich <vladislav.yasevich@hp.com> Signed-off-by:
David S. Miller <davem@davemloft.net>
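A hedged sketch of the validation; the real check may account for receive-buffer overhead differently, and pd_point as the updated field is taken from struct sctp_sock:

#include <net/sctp/sctp.h>

static int sctp_set_pd_point_sketch(struct sock *sk, u32 val)
{
        /* Reject a partial delivery point that could never be reached because
         * it exceeds what the receive buffer can hold. */
        if (val > sk->sk_rcvbuf)
                return -EINVAL;

        sctp_sk(sk)->pd_point = val;
        return 0;
}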
-