1. 07 Jan, 2011 9 commits
    • fs: provide rcu-walk aware permission i_ops · b74c79e9
      Nick Piggin authored
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
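
      In rcu-walk mode a ->permission implementation may not block; the new
      flags argument lets each filesystem bail out instead. A minimal sketch
      of the pattern, assuming the IPERM_FLAG_RCU flag from this series
      (myfs_permission is a hypothetical example):

      static int myfs_permission(struct inode *inode, int mask, unsigned int flags)
      {
              /*
               * rcu-walk mode: we may not block or sleep. Returning -ECHILD
               * tells the VFS to drop out of rcu-walk and retry in ref-walk.
               */
              if (flags & IPERM_FLAG_RCU)
                      return -ECHILD;

              /* ... normal, possibly-blocking permission check ... */
              return 0;
      }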
    • fs: rcu-walk aware d_revalidate method · 34286d66
      Nick Piggin authored
      
      Require filesystems to be aware of .d_revalidate being called in rcu-walk
      mode (nd->flags & LOOKUP_RCU). For now, do a simple push-down, returning
      -ECHILD from all implementations.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
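
      The simple push-down from this commit, sketched for a hypothetical
      filesystem (myfs_d_revalidate is illustrative, not from the patch):

      static int myfs_d_revalidate(struct dentry *dentry, struct nameidata *nd)
      {
              /* In rcu-walk mode we cannot block; punt back to ref-walk. */
              if (nd && (nd->flags & LOOKUP_RCU))
                      return -ECHILD;

              /* ... normal, possibly-blocking revalidation ... */
              return 1;
      }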
    • fs: dcache reduce branches in lookup path · fb045adb
      Nick Piggin authored
      
      Reduce some branches and memory accesses in dcache lookup by adding dentry
      flags to indicate common d_ops are set, rather than having to check them.
      This saves a pointer memory access (dentry->d_op) in common path lookup
      situations, and saves another pointer load and branch in cases where we
      have d_op but not the particular operation.
      
      Patched with:
      
      git grep -E '[.>]([[:space:]])*d_op([[:space:]])*=' | xargs sed -e 's/\([^\t ]*\)->d_op = \(.*\);/d_set_d_op(\1, \2);/' -e 's/\([^\t ]*\)\.d_op = \(.*\);/d_set_d_op(\&\1, \2);/' -i
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
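
      The helper computes the flags once, when the ops table is attached, so
      the hot lookup path only tests dentry->d_flags. Roughly what d_set_d_op
      does, simplified from the patch (the real helper also guards against
      replacing d_op on a live dentry):

      void d_set_d_op(struct dentry *dentry, const struct dentry_operations *op)
      {
              dentry->d_op = op;
              if (!op)
                      return;
              /* Record which methods exist so lookup need not load d_op. */
              if (op->d_hash)
                      dentry->d_flags |= DCACHE_OP_HASH;
              if (op->d_compare)
                      dentry->d_flags |= DCACHE_OP_COMPARE;
              if (op->d_revalidate)
                      dentry->d_flags |= DCACHE_OP_REVALIDATE;
              if (op->d_delete)
                      dentry->d_flags |= DCACHE_OP_DELETE;
      }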
    • fs: icache RCU free inodes · fa0d7e3d
      Nick Piggin authored
      
      RCU free the struct inode. This will allow:
      
      - Subsequent store-free path walking patch. The inode must be consulted for
        permissions when walking, so an RCU inode reference is a must.
      - sb_inode_list_lock to be moved inside i_lock because sb list walkers who want
        to take i_lock no longer need to take sb_inode_list_lock to walk the list in
        the first place. This will simplify and optimize locking.
      - Could remove some nested trylock loops in dcache code.
      - Could potentially simplify things a bit in VM land. Do not need to take the
        page lock to follow page->mapping.
      
      The downside of this is the performance cost of using RCU. In a simple
      creat/unlink microbenchmark, performance drops by about 10% due to inability to
      reuse cache-hot slab objects. As iterations increase and RCU freeing starts
      kicking over, this increases to about 20%.
      
      In cases where inode lifetimes are longer (i.e. many inodes may be allocated
      during the average life span of a single inode), a lot of this cache reuse is
      not applicable, so the regression caused by this patch is smaller.
      
      The cache-hot regression could largely be avoided by using SLAB_DESTROY_BY_RCU,
      however this adds some complexity to list walking and store-free path walking,
      so I prefer to implement this at a later date, if it is shown to be a win in
      real situations. I haven't found a regression in any non-micro benchmark, so I
      doubt it will be a problem.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
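
      The freeing side follows the standard call_rcu pattern: the slab object
      is returned to the allocator only after a grace period, so lockless
      walkers holding rcu_read_lock() never see it reused for something else.
      A sketch (the patch actually unions the rcu_head with i_dentry to avoid
      growing struct inode):

      static void i_callback(struct rcu_head *head)
      {
              struct inode *inode = container_of(head, struct inode, i_rcu);
              kmem_cache_free(inode_cachep, inode);
      }

      static void destroy_inode(struct inode *inode)
      {
              /* Defer the actual free until after an RCU grace period. */
              call_rcu(&inode->i_rcu, i_callback);
      }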
    • fs: dcache remove dcache_lock · b5c84bf6
      Nick Piggin authored
      
      dcache_lock no longer protects anything. Remove it.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: Use rename lock and RCU for multi-step operations · 949854d0
      Nick Piggin authored
      
      The remaining use of dcache_lock is to allow atomic, multi-step read-side
      operations over the directory tree by excluding modifications to the tree,
      and to walk in the leaf->root direction in the tree, where we don't have
      a natural d_lock ordering.
      
      This could be accomplished by taking every d_lock, but that would mean a
      huge number of locks and would get very tricky in practice.
      
      Solve this instead by using the rename seqlock for multi-step read-side
      operations, retrying in case of a rename so we don't walk up the wrong
      parent. Concurrent dentry insertions are not serialised against.
      Concurrent deletes are tricky when walking up the directory: our parent
      might have been deleted while locks were dropped, so we also need to
      check for that and retry.
      
      We can also use the rename lock in cases where livelock is a worry (this
      use is introduced in a subsequent patch).
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
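
      The read side of such a multi-step operation is the classic seqlock
      retry loop: sample the sequence, do the walk, and redo it if a rename
      intervened. A hedged sketch of the pattern (walk body elided):

      unsigned seq;

      rcu_read_lock();
      do {
              seq = read_seqbegin(&rename_lock);
              /* ... multi-step read-side walk over the tree ... */
      } while (read_seqretry(&rename_lock, seq)); /* a rename raced: retry */
      rcu_read_unlock();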
    • fs: scale inode alias list · b23fb0a6
      Nick Piggin authored
      
      Add a new lock, dcache_inode_lock, to protect the inode's i_dentry list
      from concurrent modification. d_alias is also protected by d_lock.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
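
      Walking an inode's alias list then looks roughly like this sketch
      (dcache_inode_lock is the lock this commit adds; in later kernels its
      role moved into inode->i_lock):

      struct dentry *alias;

      spin_lock(&dcache_inode_lock);
      list_for_each_entry(alias, &inode->i_dentry, d_alias) {
              /* fields of each alias are still protected by alias->d_lock */
      }
      spin_unlock(&dcache_inode_lock);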
    • fs: dcache scale dentry refcount · b7ab39f6
      Nick Piggin authored
      
      Make d_count non-atomic and protect it with d_lock. This allows us to ensure
      that a zero-refcount dentry remains at zero without dcache_lock. It is also
      fairly natural when we start protecting many other dentry members with d_lock.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
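
      With d_count no longer atomic, every refcount change happens under the
      dentry's own spinlock. A minimal sketch of what taking a reference
      looks like after this change:

      static inline struct dentry *dget(struct dentry *dentry)
      {
              if (dentry) {
                      spin_lock(&dentry->d_lock);
                      dentry->d_count++;      /* plain int, protected by d_lock */
                      spin_unlock(&dentry->d_lock);
              }
              return dentry;
      }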
    • fs: change d_delete semantics · fe15ce44
      Nick Piggin authored
      
      Change d_delete from a dentry deletion notification to dentry caching
      advice, more like ->drop_inode. Require it to be constant and idempotent,
      and not to take d_lock. This is how all existing filesystems use the
      callback anyway.
      
      This makes fine-grained dentry locking of dput and dentry LRU scanning
      much simpler.
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
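
      Under the new semantics ->d_delete is a pure, idempotent predicate:
      when a dentry's refcount drops to zero, returning 1 tells the VFS to
      unhash it rather than cache it. A hypothetical example:

      /* Never cache our dentries; must not take d_lock, must be idempotent. */
      static int myfs_d_delete(const struct dentry *dentry)
      {
              return 1;
      }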
  2. 10 Dec, 2010 1 commit
  3. 08 Dec, 2010 4 commits
  4. 07 Dec, 2010 2 commits
  5. 02 Dec, 2010 1 commit
    • NFS: Fix a memory leak in nfs_readdir · 11de3b11
      Trond Myklebust authored
      
      We need to ensure that the entries in the nfs_cache_array get cleared
      when the page is removed from the page cache. To do so, we use the
      freepage address_space operation.
      
      Change nfs_readdir_clear_array to use kmap_atomic(), so that the
      function can be safely called from all contexts.
      
      Finally, modify the cache_page_release helper to call
      nfs_readdir_clear_array directly, when dealing with an anonymous
      page from 'uncached_readdir'.
      Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
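
      The resulting wiring looks roughly like the sketch below: the freepage
      address_space operation fires when a page leaves the page cache, and
      the clear function uses kmap_atomic so it is safe from that context
      (names follow the commit message; struct layouts for this kernel era
      are approximated):

      static const struct address_space_operations nfs_dir_aops = {
              .freepage = nfs_readdir_clear_array,  /* run on page-cache removal */
      };

      void nfs_readdir_clear_array(struct page *page)
      {
              struct nfs_cache_array *array;
              int i;

              array = kmap_atomic(page, KM_USER0);  /* safe in any context */
              for (i = 0; i < array->size; i++)
                      kfree(array->array[i].string.name);
              array->size = 0;
              kunmap_atomic(array, KM_USER0);
      }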
  6. 01 Dec, 2010 1 commit
  7. 30 Nov, 2010 1 commit
  8. 22 Nov, 2010 9 commits
  9. 17 Nov, 2010 1 commit
  10. 16 Nov, 2010 5 commits
  11. 31 Oct, 2010 2 commits
  12. 29 Oct, 2010 2 commits
  13. 28 Oct, 2010 2 commits