1. 07 Jan, 2011 2 commits
    • fs: scale mntget/mntput · b3e19d92
      Nick Piggin authored
      
      The problem that this patch aims to fix is vfsmount refcounting scalability.
      We need to take a reference on the vfsmount for every successful path lookup,
      and those lookups often go to the same mount point.
      
      The fundamental difficulty is that a "simple" reference count can never be made
      scalable, because any time a reference is dropped, we must check whether that
      was the last reference. To do that requires communication with all other CPUs
      that may have taken a reference count.
      
      We can make refcounts more scalable in a few ways, all of which involve
      keeping distributed counters and checking for the global-zero condition
      less frequently.
      
      - check the global sum once every interval (this will delay zero detection
        for some interval, so it's probably a showstopper for vfsmounts).
      
      - keep a local count and only take the global sum when the local count
        reaches 0 (this is difficult for vfsmounts, because we can't hold
        preemption off for the life of a reference, so a counter would need to be
        per-thread or tied strongly to a particular CPU, which requires more
        locking).
      
      - keep a local difference of increments and decrements, so that summing the
        per-CPU differences gives the true refcount. Then keep a single integer
        "long" refcount for slow and long-lasting references, and only take the
        global sum of the local counters when the long refcount is 0.
      
      This last scheme is what I implemented here. Attached mounts and process root
      and working directory references are "long" references, and everything else is
      a short reference.
      
      This allows scalable vfsmount references during path walking over mounted
      subtrees and unattached (lazy umounted) mounts with processes still running
      in them.
      
      This results in one fewer atomic op in the fastpath: mntget is now just a
      per-CPU inc, rather than an atomic inc; and mntput just requires a spinlock
      and a non-atomic decrement in the common case. However, the code is otherwise
      bigger and heavier, so single-threaded performance is basically a wash.
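      
      To make the scheme concrete, here is a minimal sketch of the long/short
      counting; the struct, field, and function names are hypothetical, not the
      actual fs/namespace.c code, and release() stands in for the real teardown:
      
        struct mnt_count {
                int __percpu *pcp_count;  /* per-CPU +/- difference */
                int longrefs;             /* slow, long-lasting references */
                spinlock_t lock;          /* protects longrefs and the summing */
        };
      
        static void release(struct mnt_count *m);       /* hypothetical teardown */
      
        static void mnt_get_short(struct mnt_count *m)
        {
                this_cpu_inc(*m->pcp_count);    /* plain per-CPU inc, no atomic op */
        }
      
        static void mnt_put_short(struct mnt_count *m)
        {
                int cpu, count = 0;
      
                spin_lock(&m->lock);
                this_cpu_dec(*m->pcp_count);    /* non-atomic decrement */
                if (m->longrefs == 0) {
                        /* no long refs left: only now pay for the global sum */
                        for_each_possible_cpu(cpu)
                                count += *per_cpu_ptr(m->pcp_count, cpu);
                        if (count == 0)
                                release(m);     /* last reference dropped */
                }
                spin_unlock(&m->lock);
        }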
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
    • fs: fs_struct use seqlock · c28cc364
      Nick Piggin authored
      
      Use a seqlock in the fs_struct to enable us to take an atomic copy of the
      complete cwd and root paths. Use this in the RCU lookup path to avoid a
      thread-shared spinlock in RCU lookup operations.
      
      Multi-threaded apps may now perform path lookups with scalability matching
      multi-process apps. Operations such as stat(2) become very scalable for
      multi-threaded workloads.
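      
      For illustration, the reader side this enables might look like the sketch
      below, assuming fs_struct grows a seqlock_t field named "seq" (names are
      assumptions, not the exact patch):
      
        static void get_fs_paths(struct fs_struct *fs,
                                 struct path *root, struct path *pwd)
        {
                unsigned seq;
      
                do {
                        seq = read_seqbegin(&fs->seq);  /* lockless snapshot begins */
                        *root = fs->root;               /* both paths copied from */
                        *pwd = fs->pwd;                 /* one consistent view */
                } while (read_seqretry(&fs->seq, seq)); /* a writer raced: retry */
        }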
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
  2. 18 Aug, 2010 1 commit
    • fs: fs_struct rwlock to spinlock · 2a4419b5
      Nick Piggin authored
      
      struct fs_struct.lock is an rwlock with the read-side used to protect root and
      pwd members while taking references to them. Taking a reference to a path
      typically requires just 2 atomic ops, so the critical section is very small.
      Parallel read-side operations would have cacheline contention on the lock, the
      dentry, and the vfsmount cachelines, so the rwlock is unlikely to ever give a
      real parallelism increase.
      
      Replace it with a spinlock to avoid one or two atomic operations in the
      typical path lookup fastpath.
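      
      The change amounts to the following pattern (a sketch modelled on helpers
      like get_fs_root(); the exact bodies differ):
      
        static void get_fs_root(struct fs_struct *fs, struct path *root)
        {
                spin_lock(&fs->lock);           /* was: read_lock(&fs->lock) */
                *root = fs->root;
                path_get(root);                 /* the ~2 atomic ops: dentry + mnt */
                spin_unlock(&fs->lock);         /* was: read_unlock(&fs->lock) */
        }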
      Signed-off-by: Nick Piggin <npiggin@kernel.dk>
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  3. 11 Aug, 2010 1 commit
  4. 01 Apr, 2009 4 commits
    • Get rid of indirect include of fs_struct.h · 5ad4e53b
      Al Viro authored
      
      Don't pull it into sched.h; very few files actually need it, and those
      can include it directly.  sched.h itself only needs a forward declaration
      of struct fs_struct.
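      
      In other words (illustrative, not the full diff):
      
        /* sched.h: a forward declaration is all that's needed */
        struct fs_struct;
      
        /* the few files that actually use it include it directly */
        #include <linux/fs_struct.h>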
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • New helper - current_umask() · ce3b0f8d
      Al Viro authored
      
      current->fs->umask is what most fs_struct users are doing.
      Put that into a helper function.
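      
      A sketch of the helper (treat the exact placement as illustrative):
      
        int current_umask(void)
        {
                return current->fs->umask;
        }
      
      Call sites like "mode & ~current->fs->umask" then become
      "mode & ~current_umask()".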
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • New locking/refcounting for fs_struct · 498052bb
      Al Viro authored
      
      * all changes to current->fs are done under task_lock and the write_lock of
        the old fs->lock
      * the refcount is not atomic anymore (same protection)
      * its decrements are done when removing a reference from current; at the
        same time we decide whether to free it.
      * put_fs_struct() is gone
      * new field - ->in_exec.  Set by check_unsafe_exec() if we are trying to do
        execve() and only subthreads share fs_struct.  Cleared when finishing exec
        (success and failure alike).  Makes CLONE_FS fail with -EAGAIN if set.
      * check_unsafe_exec() may fail with -EAGAIN if another execve() from subthread
        is in progress.
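      
      Put together, the CLONE_FS side might look like the sketch below (names
      follow the description above; the bodies are illustrative):
      
        static int copy_fs(unsigned long clone_flags, struct task_struct *tsk)
        {
                struct fs_struct *fs = current->fs;
      
                if (clone_flags & CLONE_FS) {
                        write_lock(&fs->lock);
                        if (fs->in_exec) {      /* execve() underway in a subthread */
                                write_unlock(&fs->lock);
                                return -EAGAIN;
                        }
                        fs->users++;            /* plain int, lock-protected */
                        write_unlock(&fs->lock);
                        tsk->fs = fs;
                        return 0;
                }
                tsk->fs = copy_fs_struct(fs);
                return tsk->fs ? 0 : -ENOMEM;
        }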
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    • Take fs_struct handling to new file (fs/fs_struct.c) · 3e93cd67
      Al Viro authored
      
      Pure code move; two new helper functions for nfsd and daemonize
      (unshare_fs_struct() and daemonize_fs_struct() respectively; for now,
      the same code that used to be in the callers).  unshare_fs_struct() is
      exported (for nfsd, as copy_fs_struct()/exit_fs() used to be);
      copy_fs_struct() and exit_fs() don't need exports anymore.
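      
      For reference, the lifted helper is essentially (a sketch of the moved
      caller code, not a verbatim copy):
      
        int unshare_fs_struct(void)
        {
                struct fs_struct *fs = copy_fs_struct(current->fs);
      
                if (!fs)
                        return -ENOMEM;
                exit_fs(current);       /* drop our ref to the shared fs_struct */
                current->fs = fs;       /* install the private copy */
                return 0;
        }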
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>