1. 08 Oct, 2010 9 commits
  2. 07 Oct, 2010 2 commits
  3. 11 Aug, 2010 4 commits
  4. 01 Aug, 2010 2 commits
  5. 30 Mar, 2010 1 commit
    • include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h · 5a0e3ad6
      Tejun Heo authored
      
      percpu.h is included by sched.h and module.h and thus ends up being
      included when building most .c files.  percpu.h includes slab.h which
      in turn includes gfp.h making everything defined by the two files
      universally available and complicating inclusion dependencies.
      
      percpu.h -> slab.h dependency is about to be removed.  Prepare for
      this change by updating users of gfp and slab facilities to include
      those headers directly instead of assuming availability.  As this
      conversion needs to touch a large number of source files, the
      following script is used as the basis of conversion.
      
        http://userweb.kernel.org/~tj/misc/slabh-sweep.py

      The script does the following (a rough sketch of the idea follows
      the list).
      
      * Scan files for gfp and slab usages and update includes such that
        only the necessary includes are there, i.e. if only gfp is used,
        gfp.h; if slab is used, slab.h.
      
      * When the script inserts a new include, it looks at the include
        blocks and tries to place the new include so that its order
        conforms to its surroundings.  It is put in the include block
        which contains core kernel includes, in the same order the rest
        are ordered in - alphabetical, Christmas tree, rev-Xmas-tree - or
        at the end if there doesn't seem to be any matching order.
      
      * If the script can't find a place to put a new include (mostly
        because the file doesn't have a fitting include block), it prints out
        an error message indicating which .h file needs to be added to the
        file.
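
      A rough sketch of the idea (in shell rather than the actual Python
      script above; the file name and regexes are only illustrative):

      	f=drivers/foo/bar.c                    # hypothetical example file
      	if grep -qE 'k(m|z|c)alloc|kfree|kmem_cache' "$f"; then
      		hdr='linux/slab.h'             # slab users need slab.h
      	elif grep -qE 'gfp_t|GFP_[A-Z]' "$f"; then
      		hdr='linux/gfp.h'              # pure gfp users only need gfp.h
      	else
      		hdr=''
      	fi
      	if [ -n "$hdr" ] && ! grep -q "#include <$hdr>" "$f"; then
      		echo "$f: add #include <$hdr>"     # the real tool also picks the spot
      	fi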
      
      The conversion was done in the following steps.
      
      1. The initial automatic conversion of all .c files updated slightly
         over 4000 files, deleting around 700 includes and adding ~480 gfp.h
         and ~3000 slab.h inclusions.  The script emitted errors for ~400
         files.
      
      2. Each error was manually checked.  Some didn't need the inclusion,
         some needed manual addition while adding it to implementation .h or
         embedding .c file was more appropriate for others.  This step added
         inclusions to around 150 files.
      
      3. The script was run again and the output was compared to the edits
         from #2 to make sure no file was left behind.
      
      4. Several build tests were done and a couple of problems were fixed.
         e.g. lib/decompress_*.c used malloc/free() wrappers around slab
         APIs requiring slab.h to be added manually.
      
      5. The script was run on all .h files but without automatically
         editing them as sprinkling gfp.h and slab.h inclusions around .h
         files could easily lead to inclusion dependency hell.  Most gfp.h
         inclusion directives were ignored as stuff from gfp.h was usually
         widely available and often used in preprocessor macros.  Each
         slab.h inclusion directive was examined and added manually as
         necessary.
      
      6. percpu.h was updated not to include slab.h.
      
      7. Build tests were done on the following configurations and failures
         were fixed (a sketch of one such build follows this list).
         CONFIG_GCOV_KERNEL was turned off for all tests (as my
         distributed build env didn't work with gcov compiles) and a few
         more options had to be turned off depending on archs to make things
         build (like ipr on powerpc/64 which failed due to missing writeq).
      
         * x86 and x86_64 UP and SMP allmodconfig and a custom test config.
         * powerpc and powerpc64 SMP allmodconfig
         * sparc and sparc64 SMP allmodconfig
         * ia64 SMP allmodconfig
         * s390 SMP allmodconfig
         * alpha SMP allmodconfig
         * um on x86_64 SMP allmodconfig
      
      8. percpu.h modifications were reverted so that it could be applied as
         a separate patch and serve as bisection point.
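
      A hedged sketch of what one of these step-7 builds might have looked
      like (the cross compiler prefix and -j value are placeholders for
      whatever the local build environment provides):

      	make ARCH=powerpc CROSS_COMPILE=powerpc64-linux- allmodconfig
      	scripts/config --disable GCOV_KERNEL      # gcov off, as noted above
      	yes '' | make ARCH=powerpc CROSS_COMPILE=powerpc64-linux- oldconfig
      	make ARCH=powerpc CROSS_COMPILE=powerpc64-linux- -j16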
      
      Given the fact that I had only a couple of failures from tests on step
      6, I'm fairly confident about the coverage of this conversion patch.
      If there is a breakage, it's likely to be something in one of the arch
      headers which should be easily discoverable on most builds of
      the specific arch.
      Signed-off-by: Tejun Heo <tj@kernel.org>
      Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
      Cc: Ingo Molnar <mingo@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      5a0e3ad6
  6. 06 Mar, 2010 1 commit
    • mm: change anon_vma linking to fix multi-process server scalability issue · 5beb4930
      Rik van Riel authored
      
      The old anon_vma code can lead to scalability issues with heavily forking
      workloads.  Specifically, each anon_vma will be shared between the parent
      process and all its child processes.
      
      In a workload with 1000 child processes and a VMA with 1000 anonymous
      pages per process that get COWed, this leads to a system with a million
      anonymous pages in the same anon_vma, each of which is mapped in just one
      of the 1000 processes.  However, the current rmap code needs to walk them
      all, leading to O(N) scanning complexity for each page.
      
      This can result in systems where one CPU is walking the page tables of
      1000 processes in page_referenced_one, while all other CPUs are stuck on
      the anon_vma lock.  This leads to catastrophic failure for a benchmark
      like AIM7, where the total number of processes can reach in the tens of
      thousands.  Real workloads are still a factor of 10 less process intensive
      than AIM7, but they are catching up.
      
      This patch changes the way anon_vmas and VMAs are linked, which allows us
      to associate multiple anon_vmas with a VMA.  At fork time, each child
      process gets its own anon_vmas, in which its COWed pages will be
      instantiated.  The parents' anon_vma is also linked to the VMA, because
      non-COWed pages could be present in any of the children.
      
      This reduces rmap scanning complexity to O(1) for the pages of the 1000
      child processes, with O(N) complexity for at most 1/N pages in the
      system.  This reduces the average scanning cost in heavily forking
      workloads from O(N) to 2.
      
      The only real complexity in this patch stems from the fact that linking a
      VMA to anon_vmas now involves memory allocations.  This means vma_adjust
      can fail if it needs to attach a VMA to anon_vma structures.  This in
      turn means error handling needs to be added to the calling functions.
      
      A second source of complexity is that, because there can be multiple
      anon_vmas, the anon_vma linking in vma_adjust can no longer be done under
      "the" anon_vma lock.  To prevent the rmap code from walking up an
      incomplete VMA, this patch introduces the VM_LOCK_RMAP VMA flag.  This bit
      flag uses the same slot as the NOMMU VM_MAPPED_COPY, with an ifdef in mm.h
      to make sure it is impossible to compile a kernel that needs both symbolic
      values for the same bitflag.
      
      Some test results:
      
      Without the anon_vma changes, when AIM7 hits around 9.7k users (on a test
      box with 16GB RAM and not quite enough IO), the system ends up running
      >99% in system time, with every CPU on the same anon_vma lock in the
      pageout code.
      
      With these changes, AIM7 hits the cross-over point around 29.7k users.
      This happens with ~99% IO wait time; there never seems to be any spike in
      system time.  The anon_vma lock contention appears to be resolved.
      
      [akpm@linux-foundation.org: cleanups]
      Signed-off-by: Rik van Riel <riel@redhat.com>
      Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      Cc: Larry Woodman <lwoodman@redhat.com>
      Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
      Cc: Minchan Kim <minchan.kim@gmail.com>
      Cc: Andrea Arcangeli <aarcange@redhat.com>
      Cc: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
      5beb4930
  7. 21 Dec, 2009 1 commit
  8. 16 Dec, 2009 20 commits
    • HWPOISON: Remove stray phrase in a comment · f2c03deb
      Andi Kleen authored
      
      Better to have complete sentences.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      f2c03deb
    • Andi Kleen · 12686d15
    • HWPOISON: Add soft page offline support · facb6011
      Andi Kleen authored
      
      This is a simpler, gentler variant of memory_failure() for soft page
      offlining controlled from user space.  It doesn't kill anything; it
      just tries to invalidate the page and, if that doesn't work, migrate
      it away.
      
      This is useful for predictive failure analysis, where a page has
      a high rate of corrected errors but hasn't gone bad yet.  Instead,
      it can be offlined early and avoided.
      
      The offlining is controlled from sysfs, including a new generic
      entry point for hard page offlining for symmetry too.
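
      A hedged usage sketch, assuming the sysfs entries are
      soft_offline_page and hard_offline_page under
      /sys/devices/system/memory and take a physical address (the address
      below is only an example; check the ABI documentation added with
      this patch):

      	addr=0x2f54000                                               # physical address of the suspect page
      	echo $addr > /sys/devices/system/memory/soft_offline_page   # gentle: migrate, never kills
      	# echo $addr > /sys/devices/system/memory/hard_offline_page # hard variant, for symmetry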
      
      We use the page isolate facility to prevent a re-allocation
      race.  Normally this is only used by memory hotplug.  To avoid
      races with memory allocation I am using lock_system_sleep().
      This avoids the situation where memory hotplug is about
      to isolate a page range and then hwpoison undoes that work.
      This is a big hammer, but it is the simplest solution for now.
      
      When the page is not free or LRU we try to free pages
      from slab and other caches. The slab freeing is currently
      quite dumb and does not try to focus on the specific slab
      cache which might own the page.  This could potentially be
      improved later.
      
      Thanks to Fengguang Wu and Haicheng Li for some fixes.
      
      [Added fix from Andrew Morton to adapt to new migrate_pages prototype]
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      facb6011
    • HWPOISON: Use new shake_page in memory_failure · 0474a60e
      Andi Kleen authored
      
      shake_page handles more types of page caches than
      the much simpler lru_add_drain_all:
      
      - slab (quite inefficiently for now)
      - any other caches with a shrinker callback
      - per cpu page allocator pages
      - per CPU LRU
      
      Use this call to try to turn pages into free or LRU pages.
      Then handle the case of the page becoming free after draining everything.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      0474a60e
    • HWPOISON: add an interface to switch off/on all the page filters · 1bfe5feb
      Haicheng Li authored
      
      In some use cases, the user doesn't need extra filtering.  E.g. a
      user program can inject errors through the madvise syscall into its
      own pages; however, it might not know exactly what the page state is
      or which inode the page belongs to.
      
      So introduce an on/off interface, "corrupt-filter-enable".
      
      Echo 0 to switch off page filters, and echo 1 to switch on the filters.
      [AK: changed default to 0]
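
      A minimal usage sketch (the /debug/hwpoison path follows the memcg
      filter example elsewhere in this series; adjust to wherever debugfs
      is mounted):

      	echo 0 > /debug/hwpoison/corrupt-filter-enable   # bypass all page filters (the default)
      	echo 1 > /debug/hwpoison/corrupt-filter-enable   # honour the configured filters again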
      Signed-off-by: Haicheng Li <haicheng.li@linux.intel.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      1bfe5feb
    • HWPOISON: add memory cgroup filter · 4fd466eb
      Andi Kleen authored
      
      The hwpoison test suite needs to inject hwpoison into a collection of
      selected task pages, and must not touch pages not owned by them and
      thus kill important system processes such as init. (But it's OK to
      mis-hwpoison free/unowned pages as well as shared clean pages.
      Mis-hwpoison of shared dirty pages will kill all tasks, so the test
      suite will target all or none of such tasks in the first place.)
      
      The memory cgroup serves this purpose well. We can put the target
      processes under the control of a memory cgroup, and tell the hwpoison
      injection code to only kill pages associated with some active memory
      cgroup.
      
      The prerequisite for doing hwpoison stress tests with mem_cgroup is
      that the mem_cgroup code tracks task pages _accurately_ (unless the
      page is locked), which we believe is/should be true.
      
      The benefits are simplification of hwpoison injector code. Also the
      mem_cgroup code will automatically be tested by hwpoison test cases.
      
      The alternative interfaces pin-pfn/unpin-pfn can also delegate the
      (process and page flags) filtering functions reliably to user space.
      However, a prototype implementation shows that this scheme adds more
      complexity than we wanted.
      
      Example test case:
      
      	mkdir /cgroup/hwpoison
      
      	usemem -m 100 -s 1000 &
      	echo `jobs -p` > /cgroup/hwpoison/tasks
      
      	memcg_ino=$(ls -id /cgroup/hwpoison | cut -f1 -d' ')
      	echo $memcg_ino > /debug/hwpoison/corrupt-filter-memcg
      
      	page-types -p `pidof init`   --hwpoison  # shall do nothing
      	page-types -p `pidof usemem` --hwpoison  # poison its pages
      
      [AK: Fix documentation]
      [Add fix for problem noticed by Li Zefan <lizf@cn.fujitsu.com>;
      dentry in the css could be NULL]
      
      CC: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
      CC: Hugh Dickins <hugh.dickins@tiscali.co.uk>
      CC: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
      CC: Balbir Singh <balbir@linux.vnet.ibm.com>
      CC: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
      CC: Li Zefan <lizf@cn.fujitsu.com>
      CC: Paul Menage <menage@google.com>
      CC: Nick Piggin <npiggin@suse.de>
      CC: Andi Kleen <andi@firstfloor.org>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      4fd466eb
    • HWPOISON: add page flags filter · 478c5ffc
      Wu Fengguang authored
      
      When specified, only poison pages if ((page_flags & mask) == value).
      
      -       corrupt-filter-flags-mask
      -       corrupt-filter-flags-value
      
      This allows stress testing of many kinds of pages.
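
      A hedged example: target dirty LRU pages, assuming the mask/value
      bits use the same layout as /proc/kpageflags (the KPF_* bit numbers
      below come from the page-types tool and should be double-checked):

      	KPF_DIRTY=4; KPF_LRU=5                        # assumed kpageflags bit numbers
      	mask=$(( (1 << KPF_DIRTY) | (1 << KPF_LRU) ))
      	echo $mask > /debug/hwpoison/corrupt-filter-flags-mask
      	echo $mask > /debug/hwpoison/corrupt-filter-flags-value   # require both bits set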
      
      Strictly speaking, poisoning buddy pages requires taking the zone
      lock, to avoid setting PG_hwpoison on a "was buddy but now allocated
      to someone" page.  However, we can just do nothing, because we set
      PG_locked at the beginning; this prevents the page allocator from
      allocating it to someone.  (It will BUG() on the unexpected
      PG_locked, which is fine for hwpoison testing.)
      
      [AK: Add select PROC_PAGE_MONITOR to satisfy dependency]
      
      CC: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      478c5ffc
    • HWPOISON: add fs/device filters · 7c116f2b
      Wu Fengguang authored
      
      Filesystem data/metadata present the most tricky-to-isolate pages.
      They require careful code review and stress testing to get right.
      
      The fs/device filter helps to target the stress tests to some specific
      filesystem pages.  The filter condition is the block device's
      major/minor numbers:
              - corrupt-filter-dev-major
              - corrupt-filter-dev-minor
      When specified (non -1), only page cache pages that belong to that
      device will be poisoned.
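
      A hedged example: confine injection to page cache pages of a single
      test filesystem (the device numbers are examples; look up the real
      ones with ls -l on the block device):

      	major=8; minor=17                                # e.g. /dev/sdb1 (major 8, minor 17)
      	echo $major > /debug/hwpoison/corrupt-filter-dev-major
      	echo $minor > /debug/hwpoison/corrupt-filter-dev-minor
      	echo -1 > /debug/hwpoison/corrupt-filter-dev-major   # -1 switches the filter off again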
      
      The filters are checked reliably on the locked and refcounted page.
      
      Haicheng: clear PG_hwpoison and drop bad page count if filter not OK
      AK: Add documentation
      
      CC: Haicheng Li <haicheng.li@intel.com>
      CC: Nick Piggin <npiggin@suse.de>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      7c116f2b
    • HWPOISON: return 0 to indicate success reliably · 138ce286
      Wu Fengguang authored
      
      Return 0 to indicate success, when
      - action result is RECOVERED or DELAYED
      - no extra page reference
      
      Note that dirty swapcache pages are kept in the swapcache, so they
      can have one extra reference.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      138ce286
    • HWPOISON: make semantics of IGNORED/DELAYED clear · d95ea51e
      Wu Fengguang authored
      
      Change semantics for
      - IGNORED: not handled; it may well be _unsafe_
      - DELAYED: to be handled later; it is _safe_
      
      With this change,
      - IGNORED/FAILED mean (maybe) Error
      - DELAYED/RECOVERED mean Success
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      d95ea51e
    • HWPOISON: Add unpoisoning support · 847ce401
      Wu Fengguang authored
      
      The unpoisoning interface is useful for stress testing tools to
      reclaim poisoned pages (to prevent OOM).
      
      There is no hardware-level unpoisoning, so this cannot be used for
      real memory errors, only for software-injected errors.
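
      A hedged sketch of the intended test flow, assuming the debugfs
      entries are the existing corrupt-pfn injector plus a new
      unpoison-pfn entry (names should be checked against the
      hwpoison-inject module):

      	pfn=0x19a00                                  # example page frame number
      	echo $pfn > /debug/hwpoison/corrupt-pfn      # software-inject poison
      	# ... exercise the poisoned page from the stress test ...
      	echo $pfn > /debug/hwpoison/unpoison-pfn     # reclaim it so a long run does not OOM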
      
      Note that it may leak pages silently - those which have been removed
      from the LRU cache but not isolated from the page cache/swap cache at
      hwpoison time.  In particular, stress tests of dirty swap cache pages
      should reboot the system before exhausting memory.
      
      AK: Fix comments, add documentation, add printks, rename symbol
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      847ce401
    • HWPOISON: detect free buddy pages explicitly · 8d22ba1b
      Wu Fengguang authored
      
      Most free pages in the buddy system have no PG_buddy set.
      Introduce is_free_buddy_page() for detecting them reliably.
      
      CC: Nick Piggin <npiggin@suse.de>
      CC: Mel Gorman <mel@linux.vnet.ibm.com>
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      8d22ba1b
    • HWPOISON: remove the free buddy page handler · 95d01fc6
      Wu Fengguang authored
      
      The buddy page has already been handled at the very beginning.
      So remove the redundant code.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      95d01fc6
    • HWPOISON: introduce delete_from_lru_cache() · dc2a1cbf
      Wu Fengguang authored
      
      Introduce delete_from_lru_cache() to
        - clear PG_active, PG_unevictable to avoid complaints at unpoison time
        - move the isolate_lru_page() call back to the handlers instead of the
          entrance of __memory_failure(); this is more hwpoison-filter friendly
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      dc2a1cbf
    • Wu Fengguang · db0480b3
    • HWPOISON: abort on failed unmap · 1668bfd5
      Wu Fengguang authored
      
      Don't try to isolate a still-mapped page.  Otherwise we will hit the
      BUG_ON(page_mapped(page)) in __remove_from_page_cache().
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      1668bfd5
    • HWPOISON: Turn ref argument into flags argument · 82ba011b
      Andi Kleen authored
      
      Now that "ref" is just a boolean, turn it into a flags argument.  The
      first step is only a single flag that makes the code's intention
      clearer, but more may follow.
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      82ba011b
    • HWPOISON: avoid grabbing the page count multiple times during madvise injection · bd1ce5f9
      Wu Fengguang authored
      
      If the page is referenced twice, in madvise_hwpoison() and in
      __memory_failure(), remove_mapping() will fail because it expects
      page_count=2.  Fix it by not grabbing an extra page count in
      __memory_failure().
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      bd1ce5f9
    • HWPOISON: return ENXIO on invalid page number · a7560fc8
      Wu Fengguang authored
      
      Use a different errno than the usual EIO for invalid page numbers.
      This is mainly for better reporting for the injector.
      
      This also avoids calling action_result() with an invalid pfn.
      Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
      Signed-off-by: Andi Kleen <ak@linux.intel.com>
      a7560fc8