Commit Graph

9 Commits

Ralph Siemsen
e40b1074af [ARM] 3815/1: headers_install support for ARM
Move kernel-only #includes inside #ifdef __KERNEL__ so that the
headers_install target can be used on ARM.

Signed-off-by: Ralph Siemsen <ralphs@netwinder.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-09-18 16:28:50 +01:00
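
The guard pattern this change applies across the ARM headers looks
roughly like the sketch below; the header name and symbols are
hypothetical, only the #ifdef __KERNEL__ split is the point.
headers_install strips the guarded part from the exported copy.

#ifndef __ASM_ARM_EXAMPLE_H
#define __ASM_ARM_EXAMPLE_H

/* Userspace-visible part: survives "make headers_install". */
#define EXAMPLE_MAGIC	0x45

#ifdef __KERNEL__
/* Kernel-only part: removed from the exported copy of the header. */
#include <linux/sched.h>
extern int example_kernel_state;
#endif /* __KERNEL__ */

#endif /* __ASM_ARM_EXAMPLE_H */
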
Russell King
002547b4f8 [ARM] nommu: adjust headers for !MMU ARM systems
Largely based on Hyok Choi's patches, this fixes up the asm-arm
header files for MMU-less systems.  Over and above Hyok's patches:

- nommu.h merged into mmu.h (it's only a structure)
- nommu_context.h is essentially the same as mmu_context.h, but
  without the MM switching code.

In both cases there's no point having separate files.  Also, in
memory.h, there's no point #ifndef'ing PHYS_OFFSET and END_MEM -
both CONFIG_DRAM_BASE and CONFIG_DRAM_SIZE will always be set by
the configuration scripts.

Other files have minor formatting changes, but are essentially
the same.  Hyok's original patches were signed off thusly:

  Signed-off-by: Hyok S. Choi <hyok.choi@samsung.com>

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-06-28 17:59:45 +01:00
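
For the !MMU case, the memory.h simplification mentioned above comes
down to defines along these lines; this is a sketch of the idea, not
a verbatim quote of the file, and the exact macro bodies are an
assumption.

#ifndef CONFIG_MMU
/* No MMU: the physical layout comes straight from Kconfig, so no
 * #ifndef fallbacks are needed around these. */
#define PHYS_OFFSET	(CONFIG_DRAM_BASE)
#define END_MEM		(CONFIG_DRAM_BASE + CONFIG_DRAM_SIZE)
#endif
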
David Woodhouse
62c4f0a2d5 Don't include linux/config.h from anywhere else in include/
Signed-off-by: David Woodhouse <dwmw2@infradead.org>
2006-04-26 12:56:16 +01:00
Lennert Buytenhek
23bdf86aa0 [ARM] 3377/2: add support for intel xsc3 core
Patch from Lennert Buytenhek

This patch adds support for the new XScale v3 core.  This is an
ARMv5 ISA core with the following additions:

- L2 cache
- I/O coherency support (on select chipsets)
- Low-Locality Reference cache attributes (replaces mini-cache)
- Supersections (v6 compatible)
- 36-bit addressing (v6 compatible)
- Single instruction cache line clean/invalidate
- LRU cache replacement (vs round-robin)

I attempted to merge the XSC3 support into proc-xscale.S, but XSC3
cores have separate errata and have to handle things like L2, so it
is simpler to keep it separate.

L2 cache support is currently a build option because the L2 enable
bit must be set before we enable the MMU and there is no easy way to
capture command line parameters at this point.

There are still optimizations that can be done, such as using LLR for
copypage (in theory using the existing mini-cache code), but those
can be addressed down the road.

Signed-off-by: Deepak Saxena <dsaxena@plexity.net>
Signed-off-by: Lennert Buytenhek <buytenh@wantstofly.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-03-28 21:00:40 +01:00
Nicolas Pitre
da2b1cd619 [ARM] 3101/1: ARM EABI: slab memory must be 64-bit aligned
Patch from Nicolas Pitre

Although ARM still uses 32-bit pointers, version 5 and later of the
ARM architecture introduced the ldrd and strd instructions to move
64-bit data, which must be 64-bit aligned in memory, and the EABI
includes new constraints on structure data alignment to allow the
compiler to use those instructions.  This means that any slab
allocation must start on a 64-bit boundary, which is not equivalent
to BYTES_PER_WORD, especially on those architecture versions that
implement the ldrd/strd instructions.

Overriding the default alignment disables some slab debug features. If
those debug features are really needed then the kernel will have to be
compiled for version 4 of the ARM architecture.

Signed-off-by: Nicolas Pitre <nico@cam.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2006-01-14 16:18:07 +00:00
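
The override described above amounts to a small definition of this
shape (a sketch; the guard condition is an assumption rather than a
quote from the patch):

/* When building for the ARM EABI on an architecture version that has
 * ldrd/strd, force slab objects onto an 8-byte (64-bit) boundary
 * instead of the default BYTES_PER_WORD-based alignment. */
#if defined(CONFIG_AEABI) && (__LINUX_ARM_ARCH__ >= 5)
#define ARCH_SLAB_MINALIGN	8
#endif

ARCH_SLAB_MINALIGN is the slab allocator's hook for an
architecture-specified minimum alignment; the condition an
architecture uses to set it is its own choice.
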
Stephen Rothwell
fd4fd5aac1 [PATCH] mm: consolidate get_order
Someone mentioned that almost all the architectures used basically the same
implementation of get_order.  This patch consolidates them into
asm-generic/page.h and includes that in the appropriate places.  The
exceptions are ia64 and ppc, which have their own (presumably
optimised) versions.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
2005-09-05 00:05:39 -07:00
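
The shared helper is essentially the classic loop below.  This is a
standalone sketch rather than a copy of asm-generic/page.h, with
PAGE_SHIFT assumed to be 12 (4 KiB pages) so it can be compiled and
tested on its own.

#include <stdio.h>

#define PAGE_SHIFT 12	/* assumed 4 KiB pages for this sketch */

/* Smallest order n such that (1 << n) pages cover 'size' bytes. */
static int get_order(unsigned long size)
{
	int order = -1;

	size = (size - 1) >> (PAGE_SHIFT - 1);
	do {
		size >>= 1;
		order++;
	} while (size);

	return order;
}

int main(void)
{
	/* 4096 -> 0, 8192 -> 1, 12288 -> 2, 16384 -> 2 */
	printf("%d %d %d %d\n", get_order(4096), get_order(8192),
	       get_order(12288), get_order(16384));
	return 0;
}
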
Russell King
d2bab05ac1 [PATCH] ARM: Move copy/clear user_page locking into implementation
Move the locking for copy_user_page() and clear_user_page() into
the implementations that require it.  For simple memcpy/memset
based implementations, the locking is unnecessary overhead and
needlessly prevents preemption from occurring.

Signed-off-by: Russell King <rmk@arm.linux.org.uk>
2005-05-10 14:23:01 +01:00
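
As a rough illustration of the point above (the names and the
pthread-based locking are hypothetical user-space stand-ins, not the
kernel code): an implementation that only copies memory has no
shared state and needs no lock, while one that stages the copy
through a shared temporary mapping takes the lock itself, so only
its callers pay for it.

#include <pthread.h>
#include <string.h>

#define PAGE_SIZE 4096			/* assumed for the sketch */

/* Simple case: a plain copy with no shared state, so no locking and
 * no window in which preemption is effectively blocked. */
static void copy_user_page_simple(void *kto, const void *kfrom)
{
	memcpy(kto, kfrom, PAGE_SIZE);
}

/* Cache-aliasing style case: a single shared temporary buffer stands
 * in for the shared kernel mapping, so the lock lives here, inside
 * the implementation that actually needs it. */
static char shared_tmp[PAGE_SIZE];
static pthread_mutex_t tmp_lock = PTHREAD_MUTEX_INITIALIZER;

static void copy_user_page_aliasing(void *kto, const void *kfrom)
{
	pthread_mutex_lock(&tmp_lock);
	memcpy(shared_tmp, kfrom, PAGE_SIZE);
	memcpy(kto, shared_tmp, PAGE_SIZE);
	pthread_mutex_unlock(&tmp_lock);
}
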
Russell King
c4e1f6f6bf [PATCH] ARM: Add top_pmd, which points at the top-most page table
Signed-off-by: Russell King <rmk@arm.linux.org.uk>
2005-05-10 10:40:19 +01:00
Linus Torvalds
1da177e4c3 Linux-2.6.12-rc2
Initial git repository build. I'm not bothering with the full history,
even though we have it. We can create a separate "historical" git
archive of that later if we want to, and in the meantime it's about
3.2GB when imported into git - space that would just make the early
git days unnecessarily complicated, when we don't have a lot of good
infrastructure for it.

Let it rip!
2005-04-16 15:20:36 -07:00