S: Spain
N: Linus Torvalds
-E: torvalds@osdl.org
+E: torvalds@linux-foundation.org
D: Original kernel hacker
S: 12725 SW Millikan Way, Suite 400
S: Beaverton, Oregon 97005
If the new code is substantial, addition of subsystem-specific fault
injection might be appropriate.
+
+22: Newly-added code has been compiled with `gcc -W'. This will generate
+ lots of noise, but is good for finding bugs like "warning: comparison
+ between signed and unsigned".
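+
+    As a hedged illustration of the class of bug this catches (check()
+    is a made-up example, not kernel code): with n = -1, n is converted
+    to unsigned for the comparison and becomes UINT_MAX, so the branch
+    is never taken; gcc -W flags the comparison.
+
+	static int check(int n, unsigned int u)
+	{
+		if (n < u)	/* warning: comparison between signed and unsigned */
+			return 1;	/* check(-1, 1) never gets here */
+		return 0;
+	}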
Linus Torvalds is the final arbiter of all changes accepted into the
-Linux kernel. His e-mail address is <torvalds@osdl.org>. He gets
-a lot of e-mail, so typically you should do your best to -avoid- sending
-him e-mail.
+Linux kernel. His e-mail address is <torvalds@linux-foundation.org>.
+He gets a lot of e-mail, so typically you should do your best to -avoid-
+sending him e-mail.
Patches which are bug fixes, are "obvious" changes, or similarly
require little discussion should be sent or CC'd to Linus. Patches
Who: Len Brown <len.brown@intel.com>
---------------------------
+
+What: JFFS (version 1)
+When: 2.6.21
+Why:	Unmaintained for years, superseded by JFFS2 for years.
+Who: Jeff Garzik <jeff@garzik.org>
+
+---------------------------
RESOURCES
=========
-The Linux version of the 9p server is now maintained under the npfs project
-on sourceforge (http://sourceforge.net/projects/npfs).
+Our current recommendation is to use Inferno (http://www.vitanuova.com/inferno)
+as the 9p server. You can start a 9p server under Inferno by issuing the
+following command:
+ ; styxlisten -A tcp!*!564 export '#U*'
+
+The -A specifies an unauthenticated export. The 564 is the port number (you
+may have to choose a higher port number if running as a normal user). The '#U*'
+specifies exporting the root of the Linux name space. You may specify a
+subset of the namespace by extending the path: '#U*'/tmp would just export
+/tmp. For more information, see the Inferno manual pages covering styxlisten
+and export.
+
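+As a hedged example (the server address is illustrative), a Linux client
+built with 9p support (CONFIG_9P_FS) can then mount the Inferno export with:
+
+	mount -t 9p 192.168.1.2 /mnt/9 -o port=564
+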
+A Linux version of the 9p server is now maintained under the npfs project
+on sourceforge (http://sourceforge.net/projects/npfs). There is also a
+more stable single-threaded version of the server (named spfs) available from
+the same CVS repository.
There are user and developer mailing lists available through the v9fs project
on sourceforge (http://sourceforge.net/projects/v9fs).
The 2.6 kernel support is working on PPC and x86.
-PLEASE USE THE SOURCEFORGE BUG-TRACKER TO REPORT PROBLEMS.
+PLEASE USE THE KERNEL BUGZILLA TO REPORT PROBLEMS. (http://bugzilla.kernel.org)
----------------------------
H. Peter Anvin <hpa@zytor.com>
- Last update 2006-11-17
+ Last update 2007-01-26
On the i386 platform, the Linux kernel uses a rather complicated boot
convention. This has evolved partially due to historical aspects, as
7 GRuB
8 U-BOOT
9 Xen
+ A Gujin
Please contact <hpa@zytor.com> if you need a bootloader ID
value assigned.
memory image to a dump file on the local disk, or across the network to
a remote system.
-Kdump and kexec are currently supported on the x86, x86_64, ppc64 and IA64
+Kdump and kexec are currently supported on the x86, x86_64, ppc64 and ia64
architectures.
When the system kernel boots, it reserves a small section of memory for
2) Download the kexec-tools user-space package from the following URL:
-http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-tools-testing-20061214.tar.gz
+http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/kexec-tools-testing.tar.gz
+
+This is a symlink to the latest version, which at the time of writing is
+20061214, the only release of kexec-tools-testing so far. As other versions
+are released, the older ones will remain available at
+http://www.kernel.org/pub/linux/kernel/people/horms/kexec-tools/
Note: Latest kexec-tools-testing git tree is available at
3) Unpack the tarball with the tar command, as follows:
- tar xvpzf kexec-tools-testing-20061214.tar.gz
+ tar xvpzf kexec-tools-testing.tar.gz
-4) Change to the kexec-tools-1.101 directory, as follows:
+4) Change to the kexec-tools directory, as follows:
- cd kexec-tools-testing-20061214
+ cd kexec-tools-testing-VERSION
5) Configure the package, as follows:
Dump-capture kernel config options (Arch Dependent, ia64)
----------------------------------------------------------
-(To be filled)
+
+- No specific options are required to create a dump-capture kernel
+ for ia64, other than those specified in the arch independent section
+ above. This means that it is possible to use the system kernel
+ as a dump-capture kernel if desired.
+
+ The crashkernel region can be automatically placed by the system
+ kernel at run time. This is done by specifying the base address as 0,
+ or omitting it altogether.
+
+ crashkernel=256M@0
+ or
+ crashkernel=256M
+
+ If the start address is specified, note that the start address of the
+ kernel will be aligned to 64MB, so if the start address is not itself
+ 64MB-aligned, any space below the alignment point will be wasted. For
+ example, with crashkernel=256M@300M the kernel would actually start at
+ the next 64MB boundary (320MB), wasting the 20MB below it.
Boot into System Kernel
On ppc64, use "crashkernel=128M@32M".
+ On ia64, 256M@256M is a generous value that typically works.
+ The region may be automatically placed on ia64, see the
+ dump-capture kernel config option notes above.
+
Load the Dump-capture Kernel
============================
For ppc64:
- Use vmlinux
For ia64:
- (To be filled)
+ - Use vmlinux or vmlinuz.gz
+
If you are using an uncompressed vmlinux image then use the following command
to load dump-capture kernel.
--initrd=<initrd-for-dump-capture-kernel> \
--append="root=<root-dev> <arch-specific-options>"
+Please note that --args-linux does not need to be specified for ia64.
+It is planned to make this a no-op on that architecture, but for now
+it should be omitted.
+
Following are the arch specific command line options to be used while
loading dump-capture kernel.
-For i386 and x86_64:
+For i386, x86_64 and ia64:
"init 1 irqpoll maxcpus=1"
For ppc64:
"init 1 maxcpus=1 noirqdistrib"
-For IA64
- (To be filled)
-
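+
+Putting the pieces together, a complete load command on i386 might look
+like this (the image paths are illustrative):
+
+   kexec -p /boot/vmlinux-kdump \
+   --initrd=/boot/initrd-kdump \
+   --append="root=/dev/sda1 init 1 irqpoll maxcpus=1"
+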
Notes on loading the dump-capture kernel:
Bill Ryder <bryder@sgi.com>
Thomas Sailer <sailer@ife.ee.ethz.ch>
Gregory P. Smith <greg@electricrain.com>
- Linus Torvalds <torvalds@osdl.org>
+ Linus Torvalds <torvalds@linux-foundation.org>
Roman Weissgaerber <weissg@vienna.at>
<Kazuki.Yasumatsu@fujixerox.co.jp>
S: Maintained
DSCC4 DRIVER
-P: François Romieu
-M: romieu@cogenit.fr
-M: romieu@ensta.fr
+P: Francois Romieu
+M: romieu@fr.zoreil.com
+L: netdev@vger.kernel.org
S: Maintained
DVB SUBSYSTEM AND DRIVERS
ETHERNET BRIDGE
P: Stephen Hemminger
-M: shemminger@osdl.org
+M: shemminger@linux-foundation.org
L: bridge@osdl.org
W: http://bridge.sourceforge.net/
S: Maintained
KERNEL NFSD
P: Neil Brown
-M: neilb@cse.unsw.edu.au
+M: neilb@suse.de
L: nfs@lists.sourceforge.net
W: http://nfs.sourceforge.net/
-W: http://www.cse.unsw.edu.au/~neilb/patches/linux-devel/
-S: Maintained
+S: Supported
KERNEL VIRTUAL MACHINE (KVM)
P: Avi Kivity
NETEM NETWORK EMULATOR
P: Stephen Hemminger
-M: shemminger@osdl.org
+M: shemminger@linux-foundation.org
L: netem@osdl.org
S: Maintained
P: Ingo Molnar
M: mingo@redhat.com
P: Neil Brown
-M: neilb@cse.unsw.edu.au
+M: neilb@suse.de
L: linux-raid@vger.kernel.org
-S: Maintained
+S: Supported
SOFTWARE SUSPEND:
P: Pavel Machek
SKGE, SKY2 10/100/1000 GIGABIT ETHERNET DRIVERS
P: Stephen Hemminger
-M: shemminger@osdl.org
+M: shemminger@linux-foundation.org
L: netdev@vger.kernel.org
S: Maintained
L: i2c@lm-sensors.org
S: Maintained
+VIA VELOCITY NETWORK DRIVER
+P: Francois Romieu
+M: romieu@fr.zoreil.com
+L: netdev@vger.kernel.org
+S: Maintained
+
UCLINUX (AND M68KNOMMU)
P: Greg Ungerer
M: gerg@uclinux.org
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 20
-EXTRAVERSION =-rc5
+EXTRAVERSION =-rc6
NAME = Homicidal Dwarf Hamster
# *DOCUMENTATION*
the file MAINTAINERS to see if there is a particular person associated
with the part of the kernel that you are having trouble with. If there
isn't anyone listed there, then the second best thing is to mail
- them to me (torvalds@osdl.org), and possibly to any other relevant
- mailing-list or to the newsgroup.
+ them to me (torvalds@linux-foundation.org), and possibly to any other
+ relevant mailing-list or to the newsgroup.
- In all bug-reports, *please* tell what kernel you are talking about,
how to duplicate the problem, and what your setup is (use your common
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.19-rc2
-# Fri Oct 20 11:52:37 2006
+# Linux kernel version: 2.6.20-rc6
+# Fri Jan 26 13:12:59 2007
#
CONFIG_AVR32=y
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_RWSEM_GENERIC_SPINLOCK=y
CONFIG_GENERIC_TIME=y
+# CONFIG_ARCH_HAS_ILOG2_U32 is not set
+# CONFIG_ARCH_HAS_ILOG2_U64 is not set
CONFIG_GENERIC_HWEIGHT=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
# CONFIG_UTS_NS is not set
CONFIG_AUDIT=y
# CONFIG_IKCONFIG is not set
+CONFIG_SYSFS_DEPRECATED=y
CONFIG_RELAY=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_CC_OPTIMIZE_FOR_SIZE=y
# Block layer
#
CONFIG_BLOCK=y
+# CONFIG_LBD is not set
# CONFIG_BLK_DEV_IO_TRACE is not set
+# CONFIG_LSF is not set
#
# IO Schedulers
# CONFIG_OWNERSHIP_TRACE is not set
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
+# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_CMDLINE=""
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_CUBIC=y
CONFIG_DEFAULT_TCP_CONG="cubic"
+# CONFIG_TCP_MD5SIG is not set
# CONFIG_IPV6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
# User Modules And Translation Layers
#
CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLKDEVS=y
CONFIG_MTD_BLOCK=y
# CONFIG_FTL is not set
# CONFIG_NFTL is not set
#
# Misc devices
#
-# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
#
#
# PHY device support
#
+# CONFIG_PHYLIB is not set
#
# Ethernet (10 or 100Mbit)
#
-# CONFIG_NET_ETHERNET is not set
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+CONFIG_MACB=y
#
# Ethernet (1000 Mbit)
# CONFIG_GEN_RTC is not set
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
-
-#
-# Ftape, the floppy tape device driver
-#
# CONFIG_RAW_DRIVER is not set
#
# DMA Devices
#
+#
+# Virtualization
+#
+
#
# File systems
#
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
-# CONFIG_JFFS_FS is not set
CONFIG_JFFS2_FS=y
CONFIG_JFFS2_FS_DEBUG=0
CONFIG_JFFS2_FS_WRITEBUFFER=y
# CONFIG_NLS_KOI8_U is not set
CONFIG_NLS_UTF8=m
+#
+# Distributed Lock Manager
+#
+# CONFIG_DLM is not set
+
#
# Kernel hacking
#
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_MAGIC_SYSRQ=y
# CONFIG_UNUSED_SYMBOLS is not set
+CONFIG_DEBUG_FS=y
+# CONFIG_HEADERS_CHECK is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_INFO is not set
-CONFIG_DEBUG_FS=y
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_LIST is not set
CONFIG_FRAME_POINTER=y
-# CONFIG_UNWIND_INFO is not set
CONFIG_FORCED_INLINING=y
-# CONFIG_HEADERS_CHECK is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_KPROBES is not set
#
# Library routines
#
+CONFIG_BITREVERSE=y
CONFIG_CRC_CCITT=m
# CONFIG_CRC16 is not set
CONFIG_CRC32=y
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_PLIST=y
+CONFIG_IOMAP_COPY=y
*/
EXPORT_SYMBOL(memset);
EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL(clear_page);
/*
* Userspace access stuff.
pushl $(__USER_CS)
CFI_ADJUST_CFA_OFFSET 4
/*CFI_REL_OFFSET cs, 0*/
+#ifndef CONFIG_COMPAT_VDSO
/*
* Push current_thread_info()->sysenter_return to the stack.
* A tiny bit of offset fixup is necessary - 4*4 means the 4 words
* pushed above; +8 corresponds to copy_thread's esp0 setting.
*/
pushl (TI_sysenter_return-THREAD_SIZE+8+4*4)(%esp)
+#else
+ pushl $SYSENTER_RETURN
+#endif
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET eip, 0
if ((nmi >= NMI_INVALID) || (nmi < NMI_NONE))
return 0;
- /*
- * If any other x86 CPU has a local APIC, then
- * please test the NMI stuff there and send me the
- * missing bits. Right now Intel P6/P4 and AMD K7 only.
- */
- if ((nmi == NMI_LOCAL_APIC) && (nmi_known_cpu() == 0))
- return 0; /* no lapic support */
+
nmi_watchdog = nmi;
return 1;
}
.irq_enable_sysexit = native_irq_enable_sysexit,
.iret = native_iret,
};
-EXPORT_SYMBOL(paravirt_ops);
+
+/*
+ * NOTE: CONFIG_PARAVIRT is experimental and the paravirt_ops
+ * semantics are subject to change. Hence we only do this
+ * internal-only export of this, until it gets sorted out and
+ * all lowlevel CPU ops used by modules are separately exported.
+ */
+EXPORT_SYMBOL_GPL(paravirt_ops);
#ifdef CONFIG_COMPAT_VDSO
__set_fixmap(FIX_VDSO, __pa(syscall_page), PAGE_READONLY);
printk("Compat vDSO mapped to %08lx.\n", __fix_to_virt(FIX_VDSO));
-#else
- /*
- * In the non-compat case the ELF coredumping code needs the fixmap:
- */
- __set_fixmap(FIX_VDSO, __pa(syscall_page), PAGE_KERNEL_RO);
#endif
if (!boot_cpu_has(X86_FEATURE_SEP)) {
return 0;
}
+#ifndef CONFIG_COMPAT_VDSO
static struct page *syscall_nopage(struct vm_area_struct *vma,
unsigned long adr, int *type)
{
vma->vm_end = addr + PAGE_SIZE;
/* MAYWRITE to allow gdb to COW and set breakpoints */
vma->vm_flags = VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC|VM_MAYWRITE;
+ /*
+ * Make sure the vDSO gets into every core dump.
+ * Dumping its contents makes post-mortem fully interpretable later
+ * without matching up the same kernel and hardware config to see
+ * what PC values meant.
+ */
+ vma->vm_flags |= VM_ALWAYSDUMP;
vma->vm_flags |= mm->def_flags;
vma->vm_page_prot = protection_map[vma->vm_flags & 7];
vma->vm_ops = &syscall_vm_ops;
{
return 0;
}
+#endif
depends on MIPS_MT
default y
+config MIPS_MT_SMTC_INSTANT_REPLAY
+ bool "Low-latency Dispatch of Deferred SMTC IPIs"
+ depends on MIPS_MT_SMTC
+ default y
+ help
+ SMTC pseudo-interrupts between TCs are deferred and queued
+ if the target TC is interrupt-inhibited (IXMT). In the first
+ SMTC prototypes, these queued IPIs were serviced on return
+ to user mode, or on entry into the kernel idle loop. The
+ INSTANT_REPLAY option dispatches them as part of local_irq_restore()
+ processing, which adds runtime overhead (hence the option to turn
+ it off), but ensures that IPIs are handled promptly even under
+ heavy I/O interrupt load.
+
config MIPS_VPE_LOADER_TOM
bool "Load VPE program into memory hidden from linux"
depends on MIPS_VPE_LOADER
ifdef CONFIG_MIPS
CHECKFLAGS += $(shell $(CC) $(CFLAGS) -dM -E -xc /dev/null | \
- egrep -vw '__GNUC_(MAJOR|MINOR|PATCHLEVEL)__' | \
+ egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \
sed -e 's/^\#define /-D/' -e "s/ /='/" -e "s/$$/'/")
ifdef CONFIG_64BIT
CHECKFLAGS += -m64
addr += PAGE_SIZE;
}
- printk("Freeing unused PROM memory: %ldk freed\n",
+ printk("Freeing unused PROM memory: %ldkb freed\n",
(end - PAGE_SIZE) >> 10);
return end - PAGE_SIZE;
#include <linux/sched.h>
#include <linux/cpumask.h>
#include <linux/interrupt.h>
+#include <linux/module.h>
#include <asm/cpu.h>
#include <asm/processor.h>
* of their initialization in smtc_cpu_setup().
*/
- tlbsiz = tlbsiz & 0x3f; /* MIPS32 limits TLB indices to 64 */
- cpu_data[0].tlbsize = tlbsiz;
+ /* MIPS32 limits TLB indices to 64 */
+ if (tlbsiz > 64)
+ tlbsiz = 64;
+ cpu_data[0].tlbsize = current_cpu_data.tlbsize = tlbsiz;
smtc_status |= SMTC_TLB_SHARED;
+ local_flush_tlb_all();
printk("TLB of %d entry pairs shared by %d VPEs\n",
tlbsiz, vpes);
* SMTC-specific hacks invoked from elsewhere in the kernel.
*/
+void smtc_ipi_replay(void)
+{
+ /*
+ * To the extent that we've ever turned interrupts off,
+ * we may have accumulated deferred IPIs. This is subtle.
+ * If we use the smtc_ipi_qdepth() macro, we'll get an
+ * exact number - but we'll also disable interrupts
+ * and create a window of failure where a new IPI gets
+ * queued after we test the depth but before we re-enable
+ * interrupts. So long as IXMT never gets set, however,
+ * we should be OK: If we pick up something and dispatch
+ * it here, that's great. If we see nothing, but concurrent
+ * with this operation, another TC sends us an IPI, IXMT
+ * is clear, and we'll handle it as a real pseudo-interrupt
+ * and not a pseudo-pseudo interrupt.
+ */
+ if (IPIQ[smp_processor_id()].depth > 0) {
+ struct smtc_ipi *pipi;
+ extern void self_ipi(struct smtc_ipi *);
+
+ while ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()]))) {
+ self_ipi(pipi);
+ smtc_cpu_stats[smp_processor_id()].selfipis++;
+ }
+ }
+}
+
+EXPORT_SYMBOL(smtc_ipi_replay);
+
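+/*
+ * Illustrative sketch only, not part of this patch: the
+ * CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY option described in Kconfig makes
+ * the interrupt-restore path do the equivalent of the following, so
+ * that deferred IPIs are drained as soon as interrupts come back on
+ * (0x1 is the CP0 Status IE bit; the function name is made up).
+ */
+static inline void smtc_irq_restore_replay(unsigned long flags)
+{
+	if (flags & 0x1)	/* interrupts being re-enabled? */
+		smtc_ipi_replay();
+}
+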
void smtc_idle_loop_hook(void)
{
#ifdef SMTC_IDLE_HOOK_DEBUG
if (pdb_msg != &id_ho_db_msg[0])
printk("CPU%d: %s", smp_processor_id(), id_ho_db_msg);
#endif /* SMTC_IDLE_HOOK_DEBUG */
+
/*
- * To the extent that we've ever turned interrupts off,
- * we may have accumulated deferred IPIs. This is subtle.
- * If we use the smtc_ipi_qdepth() macro, we'll get an
- * exact number - but we'll also disable interrupts
- * and create a window of failure where a new IPI gets
- * queued after we test the depth but before we re-enable
- * interrupts. So long as IXMT never gets set, however,
- * we should be OK: If we pick up something and dispatch
- * it here, that's great. If we see nothing, but concurrent
- * with this operation, another TC sends us an IPI, IXMT
- * is clear, and we'll handle it as a real pseudo-interrupt
- * and not a pseudo-pseudo interrupt.
+ * Replay any accumulated deferred IPIs. If "Instant Replay"
+ * is in use, there should never be any.
*/
- if (IPIQ[smp_processor_id()].depth > 0) {
- struct smtc_ipi *pipi;
- extern void self_ipi(struct smtc_ipi *);
-
- if ((pipi = smtc_ipi_dq(&IPIQ[smp_processor_id()])) != NULL) {
- self_ipi(pipi);
- smtc_cpu_stats[smp_processor_id()].selfipis++;
- }
- }
+#ifndef CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY
+ smtc_ipi_replay();
+#endif /* CONFIG_MIPS_MT_SMTC_INSTANT_REPLAY */
}
void smtc_soft_dump(void)
struct list_head list;
};
-struct vpecontrol_ {
+struct {
/* Virtual processing elements */
struct list_head vpe_list;
/* Thread contexts */
struct list_head tc_list;
-} vpecontrol;
+} vpecontrol = {
+ .vpe_list = LIST_HEAD_INIT(vpecontrol.vpe_list),
+ .tc_list = LIST_HEAD_INIT(vpecontrol.tc_list)
+};
static void release_progmem(void *ptr);
/* static __attribute_used__ void dump_vpe(struct vpe * v); */
/* dump_mtregs(); */
- INIT_LIST_HEAD(&vpecontrol.vpe_list);
- INIT_LIST_HEAD(&vpecontrol.tc_list);
val = read_c0_mvpconf0();
for (i = 0; i < ((val & MVPCONF0_PTC) + 1); i++) {
freed = prom_free_prom_memory();
if (freed)
- printk(KERN_INFO "Freeing firmware memory: %ldk freed\n",freed);
+ printk(KERN_INFO "Freeing firmware memory: %ldkb freed\n",
+ freed >> 10);
free_init_pages("unused kernel memory",
__pa_symbol(&__init_begin),
/*
* Interrupt handing routines for NEC VR4100 series.
*
- * Copyright (C) 2005 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
+ * Copyright (C) 2005-2007 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
if (cascade->get_irq != NULL) {
unsigned int source_irq = irq;
desc = irq_desc + source_irq;
- desc->chip->ack(source_irq);
+ if (desc->chip->mask_ack)
+ desc->chip->mask_ack(source_irq);
+ else {
+ desc->chip->mask(source_irq);
+ desc->chip->ack(source_irq);
+ }
irq = cascade->get_irq(irq);
if (irq < 0)
atomic_inc(&irq_err_count);
else
irq_dispatch(irq);
- desc->chip->end(source_irq);
+ if (!(desc->status & IRQ_DISABLED) && desc->chip->unmask)
+ desc->chip->unmask(source_irq);
} else
do_IRQ(irq);
}
select PPC_970_NAP
select PPC_NATIVE
select PPC_RTAS
+ select ATA_NONSTANDARD if ATA
default n
help
This option enables support for the Maple 970FX Evaluation Board.
* pages though
*/
vma->vm_flags = VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYWRITE|VM_MAYEXEC;
+ /*
+ * Make sure the vDSO gets into every core dump.
+ * Dumping its contents makes post-mortem fully interpretable later
+ * without matching up the same kernel and hardware config to see
+ * what PC values meant.
+ */
+ vma->vm_flags |= VM_ALWAYSDUMP;
vma->vm_flags |= mm->def_flags;
vma->vm_page_prot = protection_map[vma->vm_flags & 0x7];
vma->vm_ops = &vdso_vmops;
rdpr %tl, %g1
cmp %g1, 1
bgu,pn %xcc, winfix_trampoline
- nop
- ba,pt %xcc, sparc64_realfault_common
mov FAULT_CODE_DTLB | FAULT_CODE_WRITE, %g4
+ ba,pt %xcc, sparc64_realfault_common
+ nop
/* Called from trap table:
* %g4: vaddr
choice
prompt "Host memory split"
default HOST_VMSPLIT_3G
- ---help---
- This is needed when the host kernel on which you run has a non-default
- (like 2G/2G) memory split, instead of the customary 3G/1G. If you did
- not recompile your own kernel but use the default distro's one, you can
- safely accept the "Default split" option.
+ help
+ This is needed when the host kernel on which you run has a non-default
+ (like 2G/2G) memory split, instead of the customary 3G/1G. If you did
+ not recompile your own kernel but use the distro's default one, you can
+ safely accept the "Default split" option.
- It can be enabled on recent (>=2.6.16-rc2) vanilla kernels via
- CONFIG_VM_SPLIT_*, or on previous kernels with special patches (-ck
- patchset by Con Kolivas, or other ones) - option names match closely the
- host CONFIG_VM_SPLIT_* ones.
+ It can be enabled on recent (>=2.6.16-rc2) vanilla kernels via
+ CONFIG_VMSPLIT_*, or on previous kernels with special patches (-ck
+ patchset by Con Kolivas, or other ones) - option names match closely the
+ host CONFIG_VMSPLIT_* ones.
- A lower setting (where 1G/3G is lowest and 3G/1G is higher) will
+ tolerate even more "normal" host kernels, but a higher setting will be
- stricter.
+ A lower setting (where 1G/3G is lowest and 3G/1G is higher) will
+ tolerate even more "normal" host kernels, but an higher setting will be
+ stricter.
- So, if you do not know what to do here, say 'Default split'.
+ So, if you do not know what to do here, say 'Default split'.
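+
+ As a hedged tip (the config file path varies by distro), you can
+ usually check which split the host kernel uses with:
+
+     grep VMSPLIT /boot/config-`uname -r`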
config HOST_VMSPLIT_3G
bool "Default split (3G/1G user/kernel host split)"
config STUB_CODE
hex
- default 0xbfffe000 if !HOST_2G_2G
- default 0x7fffe000 if HOST_2G_2G
+ default 0xbfffe000 if !HOST_VMSPLIT_2G
+ default 0x7fffe000 if HOST_VMSPLIT_2G
config STUB_DATA
hex
- default 0xbffff000 if !HOST_2G_2G
- default 0x7ffff000 if HOST_2G_2G
+ default 0xbffff000 if !HOST_VMSPLIT_2G
+ default 0x7ffff000 if HOST_VMSPLIT_2G
config STUB_START
hex
#define ELF_NGREG (sizeof (struct user_regs_struct32) / sizeof(elf_greg_t))
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
-/*
- * These macros parameterize elf_core_dump in fs/binfmt_elf.c to write out
- * extra segments containing the vsyscall DSO contents. Dumping its
- * contents makes post-mortem fully interpretable later without matching up
- * the same kernel and hardware config to see what PC values meant.
- * Dumping its extra ELF program headers includes all the other information
- * a debugger needs to easily find how the vsyscall DSO was being used.
- */
-#define ELF_CORE_EXTRA_PHDRS (find_vma(current->mm, VSYSCALL32_BASE) ? \
- (VSYSCALL32_EHDR->e_phnum) : 0)
-#define ELF_CORE_WRITE_EXTRA_PHDRS \
-do { \
- if (find_vma(current->mm, VSYSCALL32_BASE)) { \
- const struct elf32_phdr *const vsyscall_phdrs = \
- (const struct elf32_phdr *) (VSYSCALL32_BASE \
- + VSYSCALL32_EHDR->e_phoff);\
- int i; \
- Elf32_Off ofs = 0; \
- for (i = 0; i < VSYSCALL32_EHDR->e_phnum; ++i) { \
- struct elf32_phdr phdr = vsyscall_phdrs[i]; \
- if (phdr.p_type == PT_LOAD) { \
- BUG_ON(ofs != 0); \
- ofs = phdr.p_offset = offset; \
- phdr.p_memsz = PAGE_ALIGN(phdr.p_memsz); \
- phdr.p_filesz = phdr.p_memsz; \
- offset += phdr.p_filesz; \
- } \
- else \
- phdr.p_offset += ofs; \
- phdr.p_paddr = 0; /* match other core phdrs */ \
- DUMP_WRITE(&phdr, sizeof(phdr)); \
- } \
- } \
-} while (0)
-#define ELF_CORE_WRITE_EXTRA_DATA \
-do { \
- if (find_vma(current->mm, VSYSCALL32_BASE)) { \
- const struct elf32_phdr *const vsyscall_phdrs = \
- (const struct elf32_phdr *) (VSYSCALL32_BASE \
- + VSYSCALL32_EHDR->e_phoff); \
- int i; \
- for (i = 0; i < VSYSCALL32_EHDR->e_phnum; ++i) { \
- if (vsyscall_phdrs[i].p_type == PT_LOAD) \
- DUMP_WRITE((void *) (u64) vsyscall_phdrs[i].p_vaddr,\
- PAGE_ALIGN(vsyscall_phdrs[i].p_memsz)); \
- } \
- } \
-} while (0)
-
struct elf_siginfo
{
int si_signo; /* signal number */
vma->vm_end = VSYSCALL32_END;
/* MAYWRITE to allow gdb to COW and set breakpoints */
vma->vm_flags = VM_READ|VM_EXEC|VM_MAYREAD|VM_MAYEXEC|VM_MAYWRITE;
+ /*
+ * Make sure the vDSO gets into every core dump.
+ * Dumping its contents makes post-mortem fully interpretable later
+ * without matching up the same kernel and hardware config to see
+ * what PC values meant.
+ */
+ vma->vm_flags |= VM_ALWAYSDUMP;
vma->vm_flags |= mm->def_flags;
vma->vm_page_prot = protection_map[vma->vm_flags & 7];
vma->vm_ops = &syscall32_vm_ops;
return 0;
}
+const char *arch_vma_name(struct vm_area_struct *vma)
+{
+ if (vma->vm_start == VSYSCALL32_BASE &&
+ vma->vm_mm && vma->vm_mm->task_size == IA32_PAGE_OFFSET)
+ return "[vdso]";
+ return NULL;
+}
+
static int __init init_syscall32(void)
{
syscall32_page = (void *)get_zeroed_page(GFP_KERNEL);
if ((nmi >= NMI_INVALID) || (nmi < NMI_NONE))
return 0;
- if ((nmi == NMI_LOCAL_APIC) && (nmi_known_cpu() == 0))
- return 0; /* no lapic support */
nmi_watchdog = nmi;
return 1;
}
*/
rq->cmd_flags |= REQ_SOFTBARRIER;
+ /*
+ * Most requeues happen because of a busy condition,
+ * don't force unplug of the queue for that case.
+ */
+ unplug_it = 0;
+
if (q->ordseq == 0) {
list_add(&rq->queuelist, &q->queue_head);
break;
}
list_add_tail(&rq->queuelist, pos);
- /*
- * most requeues happen because of a busy condition, don't
- * force unplug of the queue for that case.
- */
- unplug_it = 0;
break;
default:
if (result)
return result;
- result = acpi_processor_get_platform_limit(pr);
- if (result)
- return result;
-
return 0;
}
struct acpi_video_device *video_device = data;
struct acpi_device *device = NULL;
-
- printk("video device notify\n");
if (!video_device)
return;
if ATA
+config ATA_NONSTANDARD
+ bool
+ default n
+
config SATA_AHCI
tristate "AHCI SATA support"
depends on PCI
AHCI_CMD_CLR_BUSY = (1 << 10),
RX_FIS_D2H_REG = 0x40, /* offset of D2H Register FIS data */
+ RX_FIS_SDB = 0x58, /* offset of SDB FIS data */
RX_FIS_UNK = 0x60, /* offset of Unknown FIS data */
board_ahci = 0,
dma_addr_t cmd_tbl_dma;
void *rx_fis;
dma_addr_t rx_fis_dma;
+ /* for NCQ spurious interrupt analysis */
+ int ncq_saw_spurious_sdb_cnt;
+ unsigned int ncq_saw_d2h:1;
+ unsigned int ncq_saw_dmas:1;
};
static u32 ahci_scr_read (struct ata_port *ap, unsigned int sc_reg);
{ PCI_VDEVICE(INTEL, 0x27c1), board_ahci }, /* ICH7 */
{ PCI_VDEVICE(INTEL, 0x27c5), board_ahci }, /* ICH7M */
{ PCI_VDEVICE(INTEL, 0x27c3), board_ahci }, /* ICH7R */
- { PCI_VDEVICE(AL, 0x5288), board_ahci }, /* ULi M5288 */
+ { PCI_VDEVICE(AL, 0x5288), board_ahci_ign_iferr }, /* ULi M5288 */
{ PCI_VDEVICE(INTEL, 0x2681), board_ahci }, /* ESB2 */
{ PCI_VDEVICE(INTEL, 0x2682), board_ahci }, /* ESB2 */
{ PCI_VDEVICE(INTEL, 0x2683), board_ahci }, /* ESB2 */
{
u32 cmd, scontrol;
- cmd = readl(port_mmio + PORT_CMD) & ~PORT_CMD_ICC_MASK;
-
- if (cap & HOST_CAP_SSC) {
- /* enable transitions to slumber mode */
- scontrol = readl(port_mmio + PORT_SCR_CTL);
- if ((scontrol & 0x0f00) > 0x100) {
- scontrol &= ~0xf00;
- writel(scontrol, port_mmio + PORT_SCR_CTL);
- }
-
- /* put device into slumber mode */
- writel(cmd | PORT_CMD_ICC_SLUMBER, port_mmio + PORT_CMD);
-
- /* wait for the transition to complete */
- ata_wait_register(port_mmio + PORT_CMD, PORT_CMD_ICC_SLUMBER,
- PORT_CMD_ICC_SLUMBER, 1, 50);
- }
+ if (!(cap & HOST_CAP_SSS))
+ return;
- /* put device into listen mode */
- if (cap & HOST_CAP_SSS) {
- /* first set PxSCTL.DET to 0 */
- scontrol = readl(port_mmio + PORT_SCR_CTL);
- scontrol &= ~0xf;
- writel(scontrol, port_mmio + PORT_SCR_CTL);
+ /* put device into listen mode, first set PxSCTL.DET to 0 */
+ scontrol = readl(port_mmio + PORT_SCR_CTL);
+ scontrol &= ~0xf;
+ writel(scontrol, port_mmio + PORT_SCR_CTL);
- /* then set PxCMD.SUD to 0 */
- cmd &= ~PORT_CMD_SPIN_UP;
- writel(cmd, port_mmio + PORT_CMD);
- }
+ /* then set PxCMD.SUD to 0 */
+ cmd = readl(port_mmio + PORT_CMD) & ~PORT_CMD_ICC_MASK;
+ cmd &= ~PORT_CMD_SPIN_UP;
+ writel(cmd, port_mmio + PORT_CMD);
}
static void ahci_init_port(void __iomem *port_mmio, u32 cap,
/* clear D2H reception area to properly wait for D2H FIS */
ata_tf_init(ap->device, &tf);
- tf.command = 0xff;
+ tf.command = 0x80;
ata_tf_to_fis(&tf, d2h_fis, 0);
rc = sata_std_hardreset(ap, class);
void __iomem *mmio = ap->host->mmio_base;
void __iomem *port_mmio = ahci_port_base(mmio, ap->port_no);
struct ata_eh_info *ehi = &ap->eh_info;
+ struct ahci_port_priv *pp = ap->private_data;
u32 status, qc_active;
- int rc;
+ int rc, known_irq = 0;
status = readl(port_mmio + PORT_IRQ_STAT);
writel(status, port_mmio + PORT_IRQ_STAT);
	/* hmmm... a spurious interrupt */
- /* some devices send D2H reg with I bit set during NCQ command phase */
- if (ap->sactive && (status & PORT_IRQ_D2H_REG_FIS))
+ /* if !NCQ, ignore. No modern ATA device has broken HSM
+ * implementation for non-NCQ commands.
+ */
+ if (!ap->sactive)
return;
- /* ignore interim PIO setup fis interrupts */
- if (ata_tag_valid(ap->active_tag) && (status & PORT_IRQ_PIOS_FIS))
- return;
+ if (status & PORT_IRQ_D2H_REG_FIS) {
+ if (!pp->ncq_saw_d2h)
+ ata_port_printk(ap, KERN_INFO,
+ "D2H reg with I during NCQ, "
+ "this message won't be printed again\n");
+ pp->ncq_saw_d2h = 1;
+ known_irq = 1;
+ }
+
+ if (status & PORT_IRQ_DMAS_FIS) {
+ if (!pp->ncq_saw_dmas)
+ ata_port_printk(ap, KERN_INFO,
+ "DMAS FIS during NCQ, "
+ "this message won't be printed again\n");
+ pp->ncq_saw_dmas = 1;
+ known_irq = 1;
+ }
+
+ if (status & PORT_IRQ_SDB_FIS &&
+ pp->ncq_saw_spurious_sdb_cnt < 10) {
+ /* SDB FIS containing spurious completions might be
+ * dangerous, we need to know more about them. Print
+ * more of it.
+ */
+ const u32 *f = pp->rx_fis + RX_FIS_SDB;
+
+ ata_port_printk(ap, KERN_INFO, "Spurious SDB FIS during NCQ "
+ "issue=0x%x SAct=0x%x FIS=%08x:%08x%s\n",
+ readl(port_mmio + PORT_CMD_ISSUE),
+ readl(port_mmio + PORT_SCR_ACT),
+ le32_to_cpu(f[0]), le32_to_cpu(f[1]),
+ pp->ncq_saw_spurious_sdb_cnt < 10 ?
+ "" : ", shutting up");
+
+ pp->ncq_saw_spurious_sdb_cnt++;
+ known_irq = 1;
+ }
- if (ata_ratelimit())
+ if (!known_irq)
ata_port_printk(ap, KERN_INFO, "spurious interrupt "
- "(irq_stat 0x%x active_tag %d sactive 0x%x)\n",
+ "(irq_stat 0x%x active_tag 0x%x sactive 0x%x)\n",
status, ap->active_tag, ap->sactive);
}
/**
* generic_set_mode - mode setting
* @ap: interface to set up
+ * @unused: returned device on error
*
* Use a non standard set_mode function. We don't want to be tuned.
* The BIOS configured everything. Our job is not to fiddle. We
* and respect them.
*/
-static void generic_set_mode(struct ata_port *ap)
+static int generic_set_mode(struct ata_port *ap, struct ata_device **unused)
{
int dma_enabled = 0;
int i;
for (i = 0; i < ATA_MAX_DEVICES; i++) {
struct ata_device *dev = &ap->device[i];
- if (ata_dev_enabled(dev)) {
+ if (ata_dev_ready(dev)) {
/* We don't really care */
dev->pio_mode = XFER_PIO_0;
dev->dma_mode = XFER_MW_DMA_0;
}
}
}
+ return 0;
}
static struct scsi_host_template generic_sht = {
int i, rc = 0, used_dma = 0, found = 0;
/* has private set_mode? */
- if (ap->ops->set_mode) {
- /* FIXME: make ->set_mode handle no device case and
- * return error code and failing device on failure.
- */
- for (i = 0; i < ATA_MAX_DEVICES; i++) {
- if (ata_dev_ready(&ap->device[i])) {
- ap->ops->set_mode(ap);
- break;
- }
- }
- return 0;
- }
+ if (ap->ops->set_mode)
+ return ap->ops->set_mode(ap, r_failed_dev);
/* step 1: calculate xfer_mask */
for (i = 0; i < ATA_MAX_DEVICES; i++) {
if (cmd->use_sg) {
qc->__sg = (struct scatterlist *) cmd->request_buffer;
qc->n_elem = cmd->use_sg;
- } else {
+ } else if (cmd->request_bufflen) {
qc->__sg = &qc->sgent;
qc->n_elem = 1;
}
*/
void ata_bmdma_post_internal_cmd(struct ata_queued_cmd *qc)
{
- ata_bmdma_stop(qc);
+ if (qc->ap->ioaddr.bmdma_addr)
+ ata_bmdma_stop(qc);
}
#ifdef CONFIG_PCI
pci_resource_start(pdev, 1) | ATA_PCI_CTL_OFS;
bmdma = pci_resource_start(pdev, 4);
if (bmdma) {
- if (inb(bmdma + 2) & 0x80)
+ if ((!(port[p]->flags & ATA_FLAG_IGN_SIMPLEX)) &&
+ (inb(bmdma + 2) & 0x80))
probe_ent->_host_flags |= ATA_HOST_SIMPLEX;
probe_ent->port[p].bmdma_addr = bmdma;
}
bmdma = pci_resource_start(pdev, 4);
if (bmdma) {
bmdma += 8;
- if(inb(bmdma + 2) & 0x80)
+ if ((!(port[p]->flags & ATA_FLAG_IGN_SIMPLEX)) &&
+ (inb(bmdma + 2) & 0x80))
probe_ent->_host_flags |= ATA_HOST_SIMPLEX;
probe_ent->port[p].bmdma_addr = bmdma;
}
probe_ent->irq_flags = IRQF_SHARED;
if (port_mask & ATA_PORT_PRIMARY) {
- probe_ent->irq = ATA_PRIMARY_IRQ;
+ probe_ent->irq = ATA_PRIMARY_IRQ(pdev);
probe_ent->port[0].cmd_addr = ATA_PRIMARY_CMD;
probe_ent->port[0].altstatus_addr =
probe_ent->port[0].ctl_addr = ATA_PRIMARY_CTL;
if (bmdma) {
probe_ent->port[0].bmdma_addr = bmdma;
- if (inb(bmdma + 2) & 0x80)
+ if ((!(port[0]->flags & ATA_FLAG_IGN_SIMPLEX)) &&
+ (inb(bmdma + 2) & 0x80))
probe_ent->_host_flags |= ATA_HOST_SIMPLEX;
}
ata_std_ports(&probe_ent->port[0]);
if (port_mask & ATA_PORT_SECONDARY) {
if (probe_ent->irq)
- probe_ent->irq2 = ATA_SECONDARY_IRQ;
+ probe_ent->irq2 = ATA_SECONDARY_IRQ(pdev);
else
- probe_ent->irq = ATA_SECONDARY_IRQ;
+ probe_ent->irq = ATA_SECONDARY_IRQ(pdev);
probe_ent->port[1].cmd_addr = ATA_SECONDARY_CMD;
probe_ent->port[1].altstatus_addr =
probe_ent->port[1].ctl_addr = ATA_SECONDARY_CTL;
if (bmdma) {
probe_ent->port[1].bmdma_addr = bmdma + 8;
- if (inb(bmdma + 10) & 0x80)
+ if ((!(port[1]->flags & ATA_FLAG_IGN_SIMPLEX)) &&
+ (inb(bmdma + 10) & 0x80))
probe_ent->_host_flags |= ATA_HOST_SIMPLEX;
}
ata_std_ports(&probe_ent->port[1]);
static void cmd64x_set_dmamode(struct ata_port *ap, struct ata_device *adev)
{
static const u8 udma_data[] = {
- 0x31, 0x21, 0x11, 0x25, 0x15, 0x05
+ 0x30, 0x20, 0x10, 0x20, 0x10, 0x00
};
static const u8 mwdma_data[] = {
0x30, 0x20, 0x10
pci_read_config_byte(pdev, pciD, ®D);
pci_read_config_byte(pdev, pciU, ®U);
- regD &= ~(0x20 << shift);
- regU &= ~(0x35 << shift);
+ /* DMA bits off */
+ regD &= ~(0x20 << adev->devno);
+ /* DMA control bits */
+ regU &= ~(0x30 << shift);
+ /* DMA timing bits */
+ regU &= ~(0x05 << adev->devno);
- if (adev->dma_mode >= XFER_UDMA_0)
+ if (adev->dma_mode >= XFER_UDMA_0) {
+ /* Merge the timing value */
regU |= udma_data[adev->dma_mode - XFER_UDMA_0] << shift;
- else
+ /* Merge the control bits */
+ regU |= 1 << adev->devno; /* UDMA on */
+ if (adev->dma_mode > 2) /* 15nS timing */
+ regU |= 4 << adev->devno;
+ } else
regD |= mwdma_data[adev->dma_mode - XFER_MW_DMA_0] << shift;
regD |= 0x20 << adev->devno;
struct ata_port *ap = qc->ap;
struct pci_dev *pdev = to_pci_dev(ap->host->dev);
u8 dma_intr;
- int dma_reg = ap->port_no ? ARTTIM23_INTR_CH1 : CFR_INTR_CH0;
- int dma_mask = ap->port_no ? ARTTIM2 : CFR;
+ int dma_mask = ap->port_no ? ARTTIM23_INTR_CH1 : CFR_INTR_CH0;
+ int dma_reg = ap->port_no ? ARTTIM2 : CFR;
ata_bmdma_stop(qc);
#include <linux/libata.h>
#define DRV_NAME "pata_hpt3x2n"
-#define DRV_VERSION "0.3"
+#define DRV_VERSION "0.3.2"
enum {
HPT_PCI_FAST = (1 << 31),
return 0;
}
-static int hpt3x2n_use_dpll(struct ata_port *ap, int reading)
+static int hpt3x2n_use_dpll(struct ata_port *ap, int writing)
{
long flags = (long)ap->host->private_data;
/* See if we should use the DPLL */
- if (reading == 0)
+ if (writing)
return USE_DPLL; /* Needed for write */
if (flags & PCI66)
return USE_DPLL; /* Needed at 66Mhz */
/**
* it821x_smart_set_mode - mode setting
* @ap: interface to set up
+ * @unused: device that failed (error only)
*
* Use a non standard set_mode function. We don't want to be tuned.
* The BIOS configured everything. Our job is not to fiddle. We
* and respect them.
*/
-static void it821x_smart_set_mode(struct ata_port *ap)
+static int it821x_smart_set_mode(struct ata_port *ap, struct ata_device **unused)
{
int dma_enabled = 0;
int i;
}
}
}
+ return 0;
}
/**
#include <scsi/scsi_host.h>
#define DRV_NAME "pata_ixp4xx_cf"
-#define DRV_VERSION "0.1.1"
+#define DRV_VERSION "0.1.1ac1"
-static void ixp4xx_set_mode(struct ata_port *ap)
+static int ixp4xx_set_mode(struct ata_port *ap, struct ata_device *adev)
{
int i;
dev->flags |= ATA_DFLAG_PIO;
}
}
+ return 0;
}
static void ixp4xx_phy_reset(struct ata_port *ap)
/**
* legacy_set_mode - mode setting
* @ap: IDE interface
+ * @unused: Device that failed when error is returned
*
* Use a non standard set_mode function. We don't want to be tuned.
*
* expand on this as per hdparm in the base kernel.
*/
-static void legacy_set_mode(struct ata_port *ap)
+static int legacy_set_mode(struct ata_port *ap, struct ata_device **unused)
{
int i;
dev->flags |= ATA_DFLAG_PIO;
}
}
+ return 0;
}
static struct scsi_host_template legacy_sht = {
/**
* rz1000_set_mode - mode setting function
* @ap: ATA interface
+ * @unused: returned device on set_mode failure
*
* Use a non standard set_mode function. We don't want to be tuned. We
* would prefer to be BIOS generic but for the fact our hardware is
* whacked out.
*/
-static void rz1000_set_mode(struct ata_port *ap)
+static int rz1000_set_mode(struct ata_port *ap, struct ata_device **unused)
{
int i;
for (i = 0; i < ATA_MAX_DEVICES; i++) {
struct ata_device *dev = &ap->device[i];
- if (ata_dev_enabled(dev)) {
+ if (ata_dev_ready(dev)) {
/* We don't really care */
dev->pio_mode = XFER_PIO_0;
dev->xfer_mode = XFER_PIO_0;
dev->flags |= ATA_DFLAG_PIO;
}
}
+ return 0;
}
static int nv_host_intr(struct ata_port *ap, u8 irq_stat)
{
struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->active_tag);
- int handled;
/* freeze if hotplugged */
if (unlikely(irq_stat & (NV_INT_ADDED | NV_INT_REMOVED))) {
}
/* handle interrupt */
- handled = ata_host_intr(ap, qc);
- if (unlikely(!handled)) {
- /* spurious, clear it */
- ata_check_status(ap);
- }
-
- return 1;
+ return ata_host_intr(ap, qc);
}
static irqreturn_t nv_adma_interrupt(int irq, void *dev_instance)
if (pp->flags & NV_ADMA_PORT_REGISTER_MODE) {
u8 irq_stat = readb(host->mmio_base + NV_INT_STATUS_CK804)
>> (NV_INT_PORT_SHIFT * i);
+ if (ata_tag_valid(ap->active_tag))
+ /* NV_INT_DEV indication seems unreliable at times,
+  * at least in ADMA mode. Force it on always when a
+  * command is active, to prevent losing interrupts.
+  */
+ irq_stat |= NV_INT_DEV;
handled += nv_host_intr(ap, irq_stat);
continue;
}
static struct ata_port_info uli_port_info = {
.sht = &uli_sht,
- .flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY,
+ .flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
+ ATA_FLAG_IGN_SIMPLEX,
.pio_mask = 0x1f, /* pio0-4 */
.udma_mask = 0x7f, /* udma0-6 */
.port_ops = &uli_ops,
static int svia_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
static u32 svia_scr_read (struct ata_port *ap, unsigned int sc_reg);
static void svia_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val);
+static void svia_noop_freeze(struct ata_port *ap);
static void vt6420_error_handler(struct ata_port *ap);
static const struct pci_device_id svia_pci_tbl[] = {
.qc_issue = ata_qc_issue_prot,
.data_xfer = ata_pio_data_xfer,
- .freeze = ata_bmdma_freeze,
+ .freeze = svia_noop_freeze,
.thaw = ata_bmdma_thaw,
.error_handler = vt6420_error_handler,
.post_internal_cmd = ata_bmdma_post_internal_cmd,
outl(val, ap->ioaddr.scr_addr + (4 * sc_reg));
}
+static void svia_noop_freeze(struct ata_port *ap)
+{
+ /* Some VIA controllers choke if ATA_NIEN is manipulated in a
+ * certain way. Leave it alone and just clear the pending IRQ.
+ */
+ ata_chk_status(ap);
+ ata_bmdma_irq_clear(ap);
+}
+
/**
* vt6420_prereset - prereset for vt6420
* @ap: target ATA port
/********** initialise a card **********/
-static int __init hrz_init (hrz_dev * dev) {
+static int __devinit hrz_init (hrz_dev * dev) {
int onefivefive;
u16 chan;
static void switchover_timeout(unsigned long data);
static struct timer_list switchover_timer =
TIMER_INITIALIZER(switchover_timeout , 0, 0);
+static unsigned long tlclk_timer_data;
static struct tlclk_alarms *alarm_events;
static DECLARE_WAIT_QUEUE_HEAD(wq);
+static unsigned long useflags;
+static DEFINE_MUTEX(tlclk_mutex);
+
static int tlclk_open(struct inode *inode, struct file *filp)
{
int result;
+ if (test_and_set_bit(0, &useflags))
+ return -EBUSY;
+ /* this legacy device is always one per system and it doesn't
+ * know how to handle multiple concurrent clients.
+ */
+
/* Make sure there is no interrupt pending while
* initialising interrupt handler */
inb(TLCLK_REG6);
static int tlclk_release(struct inode *inode, struct file *filp)
{
free_irq(telclk_interrupt, tlclk_interrupt);
+ clear_bit(0, &useflags);
return 0;
}
{
if (count < sizeof(struct tlclk_alarms))
return -EIO;
+ if (mutex_lock_interruptible(&tlclk_mutex))
+ return -EINTR;
+
wait_event_interruptible(wq, got_event);
- if (copy_to_user(buf, alarm_events, sizeof(struct tlclk_alarms)))
+ if (copy_to_user(buf, alarm_events, sizeof(struct tlclk_alarms))) {
+ mutex_unlock(&tlclk_mutex);
return -EFAULT;
+ }
memset(alarm_events, 0, sizeof(struct tlclk_alarms));
got_event = 0;
+ mutex_unlock(&tlclk_mutex);
return sizeof(struct tlclk_alarms);
}
-static ssize_t tlclk_write(struct file *filp, const char __user *buf, size_t count,
- loff_t *f_pos)
-{
- return 0;
-}
-
static const struct file_operations tlclk_fops = {
.read = tlclk_read,
- .write = tlclk_write,
.open = tlclk_open,
.release = tlclk_release,
SET_PORT_BITS(TLCLK_REG3, 0xf8, 0x7);
switch (val) {
case CLK_8_592MHz:
- SET_PORT_BITS(TLCLK_REG0, 0xfc, 1);
+ SET_PORT_BITS(TLCLK_REG0, 0xfc, 2);
break;
case CLK_11_184MHz:
SET_PORT_BITS(TLCLK_REG0, 0xfc, 0);
SET_PORT_BITS(TLCLK_REG0, 0xfc, 3);
break;
case CLK_44_736MHz:
- SET_PORT_BITS(TLCLK_REG0, 0xfc, 2);
+ SET_PORT_BITS(TLCLK_REG0, 0xfc, 1);
break;
}
} else
static void switchover_timeout(unsigned long data)
{
- if ((data & 1)) {
- if ((inb(TLCLK_REG1) & 0x08) != (data & 0x08))
+ unsigned long flags = *(unsigned long *) data;
+
+ if ((flags & 1)) {
+ if ((inb(TLCLK_REG1) & 0x08) != (flags & 0x08))
alarm_events->switchover_primary++;
} else {
- if ((inb(TLCLK_REG1) & 0x08) != (data & 0x08))
+ if ((inb(TLCLK_REG1) & 0x08) != (flags & 0x08))
alarm_events->switchover_secondary++;
}
/* TIMEOUT in ~10ms */
switchover_timer.expires = jiffies + msecs_to_jiffies(10);
- switchover_timer.data = inb(TLCLK_REG1);
- add_timer(&switchover_timer);
+ tlclk_timer_data = inb(TLCLK_REG1);
+ switchover_timer.data = (unsigned long) &tlclk_timer_data;
+ mod_timer(&switchover_timer, switchover_timer.expires);
} else {
got_event = 1;
wake_up(&wq);
*
* Copyright (C) 2002 MontaVista Software Inc.
* Author: Yoichi Yuasa <yyuasa@mvista.com or source@mvista.com>
- * Copyright (C) 2003-2005 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
+ * Copyright (C) 2003-2007 Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
return data;
}
-static unsigned int startup_giuint_low_irq(unsigned int irq)
+static void ack_giuint_low(unsigned int irq)
{
- unsigned int pin;
-
- pin = GPIO_PIN_OF_IRQ(irq);
- giu_write(GIUINTSTATL, 1 << pin);
- giu_set(GIUINTENL, 1 << pin);
-
- return 0;
+ giu_write(GIUINTSTATL, 1 << GPIO_PIN_OF_IRQ(irq));
}
-static void shutdown_giuint_low_irq(unsigned int irq)
+static void mask_giuint_low(unsigned int irq)
{
giu_clear(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq));
}
-static void enable_giuint_low_irq(unsigned int irq)
-{
- giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq));
-}
-
-#define disable_giuint_low_irq shutdown_giuint_low_irq
-
-static void ack_giuint_low_irq(unsigned int irq)
+static void mask_ack_giuint_low(unsigned int irq)
{
unsigned int pin;
giu_write(GIUINTSTATL, 1 << pin);
}
-static void end_giuint_low_irq(unsigned int irq)
+static void unmask_giuint_low(unsigned int irq)
{
- if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
- giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq));
+ giu_set(GIUINTENL, 1 << GPIO_PIN_OF_IRQ(irq));
}
-static struct hw_interrupt_type giuint_low_irq_type = {
- .typename = "GIUINTL",
- .startup = startup_giuint_low_irq,
- .shutdown = shutdown_giuint_low_irq,
- .enable = enable_giuint_low_irq,
- .disable = disable_giuint_low_irq,
- .ack = ack_giuint_low_irq,
- .end = end_giuint_low_irq,
+static struct irq_chip giuint_low_irq_chip = {
+ .name = "GIUINTL",
+ .ack = ack_giuint_low,
+ .mask = mask_giuint_low,
+ .mask_ack = mask_ack_giuint_low,
+ .unmask = unmask_giuint_low,
};
-static unsigned int startup_giuint_high_irq(unsigned int irq)
+static void ack_giuint_high(unsigned int irq)
{
- unsigned int pin;
-
- pin = GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET;
- giu_write(GIUINTSTATH, 1 << pin);
- giu_set(GIUINTENH, 1 << pin);
-
- return 0;
+ giu_write(GIUINTSTATH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET));
}
-static void shutdown_giuint_high_irq(unsigned int irq)
+static void mask_giuint_high(unsigned int irq)
{
giu_clear(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET));
}
-static void enable_giuint_high_irq(unsigned int irq)
-{
- giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET));
-}
-
-#define disable_giuint_high_irq shutdown_giuint_high_irq
-
-static void ack_giuint_high_irq(unsigned int irq)
+static void mask_ack_giuint_high(unsigned int irq)
{
unsigned int pin;
giu_write(GIUINTSTATH, 1 << pin);
}
-static void end_giuint_high_irq(unsigned int irq)
+static void unmask_giuint_high(unsigned int irq)
{
- if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
- giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET));
+ giu_set(GIUINTENH, 1 << (GPIO_PIN_OF_IRQ(irq) - GIUINT_HIGH_OFFSET));
}
-static struct hw_interrupt_type giuint_high_irq_type = {
- .typename = "GIUINTH",
- .startup = startup_giuint_high_irq,
- .shutdown = shutdown_giuint_high_irq,
- .enable = enable_giuint_high_irq,
- .disable = disable_giuint_high_irq,
- .ack = ack_giuint_high_irq,
- .end = end_giuint_high_irq,
+static struct irq_chip giuint_high_irq_chip = {
+ .name = "GIUINTH",
+ .ack = ack_giuint_high,
+ .mask = mask_giuint_high,
+ .mask_ack = mask_ack_giuint_high,
+ .unmask = unmask_giuint_high,
};
static int giu_get_irq(unsigned int irq)
break;
}
}
+ set_irq_chip_and_handler(GIU_IRQ(pin),
+ &giuint_low_irq_chip,
+ handle_edge_irq);
} else {
giu_clear(GIUINTTYPL, mask);
giu_clear(GIUINTHTSELL, mask);
+ set_irq_chip_and_handler(GIU_IRQ(pin),
+ &giuint_low_irq_chip,
+ handle_level_irq);
}
giu_write(GIUINTSTATL, mask);
} else if (pin < GIUINT_HIGH_MAX) {
break;
}
}
+ set_irq_chip_and_handler(GIU_IRQ(pin),
+ &giuint_high_irq_chip,
+ handle_edge_irq);
} else {
giu_clear(GIUINTTYPH, mask);
giu_clear(GIUINTHTSELH, mask);
+ set_irq_chip_and_handler(GIU_IRQ(pin),
+ &giuint_high_irq_chip,
+ handle_level_irq);
}
giu_write(GIUINTSTATH, mask);
}
static int __devinit giu_probe(struct platform_device *dev)
{
unsigned long start, size, flags = 0;
- unsigned int nr_pins = 0;
+ unsigned int nr_pins = 0, trigger, i, pin;
struct resource *res1, *res2 = NULL;
void *base;
- int retval, i;
+ struct irq_chip *chip;
+ int retval;
switch (current_cpu_data.cputype) {
case CPU_VR4111:
giu_write(GIUINTENL, 0);
giu_write(GIUINTENH, 0);
+ trigger = giu_read(GIUINTTYPH) << 16;
+ trigger |= giu_read(GIUINTTYPL);
for (i = GIU_IRQ_BASE; i <= GIU_IRQ_LAST; i++) {
- if (i < GIU_IRQ(GIUINT_HIGH_OFFSET))
- irq_desc[i].chip = &giuint_low_irq_type;
+ pin = GPIO_PIN_OF_IRQ(i);
+ if (pin < GIUINT_HIGH_OFFSET)
+ chip = &giuint_low_irq_chip;
else
- irq_desc[i].chip = &giuint_high_irq_type;
+ chip = &giuint_high_irq_chip;
+
+ if (trigger & (1 << pin))
+ set_irq_chip_and_handler(i, chip, handle_edge_irq);
+ else
+ set_irq_chip_and_handler(i, chip, handle_level_irq);
+
}
return cascade_irq(GIUINT_IRQ, giu_get_irq);
struct kobject kobj;
};
-#define get_efivar_entry(n) list_entry(n, struct efivar_entry, list)
-
struct efivar_attribute {
struct attribute attr;
ssize_t (*show) (struct efivar_entry *entry, char *buf);
static void efivar_release(struct kobject *kobj)
{
struct efivar_entry *var = container_of(kobj, struct efivar_entry, kobj);
- spin_lock(&efivars_lock);
- list_del(&var->list);
- spin_unlock(&efivars_lock);
kfree(var);
}
efivar_create(struct subsystem *sub, const char *buf, size_t count)
{
struct efi_variable *new_var = (struct efi_variable *)buf;
- struct efivar_entry *search_efivar = NULL;
+ struct efivar_entry *search_efivar, *n;
unsigned long strsize1, strsize2;
- struct list_head *pos, *n;
efi_status_t status = EFI_NOT_FOUND;
int found = 0;
/*
* Does this variable already exist?
*/
- list_for_each_safe(pos, n, &efivar_list) {
- search_efivar = get_efivar_entry(pos);
+ list_for_each_entry_safe(search_efivar, n, &efivar_list, list) {
strsize1 = utf8_strsize(search_efivar->var.VariableName, 1024);
strsize2 = utf8_strsize(new_var->VariableName, 1024);
if (strsize1 == strsize2 &&
efivar_delete(struct subsystem *sub, const char *buf, size_t count)
{
struct efi_variable *del_var = (struct efi_variable *)buf;
- struct efivar_entry *search_efivar = NULL;
+ struct efivar_entry *search_efivar, *n;
unsigned long strsize1, strsize2;
- struct list_head *pos, *n;
efi_status_t status = EFI_NOT_FOUND;
int found = 0;
/*
* Does this variable already exist?
*/
- list_for_each_safe(pos, n, &efivar_list) {
- search_efivar = get_efivar_entry(pos);
+ list_for_each_entry_safe(search_efivar, n, &efivar_list, list) {
strsize1 = utf8_strsize(search_efivar->var.VariableName, 1024);
strsize2 = utf8_strsize(del_var->VariableName, 1024);
if (strsize1 == strsize2 &&
spin_unlock(&efivars_lock);
return -EIO;
}
+ list_del(&search_efivar->list);
/* We need to release this lock before unregistering. */
spin_unlock(&efivars_lock);
-
efivar_unregister(search_efivar);
/* It's dead Jim.... */
static void __exit
efivars_exit(void)
{
- struct list_head *pos, *n;
+ struct efivar_entry *entry, *n;
- list_for_each_safe(pos, n, &efivar_list)
- efivar_unregister(get_efivar_entry(pos));
+ list_for_each_entry_safe(entry, n, &efivar_list, list) {
+ spin_lock(&efivars_lock);
+ list_del(&entry->list);
+ spin_unlock(&efivars_lock);
+ efivar_unregister(entry);
+ }
subsystem_unregister(&vars_subsys);
firmware_unregister(&efi_subsys);
unsigned long flags;
spin_lock_irqsave(&ehca_cq_idr_lock, flags);
- while (my_cq->nr_callbacks)
+ while (my_cq->nr_callbacks) {
+ spin_unlock_irqrestore(&ehca_cq_idr_lock, flags);
yield();
+ spin_lock_irqsave(&ehca_cq_idr_lock, flags);
+ }
idr_remove(&ehca_cq_idr, my_cq->token);
spin_unlock_irqrestore(&ehca_cq_idr_lock, flags);
cq = idr_find(&ehca_cq_idr, token);
if (cq == NULL) {
- spin_unlock(&ehca_cq_idr_lock);
+ spin_unlock_irqrestore(&ehca_cq_idr_lock,
+ flags);
break;
}
switch (token) {
case SRP_OPT_ID_EXT:
p = match_strdup(args);
+ if (!p) {
+ ret = -ENOMEM;
+ goto out;
+ }
target->id_ext = cpu_to_be64(simple_strtoull(p, NULL, 16));
kfree(p);
break;
case SRP_OPT_IOC_GUID:
p = match_strdup(args);
+ if (!p) {
+ ret = -ENOMEM;
+ goto out;
+ }
target->ioc_guid = cpu_to_be64(simple_strtoull(p, NULL, 16));
kfree(p);
break;
case SRP_OPT_DGID:
p = match_strdup(args);
+ if (!p) {
+ ret = -ENOMEM;
+ goto out;
+ }
if (strlen(p) != 32) {
printk(KERN_WARNING PFX "bad dest GID parameter '%s'\n", p);
kfree(p);
case SRP_OPT_SERVICE_ID:
p = match_strdup(args);
+ if (!p) {
+ ret = -ENOMEM;
+ goto out;
+ }
target->service_id = cpu_to_be64(simple_strtoull(p, NULL, 16));
kfree(p);
break;
case SRP_OPT_INITIATOR_EXT:
p = match_strdup(args);
+ if (!p) {
+ ret = -ENOMEM;
+ goto out;
+ }
target->initiator_ext = cpu_to_be64(simple_strtoull(p, NULL, 16));
kfree(p);
break;
{
unsigned long flags;
unsigned i;
- static struct cardstate *ret = NULL;
+ struct cardstate *ret = NULL;
spin_lock_irqsave(&drv->lock, flags);
for (i = 0; i < drv->minors; ++i) {
if (!(drv->flags[i] & VALID_MINOR)) {
- drv->flags[i] = VALID_MINOR;
- ret = drv->cs + i;
- }
- if (ret)
+ if (try_module_get(drv->owner)) {
+ drv->flags[i] = VALID_MINOR;
+ ret = drv->cs + i;
+ }
break;
+ }
}
spin_unlock_irqrestore(&drv->lock, flags);
return ret;
unsigned long flags;
struct gigaset_driver *drv = cs->driver;
spin_lock_irqsave(&drv->lock, flags);
+ if (drv->flags[cs->minor_index] & VALID_MINOR)
+ module_put(drv->owner);
drv->flags[cs->minor_index] = 0;
spin_unlock_irqrestore(&drv->lock, flags);
}
} else if ((bcs->skb = dev_alloc_skb(SBUFSIZE + HW_HDR_LEN)) != NULL)
skb_reserve(bcs->skb, HW_HDR_LEN);
else {
- warn("could not allocate skb\n");
+ warn("could not allocate skb");
bcs->inputstate |= INS_skip_frame;
}
int i;
gig_dbg(DEBUG_INIT, "allocating cs");
- cs = alloc_cs(drv);
- if (!cs)
- goto error;
+ if (!(cs = alloc_cs(drv))) {
+ err("maximum number of devices exceeded");
+ return NULL;
+ }
+ mutex_init(&cs->mutex);
+ mutex_lock(&cs->mutex);
+
gig_dbg(DEBUG_INIT, "allocating bcs[0..%d]", channels - 1);
cs->bcs = kmalloc(channels * sizeof(struct bc_state), GFP_KERNEL);
- if (!cs->bcs)
+ if (!cs->bcs) {
+ err("out of memory");
goto error;
+ }
gig_dbg(DEBUG_INIT, "allocating inbuf");
cs->inbuf = kmalloc(sizeof(struct inbuf_t), GFP_KERNEL);
- if (!cs->inbuf)
+ if (!cs->inbuf) {
+ err("out of memory");
goto error;
+ }
cs->cs_init = 0;
cs->channels = channels;
spin_lock_init(&cs->ev_lock);
cs->ev_tail = 0;
cs->ev_head = 0;
- mutex_init(&cs->mutex);
- mutex_lock(&cs->mutex);
tasklet_init(&cs->event_tasklet, &gigaset_handle_event,
(unsigned long) cs);
for (i = 0; i < channels; ++i) {
gig_dbg(DEBUG_INIT, "setting up bcs[%d].read", i);
- if (!gigaset_initbcs(cs->bcs + i, cs, i))
+ if (!gigaset_initbcs(cs->bcs + i, cs, i)) {
+ err("could not allocate channel %d data", i);
goto error;
+ }
}
++cs->cs_init;
make_valid(cs, VALID_ID);
++cs->cs_init;
gig_dbg(DEBUG_INIT, "setting up hw");
- if (!cs->ops->initcshw(cs))
+ if (!cs->ops->initcshw(cs)) {
+ err("could not allocate device specific data");
goto error;
+ }
++cs->cs_init;
mutex_unlock(&cs->mutex);
return cs;
-error: if (cs)
- mutex_unlock(&cs->mutex);
+error:
+ mutex_unlock(&cs->mutex);
gig_dbg(DEBUG_INIT, "failed");
gigaset_freecs(cs);
return NULL;
spin_unlock_irqrestore(&driver_lock, flags);
gigaset_if_freedriver(drv);
- module_put(drv->owner);
kfree(drv->cs);
kfree(drv->flags);
if (!drv)
return NULL;
- if (!try_module_get(owner))
- goto out1;
-
- drv->cs = NULL;
drv->have_tty = 0;
drv->minor = minor;
drv->minors = minors;
drv->cs = kmalloc(minors * sizeof *drv->cs, GFP_KERNEL);
if (!drv->cs)
- goto out2;
+ goto error;
drv->flags = kmalloc(minors * sizeof *drv->flags, GFP_KERNEL);
if (!drv->flags)
- goto out3;
+ goto error;
for (i = 0; i < minors; ++i) {
drv->flags[i] = 0;
return drv;
-out3:
+error:
kfree(drv->cs);
-out2:
- module_put(owner);
-out1:
kfree(drv);
return NULL;
}
u64 pdptrs[4]; /* pae */
u64 shadow_efer;
u64 apic_base;
+ u64 ia32_misc_enable_msr;
int nmsrs;
struct vmx_msr_entry *guest_msrs;
struct vmx_msr_entry *host_msrs;
static void kvm_free_vcpu(struct kvm_vcpu *vcpu)
{
+ vcpu_load(vcpu->kvm, vcpu_slot(vcpu));
kvm_mmu_destroy(vcpu);
+ vcpu_put(vcpu);
kvm_arch_ops->vcpu_free(vcpu);
}
case MSR_IA32_APICBASE:
data = vcpu->apic_base;
break;
+ case MSR_IA32_MISC_ENABLE:
+ data = vcpu->ia32_misc_enable_msr;
+ break;
#ifdef CONFIG_X86_64
case MSR_EFER:
data = vcpu->shadow_efer;
case MSR_IA32_APICBASE:
vcpu->apic_base = data;
break;
+ case MSR_IA32_MISC_ENABLE:
+ vcpu->ia32_misc_enable_msr = data;
+ break;
default:
printk(KERN_ERR "kvm: unhandled wrmsr: 0x%x\n", msr);
return 1;
static unsigned num_msrs_to_save;
+static u32 emulated_msrs[] = {
+ MSR_IA32_MISC_ENABLE,
+};
+
static __init void kvm_init_msr_list(void)
{
u32 dummy[2];
if (copy_from_user(&msr_list, user_msr_list, sizeof msr_list))
goto out;
n = msr_list.nmsrs;
- msr_list.nmsrs = num_msrs_to_save;
+ msr_list.nmsrs = num_msrs_to_save + ARRAY_SIZE(emulated_msrs);
if (copy_to_user(user_msr_list, &msr_list, sizeof msr_list))
goto out;
r = -E2BIG;
if (copy_to_user(user_msr_list->indices, &msrs_to_save,
num_msrs_to_save * sizeof(u32)))
goto out;
+ if (copy_to_user(user_msr_list->indices
+ + num_msrs_to_save * sizeof(u32),
+ &emulated_msrs,
+ ARRAY_SIZE(emulated_msrs) * sizeof(u32)))
+ goto out;
r = 0;
break;
}
#define PFERR_PRESENT_MASK (1U << 0)
#define PFERR_WRITE_MASK (1U << 1)
#define PFERR_USER_MASK (1U << 2)
+#define PFERR_FETCH_MASK (1U << 4)
#define PT64_ROOT_LEVEL 4
#define PT32_ROOT_LEVEL 2
return 1;
}
+static int is_nx(struct kvm_vcpu *vcpu)
+{
+ return vcpu->shadow_efer & EFER_NX;
+}
+
static int is_present_pte(unsigned long pte)
{
return pte & PT_PRESENT_MASK;
return 0;
}
-static int may_access(u64 pte, int write, int user)
-{
-
- if (user && !(pte & PT_USER_MASK))
- return 0;
- if (write && !(pte & PT_WRITABLE_MASK))
- return 0;
- return 1;
-}
-
static void paging_free(struct kvm_vcpu *vcpu)
{
nonpaging_free(vcpu);
pt_element_t *ptep;
pt_element_t inherited_ar;
gfn_t gfn;
+ u32 error_code;
};
/*
* Fetch a guest pte for a guest virtual address
*/
-static void FNAME(walk_addr)(struct guest_walker *walker,
- struct kvm_vcpu *vcpu, gva_t addr)
+static int FNAME(walk_addr)(struct guest_walker *walker,
+ struct kvm_vcpu *vcpu, gva_t addr,
+ int write_fault, int user_fault, int fetch_fault)
{
hpa_t hpa;
struct kvm_memory_slot *slot;
walker->ptep = &vcpu->pdptrs[(addr >> 30) & 3];
root = *walker->ptep;
if (!(root & PT_PRESENT_MASK))
- return;
+ goto not_present;
--walker->level;
}
#endif
ASSERT(((unsigned long)walker->table & PAGE_MASK) ==
((unsigned long)ptep & PAGE_MASK));
- if (is_present_pte(*ptep) && !(*ptep & PT_ACCESSED_MASK))
- *ptep |= PT_ACCESSED_MASK;
-
if (!is_present_pte(*ptep))
- break;
+ goto not_present;
+
+ if (write_fault && !is_writeble_pte(*ptep))
+ if (user_fault || is_write_protection(vcpu))
+ goto access_error;
+
+ if (user_fault && !(*ptep & PT_USER_MASK))
+ goto access_error;
+
+#if PTTYPE == 64
+ if (fetch_fault && is_nx(vcpu) && (*ptep & PT64_NX_MASK))
+ goto access_error;
+#endif
+
+ if (!(*ptep & PT_ACCESSED_MASK))
+ *ptep |= PT_ACCESSED_MASK; /* avoid rmw */
if (walker->level == PT_PAGE_TABLE_LEVEL) {
walker->gfn = (*ptep & PT_BASE_ADDR_MASK)
}
walker->ptep = ptep;
pgprintk("%s: pte %llx\n", __FUNCTION__, (u64)*ptep);
+ return 1;
+
+not_present:
+ walker->error_code = 0;
+ goto err;
+
+access_error:
+ walker->error_code = PFERR_PRESENT_MASK;
+
+err:
+ if (write_fault)
+ walker->error_code |= PFERR_WRITE_MASK;
+ if (user_fault)
+ walker->error_code |= PFERR_USER_MASK;
+ if (fetch_fault)
+ walker->error_code |= PFERR_FETCH_MASK;
+ return 0;
}
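
For reference, walker->error_code above follows the hardware x86 page-fault
error-code layout given by the PFERR_* masks defined earlier in this series.
A minimal userspace sketch (illustrative only, not part of the patch) that
decodes such a value:

	#include <stdio.h>

	#define PFERR_PRESENT_MASK (1U << 0)
	#define PFERR_WRITE_MASK   (1U << 1)
	#define PFERR_USER_MASK    (1U << 2)
	#define PFERR_FETCH_MASK   (1U << 4)

	/* Print the fault class encoded in a walker error code. */
	static void decode_pferr(unsigned int ec)
	{
		printf("%s %s access in %s mode%s\n",
		       ec & PFERR_PRESENT_MASK ? "protection" : "not-present",
		       ec & PFERR_WRITE_MASK ? "write" : "read",
		       ec & PFERR_USER_MASK ? "user" : "kernel",
		       ec & PFERR_FETCH_MASK ? " (instruction fetch)" : "");
	}

	int main(void)
	{
		decode_pferr(PFERR_PRESENT_MASK | PFERR_WRITE_MASK);
		return 0;
	}
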
static void FNAME(release_walker)(struct guest_walker *walker)
struct kvm_mmu_page *page;
if (is_writeble_pte(*shadow_ent))
- return 0;
+ return !user || (*shadow_ent & PT_USER_MASK);
writable_shadow = *shadow_ent & PT_SHADOW_WRITABLE_MASK;
if (user) {
u32 error_code)
{
int write_fault = error_code & PFERR_WRITE_MASK;
- int pte_present = error_code & PFERR_PRESENT_MASK;
int user_fault = error_code & PFERR_USER_MASK;
+ int fetch_fault = error_code & PFERR_FETCH_MASK;
struct guest_walker walker;
u64 *shadow_pte;
int fixed;
/*
* Look up the shadow pte for the faulting address.
*/
- FNAME(walk_addr)(&walker, vcpu, addr);
- shadow_pte = FNAME(fetch)(vcpu, addr, &walker);
+ r = FNAME(walk_addr)(&walker, vcpu, addr, write_fault, user_fault,
+ fetch_fault);
/*
* The page is not mapped by the guest. Let the guest handle it.
*/
- if (!shadow_pte) {
- pgprintk("%s: not mapped\n", __FUNCTION__);
- inject_page_fault(vcpu, addr, error_code);
+ if (!r) {
+ pgprintk("%s: guest page fault\n", __FUNCTION__);
+ inject_page_fault(vcpu, addr, walker.error_code);
FNAME(release_walker)(&walker);
return 0;
}
+ shadow_pte = FNAME(fetch)(vcpu, addr, &walker);
pgprintk("%s: shadow pte %p %llx\n", __FUNCTION__,
shadow_pte, *shadow_pte);
* mmio: emulate if accessible, otherwise its a guest fault.
*/
if (is_io_pte(*shadow_pte)) {
- if (may_access(*shadow_pte, write_fault, user_fault))
- return 1;
- pgprintk("%s: io work, no access\n", __FUNCTION__);
- inject_page_fault(vcpu, addr,
- error_code | PFERR_PRESENT_MASK);
- kvm_mmu_audit(vcpu, "post page fault (io)");
- return 0;
- }
-
- /*
- * pte not present, guest page fault.
- */
- if (pte_present && !fixed && !write_pt) {
- inject_page_fault(vcpu, addr, error_code);
- kvm_mmu_audit(vcpu, "post page fault (guest)");
- return 0;
+ return 1;
}
++kvm_stat.pf_fixed;
pt_element_t guest_pte;
gpa_t gpa;
- FNAME(walk_addr)(&walker, vcpu, vaddr);
+ FNAME(walk_addr)(&walker, vcpu, vaddr, 0, 0, 0);
guest_pte = *walker.ptep;
FNAME(release_walker)(&walker);
(1ULL << INTERCEPT_IOIO_PROT) |
(1ULL << INTERCEPT_MSR_PROT) |
(1ULL << INTERCEPT_TASK_SWITCH) |
+ (1ULL << INTERCEPT_SHUTDOWN) |
(1ULL << INTERCEPT_VMRUN) |
(1ULL << INTERCEPT_VMMCALL) |
(1ULL << INTERCEPT_VMLOAD) |
static void svm_get_idt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
{
- dt->limit = vcpu->svm->vmcb->save.ldtr.limit;
- dt->base = vcpu->svm->vmcb->save.ldtr.base;
+ dt->limit = vcpu->svm->vmcb->save.idtr.limit;
+ dt->base = vcpu->svm->vmcb->save.idtr.base;
}
static void svm_set_idt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
{
- vcpu->svm->vmcb->save.ldtr.limit = dt->limit;
- vcpu->svm->vmcb->save.ldtr.base = dt->base ;
+ vcpu->svm->vmcb->save.idtr.limit = dt->limit;
+	vcpu->svm->vmcb->save.idtr.base = dt->base;
}
static void svm_get_gdt(struct kvm_vcpu *vcpu, struct descriptor_table *dt)
return 0;
}
+static int shutdown_interception(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
+{
+ /*
+ * VMCB is undefined after a SHUTDOWN intercept
+ * so reinitialize it.
+ */
+ memset(vcpu->svm->vmcb, 0, PAGE_SIZE);
+ init_vmcb(vcpu->svm->vmcb);
+
+ kvm_run->exit_reason = KVM_EXIT_SHUTDOWN;
+ return 0;
+}
+
static int io_get_override(struct kvm_vcpu *vcpu,
struct vmcb_seg **seg,
int *addr_override)
[SVM_EXIT_IOIO] = io_interception,
[SVM_EXIT_MSR] = msr_interception,
[SVM_EXIT_TASK_SWITCH] = task_switch_interception,
+ [SVM_EXIT_SHUTDOWN] = shutdown_interception,
[SVM_EXIT_VMRUN] = invalid_op_interception,
[SVM_EXIT_VMMCALL] = invalid_op_interception,
[SVM_EXIT_VMLOAD] = invalid_op_interception,
int r;
again:
- do_interrupt_requests(vcpu, kvm_run);
+ if (!vcpu->mmio_read_completed)
+ do_interrupt_requests(vcpu, kvm_run);
clgi();
vmcs_writel(HOST_GS_BASE, segment_base(gs_sel));
#endif
- do_interrupt_requests(vcpu, kvm_run);
+ if (!vcpu->mmio_read_completed)
+ do_interrupt_requests(vcpu, kvm_run);
if (vcpu->guest_debug.enabled)
kvm_guest_debug_pre(vcpu);
#define ModRM (1<<6)
/* Destination is only written; never read. */
#define Mov (1<<7)
+#define BitOp (1<<8)
static u8 opcode_table[256] = {
/* 0x00 - 0x07 */
0, 0, ByteOp | DstMem | SrcNone | ModRM, DstMem | SrcNone | ModRM
};
-static u8 twobyte_table[256] = {
+static u16 twobyte_table[256] = {
/* 0x00 - 0x0F */
0, SrcMem | ModRM | DstReg, 0, 0, 0, 0, ImplicitOps, 0,
0, 0, 0, 0, 0, ImplicitOps | ModRM, 0, 0,
/* 0x90 - 0x9F */
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
/* 0xA0 - 0xA7 */
- 0, 0, 0, DstMem | SrcReg | ModRM, 0, 0, 0, 0,
+ 0, 0, 0, DstMem | SrcReg | ModRM | BitOp, 0, 0, 0, 0,
/* 0xA8 - 0xAF */
- 0, 0, 0, DstMem | SrcReg | ModRM, 0, 0, 0, 0,
+ 0, 0, 0, DstMem | SrcReg | ModRM | BitOp, 0, 0, 0, 0,
/* 0xB0 - 0xB7 */
ByteOp | DstMem | SrcReg | ModRM, DstMem | SrcReg | ModRM, 0,
- DstMem | SrcReg | ModRM,
+ DstMem | SrcReg | ModRM | BitOp,
0, 0, ByteOp | DstReg | SrcMem | ModRM | Mov,
DstReg | SrcMem16 | ModRM | Mov,
/* 0xB8 - 0xBF */
- 0, 0, DstMem | SrcImmByte | ModRM, DstMem | SrcReg | ModRM,
+ 0, 0, DstMem | SrcImmByte | ModRM, DstMem | SrcReg | ModRM | BitOp,
0, 0, ByteOp | DstReg | SrcMem | ModRM | Mov,
DstReg | SrcMem16 | ModRM | Mov,
/* 0xC0 - 0xCF */
int
x86_emulate_memop(struct x86_emulate_ctxt *ctxt, struct x86_emulate_ops *ops)
{
- u8 b, d, sib, twobyte = 0, rex_prefix = 0;
+ unsigned d;
+ u8 b, sib, twobyte = 0, rex_prefix = 0;
u8 modrm, modrm_mod = 0, modrm_reg = 0, modrm_rm = 0;
unsigned long *override_base = NULL;
unsigned int op_bytes, ad_bytes, lock_prefix = 0, rep_prefix = 0, i;
;
}
- /* Decode and fetch the destination operand: register or memory. */
- switch (d & DstMask) {
- case ImplicitOps:
- /* Special instructions do their own operand decoding. */
- goto special_insn;
- case DstReg:
- dst.type = OP_REG;
- if ((d & ByteOp)
- && !(twobyte_table && (b == 0xb6 || b == 0xb7))) {
- dst.ptr = decode_register(modrm_reg, _regs,
- (rex_prefix == 0));
- dst.val = *(u8 *) dst.ptr;
- dst.bytes = 1;
- } else {
- dst.ptr = decode_register(modrm_reg, _regs, 0);
- switch ((dst.bytes = op_bytes)) {
- case 2:
- dst.val = *(u16 *)dst.ptr;
- break;
- case 4:
- dst.val = *(u32 *)dst.ptr;
- break;
- case 8:
- dst.val = *(u64 *)dst.ptr;
- break;
- }
- }
- break;
- case DstMem:
- dst.type = OP_MEM;
- dst.ptr = (unsigned long *)cr2;
- dst.bytes = (d & ByteOp) ? 1 : op_bytes;
- if (!(d & Mov) && /* optimisation - avoid slow emulated read */
- ((rc = ops->read_emulated((unsigned long)dst.ptr,
- &dst.val, dst.bytes, ctxt)) != 0))
- goto done;
- break;
- }
- dst.orig_val = dst.val;
-
/*
* Decode and fetch the source operand: register, memory
* or immediate.
break;
}
+ /* Decode and fetch the destination operand: register or memory. */
+ switch (d & DstMask) {
+ case ImplicitOps:
+ /* Special instructions do their own operand decoding. */
+ goto special_insn;
+ case DstReg:
+ dst.type = OP_REG;
+ if ((d & ByteOp)
+ && !(twobyte_table && (b == 0xb6 || b == 0xb7))) {
+ dst.ptr = decode_register(modrm_reg, _regs,
+ (rex_prefix == 0));
+ dst.val = *(u8 *) dst.ptr;
+ dst.bytes = 1;
+ } else {
+ dst.ptr = decode_register(modrm_reg, _regs, 0);
+ switch ((dst.bytes = op_bytes)) {
+ case 2:
+ dst.val = *(u16 *)dst.ptr;
+ break;
+ case 4:
+ dst.val = *(u32 *)dst.ptr;
+ break;
+ case 8:
+ dst.val = *(u64 *)dst.ptr;
+ break;
+ }
+ }
+ break;
+ case DstMem:
+ dst.type = OP_MEM;
+ dst.ptr = (unsigned long *)cr2;
+ dst.bytes = (d & ByteOp) ? 1 : op_bytes;
+ if (d & BitOp) {
+ dst.ptr += src.val / BITS_PER_LONG;
+ dst.bytes = sizeof(long);
+ }
+ if (!(d & Mov) && /* optimisation - avoid slow emulated read */
+ ((rc = ops->read_emulated((unsigned long)dst.ptr,
+ &dst.val, dst.bytes, ctxt)) != 0))
+ goto done;
+ break;
+ }
+ dst.orig_val = dst.val;
+
if (twobyte)
goto twobyte_insn;
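
The BitOp adjustment above mirrors how the x86 bit-test instructions
(bt/bts/btr/btc) address memory: the effective operand is the base address
advanced by the bit offset, in long-sized words, with the remainder selecting
the bit inside that word. A minimal sketch of the same arithmetic
(illustrative only; BITS_PER_LONG is spelled out via CHAR_BIT here):

	#include <limits.h>	/* CHAR_BIT */

	#define LONG_BITS (sizeof(unsigned long) * CHAR_BIT)

	/* Word containing bit 'bitoff' of the bit string at 'base',
	 * as in dst.ptr += src.val / BITS_PER_LONG above. */
	unsigned long *bitop_word(unsigned long *base, unsigned long bitoff)
	{
		return base + bitoff / LONG_BITS;
	}

	/* Mask selecting that bit within the word. */
	unsigned long bitop_mask(unsigned long bitoff)
	{
		return 1UL << (bitoff % LONG_BITS);
	}
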
int err = -EINVAL;
/* page 0 is the superblock, read it... */
- if (bitmap->file)
- bitmap->sb_page = read_page(bitmap->file, 0, bitmap, PAGE_SIZE);
- else {
+ if (bitmap->file) {
+ loff_t isize = i_size_read(bitmap->file->f_mapping->host);
+ int bytes = isize > PAGE_SIZE ? PAGE_SIZE : isize;
+
+ bitmap->sb_page = read_page(bitmap->file, 0, bitmap, bytes);
+ } else {
bitmap->sb_page = read_sb_page(bitmap->mddev, bitmap->offset, 0);
}
if (IS_ERR(bitmap->sb_page)) {
int count;
/* unmap the old page, we're done with it */
if (index == num_pages-1)
- count = bytes - index * PAGE_SIZE;
+ count = bytes + sizeof(bitmap_super_t)
+ - index * PAGE_SIZE;
else
count = PAGE_SIZE;
if (index == 0) {
if (size != get_capacity(md->disk))
memset(&md->geometry, 0, sizeof(md->geometry));
- __set_size(md, size);
+ if (md->suspended_bdev)
+ __set_size(md, size);
if (size == 0)
return 0;
if (!dm_suspended(md))
goto out;
+ /* without bdev, the device size cannot be changed */
+ if (!md->suspended_bdev)
+ if (get_capacity(md->disk) != dm_table_get_size(table))
+ goto out;
+
__unbind(md);
r = __bind(md, table);
/* This does not get reverted if there's an error later. */
dm_table_presuspend_targets(map);
- md->suspended_bdev = bdget_disk(md->disk, 0);
- if (!md->suspended_bdev) {
- DMWARN("bdget failed in dm_suspend");
- r = -ENOMEM;
- goto flush_and_out;
+ /* bdget() can stall if the pending I/Os are not flushed */
+ if (!noflush) {
+ md->suspended_bdev = bdget_disk(md->disk, 0);
+ if (!md->suspended_bdev) {
+ DMWARN("bdget failed in dm_suspend");
+ r = -ENOMEM;
+ goto flush_and_out;
+ }
}
/*
unlock_fs(md);
- bdput(md->suspended_bdev);
- md->suspended_bdev = NULL;
+ if (md->suspended_bdev) {
+ bdput(md->suspended_bdev);
+ md->suspended_bdev = NULL;
+ }
clear_bit(DMF_SUSPENDED, &md->flags);
* and 'events' is odd, we can roll back to the previous clean state */
if (nospares
&& (mddev->in_sync && mddev->recovery_cp == MaxSector)
- && (mddev->events & 1))
+ && (mddev->events & 1)
+ && mddev->events != 1)
mddev->events--;
else {
/* otherwise we have to go forward and ... */
char *ptr, *buf = NULL;
int err = -ENOMEM;
+ md_allow_write(mddev);
+
file = kmalloc(sizeof(*file), GFP_KERNEL);
if (!file)
goto out;
}
}
+/* md_allow_write(mddev)
+ * Calling this ensures that the array is marked 'active' so that writes
+ * may proceed without blocking. It is important to call this before
+ * attempting a GFP_KERNEL allocation while holding the mddev lock.
+ * Must be called with mddev_lock held.
+ */
+void md_allow_write(mddev_t *mddev)
+{
+ if (!mddev->pers)
+ return;
+ if (mddev->ro)
+ return;
+
+ spin_lock_irq(&mddev->write_lock);
+ if (mddev->in_sync) {
+ mddev->in_sync = 0;
+ set_bit(MD_CHANGE_CLEAN, &mddev->flags);
+ if (mddev->safemode_delay &&
+ mddev->safemode == 0)
+ mddev->safemode = 1;
+ spin_unlock_irq(&mddev->write_lock);
+ md_update_sb(mddev, 0);
+ } else
+ spin_unlock_irq(&mddev->write_lock);
+}
+EXPORT_SYMBOL_GPL(md_allow_write);
+
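
As the comment above spells out, callers are expected to take the mddev lock,
call md_allow_write(), and only then perform GFP_KERNEL allocations. A
hypothetical caller inside md.c might look like the following sketch
(mddev_lock()/mddev_unlock() are md.c's usual locking helpers; the function
itself is illustrative, not part of the patch):

	static int example_grow_buffer(mddev_t *mddev, size_t len)
	{
		void *buf;

		if (mddev_lock(mddev))
			return -EINTR;
		/* mark the array active so the allocation below cannot
		 * block indefinitely behind a superblock update */
		md_allow_write(mddev);
		buf = kmalloc(len, GFP_KERNEL);
		mddev_unlock(mddev);
		if (!buf)
			return -ENOMEM;
		kfree(buf);
		return 0;
	}
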
static DECLARE_WAIT_QUEUE_HEAD(resync_wait);
#define SYNC_MARKS 10
sbio->bi_sector = r1_bio->sector +
conf->mirrors[i].rdev->data_offset;
sbio->bi_bdev = conf->mirrors[i].rdev->bdev;
+ for (j = 0; j < vcnt ; j++)
+ memcpy(page_address(sbio->bi_io_vec[j].bv_page),
+ page_address(pbio->bi_io_vec[j].bv_page),
+ PAGE_SIZE);
+
}
}
}
return -EINVAL;
}
+ md_allow_write(mddev);
+
raid_disks = mddev->raid_disks + mddev->delta_disks;
if (raid_disks < conf->raid_disks) {
if (newsize <= conf->pool_size)
return 0; /* never bother to shrink */
+ md_allow_write(conf->mddev);
+
/* Step 1 */
sc = kmem_cache_create(conf->cache_name[1-conf->active_name],
sizeof(struct stripe_head)+(newsize-1)*sizeof(struct r5dev),
mdk_rdev_t *rdev;
if (!in_chunk_boundary(mddev, raid_bio)) {
- printk("chunk_aligned_read : non aligned\n");
+ PRINTK("chunk_aligned_read : non aligned\n");
return 0;
}
/*
else
break;
}
+ md_allow_write(mddev);
while (new > conf->max_nr_stripes) {
if (grow_one_stripe(conf))
conf->max_nr_stripes++;
goto done;
}
if (buf->state == STATE_QUEUED ||
+ buf->state == STATE_PREPARED ||
buf->state == STATE_ACTIVE) {
dprintk(1,"qbuf: buffer is already queued or active.\n");
goto done;
#define DRV_MODULE_NAME "bnx2"
#define PFX DRV_MODULE_NAME ": "
-#define DRV_MODULE_VERSION "1.5.3"
-#define DRV_MODULE_RELDATE "January 8, 2007"
+#define DRV_MODULE_VERSION "1.5.4"
+#define DRV_MODULE_RELDATE "January 24, 2007"
#define RUN_AT(x) (jiffies + (x))
reg = REG_RD_IND(bp, BNX2_SHM_HDR_SIGNATURE);
if ((reg & BNX2_SHM_HDR_SIGNATURE_SIG_MASK) ==
- BNX2_SHM_HDR_SIGNATURE_SIG)
- bp->shmem_base = REG_RD_IND(bp, BNX2_SHM_HDR_ADDR_0);
- else
+ BNX2_SHM_HDR_SIGNATURE_SIG) {
+ u32 off = PCI_FUNC(pdev->devfn) << 2;
+
+ bp->shmem_base = REG_RD_IND(bp, BNX2_SHM_HDR_ADDR_0 + off);
+ } else
bp->shmem_base = HOST_VIEW_SHMEM_BASE;
/* Get the permanent MAC address. First we need to make sure the
#include <asm/io.h>
#define DRV_NAME "ehea"
-#define DRV_VERSION "EHEA_0043"
+#define DRV_VERSION "EHEA_0044"
#define EHEA_MSG_DEFAULT (NETIF_MSG_LINK | NETIF_MSG_TIMER \
| NETIF_MSG_RX_ERR | NETIF_MSG_TX_ERR)
u32 qp_token;
eqe = ehea_poll_eq(port->qp_eq);
- ehea_debug("eqe=%p", eqe);
+
while (eqe) {
- ehea_debug("*eqe=%lx", *(u64*)eqe);
- eqe = ehea_poll_eq(port->qp_eq);
qp_token = EHEA_BMASK_GET(EHEA_EQE_QP_TOKEN, eqe->entry);
- ehea_debug("next eqe=%p", eqe);
+ ehea_error("QP aff_err: entry=0x%lx, token=0x%x",
+ eqe->entry, qp_token);
+ eqe = ehea_poll_eq(port->qp_eq);
}
return IRQ_HANDLED;
int i;
for (i = 0; i < adapter->num_ports; i++)
- if (adapter->port[i]->logical_port_id == logical_port)
- return adapter->port[i];
+ if (adapter->port[i])
+ if (adapter->port[i]->logical_port_id == logical_port)
+ return adapter->port[i];
return NULL;
}
break;
}
+ port->autoneg = 1;
+
/* Number of default QPs */
port->num_def_qps = cb0->num_default_qps;
}
} else {
if (hret == H_AUTHORITY) {
- ehea_info("Hypervisor denied setting port speed. Either"
- " this partition is not authorized to set "
- "port speed or another partition has modified"
- " port speed first.");
+ ehea_info("Hypervisor denied setting port speed");
ret = -EPERM;
} else {
ret = -EIO;
| EHEA_BMASK_SET(PXLY_RC_JUMBO_FRAME, 1);
for (i = 0; i < port->num_def_qps; i++)
- cb0->default_qpn_arr[i] = port->port_res[i].qp->init_attr.qp_nr;
+ cb0->default_qpn_arr[i] = port->port_res[0].qp->init_attr.qp_nr;
if (netif_msg_ifup(port))
ehea_dump(cb0, sizeof(*cb0), "ehea_configure_port");
static void ehea_promiscuous_error(u64 hret, int enable)
{
- ehea_info("Hypervisor denied %sabling promiscuous mode.%s",
- enable == 1 ? "en" : "dis",
- hret != H_AUTHORITY ? "" : " Another partition owning a "
- "logical port on the same physical port might have altered "
- "promiscuous mode first.");
+ if (hret == H_AUTHORITY)
+ ehea_info("Hypervisor denied %sabling promiscuous mode",
+ enable == 1 ? "en" : "dis");
+ else
+ ehea_error("failed %sabling promiscuous mode",
+ enable == 1 ? "en" : "dis");
}
static void ehea_promiscuous(struct net_device *dev, int enable)
int ehea_sense_adapter_attr(struct ehea_adapter *adapter)
{
struct hcp_query_ehea *cb;
+ struct device_node *lhea_dn = NULL;
+ struct device_node *eth_dn = NULL;
u64 hret;
int ret;
goto out_herr;
}
- adapter->num_ports = cb->num_ports;
+ /* Determine the number of available logical ports
+ * by counting the child nodes of the lhea OFDT entry
+ */
+ adapter->num_ports = 0;
+ lhea_dn = of_find_node_by_name(lhea_dn, "lhea");
+ do {
+ eth_dn = of_get_next_child(lhea_dn, eth_dn);
+ if (eth_dn)
+ adapter->num_ports++;
+	} while (eth_dn);
+ of_node_put(lhea_dn);
+
adapter->max_mc_mac = cb->max_mc_mac - 1;
ret = 0;
INIT_LIST_HEAD(&port->mc_list->list);
- ehea_set_portspeed(port, EHEA_SPEED_AUTONEG);
-
ret = ehea_sense_port_attr(port);
if (ret)
goto out;
adapter_handle = (u64*)get_property(dev->ofdev.node, "ibm,hea-handle",
NULL);
- if (!adapter_handle) {
+ if (adapter_handle)
+ adapter->handle = *adapter_handle;
+
+ if (!adapter->handle) {
dev_err(&dev->ofdev.dev, "failed getting handle for adapter"
" '%s'\n", dev->ofdev.node->full_name);
ret = -ENODEV;
goto out_free_ad;
}
- adapter->handle = *adapter_handle;
adapter->pd = EHEA_PD_ID;
dev->ofdev.dev.driver_data = adapter;
{
long ret;
int i, sleep_msecs;
+ u8 cb_cat;
for (i = 0; i < 5; i++) {
ret = plpar_hcall9(opcode, outs,
continue;
}
- if (ret < H_SUCCESS)
+ cb_cat = EHEA_BMASK_GET(H_MEHEAPORT_CAT, arg2);
+
+ if ((ret < H_SUCCESS) && !(((ret == H_AUTHORITY)
+ && (opcode == H_MODIFY_HEA_PORT))
+ && (((cb_cat == H_PORT_CB4) && ((arg3 == H_PORT_CB4_JUMBO)
+ || (arg3 == H_PORT_CB4_SPEED))) || ((cb_cat == H_PORT_CB7)
+ && (arg3 == H_PORT_CB7_DUCQPN)))))
ehea_error("opcode=%lx ret=%lx"
" arg1=%lx arg2=%lx arg3=%lx arg4=%lx"
" arg5=%lx arg6=%lx arg7=%lx arg8=%lx"
outs[0], outs[1], outs[2], outs[3],
outs[4], outs[5], outs[6], outs[7],
outs[8]);
-
return ret;
}
goto drop;
}
- /* Make sure there is room for IrDA-USB header. The actual
- * allocation will be done lower in skb_push().
- * Also, we don't use directly skb_cow(), because it require
- * headroom >= 16, which force unnecessary copies - Jean II */
- if (skb_headroom(skb) < self->header_length) {
- IRDA_DEBUG(0, "%s(), Insuficient skb headroom.\n", __FUNCTION__);
- if (skb_cow(skb, self->header_length)) {
- IRDA_WARNING("%s(), failed skb_cow() !!!\n", __FUNCTION__);
- goto drop;
- }
- }
+ memcpy(self->tx_buff + self->header_length, skb->data, skb->len);
/* Change setting for next frame */
-
if (self->capability & IUC_STIR421X) {
__u8 turnaround_time;
- __u8* frame;
+ __u8* frame = self->tx_buff;
turnaround_time = get_turnaround_time( skb );
- frame= skb_push(skb, self->header_length);
irda_usb_build_header(self, frame, 0);
frame[2] = turnaround_time;
if ((skb->len != 0) &&
frame[1] = 0;
}
} else {
- irda_usb_build_header(self, skb_push(skb, self->header_length), 0);
+ irda_usb_build_header(self, self->tx_buff, 0);
}
/* FIXME: Make macro out of this one */
((struct irda_skb_cb *)skb->cb)->context = self;
- usb_fill_bulk_urb(urb, self->usbdev,
+ usb_fill_bulk_urb(urb, self->usbdev,
usb_sndbulkpipe(self->usbdev, self->bulk_out_ep),
- skb->data, IRDA_SKB_MAX_MTU,
+ self->tx_buff, skb->len + self->header_length,
write_bulk_callback, skb);
- urb->transfer_buffer_length = skb->len;
+
/* This flag (URB_ZERO_PACKET) indicates that what we send is not
* a continuous stream of data but separate packets.
* In this case, the USB layer will insert an empty USB frame (TD)
/* Remove the speed buffer */
kfree(self->speed_buff);
self->speed_buff = NULL;
+
+ kfree(self->tx_buff);
+ self->tx_buff = NULL;
}
/********************** USB CONFIG SUBROUTINES **********************/
IRDA_DEBUG(0, "%s(), And our endpoints are : in=%02X, out=%02X (%d), int=%02X\n",
__FUNCTION__, self->bulk_in_ep, self->bulk_out_ep, self->bulk_out_mtu, self->bulk_int_ep);
- /* Should be 8, 16, 32 or 64 bytes */
- IRDA_ASSERT(self->bulk_out_mtu == 64, ;);
return((self->bulk_in_ep != 0) && (self->bulk_out_ep != 0));
}
memset(self->speed_buff, 0, IRDA_USB_SPEED_MTU);
+ self->tx_buff = kzalloc(IRDA_SKB_MAX_MTU + self->header_length,
+ GFP_KERNEL);
+ if (self->tx_buff == NULL)
+ goto err_out_4;
+
ret = irda_usb_open(self);
if (ret)
- goto err_out_4;
+ goto err_out_5;
IRDA_MESSAGE("IrDA: Registered device %s\n", net->name);
usb_set_intfdata(intf, self);
self->needspatch = (ret < 0);
if (self->needspatch) {
IRDA_ERROR("STIR421X: Couldn't upload patch\n");
- goto err_out_5;
+ goto err_out_6;
}
/* replace IrDA class descriptor with what patched device is now reporting */
irda_desc = irda_usb_find_class_desc (self->usbintf);
if (irda_desc == NULL) {
ret = -ENODEV;
- goto err_out_5;
+ goto err_out_6;
}
if (self->irda_desc)
kfree (self->irda_desc);
}
return 0;
-
-err_out_5:
+err_out_6:
unregister_netdev(self->netdev);
+err_out_5:
+ kfree(self->tx_buff);
err_out_4:
kfree(self->speed_buff);
err_out_3:
struct irlap_cb *irlap; /* The link layer we are binded to */
struct qos_info qos;
char *speed_buff; /* Buffer for speed changes */
+ char *tx_buff;
struct timeval stamp;
struct timeval now;
#include <asm/byteorder.h>
#include <asm/unaligned.h>
-MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>");
+MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>");
MODULE_DESCRIPTION("IrDA-USB Dongle Driver for SigmaTel STIr4200");
MODULE_LICENSE("GPL");
unsigned i;
seq_printf(seq, "\n%s (vid/did: %04x/%04x)\n",
- PCIDEV_NAME(pdev), (int)pdev->vendor, (int)pdev->device);
+ pci_name(pdev), (int)pdev->vendor, (int)pdev->device);
seq_printf(seq, "pci-power-state: %u\n", (unsigned) pdev->current_state);
seq_printf(seq, "resources: irq=%u / io=0x%04x / dma_mask=0x%016Lx\n",
pdev->irq, (unsigned)pci_resource_start(pdev, 0), (unsigned long long)pdev->dma_mask);
if (vlsi_start_hw(idev))
IRDA_ERROR("%s: failed to restart hw - %s(%s) unusable!\n",
- __FUNCTION__, PCIDEV_NAME(idev->pdev), ndev->name);
+ __FUNCTION__, pci_name(idev->pdev), ndev->name);
else
netif_start_queue(ndev);
}
pdev->current_state = 0; /* hw must be running now */
IRDA_MESSAGE("%s: IrDA PCI controller %s detected\n",
- drivername, PCIDEV_NAME(pdev));
+ drivername, pci_name(pdev));
if ( !pci_resource_start(pdev,0)
|| !(pci_resource_flags(pdev,0) & IORESOURCE_IO) ) {
pci_set_drvdata(pdev, NULL);
- IRDA_MESSAGE("%s: %s removed\n", drivername, PCIDEV_NAME(pdev));
+ IRDA_MESSAGE("%s: %s removed\n", drivername, pci_name(pdev));
}
#ifdef CONFIG_PM
if (!ndev) {
IRDA_ERROR("%s - %s: no netdevice \n",
- __FUNCTION__, PCIDEV_NAME(pdev));
+ __FUNCTION__, pci_name(pdev));
return 0;
}
idev = ndev->priv;
pdev->current_state = state.event;
}
else
- IRDA_ERROR("%s - %s: invalid suspend request %u -> %u\n", __FUNCTION__, PCIDEV_NAME(pdev), pdev->current_state, state.event);
+ IRDA_ERROR("%s - %s: invalid suspend request %u -> %u\n", __FUNCTION__, pci_name(pdev), pdev->current_state, state.event);
up(&idev->sem);
return 0;
}
if (!ndev) {
IRDA_ERROR("%s - %s: no netdevice \n",
- __FUNCTION__, PCIDEV_NAME(pdev));
+ __FUNCTION__, pci_name(pdev));
return 0;
}
idev = ndev->priv;
if (pdev->current_state == 0) {
up(&idev->sem);
IRDA_WARNING("%s - %s: already resumed\n",
- __FUNCTION__, PCIDEV_NAME(pdev));
+ __FUNCTION__, pci_name(pdev));
return 0;
}
#define PCI_CLASS_SUBCLASS_MASK 0xffff
#endif
-/* in recent 2.5 interrupt handlers have non-void return value */
-#ifndef IRQ_RETVAL
-typedef void irqreturn_t;
-#define IRQ_NONE
-#define IRQ_HANDLED
-#define IRQ_RETVAL(x)
-#endif
-
-/* some stuff need to check kernelversion. Not all 2.5 stuff was present
- * in early 2.5.x - the test is merely to separate 2.4 from 2.5
- */
-#include <linux/version.h>
-
-#if LINUX_VERSION_CODE < KERNEL_VERSION(2,5,0)
-
-/* PDE() introduced in 2.5.4 */
-#ifdef CONFIG_PROC_FS
-#define PDE(inode) ((inode)->i_private)
-#endif
-
-/* irda crc16 calculation exported in 2.5.42 */
-#define irda_calc_crc16(fcs,buf,len) (GOOD_FCS)
-
-/* we use this for unified pci device name access */
-#define PCIDEV_NAME(pdev) ((pdev)->name)
-
-#else /* 2.5 or later */
-
-/* whatever we get from the associated struct device - bus:slot:dev.fn id */
-#define PCIDEV_NAME(pdev) (pci_name(pdev))
-
-#endif
-
/* ================================================================ */
/* non-standard PCI registers */
while (mp->tx_desc_count > 0) {
spin_lock_irqsave(&mp->lock, flags);
+
+ /* tx_desc_count might have changed before acquiring the lock */
+ if (mp->tx_desc_count <= 0) {
+ spin_unlock_irqrestore(&mp->lock, flags);
+ return released;
+ }
+
tx_index = mp->tx_used_desc_q;
desc = &mp->p_tx_desc_area[tx_index];
cmd_sts = desc->cmd_sts;
if (skb)
mp->tx_skb[tx_index] = NULL;
- spin_unlock_irqrestore(&mp->lock, flags);
-
if (cmd_sts & ETH_ERROR_SUMMARY) {
printk("%s: Error in TX\n", dev->name);
mp->stats.tx_errors++;
}
+ spin_unlock_irqrestore(&mp->lock, flags);
+
if (cmd_sts & ETH_TX_FIRST_DESC)
dma_unmap_single(NULL, addr, count, DMA_TO_DEVICE);
else
#include "netxen_nic_hw.h"
-#define NETXEN_NIC_BUILD_NO "4"
+#define NETXEN_NIC_BUILD_NO "2"
#define _NETXEN_NIC_LINUX_MAJOR 3
#define _NETXEN_NIC_LINUX_MINOR 3
-#define _NETXEN_NIC_LINUX_SUBVERSION 2
-#define NETXEN_NIC_LINUX_VERSIONID "3.3.2" "-" NETXEN_NIC_BUILD_NO
-#define NETXEN_NIC_FW_VERSIONID "3.3.2"
+#define _NETXEN_NIC_LINUX_SUBVERSION 3
+#define NETXEN_NIC_LINUX_VERSIONID "3.3.3" "-" NETXEN_NIC_BUILD_NO
#define RCV_DESC_RINGSIZE \
(sizeof(struct rcv_desc) * adapter->max_rx_desc_count)
_NETXEN_NIC_LINUX_MAJOR, fw_major);
adapter->driver_mismatch = 1;
}
- if (fw_minor != _NETXEN_NIC_LINUX_MINOR) {
+ if (fw_minor != _NETXEN_NIC_LINUX_MINOR &&
+ fw_minor != (_NETXEN_NIC_LINUX_MINOR + 1)) {
printk(KERN_ERR "The mismatch in driver version and firmware "
"version minor number\n"
"Driver version minor number = %d \t"
if ((netxen_workq = create_singlethread_workqueue("netxen")) == 0)
return -ENOMEM;
- return pci_module_init(&netxen_driver);
+ return pci_register_driver(&netxen_driver);
}
module_init(netxen_init_module);
{
kio_addr_t ioaddr = dev->base_addr;
struct el3_private *priv = netdev_priv(dev);
+ unsigned long flags;
DEBUG(3, "%s: el3_start_xmit(length = %ld) called, "
"status %4.4x.\n", dev->name, (long)skb->len,
inw(ioaddr + EL3_STATUS));
+ spin_lock_irqsave(&priv->lock, flags);
+
priv->stats.tx_bytes += skb->len;
/* Put out the doubleword header... */
dev_kfree_skb(skb);
pop_tx_status(dev);
+ spin_unlock_irqrestore(&priv->lock, flags);
return 0;
}
if (!netif_device_present(dev)) goto reschedule;
- EL3WINDOW(1);
/* Check for pending interrupt with expired latency timer: with
this, we can limp along even if the interrupt is blocked */
if ((inw(ioaddr + EL3_STATUS) & IntLatch) &&
(inb(ioaddr + EL3_TIMER) == 0xff)) {
if (!lp->fast_poll)
printk(KERN_WARNING "%s: interrupt(s) dropped!\n", dev->name);
- el3_interrupt(dev->irq, lp);
+ el3_interrupt(dev->irq, dev);
lp->fast_poll = HZ;
}
if (lp->fast_poll) {
return 0;
}
+EXPORT_SYMBOL(phy_ethtool_sset);
int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd)
{
return 0;
}
-
+EXPORT_SYMBOL(phy_ethtool_gset);
/* Note that this function is currently incompatible with the
* PHYCONTROL layer. It changes registers without regard to
}
}
- nic->ufo_in_band_v = kmalloc((sizeof(u64) * size), GFP_KERNEL);
+ nic->ufo_in_band_v = kcalloc(size, sizeof(u64), GFP_KERNEL);
if (!nic->ufo_in_band_v)
return -ENOMEM;
- memset(nic->ufo_in_band_v, 0, size);
/* Allocation and initialization of RXDs in Rings */
size = 0;
#define LINK_HZ (HZ/2)
MODULE_DESCRIPTION("SysKonnect Gigabit Ethernet driver");
-MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>");
+MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>");
MODULE_LICENSE("GPL");
MODULE_VERSION(DRV_VERSION);
module_exit(sky2_cleanup_module);
MODULE_DESCRIPTION("Marvell Yukon 2 Gigabit Ethernet driver");
-MODULE_AUTHOR("Stephen Hemminger <shemminger@osdl.org>");
+MODULE_AUTHOR("Stephen Hemminger <shemminger@linux-foundation.org>");
MODULE_LICENSE("GPL");
MODULE_VERSION(DRV_VERSION);
spin_lock_irq(&rtc->lock);
- /* disable alarm interrupt and clear flag */
+ /* disable alarm interrupt and clear the alarm flag */
rcr1 = readb(rtc->regbase + RCR1);
- rcr1 &= ~RCR1_AF;
- writeb(rcr1 & ~RCR1_AIE, rtc->regbase + RCR1);
+ rcr1 &= ~(RCR1_AF|RCR1_AIE);
+ writeb(rcr1, rtc->regbase + RCR1);
rtc->rearm_aie = 0;
mon += 1;
sh_rtc_write_alarm_value(rtc, mon, RMONAR);
- /* Restore interrupt activation status */
- writeb(rcr1, rtc->regbase + RCR1);
+ if (wkalrm->enabled) {
+ rcr1 |= RCR1_AIE;
+ writeb(rcr1, rtc->regbase + RCR1);
+ }
spin_unlock_irq(&rtc->lock);
.attrs = rtc_attrs,
};
-static int __devinit rtc_sysfs_add_device(struct class_device *class_dev,
+static int rtc_sysfs_add_device(struct class_device *class_dev,
struct class_interface *class_intf)
{
int err;
spi->bits_per_word - 16 : spi->bits_per_word)
| SSCR0_SSE
| (spi->bits_per_word > 16 ? SSCR0_EDSS : 0);
- chip->cr1 |= (((spi->mode & SPI_CPHA) != 0) << 4)
- | (((spi->mode & SPI_CPOL) != 0) << 3);
+ chip->cr1 &= ~(SSCR1_SPO | SSCR1_SPH);
+ chip->cr1 |= (((spi->mode & SPI_CPHA) != 0) ? SSCR1_SPH : 0)
+ | (((spi->mode & SPI_CPOL) != 0) ? SSCR1_SPO : 0);
/* NOTE: PXA25x_SSP _could_ use external clocking ... */
if (drv_data->ssp_type != PXA25x_SSP)
class_device_initialize(&master->cdev);
master->cdev.class = &spi_master_class;
- kobj_set_kset_s(&master->cdev, spi_master_class.subsys);
master->cdev.dev = get_device(dev);
spi_master_set_devdata(master, &master[1]);
*/
struct spi_master *spi_busnum_to_master(u16 bus_num)
{
- char name[9];
- struct kobject *bus;
-
- snprintf(name, sizeof name, "spi%u", bus_num);
- bus = kset_find_obj(&spi_master_class.subsys.kset, name);
- if (bus)
- return container_of(bus, struct spi_master, cdev.kobj);
- return NULL;
+ struct class_device *cdev;
+ struct spi_master *master = NULL;
+ struct spi_master *m;
+
+ down(&spi_master_class.sem);
+ list_for_each_entry(cdev, &spi_master_class.children, node) {
+ m = container_of(cdev, struct spi_master, cdev);
+ if (m->bus_num == bus_num) {
+ master = spi_master_get(m);
+ break;
+ }
+ }
+ up(&spi_master_class.sem);
+ return master;
}
EXPORT_SYMBOL_GPL(spi_busnum_to_master);
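
Note that spi_busnum_to_master() now returns the result of spi_master_get(),
so a successful lookup hands the caller a reference that must be dropped with
spi_master_put(). A hypothetical caller (sketch only):

	static int example_find_bus(u16 bus)
	{
		struct spi_master *master = spi_busnum_to_master(bus);

		if (!master)
			return -ENODEV;
		/* ... use master->bus_num, register devices, etc ... */
		spi_master_put(master);		/* drop the reference */
		return 0;
	}
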
*
*/
-
-//#define DEBUG
-
#include <linux/init.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
int len;
int count;
+	void (*set_cs)(struct s3c2410_spi_info *spi,
+ int cs, int pol);
+
/* data buffers */
const unsigned char *tx;
unsigned char *rx;
return spi_master_get_devdata(sdev->master);
}
+static void s3c24xx_spi_gpiocs(struct s3c2410_spi_info *spi, int cs, int pol)
+{
+ s3c2410_gpio_setpin(spi->pin_cs, pol);
+}
+
static void s3c24xx_spi_chipsel(struct spi_device *spi, int value)
{
struct s3c24xx_spi *hw = to_hw(spi);
switch (value) {
case BITBANG_CS_INACTIVE:
- if (hw->pdata->set_cs)
- hw->pdata->set_cs(hw->pdata, value, cspol);
- else
- s3c2410_gpio_setpin(hw->pdata->pin_cs, cspol ^ 1);
+		hw->set_cs(hw->pdata, spi->chip_select, cspol^1);
break;
case BITBANG_CS_ACTIVE:
		/* write new configuration */
writeb(spcon, hw->regs + S3C2410_SPCON);
-
- if (hw->pdata->set_cs)
- hw->pdata->set_cs(hw->pdata, value, cspol);
- else
- s3c2410_gpio_setpin(hw->pdata->pin_cs, cspol);
+		hw->set_cs(hw->pdata, spi->chip_select, cspol);
break;
-
}
}
/* setup any gpio we can */
if (!hw->pdata->set_cs) {
+ hw->set_cs = s3c24xx_spi_gpiocs;
+
s3c2410_gpio_setpin(hw->pdata->pin_cs, 1);
s3c2410_gpio_cfgpin(hw->pdata->pin_cs, S3C2410_GPIO_OUTPUT);
- }
+ } else
+ hw->set_cs = hw->pdata->set_cs;
/* register our spi controller */
static int funsoft_ioctl(struct usb_serial_port *port, struct file *file,
unsigned int cmd, unsigned long arg)
{
- struct termios t;
+ struct ktermios t;
dbg("%s - port %d, cmd 0x%04x", __FUNCTION__, port->number, cmd);
if (errno == 0) {
/* TODO: if error isn't found, add it dynamically */
+ errstr[len] = 0;
printk(KERN_ERR "%s: errstr :%s: not found\n", __FUNCTION__,
errstr);
errno = 1;
#include <linux/fs.h>
#include <linux/sched.h>
#include <linux/idr.h>
+#include <asm/semaphore.h>
#include "debug.h"
#include "v9fs.h"
new->iounit = 0;
new->rdir_pos = 0;
new->rdir_fcall = NULL;
+ init_MUTEX(&new->lock);
INIT_LIST_HEAD(&new->list);
return new;
}
/**
- * v9fs_fid_lookup - retrieve the right fid from a particular dentry
+ * v9fs_fid_lookup - return a locked fid from a dentry
* @dentry: dentry to look for fid in
- * @type: intent of lookup (operation or traversal)
*
- * find a fid in the dentry
+ * find a fid in the dentry, obtain its semaphore and return a reference to it.
+ * The caller of lookup is responsible for releasing the lock.
*
* TODO: only match fids that have the same uid as current user
*
if (!return_fid) {
dprintk(DEBUG_ERROR, "Couldn't find a fid in dentry\n");
+		return ERR_PTR(-EBADF);
 	}
+	if (down_interruptible(&return_fid->lock))
+ return ERR_PTR(-EINTR);
+
return return_fid;
}
+
+/**
+ * v9fs_fid_clone - lookup the fid for a dentry, clone a private copy and release it
+ * @dentry: dentry to look for fid in
+ *
+ * find a fid in the dentry and then clone to a new private fid
+ *
+ * TODO: only match fids that have the same uid as current user
+ *
+ */
+
+struct v9fs_fid *v9fs_fid_clone(struct dentry *dentry)
+{
+ struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dentry->d_inode);
+ struct v9fs_fid *base_fid, *new_fid = ERR_PTR(-EBADF);
+ struct v9fs_fcall *fcall = NULL;
+ int fid, err;
+
+ base_fid = v9fs_fid_lookup(dentry);
+
+ if(IS_ERR(base_fid))
+ return base_fid;
+
+ if(base_fid) { /* clone fid */
+ fid = v9fs_get_idpool(&v9ses->fidpool);
+ if (fid < 0) {
+ eprintk(KERN_WARNING, "newfid fails!\n");
+ new_fid = ERR_PTR(-ENOSPC);
+ goto Release_Fid;
+ }
+
+ err = v9fs_t_walk(v9ses, base_fid->fid, fid, NULL, &fcall);
+ if (err < 0) {
+ dprintk(DEBUG_ERROR, "clone walk didn't work\n");
+ v9fs_put_idpool(fid, &v9ses->fidpool);
+ new_fid = ERR_PTR(err);
+ goto Free_Fcall;
+ }
+ new_fid = v9fs_fid_create(v9ses, fid);
+ if (new_fid == NULL) {
+ dprintk(DEBUG_ERROR, "out of memory\n");
+ new_fid = ERR_PTR(-ENOMEM);
+ }
+Free_Fcall:
+ kfree(fcall);
+ }
+
+Release_Fid:
+ up(&base_fid->lock);
+ return new_fid;
+}
+
+void v9fs_fid_clunk(struct v9fs_session_info *v9ses, struct v9fs_fid *fid)
+{
+ v9fs_t_clunk(v9ses, fid->fid);
+ v9fs_fid_destroy(fid);
+}
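
The clone/clunk pair above lets a caller operate on a private fid without
holding the per-dentry semaphore for the whole operation; the getattr,
setattr and readlink conversions later in this series all follow the same
pattern. A condensed sketch (hypothetical caller, not from the patch):

	static int example_fid_op(struct dentry *dentry)
	{
		struct v9fs_session_info *v9ses =
			v9fs_inode2v9ses(dentry->d_inode);
		struct v9fs_fid *fid = v9fs_fid_clone(dentry);

		if (IS_ERR(fid))
			return PTR_ERR(fid);
		/* ... issue 9p requests against the private fid->fid ... */
		v9fs_fid_clunk(v9ses, fid);	/* clunk on the wire and free */
		return 0;
	}
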
struct list_head list; /* list of fids associated with a dentry */
struct list_head active; /* XXX - debug */
+ struct semaphore lock;
+
u32 fid;
unsigned char fidopen; /* set when fid is opened */
unsigned char fidclunked; /* set when fid has already been clunked */
void v9fs_fid_destroy(struct v9fs_fid *fid);
struct v9fs_fid *v9fs_fid_create(struct v9fs_session_info *, int fid);
int v9fs_fid_insert(struct v9fs_fid *fid, struct dentry *dentry);
+struct v9fs_fid *v9fs_fid_clone(struct dentry *dentry);
+void v9fs_fid_clunk(struct v9fs_session_info *v9ses, struct v9fs_fid *fid);
+
v9fs_mux_poll_tasks[i].task = NULL;
v9fs_mux_wq = create_workqueue("v9fs");
- if (!v9fs_mux_wq)
+ if (!v9fs_mux_wq) {
+ printk(KERN_WARNING "v9fs: mux: creating workqueue failed\n");
return -ENOMEM;
+ }
return 0;
}
v9fs_error_init();
- printk(KERN_INFO "Installing v9fs 9P2000 file system support\n");
+ printk(KERN_INFO "Installing v9fs 9p2000 file system support\n");
ret = v9fs_mux_global_init();
- if (!ret)
+ if (ret) {
+ printk(KERN_WARNING "v9fs: starting mux failed\n");
return ret;
+ }
ret = register_filesystem(&v9fs_fs_type);
- if (!ret)
+ if (ret) {
+ printk(KERN_WARNING "v9fs: registering file system failed\n");
v9fs_mux_global_exit();
+ }
+
return ret;
}
struct v9fs_fid *vfid;
struct v9fs_fcall *fcall = NULL;
int omode;
- int fid = V9FS_NOFID;
int err;
dprintk(DEBUG_VFS, "inode: %p file: %p \n", inode, file);
- vfid = v9fs_fid_lookup(file->f_path.dentry);
- if (!vfid) {
- dprintk(DEBUG_ERROR, "Couldn't resolve fid from dentry\n");
- return -EBADF;
- }
-
- fid = v9fs_get_idpool(&v9ses->fidpool);
- if (fid < 0) {
- eprintk(KERN_WARNING, "newfid fails!\n");
- return -ENOSPC;
- }
+ vfid = v9fs_fid_clone(file->f_path.dentry);
+ if (IS_ERR(vfid))
+ return PTR_ERR(vfid);
- err = v9fs_t_walk(v9ses, vfid->fid, fid, NULL, &fcall);
- if (err < 0) {
- dprintk(DEBUG_ERROR, "rewalk didn't work\n");
- if (fcall && fcall->id == RWALK)
- goto clunk_fid;
- else {
- v9fs_put_idpool(fid, &v9ses->fidpool);
- goto free_fcall;
- }
- }
- kfree(fcall);
-
- /* TODO: do special things for O_EXCL, O_NOFOLLOW, O_SYNC */
- /* translate open mode appropriately */
omode = v9fs_uflags2omode(file->f_flags);
- err = v9fs_t_open(v9ses, fid, omode, &fcall);
+ err = v9fs_t_open(v9ses, vfid->fid, omode, &fcall);
if (err < 0) {
PRINT_FCALL_ERROR("open failed", fcall);
- goto clunk_fid;
- }
-
- vfid = kmalloc(sizeof(struct v9fs_fid), GFP_KERNEL);
- if (vfid == NULL) {
- dprintk(DEBUG_ERROR, "out of memory\n");
- err = -ENOMEM;
- goto clunk_fid;
+ goto Clunk_Fid;
}
file->private_data = vfid;
- vfid->fid = fid;
vfid->fidopen = 1;
vfid->fidclunked = 0;
vfid->iounit = fcall->params.ropen.iounit;
return 0;
-clunk_fid:
- v9fs_t_clunk(v9ses, fid);
-
-free_fcall:
+Clunk_Fid:
+ v9fs_fid_clunk(v9ses, vfid);
kfree(fcall);
return err;
sb = file_inode->i_sb;
v9ses = v9fs_inode2v9ses(file_inode);
v9fid = v9fs_fid_lookup(file);
-
- if (!v9fid) {
- dprintk(DEBUG_ERROR,
- "no v9fs_fid\n");
- return -EBADF;
- }
+ if(IS_ERR(v9fid))
+ return PTR_ERR(v9fid);
fid = v9fid->fid;
if (fid < 0) {
result = v9fs_t_remove(v9ses, fid, &fcall);
if (result < 0) {
PRINT_FCALL_ERROR("remove fails", fcall);
+ goto Error;
}
v9fs_put_idpool(fid, &v9ses->fidpool);
v9fs_fid_destroy(v9fid);
+Error:
kfree(fcall);
return result;
}
inode = NULL;
vfid = NULL;
v9ses = v9fs_inode2v9ses(dir);
- dfid = v9fs_fid_lookup(dentry->d_parent);
- perm = unixmode2p9mode(v9ses, mode);
+ dfid = v9fs_fid_clone(dentry->d_parent);
+ if(IS_ERR(dfid)) {
+ err = PTR_ERR(dfid);
+ goto error;
+ }
+ perm = unixmode2p9mode(v9ses, mode);
if (nd && nd->flags & LOOKUP_OPEN)
flags = nd->intent.open.flags - 1;
else
perm, v9fs_uflags2omode(flags), NULL, &fid, &qid, &iounit);
if (err)
- goto error;
+ goto clunk_dfid;
vfid = v9fs_clone_walk(v9ses, dfid->fid, dentry);
+ v9fs_fid_clunk(v9ses, dfid);
if (IS_ERR(vfid)) {
err = PTR_ERR(vfid);
vfid = NULL;
return 0;
+clunk_dfid:
+ v9fs_fid_clunk(v9ses, dfid);
+
error:
if (vfid)
v9fs_fid_destroy(vfid);
inode = NULL;
vfid = NULL;
v9ses = v9fs_inode2v9ses(dir);
- dfid = v9fs_fid_lookup(dentry->d_parent);
+ dfid = v9fs_fid_clone(dentry->d_parent);
+ if(IS_ERR(dfid)) {
+ err = PTR_ERR(dfid);
+ goto error;
+ }
+
perm = unixmode2p9mode(v9ses, mode | S_IFDIR);
err = v9fs_create(v9ses, dfid->fid, (char *) dentry->d_name.name,
if (err) {
dprintk(DEBUG_ERROR, "create error %d\n", err);
- goto error;
- }
-
- err = v9fs_t_clunk(v9ses, fid);
- if (err) {
- dprintk(DEBUG_ERROR, "clunk error %d\n", err);
- goto error;
+ goto clean_up_dfid;
}
vfid = v9fs_clone_walk(v9ses, dfid->fid, dentry);
if (IS_ERR(vfid)) {
err = PTR_ERR(vfid);
vfid = NULL;
- goto error;
+ goto clean_up_dfid;
}
+ v9fs_fid_clunk(v9ses, dfid);
inode = v9fs_inode_from_fid(v9ses, vfid->fid, dir->i_sb);
if (IS_ERR(inode)) {
err = PTR_ERR(inode);
inode = NULL;
- goto error;
+ goto clean_up_fids;
}
dentry->d_op = &v9fs_dentry_operations;
d_instantiate(dentry, inode);
return 0;
-error:
+clean_up_fids:
if (vfid)
 		v9fs_fid_destroy(vfid);
+	/* dfid was already clunked on the success path above */
+	goto error;
+
+clean_up_dfid:
+ v9fs_fid_clunk(v9ses, dfid);
+
+error:
return err;
}
dentry->d_op = &v9fs_dentry_operations;
dirfid = v9fs_fid_lookup(dentry->d_parent);
- if (!dirfid) {
- dprintk(DEBUG_ERROR, "no dirfid\n");
- return ERR_PTR(-EINVAL);
- }
+ if(IS_ERR(dirfid))
+ return ERR_PTR(PTR_ERR(dirfid));
dirfidnum = dirfid->fid;
- if (dirfidnum < 0) {
- dprintk(DEBUG_ERROR, "no dirfid for inode %p, #%lu\n",
- dir, dir->i_ino);
- return ERR_PTR(-EBADF);
- }
-
newfid = v9fs_get_idpool(&v9ses->fidpool);
if (newfid < 0) {
eprintk(KERN_WARNING, "newfid fails!\n");
- return ERR_PTR(-ENOSPC);
+ result = -ENOSPC;
+ goto Release_Dirfid;
}
result = v9fs_t_walk(v9ses, dirfidnum, newfid,
(char *)dentry->d_name.name, &fcall);
+ up(&dirfid->lock);
+
if (result < 0) {
if (fcall && fcall->id == RWALK)
v9fs_t_clunk(v9ses, newfid);
return NULL;
- FreeFcall:
+Release_Dirfid:
+ up(&dirfid->lock);
+
+FreeFcall:
kfree(fcall);
+
return ERR_PTR(result);
}
struct inode *old_inode = old_dentry->d_inode;
struct v9fs_session_info *v9ses = v9fs_inode2v9ses(old_inode);
struct v9fs_fid *oldfid = v9fs_fid_lookup(old_dentry);
- struct v9fs_fid *olddirfid =
- v9fs_fid_lookup(old_dentry->d_parent);
- struct v9fs_fid *newdirfid =
- v9fs_fid_lookup(new_dentry->d_parent);
+ struct v9fs_fid *olddirfid;
+ struct v9fs_fid *newdirfid;
struct v9fs_wstat wstat;
struct v9fs_fcall *fcall = NULL;
int fid = -1;
dprintk(DEBUG_VFS, "\n");
- if ((!oldfid) || (!olddirfid) || (!newdirfid)) {
- dprintk(DEBUG_ERROR, "problem with arguments\n");
- return -EBADF;
+ if(IS_ERR(oldfid))
+ return PTR_ERR(oldfid);
+
+ olddirfid = v9fs_fid_clone(old_dentry->d_parent);
+ if(IS_ERR(olddirfid)) {
+ retval = PTR_ERR(olddirfid);
+ goto Release_lock;
+ }
+
+ newdirfid = v9fs_fid_clone(new_dentry->d_parent);
+ if(IS_ERR(newdirfid)) {
+ retval = PTR_ERR(newdirfid);
+ goto Clunk_olddir;
}
/* 9P can only handle file rename in the same directory */
if (memcmp(&olddirfid->qid, &newdirfid->qid, sizeof(newdirfid->qid))) {
dprintk(DEBUG_ERROR, "old dir and new dir are different\n");
- retval = -EPERM;
- goto FreeFcallnBail;
+ retval = -EXDEV;
+ goto Clunk_newdir;
}
fid = oldfid->fid;
dprintk(DEBUG_ERROR, "no fid for old file #%lu\n",
old_inode->i_ino);
retval = -EBADF;
- goto FreeFcallnBail;
+ goto Clunk_newdir;
}
v9fs_blank_wstat(&wstat);
retval = v9fs_t_wstat(v9ses, fid, &wstat, &fcall);
- FreeFcallnBail:
if (retval < 0)
PRINT_FCALL_ERROR("wstat error", fcall);
kfree(fcall);
+
+Clunk_newdir:
+ v9fs_fid_clunk(v9ses, newdirfid);
+
+Clunk_olddir:
+ v9fs_fid_clunk(v9ses, olddirfid);
+
+Release_lock:
+ up(&oldfid->lock);
+
return retval;
}
{
struct v9fs_fcall *fcall = NULL;
struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dentry->d_inode);
- struct v9fs_fid *fid = v9fs_fid_lookup(dentry);
+ struct v9fs_fid *fid = v9fs_fid_clone(dentry);
int err = -EPERM;
dprintk(DEBUG_VFS, "dentry: %p\n", dentry);
- if (!fid) {
- dprintk(DEBUG_ERROR,
- "couldn't find fid associated with dentry\n");
- return -EBADF;
- }
+ if(IS_ERR(fid))
+ return PTR_ERR(fid);
err = v9fs_t_stat(v9ses, fid->fid, &fcall);
}
kfree(fcall);
+ v9fs_fid_clunk(v9ses, fid);
return err;
}
static int v9fs_vfs_setattr(struct dentry *dentry, struct iattr *iattr)
{
struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dentry->d_inode);
- struct v9fs_fid *fid = v9fs_fid_lookup(dentry);
+ struct v9fs_fid *fid = v9fs_fid_clone(dentry);
struct v9fs_fcall *fcall = NULL;
struct v9fs_wstat wstat;
int res = -EPERM;
dprintk(DEBUG_VFS, "\n");
-
- if (!fid) {
- dprintk(DEBUG_ERROR,
- "Couldn't find fid associated with dentry\n");
- return -EBADF;
- }
+ if(IS_ERR(fid))
+ return PTR_ERR(fid);
v9fs_blank_wstat(&wstat);
if (iattr->ia_valid & ATTR_MODE)
if (res >= 0)
res = inode_setattr(dentry->d_inode, iattr);
+ v9fs_fid_clunk(v9ses, fid);
return res;
}
struct v9fs_fcall *fcall = NULL;
struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dentry->d_inode);
- struct v9fs_fid *fid = v9fs_fid_lookup(dentry);
+ struct v9fs_fid *fid = v9fs_fid_clone(dentry);
- if (!fid) {
- dprintk(DEBUG_ERROR, "could not resolve fid from dentry\n");
- retval = -EBADF;
- goto FreeFcall;
- }
+ if(IS_ERR(fid))
+ return PTR_ERR(fid);
if (!v9ses->extended) {
retval = -EBADF;
dprintk(DEBUG_ERROR, "not extended\n");
- goto FreeFcall;
+ goto ClunkFid;
}
dprintk(DEBUG_VFS, " %s\n", dentry->d_name.name);
goto FreeFcall;
}
- if (!fcall)
- return -EIO;
+ if (!fcall) {
+ retval = -EIO;
+ goto ClunkFid;
+ }
if (!(fcall->params.rstat.stat.mode & V9FS_DMSYMLINK)) {
retval = -EINVAL;
fcall->params.rstat.stat.extension.str, buffer);
retval = buflen;
- FreeFcall:
+FreeFcall:
kfree(fcall);
+ClunkFid:
+ v9fs_fid_clunk(v9ses, fid);
+
return retval;
}
int err;
u32 fid, perm;
struct v9fs_session_info *v9ses;
- struct v9fs_fid *dfid, *vfid;
- struct inode *inode;
+ struct v9fs_fid *dfid, *vfid = NULL;
+ struct inode *inode = NULL;
- inode = NULL;
- vfid = NULL;
v9ses = v9fs_inode2v9ses(dir);
- dfid = v9fs_fid_lookup(dentry->d_parent);
- perm = unixmode2p9mode(v9ses, mode);
-
if (!v9ses->extended) {
dprintk(DEBUG_ERROR, "not extended\n");
return -EPERM;
}
+ dfid = v9fs_fid_clone(dentry->d_parent);
+ if(IS_ERR(dfid)) {
+ err = PTR_ERR(dfid);
+ goto error;
+ }
+
+ perm = unixmode2p9mode(v9ses, mode);
+
err = v9fs_create(v9ses, dfid->fid, (char *) dentry->d_name.name,
perm, V9FS_OREAD, (char *) extension, &fid, NULL, NULL);
if (err)
- goto error;
+ goto clunk_dfid;
err = v9fs_t_clunk(v9ses, fid);
if (err)
- goto error;
+ goto clunk_dfid;
vfid = v9fs_clone_walk(v9ses, dfid->fid, dentry);
if (IS_ERR(vfid)) {
err = PTR_ERR(vfid);
vfid = NULL;
- goto error;
+ goto clunk_dfid;
}
inode = v9fs_inode_from_fid(v9ses, vfid->fid, dir->i_sb);
if (IS_ERR(inode)) {
err = PTR_ERR(inode);
inode = NULL;
- goto error;
+ goto free_vfid;
}
dentry->d_op = &v9fs_dentry_operations;
d_instantiate(dentry, inode);
return 0;
-error:
- if (vfid)
- v9fs_fid_destroy(vfid);
+free_vfid:
+ v9fs_fid_destroy(vfid);
+
+clunk_dfid:
+ v9fs_fid_clunk(v9ses, dfid);
+error:
return err;
}
struct dentry *dentry)
{
int retval;
+ struct v9fs_session_info *v9ses = v9fs_inode2v9ses(dir);
struct v9fs_fid *oldfid;
char *name;
dprintk(DEBUG_VFS, " %lu,%s,%s\n", dir->i_ino, dentry->d_name.name,
old_dentry->d_name.name);
- oldfid = v9fs_fid_lookup(old_dentry);
- if (!oldfid) {
- dprintk(DEBUG_ERROR, "can't find oldfid\n");
- return -EPERM;
- }
+ oldfid = v9fs_fid_clone(old_dentry);
+ if(IS_ERR(oldfid))
+ return PTR_ERR(oldfid);
name = __getname();
- if (unlikely(!name))
- return -ENOMEM;
+ if (unlikely(!name)) {
+ retval = -ENOMEM;
+ goto clunk_fid;
+ }
sprintf(name, "%d\n", oldfid->fid);
retval = v9fs_vfs_mkspecial(dir, dentry, V9FS_DMLINK, name);
__putname(name);
+clunk_fid:
+ v9fs_fid_clunk(v9ses, oldfid);
return retval;
}
retval = PTR_ERR(interpreter);
if (IS_ERR(interpreter))
goto out_free_interp;
+
+ /*
+ * If the binary is not readable then enforce
+ * mm->dumpable = 0 regardless of the interpreter's
+ * permissions.
+ */
+ if (file_permission(interpreter, MAY_READ) < 0)
+ bprm->interp_flags |= BINPRM_FLAGS_ENFORCE_NONDUMP;
+
retval = kernel_read(interpreter, 0, bprm->buf,
BINPRM_BUF_SIZE);
if (retval != BINPRM_BUF_SIZE) {
*/
static int maydump(struct vm_area_struct *vma)
{
+ /* The vma can be set up to tell us the answer directly. */
+ if (vma->vm_flags & VM_ALWAYSDUMP)
+ return 1;
+
/* Do not dump I/O mapped devices or special mappings */
if (vma->vm_flags & (VM_IO | VM_RESERVED))
return 0;
return sz;
}
+static struct vm_area_struct *first_vma(struct task_struct *tsk,
+ struct vm_area_struct *gate_vma)
+{
+ struct vm_area_struct *ret = tsk->mm->mmap;
+
+ if (ret)
+ return ret;
+ return gate_vma;
+}
+
+/*
+ * Helper function for iterating across a vma list. It ensures that the caller
+ * will visit `gate_vma' prior to terminating the search.
+ */
+static struct vm_area_struct *next_vma(struct vm_area_struct *this_vma,
+ struct vm_area_struct *gate_vma)
+{
+ struct vm_area_struct *ret;
+
+ ret = this_vma->vm_next;
+ if (ret)
+ return ret;
+ if (this_vma == gate_vma)
+ return NULL;
+ return gate_vma;
+}
+
/*
* Actual dumper
*
int segs;
size_t size = 0;
int i;
- struct vm_area_struct *vma;
+ struct vm_area_struct *vma, *gate_vma;
struct elfhdr *elf = NULL;
loff_t offset = 0, dataoff, foffset;
unsigned long limit = current->signal->rlim[RLIMIT_CORE].rlim_cur;
segs += ELF_CORE_EXTRA_PHDRS;
#endif
+ gate_vma = get_gate_vma(current);
+ if (gate_vma != NULL)
+ segs++;
+
/* Set up header */
fill_elf_header(elf, segs + 1); /* including notes section */
dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
/* Write program headers for segments dump */
- for (vma = current->mm->mmap; vma != NULL; vma = vma->vm_next) {
+ for (vma = first_vma(current, gate_vma); vma != NULL;
+ vma = next_vma(vma, gate_vma)) {
struct elf_phdr phdr;
size_t sz;
/* Align to page */
DUMP_SEEK(dataoff - foffset);
- for (vma = current->mm->mmap; vma != NULL; vma = vma->vm_next) {
+ for (vma = first_vma(current, gate_vma); vma != NULL;
+ vma = next_vma(vma, gate_vma)) {
unsigned long addr;
if (!maydump(vma))
goto error;
}
+ /*
+ * If the binary is not readable then enforce
+ * mm->dumpable = 0 regardless of the interpreter's
+ * permissions.
+ */
+ if (file_permission(interpreter, MAY_READ) < 0)
+ bprm->interp_flags |= BINPRM_FLAGS_ENFORCE_NONDUMP;
+
retval = kernel_read(interpreter, 0, bprm->buf,
BINPRM_BUF_SIZE);
if (retval < 0)
iocb->ki_nbytes = -EIO;
if (atomic_dec_and_test(bio_count)) {
- if (iocb->ki_nbytes < 0)
+ if ((long)iocb->ki_nbytes < 0)
aio_complete(iocb, iocb->ki_nbytes, 0);
else
aio_complete(iocb, iocb->ki_left, 0);
return pvec->page[pvec->idx++];
}
+/* return a page back to pvec array */
+static void blk_unget_page(struct page *page, struct pvec *pvec)
+{
+ pvec->page[--pvec->idx] = page;
+}
+
static ssize_t
blkdev_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
loff_t pos, unsigned long nr_segs)
count = min(count, nbytes);
goto same_bio;
}
+ } else {
+ blk_unget_page(page, &pvec);
}
/* bio is ready, submit it */
int ret = 0;
BUG_ON(!PageLocked(page));
- if (PageDirty(page) || PageWriteback(page))
+ if (PageWriteback(page))
return 0;
if (mapping == NULL) { /* can this still happen? */
spin_lock(&mapping->private_lock);
ret = drop_buffers(page, &buffers_to_free);
spin_unlock(&mapping->private_lock);
+
+ /*
+ * If the filesystem writes its buffers by hand (eg ext3)
+ * then we can have clean buffers against a dirty page. We
+ * clean the page here; otherwise the VM will never notice
+ * that the filesystem did any IO at all.
+ *
+ * Also, during truncate, discard_buffer will have marked all
+ * the page's buffers clean. We discover that here and clean
+ * the page also.
+ */
+ if (ret)
+ cancel_dirty_page(page, PAGE_CACHE_SIZE);
out:
if (buffers_to_free) {
struct buffer_head *bh = buffers_to_free;
+Version 1.47
+------------
+Fix oops in list_del during mount caused by unaligned string.
+
Version 1.46
------------
Support deep tree mounts. Better support OS/2, Win9x (DOS) time stamps.
ses = list_entry(tmp, struct cifsSesInfo, cifsSessionList);
if((ses->serverDomain == NULL) || (ses->serverOS == NULL) ||
(ses->serverNOS == NULL)) {
- buf += sprintf("\nentry for %s not fully displayed\n\t",
- ses->serverName);
+ buf += sprintf(buf, "\nentry for %s not fully "
+ "displayed\n\t", ses->serverName);
} else {
length =
extern ssize_t cifs_listxattr(struct dentry *, char *, size_t);
extern int cifs_ioctl (struct inode * inode, struct file * filep,
unsigned int command, unsigned long arg);
-#define CIFS_VERSION "1.46"
+#define CIFS_VERSION "1.47"
#endif /* _CIFSFS_H */
{
struct cifsSesInfo *ret_buf;
- ret_buf =
- (struct cifsSesInfo *) kzalloc(sizeof (struct cifsSesInfo),
- GFP_KERNEL);
+ ret_buf = kzalloc(sizeof (struct cifsSesInfo), GFP_KERNEL);
if (ret_buf) {
write_lock(&GlobalSMBSeslock);
atomic_inc(&sesInfoAllocCount);
tconInfoAlloc(void)
{
struct cifsTconInfo *ret_buf;
- ret_buf =
- (struct cifsTconInfo *) kzalloc(sizeof (struct cifsTconInfo),
- GFP_KERNEL);
+ ret_buf = kzalloc(sizeof (struct cifsTconInfo), GFP_KERNEL);
if (ret_buf) {
write_lock(&GlobalSMBSeslock);
atomic_inc(&tconInfoAllocCount);
cFYI(1,("bleft %d",bleft));
- /* word align, if bytes remaining is not even */
- if(bleft % 2) {
- bleft--;
- data++;
- }
+ /* SMB header is unaligned, so cifs servers word align start of
+ Unicode strings */
+ data++;
+ bleft--; /* Windows servers do not always double null terminate
+ their final Unicode string - in which case we
+ now will not attempt to decode the byte of junk
+ which follows it */
+
words_left = bleft / 2;
/* save off server operating system */
WARN_ON(inode->i_state & I_WILL_FREE);
if ((wbc->sync_mode != WB_SYNC_ALL) && (inode->i_state & I_LOCK)) {
+ struct address_space *mapping = inode->i_mapping;
+ int ret;
+
list_move(&inode->i_list, &inode->i_sb->s_dirty);
- return 0;
+
+ /*
+ * Even if we don't actually write the inode itself here,
+ * we can at least start some of the data writeout..
+ */
+ spin_unlock(&inode_lock);
+ ret = do_writepages(mapping, wbc);
+ spin_lock(&inode_lock);
+ return ret;
}
/*
lock_kernel();
- res = nfs_revalidate_mapping(inode, filp->f_mapping);
+ res = nfs_revalidate_mapping_nolock(inode, filp->f_mapping);
if (res < 0) {
unlock_kernel();
return res;
return __nfs_revalidate_inode(server, inode);
}
+static int nfs_invalidate_mapping_nolock(struct inode *inode, struct address_space *mapping)
+{
+ struct nfs_inode *nfsi = NFS_I(inode);
+
+ if (mapping->nrpages != 0) {
+ int ret = invalidate_inode_pages2(mapping);
+ if (ret < 0)
+ return ret;
+ }
+ spin_lock(&inode->i_lock);
+ nfsi->cache_validity &= ~NFS_INO_INVALID_DATA;
+ if (S_ISDIR(inode->i_mode)) {
+ memset(nfsi->cookieverf, 0, sizeof(nfsi->cookieverf));
+ /* This ensures we revalidate child dentries */
+ nfsi->cache_change_attribute = jiffies;
+ }
+ spin_unlock(&inode->i_lock);
+ nfs_inc_stats(inode, NFSIOS_DATAINVALIDATE);
+ dfprintk(PAGECACHE, "NFS: (%s/%Ld) data cache invalidated\n",
+ inode->i_sb->s_id, (long long)NFS_FILEID(inode));
+ return 0;
+}
+
+static int nfs_invalidate_mapping(struct inode *inode, struct address_space *mapping)
+{
+ int ret = 0;
+
+ mutex_lock(&inode->i_mutex);
+ if (NFS_I(inode)->cache_validity & NFS_INO_INVALID_DATA) {
+ ret = nfs_sync_mapping(mapping);
+ if (ret == 0)
+ ret = nfs_invalidate_mapping_nolock(inode, mapping);
+ }
+ mutex_unlock(&inode->i_mutex);
+ return ret;
+}
+
/**
- * nfs_revalidate_mapping - Revalidate the pagecache
+ * nfs_revalidate_mapping_nolock - Revalidate the pagecache
* @inode - pointer to host inode
* @mapping - pointer to mapping
*/
-int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping)
+int nfs_revalidate_mapping_nolock(struct inode *inode, struct address_space *mapping)
{
struct nfs_inode *nfsi = NFS_I(inode);
int ret = 0;
- if (NFS_STALE(inode))
- ret = -ESTALE;
if ((nfsi->cache_validity & NFS_INO_REVAL_PAGECACHE)
- || nfs_attribute_timeout(inode))
+ || nfs_attribute_timeout(inode) || NFS_STALE(inode)) {
ret = __nfs_revalidate_inode(NFS_SERVER(inode), inode);
- if (ret < 0)
- goto out;
+ if (ret < 0)
+ goto out;
+ }
+ if (nfsi->cache_validity & NFS_INO_INVALID_DATA)
+ ret = nfs_invalidate_mapping_nolock(inode, mapping);
+out:
+ return ret;
+}
- if (nfsi->cache_validity & NFS_INO_INVALID_DATA) {
- if (mapping->nrpages != 0) {
- if (S_ISREG(inode->i_mode)) {
- ret = nfs_sync_mapping(mapping);
- if (ret < 0)
- goto out;
- }
- ret = invalidate_inode_pages2(mapping);
- if (ret < 0)
- goto out;
- }
- spin_lock(&inode->i_lock);
- nfsi->cache_validity &= ~NFS_INO_INVALID_DATA;
- if (S_ISDIR(inode->i_mode)) {
- memset(nfsi->cookieverf, 0, sizeof(nfsi->cookieverf));
- /* This ensures we revalidate child dentries */
- nfsi->cache_change_attribute = jiffies;
- }
- spin_unlock(&inode->i_lock);
+/**
+ * nfs_revalidate_mapping - Revalidate the pagecache
+ * @inode - pointer to host inode
+ * @mapping - pointer to mapping
+ *
+ * This version of the function will take the inode->i_mutex and attempt to
+ * flush out all dirty data if it needs to invalidate the page cache.
+ */
+int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping)
+{
+ struct nfs_inode *nfsi = NFS_I(inode);
+ int ret = 0;
- nfs_inc_stats(inode, NFSIOS_DATAINVALIDATE);
- dfprintk(PAGECACHE, "NFS: (%s/%Ld) data cache invalidated\n",
- inode->i_sb->s_id,
- (long long)NFS_FILEID(inode));
+ if ((nfsi->cache_validity & NFS_INO_REVAL_PAGECACHE)
+ || nfs_attribute_timeout(inode) || NFS_STALE(inode)) {
+ ret = __nfs_revalidate_inode(NFS_SERVER(inode), inode);
+ if (ret < 0)
+ goto out;
}
+ if (nfsi->cache_validity & NFS_INO_INVALID_DATA)
+ ret = nfs_invalidate_mapping(inode, mapping);
out:
return ret;
}
{
struct inode *inode = dentry->d_inode;
struct page *page;
- void *err = ERR_PTR(nfs_revalidate_mapping(inode, inode->i_mapping));
+ void *err;
+
+ err = ERR_PTR(nfs_revalidate_mapping_nolock(inode, inode->i_mapping));
if (err)
goto read_failed;
page = read_cache_page(&inode->i_data, 0,
}
int
-nfs3svc_encode_entry(struct readdir_cd *cd, const char *name,
- int namlen, loff_t offset, ino_t ino, unsigned int d_type)
+nfs3svc_encode_entry(void *cd, const char *name,
+ int namlen, loff_t offset, u64 ino, unsigned int d_type)
{
return encode_entry(cd, name, namlen, offset, ino, d_type, 0);
}
int
-nfs3svc_encode_entry_plus(struct readdir_cd *cd, const char *name,
- int namlen, loff_t offset, ino_t ino, unsigned int d_type)
+nfs3svc_encode_entry_plus(void *cd, const char *name,
+ int namlen, loff_t offset, u64 ino,
+ unsigned int d_type)
{
return encode_entry(cd, name, namlen, offset, ino, d_type, 1);
}
}
static int
-nfsd4_encode_dirent(struct readdir_cd *ccd, const char *name, int namlen,
- loff_t offset, ino_t ino, unsigned int d_type)
+nfsd4_encode_dirent(void *ccdv, const char *name, int namlen,
+ loff_t offset, u64 ino, unsigned int d_type)
{
+ struct readdir_cd *ccd = ccdv;
struct nfsd4_readdir *cd = container_of(ccd, struct nfsd4_readdir, common);
int buflen;
__be32 *p = cd->buffer;
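
These encode_entry conversions (and the nfssvc_encode_entry hunk further
down) track a VFS interface change: the filldir callback now carries the
inode number as a u64 rather than an ino_t, and its context argument is a
plain void * that each callback converts back itself. Assuming the
contemporary typedef from <linux/fs.h>, a callback now matches filldir_t
exactly:

    typedef int (*filldir_t)(void *, const char *, int, loff_t, u64, unsigned);

    static int example_filldir(void *ccdv, const char *name, int namlen,
                               loff_t offset, u64 ino, unsigned int d_type)
    {
            struct readdir_cd *ccd = ccdv; /* implicit conversion from void * */

            /* ... encode one directory entry using ccd ... */
            return 0;
    }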
.pg_prog = NFS_ACL_PROGRAM,
.pg_nvers = NFSD_ACL_NRVERS,
.pg_vers = nfsd_acl_versions,
- .pg_name = "nfsd",
+ .pg_name = "nfsacl",
.pg_class = "nfsd",
.pg_stats = &nfsd_acl_svcstats,
.pg_authenticate = &svc_set_client,
switch(change) {
case NFSD_SET:
nfsd_versions[vers] = nfsd_version[vers];
- break;
#if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
if (vers < NFSD_ACL_NRVERS)
- nfsd_acl_version[vers] = nfsd_acl_version[vers];
+ nfsd_acl_versions[vers] = nfsd_acl_version[vers];
#endif
+ break;
case NFSD_CLEAR:
nfsd_versions[vers] = NULL;
#if defined(CONFIG_NFSD_V2_ACL) || defined(CONFIG_NFSD_V3_ACL)
if (vers < NFSD_ACL_NRVERS)
- nfsd_acl_version[vers] = NULL;
+ nfsd_acl_versions[vers] = NULL;
#endif
break;
case NFSD_TEST:
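
This switch hunk fixes two bugs at once. In the NFSD_SET arm the break
came before the #if block, leaving the ACL update unreachable, and the
assignment itself copied the nfsd_acl_version[] default table onto
itself instead of into the live nfsd_acl_versions[] array (note the
missing 's'); the NFSD_CLEAR arm likewise appears to have nulled an
entry in the default table, losing it for good, rather than disabling
the active one. The broken NFSD_SET arm, annotated:

    case NFSD_SET:
            nfsd_versions[vers] = nfsd_version[vers];
            break;          /* (1) too early: the block below was dead code */
            if (vers < NFSD_ACL_NRVERS)
                    nfsd_acl_version[vers] = nfsd_acl_version[vers];
                            /* (2) self-assignment to the wrong array:
                             *     a no-op even if it were reached */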
}
int
-nfssvc_encode_entry(struct readdir_cd *ccd, const char *name,
- int namlen, loff_t offset, ino_t ino, unsigned int d_type)
+nfssvc_encode_entry(void *ccdv, const char *name,
+ int namlen, loff_t offset, u64 ino, unsigned int d_type)
{
+ struct readdir_cd *ccd = ccdv;
struct nfsd_readdirres *cd = container_of(ccd, struct nfsd_readdirres, common);
__be32 *p = cd->buffer;
int buflen, slen;
rqstp->rq_res.page_len = size;
} else if (page != pp[-1]) {
get_page(page);
- put_page(*pp);
+ if (*pp)
+ put_page(*pp);
*pp = page;
rqstp->rq_resused++;
rqstp->rq_res.page_len += size;
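
The one-line guard matters because put_page(), unlike kfree(), does not
tolerate a NULL argument: it dereferences the struct page straight away
to drop its refcount, so a not-yet-populated slot in the response page
array would oops. The fixed sequence:

    get_page(page);         /* take a reference on the new page */
    if (*pp)                /* the slot may legitimately still be empty */
            put_page(*pp);  /* release the page it used to hold */
    *pp = page;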
__be32 err;
int host_err;
__u32 v_mtime=0, v_atime=0;
- int v_mode=0;
err = nfserr_perm;
if (!flen)
goto out;
if (createmode == NFS3_CREATE_EXCLUSIVE) {
- /* while the verifier would fit in mtime+atime,
- * solaris7 gets confused (bugid 4218508) if these have
- * the high bit set, so we use the mode as well
+ /* solaris7 gets confused (bugid 4218508) if these have
+ * the high bit set, so just clear the high bits.
*/
v_mtime = verifier[0]&0x7fffffff;
v_atime = verifier[1]&0x7fffffff;
- v_mode = S_IFREG
- | ((verifier[0]&0x80000000) >> (32-7)) /* u+x */
- | ((verifier[1]&0x80000000) >> (32-9)) /* u+r */
- ;
}
if (dchild->d_inode) {
case NFS3_CREATE_EXCLUSIVE:
if ( dchild->d_inode->i_mtime.tv_sec == v_mtime
&& dchild->d_inode->i_atime.tv_sec == v_atime
- && dchild->d_inode->i_mode == v_mode
&& dchild->d_inode->i_size == 0 )
break;
/* fallthru */
}
if (createmode == NFS3_CREATE_EXCLUSIVE) {
- /* Cram the verifier into atime/mtime/mode */
+ /* Cram the verifier into atime/mtime */
iap->ia_valid = ATTR_MTIME|ATTR_ATIME
- | ATTR_MTIME_SET|ATTR_ATIME_SET
- | ATTR_MODE;
+ | ATTR_MTIME_SET|ATTR_ATIME_SET;
/* XXX someone who knows this better please fix it for nsec */
iap->ia_mtime.tv_sec = v_mtime;
iap->ia_atime.tv_sec = v_atime;
iap->ia_mtime.tv_nsec = 0;
iap->ia_atime.tv_nsec = 0;
- iap->ia_mode = v_mode;
}
/* Set file attributes.
- * Mode has already been set but we might need to reset it
- * for CREATE_EXCLUSIVE
* Irix appears to send along the gid when it tries to
* implement setgid directories via NFS. Clear out all that cruft.
*/
set_attr:
- if ((iap->ia_valid &= ~(ATTR_UID|ATTR_GID)) != 0) {
+ if ((iap->ia_valid &= ~(ATTR_UID|ATTR_GID|ATTR_MODE)) != 0) {
__be32 err2 = nfsd_setattr(rqstp, resfhp, iap, 0, (time_t)0);
if (err2)
err = err2;
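
Background for the create hunks above: with NFSv3 exclusive CREATE the
client sends an opaque 8-byte verifier, and the server must recognise a
retransmitted CREATE of the same file as a success rather than an
EEXIST. knfsd does that by stamping the verifier into the new file's
atime and mtime, with the high bit of each word cleared to work around
the Solaris 7 bug cited in the comment; this change stops abusing the
mode bits as a third hiding place. Roughly (variable names as in the
diff):

    /* verifier[] holds the client's 8-byte cookie as two u32s */
    v_mtime = verifier[0] & 0x7fffffff;     /* high bit cleared: solaris7 */
    v_atime = verifier[1] & 0x7fffffff;

    /* replay detection: an existing zero-length file carrying exactly
     * these stamps was created by the same, retransmitted request */
    if (dchild->d_inode->i_mtime.tv_sec == v_mtime &&
        dchild->d_inode->i_atime.tv_sec == v_atime &&
        dchild->d_inode->i_size == 0)
            break;          /* treat as the original, successful CREATE */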
*/
__be32
nfsd_readdir(struct svc_rqst *rqstp, struct svc_fh *fhp, loff_t *offsetp,
- struct readdir_cd *cdp, encode_dent_fn func)
+ struct readdir_cd *cdp, filldir_t func)
{
__be32 err;
int host_err;
do {
cdp->err = nfserr_eof; /* will be cleared on successful read */
- host_err = vfs_readdir(file, (filldir_t) func, cdp);
+ host_err = vfs_readdir(file, func, cdp);
} while (host_err >=0 && cdp->err == nfs_ok);
if (host_err)
err = nfserrno(host_err);
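
Dropping the (filldir_t) cast is the payoff of the signature changes
above: calling a function through a cast to an incompatible pointer type
is undefined behaviour in C, and here the cast was papering over a real
mismatch once filldir_t moved to a u64 inode number while the old
encode_dent_fn type still took an ino_t (32 bits on many
configurations). With identical prototypes the compiler now checks every
callback:

    /* before: the cast silenced the compiler and hid the ino width bug */
    host_err = vfs_readdir(file, (filldir_t) func, cdp);

    /* after: func already is a filldir_t, so the types are checked */
    host_err = vfs_readdir(file, func, cdp);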